Let's Talk about Fairly Testing GPU Performance, RE: Warming Up GPUs

The Nano is a small card and was made for Mini ITX cases. You wouldn't buy one to put in a tower when there are bigger, faster, and cheaper cards for that. I don't think most of the other cards would fit in a mini case. It should be tested in what it was made for.

So only some cards should be tested inside a case, others no. Got it.
 
kynet said:
So only some cards should be tested inside a case, others no. Got it.

All cards should be tested in cases. And niche cards like this should be tested in the niche cases they're designed to fit into.

I run an mITX system myself but the case lets me fit a full sized card in it so I run a full size non-ref cooled card. This specific card costs what my 980ti costs and isn't worth it to me.
 
kynet said:
So only some cards should be tested inside a case, others no. Got it.

Actually, you don't got it. The Nano should be tested in a mini ITX case, as that's what it will most likely be bought for. The large cards will not fit. All cards should be tested in a case, though.
 
I thought the point of using an open-air test bench was to remove the case as a factor. There is no way for a reviewer to accurately duplicate every user environment; the idea is to measure accurately against a known baseline. If all video cards are tested in the same open-air environment, you can easily compare how one does against the others.

I will agree that in the case of the Nano what we need to see are small-case comparisons against other cards you might choose instead, but only because this is a very niche device.

No matter what case they choose, you will complain: it's too big, it's too small, it has too many fans, not enough fans, and it would be never-ending.

I don't think it is a bad idea to also provide some temperatures from some sort of standard case, but I wouldn't want that to be the primary testing platform.

Yeah, but I'd like to know if the case is a factor. If I buy a 980 Ti overclocked to the ballz and run it in my case, which is pretty crammed, is it going to throttle down after 30 minutes and offer me 980/390X performance?

I guess they could aim a heat gun at the card's intake as well, to test how well they do in a summertime situation.
 
I disagree with you, Brent.


Here is how ALL review sites should do FAIR reviews.

  • No benchmark or "measurement" tools can be used that are made by a specific video card vendor, such as the FCAT tools as an example, unless open source.
  • ALL benchmark software MUST be open source so that any "foul" code that takes advantage of a vendor's specific hardware capabilities can be spotted and removed.
  • All benchmark software must adhere STRICTLY to the requirements of Microsoft's DirectX and the Khronos Group's OpenGL standards. None of these special-sauce benchmarks that cater to specific hardware (refer to point 2).
  • Any games used for performance testing must not have any enhancements by Nvidia or AMD. This means all games with Nvidia's GameWorks, TWIMTBP, or AMD's Gaming Evolved will not be allowed. Only code that is written to STRICTLY follow the developer documentation for DirectX or OpenGL.

My suggestion is that you guys hire an in-house programmer to make benchmark software that tests DirectX performance capabilities and OpenGL capabilities of video cards. You must also post the source code of all your benchmarks so that we can guarantee that you or other review sites are not being paid by competitors to put in "poison" code or performance-enhancing code.

Um, no, just no. When I come here to read a video card review, I'm looking to see how that card performs on games I want to play. I don't want to play a benchmark. If a game has optimizations that favor one vendor over another, I want to know that. I'm looking for a gaming experience, not a synthetic counting of something I'm not even using.


Look at it this way:
Games exist. People want to play games. That's why they're buying GPUs in the first place.
If there are a bunch of games that are using Technology X from Vendor Y, and people want to play those games, then those are the games that need to be tested with new cards.
Whether Technology X favours Vendor Y or Vendor Z makes absolutely no difference whatsoever when your end goal is to play games rather than compare whose-number-is-biggest.

This right here... so much this!
 
I disagree with you, Brent.


Here is how ALL review sites should do FAIR reviews.

  • No benchmark or "measurement" tools can be used that are made by a specific video card vendor, such as the FCAT tools as an example, unless open source.
  • ALL benchmark software MUST be open source so that any "foul" code that takes advantage of a vendor's specific hardware capabilities can be spotted and removed.
  • All benchmark software must adhere STRICTLY to the requirements of Microsoft's DirectX and the Khronos Group's OpenGL standards. None of these special-sauce benchmarks that cater to specific hardware (refer to point 2).
  • Any games used for performance testing must not have any enhancements by Nvidia or AMD. This means all games with Nvidia's GameWorks, TWIMTBP, or AMD's Gaming Evolved will not be allowed. Only code that is written to STRICTLY follow the developer documentation for DirectX or OpenGL.



My suggestion is that you guys hire an in-house programmer to make benchmark software that tests DirectX performance capabilities and OpenGL capabilities of video cards. You must also post the source code of all your benchmarks so that we can guarantee that you or other review sites are not being paid by competitors to put in "poison" code or performance-enhancing code.

Great for measuring theoretical performance, which is about as useful a figure as who can mine bitcoin better when you are shopping for a hard drive.

Doesn't reflect real-world performance, and that is the most important thing.

Theoretical performance doesn't matter if it doesn't translate to real-world performance; in fact, it can be entirely deceiving.

There is also another flaw in the neutral benchmarking: you are not taking into account any inherent GPU bias with respect to benchmarks. Some GPUs may have a microarchitecture that is better suited to running benchmarks than another, but may or may not have worse real-life performance.

Great for scientific research purposes, but it's both not enough (you are not investigating WHY one is better than another; you are basically concluding A is better than B BECAUSE it scored higher, which is a very premature conclusion) and not applicable to the end user.

Why does theoretical performance not matter? Because it doesn't matter to an average user if GPU X can run a neutral benchmark better than Y when Y can play games at 20% higher minimum and average fps than X. I personally couldn't care less if a GPU could be used to run Skynet but can't play Galaga without it being a slideshow.
 
Something that might not be considered by a lot of the people are that games can have very different visual design goals and take different paths (implementations) to get there. I don't see how one can actually make the determination, especially the masses with very little access to pertinent information, that one specific design goal and implementation is "right" while another is "wrong" somehow.

Then factor in how GPU architectures will have varying strengths and weaknesses compared to each other you can now see why a completely neutral test and representative test is not really realistic.
 
Great for measuring theoretical performance, which is about as useful a figure as who can mine bitcoin better when you are shopping for a hard drive.

Doesn't reflect real-world performance, and that is the most important thing.

Theoretical performance doesn't matter if it doesn't translate to real-world performance; in fact, it can be entirely deceiving.

There is also another flaw in the neutral benchmarking: you are not taking into account any inherent GPU bias with respect to benchmarks. Some GPUs may have a microarchitecture that is better suited to running benchmarks than another, but may or may not have worse real-life performance.

Great for scientific research purposes, but it's both not enough (you are not investigating WHY one is better than another; you are basically concluding A is better than B BECAUSE it scored higher, which is a very premature conclusion) and not applicable to the end user.

Why does theoretical performance not matter? Because it doesn't matter to an average user if GPU X can run a neutral benchmark better than Y when Y can play games at 20% higher minimum and average fps than X. I personally couldn't care less if a GPU could be used to run Skynet but can't play Galaga without it being a slideshow.

Apparently no one bothered to properly comprehend my bullet points.


If the benchmark software STRICTLY follows the coding guides as outlined for DirectX and OpenGL, then IT IS NOT giving any architecture an advantage over the other. Both Nvidia and AMD design their hardware and build their drivers in accordance with what is required by DirectX and OpenGL. If an architecture outperforms another, then it's simply from the incompetence of the engineering team that designed the lower-performing product. They had access to the exact same material, so it's their own fault.


The reason real-world games are all over the place in FPS is because both sides are playing dirty tricks with poison code and/or performance code. That is NOT REAL-WORLD performance. That's con-artist tricks that ignore the standards Microsoft and the Khronos Group have laid forth. I'm getting tired of these stupid con games that make one piece of hardware appear superior to another when, in reality, if the STANDARDS were followed, we would get the REAL-WORLD picture of who actually performs better. The consumer is getting screwed over in so many ways that we have just all accepted the cons as the norm. It's like none of us want to stand up and say enough is ENOUGH! NO MORE of this TWIMTBP, GameWorks, or Gaming Evolved crap.


Here is an analogy for you: Imagine you bought a high-performance car that has 650 HP from company X, but the competitor, company Y, happens to own all the gas stations and they make performance cars also. Well, their best car can only produce 500 HP. So since they own all the gas stations, they make sure the high-performance car from company X is always getting crap gas so the high-performance car never TRULY reaches its true performance. The consumer is too ignorant to know what is going on so they assume the car from Company X is a PR stunt when they said it had 650 HP because it gets beat by the car made by Company Y.


I bet my bottom dollar that the majority of the games we play would turn out to have poison code and/or performance code in them if a good hacker could decompile them!! You think we would ever get the source code to these games? NOPE!! This needs to STOP!! It's time that all these game developers follow the standards laid forth by Microsoft and the Khronos Group and tell AMD and Nvidia to go shove it where the sun does not shine.
 
warming up a video card? what do you think this is? a 1980s diesel truck? :p

it's true all electronics need some time to stabilize after power-on, but it happens so fast that by the time you've pulled your finger away from the power button, the unit is ready to go...

any delay indicates a problem....

oh well so much for the warm up theory
 
warming up a video card? what do you think this is? a 1980s diesel truck? :p

it's true all electronics need some time to stabilize after power-on, but it happens so fast that by the time you've pulled your finger away from the power button, the unit is ready to go...

any delay indicates a problem....

oh well so much for the warm up theory

I believe this is about thermal warm-up.

Modern GPUs are able to dynamically adjust their clock speeds based on temperature. I've seen this in my own OC experience, where my card would throttle its clock speed when it hit 80C, and this only happened after several minutes of intensive 3D rendering. Therefore, if I were to benchmark my card, the first 5 minutes would not be indicative of the actual performance I'll get when I'm actually gaming for an extended period of time.

So it is important to let the graphics card warm up to a stable temperature before we start benchmarking.
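The warm-up idea is easy to automate. Below is a minimal sketch of the logic in Python, using hypothetical temperature numbers in place of real sensor polling (a real harness would read temps via nvidia-smi or NVML): keep sampling, and only start the timed benchmark once the readings have flattened out.

```python
def is_stable(samples, window=12, tolerance=1.0):
    """True once the last `window` temperature samples span <= `tolerance` C."""
    if len(samples) < window:
        return False
    recent = samples[-window:]
    return max(recent) - min(recent) <= tolerance

# Simulated warm-up curve: temps climb from 40 C toward ~83 C, then flatten
# (hypothetical numbers standing in for real sensor polling).
temps = [40 + 43 * (1 - 0.9 ** t) for t in range(60)]
first_stable = next(i for i in range(len(temps)) if is_stable(temps[:i + 1]))
print(first_stable)  # → 44: only start the timed benchmark run from here on
```

The window and tolerance are illustrative knobs; the point is simply that a review harness can detect "warmed up" mechanically instead of guessing at a fixed warm-up duration.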
 
If the benchmark software STRICTLY follows the coding guides as outlined for DirectX and OpenGL, then IT IS NOT giving any architecture an advantage over the other.
You seem to be under the impression that APIs do something they certainly do not. APIs expose GPU function to an engine in a slightly more friendly way than direct access. That is the entirety of their existence and purpose. APIs in absolutely no way dictate performance, either absolute or relative.


To use your car example: DirectX requires that a car have four wheels, a steering wheel that adjusts at least two of those wheels, an engine driving at least two of those wheels, an accelerator that modulates engine power, and brakes that prevent the wheels from turning. It does not define transmission type or mechanism, engine power, tire width, body shape, etc.
 
You seem to be under the impression that APIs do something they certainly do not. APIs expose GPU function to an engine in a slightly more friendly way than direct access. That is the entirety of their existence and purpose. APIs in absolutely no way dictate performance, either absolute or relative.


To use your car example: DirectX requires that a car have four wheels, a steering wheel that adjusts at least two of those wheels, an engine driving at least two of those wheels, an accelerator that modulates engine power, and brakes that prevent the wheels from turning. It does not define transmission type or mechanism, engine power, tire width, body shape, etc.


Yes, the API only defines what has to be done, not how it gets done at the hardware level.
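As a toy illustration of that point (not real graphics code, just a Python sketch with made-up class names): the "spec" fixes what an operation must produce, and two very different implementations can both conform while performing very differently.

```python
from abc import ABC, abstractmethod

class DrawAPI(ABC):
    """The 'spec': every implementation must fill a buffer with a color.
    It says nothing about *how* the filling is done."""
    @abstractmethod
    def clear(self, width, height, color):
        ...

class NaiveGPU(DrawAPI):
    def clear(self, width, height, color):
        # One pixel at a time -- slow, but spec-compliant.
        return [color for _ in range(width * height)]

class CleverGPU(DrawAPI):
    def clear(self, width, height, color):
        # Bulk fill -- faster, identical result, equally spec-compliant.
        return [color] * (width * height)

# Both produce the same output; the spec cannot tell them apart.
print(NaiveGPU().clear(4, 2, 0xFF0000) == CleverGPU().clear(4, 2, 0xFF0000))  # → True
```

The spec only constrains the observable result, which is why two conforming GPUs can legitimately differ in speed.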
 
warming up a video card? what do you think this is? a 1980s diesel truck? :p

it's true all electronics need some time to stabilize after power-on, but it happens so fast that by the time you've pulled your finger away from the power button, the unit is ready to go...

any delay indicates a problem....

oh well so much for the warm up theory

You don't get it. PCPer did a test with a reference R9 290X back in late 2013; check it out:

http://www.pcper.com/reviews/Graphi...figurable-GPU/Cold-versus-Hot-R9-290X-Results
 
I do get it. The chip is designed to work in a set thermal envelope... when it gets to the upper limits it has to do something, aka cut power, and the fastest way to do that is to throttle to maintain the thermal envelope.

As your case warms up, you have less margin for heat removal, so it is logical that performance would degrade... why do you think water cooling gives you more overhead?
 
I think the power envelope on the Nano is a bigger deal than the thermals for throttling, but then again that fan gets awfully loud.

The reason you'd test on an open test bed is that it makes things like case airflow moot. A tester could easily bias the results by laying out a case so that airflow is ideal for one card but not another. A case optimized for airflow will often be cooler than an open test bed, because the case moves the hot air out constantly, whereas on an open bench hot air can linger in areas where there isn't a fan. mITX makes this more challenging, though.

But as to the discussion of biased games: you should evaluate the games better. Dying Light looks like a game that's coded poorly. Its VRAM management is awful; that's why it "stresses" the VRAM on the card. It probably lazily allocates texture memory with no anticipatory loading. It doesn't support multi-card well at all. All these things make it a poorly written game. It shouldn't be used, if only to avoid giving the game more exposure. For the same reason that you call out slow cards for slow performance in particular circumstances, you should do the same with the games that you throw at your cards.
 
Therefore, A.) the reviewer must find out what the dynamic clock speeds are and report the actual clock speed after a long period of gaming, B.) test video cards after this warm-up period, and C.) use tests that are long enough to realize these real-world clock frequencies, and never do a cold-start test.

This is how "FAIR" review sites test cards. This is how we, at HardOCP, as a "FAIR" website, test video cards. This is how we would have tested the Nano.

As you look forward to reviews this week, keep this topic in mind.
So are you going to explain why Nano was to be tested in a case and your other reviews are open bench? You consider this a fair and consistent way to test?
 
Nah, they just test first-person shooters these days. They throw in Witcher 3 just to say they test other types of games, yet they have ignored Dragon Age: Inquisition this whole time except for one round of testing. Yet Dragon Age is a very GPU-demanding game and a very popular one at that. They are bad about testing the same type of game over and over, while other sites use canned benchmarks but at least across many different types of games. I don't take any one site as gospel, and I don't think any of you should either; look around and you will see what the average is for each card. I would not call HardOCP biased, but I will say there is no love lost for AMD here. After all these articles and crying by them, they will likely find themselves without Zen when it comes out. They would have been better off to say "we were excluded" and leave it at that; all this will cost them more than they know.
 
kynet said:
So are you going to explain why Nano was to be tested in a case and your other reviews are open bench? You consider this a fair and consistent way to test?

Because otherwise there's no point even testing the Nano? What kind of performance do you expect out of it on an open test bench?
 
kynet said:
So are you going to explain why Nano was to be tested in a case and your other reviews are open bench? You consider this a fair and consistent way to test?

Because AMD is advertising it as an ITX card?
And because the Nano's pricing makes it a pointless card for people with bigger cases?
 
Well, [H]'s review methodology is just a little piece of the puzzle; it only tells you how good the card is on an open bench.

It tells you nothing about whether the cooler is good enough once it is in a case. And there are tests on the net that show exactly that: many cards, both AMD and Nvidia, throttle when they get hot.

And no, Brent, that is not so hard to test. You don't have to play the game to find out if the cooler is good enough: run a canned benchmark for half an hour and watch the frequency, then put a cover over your open test bench that mimics the airflow of a reasonably good case, and run the same canned benchmark for another half hour.

That would improve your reviews, because it's an easy way to test an essential feature of the card in question: is the cooler good or not.

And if that isn't important, then.............
 
Nah, Brent should go LN2 or go home. He should be a real enthusiast and use LN2, or dry ice if he is too chicken :D
 
Have there been any actual tests yet where this subject matter has shown different results? Just curious.
 
IMG0047725.png


This kind of information is useful in a fair GPU test, and it shows how loose an open-bench-only test is.

It's from hardware.fr.

It shows how important the cooler is: the same GPU can act like two different cards, depending on the cooler.
 
Apparently no one bothered to properly comprehend my bullet points.
*snip etc*.

Right, so you are willing to let Microsoft monopolise game code rather than Nvidia or AMD?

That seems to be what I am getting from this post. You have no interest in reality, as you are far more interested in "theoretical" GPU performance (if it is actually possible to call it that) than actual performance. I am not even going into how architectural differences between the GPUs interact differently with different kinds of rendering, which your theoretical benchmark would never indicate.

Oh, and the GameWorks etc. argument? That has been addressed a bazillion times.
 
Personally, I think it would be interesting if [H] started adding closed-case poor-airflow (i.e. 1 front intake fan, 1 rear exhaust fan, 1 CPU fan, no side vent, no direct airflow over the GPU with only a rear exhaust vent or open PCI brackets nearby, 24° C ambient room temp) "hotbox" clock & temperature graphs + noise measurements to their standard testing, or at least do an occasional special article on the topic which tests a bunch of reference and aftermarket GPUs in that situation. It would give a nice contrast to the open bench, and show which aftermarket cooler designs fare best against a standard reference blower in such poor airflow situations.

My decade-old full-tower aluminum Lian-Li V2100 case (still in use for the rig in my signature) is absolutely horrible in regards to airflow over the GPU if you have all the 5.25" bays full, such as I do, which makes judging how well modern aftermarket coolers would handle such a situation rather difficult. Old full-tower cases such as this are pretty much worst-case for GPUs nowadays, likely similar to poorly designed SFF cases yet without the space limitations. That NVIDIA models with the nicely designed Titan reference blowers often end up more expensive and very rare compared to multi-fan aftermarket coolers just makes purchases of new GPUs all that much harder. And as you can see from that case diagram, closed-loop GPU watercoolers such as the one on the Fury X aren't an option either, for lack of an open fan slot.

Yes, I know very well that one of these days I'll need to update my case or migrate all my HDDs to a dedicated server. I'll likely do so whenever I make my next major build, but choices are very few nowadays for full-tower cases which combine built-in mount points and open 5.25" slots to fit a couple of additional 3-in-2 bays to directly migrate my 14 3.5" HDDs and 2 SSDs in any practical manner. I ultimately ended up just changing the thermal paste on my GTX 770, setting an 80% power limit (modern games usually use ~75-78% power at 100% utilization with the BIOS on this Galaxy model without any power limit set), overclocking slightly, setting the temp limit to 90° C and the fan speed to a very silent constant 1800rpm, opening up the PCI brackets directly above the GPU, and placing the case in the vicinity of the floor A/C vent. I found this to be a very nice balance in terms of perf/noise in this less-than-optimal airflow situation, since it results in a "controlled throttle" state where it bounces between slightly overclocked 1228MHz (standard full voltage) yet never below the standard 1201MHz factory-overclocked max in-game clock (now slightly undervolted), based on load, at a constant 90° C. I could maintain 80° C if I wanted to, but then it would be loud, which doesn't seem worth it to me, especially since I lost the silicon lottery and my GPU isn't stable above 1228MHz even with an overvolt.

The release of the Nano does make me somewhat curious how it would compare against a GTX 980 Ti with a restricted power limit and a controlled-throttle overclock + undervolt similar to what I'm doing with my full-size GTX 770. NVIDIA's Boost 2.0 makes it a bit tricky to do this on GPU models which don't already have a voltage controller supporting undervolting, but it is possible without the GPU dropping to low-power 3D clocks if you set the power limit slightly above, set the temp limit right at the border of the initial single-step boost-clock throttling threshold, and have cooling adequate to maintain a nearly constant temp at that point with 100% in-game GPU utilization. Considering how well Maxwell seems to overclock, I'd assume you could achieve a rather significant undervolt without in-game clocks falling below stock max in-game boost performance levels. Though it's hard to really know, when review sites usually only cover OC overvolt and OC stock-volt, but not stock-perf/OC undervolt, to see which GPUs have good headroom in that area.
 
IMG0047725.png


This kind of information is useful in a fair GPU test, and it shows how loose an open-bench-only test is.

It's from hardware.fr.

It shows how important the cooler is: the same GPU can act like two different cards, depending on the cooler.

I was a little surprised by this chart at first. Then I remembered that if that's the reference 980 Ti, the fan curve is very conservative for whatever reason. Maybe they thought downclocks were more acceptable than more noise. We know how "hot and noisy" can kill a card... It's interesting because you know it was a calculated choice. It's not by any means a limitation of the cooler. I had my card running at 1450, but with a bit of noise. It's still very important to know that at xx dB of noise you'll eventually downclock.

I always loved [H]'s in depth analysis. Most of us know Brent takes his time to do it right and that's why it's so time consuming. Personally if I was doing SFF I'd water cool it and use high end parts, so most of this heat discussion doesn't apply to me.

I wish I worked for AMD so I could find out what the hell they are thinking. To create a distance between yourself and some of the most trusted review sites, that will eventually review your card anyways, most likely looking for what you were trying to hide is mind boggling to me. They completely destroyed their "less evil underdog" appearance.
 
Well, I wanted the Nano, but coil whine is a no-no. The cooler on a card is very important, doesn't matter if it's AMD or Nvidia, so it should be tested, because it can have a big impact.
 
More relevant now with the 1080 Sucker's Edition.

Nvidia GeForce GTX 1080 im Test (Seite 6)

24ggDDi.jpg


Not on an open bench, but in a gaming case. In a few games, its real clock speed is below the paper boost spec of 1733MHz, even falling to the base boost clocks.

The difference is a 10-15% performance loss compared to review sites that run a short few-minute benchmark without warming up the cards first. TechPowerUp, for example, said their 1080 boosts to 1.89GHz; hence, they show some of the biggest perf gains compared to the 980 Ti.

Even more interesting, when you OC it and leave it gaming for a long period of time, see what happens:

DIY GTX 1080 'Hybrid' Results – Higher Stable Clock, 102% Lower Thermals

gtx-1080-ghybrid-oc-stability-v-t.png


Yes, that's right, gentlemen. Even on an open-bench setup, this card, when OCed and gaming for longer than 12 minutes, starts to throttle massively, even far below the base clocks.

Quote:

"This chart shows the clock-rate versus time. The GTX 1080 chokes on its thermals (and power), and begins spiraling once it's racked-up heat over the period of an hour-long test. 82C is the threshold for a clock-rate reduction on the GP104 GPU, as we show above.

You can see that the clock remains stable for a good 10+ minutes, but starts dying after that. The clock-rate recovers about 15 minutes later. These dips cause severe frame dropping or complete screen blackouts, in the worst cases."

I have never seen a prior NV GPU in the recent era that behaves this way.
 
I have seen this. On my MSI 780 Lightning, but that was power throttling.

I have heard the Titan X's have bad thermal throttling as well.
 
You know, you could just increase the fan speed to overcome the thermal limit that is causing the throttling in your example where it is overclocked :)
You do note that HardOCP managed it here OK when OCing.
However, they still hit some blips even with the temp at 63 degrees, but that is possibly down to the voltage profile and how Boost3 operates when it peaks, so some kind of "limiter"; IMO that is something that would need to be proven.

I do think, though, that NVIDIA is too soft with the default fan profile even when not OCing.
I think some forget that the reference 980 Ti could also suffer from throttling.
BTW, that 1733 is a figure gameshardware decided to use as an average.

Anyway, if you want to OC beyond a basic 10%, then of course you should buy a custom AIB card.
And TBH, a custom AIB card usually makes sense for most consumers (those who do not need a blower/exhaust cooler), as it has a cooling system that is more ideal in terms of noise, performance, and large OCs.
I think most of those being critical either wait for the more popular custom AIB cards instead of buying NVIDIA's reference cards, or are not even NVIDIA customers :)

Cheers
 
You know, you could just increase the fan speed to overcome the thermal limit that is causing the throttling in your example where it is overclocked :)

That's what Computerbase.de did with their maximum testing: they ran 100% fan speed with the power limit raised to 120%. It did not stay above 2GHz; after 20 minutes, it dropped down to ~1721-1785MHz. This was inside a case, though, not an open bench where ambient (the blower's intake air) would be better.

The point here is that while it can do a 10% OC, it cannot maintain it for longer than 20 minutes inside a gaming case.
 
That's what Computerbase.de did with their maximum testing: they ran 100% fan speed with the power limit raised to 120%. It did not stay above 2GHz; after 20 minutes, it dropped down to ~1721-1785MHz. This was inside a case, though, not an open bench where ambient (the blower's intake air) would be better.

The point here is that while it can do a 10% OC, it cannot maintain it for longer than 20 minutes inside a gaming case.
I am not sure I follow: they showed that after 20 minutes it still operated above the base clock of 1600; some games could maintain the boost clock, some could not, and a very few were even higher, and this was not OCed.
That 10-15% is easily achievable from the base clock IF NVIDIA makes the fan noise more like the reference 970's (the simplest change); currently it is quieter than the reference 970.
One reason for the variability is that boost is much more dynamic than in the past and based not just on temp but also on voltage; however, I do think NVIDIA was too soft with the fan profile and/or needs to tweak how the Boost3 algorithm defines frequency based upon voltage and thermals.
Your chart above was for normal operation I think: Nvidia GeForce GTX 1080 im Test (Seite 6)

When they overclocked, they said:
Nvidia GeForce GTX 1080 im Test (Seite 10)
The test sample did not break the 2.0 GHz barrier permanently. In the short term it does run at more than 2.0 GHz, but after five minutes at the latest the frequency is lower. In Anno 2205, the GeForce GTX 1080 ends up working at about 1,870 MHz instead of 1,720 MHz, and in Star Wars: Battlefront at 1,970 MHz instead of 1,780 MHz. Depending on the game, the clock speed increases by ten to eleven percent.

Their issue, and what many have said (once you resolve the thermal issue by setting the fan to a much more aggressive profile):
The power target kicks in early
A problem when overclocking that buyers of the GeForce GTX 1080 Founders Edition will run into is the power target. Even at its maximized value, it kicks in immediately during overclocking and reduces the clock. Without this behavior, one or another additional percentage point of performance would be possible; partner cards, with a further power connector on the PCB, will allow significantly more.


But let's not forget: to OC properly, as mentioned a few times, you will need a more advanced OC tool that is currently in alpha, the EVGA utility that hooks into Boost3.
Cheers
 
If you're confused about Computerbase.de's results, their "Maximum" is 100% fan and the power limit maxed out.

In that table, the right column is the result of running the reference 1080 with 100% fan and a 120% power limit, after 20 minutes in a gaming case.

Gamers Nexus also did an OC with the reference 1080 and found that 2.1GHz or so was stable for about 2 minutes; then it stayed at 2GHz for about 10 minutes more, before it started to throttle.

There's a few ways to interpret the result.

1. Thermal Limited
2. Power Limited

It looks to be a combination of both. With the default fan settings, it becomes thermally limited. At 100% fan speed and OCed, it becomes power limited.

What this means is that if you intend to get a 1080 and slap a water cooler on it to get an amazing OC, it ain't going to happen; the PCB is just too weak on power phases for it.

ie. Get a custom 1080.
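The thermal-versus-power distinction above can be made concrete with a trivial classifier over logged telemetry. A Python sketch (the 82C threshold is the GP104 figure quoted earlier; the 180W default is the reference GTX 1080's rated board power, used here purely as an illustrative limit):

```python
def throttle_reason(temp_c, power_w, temp_limit=82.0, power_limit_w=180.0):
    """Classify a throttle event from one telemetry sample.
    Limits are illustrative defaults: 82 C matches the GP104 threshold
    quoted above; 180 W is the reference GTX 1080's rated board power."""
    reasons = []
    if temp_c >= temp_limit:
        reasons.append("thermal")
    if power_w >= power_limit_w:
        reasons.append("power")
    return reasons or ["none"]

# Default fan profile: heat builds up first.
print(throttle_reason(83.0, 165.0))   # → ['thermal']
# 100% fan + OC: temps are fine, but the board hits its power ceiling.
print(throttle_reason(65.0, 182.0))   # → ['power']
```

Running this over a full clock/temp/power log (like the charts above) would show exactly which limit each dip corresponds to.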
 
I was looking at the middle column, not the far right...
And then went to their specific OC page, which gives better detail than the far-right column and seems to align more with what I said: Nvidia GeForce GTX 1080 im Test (Seite 10)
BTW, I do not think we are disputing that there is a thermal/voltage throttling issue caused by one or more factors (the primary one being the fan in normal operation, and power supply plus fan when OCing).
But my point is you can still hit 10% over base clock by adjusting the fan, whose profile is too soft.
TBH, I prefer the article done on this site, and also TPU's, with regard to boost/OC behaviour, as it is not very clear what they did with the fan; they just mention temperature and power target.
The temperature target, according to other sites, is something that raises the ceiling of the max temp allowed rather than being related to the fan profile; also, their results do not seem to match other sites that maxed the fan for OC.

Cheers
 
The following charts make it pretty clear that the fan profile and offsets/Boost3 behaviour have a lot to do with getting 10% above boost; setting temperature and power targets is not enough.

01-Clock-Rate_w_600.png



03-Temperatures_w_600.png



The behaviour is pretty similar to what Brent/Kyle mention in their own analysis.
And yeah, that FurMark result is very strange behaviour; it seems to be kicking in some kind of limiter/protection and also somehow pushes the temperature beyond the 83-degree ceiling.
Cheers
 
Glad to see this is finally being noticed by many! Seems there can be some very vehement denial of this information, for some strange reason.

Missing FETs + an extremely dense 16nm chip = shitty long-term clocks with this cooler, with power + thermal limits in some cases. Their 2.1GHz frame-limited demo was a total load of misleading crap for the average consumer.
 
All I did with my Titan X was make a custom fan profile; at 80C it was at 100%. I set the temp target a little higher than 80C. I was no longer thermally limited!

If you want to OC your card, setting up a custom profile shouldn't be too much to ask.

This 1080 situation is NO different than the reference 980/980 Ti/Titan X (besides the price gouging). You need to customize the fan profile or use a waterblock, then BIOS mod or hard mod to remove the power limit. I still don't get the drama. AIBs are usually still power limited without BIOS mods.
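For what it's worth, a custom fan profile is just a piecewise-linear curve like the one described above (100% fan at 80C). A small Python sketch of the interpolation (the curve points are illustrative; real tools like MSI Afterburner let you draw this curve directly):

```python
def fan_speed(temp_c, curve=((30, 30), (60, 50), (80, 100))):
    """Linearly interpolate fan % from (temp C, fan %) points.
    The example curve hits 100% fan at 80 C, mirroring the custom
    profile described above; the other points are made up."""
    points = sorted(curve)
    if temp_c <= points[0][0]:
        return points[0][1]
    if temp_c >= points[-1][0]:
        return points[-1][1]
    for (t0, f0), (t1, f1) in zip(points, points[1:]):
        if t0 <= temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)

print(fan_speed(80))  # → 100 (full fan at the 80 C point)
print(fan_speed(70))  # → 75.0 (halfway between the 60 C and 80 C points)
print(fan_speed(25))  # → 30 (clamped to the lowest point)
```

A more aggressive profile is just steeper points; the mechanism is the same.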
 
This 1080 situation is NO different than the reference 980/980 Ti/Titan X (besides the price gouging). You need to customize the fan profile or use a waterblock, then BIOS mod or hard mod to remove the power limit. I still don't get the drama. AIBs are usually still power limited without BIOS mods.
You shouldn't have to do any of this with a card to get it to where they claim it can go, especially a hard mod.
 
Dayaks, they used a 100% fan profile and still, in regular use, achieved 1785MHz on average, which is nice, but quite a bit different from the 2100 they tried to pass off with the cherry-picked Doom test of the original presentation.

I would think that if anyone wants a 1080, they should seriously consider waiting a couple of weeks for the non-FE versions at the least.
 