NVIDIA GeForce GTX 1080 Founders Edition Review @ [H]

Yes they are allowed to add circuitry/VRMs/additional power inputs... However, if Nvidia is staying the course they've been on for the last few years, the cards will still be voltage locked.

Speaking of voltage, I wonder if it'll scale. Maxwell didn't scale a whole hell of a lot. Hopefully Brent looks into that in the OC review.
 
Impressive results for a single card at 4K. I'm waiting to see how it does in SLI in the coming weeks. If I like the 4K results in SLI, it looks like I'll be switching back to team green this year. Newegg CC is in hand, waiting to order.
 
That's right... but I think this time, with all the talk in the article about the relationship between clocks and fan speed, it would have been worth maybe another 30 minutes for Brent to test how the card behaves with an aggressive fan profile, or at 100% fan speed. 30 or 60 minutes for both tests would have kept us all happy; instead we now have to go to other review sites to see numbers with an improved fan curve or 100% fan speed.
Has anyone found reviews testing at increased fan speeds (that also note decibel levels please)?
 
This is just the baseline - add in driver improvements and AIB custom configurations and I see a lot of performance growth ahead for this card. In short, it is a beast of a card for those with a 980, 290X, etc. If one can get an AIB card for $599, that seems very reasonable and actually almost a good deal. And if the 1070 performs 20% slower, it will still beat everything out now. Now you're talking $399!

Good review overall I say, nice excitement and a day we have been waiting for like forever (node change)

I think AMD is really shooting more towards the mainstream with Polaris and less at high-end cards.
 
So a lot of people suddenly forgot the fact that this card is replacing the GTX 980, not the GTX 980 Ti, which is already a good chunk faster than the GTX 980. We 980 Ti/Titan X owners will get a proper replacement for our cards later. At this point in the market, Nvidia doesn't have any reason to put out the strongest possible GPU; just as they did with the GTX 680, GTX 780 and GTX 980, they just want to put out something faster and milk some money. This time it's even easier because they don't have any competition, which could get even worse since AMD has stated its focus on the mid-range/mainstream market, so the performance crown is free for Nvidia.

The GTX 980 was praised at launch despite being only marginally faster than the then-current 780 Ti. Nvidia now launches a 1080 that is 25-35% faster overall than the current 980 Ti, and suddenly it's a pain in the butt for every 980 Ti owner, who calls it a major disappointment. That's a joke... this card will be at least $100 cheaper in 6-8 months when the new cards and new kings arrive on the market, same as the GTX 780 and GTX 980 did. History is cyclical.
 
The performance is what I expected, no disappointment there. I don't know how to feel about the pricing - in my view the FE gives validation to higher priced custom options. We will see what they come in at, I'm expecting around the same as the FE personally.
 
So a lot of people suddenly forgot the fact that this card is replacing the GTX 980, not the GTX 980 Ti, which is already a good chunk faster than the GTX 980. We 980 Ti/Titan X owners will get a proper replacement for our cards later. At this point in the market, Nvidia doesn't have any reason to put out the strongest possible GPU; just as they did with the GTX 680, GTX 780 and GTX 980, they just want to put out something faster and milk some money. This time it's even easier because they don't have any competition, which could get even worse since AMD has stated its focus on the mid-range/mainstream market, so the performance crown is free for Nvidia.

The GTX 980 was praised at launch despite being only marginally faster than the then-current 780 Ti. Nvidia now launches a 1080 that is 25-35% faster overall than the current 980 Ti, and suddenly it's a pain in the butt for every 980 Ti owner, who calls it a major disappointment. That's a joke... this card will be at least $100 cheaper in 6-8 months when the new cards and new kings arrive on the market, same as the GTX 780 and GTX 980 did. History is cyclical.

As much as I think we'll get a Ti series this go-around... that's not confirmed yet. I feel like people tend to forget the x80 series has only had a Ti model for the last two releases, and those seemed more of a response to AMD's offerings.
 
Would like to ask Kyle what his overall opinion would be on 1080s in SLI vs. Titan X's in SLI, especially with the experience you had with SLI Titan X's at 4K that you wrote about over time. I've got Titan X's in SLI and a BenQ 3201 4K, running with a 4790K OC'd. Overall I have enjoyed the system a lot the past year, with just occasional aggravations like the Fallout 4 launch with no 4K profile, etc.

I keep my room temp at 70 year round (the wife has finally adjusted :p), have the reference blowers exhausting the cards' air out of the case (found better results in SLI with reference blowers), and my Titan X's overclock quite well with a fairly aggressive fan profile.

While I'd like to get rid of SLI, it looks like one 1080 isn't quite there for worry-free 4K performance, or am I wrong about that? Would you stick with Titan X's in SLI, deal with the SLI headaches, and wait for the 1080 Ti or Titan version of it, or do you expect it to be another year before a single-card solution can safely handle 4K at 60 fps?


I could sell mine now and replace them with 1080 SLI; just curious whether it seems worth the hassle or whether I should wait.

I think we will find out if Kyle puts two of them in his sig. I'm waiting for Titan X prices to fall to go SLI myself.
 
I'm sorry if this has been answered and I missed it but is this card still restricted to 8 bit colour output like previous gen Geforce cards or has Nvidia lifted the lock for new games that can make use of a full 10 bit colour range and HDR?
 
Thanks for putting in the time and effort for another great review, [H]!

I'm impressed with the 1080. And, as mentioned in the article, I am really looking forward to AIB models featuring aftermarket cooling... reminds me of the initial 290X release - the OEM HSF was utter crap, but when the aftermarket coolers started popping up showing a 20-40C drop in load temps, it transformed the 290X into what it is today: one of the most long-lived GPUs in history.

...I can see that potentially happening with the 1080 Ti, assuming that it provides another 20-35% performance bump over the 1080 FE and runs relatively cool (60-70C at full load) with a non-reference HSF. But time will tell, and so will AMD's answer.
 
They are using a custom cooler on it btw, accelero extreme iv
That was not just for the 2GHz core clock?
Possibly academic anyway as I understand they did the game tests with the clock at 1733 when compared to other cards.
Cheers
 
The performance of this card is not what bothers me nearly as much as the gimmicky FE launch price. IMO they have simply priced themselves out of that segment for lower-tier high-end cards; $550 is where this card needs to be. A lot of its enhanced capabilities are things that game devs need to support as well, which effectively makes those features worthless for 6-12 months, and that's assuming game devs even bother. Game devs have pretty much given up on 3D support, SLI support, etc. now, so it doesn't fill me with confidence that they will all jump on board for these new features. I will wait a month and see what shakes out of the AIB partner tree. If their prices are $700 then I may have to bite the bullet, but at least I'll get more performance for the money.
 
I'm sorry if this has been answered and I missed it but is this card still restricted to 8 bit colour output like previous gen Geforce cards or has Nvidia lifted the lock for new games that can make use of a full 10 bit colour range and HDR?

Pretty sure it's still 10 bit rendering and 8 bit output or whatever.
 
I'm sorry if this has been answered and I missed it but is this card still restricted to 8 bit colour output like previous gen Geforce cards or has Nvidia lifted the lock for new games that can make use of a full 10 bit colour range and HDR?

Pretty sure it's still 10 bit rendering and 8 bit output or whatever.
On page 2 of the review at the end of the page, it states it can output 12-bit color for 4K HDR.

HDR display support is going to be a big topic this year, especially when AMD Polaris is launched. HDR displays are coming, and they are poised to give us a whole new perspective on image quality.

The GeForce GTX 1080 is capable of 12b color (BT.2020 wide color gamut), SMPTE 2084 (Perceptual Quantization) and HDMI 2.0b and DisplayPort 1.4 for 10/12b 4K HDR. Pascal introduces new features such as 4K@60 10/12b HEVC Decode for HDR video, 4K@60 10b HEVC Encode for HDR recording or streaming, and DisplayPort 1.4 ready HDR Metadata Transport. GeForce GTX 1080 is capable of 7680x4320 @ 60Hz. It will certainly be interesting to see how AMD Polaris compares in display technology and HDR support.
 
On page 2 of the review at the end of the page, it states it can output 12-bit color for 4K HDR.

HDR display support is going to be a big topic this year, especially when AMD Polaris is launched. HDR displays are coming, and they are poised to give us a whole new perspective on image quality.

The GeForce GTX 1080 is capable of 12b color (BT.2020 wide color gamut), SMPTE 2084 (Perceptual Quantization) and HDMI 2.0b and DisplayPort 1.4 for 10/12b 4K HDR. Pascal introduces new features such as 4K@60 10/12b HEVC Decode for HDR video, 4K@60 10b HEVC Encode for HDR recording or streaming, and DisplayPort 1.4 ready HDR Metadata Transport. GeForce GTX 1080 is capable of 7680x4320 @ 60Hz. It will certainly be interesting to see how AMD Polaris compares in display technology and HDR support.

Oh, very nice. I misunderstood and was thinking the 10/12 bit was just rendering.
 
Pretty sure it's still 10 bit rendering and 8 bit output or whatever.
It can't be 8-bits for HDR. The various HDR formats require more than 8-bits.
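Quick sanity check on the numbers behind that claim (nothing card-specific, just counting code values per channel in Python):

# Levels per colour channel at each bit depth. HDR formats like HDR10 (SMPTE 2084 PQ)
# spread a much wider luminance range across these steps, so 8 bits per channel
# shows visible banding; that's why the HDR specs call for 10- or 12-bit output.
for bits in (8, 10, 12):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels per channel, {levels ** 3:,} total colours")

# 8-bit:  256 levels per channel, 16,777,216 total colours
# 10-bit: 1024 levels per channel, 1,073,741,824 total colours
# 12-bit: 4096 levels per channel, 68,719,476,736 total colours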

With such a die change I would have expected a cooler chip... something seems off to me a bit.
Smaller die + similar power consumption = hotter chip. There's less surface area of silicon in contact with the heatsink to pull heat through.
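To put rough numbers on that, here's a quick back-of-envelope sketch; the TDP and die-area figures are the commonly quoted approximate specs, used only to illustrate the point, not measurements from this review:

# Back-of-envelope heat flux density (watts per mm^2 of die) the blower has to cope with.
cards = {
    "GTX 980 (GM204, 28nm)": {"tdp_w": 165, "die_mm2": 398},
    "GTX 1080 (GP104, 16nm)": {"tdp_w": 180, "die_mm2": 314},
}
for name, c in cards.items():
    flux = c["tdp_w"] / c["die_mm2"]  # heat pulled through each mm^2 of silicon
    print(f"{name}: ~{flux:.2f} W/mm^2")

# GTX 980 (GM204, 28nm):  ~0.41 W/mm^2
# GTX 1080 (GP104, 16nm): ~0.57 W/mm^2  -> roughly 40% more heat per unit of contact area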
 
On page 2 of the review at the end of the page, it states it can output 12-bit color for 4K HDR.

HDR display support is going to be a big topic this year, especially when AMD Polaris is launched. HDR displays are coming, and they are poised to give us a whole new perspective on image quality.

The GeForce GTX 1080 is capable of 12b color (BT.2020 wide color gamut), SMPTE 2084 (Perceptual Quantization) and HDMI 2.0b and DisplayPort 1.4 for 10/12b 4K HDR. Pascal introduces new features such as 4K@60 10/12b HEVC Decode for HDR video, 4K@60 10b HEVC Encode for HDR recording or streaming, and DisplayPort 1.4 ready HDR Metadata Transport. GeForce GTX 1080 is capable of 7680x4320 @ 60Hz. It will certainly be interesting to see how AMD Polaris compares in display technology and HDR support.

Thanks. I wonder if this is only for DX11+ fullscreen applications, or if 10-bit output is also enabled for content creation. Nvidia used to allow this on Quadro cards only.
 
Nvidia's direct heat exhaust cooler does do its job, but the GeForce GTX 1080 Founders Edition does face clear and restrictive boundaries that make overclocking completely pointless for sustained or challenging loads.

Admittedly, the load’s a lot lower if you dial back to 1920x1080. But who buys a $700 graphics card for Full HD?
 
I think the conclusion is odd. The benefit of async compute is discounted because it doesn't enable a last gen 28nm card to beat the latest 16nm card? This seems to be an impossibly high bar to set. The question is not whether Fury X with async compute could match Pascal - I don't think anyone expected it would - it's whether Polaris/Vega will show similar gains with async compute that its predecessor does. And whether Pascal's reduced negative gain versus Maxwell is the result of pure brute force or architectural changes that might eventually lead to Pascal showing gains with DX12/async.

The performance speaks for itself; the GTX 1080 is a lot faster in AoTS as the graphs show. It provides the best gameplay experience, that is fact. The words I used and the analysis are correct: with Async Compute, Fury X is not faster than GTX 1080 in this game. We don't discount it, we directly tested it, showed the results, and made the conclusion that Fury X with Async is not faster than GTX 1080 in this game, which is a true statement. I cannot speak on the performance of unannounced and unavailable hardware.

I still think this game and Async Compute are getting blown out of proportion. Async Compute is not the most important graphics feature ever to exist in a GPU. There are other methods of optimizing performance that can be used. Async is just one feature out of many; it isn't even required for DX12. People are treating Async Compute like it's the holy grail and Indiana Jones must find it. It's in one game, one, and no one even plays the game as a game, just as a benchmark. Its future use in games is unknown, and from what developers have been saying it's probably not a feature a lot of them are going to end up using.
 
I noticed VRAM usage of 7 GB in one of the tests. As such, all the other cards in that test would have been VRAM-limited. It would have been interesting and useful to see how the 8 GB 1080 compared against the 12 GB Titan X and the 8 GB 390X.
 
The performance speaks for itself; the GTX 1080 is a lot faster in AoTS as the graphs show. It provides the best gameplay experience, that is fact. The words I used and the analysis are correct: with Async Compute, Fury X is not faster than GTX 1080 in this game. We don't discount it, we directly tested it, showed the results, and made the conclusion that Fury X with Async is not faster than GTX 1080 in this game, which is a true statement. I cannot speak on the performance of unannounced and unavailable hardware.

I still think this game and Async Compute are getting blown out of proportion. Async Compute is not the most important graphics feature ever to exist in a GPU. There are other methods of optimizing performance that can be used. Async is just one feature out of many; it isn't even required for DX12. People are treating Async Compute like it's the holy grail and Indiana Jones must find it. It's in one game, one, and no one even plays the game as a game, just as a benchmark. Its future use in games is unknown, and from what developers have been saying it's probably not a feature a lot of them are going to end up using.

The funniest thing about the async compute debacle is that as long as the compute shaders are being used for rendering tasks, they will be tied to frames, so they will need to be synchronized with the graphics engine. The truly interesting uses for async compute are things like AI processing that are totally decoupled (in terms of time) from the rendering.
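If it helps, here's a purely conceptual sketch of that timing constraint in Python, with CPU threads and an event standing in for GPU queues and a fence; it's not how async compute is actually submitted on the API side, just an illustration of frame-tied vs. decoupled work:

# Frame-tied compute gates presentation of the frame; decoupled work (AI tick) does not.
import threading, time

frame_fence = threading.Event()      # "compute results for this frame are ready"
running = True

def frame_tied_compute(frame_id):
    time.sleep(0.002)                # pretend compute work whose output the frame needs
    frame_fence.set()                # signal the "graphics queue" it may consume results

def decoupled_ai():
    while running:
        time.sleep(0.05)             # AI tick on its own cadence; never blocks a frame
        # update game state here

threading.Thread(target=decoupled_ai, daemon=True).start()

for frame in range(3):
    frame_fence.clear()
    threading.Thread(target=frame_tied_compute, args=(frame,)).start()
    # ... graphics work for this frame would be recorded/submitted here ...
    frame_fence.wait()               # graphics must wait: results are used in this frame
    print(f"frame {frame} presented")

running = False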
 
So no tests were run with a custom fan profile that took it to 100% by say 70 degrees? That might have made quite a difference on being able to maintain a higher boost clock.

Tests were done, time ran out to flesh it out and put the information together properly. I figured in the end it would be a good addition to our overclock testing article to put it in there instead since running an aggressive fan profile is more of an overclocking adventure rather than testing the card at default settings, which is what this evaluation today was.
 
DX12 performance is all over the board.
GamersNexus got crazy good numbers from both Maxwell and Pascal.

NVIDIA GeForce GTX 1080 Founders Edition Review & Benchmark

I don't know what version of AoTS they used, but a new version/build was released in the game just days from this launch, so some may not have had their results on the latest version, ours is, the version/build is noted at the top of the page for our testing. I've seen version changes in this game change the benchmark performance data over time.
 
I'll probably look at a 1070 for my upgrade this round, although I'd need some opinions regarding GSync. I can get a FreeSync monitor with the same stats as a GSync monitor for about $300 less (I'm in Canada). If I'm going NVidia and I'm overdue for a new monitor, is there even an incentive to do this instead of sinking the $300 in a better card from AMD, presuming they launch one? Or does GSync REALLY matter vs a 144Hz monitor? I don't notice any screen tearing on my current set up, but it's possibly due to the fact that my average framerate is close to the 60Hz my monitor is spitting out anyway.

I'll add I have zero brand loyalty, but I just don't understand why it's necessary for GSync to cost so much more than FreeSync (yes, I understand the licensing and the extra components, I'm just saying).
 
Are you suggesting you think the Nvidia reference card (Founders Edition) will outperform AIB manufacturers' reference cards with in-house blowers (i.e. that crappy blower MSI has put out) when it comes to overclocking?
Heat and power are big components of overclocking. I would suggest we will see AIBs with coolers that work better than the FE's and power circuitry that may be a bit more robust.

While I'd like to get rid of SLI, it looks like one 1080 isn't quite there for worry-free 4K performance, or am I wrong about that? Would you stick with Titan X's in SLI, deal with the SLI headaches, and wait for the 1080 Ti or Titan version of it, or do you expect it to be another year before a single-card solution can safely handle 4K at 60 fps?
Titan X SLI is still the tits.
 
That's right... but I think this time, with all the talk in the article about the relationship between clocks and fan speed, it would have been worth maybe another 30 minutes for Brent to test how the card behaves with an aggressive fan profile, or at 100% fan speed. 30 or 60 minutes for both tests would have kept us all happy; instead we now have to go to other review sites to see numbers with an improved fan curve or 100% fan speed.

Time was limited. I like to get all the default data done in the initial launch review to understand where the card is at and what the baseline is. Then we can construct a review solely for overclocking, take that data, and see the improvement. First you have to see where you are starting from, then you can compare that with where you are going. We now have a solid base of information to compare to overclocking and I can focus solely on that for an article.
 