Nvidia GeForce GTX 1080 Final Specifications And Launch Presentation

Megalith

VideoCardz has gotten their hands on slides detailing various aspects of the GTX 1080.

…Pascal GP104 is split into four blocks (Graphics Processing Clusters); each GPC has 5 Streaming Multiprocessors and each SM has two blocks of 64 CUDA cores, giving a total of 2,560 CUDA cores. As a side note, GP100 has only one block of 64 CUDA cores per SM. Interestingly, the ROP count has not changed since the GTX 980: the GTX 1080 has 64 ROPs, so the GTX 980 Ti still has 32 more. The die area of the GP104 GPU is 314 mm2, right in between GM204 and GM206. Despite the smaller GPU we have 2 billion more transistors than GM204, and transistor density increased by 10 million per mm2 to almost 23 million.
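The density claim is easy to sanity-check. Here's a quick back-of-the-envelope calculation; the transistor counts are the published spec-sheet numbers (GP104 ≈ 7.2 billion, GM204 ≈ 5.2 billion — treat those as assumptions, since they aren't in the slides above):

```python
# Sanity check of the quoted transistor-density figures.
# Transistor counts are published spec-sheet numbers (assumptions);
# die areas are from the article above.
chips = {
    "GP104": {"transistors": 7.2e9, "area_mm2": 314},
    "GM204": {"transistors": 5.2e9, "area_mm2": 398},
}

for name, c in chips.items():
    density_m_per_mm2 = c["transistors"] / c["area_mm2"] / 1e6
    print(f"{name}: {density_m_per_mm2:.1f} M transistors/mm^2")

# GP104 works out to ~22.9 M/mm^2 and GM204 to ~13.1 M/mm^2, so the
# "increased by 10 million per mm2 to almost 23 million" claim holds up.
```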
 
314 mm2, 601 mm2, 398 mm2

Is the failure rate on this process higher? Why the smaller die from the base 980?

It's not worth $600 though. $500 is probably more in line with the stats on paper -- especially since they're getting more dies per wafer.
 
It's not worth $600 though. $500 is probably more in line with the stats on paper -- especially since they're getting more dies per wafer.

End result is this is more expensive for them though
 
314 mm2, 601 mm2, 398 mm2

Is the failure rate on this process higher? Why the smaller die from the base 980?

Can't really compare die sizes. It's the number of transistors on those die that counts...and the 1080 has more.
 
It's not worth $600 though. $500 is probably more in line with the stats on paper -- especially since they're getting more dies per wafer.

Well, that would depend on yields. In any case, wafer costs have doubled from 28nm, so the chips end up being much more expensive.

Can't really compare die sizes. It's the number of transistors on those die that counts...and the 1080 has more.

Not only that, the price per transistor has increased with FinFET...
 
Well, that would depend on yields. In any case, wafer costs have doubled from 28nm, so the chips end up being much more expensive.



Not only that, the price per transistor has increased with FinFET...

Price per transistor is actually just a function of density; the price-per-transistor figures we've seen assume maximum transistor density, when in fact we won't be seeing maximum density yet.
 
Well, that would depend on yields. In any case, wafer costs have doubled from 28nm, so the chips end up being much more expensive.

Then that explains why they need more chips per wafer while selling each at the same price as the larger chips. Thanks. Do you know if their Pascal fab is set up to use 450mm wafers, or are they still on smaller wafers?
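For what it's worth, leading-edge fabs are still on 300mm wafers (450mm never reached production). The "more dies per wafer" point can be sketched with the standard first-order die-per-wafer estimate; the formula and the 300mm wafer size are assumptions, and it ignores defect yield entirely:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """First-order estimate: wafer area over die area, minus an edge-loss
    term for the partial dies lost around the circumference."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Die areas from the thread: GP104 314, GM204 398, GM200 601 (all mm^2).
for name, area in [("GP104", 314), ("GM204", 398), ("GM200", 601)]:
    print(f"{name} ({area} mm^2): ~{dies_per_wafer(area)} candidate dies per 300mm wafer")
```

By this rough estimate GP104 gets roughly twice as many candidate dies per wafer as GM200 did, which is exactly the lever being argued about above.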
 
Looks to me like a discounted 980Ti @ around $400 is a much better purchase.
 
Not if it performs worse than a 1070.
1080 is 15-25% faster than 980Ti, maybe a touch more.

I don't get where people see a further hobbled 1070 sitting within less than that margin. Neither card will do 4K60 in the latest and upcoming AAA titles.
 
1080 is 15-25% faster than 980Ti, maybe a touch more.

I don't get where people see a further hobbled 1070 sitting within less than that margin. Neither card will do 4K60 in the latest and upcoming AAA titles.

Even if the 1070 performs the "same" at launch, it will be using a newer architecture, consume less power, have extra RAM, and have more features (Async Compute...). All of that should make for a card that will be a smarter choice for future games, for less than $400.
 
I thought we already determined (straight from Nvidia's mouth) that 1080/1070 won't have async compute at all?

So yeah, sorry. 980Ti with a moderate OC beats the shit out of 1070 from a price/performance perspective. Unless you care about power draw. And let's be real here... none of us do. It's like asking the guy who likes to race on the weekends if he cares that his purpose-built race car with roll-cage, 5-point harness and the like gets 8 MPG (if that) on average. He'll laugh and tell you to go to hell.

I'll do the same if you start asking me what my at-the-wall draw is on my machine.

By the way -- it's way lower than you expect. I've been comfortably running this setup with a 4.5-4.6GHz OC on the CPU for almost a year, with 980Ti SLI on a supposedly "bad" 750-watt Corsair CX750. Even if you believe, as most reviews suggest, that the CX750 can't put out anywhere near 750W... my system is pulling under 650W at its highest load numbers, even with the OC on the CPU.

I'll be getting a stronger PSU shortly, but only because I intend to OC both GPUs heavily, and possibly put them under water.
 
Even if the 1070 performs the "same" at launch, it will be using a newer architecture, consume less power, have extra RAM, and have more features (Async Compute...). All of that should make for a card that will be a smarter choice for future games, for less than $400.

That I can agree with! Same with Polaris 10: it could offer similar performance to existing products, but power/price/features will really help sell it, especially if it scales well in mGPU in AAA titles. You'd be looking at dual cards for 4K, either 1080s/1070s or P10s; a bit of a no-brainer when one costs $200-300 less than the x70 and more than double that versus a 1080. That's all price speculation, but it's a likely position for them.

I meant it more from the POV that the market positioning doesn't make sense with the currently known 1080 specs. The x70 is usually quite a bit slower, and $300 more really doesn't make much sense considering they still likely can't drive 4K on a single card in some new or upcoming AAA titles (ultraubermaxmode).

And I thought async wasn't so useful for Nvidia since they already had higher utilisation, etc. I'll leave that one for razor and co... lol ;)
 
It's not worth $600 though. $500 is probably more in line with the stats on paper -- especially since they're getting more dies per wafer.

It's worth whatever people are willing to pay. And they'll pay $600 and do it with a smile and it'll be sold out everywhere.
 
Trying to work out if this card is actually all that much faster in normal games, not VR... It seems almost like a mobile chip that has been clock-boosted to make it fast in a desktop... But the 1070 will be interesting to compare to a vanilla 980 or a high-end 970...
 
It's worth whatever people are willing to pay. And they'll pay $600 and do it with a smile and it'll be sold out everywhere.

That's exactly right. nV is banking on people paying a premium for 1080. What it costs them is completely irrelevant if they think they can sell it for more thereby raising margins.
 
I will be going the 1070 route just because I can, and I should update from this 660Ti that is still doing just fine. I no longer do bleeding-edge upgrades, as I am older and slower and not as competitive as I once was, but I could benefit from the new architecture because I like shiny things.
 
Even if the 1070 performs the "same" at launch, it will be using a newer architecture, consume less power, have extra RAM, and have more features (Async Compute...)
Don't be too sure about that last one:

For DirectX 12 and VR, the term Asynchronous Compute was thrown around, especially since AMD Radeon-based cards were beating Nvidia GeForce cards in DirectX 12 titles such as Ashes of the Singularity and Rise of the Tomb Raider. We were told that the Pascal architecture doesn't have Asynchronous Compute, but that there are some aspects of this feature which qualified the card for the 'DirectX 12_1' feature set.
Pascal Secrets: What Makes Nvidia GeForce GTX 1080 Fast?
 
Review embargo lifts tomorrow. This is their attempt to squeeze just a little more ad-view blood from the crazed enthusiast turnip (yes, you resemble a turnip--don't try to hide, we all see you...)
 
I thought we already determined (straight from Nvidia's mouth) that 1080/1070 won't have async compute at all?

So yeah, sorry. 980Ti with a moderate OC beats the shit out of 1070 from a price/performance perspective.

Statements like "a mild OC can beat X card" always amuse me because they always ignore that the other card can be OC'd too. It remains to be seen how well the 1070 can overclock, but chances are it will be at least somewhat decently. If it can OC well and stay fairly cool while doing so I'd actually call the 1070 a better buy, but we'll see. Actual numbers will tell the whole story, all we're doing now is playing a game of "what if" and "maybe" with no solid performance information to back up our thoughts.
 
Trying to work out if this card is actually all that much faster in normal games, not VR... It seems almost like a mobile chip that has been clock-boosted to make it fast in a desktop... But the 1070 will be interesting to compare to a vanilla 980 or a high-end 970...
I'm in the same boat. I'm sitting on a 980 Ti just doing 1080p/144Hz gaming. As long as I'm able to get 60+ fps on ultra settings, I'm happy.

Can't wait to see actual game benchmarks...I'd imagine it'll only be a 10% boost in fps.
 
Can wait for the reviews. They will only compare the 1070 and 1080 to the previous gen; if they pity us, they may compare them to the stock 970 (the most popular card on Steam). They will never compare them to a 770/780 -- you know, that card you bought 2 years ago...
 
Can wait for the reviews. They will only compare the 1070 and 1080 to the previous gen; if they pity us, they may compare them to the stock 970 (the most popular card on Steam). They will never compare them to a 770/780 -- you know, that card you bought 2 years ago...

I hope they include the 770/780.
 
So yeah, sorry. 980Ti with a moderate OC beats the shit out of 1070 from a price/performance perspective. Unless you care about power draw. And let's be real here... none of us do.
Not quite the same. In semiconductors, power consumption is an important factor because most of the power consumed is dissipated as heat. The architecture with better efficiency will therefore scale better too, as there's less power dissipation to manage.
 
Price per transistor is actually just a function of density; the price-per-transistor figures we've seen assume maximum transistor density, when in fact we won't be seeing maximum density yet.

I know, but the metric is useful for showing just how expensive FinFET is. Yes, you get more density with FinFET... but as wafer prices increase more than the density improves, the end product gets more expensive, which is the first time that has ever happened.
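The mechanism is easy to illustrate. Actual wafer prices are under NDA, so every number below is invented purely to show how a doubled wafer cost can swamp a density gain; none of it is real foundry pricing:

```python
# Illustrative only: all prices and yields below are made-up assumptions,
# chosen to show the mechanism, not real foundry numbers.
def cost_per_good_die(wafer_cost_usd: float, dies_per_wafer: int,
                      yield_fraction: float) -> float:
    return wafer_cost_usd / (dies_per_wafer * yield_fraction)

# Hypothetical: a 28nm wafer at $5,000 vs a 16nm FinFET wafer at $10,000
# (the "doubled" claim above), with the newer process yielding worse early on.
old = cost_per_good_die(5_000, 144, 0.80)   # GM204-sized die, mature process
new = cost_per_good_die(10_000, 187, 0.65)  # GP104-sized die, young process
print(f"28nm die: ${old:.0f}   16nm die: ${new:.0f}")

# Even with ~30% more dies per wafer, the smaller die ends up costing more
# per good die once the wafer price doubles and yield dips.
```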
 
I thought we already determined (straight from Nvidia's mouth) that 1080/1070 won't have async compute at all?

A Quantum Leap in Gaming: NVIDIA Introduces GeForce GTX 1080

New asynchronous compute advances improve efficiency and gaming performance. And new GPU Boost™ 3 technology supports advanced overclocking functionality.
 
Review embargo lifts tomorrow. This is their attempt to squeeze just a little more ad-view blood from the crazed enthusiast turnip (yes, you resemble a turnip--don't try to hide, we all see you...)


Shit yea. Finally! Legit reviews as my head was exploding from all the speculation
 
Shit yea. Finally! Legit reviews as my head was exploding from all the speculation

ONE HUNDRED AND FIFTY BILLION TRANSISTORS

nvidia_ceo_tattoo.jpg
 
Unless you care about power draw. And let's be real here... none of us do.
I care about power draw when it exceeds the limit of what comes out of my wall; at that point you're overclocking the wires in your home.
 
I care about power draw only to the point where it gets way too hot. As long as a 490/1070 fits in my SFF without thermal or power issues, I'm fine.
 
I care about power draw when it exceeds the limit of what comes out of my wall; at that point you're overclocking the wires in your home.

A 20A draw at the wall requires something like an 1800W PSU at near full load (I'm not an EE, but I think my numbers here are close enough), so unless you're running 10A breakers in the room in question, or are in an old house where the wiring isn't up to code, you have nothing to worry about.
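Roughly right. Assuming a 120V North American circuit and ~90% PSU efficiency (both assumptions), the arithmetic looks like this:

```python
# Back-of-envelope: what PSU load corresponds to a 20A wall draw?
# Assumes 120V mains and 90% PSU efficiency -- both are assumptions.
volts, amps, efficiency = 120, 20, 0.90

wall_watts = volts * amps            # power drawn at the wall
dc_watts = wall_watts * efficiency   # power actually delivered to components

print(f"{wall_watts}W at the wall -> ~{dc_watts:.0f}W of DC output")
# 2400W at the wall -> ~2160W of DC output, so tripping a 20A breaker
# takes a far bigger load than any single-GPU gaming box will produce.
```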
 
Ah can't wait for this card! Such a huge performance jump. With reviews out today it's clear it's a monster!
Now time for the watercooling companies to pick them up and start mapping out some waterblocks!
 