I have to believe the new GPU/memory architecture is going to be impressively better than Maxwell (20%+) at 2K, 4K, and VR applications

Yeah it will; I expect the full GP104 to best the 980 Ti by a healthy 20%+ margin.

The thing is, most 980 Ti users run overclocked, and that will thin the performance delta by a good bit.
 
Why? The GTX 680 destroyed the GTX 580 by an ample margin, way bigger than 20%. Even the GTX 670 did it; man, even the 660 Ti was better, and the GTX 660 traded blows with it. So yes, it's always possible.

Yeah, the 680 was a huge jump over the 580 and, if I'm not mistaken, retailed for $500. Here's hoping we see the same with Pascal.
 
I don't think you'll see a repeat of GM204 vs GK110 until the next graphics-focused arch. You realise GP100 could easily fit 6000 32-bit FPUs, right? I imagine that will come as the next 16nm arch, and I also want to point out that GK110 does compare favorably with GM204 in some games. Additionally, I think the power-of-2 SM structure in Maxwell vs Kepler's 192 also gave it an easy utilization boost. That transition was special, I think.
 
Yeah, the 680 was a huge jump over the 580 and, if I'm not mistaken, retailed for $500. Here's hoping we see the same with Pascal.

You know, you can stop making shit up just to hype the 1080 launch :D

The 680 was 23% faster than a GTX 580 at launch at its target 1080p resolution:

NVIDIA GeForce GTX 680 Kepler 2 GB Review

A year later (after Kepler driver optimizations), it was 25% faster.

NVIDIA GeForce GTX 770 2 GB Review

Just saying, the 680 had a fairly cold reception from current 580 owners because it wasn't that much faster. But for everyone else it was an impressive part.
 
So what's the performance boost from a 980 Ti to the 1080, and from a 980 Ti to a 1080 Ti, looking to be like? I know they are not out yet, but we usually have a decent idea, give or take 5-10%. I am running a 1500 MHz or so 980 Ti and am curious what the 1080 will offer. It sucks the 1080 Ti isn't coming out right away :/
 
You know, you can stop making shit up just to hype the 1080 launch :D

The 680 was 23% faster than a GTX 580 at launch at its target 1080p resolution:

NVIDIA GeForce GTX 680 Kepler 2 GB Review

A year later (after Kepler driver optimizations), it was 25% faster.

NVIDIA GeForce GTX 770 2 GB Review

Just saying, the 680 had a fairly cold reception from current 580 owners because it wasn't that much faster. But for everyone else it was an impressive part.

Well, it depended on the games. Some games saw crazy leaps in performance. Metro saw a 32.58% jump in performance at 1200p. Dirt 3 saw a 36.69% jump in performance at 1200p. Battlefield 3 saw a whopping 40.39% jump in performance at 1200p. I got these benchmarks from here:

NVIDIA GeForce GTX 680 Review: Retaking The Performance Crown
 
Well, it depended on the games. Some games saw crazy leaps in performance. Metro saw a 32.58% jump in performance at 1200p. Dirt 3 saw a 36.69% jump in performance at 1200p. Battlefield 3 saw a whopping 40.39% jump in performance at 1200p. I got these benchmarks from here:

NVIDIA GeForce GTX 680 Review: Retaking The Performance Crown
Yeah, others saw 40-50% as well, but some had piss-poor gains. I think it's more game-related than card-related. The card had 2x the raw processing power.
 
Yeah, others saw 40-50% as well, but some had piss-poor gains. I think it's more game-related than card-related. The card had 2x the raw processing power.

Yeah, it's all dependent on the game. Total War Shogun 2 saw a crazy ass 62.92% jump in performance at 1200p! :wideyed:

But then we have the other spectrum where Crysis Warhead only saw a 17.27% jump in performance at 1200p.

So, yeah, there were both crazy big and very small performance gaps depending on the game.
 
So what's the performance boost from a 980 Ti to the 1080, and from a 980 Ti to a 1080 Ti, looking to be like? I know they are not out yet, but we usually have a decent idea, give or take 5-10%. I am running a 1500 MHz or so 980 Ti and am curious what the 1080 will offer. It sucks the 1080 Ti isn't coming out right away :/
Probably best to skip this one for you
 
I don't think you'll see a repeat of GM204 vs GK110 until the next graphics-focused arch. You realise GP100 could easily fit 6000 32-bit FPUs, right? I imagine that will come as the next 16nm arch, and I also want to point out that GK110 does compare favorably with GM204 in some games. Additionally, I think the power-of-2 SM structure in Maxwell vs Kepler's 192 also gave it an easy utilization boost. That transition was special, I think.

Yep. People forget that Kepler could only hit peak ALU utilization in very specific situations, so on average it was far below that. It's much easier to keep Maxwell running at full speed.

Pascal will need to be pretty special to beat Maxwell with a similar number of cores. Clock speed may play a big part there.
 
Yeah, but I game at 4K... if I gamed at 1200p/1440p I would. It was supposed to be 2x performance per watt at 225W vs 250W, so it should be substantially faster, right?
At stock it will probably be in the same ballpark as a 980 Ti at 1500; it's a considerable improvement over the 980.

You'll probably get two extra gigs of VRAM, and factoring in overclocking maybe a 20% effective performance gain, but who knows if there's as much headroom as there was on Maxwell.

Maxwell clocks were pretty conservative.

Just for reference, 2816 FMA FPUs at 1500 MHz is 8.5 TFLOPS.

2560 FPUs (GP104) at 1800 MHz is 9.2 TFLOPS.

If we assume it clocks that high, and give it another 10% in architectural improvements, it'll deliver an effective 10 TFLOPS, or rather 10/8.5 of the performance of a 980 Ti at 1500.

Just to be clear, if the theoretical maximum is 9.2 TFLOPS it can never actually achieve 10; that figure is only relevant when you compare it to GM200. It's essentially a 10% boost in utilization vs Maxwell.

I think that's as accurate a guess as you can get at this stage.
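If you want to sanity-check that arithmetic, here's a quick Python sketch; the 2560-FPU count, the 1800 MHz clock, and the extra 10% utilization are this post's assumptions, not confirmed specs.

```python
# Rough FP32 throughput estimate: FPUs x 2 ops/clock (FMA counts as 2) x clock.

def fp32_tflops(fpu_count: int, clock_mhz: float) -> float:
    """Peak FP32 throughput in TFLOPS, counting each FMA as 2 ops."""
    return fpu_count * 2 * clock_mhz * 1e6 / 1e12

gm200_oc = fp32_tflops(2816, 1500)    # 980 Ti at 1500 MHz -> ~8.45 TFLOPS (the ~8.5 above)
gp104_guess = fp32_tflops(2560, 1800) # assumed full GP104 -> ~9.2 TFLOPS

# Apply the assumed ~10% utilization/architecture gain to get the
# "Maxwell-equivalent" figure used in the comparison above.
gp104_effective = gp104_guess * 1.10  # ~10.1 TFLOPS

print(gm200_oc, gp104_guess, gp104_effective)
```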
 
At stock it will probably be in the same ballpark as a 980 Ti at 1500; it's a considerable improvement over the 980.

You'll probably get two extra gigs of VRAM, and factoring in overclocking maybe a 20% effective performance gain, but who knows if there's as much headroom as there was on Maxwell.

Maxwell clocks were pretty conservative.

Just for reference, 2816 FMA FPUs at 1500 MHz is 8.5 TFLOPS.

2560 FPUs (GP104) at 1800 MHz is 9.2 TFLOPS.

If we assume it clocks that high, and give it another 10% in architectural improvements, it'll deliver an effective 10 TFLOPS, or rather 10/8.5 of the performance of a 980 Ti at 1500.

Just to be clear, if the theoretical maximum is 9.2 TFLOPS it can never actually achieve 10; that figure is only relevant when you compare it to GM200. It's essentially a 10% boost in utilization vs Maxwell.

I think that's as accurate a guess as you can get at this stage.
That's lackluster... any idea on pricing?
 
That's lackluster... any idea on pricing?
It isn't lackluster! People have insane expectations; the cost of manufacturing has gone up! An 8bn-transistor chip costs more at 16nm FF than at 28nm.

This is a very worthy upgrade except at the top tier, in which case it's in most cases best to wait for Titan and Vega.
 
It isn't lackluster! People have insane expectations; the cost of manufacturing has gone up! An 8bn-transistor chip costs more at 16nm FF than at 28nm.

This is a very worthy upgrade except at the top tier, in which case it's in most cases best to wait for Titan and Vega.
Usually die shrinks provide more of a boost. If it's a mere 10% boost over the 980 Ti, that's lackluster even if it's not the Ti version, considering it's a die shrink and what they have been claiming.
 
If a 230mm² Polaris can match the 980 Ti, then a 300+ mm² 1080 is going to smoke it. 40%+, calling it now.
 
If a 230mm² Polaris can match the 980 Ti, then a 300+ mm² 1080 is going to smoke it. 40%+, calling it now.
Why is the 1080 225W when the 980 Ti is 250W? TDP was super tight on the 980 Ti, as in it was easy to hit the TDP threshold. Is Pascal's TDP generous versus Maxwell's strict TDP?
 
The 1080 is replacing the GTX 980, which is only around 150-170 watts. Node for node, they have the same advantages as what AMD gets with the 14nm node, so it's either a 40% increase in performance or a 60% drop in power usage. Let's say they didn't do anything with the architecture: with the new node that would put a GTX 980 around the 980 Ti at 150-170 watts. If they did do things to the Maxwell architecture like what was suggested in the P100 presentations and documentation, it's going to have more performance than that at the same 150-170 watts.

Now, if they upped the clocks so it hits the power envelope of the 980 Ti, it's going to have a lot more performance, as long as available bandwidth doesn't get in the way.
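A minimal sketch of how those node claims translate, taking the post's 40%/60% figures and a roughly 165W GTX 980 at face value; these are claims from this thread, not measurements.

```python
# Poster's node figures: ~40% more performance at the same power,
# or ~60% less power at the same performance.
GTX_980_POWER_W = 165      # approximate reference 980 board power
NODE_PERF_GAIN = 1.40      # +40% performance at iso-power
NODE_POWER_SCALE = 0.40    # 40% of the old power at iso-performance

# Same architecture, same power budget: roughly 1.4x a GTX 980,
# which is about where the 980 Ti sits in the argument above.
iso_power_perf = 1.0 * NODE_PERF_GAIN

# Same architecture, same performance: the power drops instead.
iso_perf_power_w = GTX_980_POWER_W * NODE_POWER_SCALE  # ~66 W

print(iso_power_perf, iso_perf_power_w)
```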
 
Pretty much what I was saying

http://seekingalpha.com/article/3967625-nvidia-releases-pascal-weapon

Another important aspect for the audience (but that is actually marginal from the performance point of view) is the fact that Nvidia has highly tweaked and updated the scheduler, meaning that Asynchronous Compute has been probably corrected and tweaked too.

The upcoming GTX cards are likely to use more CUDA cores, while the frequency is likely to be well beyond 1.5 GHz. At the same time, power consumption will set at very competitive levels, thanks to the tweaks described in the first part of the article.


There is a mistake in the article which I think you guys can pick up.
 
The 1080 is replacing the GTX 980, which is only around 150-170 watts. Node for node, they have the same advantages as what AMD gets with the 14nm node, so it's either a 40% increase in performance or a 60% drop in power usage. Let's say they didn't do anything with the architecture: with the new node that would put a GTX 980 around the 980 Ti at 150-170 watts. If they did do things to the Maxwell architecture like what was suggested in the P100 presentations and documentation, it's going to have more performance than that at the same 150-170 watts.

Now, if they upped the clocks so it hits the power envelope of the 980 Ti, it's going to have a lot more performance, as long as available bandwidth doesn't get in the way.
It's been stated the 1080 is 225W TDP; that's why I was asking whether the TDP is over-generous again or accurate to actual usage. Because if the 1080 is 225W TDP and "2x" performance per watt, it'll be a solid beast to upgrade to, but if it's a 170W TDP, meh.


Okay, I did some more googling... somewhere I read the TDP was 225W, but I can't find that anywhere now, so I must be mistaken. That's why I was wondering why it was so small a boost if the TDP was really 225W, given everything they have said.
 
Either way it's going to be above the 980 Ti; think of the 1070 as right around the 980 Ti, and the 1080 above that. How much more is up in the air :)
 
It's been stated the 1080 is 225W TDP; that's why I was asking whether the TDP is over-generous again or accurate to actual usage. Because if the 1080 is 225W TDP and "2x" performance per watt, it'll be a solid beast to upgrade to, but if it's a 170W TDP, meh.


Okay, I did some more googling... somewhere I read the TDP was 225W, but I can't find that anywhere now, so I must be mistaken. That's why I was wondering why it was so small a boost if the TDP was really 225W, given everything they have said.

Yeah, it's pretty rare for cards to run at the full power the slot and connectors are capable of. This gives you some overclocking headroom even on the stock cards:

GTX 980 = 165W TDP, with two 6-pin connectors in the reference design, giving 60W of overclocking headroom.

GTX 960 = 120W TDP, with a single 6-pin in the reference design, giving 30W of overclocking headroom.

GTX 680 = 170W boost TDP, with two 6-pin connectors in the reference design, giving 55W of overclocking headroom.

I'd be surprised if the 1080 uses over 175W, or else they would have added a second PCIe connector.
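For anyone who wants to redo that connector math for other cards, here's a small Python sketch using the standard budget figures (75W from the slot, 75W per 6-pin, 150W per 8-pin) and the TDPs quoted above; the 980 Ti line matches the 300W carrying-capacity figure that comes up later in the thread.

```python
# Board power budget = PCIe slot (75 W) + sum of connectors.
# Headroom = budget - rated TDP (TDPs as quoted in the post above).
SLOT_W = 75
CONNECTOR_W = {"6pin": 75, "8pin": 150}

def power_headroom(tdp_w: int, connectors: list[str]) -> int:
    """Watts of slack between the rated TDP and what the board can deliver."""
    budget = SLOT_W + sum(CONNECTOR_W[c] for c in connectors)
    return budget - tdp_w

print(power_headroom(165, ["6pin", "6pin"]))  # GTX 980:    225 - 165 = 60 W
print(power_headroom(120, ["6pin"]))          # GTX 960:    150 - 120 = 30 W
print(power_headroom(170, ["6pin", "6pin"]))  # GTX 680:    225 - 170 = 55 W
print(power_headroom(250, ["6pin", "8pin"]))  # GTX 980 Ti: 300 - 250 = 50 W
```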
 
Yeah, it's pretty rare for cards to run at the full power the slot and connectors are capable of. This gives you some overclocking headroom even on the stock cards:

GTX 980 = 165W TDP, with two 6-pin connectors in the reference design, giving 60W of overclocking headroom.

GTX 960 = 120W TDP, with a single 6-pin in the reference design, giving 30W of overclocking headroom.

GTX 680 = 170W boost TDP, with two 6-pin connectors in the reference design, giving 55W of overclocking headroom.

I'd be surprised if the 1080 uses over 175W, or else they would have added a second PCIe connector.
The 980 Ti was a weird card. Its TDP was spot on and did not give any headroom versus previous GPUs; that's why I was asking.

Your point on power connectors makes sense. I don't know how much a power connector can do, but I get your point, and it makes sense that the numbers I recall, for whatever reason, are wrong.

Thanks
 
A 6-pin power connector gives ya 75 watts; an 8-pin gives ya 150 watts.
 
You can draw more than what the connectors are rated for; usually it doesn't matter too much. As long as the wires don't get too hot they should be fine.
 
You can draw more than what the connectors are rated for; usually it doesn't matter too much. As long as the wires don't get too hot they should be fine.

This is where cheap-o 2 MW power supplies don't hold up to their much lower-specced cousins :p
 
The 980 Ti was a weird card. Its TDP was spot on and did not give any headroom versus previous GPUs; that's why I was asking.

Why do you keep insisting on this bullshit? Just because you can't do math doesn't mean the rest of us are so helpless :D


GTX 980 Ti TDP = 250W. It tends to stay below a 240W peak even when stressed.

Power connectors on the reference card = 1 x 6-pin + 1 x 8-pin.



NVIDIA GeForce GTX 980 Ti 6 GB Review

75W from the slot + 150W from the 8-pin + 75W from the 6-pin = 300W CARRYING CAPACITY. That's 50W of overclocking breathing room.

I DARE YOU TO SHOW ME A SINGLE EXAMPLE OF A CARD FROM NVIDIA THAT HAS LESS THAN 20% power leeway. I guarantee it doesn't exist.

So you can be confident that the 225W carrying capacity of the GTX 1080 means the reference card WILL NOT EXCEED 175W. Stop being a worrywart :D
 
I must ask, have you ever been wrong at any time about anything?


Of course; being wrong is human nature, but I try not to be wrong as much as possible :) and I will learn as much as I can about something before I make a post if I don't already know it, so I don't make a mistake.
 
Someguy133, nV has a very large advantage with clock speeds and power usage, and this is all due to architectural design. If we look at GM204, it has similar overclocking capability to GM200, GM200 being by far the larger chip; I think this is the first time we've seen something like this, especially without using any exotic cooling. Will this translate over to the GP series of cards? Yeah, most likely it will, because it's the architecture of the chip that gave this advantage, not the node. The reason it's not the node is that we can see with Kepler and Fiji, which are both on the same node, that neither of those chips can overclock like Maxwell 2, nor does their power usage for the die size show the same capabilities.
 
Why do you keep insisting on this bullshit? Just because you can't do math doesn't mean the rest of us are so helpless :D


GTX 980 Ti TDP = 250W. It tends to stay below a 240W peak even when stressed.

Power connectors on the reference card = 1 x 6-pin + 1 x 8-pin.



NVIDIA GeForce GTX 980 Ti 6 GB Review

75W from the slot + 150W from the 8-pin + 75W from the 6-pin = 300W CARRYING CAPACITY. That's 50W of overclocking breathing room.

I DARE YOU TO SHOW ME A SINGLE EXAMPLE OF A CARD FROM NVIDIA THAT HAS LESS THAN 20% power leeway. I guarantee it doesn't exist.

So you can be confident that the 225W carrying capacity of the GTX 1080 means the reference card WILL NOT EXCEED 175W. Stop being a worrywart :D
No other card besides the 980 Ti hit the TDP wall so fast. Every review mentioned it. Normally the rated TDP, as you said, has a bit of extra room before you need to override it. That was not the case with the 980 Ti this time... I don't know how you don't recall this. (Unless I am mistaken and rated TDP and BIOS TDP are two different things... I don't know why they would be two different figures.)
 
No other card besides the 980 Ti hit the TDP wall so fast. Every review mentioned it. Normally the rated TDP, as you said, has a bit of extra room before you need to override it. That was not the case with the 980 Ti this time... I don't know how you don't recall this. (Unless I am mistaken and rated TDP and BIOS TDP are two different things... I don't know why they would be two different figures.)

I thought you were bitching about the 1080 cards having a max TDP of 225W here. Are you really just trolling about low BIOS power limits and shitty stock coolers on GTX 980 Ti reference boards? That only happened with the high-powered reference 980 Ti card - the 3rd-party coolers all added more TDP and massively higher overclocking temperature headroom.

Because the GTX 1080 card will use well south of 200W, the cooler and BIOS TDP should be vastly more forgiving, since they won't be pushing that single-fan cooler to the limit.

And if that's not enough, you can just wait a month for higher-TDP hardcore overclocker cards to be released.
 
I thought you were bitching about the 1080 cards having a max TDP of 225W here. Are you really just trolling about low BIOS power limits and shitty stock coolers on GTX 980 Ti reference boards? That only happened with the high-powered reference 980 Ti card - the 3rd-party coolers all added more TDP and massively higher overclocking temperature headroom.

Because the GTX 1080 card will use well south of 200W, the cooler and BIOS TDP should be vastly more forgiving, since they won't be pushing that single-fan cooler to the limit.

And if that's not enough, you can just wait a month for higher-TDP hardcore overclocker cards to be released.
Wow... you went full derp. You are not even reading what I said. Whatever. I found out what I needed, so I am good.

You need to learn how to read, get the chip off your shoulder, and get off your high horse, because you did not comprehend anything I have been posting. But whatever; I got the info I needed, so I couldn't care less about your reading comprehension and your ability to think without your head up your ass.
 
The 980 Ti was a weird card. Its TDP was spot on and did not give any headroom versus previous GPUs; that's why I was asking.

Your point on power connectors makes sense. I don't know how much a power connector can do, but I get your point, and it makes sense that the numbers I recall, for whatever reason, are wrong.

Thanks
I am not sure I follow, because the AIB products OC like mad, unless you are specifically thinking of the NVIDIA reference model, which would throttle due to the cooler design (I'm also not sure how easy it is to configure the power limit between 110% and 125% on that compared to AIB cards).
Two different sites showing overclocking for the 980 Ti:
Overclocking - ASUS GeForce GTX 980 Ti MATRIX Platinum Review
MSI GeForce GTX 980 Ti Lightning Review

Cheers
 
Someguy133, nV has a very large advantage with clock speeds and power usage, and this is all due to architectural design. If we look at GM204, it has similar overclocking capability to GM200, GM200 being by far the larger chip; I think this is the first time we've seen something like this, especially without using any exotic cooling. Will this translate over to the GP series of cards? Yeah, most likely it will, because it's the architecture of the chip that gave this advantage, not the node. The reason it's not the node is that we can see with Kepler and Fiji, which are both on the same node, that neither of those chips can overclock like Maxwell 2, nor does their power usage for the die size show the same capabilities.

GP100 at 1580 MHz is nuts. GP104 at 1800 is quite seriously possible.
I am not sure I follow, because the AIB products OC like mad, unless you are specifically thinking of the NVIDIA reference model, which would throttle due to the cooler design (I'm also not sure how easy it is to configure the power limit between 110% and 125% on that compared to AIB cards).
Two different sites showing overclocking for the 980 Ti:
Overclocking - ASUS GeForce GTX 980 Ti MATRIX Platinum Review
MSI GeForce GTX 980 Ti Lightning Review

Cheers

My 980 Ti has a stock TDP of around 300W; with the power limit maxed it goes to 390W, and with my custom BIOS I'm at 420W.

Toasty. :p

VRM cooling is the most important thing imo; these cards run really hot, and keeping the VRMs cool will both reduce heat on the GPU die and result in better power delivery.

I really hope NV is less conservative with clocks this time around.
 
Well, 1800 maybe; don't know though, we don't know what the voltage and frequency tolerances are...
 
Well, 1800 maybe; don't know though, we don't know what the voltage and frequency tolerances are...

True, true, but at the end of the day we know GP100 boosts to 1600, and 200 MHz is only 12.5% of that. Doesn't seem so far-fetched; of course, it remains to be seen how close 1600 MHz is to that inflection point.
 
Wow... you went full derp. You are not even reading what I said. Whatever. I found out what I needed, so I am good.

You need to learn how to read, get the chip off your shoulder, and get off your high horse, because you did not comprehend anything I have been posting. But whatever; I got the info I needed, so I couldn't care less about your reading comprehension and your ability to think without your head up your ass.


It would help if you could EXPLAIN YOURSELF, which you obviously cannot if all you can do is call me names. Tell us in exact words what was so broken with the GTX 980 Ti, or expect us to ignore your ass.

And don't just bitch about the GTX 980 Ti - you have to justify why this problem on the 980 Ti has any potential to appear on the GTX 1080.
 