Rumored RTX 3xxx specs

The die shrink alone should give about a 25% performance increase. So if the Ampere architecture gives an additional 20-25%, the rumored 50% performance increase doesn't seem far off.
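
A quick back-of-the-envelope check (using the rumored figures above, nothing official) shows how the two gains compound rather than add:

```python
# Back-of-the-envelope only: process and architecture gains multiply, they don't add.
# Both percentages are the rumored/assumed values from above, not confirmed specs.
process_gain = 1.25   # ~25% assumed from the die shrink alone
arch_gain = 1.20      # ~20% assumed from the Ampere architecture
combined = process_gain * arch_gain
print(f"combined uplift: {(combined - 1) * 100:.0f}%")   # -> 50%
```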

RTX needs to be orders of magnitude faster (and it already is compared to previous generations) in order to reach the magical full RT@4K/60 that everyone is hoping for.

That's not going to happen in the immediate future, so we'll most likely see some sort of DLSS 2.0 and some other tricks to get playable performance.

Die shrinks don't automagically give you 20-25% performance bumps. It may appear that way... but remember that every die shrink we have ever seen also comes with a 3-4 year newer architecture. It's rare in any silicon market (GPU, CPU, APU, mobile) that a die shrink involves nothing more than taking the exact same chip and just making it smaller, and when it does happen, the performance gain is extremely low. If you want proof of that, just look at the recent AMD Ryzen 1600 re-release on a shrunk node. It doesn't perform any better than the old ones... it might be a bit cooler and have a bit more frequency headroom, but it's not going to get a magic 25% bump just because the same chip is now a bit smaller. You can also look at console chips over the last few years... when the slim version with a die shrink releases 2-3 years after launch, it will run cooler and they can put it in a slim case, but it's not 25% faster. If that were the case, Sony and MS would have just kept shrinking their consoles every fab generation and called it a day. ;)

When it comes to shrink redesigns, it also really matters what the designers want to do. The advantage of a shrink is that you have a choice as a designer. You can increase the transistor count and add more cores, more compute units, more cache... and upgrade your cooling to deal with the extra heat. Or you can reduce the die size, keep the same number of cores and other bits, and reduce your cost to make the same chips (again, look at examples of same-chip shrinks like the ones in the consoles). For the record, most designers in most markets try to find a middle ground: more transistors, but still not a direct 1-to-1 increase... which should improve both performance and yield. I'm sure there is a sweet spot for designers... unless they are asked to swing for the fences (or go conservative to improve the bottom line).

So Nvidia has a choice to make with Ampere. Do they add even more CUs... more cache, more tensor cores, more stuff, meaning they will still need a fairly large die? Or do they keep close to the same number of units they have now, perhaps adding a few more or some arch improvements, and produce a smaller, easier (cheaper) to fab chip that ups their profits and provides good supply on all versions of the chip, including fully functioning top-tier parts?

So the bottom line is... they don't get a free 25% just because it's a smaller fab node. They may well pull 50% out... but if they do, it will have to come from adding more CUs, or coming up with some really smart new arch improvements that make it happen without increasing the transistor count considerably. When we see early specs we'll have a darn good idea whether 20% or 50% is more likely. If reliable specs for a 3080 Ti leak and the stream unit count is anywhere under 5000... it's going to be 20-30% better. If those leaks say it's closer to the 2080 Ti's 4352, then I hate to break it to people, but Nvidia is going for cost savings rather than uber performance, and you should prepare yourself for another 10% bump.
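
For what it's worth, here's the kind of napkin math behind that guess. It assumes performance scales linearly with shader count times clock, which it never quite does; every number except the 2080 Ti's 4352 shaders and its ~1545 MHz reference boost is a made-up leak scenario:

```python
# Naive scaling: relative performance ~ (shaders x clock) ratio.
# Ignores IPC, memory bandwidth, and everything else that actually matters.
BASE_SHADERS = 4352       # 2080 Ti
BASE_CLOCK_GHZ = 1.545    # 2080 Ti reference boost

def naive_uplift(shaders, clock_ghz):
    return (shaders * clock_ghz) / (BASE_SHADERS * BASE_CLOCK_GHZ) - 1

# Hypothetical leak scenarios, not real specs:
for shaders, clock in [(4608, 1.8), (5376, 1.8)]:
    print(f"{shaders} shaders @ {clock} GHz -> ~{naive_uplift(shaders, clock) * 100:.0f}% over 2080 Ti")
```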
 
Die shrinks don't automagically give you 20-25% performance bumps. It may appear that way... but remember that every die shrink we have ever seen also comes with a 3-4 year newer architecture. It's rare in any silicon market (GPU, CPU, APU, mobile) that a die shrink involves nothing more than taking the exact same chip and just making it smaller, and when it does happen, the performance gain is extremely low. If you want proof of that, just look at the recent AMD Ryzen 1600 re-release on a shrunk node. It doesn't perform any better than the old ones...

While I agree that die shrinks aren't a big automatic gain, the 12nm Ryzen 1600 (and heck, even the Ryzen 2600) are poor examples. These chips are essentially identical to the original Ryzen 1600. It didn't shrink; the dimensions are identical. In the old days, before fabs started exaggerating a lot, these would have been called new steppings, where you get slightly cleaner signals, so you can lower the voltage a bit and run a little more clock or a little less power. That's it.

We can't predict what NVidia will get out of the 7nm process, but it won't be as bad as the Ryzen 14nm -> "12nm" update. Maybe look at Vega 64 to Radeon VII as the benefit of 7nm with minimal architecture improvement.
 
While I agree that die shrinks aren't a big automatic gain, the 12nm Ryzen 1600 (and heck, even the Ryzen 2600) are poor examples. These chips are essentially identical to the original Ryzen 1600. It didn't shrink; the dimensions are identical. In the old days, before fabs started exaggerating a lot, these would have been called new steppings, where you get slightly cleaner signals, so you can lower the voltage a bit and run a little more clock or a little less power. That's it.

We can't predict what NVidia will get out of the 7nm process, but it won't be as bad as the Ryzen 14nm -> "12nm" update. Maybe look at Vega 64 to Radeon VII as the benefit of 7nm with minimal architecture improvement.
More relevant example:

8800 GTX (90nm) -> 9800 GTX (65nm) -> 9800 GTX+ (55nm), all with similar performance to each other.

To be fair, all of those steps were using the same architecture. By everything we've been fed so far, Ampere is supposed to be a new architecture that's been in development since at least 2016.
 
While I agree that die shrinks aren't a big automatic gain, the 12nm Ryzen 1600 (and heck, even the Ryzen 2600) are poor examples. These chips are essentially identical to the original Ryzen 1600. It didn't shrink; the dimensions are identical. In the old days, before fabs started exaggerating a lot, these would have been called new steppings, where you get slightly cleaner signals, so you can lower the voltage a bit and run a little more clock or a little less power. That's it.

We can't predict what NVidia will get out of the 7nm process, but it won't be as bad as the Ryzen 14nm -> "12nm" update. Maybe look at Vega 64 to Radeon VII as the benefit of 7nm with minimal architecture improvement.

Agreed, it's a larger jump from where NV has been to 7nm. Still, it's not going to be an automatic 20%. If they just die-shrunk Turing, it would probably net them mid to high single digits at best. Your point on the Radeon VII is valid... and I believe it proves both ends a bit. The shrink did allow AMD to control thermals a bit better and push the clocks, so they did indeed gain 12-15% out of what was basically just a shrink (the extra memory doesn't really matter). However, reviewers who have benched the V64 and the VII at the same clocks have found they perform within the margin of error (the VII wins by like 1-3% on most tests).

It'll be interesting to see what NV does at 7nm, no doubt. I do believe they are going to try to fix some of their yield issues... I really don't believe they feel they have to swing for the fences and deliver a 50% bump to retain their top spot in the market. And for the record, I would be glad to be wrong. lol
 
New architecture, smaller process...

This is typically where Nvidia delivers larger bumps in performance, and they have a better record than AMD and Intel at doing just that.

The only real wrinkle in a history-based prediction is RT: RT needs more than a generational performance bump, and if Nvidia chooses to make that happen, the question becomes whether, and by how much, potential raster performance was traded away.


Or not. It's not like they don't know where their customers expect them to be!
 
More relevant example:

8800 GTX (90nm) -> 9800 GTX (65nm) -> 9800 GTX+ (55nm), all with similar performance to each other.

It also depends on how good the new process is vs. the old. 65nm -> 55nm might have been a very mediocre bump, whereas Maxwell to Pascal kept essentially the same architecture but got a huge process bump, and Pascal was one of NVidia's best card lines in their history.

IMO this time, process-wise, we are somewhere in between: a modest bump from the process. Who knows on the architecture side.

But in the end both process and architecture advancement are subordinate to the design targets.

IMO NVidia will target about 30% generational improvement (similar to the last decade average), regardless of what the process and architecture deliver.

If they have wild gains from both process and architecture, they can deliver smaller dies, increase profit margin, and leave room for more gains on the same process if they so choose. If the gains are less impressive, they can squeeze in larger dies, but they are going to want ~30% to make the generational improvement look compelling.

I am setting my expectations around 30% gains overall, and will believe different when there is something credible to suggest otherwise.
 
But in the end both process and architecture advancement are subordinate to the design targets.

IMO NVidia will target about 30% generational improvement (similar to the last decade average), regardless of what the process and architecture deliver.

The only operational reason for the target to be higher than 30% would be the smaller bump with Turing along with the increase in price per SKU (perceived, if arguing semantics). Essentially, while Nvidia targets, to a degree, enthusiasts who upgrade every generation as well as new users, their biggest target is probably the regular upgraders who hit every second or third generation.

Few were likely interested in trading a top-end 1000-series GPU, generally under US$800, for a US$1000+ top-end 2000-series GPU, given the gains in rasterization.

Whatever is actually possible given available technology -- and remember that Nvidia is a company that defines available technology in the GPU industry -- they do need to set a target that will actually interest buyers and motivate sales. So I agree, a minimum of ~30% sounds good, given the combination of a die shrink, a new architecture, and the current architecture being less well received. If they can do better, they'll have trouble keeping stock.
 
VRAM usage shot up dramatically quite soon after this gen's console release, at least for texture settings, and next-gen consoles with more memory arrive at the end of this year. Whether having to run texture settings at max-1 (or lower) due to VRAM limitations is an issue will depend on the person.

For future-proofing at the high end I'd really like at least 12GB, but that means either a 192-bit bus - which isn't likely at the x80 level because of the bandwidth reduction vs. the 20xx series cards - or 384-bit. The latter is something we've rarely seen in the past because it drives up the cost of the PCB itself a lot.
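
The reason 12GB forces one of those bus widths is just GDDR6 channel math. A quick sketch, using generic chip densities rather than any leaked Ampere board spec:

```python
# GDDR6 rule of thumb: one memory chip per 32-bit channel, and today's common
# chip densities are 1 GB or 2 GB. Generic math, not leaked Ampere specs.
for bus_width in (192, 256, 384):
    channels = bus_width // 32
    capacities = [channels * density for density in (1, 2)]   # in GB
    print(f"{bus_width}-bit bus: {capacities[0]} GB or {capacities[1]} GB")
```

A 256-bit bus lands on 8GB or 16GB, which is why 12GB specifically points at 192-bit or 384-bit (ignoring clamshell configurations, which double capacity at extra cost).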
 
If NVIDIA goes insane with their pricing again, I’ll probably throw AMD a bone and get Big Navi.
 
Rumored specs are so high that just your mere presence in an online game guarantees you victory.

Edit: Actually this is only true for the Glorious Founder Edition.
 
Rumored specs are so high that just your mere presence in an online game guarantees you victory.

Edit: Actually this is only true for the Glorious Founder Edition.

I believe you are thinking of the Asynchronous Neural Attachment Linkage model; it requires two 8-pin power connectors as well as a dongle that you insert into your ass to provide real-time analytics and feedback.
 
The only operational reason for the target to be higher than 30% would be the smaller bump with Turing along with the increase in price per SKU (perceived, if arguing semantics). Essentially, while Nvidia targets, to a degree, enthusiasts who upgrade every generation as well as new users, their biggest target is probably the regular upgraders who hit every second or third generation.

Few were likely interested in trading a top-end 1000-series GPU, generally under US$800, for a US$1000+ top-end 2000-series GPU, given the gains in rasterization.

Whatever is actually possible given available technology -- and remember that Nvidia is a company that defines available technology in the GPU industry -- they do need to set a target that will actually interest buyers and motivate sales. So I agree, a minimum of ~30% sounds good, given the combination of a die shrink, a new architecture, and the current architecture being less well received. If they can do better, they'll have trouble keeping stock.

They have trouble keeping stock as it is. As much as we can hate on the 2000 series for not being a generational leap in pure performance and for the price increase, the stupid things sold as fast as they could make them. It seems gamers have proven they are OK with 20-30% per gen.

We are all going to have to hope that Navi 2 was worth waiting for and that NV's spies have been telling them that. Otherwise, yeah, they have no reason to target more than 30%. So far 30% has proven to be all they need to sell dies as fast as they can fab them. It's not like they are going to be stuck with a ton of 2000-series stock unless they do something dumb like order a ton a week before they launch Ampere. :)
 
They have trouble keeping stock as it is. As much as we can hate on the 2000 series for not being a generational leap in pure performance and for the price increase, the stupid things sold as fast as they could make them. It seems gamers have proven they are OK with 20-30% per gen.

We are all going to have to hope that Navi 2 was worth waiting for and that NV's spies have been telling them that. Otherwise, yeah, they have no reason to target more than 30%. So far 30% has proven to be all they need to sell dies as fast as they can fab them. It's not like they are going to be stuck with a ton of 2000-series stock unless they do something dumb like order a ton a week before they launch Ampere. :)

My local stores would disagree: tons of 2000 series in stock. Heck, even Best Buy has them, and they almost never have stock of anything good. I'd be surprised if Nvidia can squeeze much more than 20% out of this coming card over the 2000 series in raster performance.
 
It's funny, our Microcenter has a case full of video cards (green and team red) a mile long, and you can see the dust around the boxes that are not selling at these inflated, non-depreciated prices.
 
It's funny, our Microcenter has a case full of video cards (green and team red) a mile long, and you can see the dust around the boxes that are not selling at these inflated, non-depreciated prices.

It's silly to pay MSRP this late into the release cycle. If you were going to drop $1300 on a GPU, 15 months ago was the time to do it. Now $800 is the most I'd be willing to pay for a (used) 2080 Ti.
 
No citation needed; everyone knows that the bigger the chip, the higher the failure rate becomes. If demand were as high for these chips as it was for, say, Pascal, then yeah, I expect it would be a huge issue for Nvidia.

While that sounds very logical, it's more likely that as transistor count increases, the chances for flaws increase.

You don't need citations to point out the obvious. ... Still, there's no doubt a 754mm² die simply can't have a great yield. The number of fully functional chips coming off a wafer at that size can't be great... it's just physics....

Sounds logical, but without real numbers from the fabs you are just guessing with regard to the yield rate of the 754mm² chip in question as compared to any other chip. Size is a relatively small factor in yields, as some designs/processes have great yields and others have shit yields while being the same size. Your logic is true assuming all else is equal between production of different semiconductors; however, systemic production issues can vary greatly between runs.
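
Agreed that we're all guessing, but for anyone curious why die size gets blamed so often, the usual back-of-the-envelope is a Poisson-style defect model. The defect density below is purely illustrative; the real figure is exactly what the fabs never publish:

```python
import math

# Simple Poisson yield model: fraction of defect-free dies = exp(-D0 * A),
# where D0 is defect density and A is die area. The D0 here is a made-up
# illustrative value, not a published TSMC/Nvidia figure.
def poisson_yield(die_area_mm2, defects_per_cm2=0.1):
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

for area_mm2 in (445, 754):   # roughly TU106 (2070) vs TU102 (2080 Ti) die sizes
    print(f"{area_mm2} mm^2 -> ~{poisson_yield(area_mm2) * 100:.0f}% defect-free dies")
```

At the same defect density the 754mm² die fares far worse, but the whole curve shifts with that density, which is the process-quality part nobody outside the fab can see.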
 
It's silly to pay MSRP this late into the release cycle. If you were going to drop $1300 on a GPU, 15 months ago was the time to do it. Now $800 is the most I'd be willing to pay for a (used) 2080 Ti.

If you need it, buy it; at best, perhaps halfway through the cycle or right at the beginning of the cycle are the 'good times'.
 
Sounds logical, but without real numbers from the fabs you are just guessing with regard to the yield rate of the 754mm² chip in question as compared to any other chip. Size is a relatively small factor in yields, as some designs/processes have great yields and others have shit yields while being the same size. Your logic is true assuming all else is equal between production of different semiconductors; however, systemic production issues can vary greatly between runs.

Well, of course no manufacturer stands up and says they are having production issues if they can help it.

Nvidia has all but admitted yield issues on their top-end, fully functional, perfect dies... as until recently they were producing "A" and less-than-A parts, and only the A parts were sold to OEMs for use in factory-overclocked parts. No doubt yields on basically everyone's stuff improve with time, and I'm sure Nvidia's have as well. No doubt, though, that yields cost them more than they would have liked out of the gate.

A lot of people accuse Nvidia of being greedy for jumping the price on this generation... I give them the benefit of the doubt. I believe they simply kept their margins at the same level they were at before; it just means they had to up the end-user cost a bit. :) Really, I hope Nvidia has decided to go for easier-to-fab over swing-for-the-fences performance with Ampere. That is, as long as they are willing to pass that on to consumers. If prices were to drift back down a bit, I think most of us would be fine with a 10-20% bump. lol
 
I'm ready for the 3080 or, at the very LEAST, a 3070. I have a 144Hz G-Sync monitor and I'm coming from a 1080 FE. Gotta feed this 4.35GHz 3800X of a beast CPU.
 
I wouldn't count on AMD

Oh, I'm fully expecting them to fail. But there's always the off chance that maybe they can pull off a miracle. ;) Perhaps Raja was a major contributing factor in the poor choices of where to place their engineering efforts (from a gaming perspective). We'll find out later in the year, no doubt, and either I'll scurry back into my corner and buy Nvidia again (I'll need a new card for my planned 48" OLED) or AMD will have pulled off a two-hour roll at the craps table and actually released something competitive at the top end.
 
The only operational reason for the target to be higher than 30% would be the smaller bump with Turing along with the increase in price per SKU (perceived, if arguing semantics). Essentially, while Nvidia targets, to a degree, enthusiasts who upgrade every generation as well as new users, their biggest target is probably the regular upgraders who hit every second or third generation.

Few were likely interested in trading a top-end 1000-series GPU, generally under US$800, for a US$1000+ top-end 2000-series GPU, given the gains in rasterization.

Whatever is actually possible given available technology -- and remember that Nvidia is a company that defines available technology in the GPU industry -- they do need to set a target that will actually interest buyers and motivate sales. So I agree, a minimum of ~30% sounds good, given the combination of a die shrink, a new architecture, and the current architecture being less well received. If they can do better, they'll have trouble keeping stock.


I fully expect at least a 30% increase in performance from a new arch and a smaller node.
I also fully expect a 30% increase in price, since they discovered some people are OK with that.
 
I fully expect at least a 30% increase in performance from a new arch and a smaller node.
I also fully expect a 30% increase in price, since they discovered some people are OK with that.
I don't think there are many people who are OK with a price increase. Those that are more than likely have money growing out their ass and are willing to spend whatever it takes for the best.
 
I fully expect at least a 30% increase in performance from a new arch and a smaller node.
I also fully expect a 30% increase in price, since they discovered some people are OK with that.

Unfortunately, I think you're right, and that means I'm going to stick with my 1080 Ti a little longer unless an $800 part comes along that is 30%+ faster. I can afford the Ti-tier cards, but I can't justify them - the value isn't there, especially given my gaming patterns lately.
 
Unfortunately, I think you're right, and that means I'm going to stick with my 1080 Ti a little longer unless an $800 part comes along that is 30%+ faster. I can afford the Ti-tier cards, but I can't justify them - the value isn't there, especially given my gaming patterns lately.

If the $800 Ampere card isn’t at least 30% faster than your 1080 Ti I would call that a huge failure.
 
What are some high VRAM games I can try? I figured BF V would be one of the biggest users, but I haven't seen that go above ~6.5GB.

Resident Evil 2 was easily using at least 7-7.5 GB of VRAM at 1080p for me for some reason. I haven't even tried it in 1440p.
 
Resident Evil 2 was easily using at least 7-7.5 GB of VRAM at 1080p for me for some reason. I haven't even tried it in 1440p.


If your video card has 8GB of VRAM, often games are just going to aggressively cache more than they need. But it's still quite playable with half that much.

See here, where as long as you have Maxwell/Polaris with 4GB of VRAM (this does not include the GTX 970, with its 3.5GB), the mins/maxes are exactly where you expect them to be:

[1080p benchmark chart]


6GB of VRAM is the max new games use, even at 4K. That gives breathing room to 8GB Pascal/Navi cards (along with the expected bump in compression we will see with Ampere).

The problem with "drawing a line in the VRAM" is "how good is the compression of your architecture?" But it's pretty clear that 3GB of VRAM for the 1060 was not a good idea!
 
If Ampere is 30% faster than the 2080 Ti, and the 2080 Ti was 30% faster than the 1080 Ti, then the Ampere card will be about 70% faster than the 1080 Ti (1.3 × 1.3 ≈ 1.7).

I would imagine the $800 Ampere will be about the same as the 2080 Ti in performance, whereas the Ampere Ti will be about 30% faster than the 2080 Ti, at 2080 Ti prices or above. So the $800 card would buy me a 30% boost if I'm right.

I can't justify $1200+ for a graphics card.
 
If your video card has 8GB of VRAM, often games are just going to aggressively cache more than they need. But it's still quite playable with half that much.

^ Ninja'd! I was typing that. It's important to really bench specific titles to determine how much the VRAM really matters. Pre-caching is a thing! If you have a ton of memory, heck, fill it up ahead of time even if the assets are not super likely to be used.

It's all about the working set, and how well the engine can speculatively determine needs ahead of time and preload them in the background. Yes, VRAM matters, but peak values are not always a great indicator of importance. Good engines and developers are really good at adjusting the working set to avoid a miss during a frame render, as long as the per-frame working set is < VRAM.
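
A toy sketch of that idea, purely illustrative and not based on any real engine's code: preload opportunistically while VRAM is free, evict least-recently-used assets when the per-frame working set needs room, and the only thing that actually hurts is a miss during a frame.

```python
from collections import OrderedDict

class VramCache:
    """Toy model of engine asset streaming, not any real engine's code."""

    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.used_mb = 0
        self.resident = OrderedDict()   # asset name -> size in MB, LRU order

    def request(self, name, size_mb):
        """A frame actually needs this asset; a miss here is what costs performance."""
        if name in self.resident:
            self.resident.move_to_end(name)                 # refresh LRU position
            return "hit"
        while self.used_mb + size_mb > self.capacity_mb:
            _, freed = self.resident.popitem(last=False)    # evict least recently used
            self.used_mb -= freed
        self.resident[name] = size_mb
        self.used_mb += size_mb
        return "miss (had to stream in mid-frame)"

    def prefetch(self, name, size_mb):
        """Speculative preload: fill spare VRAM, but never evict for a guess."""
        if name not in self.resident and self.used_mb + size_mb <= self.capacity_mb:
            self.resident[name] = size_mb
            self.used_mb += size_mb

# With 8 GB the cache happily fills up ("high VRAM usage"), but a 4 GB card only
# suffers if the assets needed within a single frame exceed 4 GB.
```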
 
I would imagine the $800 Ampere will be about the same as the 2080 Ti in performance, whereas the Ampere Ti will be about 30% faster than the 2080 Ti, at 2080 Ti prices or above. So the $800 card would buy me a 30% boost if I'm right.

I can't justify $1200+ for a graphics card.

It's been over a year since the 2080 Ti started at $1200. Can we all just finally admit that the price on that card is now $1000? If people are going to whine about the 2080 Ti costing $1200, I'm going to whine about pre-launch AMD pricing and lousy AMD performance based on benchmarks from before they issue post-launch firmware bandaids.

If we stick to reality as it exists today, what we see here is that the 3080 is going to crush the 1080 Ti in terms of performance, and it's going to do it at the same price. The 3080 Ti is going to crush the 3080 with pricing TBD (but highly unlikely to exceed the $1200 2080 Ti FE, and most likely to match the $1000 2080 Ti AIB cards).
 
It's been over a year since the 2080 Ti started at $1200. Can we all just finally admit that the price on that card is now $1000? If people are going to whine about the 2080 Ti costing $1200, I'm going to whine about pre-launch AMD pricing and lousy AMD performance based on benchmarks from before they issue post-launch firmware bandaids.

Cool, so I’ll just wait a year for it to drop to $1000 or so.

If we stick to reality as it exists today, what we see here is that the 3080 is going to crush the 1080 Ti in terms of performance, and it's going to do it at the same price. The 3080 Ti is going to crush the 3080 with pricing TBD (but highly unlikely to exceed the $1200 2080 Ti FE, and most likely to match the $1000 2080 Ti AIB cards).

We’ll see. The only thing that will surprise me is if 3080 Ti starts at $1000.
 
I would imagine the $800 Ampere will be about the same as the 2080 Ti in performance, whereas the Ampere Ti will be about 30% faster than the 2080 Ti, at 2080 Ti prices or above. So the $800 card would buy me a 30% boost if I'm right.

I can't justify $1200+ for a graphics card.

For reals...
I'm planning my next build around a 3900X with a budget of about $1200... (some parts carrying over)
 
It's been over a year since the 2080 Ti started at $1200. Can we all just finally admit that the price on that card is now $1000? If people are going to whine about the 2080 Ti costing $1200, I'm going to whine about pre-launch AMD pricing and lousy AMD performance based on benchmarks from before they issue post-launch firmware bandaids.

If we stick to reality as it exists today, what we see here is that the 3080 is going to crush the 1080 Ti in terms of performance, and it's going to do it at the same price. The 3080 Ti is going to crush the 3080 with pricing TBD (but highly unlikely to exceed the $1200 2080 Ti FE, and most likely to match the $1000 2080 Ti AIB cards).
don't forget tax
 