NVIDIA Ampere GPUs to feature 2x performance than the RTX 20 series, PCIe 4.0, up to 826 mm² dies and 20 GB of VRAM

Was even the 2060 at $350 at launch? I mean I know now you can get a 2060KO for the $300 range, but everything was ungodly expensive at launch wasn't it?

I'm on the fence if I think their pricing is due to actual cost to manufacture, or if they're still riding the mining card glut, or if they figure they're really the only game in town so why the hell not charge a bloody fortune.
 
Was even the 2060 at $350 at launch? I mean I know now you can get a 2060KO for the $300 range, but everything was ungodly expensive at launch wasn't it?

I'm on the fence if I think their pricing is due to actual cost to manufacture, or if they're still riding the mining card glut, or if they figure they're really the only game in town so why the hell not charge a bloody fortune.
It is a mixture of all the above. Lack of competition at the top explains the huge price jump to the 2080ti.
 
As someone else said, the 8800 Ultra. I believe the 1080 Ti was 43% better than the 980 Ti for only $50 more, which makes these cards stand out. Both times Nvidia was getting pushed by AMD, even though Vega turned out to be a disappointment in reality. I have a really hard time believing that today Nvidia can double the performance of the previous gen. My guess is that the new gen will double the performance in some cherry-picked titles with ray tracing turned on but will only net a 15-20% performance gain across all titles, if that.
I don't think we'll see double the performance but I do expect a larger than average increase. Pascal -> Turing wasn't a big jump in performance.
 
I don't think we'll see double the performance but I do expect a larger than average increase. Pascal -> Turing wasn't a big jump in performance.
It'll be a jump exactly big enough to be just out of reach of the crybabies. No more, no less.
 
>826 mm^2 sized die on 7nm+
Do shitposters not know what defect rates are?
I love how they said they'd 'wait for navi' in order to make final tweaks, as if GPUs aren't designed years in advance. What a load of bullshit.
It's not redesigning the GPU, it's deciding how much of it they can leave dormant -> lower effective defect rate. And deciding how hard they need to push final clocks -> also reducing rejected silicon.
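To put rough numbers on the defect-rate point, here is a toy sketch using the textbook Poisson yield model; the defect density and the fraction of the die that can be fused off are made-up illustrative values, not real foundry figures.

```python
import math

# Toy Poisson yield model: yield = exp(-defect_density * die_area).
# Both numbers below are assumptions for illustration only.
DEFECT_DENSITY = 0.1   # defects per cm^2 (hypothetical, not a real TSMC figure)
DIE_AREA_CM2 = 8.26    # the rumored ~826 mm^2 Ampere die

# Fraction of fully working dies (nothing disabled) per wafer.
full_yield = math.exp(-DEFECT_DENSITY * DIE_AREA_CM2)
print(f"Fully working dies: {full_yield:.1%}")

# If only ~90% of the die has to be defect-free because the rest can be
# left dormant (fused-off units, lower clock bins), the sellable fraction rises.
harvested_yield = math.exp(-DEFECT_DENSITY * DIE_AREA_CM2 * 0.9)
print(f"Sellable dies with ~10% of units fused off: {harvested_yield:.1%}")
```

Even with generous assumptions, a die that size only ships in volume if some of it is allowed to be dead, which is exactly the "leave it dormant" point.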
 
Sorry but no, you make a high-end retail product, you don't get to claim a consumer should not expect a perfect experience. That's BS. See, that is the difference between Intel / NVIDIA and AMD: AMD are happy to use their customers as beta testers regardless of how much they spend.
In a perfect world yes, but in the real world, the more you complicate things the more problems you can have.
Consumers can expect perfect products all they want, doesn't mean it's going to happen.
I agree that SLI is less bad than Crossfire, but nothing compared to a single GPU.
It's like expecting a Ferrari to have any kind of reliability, it just won't.
 
In a perfect world yes, but in the real world, the more you complicate things the more problems you can have.
Consumers can expect perfect products all they want, doesn't mean it's going to happen.
I agree that SLI is less bad than Crossfire, but nothing compared to a single GPU.
It's like expecting a Ferrari to have any kind of reliability, it just won't.

I have used both Crossfire and SLI and they both pretty much sucked equally. When it worked it was great, but it was far more of a chore to get that than it should have been. They are also both pretty much dead, as neither company really supports them anymore.
 
Games like Witcher 3, the Dark Souls games, Skyrim SE (and heavily modded it still stands up well today), BioShock, Tomb Raider, GTA V, Dishonored 1 and 2, Dirt 4, Dirt Rally 1 (and 2, though 2 is badly optimized in general), some Far Cry games, Vermintide 2, Borderlands 2 and 3, etc. all support SLI directly or at least can be made to work with it well. The high-bandwidth SLI adapter and especially NVLink remove some of the old bandwidth glitch issues. While some other games will work with some jerry-rigging or turning off TAA, a bunch of games definitely don't work. UE4 supports SLI natively, so it's just that a lot of devs/studios aren't doing the work or budgeting for it. It's also a shame that VR devs didn't do the work to support dual GPU, because it would be the perfect case: one GPU per eye, since each eye in VR is a separately rendered view. There are a handful of VR games that do it, like Serious Sam, but in general it's not used in VR the way it could so logically be. :(
 
As someone else said, the 8800 Ultra. I believe the 1080 Ti was 43% better than the 980 Ti for only $50 more, which makes these cards stand out.
But weren't the 1000 series of cards a radically different architecture from the 900 series? I don't think there was anywhere near those gains from an 880 to 980 or 780 to 880.
 
Only thing I can say is that, given the price of the current cards, this will cost your first and second born children... 20 gigs of VRAM? I foresee this being astronomical in price... dare I say closer to a $2k price point.
 
But weren't the 1000 series of cards a radically different architecture from the 900 series? I don't think there was anywhere near those gains from an 880 to 980 or 780 to 880.
8800 GTX -> 9800 GTX (G80 -> G92) = -2% (die shrink)
9800 GTX -> GTX 285 (G92 -> Tesla) = +37% (die shrink)
GTX 285 -> GTX 480 (Tesla -> Fermi) = +39% (new arch)
GTX 480 -> GTX 580 (Fermi -> Fermi 2) = +11% (die shrink)
GTX 580 -> GTX 680 (Fermi 2 -> Kepler) = +19% (new arch)
GTX 680 -> GTX 780 Ti (Kepler -> Kepler 2) = +39% (optimization)
GTX 780 Ti -> GTX 980 Ti (Kepler -> Maxwell) = +43% (new arch)
GTX 980 Ti -> GTX 1080 Ti (Maxwell -> Pascal) = +85% (new arch)
GTX 1080 Ti -> RTX 2080 Ti (Pascal -> Turing) = +39% (new arch)

This is why we say Pascal was an outlier and you shouldn't set your expectations based on it.
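For anyone who wants to sanity-check those numbers, here's a quick script that compounds the per-generation gains from the list above and pulls out the typical non-Pascal jump:

```python
# Per-generation single-GPU gains from the list above, as fractions.
gains = {
    "G80 -> G92":        -0.02,
    "G92 -> Tesla":       0.37,
    "Tesla -> Fermi":     0.39,
    "Fermi -> Fermi 2":   0.11,
    "Fermi 2 -> Kepler":  0.19,
    "Kepler -> Kepler 2": 0.39,
    "Kepler -> Maxwell":  0.43,
    "Maxwell -> Pascal":  0.85,
    "Pascal -> Turing":   0.39,
}

# Compound improvement from the 8800 GTX all the way to the 2080 Ti.
total = 1.0
for step in gains.values():
    total *= 1.0 + step
print(f"8800 GTX -> 2080 Ti: roughly {total:.1f}x overall")

# Typical jump if you exclude the Pascal outlier.
typical = sorted(g for name, g in gains.items() if name != "Maxwell -> Pascal")
print(f"Median non-Pascal generation: about {typical[len(typical) // 2]:.0%}")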
 
Anybody thinking we will see something 2070
Only thing I can say is that, given the price of the current cards, this will cost your first and second born children... 20 gigs of VRAM? I foresee this being astronomical in price... dare I say closer to a $2k price point.
Do remember the card it is replacing is the Tesla V100, which currently retails for $7000, so I would dare say it will be closer to an $8k price point. And I will be ordering at least 2 of them.
 
Was even the 2060 at $350 at launch? I mean I know now you can get a 2060KO for the $300 range, but everything was ungodly expensive at launch wasn't it?
It was $350 at launch, but it performs like a GTX 1070 which wasn't a good deal. Even at $300 the RTX 2060 is hard to justify over buying a used GTX 1070.
I'm on the fence if I think their pricing is due to actual cost to manufacture, or if they're still riding the mining card glut, or if they figure they're really the only game in town so why the hell not charge a bloody fortune.
I think it's the mining card glut because Nvidia tasted more money and they liked it, and don't want to go back. While Nvidia has like 90%+ graphics card market share, they do have to be massively concerned about AMD. AMD is basically giving Sony and Microsoft their best GPUs that aren't even out on PC yet, and even at $600 it would be very attractive for a gamer to ditch PC to go console. You're getting a high-end PC for $600 or less with Xbox X or PS5, while an RTX 2080 Ti is like over $1k to purchase. If Nvidia just moved the line down by making the RTX 2080 the RTX 3070 while still asking for $600, that shit won't cut it.

Nvidia needs another GTX 970 which means they need to offer a massively faster GPU than what PS5 and Xbox X offers for less. I don't have much faith in AMD offering competitive pricing because at this point they have no incentive to do so. If you're buying an Xbox X or PS5 then you're buying an AMD product. If you're going Stadia then you're going with an AMD product. You want a decently good AMD graphics card then buy Big Navi, which is likely going to be $700-$800. Nvidia could be in the same situation with Geforce Now, but luckily it looks like that service isn't doing so well. If Nvidia doesn't offer a RTX 3070 for $350 then the PC gaming market could collapse and their revenue will drop significantly. It would also be very bad for us gamers as well.

 
Back in the day (late 90's / early 2000's) that was pretty much standard and expected.
In the 90's and 2000's there were more than a dozen fab plants that could easily and reliably build GPUs on the latest node. Now there are 2, maybe 3, fabs capable of producing them, and they are booked year-round, day in and day out, by anybody with the pocketbooks to pay for them. And back then any node shrink was massive and significant; now even going from 14nm to 7nm, while being "statistically" huge, is not going to give gains anywhere near as large. Gone are the days when Moore's law was still the rule, and until there are some significant breakthroughs we are going to have to settle for incremental upgrades at increasing price points as R&D costs go up and fab time goes down.
 
I think it's the mining card glut because Nvidia tasted more money and they liked it, and don't want to go back.

Think a little harder here. Why did the mining craze benefit Nvidia but not AMD? Because AMD's cards are too slow. Simple as that. Nvidia is charging more for an inherently superior product. The other bit which I've mentioned a thousand times is that your baseline company, AMD, sells every single GPU for a loss. If not for their CPU business being able to subsidize their GPU losses, AMD would either be out of business entirely or would have raised their prices.

AMD is basically giving Sony and Microsoft their best GPUs that aren't even out on PC yet, and even at $600 it would be very attractive for a gamer to ditch PC to go console.

AMD's strategy here is to sell those console GPUs for a loss in order to build up an installed base, and then use that installed base to force...errr..."entice" developers to develop for their hardware. It's a pretty standard approach done by monopolies. AMD, being largely just a token player in the GPU game based on revenues and unit sales, can get away with this. If Nvidia attempted the same thing, they'd be in trouble with the FTC.

Nvidia needs another GTX 970 which means they need to offer a massively faster GPU than what PS5 and Xbox X offers for less. I don't have much faith in AMD offering competitive pricing because at this point they have no incentive to do so.

AMD's only gameplan is to offer low pricing. They seem to be incapable of offering a superior product, so they just compete on price instead. Why would anyone choose a slow AMD card with broken drivers if it costs the same as a faster Nvidia card which actually works?

Back in the day (late 90's / early 2000's) that was pretty much standard and expected.

That's true mathematically, but only because you're essentially dividing by zero. No generational leap of transistors has ever provided the same percentage improvement that pens and paper added over doing calculations in one's head. Let us yearn for the return to relying on pen and paper for everything.
 
8800 GTX -> 9800 GTX (G80 -> G92) = -2% (die shrink)
9800 GTX -> GTX 285 (G92 -> Tesla) = +37% (die shrink)
GTX 285 -> GTX 480 (Tesla -> Fermi) = +39% (new arch)
GTX 480 -> GTX 580 (Fermi -> Fermi 2) = +11% (die shrink)
GTX 580 -> GTX 680 (Fermi 2 -> Kepler) = +19% (new arch)
GTX 680 -> GTX 780 Ti (Kepler -> Kepler 2) = +39% (optimization)
GTX 780 Ti -> GTX 980 Ti (Kepler -> Maxwell) = +43% (new arch)
GTX 980 Ti -> GTX 1080 Ti (Maxwell -> Pascal) = +85% (new arch)
GTX 1080 Ti -> RTX 2080 Ti (Pascal -> Turing) = +39% (new arch)

This is why we say Pascal was an outlier and you shouldn't set your expectations based on it.

It is sad that most whiners can only remember 1-2 generations back... Pascal -> Turing was business as usual... but at least we can write off anybody whining about "no progress" as clueless muppets ;)
 
8800 GTX -> 9800 GTX (G80 -> G92) = -2% (die shrink)
9800 GTX -> GTX 285 (G92 -> Tesla) = +37% (die shrink)
GTX 285 -> GTX 480 (Tesla -> Fermi) = +39% (new arch)
GTX 480 -> GTX 580 (Fermi -> Fermi 2) = +11% (die shrink)
GTX 580 -> GTX 680 (Fermi 2 -> Kepler) = +19% (new arch)
GTX 680 -> GTX 780 Ti (Kepler -> Kepler 2) = +39% (optimization)
GTX 780 Ti -> GTX 980 Ti (Kepler -> Maxwell) = +43% (new arch)
GTX 980 Ti -> GTX 1080 Ti (Maxwell -> Pascal) = +85% (new arch)
GTX 1080 Ti -> RTX 2080 Ti (Pascal -> Turing) = +39% (new arch)

This is why we say Pascal was an outlier and you shouldn't set your expectations based on it.

Pretty good table; the only thing I would change is that Maxwell to Pascal was a double die shrink as well as a new arch. Remember, 20nm was cancelled due to problems. This skip really messed up AMD, as they didn't have the resources to switch up.

This is why Nvidia had such a large performance jump around then.
 
If Nvidia can make an RTX 3070 for $350 that performs like an RTX 2080 Ti then sure, but I really doubt Nvidia would do that today. Remember when the GTX 970 was $330 and performed like a 780 Ti, which was $700?

Unlikely to be repeated; the reason the 970 was so cheap was that it was built on a mature 28nm process.
 
The reason the GTX 970 was so cheap was that the competition was the $400 PS4, and to some extent AMD's massive oversupply of ~$250 Hawaii 290/290X cards after the first crypto-mining collapse.

I don't know why people like to tie prices to costs so much. These are all luxury products; they aren't cost-plus commodities. The GTX 980 was priced $220 more than the 970, or 67% more, and I'd venture a guess that the actual unit cost difference between them was nowhere near that big.

Maybe not right away in 2020, due to other factors (global economic issues) and supply issues, but at the latest in 2021 both next-gen consoles will serve as a price benchmark that both Nvidia and AMD will need SKUs to target and compete against.
 
I suppose it is crazy to point out that we simply don’t need more GPU power really? I mean Jesus I am driving a 4K OLED on latest titles with a Titan XP still.
 
I suppose it is crazy to point out that we simply don’t need more GPU power really? I mean Jesus I am driving a 4K OLED on latest titles with a Titan XP still.

That’s how I feel about my 2080ti. I’ll likely only upgrade when it dies out of warranty. For the first time I am content lol. 3440x1440 is my main rig. I only got the 2080ti because my 1080ti died, and wanted the extra performance for VR.

I think nVidia realizes this and is one reason they are pushing RT.
 
Anybody thinking we will see something 2070

Do remember the card it is replacing is the Tesla V100, which currently retails for $7000, so I would dare say it will be closer to an $8k price point. And I will be ordering at least 2 of them.
Ahh thought this was a mainstream consumer card not a pro card. #Mybad
 
Ahh thought this was a mainstream consumer card not a pro card. #Mybad

It’s usually a good indicator of where the Titan will end up. The Titan is usually the top pro card minus ~5% of the cores, so I like to follow it.
 
It’s usually a good indicator of where the Titan will end up. The Titan is usually the top pro card minus ~5% of the cores, so I like to follow it.
The Titan isn't a full pro card though, it's the in-between card. Prosumer - it's not quite the same as the Quadro cards, but it's beefier than the GeForce cards.
 
Anybody thinking we will see something 2070

Do remember the card it is replacing is the Tesla V100, which currently retails for $7000, so I would dare say it will be closer to an $8k price point. And I will be ordering at least 2 of them.
I did some more research on this and I don't see anything indicating that this is replacing the Tesla V100; I see more indications that it's replacing the Turing arch. These are meant to be mainstream gaming cards... from what I read at least.
https://en.wikipedia.org/wiki/Ampere_(microarchitecture)
https://videocardz.com/newz/rumor-first-nvidia-ampere-geforce-rtx-3080-and-rtx-3070-specs-surface
Both sites talk about the 3000 series cards, nothing about the Tesla V100.
 
The Titan isn't a full pro card though, it's the in-between card. Prosumer - it's not quite the same as the Quadro cards, but it's beefier than the GeForce cards.

This has evolved over the last two generations so that now the top Quadro card has a matching GeForce SKU, just with more VRAM. With Turing, the Titan RTX has the same core count as the Quadro RTX 8000, but the latter has double the VRAM (48 GB vs 24 GB). With Pascal, the Titan Xp has the same 3840 cores as the Quadro P6000, but the latter has double the VRAM (24 GB vs 12 GB).
 
I did some more research on this and I don't see anything indicating that this is replacing the Tesla V100; I see more indications that it's replacing the Turing arch. These are meant to be mainstream gaming cards... from what I read at least.
https://en.wikipedia.org/wiki/Ampere_(microarchitecture)
https://videocardz.com/newz/rumor-first-nvidia-ampere-geforce-rtx-3080-and-rtx-3070-specs-surface
Both sites talk about the 3000 series cards, nothing about the Tesla V100.

https://www.nextplatform.com/2020/0...rst-production-cray-shasta-supercomputer/amp/
 
@CorgiKitty mentions that a GA102 is in development, too. However, they offer no details on this GPU apart from that NVIDIA is evaluating the performance of Navi 21 before making adjustments to GA102.

So Nvidia wants to be able to cut out as much as possible to give us the lowest possible performance that is still just ahead of AMD.
Super shitty for us. Nvidia just loves milking those product releases.



I suppose it is crazy to point out that we simply don’t need more GPU power really? I mean Jesus I am driving a 4K OLED on latest titles with a Titan XP still.

The goal should always be a minimum FPS that matches your monitor's refresh rate. We are nowhere close to that yet, especially for those of us who have been trying to reach 4K/120 for years.
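In frame-time terms that target is just 1000 ms divided by the refresh rate, which makes it obvious how much headroom 4K/120 actually demands:

```python
# Frame-time budget per refresh rate: every frame must finish inside this window.
for hz in (60, 120, 144, 165):
    print(f"{hz:>3} Hz -> {1000 / hz:.2f} ms per frame")
```

At 120 Hz the whole frame, including the slow ones, has to come in under roughly 8.3 ms, which today's cards can't manage at 4K in demanding titles.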
 
I suppose it is crazy to point out that we simply don’t need more GPU power really? I mean Jesus I am driving a 4K OLED on latest titles with a Titan XP still.
60 Hz, though... :vomit:
Back in the day (late 90's / early 2000's) that was pretty much standard and expected.
A lot changes in 20 years. Back then, 3D rasterization was a rapidly changing landscape with new technologies and techniques emerging every year or two. We're now at the point where we've pretty much peaked on how to render graphics using rasterization, and the only real way to eke out more performance is by brute-forcing it with more transistors. The standard for the past 10 years has been 30-40% with the top card from NVIDIA, 15-25% with AMD.
 
This still isn't as bad as the CD-ROM drive days... when 1x came out, then next week 2x, then 4x, and so on. Every other week the speed was doubling because they had stock sitting on shelves. Before the 1x drives ran out they had a cadre of 20x drives waiting to ship. THAT was market manipulation done by the retailers.

What the video card makers are doing is basically forcing their AIB vendors to pick them over the others. Once they have locked in exclusives then they will ship.
 
Think a little harder here. Why did the mining craze benefit Nvidia but not AMD? Because AMD's cards are too slow. Simple as that. Nvidia is charging more for an inherently superior product.
I never said AMD never benefited from the mining craze. Also, from what I remember the miners were buying up AMD cards like crazy, with Nvidia playing second fiddle. So much so that ASRock started to make AMD GPUs specifically for mining. There was a reason why Nvidia GPUs were cheaper compared to AMD during the mining craze. Why do you think the RX 480s and 580s are everywhere on eBay? They were used for mining and now the mining market is dead.
The other bit which I've mentioned a thousand times is that your baseline company, AMD, sells every single GPU for a loss. If not for their CPU business being able to subsidize their GPU losses, AMD would either be out of business entirely or would have raised their prices.
Citation needed.
AMD's strategy here is to sell those console GPUs for a loss in order to build up an installed base, and then use that installed base to force...errr..."entice" developers to develop for their hardware. It's a pretty standard approach done by monopolies. AMD, being largely just a token player in the GPU game based on revenues and unit sales, can get away with this. If Nvidia attempted the same thing, they'd be in trouble with the FTC.
The way I see it, since AMD's GPU market share is pathetically low, selling hardware cheap to Sony and Microsoft is better than not selling anything at all. There are plenty of people who would only buy an Nvidia GPU but have no problem with a PlayStation or Xbox console. In that case AMD banks.

AMD's only gameplan is to offer low pricing. They seem to be incapable of offering a superior product, so they just compete on price instead. Why would anyone choose a slow AMD card with broken drivers if it costs the same as a faster Nvidia card which actually works?
I don't know what AMD's gameplan is, and I don't think even AMD knows it. Lisa Su wants AMD to stop looking like the lower-priced alternative, but they have a long way to go before they can ask for an equal price compared to Nvidia. As of right now the GPUs AMD offers are faster for a lower price, with ray tracing missing. The problem is they aren't fast enough to justify the slight price decrease while not offering ray tracing.
 
That’s how I feel about my 2080ti. I’ll likely only upgrade when it dies out of warranty. For the first time I am content lol. 3440x1440 is my main rig. I only got the 2080ti because my 1080ti died, and wanted the extra performance for VR.

I think nVidia realizes this and is one reason they are pushing RT.
They are pushing RT because that's where things need to go. The traditional graphics pipeline is old; discoveries and algorithms made in the last 5 years could be drastically better, and the tech has caught up to those discoveries. By getting RT and many of these features in place and fleshed out now, nVidia will have mature tech ready when the pipeline is finally updated, not first gen. I suspect in another 2 years we will start seeing MCM consumer cards, and another year or 2 after that we'll see the pipeline updated to properly use them. Outside of a "must have" launch title or 2, of course.
 