NVIDIA GeForce RTX 3090 is ~10% faster than RTX 3080

Really, I would say the actual VRAM used by DOOM Eternal on modern architectures seems to be somewhere between 4GB and 6GB at 4K (and 3GB at 1080p).
I'm aware memory allocated is not memory required.
I was going from here: https://www.techspot.com/review/2099-geforce-rtx-3080/
Where they see the 8GB negatively impacting the 2080. (Can't place the link at the right spot, but if you do a find for "Doom Eternal has been " you get the relevant section.)
The dataset you provided seems to show a different behaviour so I'm not sure if there are settings differences that could account for that or if someone messed up something in the tests.
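On the allocated-versus-required distinction: most overlays and monitoring tools report the driver's allocation counter, not the game's true working set. A minimal sketch of what such tools actually read, using NVIDIA's NVML bindings (the pynvml package, assuming it's installed):

```python
# What most monitoring tools report: VRAM *allocated* across all processes,
# not the amount a game actually needs resident to avoid a performance hit.
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"VRAM allocated: {mem.used / 2**30:.1f} GiB of {mem.total / 2**30:.1f} GiB")
pynvml.nvmlShutdown()
```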

Still, I think the point I made stands: memory requirements for max settings can be expected to jump up a bit in the near future, after a new console generation releases.
Again, if you don't keep your cards long it's a non-issue; if you hang on to them, it might become one.
 
I really don't think the 3090 will be the top-tier Nvidia 3000 series card. Look back at previous generations and you will see they almost always release their trump cards later in the product cycle. It's all about money. Everyone who can will jump on the 3080 and 3090 now. Some six months later we'll see a 3080 Ti, 3090 Ti, new Titan, or a combination of the sorts. Many of the same customers will turn right around and buy the next big thing because they "have to have it."

GTX 780 May 2013
GTX 780 Ti Nov 2013

GTX 980 Sept 2014
GTX 980 Ti June 2015

GTX 1080 May 2016
GTX 1080 Ti March 2017

RTX 2080 Sept 2018
RTX 2080 Super July 2019

If you think the 3080 and 3090 won't have faster versions this series, you are very naive.

Ti branding is dead, from what I hear.

There might be Supers. Who knows. Not every gen has a refresh.

Depends on a combination of how their yields/bins work out and if AMD comes out with anything of note.
 
Essentially, NVIDIA got us. If it weren't for the scalpers, a lot of us would really have been taken and made the purchase.

I don't see how scalpers scalping the 3080 would have impacted people buying a 3090 that hasn't launched yet :p
 
I'm aware memory allocated is not memory required.
I was going from here: https://www.techspot.com/review/2099-geforce-rtx-3080/

I wouldn't trust a single outlier benchmark.

TechPowerUp's RTX 3080 review continues to show the same 25% performance gap at 4K ultra nightmare versus 1080p, RTX 2080 Ti versus RTX 2080, and I assume they're running the latest build.

[Chart: TechPowerUp, Doom Eternal at 3840×2160]

TweakTown shows the same differential.

[Chart: TweakTown, MSI GeForce RTX 3080 Gaming Trio review]

And Tom's is also pretty close:

[Chart: Tom's Hardware, Doom Eternal benchmark]


If you scale the 2070 Super's performance up to the 2080's in the Tom's graph, you will see the exact same 27% performance gap. And if it's the 2070 failing on you, that card has the exact same amount of VRAM (proving you're limited in some other fashion)!
 
I really don't think the 3090 will be the top-tier Nvidia 3000 series card. Look back at previous generations and you will see they almost always release their trump cards later in the product cycle. It's all about money. Everyone who can will jump on the 3080 and 3090 now. Some six months later we'll see a 3080 Ti, 3090 Ti, new Titan, or a combination of the sorts. Many of the same customers will turn right around and buy the next big thing because they "have to have it."

GTX 780 May 2013
GTX 780 Ti Nov 2013

GTX 980 Sept 2014
GTX 980 Ti June 2015

GTX 1080 May 2016
GTX 1080 Ti March 2017

RTX 2080 Sept 2018
RTX 2080 Super July 2019

If you think the 3080 and 3090 won't have faster versions this series, you are very naive.

Of course there will be faster cards. The point of the 3090, like the Titans of the past, is that you pay a premium for that performance up front, early when yields are low, and get to enjoy it for the full lifecycle. You can get the performance cheaper later, but very late in the product lifecycle.

This is not a gamer's card and it was never meant for normal consumers. It is the Titan replacement, so the same applies as in previous gens.

Stop expecting stupid performance increases and value for money, as that simply isn't the purpose of the 3090. Even the Pascal Titans didn't deliver massive gains, and that was the best generation in recent memory.
 
The 3090 is a giant waste of money for a gamer, as it's barely any faster than a 3080. If you have uses for it beyond just gaming, then it might make sense due to the increased amount of VRAM. Plus, the rumor is these things are going to be extremely hard to find on top of it.
 
I wouldn't trust a single outlier benchmark.
TechSpot does seem an outlier; however, they specifically test with and without ultra nightmare textures, and I've seen some solid work from them so far. This is not the same as comparing 4K vs 1080p.
TechPowerUp's RTX 3080 review continues to show the same 25% performance gap at 4K ultra nightmare versus 1080p, RTX 2080 Ti versus RTX 2080, and I assume they're running the latest build.
Looking at TechPowerUp's RTX 3080 review, the relative performance difference between the 2080 Ti and 2080 jumps from 18% (245 vs 208) at 1080p to 27% (117 vs 92) at 4K.
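As a quick sanity check on those ratios, the gap is just the FPS ratio minus one (a minimal sketch using the TechPowerUp averages quoted above):

```python
# Relative performance gap between two cards from their average FPS.
def gap_pct(faster_fps, slower_fps):
    return (faster_fps / slower_fps - 1) * 100

# RTX 2080 Ti vs RTX 2080, figures quoted from TechPowerUp's review.
print(f"1080p: {gap_pct(245, 208):.1f}%")  # ~17.8%, the "18%" above
print(f"4K:    {gap_pct(117, 92):.1f}%")   # ~27.2%, the "27%" above
```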

However, I don't think this would be a good test for what we're discussing, because changing the resolution hits a lot more than just the memory required.

Textures are more often than not the primary factor for defining the memory budget requirements.
If you set textures to ultra nightmare and then change the render target between 1080p and 4K, it might not significantly affect the relative performance between the cards, depending on a lot of other factors.

I'm inclined to trust the information from TechSpot since they specifically test the texture setting. But without more corroborating tests from other sources, I'm not going to press on that.

I'll stick to my initial point: it's reasonable to expect a jump in memory usage in the near future given that the new console hardware is out, and 10GB might be a limitation for those who keep their GPUs for some generations.
 
Values from someone on Reddit:
  • Time Spy (graphics): 20569 vs 17645 (via Ars Technica). 16.6% increase
  • Time Spy Extreme (graphics): 10560 vs 8798 (via Ars Technica). 20% increase
  • Assassin's Creed Odyssey (4K/Maxed): 71 vs 63 (via Ars Technica). 12.7% increase
  • Horizon Zero Dawn (4K/Ultimate Quality): 86 vs 73 (via TechSpot). 17.8% increase
  • Red Dead Redemption II (4K/Maxed/No MSAA): 66 (can't find any exact)
  • Shadow of the Tomb Raider (4K/Maxed/DLSS/No RT): 112 vs 103 (via Guru3D). 8.7% increase
  • Shadow of the Tomb Raider (4K/Maxed/DLSS/RT): 82 vs 69 (via PC Gamer). 18.8% increase
  • Control (DLSS Quality): 55-70 vs 64.8 (via Gamers Nexus)
  • Control (DLSS Performance): 80-95 (can't find anything)
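Those percentages are just the ratio of each pair of figures (a minimal sketch recomputing them from the numbers listed above):

```python
# Percent increase of the first (3090) figure over the second (3080) figure.
pairs = {
    "Time Spy (graphics)":               (20569, 17645),
    "Time Spy Extreme (graphics)":       (10560, 8798),
    "Assassin's Creed Odyssey (4K)":     (71, 63),
    "Horizon Zero Dawn (4K)":            (86, 73),
    "Shadow of the Tomb Raider (no RT)": (112, 103),
    "Shadow of the Tomb Raider (RT)":    (82, 69),
}
for name, (a, b) in pairs.items():
    print(f"{name}: {(a / b - 1) * 100:.1f}% increase")
```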
 
Values from someone on Reddit:
  • Time Spy (graphics): 20569 vs 17645 (via Ars Technica). 16.6% increase
  • Time Spy Extreme (graphics): 10560 vs 8798 (via Ars Technica). 20% increase
  • Assassin's Creed Odyssey (4K/Maxed): 71 vs 63 (via Ars Technica). 12.7% increase
  • Horizon Zero Dawn (4K/Ultimate Quality): 86 vs 73 (via TechSpot). 17.8% increase
  • Red Dead Redemption II (4K/Maxed/No MSAA): 66 (can't find any exact)
  • Shadow of the Tomb Raider (4K/Maxed/DLSS/No RT): 112 vs 103 (via Guru3D). 8.7% increase
  • Shadow of the Tomb Raider (4K/Maxed/DLSS/RT): 82 vs 69 (via PC Gamer). 18.8% increase
  • Control (DLSS Quality): 55-70 vs 64.8 (via Gamers Nexus)
  • Control (DLSS Performance): 80-95 (can't find anything)

Comparing benchmarks of cards from multiple sources with different/unknown test benches is useless.

Everything should be taken with a grain of salt until official reviews start popping up.
 
TechSpot does seem an outlier; however, they specifically test with and without ultra nightmare textures, and I've seen some solid work from them so far. This is not the same as comparing 4K vs 1080p.

Looking at TechPowerUp's RTX 3080 review, the relative performance difference between the 2080 Ti and 2080 jumps from 18% (245 vs 208) at 1080p to 27% (117 vs 92) at 4K.

However, I don't think this would be a good test for what we're discussing, because changing the resolution hits a lot more than just the memory required.

And yet, if you compare the 2080 Ti versus the 2080 Super at 1080p and 4K, you get the same 1.14× performance difference.

The reason for the standard 2080's performance falloff in Doom at 4K isn't VRAM; it's that the Super has vastly better memory bandwidth.

The 2080 Super has exactly the same VRAM capacity as the 2080.

And now you know precisely why NVIDIA had to develop GDDR6X for this generation!

The RTX 2080 Super has a theoretical 6% speed increase over the original, so any increase over that is thanks to memory bandwidth!
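As a rough back-of-the-envelope with the published specs (assuming Founders Edition boost clocks, and treating compute as cores × clock, which is a simplification):

```python
# Where does the 2080 Super's edge come from? Published reference specs:
#                 CUDA cores, boost MHz (FE), memory bandwidth GB/s
rtx_2080       = (2944, 1800, 448)  # 14 Gbps GDDR6, 256-bit bus
rtx_2080_super = (3072, 1815, 496)  # 15.5 Gbps GDDR6, 256-bit bus

cores0, clk0, bw0 = rtx_2080
cores1, clk1, bw1 = rtx_2080_super
print(f"Theoretical compute gain: {(cores1 * clk1) / (cores0 * clk0) - 1:.1%}")  # ~5.2%
print(f"Memory bandwidth gain:    {bw1 / bw0 - 1:.1%}")                          # ~10.7%
```

So the Super's bandwidth grew roughly twice as much as its compute did, which is consistent with the gap widening at 4K.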

In fact, this is precisely the reason why I'm disappointed to hear that the Ampere 3070 is shipping "faster than 2080 Ti" with the same memory bandwidth as the stock 2080. They had better have improved the compression massively!

I must say that I am surprised at the inherently different result in the TechSpot review at 4K (their Super is 5% faster than the 2080, indicating VRAM-limited), but I'm not going to worry myself sick over it. I could find places in Doom 2016 where my frame rate fell by half, but the fact that only one site actually found this shows how unlikely you are to encounter it (much like in DOOM it was restricted to one level)!
 
The reason for the standard 2080's performance falloff in Doom at 4K isn't VRAM; it's that the Super has vastly better memory bandwidth.
I agree with that. That's why I pointed out that increasing resolution hits more than just VRAM size.
Testing while changing only the texture size seems a decent approach to measuring memory requirements, and that's what TechSpot did.

[...] but the fact that only one site actually found this shows how unlikely you are to encounter it (much like in DOOM it was restricted to one level)!
I also agree that in current games you're unlikely to exceed 8GB of VRAM. But it does seem we're close enough. And with the increase in graphical fidelity that usually accompanies the console generational jump, we might start exceeding those 8GB in the near future.

I think that within the next 4 years some PC games at max settings will require or significantly benefit from having more than 10GB VRAM.
It's fine if we have different opinions on how it will play out.
 
Nope, it’s what we discovered after the hype train. 3090 is around 10% faster than the 3080.

To be honest, it seems as if the scalpers actually gave the community enough time to digest the information before actually being taken.

I will have to call bullshit on this

10 to 15% over the 3080 is easily hittable by Big Navi, assuming AMD has an HBM variant available. The price of HBM won't be an issue, as the budget is now up to $1,500, unless NVIDIA has a 3090 Ti in reserve.
 
I will have to call bullshit on this

10 to 15% over the 3080 is easily hittable by Big Navi, assuming AMD has an HBM variant available. The price of HBM won't be an issue, as the budget is now up to $1,500, unless NVIDIA has a 3090 Ti in reserve.
Most leaks are saying they are going with a narrower memory bus and GDDR6 plus a sort of onboard high-speed cache system. I guess we'll see how well it works, but I agree, I think an 80 CU part with HBM could possibly have gotten them the performance crown back if they wanted it. It just doesn't appear that's going to be the case (at least based on leaks at this time). Maybe in the future they will release one, or for their professional cards or something. Guess we'll find out more in a month and a week's time.
 
1 hour long? Why do these talking heads believe I want to watch a 1-hour-long talking head answering something that can be done in 2 minutes?

I agree; maybe 15 mins, because YT forces that now in order to monetize, but 1 hour for this is plain redunc-u-lous.
 
Am I better off with my twin RTX 2080 Tis or one 3090? I use them for flight sims. Will the new cards be able to run in SLI mode as well?
 
Comparing benchmarks of cards from multiple sources with different/unknown test benches is useless.

Everything should be taken with a grain of salt until official reviews start popping up.

Exactly the point I've been trying to make. Until these cards go up against each other on identical test configurations in the same games/benchmarks via the same reviewers, we won't know anything for sure.
 
And yet, if you compare the 2080 Ti versus the 2080 Super at 1080p and 4K, you get the same 1.14× performance difference.

The reason for the standard 2080's performance falloff in Doom at 4K isn't VRAM; it's that the Super has vastly better memory bandwidth.

The 2080 Super has exactly the same VRAM capacity as the 2080.

And now you know precisely why NVIDIA had to develop GDDR6X for this generation!

The RTX 2080 Super has a theoretical 6% speed increase over the original, so any increase over that is thanks to memory bandwidth!

In fact, this is precisely the reason why I'm disappointed to hear that the Ampere 3070 is shipping "faster than 2080 Ti" with the same memory bandwidth as the stock 2080. They had better have improved the compression massively!

I must say that I am surprised at the inherently different result in the TechSpot review at 4K (their Super is 5% faster than the 2080, indicating VRAM-limited), but I'm not going to worry myself sick over it. I could find places in Doom 2016 where my frame rate fell by half, but the fact that only one site actually found this shows how unlikely you are to encounter it (much like in DOOM it was restricted to one level)!


I agree as well.

You can tell by how the benchmarks degrade as you increase resolution.

If scaling is poor, it is likely due to memory bandwidth.

If performance falls off a cliff above a certain resolution, it's because you don't have enough VRAM, and the data is now being pulled over the relatively slow PCIe bus from system RAM.

Two different symptoms.
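To put numbers on why the overflow case is a cliff rather than a slope (published bandwidth figures; PCIe 3.0 x16 peaks around 16 GB/s per direction):

```python
# Why spilling out of VRAM is a cliff: the fallback path is far slower.
gddr6_gbs = 448     # RTX 2080: 14 Gbps GDDR6 on a 256-bit bus
pcie3_x16_gbs = 16  # PCIe 3.0 x16 theoretical peak, per direction

print(f"Local VRAM is ~{gddr6_gbs / pcie3_x16_gbs:.0f}x faster than "
      "pulling spilled data from system RAM over PCIe.")  # ~28x
```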
 
8K on an 88" monitor. Me want. That is one thicc video card (at the 3-minute mark and at the end of the video):
 
It seems like OCed 3080s are putting up about 4% performance increases in early testing.

I haven't looked, but how is the CPU usage in those tests? I'm wondering if we are seeing CPU bottlenecks with this gen. I haven't upgraded my CPU in quite a while, and the 2080 I had was definitely CPU-limited when I disabled V-Sync/G-Sync.
 
... you guys just wait ..

..AMD will show you! ... AMD will show you all!!!

:cry:😢😭

I'd love to see a more competitive AMD. Some of my best GPU experiences have been with AMD. However, it's rare that they achieve any kind of parity, much less superiority, with NVIDIA. At least on the top end of the GPU spectrum, where I buy GPUs.
 
Nvidia Blog post
Uh oh, seems NVIDIA is getting out ahead of the reviews on the 3090. They admit it's only 10-15% faster at 4K than the 3080.

Lol, that's a good find, and it also backs up the numbers we saw on VideoCardz. Them saying 15% is probably a one-off; based on the leaked Chinese review, it's really 4-9%. Also, Galax has an internal roadmap showing a 20GB 3080 is going to be released as a counter to Big Navi, which means the 3090 will be pretty much DOA.
 
DOA for most of us looking to get the best gaming experience but realizing we absolutely don't need a 3090 for the best 4K/8K gaming experience (at least, it's not worth the minimal gains versus the cost). In reality, quantities will probably be so low it's going to sell out regardless.

Damn you scalpers, making me think rationally! Just a week ago I was ready to buy both a 3080 and a 3090; now I'll be happy just to be able to get an NVIDIA-branded leather jacket.

EDIT: On the other hand, if I see an 'add to cart' button for the 3090... I will most likely see a cart error. Who am I kidding, I probably won't ever sniff an 'add to cart' button without some script/bot help.
 