The Slowing Growth of VRAM in Games

I can expect VRAM to be appropriate for a 4 year span on a $699 card. It was for the $699 GTX 1080 8GB (I'm using it now 4.33 years after launch and 8GB doesn't fail at this performance level), and GTX 1080 Ti 11GB (certainly will be fine in 2021). 16GB will be fine for the $699 Radeon VII in 2023 without a doubt. I'm not sure how well the $699 3GB GTX 780 Ti did in 2017, but Kepler should not be our target for aging well.

You realize that all those cards you mentioned will be severely outdated in 2023 regardless of VRAM?

I doubt very much we'll see 4-year spans like Pascal ever again. At least not for people who like to play new, demanding titles.
 

2023? I said "I can expect VRAM to be appropriate for a 4 year span on a $699 card" and listed what the fourth year is for each card. And yes, the Radeon VII will be sucking wind in 2023, but VRAM will have nothing to do with it. Which means the VRAM was not inadequate for the card.
 
I can expect VRAM to be appropriate for a 4 year span on a $699 card. It was for the $699 GTX 1080 8GB (I'm using it now 4.33 years after launch and 8GB doesn't fail at this performance level), and GTX 1080 Ti 11GB (certainly will be fine in 2021). 16GB will be fine for the $699 Radeon VII in 2023 without a doubt. I'm not sure how well the $699 3GB GTX 780 Ti did in 2017, but Kepler should not be our target for aging well.

GDDR6 should not be compared to GDDR5, at least until it matures some. After all, we have had GDDR5 for 10 years!

Right now, it seems that 2 GB chips are rather costly for GDDR6, so you're limited to 1 GB per lane or else the price skyrockets. Furthermore, the lanes for GDDR6 are much harder to produce, which is why we are now seeing 128-bit (4-lane) mainstream cards instead of the 192-bit and 256-bit of the past. 256-bit is high end now.

With RTX 3080 performance, it's better to have 320-bit bandwidth with 10 GB than 256-bit bandwidth with 16 GB.

Bottom line: GDDR6 has been too little, too late.
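
For reference, the capacity side of this is straightforward bus math; a minimal sketch, assuming 32-bit-wide GDDR6/GDDR6X chips in 1 GB or 2 GB densities (which covers the configurations discussed in this thread):

```python
# Capacity = number of 32-bit memory channels ("lanes") x chip density.
def vram_capacity_gb(bus_width_bits: int, chip_density_gb: int) -> int:
    chips = bus_width_bits // 32      # one chip per 32-bit channel
    return chips * chip_density_gb

print(vram_capacity_gb(320, 1))       # 10 -> 3080-style: 320-bit, 1 GB chips
print(vram_capacity_gb(256, 2))       # 16 -> 256-bit with pricier 2 GB chips
print(vram_capacity_gb(128, 1))       #  4 -> 128-bit mainstream card
```

This is why the choice above is framed as 320-bit/10 GB versus 256-bit/16 GB: with 2 GB chips at a premium, a wider bus of 1 GB chips is the cheaper way to add bandwidth rather than capacity.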
 

What are your thoughts on 10GB 320-bit 19 Gbps G6X vs 12GB 384-bit 16 or 15.5 Gbps G6? It's the same GA102 chip either way, and approximately the same bandwidth. I'm fairly certain that G6X is more expensive per GB, so the 12GB G6 option at least can't be significantly more expensive. Unless there's extra work on the PCB that costs significantly more; TBH I'm not sure.
 
Because I have a 4K monitor and also game in VR at 120Hz (Valve Index), I was concerned that settling for the 3080 10GB would shorten its useful life too quickly and that I should instead step up to the 3090 24GB, especially if ray tracing gets more traction. While 24GB is certainly overkill, even at 4K, there is nothing in the middle being offered at the time of release. But the cost of a roughly 20% jump in performance and a 2.4x increase in VRAM for around double the price of a 3080 is a really tall order. I have the means to do it, as well as room in the case of my new build (but barely). I am usually the person who buys a video card once every 4-5 years (about the frequency at which I build a new machine); I'm still on the GTX 1080 I got in May of 2016. But with the RAM layout of these cards, I'm wondering if I may be better off just going with the RTX 3080 for now and doing a mid-cycle refresh instead.
 
I think the goal is for PC games to start leveraging high SSD and bus throughput the way the consoles are going to: stream the data in at fantastically high speed as needed rather than having to cache enormous amounts on the video card at once. If this tech becomes as standard on the PC as it is going to be on next-gen consoles, we just won't need VRAM to increase at the rate it has been. The necessary quantity may actually shrink somewhat if texture streaming becomes fast and efficient enough. It will become more a matter of having just two things:
1) enough frame buffer in general for the resolution you use. At the higher end that would be 4K and 8K.
2) enough texture storage that whatever data streaming tech is in use can always keep up. No more than that is necessary. This can increase slowly over time.

Perhaps, with the 10GB 3080, Nvidia knows darn well what is happening with texture streaming: console and PC game devs are going to adopt this tech immediately, and we will never really reach a point where 4K gaming needs 20+ GB of VRAM because all cards will just get fed as needed going forward.

Who knows. Just a guess.
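
A rough back-of-envelope for the streaming idea above; the drive speed and frame rate here are illustrative assumptions, not figures from any particular game or API:

```python
# How much fresh texture data a fast SSD could deliver per frame if assets
# are streamed on demand instead of being cached in VRAM up front.
# Assumptions: ~7 GB/s raw throughput (PCIe 4.0-class NVMe) and a 60 fps
# target; GPU-side decompression would raise the effective number.
ssd_throughput_gb_s = 7.0
fps = 60

per_frame_mb = ssd_throughput_gb_s * 1024 / fps
print(f"~{per_frame_mb:.0f} MB of new data per frame at {fps} fps")
```

Roughly 120 MB per frame under those assumptions, which is the sense in which a fast enough streaming pipeline can keep turning over a modest texture pool instead of requiring ever-larger resident VRAM.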
 
What are your thoughts on 10GB 320-bit 19 Gbps G6X vs 12GB 384-bit 16 or 15.5 Gbps G6? It's the same GA102 chip either way, and approximately the same bandwidth. I'm fairly certain that G6X is more expensive per GB, so the 12GB G6 option at least can't be significantly more expensive. Unless there's extra work on the PCB that costs significantly more; TBH I'm not sure.

You would assume the latter would be cheaper, but evidence shows that the GDDR6 lanes on the PCB are the real cost.

Case in point: they used the (very expensive at the time) 16 Gbps GDDR6 for the 2080 Super instead of adding a few lanes with cheaper stuff. Bandwidth was absolutely the bottleneck for that card, as the extra cores over the 2080 did nothing for performance. (There was even a HOF 2070 Super with 16 Gbps memory that came within spitting distance.)
 

TU104 is a 256-bit GPU. Can you explain what "adding a few lanes" would entail? I don't think it would be possible without R&D on a new chip or using a ridiculously more expensive TU102. They simply used the fastest G6 memory available in mid 2019 on their second largest chip; I don't see how adding more lanes would be a tenable option at all.
 
Making it 320 or 352 bit with 14 Gbps GDDR6.

It will be interesting to see what they will do with the 3060. I am still thinking it will be 192-bit with 6 GB and 12 GB options of GDDR6X.
 

That's not a thing. Not without designing a whole new chip (e.g., a theoretical TU103). TU104 is 256-bit. That's it. You either cut it down to something smaller or stick with 256-bit.
 

You're absolutely correct. They would have had to neuter the hell out of a TU102 as the only other option, which would have been even more costly.

As for the 3060, I would assume there is a GA106 coming instead of trimming the GA104, but maybe not.
 
How has VRAM been measured in the games tested, by the way? (Perhaps I missed it.) I've appreciated the compilation of figures, but the one critique I've come across (repeated ad nauseam) of late is that allocated isn't the same as used. However, someone just pointed out that since 2017 Windows Task Manager has reported actual usage distinct from allocation, despite every poster I've seen making that critique claiming nothing shows true usage.

Either the linked poster is incorrect about Windows reporting or no poster has bothered checking their own statements. Having accurate figures would certainly be useful, so I'm curious.
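
On the measurement question: one hedged example of how overall VRAM occupancy can be logged outside of Task Manager is NVIDIA's NVML, via the nvidia-ml-py bindings. Note that, like most tools, this reports memory allocated on the device, not the subset a game actually touches each frame:

```python
# Minimal VRAM logging via NVML (pip install nvidia-ml-py).
from pynvml import (nvmlInit, nvmlShutdown,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo)

nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)      # first GPU in the system
mem = nvmlDeviceGetMemoryInfo(handle)       # totals reported in bytes
print(f"VRAM in use: {mem.used / 2**30:.2f} GiB of {mem.total / 2**30:.2f} GiB")
nvmlShutdown()
```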
 
You realize that all those cards you mentioned will be severely outdated in 2023 regardless of VRAM?

I doubt very much we'll see 4-year spans like Pascal ever again. At least not for people who like to play new, demanding titles.

Of course we will. The PS5 and Xbox Series X will be around for at least 7 years. So that's basically 7 years where hardware requirements won't advance much at all. Every game is just going to be a console port. The 2080 Ti is more powerful than the GPU in either the PS5 or the Xbox Series X.

Any GPU you buy today should absolutely last an entire console generation. In fact, I'd expect hardware to be even more resilient now because we're not going to have to deal with another major resolution bump. We already got over the 1080p -> 4K hill. We're at least another console generation away from anyone even considering 8K.

I won't be surprised if the PS6 can't even do 8K. It might just be a 4K/high-framerate machine.
 
So Cyberpunk really seems to push the VRAM amount as well, especially with RT.

In regard to usage reported at 2K/1440p/4K:
Techspot - 6.5/7/8 GB
TPU - 5.6/6/7.2 GB // 7.5/7.9/9.9 GB! RT
Guru3D - 7/7.5/8.6 GB // 7.3/7.8/9.3 GB RT w/dlss

Big numbers, even without RT, but none of the cards seemed to be VRAM-starved without RT. The 4 GB 1650 Super did well at 1080p, and the 5600 XT and 1660 Ti were fine at higher resolutions.

However, those VRAM requirements for RT seem legit. The 2060 gets slaughtered with any RT, and the 3070 seems to lose to the 2080 Ti, even with DLSS.
https://www.techspot.com/article/2165-cyberpunk-dlss-ray-tracing-performance/

There were some here who were getting crashes even with the 10 GB 3080, but most likely those were at unplayable settings anyhow - i.e. ultra everything with psycho RT settings and no DLSS.

The game potentially has very high VRAM usage, but at playable settings it is still fine for 99% of scenarios, unlike games such as Doom Eternal.

For the most part, all cards seem to have enough VRAM for their performance level in pure rasterization, while the 2060 is likely the only failure with any kind of RT at playable settings.
 

You can see my settings and VRAM usage here for 3840x1600. It holds a steady 45 fps in the city.
 
I feel like sometimes they start with a conclusion in their heads and then interpret the data under that lens. For example, they say:

It’s important to note that VRAM is a big factor in Cyberpunk with ray tracing enabled. It’s the primary reason why the RTX 2060 is so slow, and it also causes limitations for the RTX 3070 with 8GB of VRAM being right on the edge at ultra settings. The RTX 2080 Ti, despite delivering similar performance without ray tracing enabled, is the faster GPU with RT enabled thanks to its 11GB VRAM buffer.

At 1440p, ultra details, DLSS Quality, no RT:
2080 Ti: 90.1 avg fps / 70.9 1% low fps
3070   : 82.6 avg fps / 65.4 1% low fps

Medium RT:
2080 Ti: 60.6 / 50.0
3070   : 56.2 / 44.6

Ultra RT:
2080 Ti: 47.8 / 40.0
3070   : 46.7 / 38.8

Relative avg fps (3070 vs 2080 Ti):
No RT    : 91.6%
Medium RT: 92.7%
Ultra RT : 97.6% (near margin-of-error difference)


The numbers at 1440p, at least, seem to tell the exact opposite (I didn't see 2080 Ti figures at 4K, but I suspect it is unplayable there, like the 3070, looking at how low a 3080 got): the more RT you add, the closer the 3070 gets to the 2080 Ti. By that metric, at least, nothing shows that 8 GB is limiting the 3070 at 1440p ultra detail with ultra RT and DLSS on (or else 11 GB is limiting as well and being more limited isn't a factor?).

Is it bias? Or is it real-life experience, where you maybe cannot feel any difference between 90 and 82 fps but you can see one at 60 vs 56 depending on your screen (at some point you drop under the VRR low limit, etc.)? Or is it the 60 fps barrier playing a mind trick that makes 60.6 avg fps look much bigger than 56.2 avg fps, more so than 90 does relative to 82?
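
For reference, the relative numbers in this post are just the 3070's average divided by the 2080 Ti's at each setting; a quick check with the figures quoted above (rounding lands a tenth of a percent higher in two cases):

```python
# 3070 average fps as a fraction of the 2080 Ti's (1440p, ultra, DLSS Quality).
results = {
    "No RT":     (90.1, 82.6),
    "Medium RT": (60.6, 56.2),
    "Ultra RT":  (47.8, 46.7),
}
for setting, (fps_2080ti, fps_3070) in results.items():
    print(f"{setting:>9}: {fps_3070 / fps_2080ti:.1%}")
# No RT: 91.7%, Medium RT: 92.7%, Ultra RT: 97.7% -- the gap shrinks as RT load grows.
```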
 

I noticed that inconsistent trend as well, but then there was the 4K graph with just reflections. The 3070 really took a pounding at 4K without DLSS using just reflections. Even though the 2080 Ti likely would have only managed an unplayable 20 fps to the 3070's 6 fps, that is getting a bit too close for comfort as a scenario where 8 GB runs out.

RT is simply a VRAM hog in this game.

 
Yes, there is a point (at 4K with no DLSS and ultra RT, a 3080 goes under 20 fps; not sure about reflections only, but with medium RT it is still under 25), a point that is well past playable framerates for a 3070 even if it had twice the VRAM (at least from what we saw). And if your example is a good indication, once VRAM becomes a significant issue, the drop becomes obvious, it seems.

I am not sure I would use it as a reference (because it was maybe optimized for those Ampere cards and the VRAM-limited consoles), but if it is an indication, Cyberpunk seems to show that if you need more than 8 GB of VRAM for something, it is probably too much for a 3070 regardless of any VRAM issue (same for over 10 GB and a 3080, and so on), and that type of game, along with a flight sim, feels like the best candidate for a worst case VRAM-wise.
 
For modern games at playable settings on low-end GPUs, even 3 GB is enough, it seems. The GTX 1060 3 GB still matches and often beats the 4 GB 1650. It seems silly that people freaked out about the Fury being 4 GB.

While 96-bit with 6 GB would have been ideal for the 'new' GTX 1630, even 96-bit with 3 GB of GDDR6 would have been better, given the tested info.

 
Necro thread lives!

But it is still relevant. Maybe even more so. The last couple of years have seen video card prices explode. The "slow march" of RAM increases that has existed since the dawn of 3D video cards, and even in PC DRAM, has stalled heavily over the last 5-10 years. When the RTX 3080 was first announced and I started the process of switching from my 2080 8GB to the 3080 10GB, I honestly thought it was odd that RAM had barely increased at the time.

Now? It's entirely clear. This isn't the old days, where high-end game devs would automatically assume that by the time their game launched several years later, ALL the video cards that might play it would have at least 2GB more VRAM on board. If we stick to mainstream-priced cards that will eventually make up the bulk of market share, 8GB is still going to be king for years yet. How long ago was the 11GB 1080 Ti released now? And it will still be several more years before even the higher-end cards gain enough market share for 12GB to be an "average" among even the top 20% of video cards. Who is going to aim their development at that?

It's going to take a major price drop in the costs of producing fast VRAM to make 12-16GB cards mainstream.

And that seems to be the exact opposite of what is happening. The industry is pushing ever faster and more expensive new VRAM designs but keeping the total amount relatively stable. GDDR5, GDDR6, HBM, etc.

We're seeing the same thing with DDR5 in PCs. It's a fairly expensive process so far and adoption has not been fast, nor is that switch to DDR5 including much of a push for MORE RAM either.

I think we are near peak RAM from a design standpoint now. Look for bus improvements and high speed data streaming from here.
 
One other difference from the past: why would you need much more VRAM on PC than on consoles? One easy way to use it that was always 100% available for big titles was playing at much higher resolution. If a game runs at 900p on an Xbox, you need much more at 1440p.

Consoles now run at around 1440p to 4K with some cheats, and consoles do not have much VRAM (they have 16 GB of total system RAM for everything), making it less obvious that it will always be easy to use the extra VRAM 100% of the time.

RAM + hard drive bandwidth + better compression: just like missing RAM got less obvious/painful on an OS that has an SSD for its virtual memory, missing VRAM can get less painful/obvious with the ability to stream large amounts of new data to the card and better caching in general.

In short, very new and very hard to run titles often do it with under 5 GB of VRAM reserved/allocated on the card even at 4K, and we usually have no idea of the actual use.

And even in 2022, from what I gather, there is no title that runs significantly faster on the 2080 Ti than on the 8 GB 3070 at playable fps settings. The doom talk is almost confirmed to have been completely overblown, and even the claim that it would ever become an issue is close to it. That said, maybe it would have been otherwise in a COVID-less world.

If we start to see the 2080 Ti significantly above the 8 GB 3070 at playable fps, maybe the 10 GB of the 3080 will become a question, but it needs to happen quite soon to matter.
 
Necro thread lives!

But it is still relevant. Maybe even more so. The last couple of years have seen video card prices explode. The "slow march" of RAM increases that has existed since the dawn of 3D video cards, and even in PC DRAM, has stalled heavily over the last 5-10 years. When the RTX 3080 was first announced and I started the process of switching from my 2080 8GB to the 3080 10GB, I honestly thought it was odd that RAM had barely increased at the time.

Now? It's entirely clear. This isn't the old days, where high-end game devs would automatically assume that by the time their game launched several years later, ALL the video cards that might play it would have at least 2GB more VRAM on board. If we stick to mainstream-priced cards that will eventually make up the bulk of market share, 8GB is still going to be king for years yet. How long ago was the 11GB 1080 Ti released now? And it will still be several more years before even the higher-end cards gain enough market share for 12GB to be an "average" among even the top 20% of video cards. Who is going to aim their development at that?

It's going to take a major price drop in the costs of producing fast VRAM to make 12-16GB cards mainstream.

And that seems to be the exact opposite of what is happening. The industry is pushing ever faster and more expensive new VRAM designs but keeping the total amount relatively stable. GDDR5, GDDR6, HBM, etc.

We're seeing the same thing with DDR5 in PCs. It's a fairly expensive process so far and adoption has not been fast, nor is that switch to DDR5 including much of a push for MORE RAM either.

I think we are near peak RAM from a design standpoint now. Look for bus improvements and high speed data streaming from here.

100% this. There is only so much VRAM you would need for a single frame. Obviously, you need to cache assets for more than one frame, but as bandwidth has increased and become more efficient, it's very clear that 4 GB is still good on the bottom end and 8 GB does fine on the top end for 99% of scenarios, even with RTX. Software really does a great job of seamlessly utilizing system memory when VRAM looks to be insufficient.
 
At some point they're gonna start pushing 8K, and 8K with the now-standard full HDR (or higher) color range is probably gonna saturate current RAM.

But I think we still have at least 5 years before that happens.
 
More like 10 years. 1080p still controls a 66% share and 1440p adoption is at 10%. 4K a whopping 2%...
It will not be that long before a giant proportion of the TVs and consoles in the PlayStation/Xbox install base are 4K ones in the US market (and it does not matter if most games do not run at 4K all the time, or ever; perception being reality here).

It would not be surprising for the PS6/next Xbox to have some marketing around 8K support, and 7 years between generations would also not be that surprising, which would put that push around 2027, or in 5 years or so. It could be sooner if TV makers have a hard time selling similarly sized but much better 4K TVs to 4K TV owners.

And while it is true that 3440x1440 or 4K is just 3.75% of gamers according to the Steam survey, that is not far from the market share of 3080-or-higher video cards:

3080 (1.44%), 3080 Ti (0.6%), 3090 (0.46%), 6900 XT (0.15%): 2.65% total, which on the PC side would be, for the first 3-4 years, the only 8K gaming market. That can seem really low, but with a Steam user base of over 120 million, every percentage point is 1.2 million people, and the type of people that pay huge dollars/margins on stuff.

If video cards made for the server/industrial/AI side happen to be strong enough to do 8K, the ones unsold or not good enough for the former could be passed down to the latter.

There is also Apple's VR headset with Apple custom silicon coming, with a rumored 7680x4320 resolution and the technology to render only the relevant portion of the display at 8K. With good eye tracking and a large enough TV (or a very large VR FOV), you can have 8K on only the ~3% of our field of view that sees high resolution, with some margin of error around it, and the rest at only "4K" down to 2K farther out, with less denoising as well on those parts if you do RT, and so on, which would make the task much easier and feasible on the GPU side.
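
The arithmetic behind that share estimate, for reference, using the Steam survey numbers as quoted above:

```python
# Steam survey shares quoted above, as percentages of the user base.
shares = {"3080": 1.44, "3080 Ti": 0.6, "3090": 0.46, "6900 XT": 0.15}
steam_users = 120_000_000                     # "over 120 million" per the post

total_pct = sum(shares.values())
print(f"Combined share: {total_pct:.2f}%")                        # 2.65%
print(f"Roughly {steam_users * total_pct / 100:,.0f} users")      # ~3,180,000
print(f"Each percentage point is {steam_users // 100:,} users")   # 1,200,000
```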
 
More like 10 years. 1080p still controls a 66% share and 1440p adoption is at 10%. 4K a whopping 2%...
Oh, I'm not thinking MASS adoption; I'm talking about adoption by the top 1% or so of people who are willing to spend $$$$ on both displays and video cards.
 
I'd go for an 8K60 G-Sync display at 32" in a heartbeat. I've been on 4K since 2014!
 
Continuation of this 4 year old thread...
https://www.techpowerup.com/review/the-callisto-protocol-benchmark-test-performance-analysis/5.html

Using the RTX 4090, the calculated 'x' value for The Callisto Protocol is 0.277 GB/megapixel and the 'y' value is 4.67 GB.

Adding RT bumps this to x = 0.340 GB/MP and y = 5.78 GB. So about the same as always.

Bandwidth seems to matter more, as the 6 GB 1660 Super nearly matches the 8 GB 3050, and the 6 GB 5600 XT matches the 8 GB 6600, even at 4K, albeit at unplayable framerates.

At native 8K, the game should use around 13.9 GB. 8K with RT would be around 17.1 GB, but of course we are a long way off from that. 8K DLSS with RT rendering at 4K would likely be 10 GB tops.
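
The estimates above come from a simple linear fit, VRAM ~= x * megapixels + y; a minimal sketch using the coefficients reported in this post:

```python
# Linear VRAM model from the post: usage (GB) ~= x * render megapixels + y.
def estimate_vram_gb(megapixels: float, x: float, y: float) -> float:
    return x * megapixels + y

mp_8k = 7680 * 4320 / 1e6                                            # ~33.2 MP at native 8K
print(f"8K, no RT: {estimate_vram_gb(mp_8k, 0.277, 4.67):.1f} GB")   # ~13.9 GB
print(f"8K, RT:    {estimate_vram_gb(mp_8k, 0.340, 5.78):.1f} GB")   # ~17.1 GB
```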
 
At least from what we know for Nvidia, given the specs on the top-tier models, mainstream 4000 series cards will likely be 128-bit with 8 GB, which is crazy considering how long the 580 has been around. The mainstream 4050 could be just 96-bit with 6 GB. Bandwidth demands of games have been rather high lately, so hopefully it will be GDDR6X.

Considering how slowly VRAM demands have progressed over the years, that may still be sufficient for their bottom-tier card.
 
Check out Techpowerup's VRAM numbers for Portal RTX ;)
Considering that a 10 GB of VRAM scenario puts a 3090 Ti under 30 fps at 1440p, the slow growth of VRAM could really go on forever. In Portal, 8 GB seems enough for scenarios that are playable with a new 4080.

So much so that 24 GB could be good enough for xx80 cards in 2026, if not 16 GB or even lower.

Even if I imagine a mod of a game like that is maybe not the best example for many things, if the DirectStorage hype is ever realized, in the next 7 years we will stream assets at 20-70 GB per second from NVMe 5.0 drives to the video card when including modern compression, which could leave a lot of VRAM to be used for other things, like I imagine Portal does.
 
Yep, just saw that. Holy smokes - full path tracing is expensive. I got an x value of 1.28 GB/MP and a y value of 5.5 GB. That's insanely high. Render resolutions for balanced DLSS at 1080p, 1440p, and 4K are 0.70 MP, 1.24 MP, and 2.79 MP respectively. Using 1.28x + 5.5, we get an estimated 7 GB, 7.1 GB, and 9.1 GB for those resolutions when using DLSS.

DLSS has been shown to have a small overhead of its own, so maybe 7.5, 7.6, and 9.6 GB. The 8 GB 3070 is likely just enough at 1440p, and the 10 GB card is starting to take a hit at 4K as it loses a lot of ground.
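
Same linear model with the Portal RTX coefficients and DLSS Balanced render resolutions quoted above; note that with these exact inputs the 1080p figure comes out closer to 6.4 GB, before the DLSS overhead mentioned:

```python
# Portal RTX fit from the post: usage (GB) ~= 1.28 * render megapixels + 5.5.
x, y = 1.28, 5.5
render_mp = {"1080p balanced": 0.70, "1440p balanced": 1.24, "4K balanced": 2.79}
for label, mp in render_mp.items():
    print(f"{label}: {x * mp + y:.1f} GB")
# ~6.4 GB, ~7.1 GB, ~9.1 GB (add roughly 0.5 GB for DLSS's own overhead)
```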
 
Honestly though, this game should not be used as a gauge for upcoming VRAM requirements. This was simply an Nvidia jerk-off session to show that their cards are 100x faster than the current AMD competition.

The fact that the 2080 Ti can't even get 30 fps at 1440p with DLSS says it all.

Furthermore, how do both the 2080 Ti and 3070 get 50 fps without DLSS and over 100 fps with DLSS in Minecraft (another path-traced game), yet in this game the 2080 Ti gets only 9 fps without and 26 fps with DLSS while the 3070 manages 16 fps without and 40 fps with?

Somehow the magic new CUDA cores make the 3000 series 50% faster in RTX.

Yeah, I call BS
 
The 3070 at launch was a monster in CUDA/RT cores versus a 2080 Ti (higher than a Titan) and did quite special stuff in Blender compared to Turing:

A 3070 is a 40 RT TFLOPS, 163 Tensor TFLOPS card vs. 14.2 (less than half) for a 2080 Ti.

For example, in OctaneRender the 2080 Ti was ahead without RT but behind with RT:

[Chart: OctaneRender God Rays, RTX on and off, NVIDIA GeForce RTX 3070]

If the frame becomes RT-heavy, we could expect the 3070 to pull ahead; in some purer RT-type work or CUDA workloads at launch, the 3070 was even in a different tier, closer to a 3090 than to a Titan/2080 Ti:

[Chart: Classroom render performance, NVIDIA GeForce RTX 3070]

If a 2060 Super was able to get a nice 60 fps, it was probably not that RT-heavy.
 

Good find; I guess that marries up well then. The 40 series cards don't seem to have as big a delta in Tensor TFLOPS over the 30 series cards as the 30 series had over the 20 series.

Perhaps the 4070 won't get 40 fps in Frogger with god rays while 3090 Ti owners are sputtering along at 15 fps.
 
Ampere has some extra features in hardware that Turing does not have, which makes certain types of RT effects more efficient.
 
One caveat for the Portal RTX review from TPU: their system was using just 16 GB of RAM.

We have seen a review from Hardware Unboxed back in 2018 where going from 16 GB to 32 GB of RAM gave a small performance bump. Granted, that was in games that are now four years old and with a card that already had 11 GB of VRAM.

While I still think the 8 GB 3070 would fall short on Portal with just RTX, it may make a difference when using balanced DLSS. I would venture to guess that most 3070-and-above owners have at least 32 GB of system RAM.
 
No one is running out to buy overpriced 8K screens. Almost all the high-end cards have over 8 GB and the mid-range has around 8 GB, which is fine; why pay for more RAM you will never use? 4K is barely seeing any traction even now that prices are low enough for most, and the broadcast world is stuck on 1080p. 16GB consumer cards would be a waste and add zero benefit.

Ya I mainly stick to 1440p for gaming. Though I might upgrade my card soon since prices are pretty good now.
 