3080 10GB and 4K

Because ultimately, what do you think next-gen games are going to run on? They are going to sell every 8GB RTX 3070 they can make for months to come if it really lives up to the performance claims, and Nvidia's flagship 3080 is a beast of a card with 10GB. Devs certainly won't aim higher than that.

I think this comes down to different perspectives. There is a reason noko spoke about those who upgrade every 1-2 years being OK, but that others should be wary.

8-10GB until RTX 4000 in 2022 will probably be fine. You have 1-2 games now with possible issues (Doom, MSFS), expect maybe a couple more in 2021, and more yet in 2022, but as per the cycle Nvidia will have RTX 4000 before the end of that year. Users might be fine with lowering settings purely for VRAM in that handful of games.

Some of us are targeting 4-year cycles. You say that devs won't make the 3070 and 3080 run out of VRAM. But what if 2021 brings a Super series with 16GB+ as standard? What if RDNA 2 picks up significant market share with 12-16GB+ cards? And finally, the 2022 RTX 4000 series will undoubtedly launch with 12GB-16GB as the standard minimum on the RTX 4060 - 4070. So devs will certainly feel free to aim higher than 8-10GB. It's rare in 2020, and will likely be uncommon in 2021, but for those hoping to make it to the 2024 generation (like many did with Pascal, skipping a gen), I suspect a lot more settings-dropping will happen come 2022, and especially 2023.

If I could get a double-VRAM option for a reasonable premium, that could easily extend the life of the card for another year or two without having to drop settings purely for VRAM. You may say that these cards won't run everything at Ultra in 2023 games (debatable, tbh), but whatever other settings you lower, if the card has the VRAM you can keep max texture settings, and that will make a visual difference.
 
Last edited:
  • Like
Reactions: noko
People really need to realise VRAM isn't free (to say the least) and they're not going to magically give you double VRAM for the same price. Not gonna happen.
 
People really need to realise VRAM isn't free (to say the least) and they're not going to magically give you double VRAM for the same price. Not gonna happen.

Only someone true to your name would expect magical free double VRAM. Hopefully there are double VRAM options for a reasonable premium. We are at $500 8GB, $700 10GB, and $1500 24GB. Do you not see room for others? You don't need to squint to see those gaps.

Also (correct me if I'm wrong), aren't the ROPs now untied from the memory controller? Assuming the 3070 14Gbps rumor is true, couldn't they cut the 3070 to 192-bit and give it 19Gbps? Almost identical bandwidth, and assuming it doesn't affect the ROPs, the same performance overall with more VRAM. That gives you a 12GB option that would, yes, carry a premium.

Same with a 3080 with 384-bit and 15.5 or 16 Gbps. Gives you a 12GB option with virtually identical bandwidth. So there are alternatives to doubling. But from Nvidia's perspective (not the consumer's), it'd be better to save something like this for a Super refresh next year.
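
Quick back-of-the-envelope for the bandwidth claim, in case anyone wants to check the numbers. A sketch only: the 192-bit and 384-bit configs are hypothetical, the 3070's 14 Gbps / 256-bit setup is still just the rumour, and only the 3080's 320-bit @ 19 Gbps is announced.

```python
# Rough GDDR6 bandwidth check: bus width (bits) / 8 * data rate (Gbps) = GB/s.
# Bus widths and data rates below are the rumoured/hypothetical configs
# discussed above, not confirmed specs.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

configs = [
    ("3070, 256-bit @ 14 Gbps (rumoured)",          256, 14.0),
    ("3070 12GB, 192-bit @ 19 Gbps (hypothetical)", 192, 19.0),
    ("3080, 320-bit @ 19 Gbps",                     320, 19.0),
    ("3080 12GB, 384-bit @ 16 Gbps (hypothetical)", 384, 16.0),
]
for name, bus, rate in configs:
    print(f"{name}: {bandwidth_gb_s(bus, rate):.0f} GB/s")
# 448 vs 456 GB/s and 760 vs 768 GB/s -- close enough that, assuming the ROPs
# really are decoupled from the memory controllers, performance should be a wash.
```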
 
????, tomorrow's games will most likely use more and more VRAM. RT has an overhead due to BVH memory, plus more of the scene geometry has to be in memory if reflections are used, which will be incorporated into Doom. Virtually all of today's games are fine, but there's not much buffer for the future. 8GB, I'd say, is not a long-term solution for max quality and higher resolutions.


Hardware Unboxed definitely saw reduced performance in their testing, where the larger VRAM of the 1080 Ti let it beat the 2080 handily at 4K while losing at lower resolutions. Those going to use 4K OLED TVs (i.e. 4K) will, I suspect, have to reduce settings due to VRAM amounts in some games. 10GB would give some room with the 3080. For those that upgrade every year or a little longer, 8GB is maybe OK and 10GB is better. Some of us have a different perspective for long-term usage and/or resale value later.

The argument about "tomorrow's games" has two sides to the coin. VRAM requirements increase, and so do GPU requirements. So if an RTX 3080 is a 4K60-on-ultra GPU for Cyberpunk 2077, it's only going to be a 1440p-high GPU on Witcher 4 or whatever in 4-5 years.

The same argument was made back then for the Fury X. In the last 5 years it went from a 1440p-ultra to a 1080p-medium class card, so its VRAM mostly stayed relevant.

As for RT, it has consistently used around 1 GB extra; however, the performance loss usually requires a drop in resolution, which basically negates that.

DLSS actually gives back VRAM in a sense, as it hardly uses any and basically gives you a free resolution bump.
 
Last edited:
We are at $500 8GB, $700 10GB, and $1500 24GB. Do you not see room for others? You don't need to squint to see those gaps.

Not between the 3070 and 3080. I don't see them disabling a memory controller as a means of increasing VRAM, so let's stick to the known ways they do these things and add some perspective.

Apple charges $200 just for 8GB more of slow generic DDR4 system RAM.

To expect anything cheaper than this kind of rate is wishful thinking.

So a 16GB 3070 would be at least $700. Do you see why there is no room? It would be moronic to pay $700 for a 3070 with more VRAM when you can get a 10GB 3080 that would absolutely crush it for the same price.

Is there room between the 3080 and 3090? Certainly. There probably will be a 20GB 3080 for $1000-1200, for those that feel they need more VRAM. Will that make you happy?


I think this comes down to different perspectives. There is a reason noko spoke about those who upgrade every 1-2 years being OK, but that others should be wary.

8-10GB until RTX 4000 in 2022 will probably be fine. You have 1-2 games now with possible issues (Doom, MSFS), expect maybe a couple more in 2021, and more yet in 2022, but as per the cycle Nvidia will have RTX 4000 before the end of that year. Users might be fine with lowering settings purely for VRAM in that handful of games.

What exactly is the issue with Doom Eternal getting 88 FPS at 4K Nightmare? How is this a problem?

MSFS literally has PETABYTES of textures to attempt to stream through. Sure, you can use settings high enough to choke VRAM with that: golf clap...

I fail to see any issue at all here but FUD.

A 3080 will still be a very good card in 4 years. Will it still be a card where you can blindly set maximum settings in every game at 4K? No, but so what? Can you do that now with a 1080?

It's roughly double the capability of the next-gen consoles, and guess what, in 4 years it's likely there won't be a new console generation. So if consoles determine everything, as some of you seem to think, you should be golden for the next gen of consoles.
 
Nonsense. Chip size has always been the defining factor in GPU price. Get some perspective.

There is only a $200 gap between the 3070 and 3080. Not only does the 3080 have 25% more VRAM, and faster (more expensive) VRAM, it uses a significantly larger and more expensive die. And yet it still only commands a $200 premium. Why would more VRAM alone (like the 16GB 3070 you suggest) make the smaller GA104 cost the same as the GA102?

$280 5600 XT with 6GB of 14Gbps memory.

$200 5500 XT with 8GB of 14Gbps memory.

$170 5500 XT with 4GB of 14Gbps memory.

Now we are really opening our eyes. Only a $30 premium for an extra 4GB of GDDR6, and that’s surely at least breaking even if not extra profit or else why bother with the option? And selling an 8GB card for $80 less than a 6GB card. The 5600 XT of course uses a much larger chip….

The 5600 XT of course uses a larger chip. Said it again. So the 5500XT is a smaller chip, has more VRAM, and is significantly cheaper still. The same would be said for a 12GB or 16GB version of a 3070. Smaller chip, more VRAM, and significantly cheaper still than the 3080.

Just using AMD's premiums alone, and assuming the 3070 is 14Gbps (that's the rumour), then it is $560 for a 16GB 3070.
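
Spelling out that $560 figure (a sketch only, assuming the rumoured $500 MSRP and assuming AMD's 5500 XT retail premium is a fair proxy for what the extra GDDR6 actually costs, which is obviously a big if):

```python
# Hypothetical pricing sketch -- uses the retail prices quoted above as a
# stand-in for Nvidia's actual GDDR6 costs, which we don't know.
premium_per_4gb = 200 - 170          # 5500 XT 8GB vs 4GB, both 14Gbps GDDR6
extra_vram_gb = 16 - 8               # doubling a rumoured 8GB 3070
base_price = 500                     # rumoured 3070 MSRP

estimate = base_price + (extra_vram_gb // 4) * premium_per_4gb
print(estimate)                      # 560
```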
 
Last edited:
Nonsense. Chip size has always been the defining factor in GPU price. Get some perspective.

Wait and see. AMD is the value player; they might give you more VRAM at cost to try to win sales. Nvidia is a premium player, more like Apple. You will see Apple pricing on VRAM upgrades.
 
Nonsense. Chip size has always been the defining factor in GPU price. Get some perspective.

There is only a $200 gap between the 3070 and 3080. Not only does the 3080 have 25% more VRAM, and faster (more expensive) VRAM, it uses a significantly larger and more expensive die. And yet it still only commands a $200 premium. Why would more VRAM alone (like the 16GB 3070 you suggest) make the smaller GA104 cost the same as the GA102?

$280 5600 XT with 6GB of 14Gbps memory.

$200 5500 XT with 8GB of 14Gbps memory.

$170 5500 XT with 4GB of 14Gbps memory.

Now we are really opening our eyes. Only a $30 premium for an extra 4GB of GDDR6, and that’s surely at least breaking even if not extra profit or else why bother with the option? And selling an 8GB card for $80 less than a 6GB card. The 5600 XT of course uses a much larger chip….

The 5600 XT of course uses a larger chip. Said it again. So the 5500XT is a smaller chip, has more VRAM, and is significantly cheaper still. The same would be said for a 12GB or 16GB version of a 3070. Smaller chip, more VRAM, and significantly cheaper still than the 3080.

Just using AMD's premiums alone, and assuming the 3070 is 14Gbps (that's the rumour), then it is $560 for a 16GB 3070.

There is really no rhyme or reason to your argument. Why the big price difference between the 3090 and 3080?

Right now, 2 GB GDDR6X chips are pricey. It's possible there will be a 16 GB GDDR6 3070, but it will get beaten easily by the 3080 in 99% of situations.

As for the 5500 XT vs 5600 XT, market factors really determined that. AMD needed to stay competitive in the $250 price range with the 1660 Super. Again, that is GDDR6, non-X, so the point doesn't apply to the 3080 thread.
 
There is really no rhyme or reason to your argument. Why the big price difference between the 3090 and 3080?

You already know the answer to this. It's a halo card, one step away from Titan branding.

My "argument" of die size as the primary cost is self-evident throughout all of GPUs. The 5500XT was the example because it is the only example of a card with a simple double VRAM option for G6. Do you want to go on record believing die size is smaller driving factor than VRAM in manufacturing costs?
 
You already know the answer to this. It's a halo card, one step away from Titan branding.

My "argument" of die size as the primary cost is self-evident throughout all of GPUs. The 5500XT was the example because it is the only example of a card with a simple double VRAM option for G6. Do you want to go on record believing die size is smaller driving factor than VRAM in manufacturing costs?

Die price is the driving factor, save for extremely cut-down versions using failed yields, such as the RTX 2060, which is why Nvidia kept it alive at such a low price.

Again, I am not sure you can compare GDDR6x to GDDR6. That said, if it was really that cheap, it is rather surprising that there was no 2080S with 16 GB vram.
 
Last edited:
That said, if it was really that cheap, it is rather surprising that there was no 2080S with 16 GB vram.

If you take notice, Nvidia has not permitted higher-VRAM options on their cards for years. Kepler was the last time higher-end cards were allowed to have double-VRAM options (such as the 680 and 780). Even then, they didn't allow a 780 Ti 6GB, to entice people toward the Titan Black. Since then only the lower end has had the option (such as the 960 and 1060), and even then the 1060 3GB was also cut in shaders. For Turing there are no double-VRAM options. They probably want more clearly defined tiers without the consumer confusion of dealing with a slower card with more VRAM.

If I put on my corporate hat (some might call it tinfoil but don't underestimate these people), I believe they also want to induce earlier obsolescence to avoid a situation like Pascal users skipping a gen. Pascal was the first to really have more than enough VRAM to skip a gen. A Maxwell GTX 980 4GB has enough horsepower to use more than that (it's as fast as a 1060 6GB and that card can find very playable settings over 4GB), but someone using it in 2017+ will run into VRAM issues.

They might change their minds with Ampere and offer 12/16/20GB options, either as Supers or just as double-VRAM options. After all, $500-$700 cards with 8-10GB in 2020 are clearly more questionable than $450-$700 cards with 8-11GB were in 2016-2017.
 
I have been gaming on 4k since my 670 Sli setup some years ago.
Never had any issues with vram not being enough.
 
I have been gaming on 4k since my 670 Sli setup some years ago.
Never had any issues with vram not being enough.
Yeah and you're probably one of those people that will claim you never had any trouble with SLI either...
 
Can someone post proof of 8 GB being a limiting factor for 4k gameplay?

[Attached benchmark charts: performance-3840-2160.png, 8gb-2160p.png]
 
If you take notice, Nvidia has not permitted higher-VRAM options on their cards for years. Kepler was the last time higher-end cards were allowed to have double-VRAM options (such as the 680 and 780). Even then, they didn't allow a 780 Ti 6GB, to entice people toward the Titan Black. Since then only the lower end has had the option (such as the 960 and 1060), and even then the 1060 3GB was also cut in shaders. For Turing there are no double-VRAM options. They probably want more clearly defined tiers without the consumer confusion of dealing with a slower card with more VRAM.

If I put on my corporate hat (some might call it tinfoil but don't underestimate these people), I believe they also want to induce earlier obsolescence to avoid a situation like Pascal users skipping a gen. Pascal was the first to really have more than enough VRAM to skip a gen. A Maxwell GTX 980 4GB has enough horsepower to use more than that (it's as fast as a 1060 6GB and that card can find very playable settings over 4GB), but someone using it in 2017+ will run into VRAM issues.

They might change their minds with Ampere and offer 12/16/20GB options, either as Supers or just as double-VRAM options. After all, $500-$700 cards with 8-10GB in 2020 are clearly more questionable than $450-$700 cards with 8-11GB were in 2016-2017.

I don't think Nvidia has any kind of restriction on this, so long as the card meets the minimum specs. After all, there was a GDDR5X 1060 and a 16 Gbps 2070S. While not more VRAM, they did go beyond spec. I am just not sure it would have made sense for any 3rd-party cards to offer double VRAM on Pascal or Turing, save for maybe the 2060.

The GTX 980 is hardly VRAM-deficient in today's games, as it is now basically a 1080p-medium card at best. Not a single game, including Doom Eternal, needs 4 GB at that setting.
https://www.techspot.com/article/2001-doom-eternal-older-gpu-test/
That card is basically a GTX 1650.

The VRAM situation is more questionable, but it is the situation we are in when you balance bus width with capacity and price. You don't end up with a nice, even increase in capacity from one gen to the next.

I always argued that a cheaper 5.5 or 6 GB 1080 Ti would have been great, and it would still be strong today for 1440p ultra/4K high. Even a cheaper 4 GB GTX 1070 could have worked, as that card would be mostly limited to 1080p high today.

However, back then, VRAM capacity was a much bigger selling point.
 
The game was not at its max settings; it was on Nightmare, the max is Ultra Nightmare, and the 8GB of VRAM on the 2080 petered out. It is also relevant for those wanting the extra IQ options (textures, RT, HDR, etc.) who may consider getting something with more than 8GB or 10GB. Yes, reducing settings on newer RT games, and maybe not using RT, will get the job done.

Edit: Correction, Doom has Ultra, Nightmare, and Ultra Nightmare, so if Hardware Unboxed used Ultra it was the third setting from the top. Makes me wonder if they actually meant Ultra Nightmare with the presets.

See above starting around 2:30.

Doom Eternal scaling is rather pointless. There is about a 5% performance difference going from Ultra to Ultra Nightmare, and the visual difference is similarly small, but dropping down cuts VRAM significantly.
 
  • Like
Reactions: noko
See above starting around 2:30.

Doom Eternal scaling is rather pointless. There is about a 5% performance difference going from Ultra to Ultra Nightmare, and the visual difference is similarly small, but dropping down cuts VRAM significantly.
Yes indeed, Hardware Unboxed not finding the cause of the previous error gives pause about the reliability of future results. That being said, their being honest about it is actually rather refreshing, especially if they correct the problems they find. Some good points in the video: since testing is done in-game, the gameplay and area used can dramatically change results; clearing the area first versus testing while clearing it can have a big impact on the numbers. Still, the game is using the upper limits of VRAM, and this is not the RT version. I don't think most of us have a crystal ball on what future games will demand, but it is still a very good conversation, with good points made by others already here. What exactly Ultra Nightmare is doing over Nightmare, or Nightmare over Ultra, would be nice to know for the added VRAM cost.

I don't go along with the idea that once you need the VRAM, the GPU will not be strong enough anyway. Well, not totally: once you hit the VRAM wall it is an immediate performance drop, choppy and erratic, not just a lower frame rate. It is an unusable point. If going down to lower resolutions and lower settings is OK, then why even waste the time buying top or high-end cards in the first place? Another perspective is: hell, I will be able to play 95% of the next two years' worth of games better than a 2080 Ti, for a much lower cost, and I may or may not have to lower settings/resolution for a few titles due to VRAM.

I would think it best that the buyer is aware of the strong points as well as any weaknesses, now and possibly in the future. I wish reviewers indicated VRAM usage more, particularly when using RT.
 
Those benchmarks are erroneous. Doom Eternal does, under limited conditions, need (not just cache) more than 8 GB.

Steve discovered that the 'p trick' doesn't actually work.


Not sure what Techpowerup did, but Kitguru segmented their test settings by cards' VRAM size so no trickery was used. There was no significant bottleneck with 8 GB.

There is nothing to indicate there will be a performance issue at 4k with 10 GB for the new cards, and probably not 8 GB either, at least at playable settings.
 
Not sure what Techpowerup did, but Kitguru segmented their test settings by cards' VRAM size so no trickery was used. There was no significant bottleneck with 8 GB.

There is nothing to indicate there will be a performance issue at 4k with 10 GB for the new cards, and probably not 8 GB either, at least at playable settings.
As I already pointed out, 8GB is a limitation even at 1440p in Wolfenstein: Youngblood if you're trying to run all max settings with ray tracing. At a minimum you have to turn texture streaming down from Uber to Ultra. As far as I know that's probably the only game that has an issue, although Rise of the Tomb Raider can be a little sketchy with only 8GB of VRAM at 4K. 10GB is certainly not an issue in any current games, but that could easily change with true next-gen games.
 
Not sure what Techpowerup did, but Kitguru segmented their test settings by cards' VRAM size so no trickery was used. There was no significant bottleneck with 8 GB.

There is nothing to indicate there will be a performance issue at 4k with 10 GB for the new cards, and probably not 8 GB either, at least at playable settings.

KitGuru was legit. Perhaps HUB just hit a more demanding spot. Like I have been saying, idTech is the exception, so this is the very worst-case scenario. 10GB will be fine.
 
....Some good points in the video: since testing is done in-game, the gameplay and area used can dramatically change results; clearing the area first versus testing while clearing it can have a big impact on the numbers. Still, the game is using the upper limits of VRAM, and this is not the RT version. I don't think most of us have a crystal ball on what future games will demand, but it is still a very good conversation, with good points made by others already here. What exactly Ultra Nightmare is doing over Nightmare, or Nightmare over Ultra, would be nice to know for the added VRAM cost....

Yeah, a lot of reviewers fail at this and do not reset the system when doing back-to-back benchmarks.

Here are the game's VRAM requirements for 1080p:
  • Low – 2942MiB
  • Medium – 3502MiB
  • High – 4078MiB
  • Ultra – 5230MiB
  • Nightmare – 6254MiB
  • Ultra Nightmare – 6766MiB
Could not find 4k, but TPU saw up to 7.2 GB at 1080p and 8.4 GB at 4k using the highest settings.
 
Yeah, a lot of reviewers fail at this and do not reset the system when doing back-to-back benchmarks.

Here are the game's VRAM requirements for 1080p:
  • Low – 2942MiB
  • Medium – 3502MiB
  • High – 4078MiB
  • Ultra – 5230MiB
  • Nightmare – 6254MiB
  • Ultra Nightmare – 6766MiB
Could not find 4k, but TPU saw up to 7.2 GB at 1080p and 8.4 GB at 4k using the highest settings.
It's important to distinguish use vs need, though.
 
Some of you guys just keep on bringing up idtech games and how it will be the end of the world if you have to reduce from uber-goober-nightmare settings.

I get it, more vram would be better, especially for those with only 16 GB system Ram.

That said, the new 3000 GPUs offer a great value and crush everything before them.

If you still can't get past that, just get a dang 2080ti or 3090!
 
Some of you guys just keep on bringing up idtech games and how it will be the end of the world if you have to reduce from uber-goober-nightmare settings.

I get it, more vram would be better, especially for those with only 16 GB system Ram.

That said, the new 3000 GPUs offer a great value and crush everything before them.

If you still can't get past that, just get a dang 2080ti or 3090!

I unliked and liked again.
 
I'm in the same boat. I have an LG CX OLED that can do 4k120. I was planning on getting a 3090, but the difference in performance between the 3080 and 3090 is very small, and for double the price, I can't justify the cost associated with the 3090.

I've been doing some 4k testing with my 2070 Super @ 8GB VRAM. What I have found is that I can't max out the VRAM currently in any game, including Control and CoD:MW. This has put my mind at ease for the time being and makes me feel better about the 3080.

Honestly, unless you're planning on doing stupid resolutions (8K), the 3090's 24GB VRAM just seems like extreme overkill. Only time will tell.
I have been pondering the same question. I just got my 48CX on Friday and have been loving it. I am using an MSI Gaming X Trio 2080 Ti, and in Far Cry 5, Remnant, and Elite Dangerous at almost if not everything maxed, I have been quite happy with the performance. I was sure a 3090 was in my future, but since my 2080 Ti is solid now, though it may lag a bit for Cyberpunk, a 3080 may do really well. I may try for a launch-day 3080 and, if it does not "feel" right, send it back and try for a launch-day 3090.
 
To me? I think it'll just be fine. This card should perform well for 3 years. Then ditch it.

I don't think 4k with 10gb of memory is going to be an issue at all.

A 3080 won't feel right? And a 3090 will? Jesus. Different strokes for different folks. Then again, I don't play games for a living.

I thought the 3080 can play 4k with everything maxed out at 100 to 150 fps? That won't "feel" right?
 
As I already pointed out, 8GB is a limitation even at 1440p in Wolfenstein: Youngblood if you're trying to run all max settings with ray tracing. At a minimum you have to turn texture streaming down from Uber to Ultra. As far as I know that's probably the only game that has an issue, although Rise of the Tomb Raider can be a little sketchy with only 8GB of VRAM at 4K. 10GB is certainly not an issue in any current games, but that could easily change with true next-gen games.

Except it isn't. I just played through a level of Wolfenstein Youngblood at 4K max settings with Balanced DLSS, and it ran around 60 FPS on my 2070 Super. I checked Afterburner once I was done, and the VRAM stayed pegged at 8030 MB the entire time. I think what you're seeing is the game loading as many assets as it can into VRAM, even if it isn't using them. I'd be highly surprised if the game is actually using more than 4-6GB at any given time. I wonder if there is someone with an RTX 2060 who can test this...
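
For anyone who wants to log the same number without Afterburner, here's a minimal sketch using the pynvml bindings (assumes `pip install pynvml`). Note that, like Afterburner, NVML reports memory allocated on the card, not what the game actually touches each frame, so it can't settle the use-vs-need question on its own.

```python
# Minimal VRAM-allocation logger. Reports allocated memory, which is an
# upper bound on what the game actually needs at any moment.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"used {mem.used / 1024**2:.0f} MiB of {mem.total / 1024**2:.0f} MiB")
        time.sleep(1.0)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```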
 
To me? I think it'll just be fine. This card should perform well for 3 years. Then ditch it.

I don't think 4k with 10gb of memory is going to be an issue at all.

A 3080 won't feel right? And a 3090 will? Jesus. Different strokes for different folks. Then again, I don't play games for a living.

I thought the 3080 can play 4k with everything maxed out at 100 to 150 fps? That won't "feel" right?
I sure hope the 3080 will do exactly that. If I get 100-150 FPS at 4k with even almost everything maxed, I will be thrilled and save a lot of money.
 
I sure hope the 3080 will do exactly that. If I get 100-150 FPS at 4k with even almost everything maxed, I will be thrilled and save a lot of money.
Same.

If a 3080 can do 100+ FPS @ 4k on most titles, gaming on my LG CX will feel sublime.
 
My biggest concern with just throwing gobs of VRAM on the cards is you have devs who will become sloppy and won't optimize things well. You see it with phones right now, manufacturers keep putting more and more memory in them and app devs keep getting sloppy with the memory utilization.
 
So after some thought I decided to just stay with 1440p for the time being. I did purchase a new monitor with G-Sync, 10-bit color, and a 165Hz refresh rate.
 
Data points for older games, all RTX enabled, at 4K with RT max settings:
  • Shadow of the Tomb Raider - only shadows are enhanced with RT, 7415MB, DLSS on
  • BF5, max 4K, 8.5GB, RT reflections, DLSS?
  • Metro Exodus, max, 8.1GB, RT, DLSS on, 4:01.24
  • Wolfenstein Youngblood, Mein Leben settings, RT, DLSS, 4K, 8.5GB


With DLSS the games were being rendered at 1440p and not 4K; I would say all of those games would become unplayable, or perform very poorly, without DLSS, with even more VRAM being used when rendering natively at 4K. Next-generation games with more use of RT and other advances give me doubts about 4K performance at max settings over time. Still, a lot of the IQ is maintained with reduced settings, so games that go above 10GB on the 3080 or 8GB on the 3070 should be fine, at least in the near term. Games with special options that take advantage of large pools of VRAM will probably be few. It just comes down to how you play, which games you play, etc. I don't see much leeway if things get pushed.

I will probably pick up a 3080 if available, and will probably end up with both Nvidia and AMD if both are good. I think Nvidia is good, aside from some concerns over VRAM on my end. If you don't think VRAM will be an issue, then I recommend you grab these cards fast, if you can, before the drought.
 
Last edited:
My biggest concern with just throwing gobs of VRAM on the cards is you have devs who will become sloppy and won't optimize things well. You see it with phones right now, manufacturers keep putting more and more memory in them and app devs keep getting sloppy with the memory utilization.

Game dev and mobile app dev are totally different.
Game dev clings to a very traditional set of deployment targets, much like old banking software from the 80s targeting Z series mainframes and the most common deployment footprints.
I'm not saying game devs are better; they'll still decide to claim all your VRAM or half your physical cores. I'm just saying someone has probably taken stats like the average PC specs of the last title's users and isn't trying to exceed resources that aren't there. That's what unthrottled settings are for: to let it rip when new components appear.

Mobile dev can be highly problematic, in that I've seen time and time again that testing is the last thing anyone cares about.
I've watched guys juggle synchronous and asynchronous calls in a build, totally confused about what's happening on physical phones, because they never pushed the build to a device farm.
There's no real sticking to your lane in mobile like there is in game dev.
You are basically playing in rush-hour traffic in Bangalore, compared to a state where a game is launched and you might have to consider Discord/OBS/Afterburner.

People are complaining about the 3080 having 10GB of VRAM; well, the safe play is that all the conversations Nvidia would have had with game studio teams led them to "about 1080 Ti/2080 Ti VRAM" because not enough compelling use cases came up.

It's like people think these GPUs were developed in a vacuum, and I can tell you that when the publisher I worked for started supporting AMD GPUs heavily, we gave them feedback on the engineering-sample GPUs they doled out to us. I've had conversations with Nvidia engineers where, if my answer was "no", that got checked off and we moved on to the next item.
 