Possible RTX 3080 Ti FE?

Oh God! NO! Not TWO games! (one of which was coded with AMD's help, isn't released yet, and hasn't been tested)

Man, setting textures down a notch with (likely) no discernible difference in quality is gonna REALLY suck for the plebs with 11GB of VRAM or less for those (maybe) two games!

I hope someone compares, for example, the 3070 vs the 2080 Ti at 4K vs 1440p/1080p, since we know roughly how they compare on average in other games, at least until these supposedly larger-VRAM 3000 series cards come out from Nvidia.

Somehow I don't think the 3070 will perform significantly worse, but we'll see. I just find people are so quick to scream that 10GB isn't enough because A) a developer says so (likely sponsored by AMD to say so), or B) they've seen game X allocate Y amount of VRAM as some kind of proof, when in reality VRAM allocation isn't as clear-cut as that and games often reserve more VRAM than they actually need.

Maybe I haven't looked, but hasn't anyone tried to debunk the VRAM-usage myth already? It feels like the proper time for a full, deep analysis of how much VRAM is actually needed before you see considerable performance loss, using a couple of exceptionally VRAM-demanding titles. I'd happily admit I'm wrong; I just want to see actual benchmark comparisons, because this VRAM discussion is getting old and could be ended with a good test.

In my view, the GPU's processing capability relative to its VRAM capacity seems like much more of a deciding factor than the game's VRAM needs. And is the difference a couple of percent, or more like 10 or 20%? Either way, I'd gladly see some analysis on it.
 
I hope someone compares, for example, the 3070 vs the 2080 Ti at 4K vs 1440p/1080p, since we know roughly how they compare on average in other games, at least until these supposedly larger-VRAM 3000 series cards come out from Nvidia.
And the 1080 Ti with 11GB:
https://www.guru3d.com/articles-pag...es unusually,quality settings and TAA enabled.

3070 (8GB) vs 1080 Ti (11GB)
1080p: 50/46 (109%)
1440p: 47/37 (127%)
2160p: 31/22 (141%)

3070 (8GB) vs Titan Xp (12GB)
1080p: 50/47 (106%)
1440p: 47/38 (127%)
2160p: 31/23 (135%)

3070 (8GB) vs 2080 Super (8GB)
1080p: 50/48 (104%)
1440p: 47/44 (107%)
2160p: 31/29 (107%)

3070 (8GB) vs 2080 Ti (11GB)
1080p: 50/50 (100%)
1440p: 47/47 (100%)
2160p: 31/35 (89%)

3070 (8GB) vs Radeon VII (16GB)
1080p: 50/44 (114%)
1440p: 47/37 (127%)
2160p: 31/24 (129%)

It is hard to tell, because the 3070 actually gains an FPS lead at 4K over other cards with more VRAM, so by that logic it is both an issue and not an issue. That said, the 2080 Ti matching the 3070 at 1080p/1440p and pulling ahead at 4K in Flight Simulator, while it is often the slower card elsewhere, is not an empty argument that 8GB might not be enough anymore.
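
If anyone wants to redo that math, or drop in numbers from a different review, here's a rough Python sketch of how those percentages fall out (the FPS figures are just the Flight Simulator numbers quoted above, nothing more authoritative than that):

# 3070 FPS divided by the other card's FPS, per resolution, as a percentage.
fps_3070 = {"1080p": 50, "1440p": 47, "2160p": 31}

others = {
    "1080 Ti (11GB)":    {"1080p": 46, "1440p": 37, "2160p": 22},
    "Titan Xp (12GB)":   {"1080p": 47, "1440p": 38, "2160p": 23},
    "2080 Super (8GB)":  {"1080p": 48, "1440p": 44, "2160p": 29},
    "2080 Ti (11GB)":    {"1080p": 50, "1440p": 47, "2160p": 35},
    "Radeon VII (16GB)": {"1080p": 44, "1440p": 37, "2160p": 24},
}

for card, fps in others.items():
    # If the 3070's lead shrinks (or flips) as resolution climbs, that *could*
    # point at a VRAM limit -- or just at different scaling behaviour.
    ratios = {res: round(100 * fps_3070[res] / fps[res]) for res in fps_3070}
    print(card, ratios)

The interesting column is 2160p: it's the only place the 8GB card falls behind a bigger-VRAM card at all, and only against the 2080 Ti.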
 
Well if the 3080 Ti has more VRAM, that would be the #1 reason why I would consider it, even if it's only slightly faster than the 3080.
 
I see you didn't bother to read the article either.

Good luck to you in the future, and goodbye.

No I’ve read plenty of articles about this, including Nvidia’s explanation as to why they only chose to go with 10GB. The fact of the matter is that if there are games being released within weeks or months after you launch your flagship GPU that require more memory than it has, then it’s not sufficient. It’s fine to accept that as reality. Some of us like to keep a video card for more than one upgrade cycle. Games starting to require more than 10GB VRAM at 4K as of now is not a good trend for the 3080. It’s fine to admit that, Jensen won’t be mad at you.
 
I’m not saying that’s the case in every title. I’m saying there are titles that are imminently launching where it appears that 10GB is not sufficient for maxing out all textures. The “OMG it’s like 2 games just lower your textures” response isn’t acceptable for a $700 video card. I’m sorry, but if there is any sign that games will need more than 10GB in the near future for 4K, then this was an oversight for those of us who don’t buy a new GPU every upgrade cycle.
 
Once again. Hardly any games. One of the games you listed isn't even a game; it's a benchmark. After two-plus years, you have 18 games. It's not a feature.
I don't think you know the meaning of words...

I'm sure when AMD launches their version of DLSS, it absolutely will be a feature, right?
 
I don't think you know the meaning of words...

I'm sure when AMD launches their version of DLSS, it absolutely will be a feature, right?

I'm sitting here with a 3080 in my system :rolleyes:.

In 2018 (the first year DLSS was available) there were 8167 games released on Steam
In 2019 there were 8400 games released on Steam
I can't find hard data on 2020 games, but I'd say several thousand; SteamSpy shows 8555 so far.

But you can only point to 18 games and 1 benchmark to prove your point, one of which (Cyberpunk 2077) isn't even released yet. It's not a feature if you can't use it in the majority of games released; at best it's a bonus in the ones where you can. You're talking about 0.07165% of the games released since DLSS became a "feature" being DLSS capable.

I don't really feel like my logic is so hard to understand, but here we are... I wouldn't base a single purchase decision on whether or not a card supported DLSS.
 
Once again. Hardly any games. One of the games you listed isn't even a game; it's a benchmark. After two-plus years, you have 18 games. It's not a feature.
Not to pile on, but C2077 isn't even out yet. Also, I wonder how many people use the Windows 10 version of Minecraft (as opposed to the Java one), which is the only one that supports DLSS.
 
You are seriously suggesting that 8400 games would benefit from DLSS? I NEED that for Bejeweled, don't I?

A better comparison is 233 new games released from 2018 to 2019. Never mind that DLSS 1.0 launched in September 2018, so roughly 3/4 of those 2018 games really shouldn't count either. But let's go with 233, since we don't have numbers for 2020 and it's the same number of months.
18/233 = 7.7% of new games support it, and this is DLSS 2.0, not 1.0. 2.0 is far superior to 1.0.
DLSS 2.0 came out on March 26, 2020. That means in 7 months, 18 games support it. No matter how you slice it, that is impressive. And if you haven't seen what DLSS 2.0 can do, prepare to be impressed.
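
Just to put the arithmetic both sides are throwing around in one place (a rough sketch; the game counts are the ones quoted in this thread, not an authoritative list, and the variable names are just labels):

# DLSS adoption rate under the two framings argued above.
dlss_games = 18

# Framing 1: against every Steam release since DLSS launched (2018-2020 counts quoted earlier).
all_steam_releases = 8167 + 8400 + 8555
print(f"vs all Steam releases: {100 * dlss_games / all_steam_releases:.3f}%")   # roughly 0.07%

# Framing 2: against the 233-game figure used above for the same window.
notable_releases = 233
print(f"vs 233 notable releases: {100 * dlss_games / notable_releases:.1f}%")   # roughly 7.7%

Same 18 games, wildly different percentages; the whole disagreement is really about which denominator is fair.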

Apologies for the slight OT'ing of the thread.
 
You are seriously suggesting that 8400 games would benefit from DLSS? I NEED that for Bejeweled, don't I?

A better comparison is 233 new games released from 2018 to 2019. Never mind that DLSS 1.0 launched in September 2018, so roughly 3/4 of those 2018 games really shouldn't count either. But let's go with 233, since we don't have numbers for 2020 and it's the same number of months.
18/233 = 7.7% of new games support it, and this is DLSS 2.0, not 1.0. 2.0 is far superior to 1.0.
DLSS 2.0 came out on March 26, 2020. That means in 7 months, 18 games support it. No matter how you slice it, that is impressive. And if you haven't seen what DLSS 2.0 can do, prepare to be impressed.

Apologies for the slight OT'ing of the thread.

That really doesn’t change the fact that adoption for DLSS has been anemic at best. That doesn’t mean it’s not a cool technology, but it does mean that DLSS support should not be what makes or breaks your decision on which card to buy. That’s exactly what his point is. Nvidia’s marketing team sold DLSS and RTX as revolutionary features that you absolutely cannot live without. As it stands, support for both is limited, and RTX has such a massive performance hit in most games on most cards that support it, that people who have an RTX card often turn it off anyway. It’s difficult to justify them as must-have features if that’s the case, despite the fact that both show promising results when implemented properly.
 
Well if AMD's method is open and will work with Nvidia cards as well, who would even bother with DLSS unless paid by Nvidia? I would think developers would use what works across the consoles and PCs and not worry about proprietary, hardware-limited solutions, especially if the open one can support older hardware like Pascal, Vega, etc.
 
If it wasn't for Nvidia pushing developers to use new technology, not nearly as many games would look as good as they do. Developers do want the new tech in their games because they make it look better, but the coding and 3d pipeline optimizing is hard to do (only dev who does this well on their own is iD). DLSS + RTX makes them run significantly faster than if it was just RTX alone. We all have games that look better, use newer technologies sooner, thanks to Nvidia. We can all hate the proprietary stuff all we want, or their marketing jackasses, but the company as a whole has done amazing things for modern pc gaming.

NO ONE would have raytracing in games right now, in cards right now, in consoles right now, if Nvidia had not only put the tech out, but also helped developers learn to use it.

As far as "AMD's method" goes, it has a lot to live up to. We will likely have to wait for the 2nd or 3rd iteration to have something worth using (see freesync 1.0). So 1.5 to 2 years away, maybe even longer.

AMD fanboys/Nvidia haters are gonna hate, but Nvidia has an impressive track record of bringing technology to market that becomes industry standard.
First T&L engine in hardware.
Multi-gpu tech
Automatic overclocking, GPU boosting
Frame-time synchronizing (G-Sync). This was huge for playable games and getting rid of stuttering. I think even [H] itself played a role in bringing those frame-time issues to the fore.
Raytracing in consumer GPU's
DLSS, a deep learning technology.

Everything in this list prior to Raytracing, AMD has copied and added to their products. And this likely isn't even a complete list, just what comes to mind. And the Raytracing and DLSS AMD is adding...

Mildly dislike the "company" portion of Nvidia.
Love the technology and always pushing it forward, waiting on no one.
Love what they've done for gaming.
Understand keeping some tech proprietary (Gameworks), and have yet to see proof it purposely hinders AMD gpu's. It is always the lack of sufficient shader cores or something, or AMD's design choices that hinder their GPU's. See old [H] reviews on GPU's that used Witcher 3 as one of the games benched. AMD yet again copies the design changes (more of this type of processing core in ratio to the main cores, more shaders, etc.) in their next GPU's and suddenly they perform way better than previous gen AMD, and even sometimes exceed an Nvidia GPU on a game built with Hairworks, etc..
 
Games starting to require more than 10GB VRAM at 4K as of now is not a good trend for the 3080.
Two outliers - one questionably coded port and one AMD-sponsored title - are not a "trend", no offense.

The “OMG it’s like 2 games just lower your textures” response isn’t acceptable for a $700 video card. I’m sorry, but if there is any sign that games will need more than 10GB in the near future for 4K, then this was an oversight for those of us who don’t buy a new GPU every upgrade cycle.
Not acceptable to you, which ends up not meaning much in the grand scheme - it doesn't change the still high demand.

If only more people would join the "10GB is not enough" bandwagon so the rest of us could actually get a 3080.
 
Two outliers - one questionably coded port and one AMD-sponsored title - are not a "trend", no offense.


Not acceptable to you, which ends up not meaning much in the grand scheme - it doesn't change the still high demand.

If only more people would join the "10GB is not enough" bandwagon so the rest of us could actually get a 3080.

There are those of us who like to keep their hardware for longer than one upgrade cycle. If you’re not one of those people, great for you.
 
If it wasn't for Nvidia pushing developers to use new technology, not nearly as many games would look as good as they do. Developers do want the new tech in their games because they make it look better, but the coding and 3d pipeline optimizing is hard to do (only dev who does this well on their own is iD). DLSS + RTX makes them run significantly faster than if it was just RTX alone. We all have games that look better, use newer technologies sooner, thanks to Nvidia. We can all hate the proprietary stuff all we want, or their marketing jackasses, but the company as a whole has done amazing things for modern pc gaming.

NO ONE would have raytracing in games right now, in cards right now, in consoles right now, if Nvidia had not only put the tech out, but also helped developers learn to use it.

As far as "AMD's method" goes, it has a lot to live up to. We will likely have to wait for the 2nd or 3rd iteration to have something worth using (see freesync 1.0). So 1.5 to 2 years away, maybe even longer.

AMD fanboys/Nvidia haters are gonna hate, but Nvidia has an impressive track record of bringing technology to market that becomes industry standard.
First T&L engine in hardware.
Multi-gpu tech
Automatic overclocking, GPU boosting
Frame-time synchronizing (G-Sync). This was huge for playable games and getting rid of stuttering. I think even [H] itself played a role in bringing those frame-time issues to the fore.
Raytracing in consumer GPU's
DLSS, a deep learning technology.

Everything in this list prior to Raytracing, AMD has copied and added to their products. And this likely isn't even a complete list, just what comes to mind. And the Raytracing and DLSS AMD is adding...

Mildly dislike the "company" portion of Nvidia.
Love the technology and always pushing it forward, waiting on no one.
Love what they've done for gaming.
Understand keeping some tech proprietary (Gameworks), and have yet to see proof it purposely hinders AMD gpu's. It is always the lack of sufficient shader cores or something, or AMD's design choices that hinder their GPU's. See old [H] reviews on GPU's that used Witcher 3 as one of the games benched. AMD yet again copies the design changes (more of this type of processing core in ratio to the main cores, more shaders, etc.) in their next GPU's and suddenly they perform way better than previous gen AMD, and even sometimes exceed an Nvidia GPU on a game built with Hairworks, etc..

The next-gen consoles had been in development since before RTX was announced, so they'd have had it with or without Nvidia, and so would AMD's desktop GPUs.

Frame time sync - Adaptive sync in laptops before gsync

Multi gpu - 3dfx

Hardware T&L. Geforce 256 October 1999 vs Radeon DDR April 2000.

Unless ATI redesigned their gpu in 4 months, it was just another feature everyone was moving to.

Don't fool yourself into thinking RTX cores and DLSS on tensor cores were anything but a way to reuse hardware Nvidia had already designed.
 
There are those of us who like to keep their hardware for longer than one upgrade cycle. If you’re not one of those people, great for you.

Well then you'll have to play future games without maxed out settings, which you were going to have to do anyways if you're not upgrading every cycle.
 
The next-gen consoles had been in development since before RTX was announced, so they'd have had it with or without Nvidia, and so would AMD's desktop GPUs.
Believe what you want. These consoles are coming out this month, more than 2 years after the first RTX cards came out. If you think the specs and capabilities (of these consoles) were set in stone prior to RTX (fall 2018), you'd be mistaken.
Frame time sync - Adaptive sync in laptops before gsync
Wrong. Frame draw in laptops was limited (capped) as a way to conserve battery power, and had nothing to do with variable FPS or gaming. From that, VESA built Adaptive-Sync into DisplayPort 1.2a, with the spec finalized in March 2014, months after G-Sync was in consumers' hands.

Nvidia G-Sync: announced October 2013, working on 6xx GPUs from 2012, with LCDs carrying the tech available January 2014
VESA Adaptive-Sync: spec finalized in March 2014
AMD FreeSync 1.0: released March 2015, and was pretty crappy, working only in the 40Hz-60Hz range. LCDs with it were available later that year, about a year and a half after G-Sync was in consumers' hands.
AMD FreeSync 2.0: released January 2017, finally a reasonably functioning frame-syncing technology, more than 3 years after G-Sync came out. The first FreeSync 2.0 LCDs were available in summer 2017.
Multi gpu - 3dfx
Yes, but those chips were not GPU's. The first GPU was Nvidia's Geforce256. Nvidia owns 3dfx now btw.
Hardware T&L. Geforce 256 October 1999 vs Radeon DDR April 2000.
Yep, the first Radeon which was unable to compete with the aging GeForce256. New tech was coming out really fast back then, every 6 months something came out twice as fast (unless it was Radeon, which was 6 months later and still slower).
Unless ATI redesigned their gpu in 4 months, it was just another feature everyone was moving to.
See above. Development times were not multi-year back then.
Don't fool yourself into thinking rtx cores and DLSS on tensor was anything but a way to reuse.
Hmm? You'll have to excuse me for not taking your word for it, thanks.

I love it that AMD fanboys know more about what Nvidia's designs were for than Nvidia does. What would we do without your insight??
 
Well then you'll have to play future games without maxed out settings, which you were going to have to do anyways if you're not upgrading every cycle.
I suppose that’s true, but it’s also true that I’ll have an extra $700 for when I feel more compelled to upgrade. As it stands, my 1070Ti is satisfactory. I “want” to upgrade now, but I also don’t “need” to. If I’m going to do it now, I want assurances it can go for at least a few years. Maybe you don’t care, and that’s fine.
 
Well if the 3080 Ti has more vram, that would be the #1 reason why I would consider it even if slightly faster than the 3080.

I still can't for the life of me understand why nVidia chose to do 10 gigs on what was supposed to be their halo card. That is, considering it was their own words calling the 3080 their halo product.
 
I still can't for the life of me understand why nVidia chose to do 10 gigs on what was supposed to be their halo card. That is, considering it was their own words calling the 3080 their halo product.
Well a 2080 was only 8. But yeah, I think not setting an example and doing "baby steps" was a mistake.
 
Believe what you want. These consoles are coming out this month, more than 2 years after the first RTX cards came out. If you think the specs and capabilities (of these consoles) were set in stone prior to RTX (fall 2018), you'd be mistaken.

A June 2018 interview with Phil Spencer in which hardware development was already well under way. You say below that GPUs are currently a multi-year process; it would be impossible to react to RTX in fall 2018 and get both MS and Sony to agree to redesign their consoles that late in the game.
https://www.theverge.com/2018/6/12/17453174/microsoft-xbox-next-genreation-2020-release-date-rumors
Wrong. Frame draw in laptops was limited (capped) as a way to conserve battery power, and had nothing to do with variable FPS or gaming. From that, VESA built Adaptive-Sync into DisplayPort 1.2a, with the spec finalized in March 2014, months after G-Sync was in consumers' hands.

Nvidia G-Sync: announced October 2013, working on 6xx GPUs from 2012, with LCDs carrying the tech available January 2014
VESA Adaptive-Sync: spec finalized in March 2014
AMD FreeSync 1.0: released March 2015, and was pretty crappy, working only in the 40Hz-60Hz range. LCDs with it were available later that year, about a year and a half after G-Sync was in consumers' hands.
AMD FreeSync 2.0: released January 2017, finally a reasonably functioning frame-syncing technology, more than 3 years after G-Sync came out. The first FreeSync 2.0 LCDs were available in summer 2017.
Again, wrong. The desktop version of Adaptive-Sync was finalized in March of 2014, but even in the white paper announcing it they bring up the fact that:

"Variable refresh rate technology has been available to Notebook PC makers for quite some time as a system power saving feature for embedded notebook panels.

https://www.vesa.org/wp-content/uploads/2014/07/VESA-Adaptive-Sync-Whitepaper-140620.pdf

The idea was absolutely available in laptops long before Nvidia debuted it on desktops. Yes, its intention was power savings, but the idea certainly wasn't an original Nvidia one.
Yes, but those chips were not GPU's. The first GPU was Nvidia's Geforce256. Nvidia owns 3dfx now btw.

Goalpost move. GPU is synonymous with video card; 3dfx was the first company to pair multiple video cards together for extra performance, and buying the company that did it first isn't the same as inventing it.

Yep, the first Radeon which was unable to compete with the aging GeForce256. New tech was coming out really fast back then, every 6 months something came out twice as fast (unless it was Radeon, which was 6 months later and still slower).

More goal post moving. You said it was important that Nvidia had the first T&L acceleration in a GPU because it showed leadership. It was the next logical path, and Ati was on the same path at the same time. Now it's about who did it faster.
See above. Development times were not multi-year back then
Release times were the same, approx every year. It's only in the last generation that release times have slipped to two years.

https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units

https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units
Hmm? You'll have to excuse me for not taking your word for it, thanks.

I love it that AMD fanboys know more about what Nvidia's designs were for than Nvidia does. What would we do without your insight??
How about you tell me what use Tensor cores have on desktop GPUs? GPU development costs are high and dev times are long; it only makes sense to reuse what you can.

I'm glad that you have insight into Nvidia that non-employees don't; feel free to fill us in on your particular set of facts, or are they just opinions? We've been held rapt here with your historical handjob on Nvidia; it's practically "just buy it" levels of love from your corner.

Also, check the sig. My last system was an intel build, and I'm running an Nvidia GPU. I'm just not sitting here waiting for what they're looking to give me.
 
I still can't for the life of me understand why nVidia chose to do 10 gigs on what was supposed to be their halo card. That is, considering it was their own words calling the 3080 their halo product.
Bus width constraints. This has been covered.
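
For anyone who hasn't seen it covered: GDDR6X chips connect 32 bits each, so the bus width fixes how many memory chips the card can carry, and the chip density available at the time (8Gb = 1GB or 16Gb = 2GB) fixes the capacity steps. A little illustrative sketch (the helper function is just mine, not anything official):

# Capacity options implied by a GDDR6X bus width (32 bits per chip).
def capacity_options(bus_width_bits, chip_gb=(1, 2), clamshell=False):
    chips = bus_width_bits // 32
    if clamshell:            # two chips per 32-bit channel, as on the 3090
        chips *= 2
    return [chips * gb for gb in chip_gb]

print(capacity_options(320))                   # 3080's 320-bit bus -> [10, 20] GB
print(capacity_options(384, clamshell=True))   # 3090's 384-bit bus -> [24, 48] GB

So on a 320-bit cut-down of GA102, the realistic choices really were 10GB or 20GB; there's no clean way to land in between without changing the bus.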
 
I haven't been able to verify this particular tweet, so please don't take it as gospel, but it would be interesting if this is indeed true:

RTX 3080 Ti FE:
PG133-SKU15,
GA102-250-KD-A1,
20GB GDDR6X,
the same FP32 count as the 3090 (10496 FP32),
the same MEM speed and TGP as the 3080,
no NVLINK.

Source:

https://twitter.com/kopite7kimi/status/1323785556417863680

Also mentioned on several tech websites, but still referencing the above tweet:

https://www.techradar.com/news/nvid...gpu-with-20gb-of-vram-to-take-on-amd-big-navi

https://wccftech.com/nvidia-geforce-rtx-3080-ti-20-gb-graphics-card-specs-leak/

NVIDIA GeForce RTX 3080 Ti Landing in January at $999

 
With the performance gap between the 3080 and 3090 being so minuscule, the only way I can see a 3080 Ti in the future is if nVidia:

1. keeps the 3080 in the lineup (dropping to $550-650)
2. drops the 3090 ("it was a limited edition")
3. releases a 3080Ti that essentially takes over the 3090 in the lineup ($750-850)
4. releases a new Titan ($1000+)

Of course, this is speculation that depends on how well the Radeon 6000 series performs in real-world measures, as well as on supply/availability at and shortly after launch.
 
With the performance gap between the 3080 and 3090 being so minuscule, the only way I can see a 3080 Ti in the future is if nVidia:

1. keeps the 3080 in the lineup (dropping to $550-650)
2. drops the 3090 ("it was a limited edition")
3. releases a 3080Ti that essentially takes over the 3090 in the lineup ($750-850)
4. releases a new Titan ($1000+)

Of course, this is speculation that depends on how well the Radeon 6000 series performs in real-world measures, as well as on supply/availability at and shortly after launch.

No. There has always been an overlap between the "Titan class" card and the X080 Ti card. With the Ti you get essentially the same performance, less memory, and a significantly lower price. Nothing is new this time around based on the leaks. The 3090 was ALWAYS a bad buy. This just cements that sentiment.

The Ti will be $999, and the other Ampere pricing will remain the same.
 
No. There has always been an overlap between the "Titan class" card and the X080 Ti card. With the Ti you get essentially the same performance, less memory, and a significantly lower price. Nothing is new this time around based on the leaks. The 3090 was ALWAYS a bad buy. This just cements that sentiment.

The Ti will be $999, and the other Ampere pricing will remain the same.


Read my very last sentence again.
 
I just don't care about these cards when I can't get one.

I'm not asking to be able to randomly stroll into a store and pick one up, but I'm not camping outside or setting up a damn bot to auto purchase online.

Can I get maybe 10 minutes of in stock time? Or a Microcenter hold I can go pick up?

How much price and performance arguing are we all going to do over products nobody, or hardly anybody, has actually seen/used?
 
I just don't care about these cards when I can't get one.

I'm not asking to be able to randomly stroll into a store and pick one up, but I'm not camping outside or setting up a damn bot to auto purchase online.

Can I get maybe 10 minutes of in stock time? Or a Microcenter hold I can go pick up?

How much price and performance arguing are we all going to do over products nobody, or hardly anybody, has actually seen/used?

100% agree. When we start talking semantics and idioms over future revisions of an existing paper-launch product lineup, then the unobtainable product lineup itself becomes largely irrelevant.

It is fun to speculate on such things, though.
 
I just don't care about these cards when I can't get one.

I'm not asking to be able to randomly stroll into a store and pick one up, but I'm not camping outside or setting up a damn bot to auto purchase online.

Can I get maybe 10 minutes of in stock time? Or a Microcenter hold I can go pick up?

How much price and performance arguing are we all going to do over products nobody, or hardly anybody, has actually seen/used?

I think it's getting better. I'd say 25-30% of the people who are looking have gotten one from somewhere. The 3090 is the easiest to come by (probably due to the very poor price/performance ratio). I would think by the time this Ti card releases, stock should be pretty good...just in time for people to scramble for the Ti card.
 
There are those of us who like to keep their hardware for longer than one upgrade cycle. If you’re not one of those people, great for you.
Valid point that if you want to move from a 1070 Ti to something you can keep for a couple of years, then you might be better off waiting for the Radeon 6700/6800. However, I think the idea that you can be "future proof" when it comes to something as rapidly evolving as GPUs is a fallacy. It's just always going to be a compromise.

Now that 10GB 3080s are out in the wild, AAA devs will be catering to them for their highest settings, ensuring 10GB doesn't cause performance problems. However, this gen will definitely be the last time we see 10GB on an Nvidia xx80, and in the future it will not be enough. The 4080/Hopper will get somewhere between 12 and 20GB, I reckon, depending on how the bus-width math and GDDR7 supply pricing shake out.
 
How much price and performance arguing are we all going to do over products nobody, or hardly anybody, has actually seen/used?

Yeah this is some unintended absurdity right now. I also think the Green/Red fanboy thing ends up being academic and mostly a thing of the online discussion world, because people quickly stop seeing colors when the performance is there from both sides.

In other words I suspect a lot of people that have been waiting on a 3080, and may have even bought Nvidia for the past several gens, would gladly buy a 6800XT if one was more easily available.

3080 or 6800XT, I'll personally buy whatever becomes available to me on Amazon first.
 
The point is that there is 1 badly optimized game with an aftermarket texture pack that uses more than 10GB of VRAM, and reportedly another that was made with input from AMD that *might* need more than 10 as well with everything maxed at 4K.

With a setting that very likely doesn't noticeably impact visual quality when taken down a notch (as seen over and over in in-depth game reviews, the most recent being Watch Dogs: Legion).

However I don't really care - I just laugh as games allocate (not need) more, and more, and more VRAM on my 3090 while people piss and moan, worrying about VRAM allocation (and not actual in-use VRAM).

Were you around 8 years ago saying the same thing about 4GB of VRAM?

4 years ago saying the same thing about 6GB of VRAM?

2 years ago saying the same thing about 8GB of VRAM?

And so on... Yeah, sure, by the time there are several titles that can use more than 10GB of video memory, new cards may be out. But those who WANT to use that - who will run MODS in whatever Fallout/Starfield/whatever game - will care.

And clearly VRAM helps sell video cards.
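
If anyone actually wants to watch this on their own card instead of arguing over screenshots, here's a minimal sketch using the pynvml bindings (pip install nvidia-ml-py). Big caveat, and it's the whole point of the argument above: NVML reports memory that has been allocated on the GPU, not what a game is actively touching every frame.

# Poll total VRAM allocation on GPU 0 every few seconds while a game runs.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        # "used" here means allocated/reserved, not "needed"
        print(f"VRAM allocated: {mem.used / 2**30:.2f} GiB / {mem.total / 2**30:.2f} GiB")
        time.sleep(5)
finally:
    pynvml.nvmlShutdown()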
 