Jedi Survivor is the best showcase of a looming problem for PC players

I'd agree with high VRAM usage if it were "optimized" VRAM usage, in other words no bugs, memory leaks, etc. Unfortunately, we're not at a point where devs are just going to start churning out games using 12GB+ of VRAM. As you said, not until 16GB is mainstream, which will be a while since consoles pretty much dictate where gaming is at and where it's going.

I think the backlash over this title won't set us back; if anything, it'll hopefully push devs to properly optimize their games prior to release. The issue with AAA games on PC this year hasn't been so much graphics or gameplay, it's mainly been optimization. Just look at games like Cyberpunk 2077, Control, and Witcher 3 as examples of games that look good, really good, but don't require 12GB+ to run, then look at Jedi Survivor and ask how in the hell this game can use 20GB+ of VRAM. Hogwarts Legacy is another title that looks great, but definitely not 12GB+ VRAM great; it's not above CP2077 or Witcher 3. Games are just being released broken, but the problem is people are putting more emphasis on VRAM than what's really, truly needed. 16GB of VRAM is quite a bit, an amount we probably won't see used for at least another six or seven years, unless of course you're into modding and adding high-res textures and all that jazz. And the kicker is that a majority of gamers are still running 1080p, not 1440p or 2160p where VRAM becomes more important, and even there it's not so much the amount as the wider memory bus that usually comes with more VRAM.
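To put rough numbers on why resolution alone doesn't explain multi-gigabyte footprints, here's a back-of-the-envelope sketch; the buffer counts and formats are my own assumptions for illustration, not pulled from any particular engine:

```python
# Illustrative arithmetic only: rough size of common render targets per resolution,
# ignoring engine-specific buffers, compression, and padding.
RESOLUTIONS = {"1080p": (1920, 1080), "1440p": (2560, 1440), "2160p": (3840, 2160)}

def mib(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 ** 2)

for name, (w, h) in RESOLUTIONS.items():
    color = mib(w, h, 4)        # RGBA8 back buffer
    depth = mib(w, h, 4)        # 32-bit depth/stencil
    gbuffer = 4 * mib(w, h, 4)  # assume a 4-target G-buffer for a deferred renderer
    print(f"{name}: ~{color + depth + gbuffer:.0f} MiB of render targets")
# Prints roughly 47, 84, and 190 MiB -- textures and streaming pools, not the
# output resolution, are what push a game's footprint into the multi-GB range.
```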

At the end of the day I'm in the group that firmly believes 12GB is sufficient for 1440p and below, and 16GB is sufficient (for memory bus reasons) for 4K. We won't be seeing many games naturally using more than 12GB unless they're not properly optimized, and I'm not saying this as a 4070Ti owner, I'm saying this as a realist: games aren't looking enough better to justify the drastic uptick in VRAM usage, and if CP2077 is anything to go by, true usage of anything near or above 12GB would bring even a 4090 to its knees. If games are coming out looking worse than CP2077 or Witcher 3 while requiring more resources, then it's on game devs, not Nvidia or AMD, to fix, otherwise devs would just release broken game after broken game with little to no optimization or patching. Expecting Nvidia and AMD to keep band-aiding the issue with more VRAM or faster GPUs every time a batch of broken games gets released would get very expensive very quickly, still wouldn't fix the issue at its root, and that right there would be the downfall of PC gaming.
In a perfect world I agree with you. However, today we don't see optimized titles for PC much, if at all. Broken is how games have been shipping for almost a decade now (if not longer, see below). As long as broadband internet has been viable and generally available to the masses, everything launches broken and is patched over time. Mass Effect Andromeda came out in 2017. That was six years ago and it was a broken mess. It got patched to a point and then dropped due to the backlash it faced. There are numerous titles that got dropped due to backlash, including Anthem, which could have been good had it gotten continued support and updates.

It's nice to think games should release in a polished state, but it's just not reality. Old PC games had bugs in 'em and you didn't really even see most of them, though some were game-breaking at times. As games got more complex they started launching in a broken state. Hell, I recall MechWarrior 3 releasing in a largely buggy state, and that was 1999...

This has been going on for a long time. Not gonna change anytime soon.
 
Sadly, you’re right. I just think with the latest offerings you have people thinking more VRAM is the panacea for buggy games—it’s not.

The games I mentioned in my post above are examples of games released buggy as heck at launch, but after time they got fixed and are now optimized very well, showing that the issue isn’t the hardware.

Some people, especially those newer to the PC gaming scene, buy into all the hubbub from tech tubers like Gamers Nexus or Hardware Unboxed saying that VRAM is going to make or break a card's longevity, but they're basing those statements on games being released broken.

Jedi Survivor is just another game released in a broken state, even on consoles. Once fixed, it'll be seen that you don't need a 24GB VRAM GPU to play it, nor do you need an i9 13900KF/7950X3D etc. etc. etc.

To me the VRAM issue is another item in a long list of talking points to push people into buying newer, more expensive hardware, and there are people that will, even if it means financial strain, thinking that it'll fix the problem with games releasing in broken states.
 
Yeah, the old means of forcing obsolescence was DirectX support. There are a number of older graphics cards out there that don't support DX12 and WOULD be very capable of running games if the companies that made them continued to support them with updated drivers.
 
While that's true, there's something to be said for the added value of extra VRAM. If you plan on keeping the card for a fairly long time it would make sense to bias your decision in favor of AMD. Nvidia put themselves in this position with their lame attempt at segmenting the market back during the 30 series. There was little difference in performance between the 3080 10GB, 3080 12GB, and 3080 Ti.
 
^ Yeah, there's nothing you can do if a game needs more VRAM other than drop the resolution. To me that is way more noticeable than lowering the normal GPU-intensive stuff (shadows, lighting, RT, etc.). With VRR, usually anything above 80-90FPS looks smooth, but from experience, if a game has VRAM issues (or, with this game, traversal stutters and/or shader compilation pre-patch) then VRR breaks.
 
More vram has always been a boon for longevity. Only recently have people been arguing against it. Odd, really.
I don't think it's so much arguing against more VRAM, it's arguing about why more VRAM is needed. I wouldn't mind if the 4070Ti had 16GB of VRAM, but, in optimized games, would it be needed? Would the 4070Ti have the processing power to handle everything that goes into making a game actually use more than 12GB of VRAM? We have a hardware community that firmly believes VRAM is the end-all-be-all of GPUs, that anything with less than 16GB will be useless in a year or two, and when you so much as mention trying to turn down the graphics, the consensus is "I shouldn't have to." To me, the most important part about having a lot of VRAM isn't the quantity, it's the memory bus that comes with it. 12GB GPUs like the 4070/4070Ti are basically targeting 1440p, yet people and reviewers are trying to push them into the 4K arena and then complain about the memory bus and limited VRAM... which, to this point, hasn't been a big issue, but it will be a year or two down the road if 4K is your preferred resolution.
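For anyone wondering what "the memory bus that comes with it" actually works out to, the standard bandwidth arithmetic is just bus width times effective data rate divided by 8; here's a quick sketch using the published specs of a few cards mentioned in this thread (treat the results as nominal peaks, since caches change the real-world picture):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s = bus width (bits) x per-pin data rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

cards = {
    "RTX 4070 Ti (192-bit, 21 Gbps GDDR6X)": (192, 21.0),
    "RTX 3070 (256-bit, 14 Gbps GDDR6)":     (256, 14.0),
    "RX 6700 XT (192-bit, 16 Gbps GDDR6)":   (192, 16.0),
}
for name, (bus, rate) in cards.items():
    print(f"{name}: ~{peak_bandwidth_gbs(bus, rate):.0f} GB/s")
# ~504, ~448, ~384 GB/s. Large on-die caches (L2 on Ada, Infinity Cache on RDNA2)
# soften the narrower buses at 1080p/1440p, which is why the squeeze shows up mostly at 4K.
```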

The argument for me hasn't so much been the amount of VRAM, it's been more along the lines of people knee-jerk reacting without actually doing their research. It's been people taking everything these "tech tubers" say as gospel, period. Yesterday I was in a discussion in the Nvidia subreddit, and some guy flat out called me a liar when I posted what I visually see being used via RivaTuner's per-process readout when I'm playing games like RE4, Cyberpunk 2077, Redfall, etc., stating that 1080p games are using 10GB and 1440p games are using 11GB, saying that he'll believe a "reputable source" over some random redditor, and that I was probably reading things wrong. I just kindly told the person: "You're the target audience for these tech tubers; you don't seem to do your own research, and instead rely on them to tell you what's what, hence why you're calling me a liar. You don't have to believe me, but I'd hope you lend the same credence to these tech tubers."
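For anyone who wants to cross-check RivaTuner's numbers instead of taking either side's word for it, here's a minimal sketch using NVML through the nvidia-ml-py (pynvml) bindings; note this is just one way to do it, per-process figures can come back empty on some driver/OS combinations, and what it reports is memory allocated to each process, not what the GPU actually touches every frame:

```python
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# Board-wide view: everything allocated on the card (game, browser, compositor...).
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"card total {mem.total / 2**30:.1f} GiB, used {mem.used / 2**30:.1f} GiB")

# Per-process view, closer to what RivaTuner's per-process counter reports.
for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
    name = pynvml.nvmlSystemGetProcessName(proc.pid)
    if isinstance(name, bytes):          # older bindings return bytes
        name = name.decode(errors="replace")
    if proc.usedGpuMemory is not None:   # can be None where the driver won't report it
        print(f"{name} (pid {proc.pid}): {proc.usedGpuMemory / 2**30:.1f} GiB")

pynvml.nvmlShutdown()
```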

As for the longevity argument: look at the 3090, the 6950XT, the 6800/XT... all cards with 16GB+ of VRAM, yet only in very niche cases are they posting better 1% lows than the 4070Ti. What that's showing me is that by the time games start pushing the envelope with graphics and textures, when cards like the 4070Ti are getting taxed due to VRAM issues, what's that going to mean for the aforementioned 16GB+ GPUs? They're going to see just as big of a performance loss, and as I've always said, and will continue to say: 1% lows on crap frame rates are still crap. Putting an absurd amount of VRAM on GPUs that probably aren't going to be able to utilize it all is just gimmicky. So VRAM, to me, has never been the deciding factor in longevity; it's always been about the GPU's processing power, and in 2-3 years, when and if I have to start lowering settings to get acceptable frame rates, that's going to lower VRAM usage anyway, making the argument that more VRAM is necessary null and void.
 
Looks like VRAM issues aren't just a PC problem any more:

Xbox Series S Suffers from VRAM Limitations, Just Like 8GB GPUs

https://www.tomshardware.com/news/xbox-series-s-suffers-from-vram-limitations-just-like-8gb-gpus

It looks like the Xbox Series S is facing serious memory allocation issues in Borderlands 3, at least according to Reddit user u/jokekiller94 — who posted a screenshot of the game crashing on their Series S console with an "out of memory" error on the screen.

The issue isn't limited to the original poster — Reddit user u/bacon_sammer responded in the thread, saying they have the same issue with Borderlands 3 crashing frequently on their Xbox Series S. They said the game runs fine on the Xbox Series X, however, so it seems to be a Series S issue.
 
More vram has always been a boon for longevity. Only recently have people been arguing against it. Odd, really.
Ah, the fine wine argument. The number of GPUs purchased yearly tends to disprove this argument. Most people upgrade frequently. Most people are fine with the VRAM they have. Most people adjust their settings to what plays well for them. If we're talking folks on the high end, they are well taken care of.
 
Neither are most pc owners if the hardware surveys have anything to say for it. No wonder they have to keep selling fewer numbers of increasingly expensive cards.
That's a different story. Let's not pretend that today is the historical market.
 
I don't think it's so much arguing against more VRAM, it's arguing about why more VRAM is needed. I wouldn't mind if the 4070Ti had 16GB of VRAM, but, in optimized games, would it be needed? Would the 4070Ti have the processing power to handle everything that goes into making a game actually use more than 12GB of VRAM? We have a hardware community that firmly believes VRAM is the end-all-be-all of GPUs, that anything with less than 16GB will be useless in a year or two, and when you so much as mention trying to turn down the graphics, the consensus is "I shouldn't have to." To me, the most important part about having a lot of VRAM isn't the quantity, it's the memory bus that comes with it. 12GB GPUs like the 4070/4070Ti are basically targeting 1440p, yet people and reviewers are trying to push them into the 4K arena and then complain about the memory bus and limited VRAM... which, to this point, hasn't been a big issue, but it will be a year or two down the road if 4K is your preferred resolution.

The argument for me hasn't so much been the amount of VRAM, it's been more along the lines of people knee-jerk reacting without actually doing their research. It's been people taking everything these "tech tubers" say as gospel, period. Yesterday I was in a discussion in the Nvidia subreddit, and some guy flat out called me a liar when I posted what I visually see being used via RivaTuner's per-process readout when I'm playing games like RE4, Cyberpunk 2077, Redfall, etc., stating that 1080p games are using 10GB and 1440p games are using 11GB, saying that he'll believe a "reputable source" over some random redditor, and that I was probably reading things wrong. I just kindly told the person: "You're the target audience for these tech tubers; you don't seem to do your own research, and instead rely on them to tell you what's what, hence why you're calling me a liar. You don't have to believe me, but I'd hope you lend the same credence to these tech tubers."

As for the longevity argument: look at the 3090, the 6950XT, the 6800/XT... all cards with 16GB+ of VRAM, yet only in very niche cases are they posting better 1% lows than the 4070Ti. What that's showing me is that by the time games start pushing the envelope with graphics and textures, when cards like the 4070Ti are getting taxed due to VRAM issues, what's that going to mean for the aforementioned 16GB+ GPUs? They're going to see just as big of a performance loss, and as I've always said, and will continue to say: 1% lows on crap frame rates are still crap. Putting an absurd amount of VRAM on GPUs that probably aren't going to be able to utilize it all is just gimmicky. So VRAM, to me, has never been the deciding factor in longevity; it's always been about the GPU's processing power, and in 2-3 years, when and if I have to start lowering settings to get acceptable frame rates, that's going to lower VRAM usage anyway, making the argument that more VRAM is necessary null and void.
Oh it's very much arguing against VRAM. This argument first came around during the 30xx series launch. The nVidia faithful did what? Started an argument of "what's in use is different than what's stored" as if what's stored doesn't take up space. Fast forward to today and every AAA title is blowing past 8GB, which isn't even that hard to understand when you know a game like Deus Ex: Mankind Divided blows past 8GB at 1440p, and most definitely at 4K, and that game came out in 2016. The nVidia faithful really want everyone to believe that all AAA titles that come out now are unoptimized, as if Capcom, Square, (everyone?) have completely forgotten how to develop a video game. It's lunacy.

Now VRAM isn't the be-all end-all. Bandwidth plays a huge part. But as long as you keep AA low, your bandwidth usage will not exceed what's needed to drive what's in VRAM.

I was forced to buy a 570 8GB for my office PC when I needed something quick and cheap (right before crypto, and it literally was the first AMD card I've bought), and age like fine wine it most definitely did as soon as games started blowing past 6GB (the card had an 8GB option; originally it was 4GB). Had I bought the 4GB version I would have hit a brick wall much sooner. The 3070 specifically had no business shipping with 8GB. It's a high-end card. If it had 16GB or even 12GB you would be able to keep it for a much longer time.

Some people upgrade every generation but most people don't. I buy when the damn thing dies. Some skip a generation. Either way, people who upgrade every generation are a teeny tiny fraction. So it would make sense to get the most VRAM you can, because it's going to pay dividends now that consoles aren't stuck with 4GB/6GB VRAM games.
 
Started an argument of "what's in use is different than what's stored" as if what's stored doesn't take up space.
Are you arguing that allocated VRAM is the same as VRAM in use? Because it has been discussed ad nauseam and been proven that allocated is not the same as in use. I know the fanboys will scream at the top of their lungs, over and over, that more VRAM is better hence AMD is better, but more VRAM doesn't make something instantly better. And no, no one is arguing that more VRAM is bad. No matter what you or anyone else feels.
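Games aren't the easiest thing to poke at directly, but the reserved-versus-actually-used distinction is easy to demonstrate with any GPU allocator that reports both numbers. A quick sketch using PyTorch's caching allocator (assuming a CUDA-capable install; the point generalizes to engines that keep their own streaming pools):

```python
import torch  # assumes a CUDA-capable PyTorch build

def report(label):
    allocated = torch.cuda.memory_allocated() / 2**20  # what live tensors actually occupy
    reserved = torch.cuda.memory_reserved() / 2**20    # what the allocator holds on the GPU
    print(f"{label}: allocated {allocated:.0f} MiB, reserved {reserved:.0f} MiB")

x = torch.empty(1024, 1024, 256, device="cuda")  # ~1 GiB of float32
report("after allocation")   # both numbers jump by ~1 GiB
del x
report("after freeing")      # 'allocated' drops back, 'reserved' stays cached,
                             # and tools reading driver-level usage still see ~1 GiB
```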
 
Case in point. Hey how are those 1% lows on the 3070? You good? How does your argument help people who don't have enough vram? You gonna tell them that ackshually they are using less than what's allocated in VRAM? Will they get a boost when you tell them that? Let's be constructive, specifically what can they do to "tap into" all that VRAM that's not in use when their 1% lows tank?

This isn't even an AMD argument. It's a VRAM argument that AMD just so happened not to fall into on their higher end cards. Knowing AMD it probably wasn't intentional. If the 6700xt had 8GB of RAM and the 3070 had 12 I can guarantee you 100% that we would be having the same argument just reversed. You don't even need to play Jedi Survivor to see it. Here's Forza Horizon 5. I'm guessing this is unoptimized too? The nvidia card running around with damn near double the system RAM use. Why would that be? I can promise you that you're going to see that everywhere, because nVidia has been using your system RAM to compensate for the lack of VRAM.
[Forza Horizon 5 memory usage screenshot]

Here's another unoptimized game from 2021. Called Hitman 3.
[Hitman 3 memory usage screenshot]

And another from **checks notes ** nVidia. o_O
[A Plague Tale: Requiem memory usage screenshot]

None of this is dissimilar to Jedi Survivor.
[Star Wars Jedi: Survivor memory usage screenshot]


Obviously all of these games are unoptimized too, or maybe, just maybe, the card doesn't have enough VRAM. nVidia can't sponsor every game that comes out, so are we just going to hear over and over that the game is unoptimized? But in actuality, the latest argument I've been seeing nVidia prime is "that's from last gen"; that's right, the "last gen-us" of your card with more RAM doesn't really count because it's... "last gen". How TF that's supposed to matter is beyond me. All I know is that these arguments are getting really dumb and it's actually hurting the game industry, when all nVidia needs to do is put enough damn VRAM in the card and call it a day. You wouldn't have to run out here and claim all games are unoptimized.
 
This isn't even an AMD argument. It's a VRAM argument that AMD just so happened not to fall into on their higher end cards. If the 6700xt had 8GB of RAM and the 3070 had 12 I can guarantee you 100% that we would be having the same argument just reversed.
Nah, when AMD uses a 192 bit bus it's "cutting corners" and when they give you 50% more vram it's, uh, "stat padding".

When Nvidia cuts down on the bus or gives less vram, that's just the masterful engineers at Nvidia telling you just how much hardware you actually need.
 
My 2 cents:

If you bought a 4070 Ti 12GB, then you are probably the ideal target audience for Nvidia: someone who is prepared to upgrade every (alternate) generation.

OTOH, if you bought a 4070 12GB for $600, you just got sucked into a scam by Nvidia, when you should have held the line for 3 more months and could have purchased a cheaper 4060 Ti 16GB or equivalent Navi 32 card. My condolences.
 
Just to put things into perspective for the doom & gloom people...

Smooth AAA launches on PC so far this year:
  1. Dead Island 2
  2. Dead Space
  3. Forspoken
  4. Like a Dragon: Ishin!
  5. Resident Evil 4
  6. Returnal
Rough launches:
  1. Hogwarts Legacy (pretty much fixed at this point)
  2. The Last of Us Part I (still some minor issues)
  3. Redfall ( :ROFLMAO: )
  4. Star Wars Jedi: Survivor (Respawn is putting out patches quickly)
  5. Wild Hearts (nobody cares)
The "looming problem" is people blowing things out of proportion.
 
In a perfect world I agree with you. However, today we don't see optimized titles for PC much, if at all. Broken is how games have been shipping for almost a decade now (if not longer, see below). As long as broadband internet has been viable and generally available to the masses, everything launches broken and is patched over time. Mass Effect Andromeda came out in 2017. That was six years ago and it was a broken mess. It got patched to a point and then dropped due to the backlash it faced. There are numerous titles that got dropped due to backlash, including Anthem, which could have been good had it gotten continued support and updates.

It's nice to think games should release in a polished state, but it's just not reality. Old PC games had bugs in 'em and you didn't really even see most of them, though some were game-breaking at times. As games got more complex they started launching in a broken state. Hell, I recall MechWarrior 3 releasing in a largely buggy state, and that was 1999...

This has been going on for a long time. Not gonna change anytime soon.
MW3 had quite a few bugs, but they had workarounds. It seems to me that releasing a playable product was a bigger priority decades ago than it is now. I can live with patches, but the size has gotten out of control; it's more like total asset replacement than a patch. With broadband widely available, as you say, it seems that developers or publishers don't care to minimize required bandwidth like they would have 20 years ago.
 
Just to put things into perspective for the doom & gloom people...

Smooth AAA launches on PC so far this year:
  1. Dead Island 2
  2. Dead Space
  3. Forspoken
  4. Like a Dragon: Ishin!
  5. Resident Evil 4
  6. Returnal
Rough launches:
  1. Hogwarts Legacy (pretty much fixed at this point)
  2. The Last of Us Part I (still some minor issues)
  3. Redfall ( :ROFLMAO: )
  4. Star Wars Jedi: Survivor (Respawn is putting out patches quickly)
  5. Wild Hearts (nobody cares)
The "looming problem" is people blowing things out of proportion.
Dead Space, Resident Evil 4, Forspoken, and Returnal all got complaints for VRAM usage.

But you're right, though: there's no need for doom or gloom, just get a card with enough VRAM or turn down settings.
 
I'm curious how there is even a debate on this... If you buy anything less than the best video card available, you can generally expect to sacrifice some settings or resolution in order to gain performance on modern games. This is how it has been since the beginning of time... lol.

I think in terms of this specific thread, the real debate was how initially the game performed poorly on even the best of systems and lacked what would be mainstream features of today (DLSS, Frame Gen).
 
Since the beginning of time?
No not really.
For a long stretch of probably a decade and a half, perhaps even more like two decades, it was always stupid to buy the flagship... cause 6 months to a year down the road the new mid-range card would kick it in the nuts. Most of us bought new upper-mid-range cards every year or two. The flagship was the sucker card. Before the 1080 Ti, you could expect a flagship to be top tier for less than a year.

You could argue that GPUs just maxed out after the 1080. I think the reality is Nvidia over-delivered at that point... as AMD really didn't have a counter punch. The two-GPU market meant Nvidia had no reason to compete with itself. So instead of putting out a 2080 that smoked the 1080... meaning a 2070 that would have really kicked it in the nuts... they instead focused on a completely non-game market and started bolting other things onto their silicon. Found a few software excuses to sell to gamers. AND six years later we are where we are. Stagnation. Sure the 4090 is faster than the 3090... but Nvidia has decided they can get away with keeping the cost per rendered frame basically identical (and I guess they can, obviously, based on sales). This hasn't been the status quo since the beginning of time though. Here is hoping Intel's second swing isn't a complete miss. We need more than just 3 players... we need one of the two that isn't Nvidia to actually bloody their nose in sales #s.
 
If you're looking for help in competition from "We're skipping the RX 7700 series" and/or "GPU lineup and 18 other departments cancelled" don't hold your breath
 
You're not wrong. PC gaming isn't dead, but... we have had a long stretch of consoles dictating software now. AMD and Intel don't seem too likely to put the boots to Nvidia's dominance any time soon. Nvidia has won the mind share to the point that either of the other two would need a real knockout to really make Nvidia reconsider.

Or another very outside chance: perhaps the AMD/Microsoft AI team-up forces Nvidia to spin off AI into actual dedicated silicon development... freeing up Nvidia's gaming GPU silicon to beef up the bits that matter for gaming over tensor core arrays. I doubt that happens, but ya never know, chiplets on Nvidia's side could make it more likely, if we are talking game chiplets and AI chiplets.
 
Since the beginning of time?
No not really.
For a long stretch of probably a decade and a half, perhaps even more like two decades, it was always stupid to buy the flagship... cause 6 months to a year down the road the new mid-range card would kick it in the nuts. Most of us bought new upper-mid-range cards every year or two. The flagship was the sucker card. Before the 1080 Ti, you could expect a flagship to be top tier for less than a year.

You could argue that GPUs just maxed out after the 1080. I think the reality is Nvidia over-delivered at that point... as AMD really didn't have a counter punch. The two-GPU market meant Nvidia had no reason to compete with itself. So instead of putting out a 2080 that smoked the 1080... meaning a 2070 that would have really kicked it in the nuts... they instead focused on a completely non-game market and started bolting other things onto their silicon. Found a few software excuses to sell to gamers. AND six years later we are where we are. Stagnation. Sure the 4090 is faster than the 3090... but Nvidia has decided they can get away with keeping the cost per rendered frame basically identical (and I guess they can, obviously, based on sales). This hasn't been the status quo since the beginning of time though. Here is hoping Intel's second swing isn't a complete miss. We need more than just 3 players... we need one of the two that isn't Nvidia to actually bloody their nose in sales #s.
Well obviously, not the beginning of time... lol. Video cards were a whirlwind of competition in the late '90s and early 2000s, and things changed fast.

However, I would disagree on things getting stagnant. The 10xx series from Nvidia was awesome, yes, and the 20xx series disappointed most, but Nvidia took a risk trying to introduce different technology into the gaming world. The 30xx series built on that and was a good jump for those who were still on the 10xx series. However, the jump from the 30xx series to the 40xx series felt massive to me. I had a 3090 before and a 4090 now, and the difference was night and day. Plus DLSS 3.0 is an absolute game changer in gaming technology.

Perhaps some of the difference is how one games. At 4K, the top-of-the-line card now is easily the "go-to". It shit stomps my 3090 hands down at 4K; I'd never even look back at this point. I imagine if one is gaming at 1440p or less (yuck... lol), then these generational leaps seem somewhat tame.
 
Or another very outside chance: perhaps the AMD/Microsoft AI team-up forces Nvidia to spin off AI into actual dedicated silicon development... freeing up Nvidia's gaming GPU silicon to beef up the bits that matter for gaming over tensor core arrays. I doubt that happens, but ya never know, chiplets on Nvidia's side could make it more likely, if we are talking game chiplets and AI chiplets.

Microsoft spokesperson Frank Shaw has, however, denied that AMD is involved with Athena.

https://www.theverge.com/2023/5/5/23712242/microsoft-amd-ai-processor-chip-nvidia-gpu-athena-mi300
 
Well obviously, not the beginning of time... lol. Video cards were a whirlwind of competition in the late '90s and early 2000s, and things changed fast.

However, I would disagree on things getting stagnant. The 10xx series from Nvidia was awesome, yes, and the 20xx series disappointed most, but Nvidia took a risk trying to introduce different technology into the gaming world. The 30xx series built on that and was a good jump for those who were still on the 10xx series. However, the jump from the 30xx series to the 40xx series felt massive to me. I had a 3090 before and a 4090 now, and the difference was night and day. Plus DLSS 3.0 is an absolute game changer in gaming technology.

Perhaps some of the difference is how one games. At 4K, the top-of-the-line card now is easily the "go-to". It shit stomps my 3090 hands down at 4K; I'd never even look back at this point. I imagine if one is gaming at 1440p or less (yuck... lol), then these generational leaps seem somewhat tame.

You still believe the tensor cores added in the 2000 series were for gaming? lol
They skipped the real 2000-series silicon (Volta), you could say because of lacking competition... but it was also because they lined it with tensor cores that had zero use in gaming. ZERO. Volta, had it released, would have gotten beaten by the 1080 Ti in gaming, with nothing to do with 1/3 of the chip.
Nvidia knew that even with their R&D spend they couldn't do the two things separately. So the 2080 silicon was designed for both markets and used for both markets. They just had to invent crap to do with the tensor cores for gamers... which is why they came up with 5 or 6 different ideas of things they could do with them. Tensor cores are not the best way to do any of it in gaming, frankly... they just had to do something with that silicon.

The 4000s... are only an upgrade as far as the 4090 goes, as they added a new high bar. However, all the 4000s cost exactly as much more as they give you in performance (with the 4090 maybe giving you a bit more than the price increase). Doesn't feel like much of an upgrade at that point. Ya, we got faster cards that now cost more than high-end workstation cards cost a decade ago. woo woo
 
I mean, I run all of my games at 4K, max settings, and I max out my monitor's refresh rate in every game that has DLSS 3.0 w/ Frame Gen and in a few other games without it, so I'd say that is a hell of an upgrade for my use case. Only 2 or 3 games I play dip below the 144Hz mark on my monitor now.

I suppose it is all relative to the user as well. The 4090 was worth every penny for me to get this kind of performance. My 3090 was decent too, and I'll admit the 2080Ti I had barely outperformed the 1080s I had in SLI, except in games that had RT of course (which were fewer in those days).

I'm not a chip designer or engineer for Nvidia, but they are a company answering to shareholders, and they are out to make money. If they used the same die on the 2000 series between gaming and workstations, so be it. At least they found a way to leverage their tech for gaming.

Honestly, I don't see that ever changing. nVidia is swinging toward AI and we are likely going to get the trickle-down technology from that in our gaming GPUs from here on out for the foreseeable future.
 
Whether or not you like it is beside the point. The game launched with no significant performance or technical issues.
LOL dude, I've played 4 hours and it's crashed 3x. Twice in cinematics, once in gameplay. Maybe even more times hahah
 
I'm curious how there is even a debate on this... If you buy anything less than the best video card available, you can generally expect to sacrifice some settings or resolution in order to gain performance on modern games. This is how it has been since the beginning of time... lol.
I think it is two issues:

1) People got used to the consoles sucking, which limits (many) PC games. Because the majority of games are multi-platform, targeting console hardware is going to be a thing. Well the consoles were kinda shit for a while. The Xbone and PS4 were actually designed as more "mid range" hardware when they launched to be cheaper, and of course as they aged they only got more low end. Even their mid gen update was fairly small, particularly the PS4 Pro. So if you were designing a game for them, it wasn't going to use a ton of VRAM. Sure you could have some larger assets for the PC release, but when you were starting from something less it didn't end up being huge.

Well, that changed with the current gen. Now the consoles are much higher end. Still not near the high end of PCs, but much better. NVMe storage, a MUCH improved CPU and GPU and, of course, double the RAM. So games that target them can use much larger assets, both because of the RAM and the ability to stream them in. If you then push that further to have even higher resolution assets for the PC... well, that's going to need a lot of VRAM.
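To put rough numbers on that, here's a quick sketch; the OS-reservation figures are the commonly cited ballpark values, not official spec-sheet numbers, so treat them as assumptions:

```python
# Rough, commonly cited console memory budgets -- ballpark assumptions only.
last_gen_total, last_gen_os = 8.0, 2.5         # PS4/Xbox One: 8GB shared, ~2.5GB for the OS
current_gen_total, current_gen_os = 16.0, 3.5  # PS5/Series X: 16GB shared, ~3.5GB reserved

last_gen_budget = last_gen_total - last_gen_os          # ~5.5GB for the whole game
current_gen_budget = current_gen_total - current_gen_os  # ~12.5GB for the whole game
print(f"game budget: {last_gen_budget}GB -> {current_gen_budget}GB "
      f"({current_gen_budget / last_gen_budget:.1f}x)")
# A console budget that more than doubles is what lets art teams ship much larger
# assets, and a PC port with even higher-res options lands north of 8GB of VRAM fast.
```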


2) Just general whiners/sour grapes. There are people who want to be able to run everything with the settings maxed out, just kinda as an ego thing. They can't with these games, but they don't want to buy new hardware, particularly since high-end hardware is expensive, so they just get mad rather than turn down the settings. It would also not surprise me if they were some of the exact same people who cried about older games "not using the PC fully" and "being limited by consoles."

I think in terms of this specific thread, the real debate was how initially the game performed poorly on even the best of systems and lacked what would be mainstream features of today (DLSS, Frame Gen).
It was, but people decided to get whiny about it and declare that it was just the PC version that sucked, that the console versions run great and look great, and that PC gaming is dead and we should all get consoles.
 
Case in point. Hey how are those 1% lows on the 3070? You good? How does your argument help people who don't have enough vram? You gonna tell them that ackshually they are using less than what's allocated in VRAM? Will they get a boost when you tell them that? Let's be constructive, specifically what can they do to "tap into" all that VRAM that's not in use when their 1% lows tank?

This isn't even an AMD argument. It's a VRAM argument that AMD just so happened not to fall into on their higher end cards. Knowing AMD it probably wasn't intentional. If the 6700xt had 8GB of RAM and the 3070 had 12 I can guarantee you 100% that we would be having the same argument just reversed. You don't even need to play Jedi Survivor to see it. Here's Forza Horizon 5. I'm guessing this is unoptimized too? The nvidia card running around with damn near double the system RAM use. Why would that be? I can promise you that you're going to see that everywhere, because nVidia has been using your system RAM to compensate for the lack of VRAM.
[Forza Horizon 5 memory usage screenshot]
Here's another unoptimized game from 2021. Called Hitman 3.
[Hitman 3 memory usage screenshot]
And another from **checks notes ** nVidia. o_O
[A Plague Tale: Requiem memory usage screenshot]
None of this is dissimilar to Jedi Survivor.
[Star Wars Jedi: Survivor memory usage screenshot]

Obviously all of these games are unoptimized too, or maybe, just maybe, the card doesn't have enough VRAM. nVidia can't sponsor every game that comes out, so are we just going to hear over and over that the game is unoptimized? But in actuality, the latest argument I've been seeing nVidia prime is "that's from last gen"; that's right, the "last gen-us" of your card with more RAM doesn't really count because it's... "last gen". How TF that's supposed to matter is beyond me. All I know is that these arguments are getting really dumb and it's actually hurting the game industry, when all nVidia needs to do is put enough damn VRAM in the card and call it a day. You wouldn't have to run out here and claim all games are unoptimized.
How come in each screen shot the 3070 looks graphically better than the 6700XT, especially FH4?

As for the system memory usage, especially in games where VRAM usage is well under the 3070's 8GB, that's anyone's guess. Could be driver overhead, it could be the driver requesting more data, etc. If you look at Hitman with 6GB of usage it's clear that they aren't having VRAM capacity issues, and Requiem using 4GB is another example of it not being a VRAM issue.

Something to take into consideration is that the 3070 has a 256-bit bus, the 6700XT a 192-bit bus, so I doubt there are memory bandwidth issues, especially in Hitman/Requiem. As for Jedi Survivor, that game's been patched and there's even a HUB video demonstrating this on the 3070. Forza had weaker performance on the 3070 versus the 6700XT for some odd reason; I know this because I had a 6700XT and watched lots and lots of videos on it. Also, why are you linking a video showing a 6800 versus a 3070? The 6800 is roughly 10-15% faster than the 3070, so the video results are within margin.

So, all in all, the pictures you linked don't show any problem with VRAM, just how each GPU utilizes system memory. I have 32GB of RAM and have seen usage as high as 21-22GB, but that's total system memory, not what's actually being used by the game itself, and in some of the more demanding games I haven't seen higher than 11GB of VRAM usage even though my RAM usage was way up there.

So, at the end of the day, your post is basically complaining about how much RAM is being used by the 3070 system versus the 6700XT system... well, at least the RAM is getting used, what's wrong with that? That's the point of having RAM: to use it.
 
How come in each screen shot the 3070 looks graphically better than the 6700XT, especially FH4?
You sure you want to go there? Hardware Unboxed did a video about what happens in VRAM constrained scenarios on Nvidia cards. It wasn't pretty.
As for the system memory usage, especially in games where VRAM usage is well under the 3070's 8GB, that's anyone's guess. Could be driver overhead, it could be the driver requesting more data, etc.
There's no such thing as 10GB driver overhead.
Something to take into consideration is that the 3070 has a 256-bit bus, the 6700XT a 192-bit bus, so I doubt there are memory bandwidth issues, especially in Hitman/Requiem. As for Jedi Survivor, that game's been patched and there's even a HUB video demonstrating this on the 3070. Forza had weaker performance on the 3070 versus the 6700XT for some odd reason; I know this because I had a 6700XT and watched lots and lots of videos on it. Also, why are you linking a video showing a 6800 versus a 3070? The 6800 is roughly 10-15% faster than the 3070, so the video results are within margin.
We are looking at the memory, not the performance, in the last picture.
So, all in all, the pictures you linked don't show any problem with VRAM,
Literally, the benchmarks don't agree with you. You keep answering with "some unknown reason" even when the numbers are staring you in the face. This has already been benchmarked. I was making a statement, not asking a question we already had an answer to.
So, at the end of the day, your post is basically complaining about how much RAM is being used by the 3070 system versus the 6700XT system... well, at least the RAM is getting used, what's wrong with that? That's the point of having RAM: to use it.
🤦‍♂️
 
More vram has always been a boon for longevity. Only recently have people been arguing against it. Odd, really.
Have people really argued that more of anything is not a boon for longevity (at the cost of a higher price now)? That's doubtful.
And another from **checks notes ** nVidia. o_O
Looking at the 1% low fps being higher and the vram usage being the same between the 2, I am not exactly sure what issue we are looking at, and some of the results are quite different from other benchmarks out there. Forza, for example:
https://www.kitguru.net/components/...on-5-pc-performance-benchmark-30-gpus-tested/
https://www.tweaktown.com/articles/...hmarked-1080p-1440p-4k-test-driven/index.html

Even at Ultra ray tracing, in 4K, it stayed at 8.5GB of VRAM on a 6800 XT back then (and, of note, used more than a 2080 Ti, so it would be interesting to have a bit more data, and maybe from a bit more standard memory config)
https://www.guru3d.com/articles_pag..._graphics_performance_benchmark_review,9.html

Maybe it is a particular part of the game, or the PC not being in dual-channel mode (24GB of RAM?).
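On the dual-channel question, the arithmetic is simple enough to sketch out; the DDR4-3200 speed here is just an assumption, since the setup isn't specified:

```python
# Peak DRAM bandwidth: transfer rate (MT/s) x 8 bytes per 64-bit channel x channels.
def dram_bandwidth_gbs(mt_per_s, channels):
    return mt_per_s * 8 * channels / 1000  # GB/s

print("DDR4-3200, single channel:", dram_bandwidth_gbs(3200, 1), "GB/s")  # 25.6
print("DDR4-3200, dual channel:  ", dram_bandwidth_gbs(3200, 2), "GB/s")  # 51.2
# A 24GB kit is usually 16GB + 8GB; boards typically interleave the matched 16GB
# across both channels and leave the remaining 8GB single-channel ("flex" mode),
# so RAM-heavy scenarios can end up in the slower region.
```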
 
Looking at the 1% low fps being higher and the vram usage being the same between the 2, I am not exactly sure what issue we are looking at
A Plague Tale: Requiem is an outlier, but if you look closely you can see what's happening. The nVidia card is soaking up 2GB more of system RAM. Meanwhile both cards are using almost exactly the same amount of VRAM, which means that if that 2GB is needed to run the game, then the AMD system is going to have to go out to storage and fetch textures more often than the nVidia system will, even though it has VRAM to spare.

This is probably why.
[attached screenshot]

Even at Ultra ray tracing, in 4K, it stayed at 8.5GB of VRAM on a 6800 XT back then (and, of note, used more than a 2080 Ti, so it would be interesting to have a bit more data, and maybe from a bit more standard memory config)
https://www.guru3d.com/articles_pag..._graphics_performance_benchmark_review,9.html

Maybe it is a particular part of the game, or the PC not being in dual-channel mode (24GB of RAM?).
Ray tracing is going to change things for sure. But they don't show you lows which would give you more info.

But as someone else said it's going to be hard to compare reviews.
 
A particular game's performance can change dramatically in a single driver update. We have seen it: 10%, even 20%.

A game update can bring a huge performance upgrade overall, or for particular GPUs, 50%+. Overnight.

Of course it can go the other way as well, as in a performance degradation.

Comparing different reviews with different dates, drivers, setups, game patches, and methods can get rather foolish, and be totally unrepresentative of your experience.

The information, taken in the context of the review, can have some limited use for deciding what is best for one's needs. Just like old newspapers, the story can get updated, except that the old, out-of-date version that doesn't reflect the current state gets saved for eternity, while newspapers get thrown away.
 