Jedi Survivor is the best showcase of a looming problem for PC players

LOL at complaining about system RAM usage... 32GB of DDR4 is around $60. No reason not to have that nowadays for a gamer.
LOL at the person who doesn't realize that system memory is slower than VRAM and how that can cause a host of issues. But you do you.
 
You sure you want to go there? Hardware Unboxed did a video about what happens in VRAM constrained scenarios on Nvidia cards. It wasn't pretty.

There's no such thing as 10GB driver overhead.

We are looking at the memory, not the performance, in the last picture.

Literally, the benchmarks don't agree with you. You keep answering with "some unknown reason" even when the numbers are staring you in the face. This has already been benchmarked. I was making a statement, not asking a question we already had an answer to.

🤦‍♂️
Again though, you didn't address my original comment--how come the 3070 images looked better than the 6700XT images?

As for the VRAM... look, in the initial three, the only one coming close to hitting the VRAM cap was Forza Horizon 5; the other two were at 4GB and 5GB respectively. Last I checked, the 3070 has 8GB of VRAM, so again, why would the Nvidia-based system require so much system memory when it's not near the cap? It has a bigger memory bus than the 6700XT, so memory bandwidth couldn't be the issue, and the graphics settings are the "same," so why would the Nvidia card use 2x the system memory? There's absolutely nothing staring me in the face in those screenshots. Truthfully, I don't get the point of them.

As for the FH5 results, after watching six different videos: yes, it uses a lot, but it doesn't stutter or have 1% low issues, and it also happens to use the same amount of VRAM at 4K that it does at 1440p, which tells me that FH5 is allocating that VRAM, not necessarily using it. As for the frame rate results shown in your screenshots, the only time it averages in the 60s is at 4K, not 1440p. At 1440p it averages between 88-98 FPS depending on the tester's CPU, since not all of them used the same one; again, this is based on six videos' worth of data. Don't believe me? Feel free to look it up, I'm sure you'll find the same results I've found.

Honestly, I think you went and cherry-picked these screenshots to try and make a point, but what point exactly were you trying to make?

While I will agree the 3070 got the shaft with 8GB of VRAM, it was released at a time when games weren't normally using that much VRAM, and considering the card is targeted at 1440p, that only reinforces the point. Even then, not a lot of games are actually "using" over 8GB of VRAM unless they're not optimized. Hell, from what I've seen through all these different conversations, videos, research, etc., GPUs with higher VRAM pools tend to allocate more VRAM, showing higher VRAM usage, while cards like the 4070Ti, which is what I have, show very different results.

As for this whole VRAM debacle... why is it that a 4070Ti walks all over a 6950XT, or lands in between a 3090/3090Ti, when it has a 192-bit bus/12GB of VRAM versus their 16GB/256-bit and 24GB/384-bit bus, even at 4K? Why is it that only a couple of games actually post better 1% lows while still averaging less FPS? Because at the end of the day, some games are programmed differently, and ultimately games are designed with console hardware in mind. While texture quality doesn't impact performance, I seriously doubt we'll see consoles using anything more than 2K textures even at the highest quality, as anything above that takes VRAM away from other graphical features, the OS, etc. So, in a worst-case scenario, even the most graphically intensive PS5 game might end up using MAYBE 10-11GB of VRAM, but the beauty of being on PC is that you can lower the texture quality to High, or even Medium, probably not notice massive differences, and not have to worry about VRAM being an issue.

I'm not arguing against adding more VRAM to GPUs; like I said, I wouldn't mind having more on my 4070Ti, but I also feel like it would end up being wasted, because by the time that 16GB gets put to use, I'm either going to be facing an unoptimized game that runs like shit, or a fully optimized game that not even a 4090 could run properly. So, honestly, I'm indifferent. While it would be nice, I'm not going to base my opinion, or give out advice to people, purely on something like VRAM size, unless of course they're talking about gaming at 4K, and even then, I'd tell them the major benefit of having more VRAM isn't the amount you have, but the memory bus it comes with, which is beneficial at higher resolutions.

Lastly, take a look at the most played games on Steam/Epic and tell me how many of them actually use anything close to 8GB. Running 2023 games at Ultra settings on a mid-range GPU released in 2021 isn't going to give you the performance you'd expect, and that's been the case with GPUs as far back as I can remember: mid-range GPUs tend to give great performance in AAA games for a good year or two before you need to start adjusting settings.
 
Again though, you didn't address my original comment--how come the 3070 images looked better than the 6700XT images?
Easy: they don't.
As for the VRAM... look, in the initial three, the only one coming close to hitting the VRAM cap was Forza Horizon 5; the other two were at 4GB and 5GB respectively. Last I checked, the 3070 has 8GB of VRAM, so again, why would the Nvidia-based system require so much system memory when it's not near the cap? It has a bigger memory bus than the 6700XT, so memory bandwidth couldn't be the issue, and the graphics settings are the "same," so why would the Nvidia card use 2x the system memory? There's absolutely nothing staring me in the face in those screenshots. Truthfully, I don't get the point of them.
The "cap" can be set to anything. It's not just the 3070 that would benefit.
As for the FH5 results, after watching six different videos: yes, it uses a lot, but it doesn't stutter or have 1% low issues,
Pretty darn sure the 3070 ain't on top in that benchmark.
Honestly, I think you went and cherry-picked these screenshots to try and make a point, but what point exactly were you trying to make?
You could use the same video; the problem persists in every game: huge amounts of system memory being used to cover up a lack of VRAM.
While I will agree the 3070 got the shaft with 8GB of VRAM, it was released at a time when games weren't normally using that much VRAM, and considering the card is targeted at 1440p, that only reinforces the point. Even then, not a lot of games are actually "using" over 8GB of VRAM unless they're not optimized. Hell, from what I've seen through all these different conversations, videos, research, etc., GPUs with higher VRAM pools tend to allocate more VRAM, showing higher VRAM usage, while cards like the 4070Ti, which is what I have, show very different results.
Literally every AAA game being released today is going over 8GB.
As for this whole VRAM debacle... why is it that a 4070Ti walks all over a 6950XT, or lands in between a 3090/3090Ti, when it has a 192-bit bus/12GB of VRAM versus their 16GB/256-bit and 24GB/384-bit bus, even at 4K?
Because it's got quite a bit more cache than a 3090.
Why is it that only a couple of games actually post better 1% lows while still averaging less FPS?
Have we regressed to the point that we have forgotten that FPS only tells part of the story? You can have higher FPS and crappy lows and that is something I wouldn't want.
Lastly, take a look at the most played games on Steam/Epic and tell me how many of them actually use anything close to 8GB. Running 2023 games at Ultra settings on a mid-range GPU released in 2021 isn't going to give you the performance you'd expect, and that's been the case with GPUs as far back as I can remember: mid-range GPUs tend to give great performance in AAA games for a good year or two before you need to start adjusting settings.
AAA? Most of them and definitely at 4K. I mean I can run through my library but it's quite a bit.
 
If you're keeping your card for a long time, resale value probably isn't a big concern.
I think it depends on what constitutes a "long time". My 2080ti is almost 4 years old and I'm seeing that people are still buying them for $350! That's about 40% of what I bought it for all those years ago (open box at MC). I'm tempted to sell it for that while I can and grab a 7900/4080.
 
I think it depends on what constitutes a "long time". My 2080ti is almost 4 years old and I'm seeing that people are still buying them for $350! That's about 40% of what I bought it for all those years ago (open box at MC). I'm tempted to sell it for that while I can and grab a 7900/4080.
I'm thinking 5+ years. My brother just upgraded, he was still using a 970 4gb. Can't expect to get much from that. You also got a deal on open box, which is not typical, normally you would have paid ~$1200 for that new.
 
I'm thinking 5+ years. My brother just upgraded, he was still using a 970 4gb. Can't expect to get much from that. You also got a deal on open box, which is not typical, normally you would have paid ~$1200 for that new.
Yeah, I realize that but at the same time I've seen a few open-box 7900XTXs show up at MC for $830-860 over the past few weeks...
 
Easy: they don't.

The "cap" can be set to anything. It's not just the 3070 that would benefit.

Pretty darn sure the 3070 ain't on top in that benchmark.

You could use the same video; the problem persists in every game: huge amounts of system memory being used to cover up a lack of VRAM.

Literally every AAA game being released today is going over 8GB.

Because it's got quite a bit more cache than a 3090.

Have we regressed to the point that we have forgotten that FPS only tells part of the story? You can have higher FPS and crappy lows and that is something I wouldn't want.

AAA? Most of them and definitely at 4K. I mean I can run through my library but it's quite a bit.
Actually, no. Check steamdb to see the games with the highest player bases, most played.

https://steamdb.info/

Reread what I wrote about games that use 8GB. I said not a lot of them do. You think maybe 10 games constitutes a lot? Even then, once you drop their settings from Ultra to High, that usage drops.

Average frame rate is just as important as 1% lows; if your average frame rate is garbage, what good are better 1% lows?

Explain how it makes sense that a card that isn't even being VRAM-capped to begin with is using 2x the system memory of a card that has only 4GB more. Compensating for the lack of VRAM would indicate that the games are actually using more VRAM than what the card has, but in your screenshot, using 5GB out of 8GB wouldn't require phenomenally more system memory. So, again, how is it compensating for something it's not even fully using? In FH5 I could understand it, but for the next two, again, what you're trying to say doesn't make sense.
 
Easy: they don't.

The "cap" can be set to anything. It's not just the 3070 that would benefit.

Pretty darn sure the 3070 ain't on top in that benchmark.

You could use the same video; the problem persists in every game: huge amounts of system memory being used to cover up a lack of VRAM.

Literally every AAA game being released today is going over 8GB.

Because it's got quite a bit more cache than a 3090.

Have we regressed to the point that we have forgotten that FPS only tells part of the story? You can have higher FPS and crappy lows and that is something I wouldn't want.

AAA? Most of them and definitely at 4K. I mean I can run through my library but it's quite a bit.
Agreed, averages mean crap; you can have high averages and an utter stutter fest. 1% lows do not affect the averages that much, since they are only 1% of the frames, but if those 1% are in the sub-30fps range (or even just radically departing frame rates, 100fps to 50fps, i.e. large frame-time differences), they can wreak havoc on smoothness and the timing of shots or movement.
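
To put numbers on this, here's a minimal sketch of how average FPS and "1% low" FPS are commonly derived from per-frame times (the exact definition of the 1% low varies a bit between review tools, and the names here are purely illustrative):

Code:
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

struct FrameStats { double avgFps; double onePercentLowFps; };

FrameStats computeStats(std::vector<double> frameTimesMs)
{
    std::sort(frameTimesMs.begin(), frameTimesMs.end());               // fastest -> slowest
    const double totalMs = std::accumulate(frameTimesMs.begin(), frameTimesMs.end(), 0.0);
    const double avgFps  = 1000.0 * frameTimesMs.size() / totalMs;

    // One common definition: the average FPS over the slowest 1% of frames.
    const size_t worstCount = std::max<size_t>(1, frameTimesMs.size() / 100);
    const double worstMs = std::accumulate(frameTimesMs.end() - worstCount, frameTimesMs.end(), 0.0);
    return { avgFps, 1000.0 * worstCount / worstMs };
}

int main()
{
    // 99 smooth ~10 ms frames plus a single 100 ms hitch: the average barely
    // moves (~92 fps), but the 1% low collapses to 10 fps -- the stutter the
    // average hides.
    std::vector<double> frames(99, 10.0);
    frames.push_back(100.0);
    const FrameStats s = computeStats(frames);
    std::printf("avg: %.1f fps, 1%% low: %.1f fps\n", s.avgFps, s.onePercentLowFps);
}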
 
I think it depends on what constitutes a "long time". My 2080ti is almost 4 years old and I'm seeing that people are still buying them for $350! That's about 40% of what I bought it for all those years ago (open box at MC). I'm tempted to sell it for that while I can and grab a 7900/4080.
I think what is keeping the 2080ti around $350 is that it's, for better or worse, the best 1440p Nvidia GPU under $400 right now. It has enough VRAM and compares to the 6700XT, which is similar in price. So if someone has a max $300-350 budget and wants Nvidia, the used 2080ti is the one to get. I almost bought one last week, but I'm looking more like $300, not $350.
 
Actually, no. Check steamdb to see the games with the highest player bases, most played.

https://steamdb.info/

Reread what I wrote about games that use 8GB. I said not a lot of them do. You think maybe 10 games constitutes a lot? Even then, once you drop their settings from Ultra to High, that usage drops.
So your whole thing is that the most popular games are below 8GB? Errr, OK. I don't know why that matters, since your WoWs and Dota 2s are always going to hit the top spots because they are either cheap to play or, in the case of Dota, completely free. We're enthusiasts, and most of us aren't broke. So that's who I'm talking about.
Average frame rate is just as important as 1% lows; if your average frame rate is garbage, what good are better 1% lows?
Nope. Now, it has to be within reason, but if you gave me a game that starts at 30 and never goes below 30 vs. a game that starts at 60 and frequently drops, I'm going to pick the locked 30 option. It's going to give a much better experience than if the FPS is all over the place.
Explain how it makes sense that a card that isn't even being VRAM-capped to begin with is using 2x the system memory of a card that has only 4GB more. Compensating for the lack of VRAM would indicate that the games are actually using more VRAM than what the card has, but in your screenshot, using 5GB out of 8GB wouldn't require phenomenally more system memory. So, again, how is it compensating for something it's not even fully using? In FH5 I could understand it, but for the next two, again, what you're trying to say doesn't make sense.
As I stated before, where things get stored, and how much, is completely programmable. If I had a lineup that included cards like the GTX 1050, for instance, which only has 4GB, would I set the cap at 4GB and then stream in textures as needed from system memory to make up the deficit? Yup. What's the benefit? The game is going to run better than if it hits a brick wall and then has to immediately go out to storage to load in the next batch of textures. When Hogwarts released, for instance, this is exactly what you saw. Doing this ensures that all of the cards within your lineup will be able to play the game without having a horrible experience. The reason to do this seems pretty obvious to me.
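
For anyone curious what that kind of "cap" looks like, here's a rough sketch of the idea being described: an engine-side VRAM budget with system memory as the next tier, so a miss doesn't have to go straight to storage. The type and the numbers are hypothetical, not from any particular engine:

Code:
#include <cstdint>
#include <cstdio>

enum class Tier { Vram, SystemRam };

struct StreamingBudget {
    uint64_t vramBudget;                    // e.g. capped at 4 GB for the smallest card in the lineup
    uint64_t vramUsed    = 0;
    uint64_t sysRamCache = 0;

    // Decide where a texture set lives when it's requested.
    Tier place(uint64_t sizeBytes) {
        if (vramUsed + sizeBytes <= vramBudget) { vramUsed += sizeBytes; return Tier::Vram; }
        sysRamCache += sizeBytes;           // keep it hot in system RAM instead
        return Tier::SystemRam;             // copied into VRAM only when actually needed
    }
};

int main() {
    StreamingBudget b{ 4ull << 30 };        // 4 GB cap, regardless of how much VRAM the card has
    const char* names[] = { "VRAM", "system RAM" };
    for (int i = 0; i < 12; ++i) {          // stream twelve 512 MB texture sets
        Tier t = b.place(512ull << 20);
        std::printf("set %2d -> %s\n", i, names[static_cast<int>(t)]);
    }
    std::printf("system RAM cache: %llu MB\n", (unsigned long long)(b.sysRamCache >> 20));
}

With the numbers above, a third of the texture sets end up in the RAM cache, which is the kind of thing that shows up as inflated system memory usage in monitoring overlays.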
 
So your whole thing is that the most popular games are below 8GB? Errr, OK. I don't know why that matters, since your WoWs and Dota 2s are always going to hit the top spots because they are either cheap to play or, in the case of Dota, completely free. We're enthusiasts, and most of us aren't broke. So that's who I'm talking about.

Nope. Now, it has to be within reason, but if you gave me a game that starts at 30 and never goes below 30 vs. a game that starts at 60 and frequently drops, I'm going to pick the locked 30 option. It's going to give a much better experience than if the FPS is all over the place.

As I stated before, where things get stored, and how much, is completely programmable. If I had a lineup that included cards like the GTX 1050, for instance, which only has 4GB, would I set the cap at 4GB and then stream in textures as needed from system memory to make up the deficit? Yup. What's the benefit? The game is going to run better than if it hits a brick wall and then has to immediately go out to storage to load in the next batch of textures. When Hogwarts released, for instance, this is exactly what you saw. Doing this ensures that all of the cards within your lineup will be able to play the game without having a horrible experience. The reason to do this seems pretty obvious to me.
Yes, and that's why AMD and Nvidia are still going to release 8GB options, and why they're going to still sell like gangbusters. I'm not referring to myself, you, or a good portion of [H]. Until those games start seeing higher requirements, 8GB is going to be a mainstream option for the mainstream masses who game on PC.

In all fairness, IF your GPU is only managing 30 FPS in any title, chances are it's not a locked 30 FPS, unless you intentionally dropped your FPS from something higher to maintain stability. So, at 30 FPS you'll most likely see 1% lows in the teens to mid-20s.

Now, that makes sense. Although, why would a card with only 50% more VRAM require that much less system memory? If games are being programmed to store, let's say, 20GB of texture data in memory due to VRAM limitations on an 8GB GPU, why would the 12GB GPU only require half that? In the case of FH5, it's using 22GB of system memory with the 3070, while the 6700XT only uses 10GB; then in Hitman, again, the 6700XT is using only half the system memory of the 3070. Then you go down the list to Plague Tale Requiem and the difference is only 2GB of system memory usage. The math there doesn't add up. It's either over-compensating, or there's something in the game's code that is telling it to pull more than it actually needs for the 3070, since system memory and VRAM usage together equal 30GB, while the 6700XT totals only 22GB in Forza Horizon 5. Either way, if this is the case, that games are programmed to have memory at the ready to prevent the "brick-wall" effect, then even people with 12GB cards should be fine for quite some time. The 3070 doesn't seem to be having too many issues with High quality settings in newer AAA games; it's only when you set it to the game's highest settings. TLOU and Jedi Survivor are examples of this.

Again though, I'm not advocating for more 8GB cards, or for Nvidia to keep releasing cards with bare-minimum VRAM; I'm just saying that more VRAM doesn't always equal better performance, otherwise the 4070Ti wouldn't consistently beat a 3090 or 6950XT, even at 4K resolutions. 1% lows will always happen, games tend to be coded differently, and stuttering, unfortunately, will always seem to be a part of PC gaming as long as developers keep on being lazy; it doesn't matter how much VRAM you have, a poorly coded game will always stutter. :(

Addendum: I honestly wish we weren't getting games in 2023 that look straight out of 2018-2019. It only makes matters worse when they release with graphics from 2018 and use vastly more resources than they should, but I know they do get optimized eventually, hence why I'm not concerned about my 4070Ti's 12GB of VRAM.
 
I'd like NV to add more VRAM. Yes! Can be used to improve everything. Please more sir. That's one thing.

I'd also like developers to return to the time when they did better asset analysis and working-set management.

These "AAA" titles don't actually look better than things which existed years ago. Random example - The Division 1. Still looks amazing compared to current titles. 2016.
 
I'd like NV to add more VRAM. Yes! Can be used to improve everything. Please more sir. That's one thing.

I'd also like developers to return to the time when they did better asset analysis and working-set management.

These "AAA" titles don't actually look better than things which existed years ago. Random example - The Division 1. Still looks amazing compared to current titles. 2016.
Now, with the release of Tears of the Kingdom, it's even more apparent how lazy developers have gotten. While both Zelda games are not without issues, with minor frame drops here and there, the fact that such a gargantuan game exists and plays on hardware like Nvidia's Tegra X1 (a SoC released in 2015, no less) is a testament to true optimization.
 
Now, with the release of Tears of the Kingdom, it's even more apparent how lazy developers have gotten. While both Zelda games are not without issues, with minor frame drops here and there, the fact that such a gargantuan game exists and plays on hardware like Nvidia's Tegra X1 (a SoC released in 2015, no less) is a testament to true optimization.
I mean, it is a well-optimized game (my girlfriend has been playing it). However, there are things that you notice about it right away:

1) It is locked to 30fps max, and it drops down to 20fps not infrequently.
2) Game runs at 1600x900 max, and frequently lowers the resolution dynamically to 1280x720 (maybe lower) depending on what is happening.
3) Texture resolution is quite low, many things are flat-colored/cel-shaded, and the things that are textured have pretty low-res textures overall.

Some of these are things that gamers scream about, a lot, at least if it isn't on a switch. Some of these, like FPS, are things people are screaming about for Jedi Survivor and they are mad it isn't doing 60, not 30.

When you compare it to Jedi Survivor, the graphics, the world detail, the mob detail, the animation, etc, etc are not even in the same ballpark. So sure, Jedi Survivor is not nearly as optimized as it could or should be, but it is also a case that you are comparing apples to steak.
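
For what it's worth, the dynamic resolution mentioned in point 2 above is usually just a feedback loop on frame time. A rough sketch of the general technique (purely illustrative; this is not how Nintendo's implementation actually works):

Code:
#include <algorithm>
#include <cstdio>

struct DynResScaler {
    double targetMs;                        // e.g. 33.3 ms for a 30 fps cap
    double scale = 1.0;                     // fraction of the maximum render width

    int adjust(double lastFrameMs, int maxWidth) {
        // Nudge resolution down when over budget, creep back up when under.
        if (lastFrameMs > targetMs) scale -= 0.05;
        else                        scale += 0.02;
        scale = std::clamp(scale, 0.8, 1.0); // roughly 1600x900 down to 1280x720
        return static_cast<int>(maxWidth * scale);
    }
};

int main() {
    DynResScaler drs{ 33.3 };
    const double frameMs[] = { 30.0, 36.0, 38.0, 35.0, 31.0, 29.0 };
    for (double ms : frameMs)
        std::printf("frame %.1f ms -> render width %d\n", ms, drs.adjust(ms, 1600));
}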
 
I mean, it is a well-optimized game (my girlfriend has been playing it). However, there are things that you notice about it right away:

1) It is locked to 30fps max, and it drops down to 20fps not infrequently.
2) Game runs at 1600x900 max, and frequently lowers the resolution dynamically to 1280x720 (maybe lower) depending on what is happening.
3) Texture resolution is quite low, many things are flat-colored/cel-shaded, and the things that are textured have pretty low-res textures overall.

Some of these are things that gamers scream about, a lot, at least if it isn't on a switch. Some of these, like FPS, are things people are screaming about for Jedi Survivor and they are mad it isn't doing 60, not 30.

When you compare it to Jedi Survivor, the graphics, the world detail, the mob detail, the animation, etc, etc are not even in the same ballpark. So sure, Jedi Survivor is not nearly as optimized as it could or should be, but it is also a case that you are comparing apples to steak.
We know all that. Did you miss the fact that I clearly stated it runs on a Tegra X1? A mobile chip from 2015, in other words a 7-year-old mobile chip. Of course there are going to be sacrifices, but there is absolutely no doubt that TotK is insanely more optimized than Jedi Survivor. The mere fact that you can jump from a Sky Island all the way down to the Depths with absolutely no loading screen on a geriatric mobile chip says it all.
 
Yes, and that's why AMD and Nvidia are still going to release 8GB options, and why they're going to still sell like gangbusters. I'm not referring to myself, you, or a good portion of [H]. Until those games start seeing higher requirements, 8GB is going to be a mainstream option for the mainstream masses who game on PC.
For the mainstream I can see it. But people are just going to have to realize that the latest engines mean they are going to have to turn down settings, even at 1080p in some instances.
In all fairness, IF your GPU is only managing 30 FPS in any title, chances are it's not a locked 30 FPS, unless you intentionally dropped your FPS from something higher to maintain stability. So, at 30 FPS you'll most likely see 1% lows in the teens to mid-20s.
1% lows are not directly linked to FPS.
Now, that makes sense. Although, why would a card with only 50% more VRAM require that much less system memory? If games are being programmed to store, let's say, 20GB of texture data in memory due to VRAM limitations on an 8GB GPU, why would the 12GB GPU only require half that? In the case of FH5, it's using 22GB of system memory with the 3070, while the 6700XT only uses 10GB; then in Hitman, again, the 6700XT is using only half the system memory of the 3070. Then you go down the list to Plague Tale Requiem and the difference is only 2GB of system memory usage. The math there doesn't add up. It's either over-compensating, or there's something in the game's code that is telling it to pull more than it actually needs for the 3070, since system memory and VRAM usage together equal 30GB, while the 6700XT totals only 22GB in Forza Horizon 5. Either way, if this is the case, that games are programmed to have memory at the ready to prevent the "brick-wall" effect, then even people with 12GB cards should be fine for quite some time. The 3070 doesn't seem to be having too many issues with High quality settings in newer AAA games; it's only when you set it to the game's highest settings. TLOU and Jedi Survivor are examples of this.
Every type of game has a different performance profile. Some of the toughest games to handle from a performance perspective are racing games. LOD requirements are far higher in racing games because the draw distance is so far. Games that are not open and have a bunch of corridors, like Plague Tale, RE, and Hitman, require far less LOD at a distance, so there's less to load. Obscuring what gets loaded in corridor games is also far easier to control. FH5 is incredibly open, with tons of roads you can take, so it's at the extreme end of pre-caching.
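
A toy example of why draw distance dominates the memory footprint: everything inside the view distance needs at least some LOD resident, and the visible area grows with the square of that distance. The densities and cutoffs below are made-up numbers purely for illustration:

Code:
#include <cstdio>

// Pick a LOD index from distance; closer objects get the detailed (heavier) mesh.
int selectLod(float distance) {
    if (distance < 50.f)  return 0;   // full detail
    if (distance < 200.f) return 1;
    if (distance < 800.f) return 2;
    return 3;                         // impostor / billboard
}

int main() {
    const float corridorViewDist  = 150.f;   // metres
    const float openRacerViewDist = 2000.f;
    const float objectsPerKm2     = 500.f;   // invented density, same for both
    auto visibleObjects = [&](float viewDist) {
        const float areaKm2 = 3.14159f * (viewDist / 1000.f) * (viewDist / 1000.f);
        return objectsPerKm2 * areaKm2;
    };
    std::printf("corridor game: ~%.0f objects resident\n", visibleObjects(corridorViewDist));
    std::printf("open racer   : ~%.0f objects resident\n", visibleObjects(openRacerViewDist));
    std::printf("near object -> LOD %d, far object -> LOD %d\n", selectLod(30.f), selectLod(1500.f));
}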

The bandwidth difference between system memory and VRAM is substantial. VRAM is 4 times faster than going out to system memory, and system memory is 10 times faster than storage. That's why the consoles' unified memory is such a benefit.
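
Back-of-the-envelope, using the ratios above (the absolute VRAM bandwidth is an assumption for illustration, not a measurement), here's what moving a 2GB batch of assets costs from each tier:

Code:
#include <cstdio>

int main() {
    const double assetGB    = 2.0;                  // texture batch to move
    const double vramGBs    = 400.0;                // assumed local VRAM bandwidth
    const double sysRamGBs  = vramGBs / 4.0;        // ~4x slower than VRAM
    const double storageGBs = sysRamGBs / 10.0;     // ~10x slower than system RAM

    std::printf("from VRAM      : %6.1f ms\n", 1000.0 * assetGB / vramGBs);
    std::printf("from system RAM: %6.1f ms\n", 1000.0 * assetGB / sysRamGBs);
    std::printf("from storage   : %6.1f ms\n", 1000.0 * assetGB / storageGBs);
    // Against a ~33 ms frame budget at 30 fps, the storage path blows the
    // budget many times over, which is why pre-caching in system RAM helps.
}
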
Again though, I'm not advocating for more 8GB cards, or for Nvidia to keep releasing cards with bare-minimum VRAM; I'm just saying that more VRAM doesn't always equal better performance, otherwise the 4070Ti wouldn't consistently beat a 3090 or 6950XT, even at 4K resolutions.
Never said always. Always is a word that never should be used in anything technical.
1% lows will always happen, games tend to be coded differently, and stuttering, unfortunately, will always seem to be a part of PC gaming as long as developers keep on being lazy; it doesn't matter how much VRAM you have, a poorly coded game will always stutter. :(
This is a misnomer. They aren't really being lazy. Time frames in game development are controlled by producers, not development. No one wants to release a crappy game from a development and testing POV. Secondarily, with ports, consoles drive development. This has been the case for a very long time. The only pet peeve I have with consoles is the previous generation, as it was not nearly as much of a jump as it should have been. But this current generation isn't something to be sneezed at. All of them are walking around with power equivalent to a 3070 or a 6700 non-XT, but in a unified setup. Honestly, you can't recreate that setup on PC. The only way to do it is to create a unified space LIKE VRAM and have everything come out of that. So if a video card is moving about the cabin with 8GB of VRAM, which, argue if you want, is less than what is possible via console, then expect to see problems.

The only other option is what we've been talking about: caching stuff in system memory and streaming it out from there to VRAM which is what nVidia is doing. It's not inherently bad, but it requires that you get out in front of game development.
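
A sketch of the "get out in front of it" part: predict what's needed, then trickle uploads from the system RAM cache into VRAM a little per frame instead of stalling on a miss. Names and numbers here are hypothetical:

Code:
#include <cstdio>
#include <deque>
#include <string>

struct UploadRequest { std::string asset; double sizeMB; };

class VramPrefetcher {
    std::deque<UploadRequest> queue;
public:
    // Called by gameplay code when it predicts what's needed a few seconds out.
    void predict(const std::string& asset, double sizeMB) { queue.push_back({asset, sizeMB}); }

    // Called once per frame with a small budget so uploads never cause a hitch.
    void pump(double budgetMB) {
        while (!queue.empty() && budgetMB > 0.0) {
            UploadRequest& r = queue.front();
            const double chunk = r.sizeMB < budgetMB ? r.sizeMB : budgetMB;
            r.sizeMB -= chunk; budgetMB -= chunk;           // pretend-copy RAM -> VRAM
            std::printf("uploading %s (%.0f MB left)\n", r.asset.c_str(), r.sizeMB);
            if (r.sizeMB <= 0.0) queue.pop_front();
        }
    }
};

int main() {
    VramPrefetcher p;
    p.predict("next_biome_textures", 600.0);
    for (int frame = 0; frame < 5; ++frame) p.pump(150.0);  // 150 MB per-frame budget
}
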
Addendum: I honestly wish we weren't getting games in 2023 that look straight out of 2018-2019. It only makes matters worse when they release with graphics from 2018 and use vastly more resources than they should, but I know they do get optimized eventually, hence why I'm not concerned about my 4070Ti's 12GB of VRAM.
What game is that?
 
Are you serious?? A) There are FPS drops EVERYWHERE in Zelda: Tears of the Kingdom. The draw-in distance is really close, and when it's not, everything is covered in mist. B) The level of detail is... just come the fuck on, not even 2016. The color palette might be 16-bit, but I'm almost doubting that. It's definitely restricted to preserve bandwidth. You're comparing this 720p game (because if nVidia drops the resolution then there's no problem at all) with Virtua Fighter 2 shadows:

[screenshot attachment]
With this (they rendered arm hair) there's also ray-tracing on the PS5 and Xbox versions:
[screenshot attachment]
I have the Zelda game; there's loading in tons of places. You're just not being honest here, not at all, but it shows how you're approaching this, that's for sure. There's also the issue of there being NO PC PORTS. From a development standpoint it's not even an apples-to-oranges comparison. It's more like comparing balloons to grape jelly.

Oh wait, I had to check Digital Foundry (nVidia home base) because your summary just seemed like marketing, and wouldn't you know, you're repeating Digital Foundry word for word (I literally found the exact spot from the nVidia marketing slick in their video).

So I'm dealing with nVidia marketing directly. Hi nVidia? How are sales?
EA can't go after nVidia for defamation, but other developers should for their "everything is so unoptimized" marketing nonsense.

Holy shit, the fact that I have to keep saying this is mind-boggling. Mobile SoC FROM 2015. How is this going over your heads?

One last time for the haters in the back: a 7-year-old mobile SoC. Still fewer issues than the turds being released in 2023, because of proper optimization from their devs.

From here on out, I will be ignoring the obvious trolls.
 
We know all that. Did you miss the fact that I clearly stated it runs on a Tegra X1? A mobile chip from 2015, in other words a 7-year-old mobile chip. Of course there are going to be sacrifices, but there is absolutely no doubt that TotK is insanely more optimized than Jedi Survivor. The mere fact that you can jump from a Sky Island all the way down to the Depths with absolutely no loading screen on a geriatric mobile chip says it all.
Yes... and it LOOKS LIKE IT. It looks like a game that was designed for an old, low-power, mobile chip. It's good for what it is, and that's great, but there were a lot of tradeoffs made, not the least of which being low resolution (lower than the Switch's native) and low FPS. Jedi Survivor is the opposite. It is VERY modern, VERY detailed, VERY shiny, and unsurprisingly demanding because of that. Now, it also clearly is not as well coded as it should be, the biggest evidence being that it doesn't make effective use of all the cores modern processors offer. However, the fact that a game that looks like this noms up VRAM like mad and needs a chonk of it to run at 1440p or above is not a surprise to me.

Basically the games are so different in their targets it just doesn't make sense to me to compare them.
 
Holy shit, the fact that I have to keep saying this is mind-boggling. Mobile SoC FROM 2015. How is this going over your heads?

One last time for the haters in the back: a 7-year-old mobile SoC. Still fewer issues than the turds being released in 2023, because of proper optimization from their devs.

From here on out, I will be ignoring the obvious trolls.

It's kind of hard to be impressed when it looks like a 7-year-old mobile game.
 
It's kind of hard to be impressed when it looks like a 7-year-old mobile game.
I mean I will say it is impressive for what it is. The graphics are good for an open world Switch game. But it for sure looks like it. It looks like a Switch game, it runs like a Switch game. It is not some amazing feat of coding that makes me go "This can't possibly be a Switch!"

They did a good job on it though.
 
It is apparent to me now how little most people know of actual game development. My mind is blown at just how oblivious most gamers actually are.

[Ron Swanson GIF]
 
It is apparent to me now how little most people know of actual game development. My mind is blown at just how oblivious most gamers actually are.
Why would a gamer know a ton about game development, any more than a driver would know about automobile engineering?

However since you imply that you DO know so much, what are your credentials? What is your training and experience that gives you extensive insight into game development?

More to the point: What, specifically, needs to be done to optimize Jedi Survivor? What are the specific things they need to do to improve the framerate, or lower the VRAM usage, or fix other issues you have with it? Because, personally, I feel like the "this needs more optimization" gets thrown around by a lot of gamers a lot of the time as a generic term, as though "optimization" is almost like a magic wand that if it was just waved would make everything better. That's not how it works, of course. So if you are well informed about how it is done, let's hear what specific things they needed to do, but didn't, or how they need to reimplement things.
 
Tears of the Kingdom runs on an ancient relic like the Tegra X1, in a system that has 4GB of RAM and still manages to feel smooth as butter when playing it. Does the game look bad by today's standards? Depends on who you ask, but to me it looks very good considering the hardware running it, and the fact that Nintendo managed to squeeze all of it onto an 18GB cartridge of all things while these newer games are requiring 4-5x that amount.. it's astounding. This is part of the reason I respect Nintendo--they don't fuck around with their first party, or even second party titles when it comes to optimization and squeezing as much as they can from what they have available and managing to make their games entertaining despite having dated graphics.

What game is that?
Any number of games running the UE4 Engine. Just look at Batman Arkham Knight, that game was released in 2015 and gives games released 8 years later a run for their money. A lot of these games aren't living up to the hardware cost they demand just to run them optimally. Doom Eternal is an example of a non-UE4 game that looks phenomenal, yet runs on a potato, then there's both Witcher 3 and CP2077 running on the RED engine that just blow every UE4 title out of the water. Then you got RE4, which is running on Capcom's engine, and that game looks and runs great on all manner of hardware. UE4 games just look so dated, hence my remark about games looking like they belong back in 2018.
 
More to the point: What, specifically, needs to be done to optimize Jedi Survivor? What are the specific things they need to do to improve the framerate, or lower the VRAM usage, or fix other issues you have with it?
Apparently just pop in lower-res textures all over the place like they did with Hogwarts. "Look guys, it uses less VRAM now!" This argument would never be happening if Nvidia would just be a little bit more forward-thinking in how much they include for the price points they're demanding. Like, just thinking one generation ahead.

But that would hurt pushing people to upgrade every gen or so. That 16GB 3070 would have been amazing, and I would have been all over it. Idc if someone wants to say "but that's just too much VRAM." Who cares? I like having more just in case. People are literally arguing in defense of less hardware just because one company has gone all-in on slathering AI compensations all over things to push out graphs of "3x faster than" followed by 3 asterisks.
 
Why would a gamer know a ton about game development, any more than a driver would know about automobile engineering?

However since you imply that you DO know so much, what are your credentials? What is your training and experience that gives you extensive insight into game development?

More to the point: What, specifically, needs to be done to optimize Jedi Survivor? What are the specific things they need to do to improve the framerate, or lower the VRAM usage, or fix other issues you have with it? Because, personally, I feel like the "this needs more optimization" gets thrown around by a lot of gamers a lot of the time as a generic term, as though "optimization" is almost like a magic wand that if it was just waved would make everything better. That's not how it works, of course. So if you are well informed about how it is done, let's hear what specific things they needed to do, but didn't, or how they need to reimplement things.
Hogwarts, Jedi Survivor, Callisto, etc. all suffer from UE4 largely being an ancient engine at this point. There is a lot that needs to be done for optimization on PC. The games frequently end up being CPU-bound, even on the highest-end CPUs. This is largely because the entire engine slows to a crawl, since a single thread is being hammered to render more complicated scenes.

There is a long discussion I found about Bend Studio running into these issues with Days Gone, and part of the reason that game had such an extended development was because they chose to deal with writing their own low-level code to resolve the performance issues. I will personally say that Days Gone is one of the best-running UE4 games out there.

Really, Epic is to blame here at the end of the day. They are the ones that should be fixing these issues with UE4, and it's obvious they abandoned it for UE5. However, most of these developers chose to start developing with UE4 before UE5 was even available.
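
To illustrate the single-thread bottleneck in generic terms (this is plain C++ with std::thread, not UE4 code; the real fix in an engine is far more involved, things like parallel command recording and render-graph work):

Code:
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for per-object culling, constant setup, draw submission, etc.
static void prepareDrawCall(int /*object*/) {
    volatile double sink = 0;
    for (int i = 0; i < 10000; ++i) sink = sink + i * 0.5;
}

int main() {
    const int objects = 10000;

    // "One render thread does everything" model: one core works, the rest wait.
    for (int i = 0; i < objects; ++i) prepareDrawCall(i);

    // The same work spread across worker threads: scene cost now scales with
    // core count instead of single-core speed.
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::atomic<int> next{0};
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&] {
            for (int i = next++; i < objects; i = next++) prepareDrawCall(i);
        });
    for (auto& t : pool) t.join();
    std::printf("processed %d objects on %u worker threads\n", objects, workers);
}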
 
Tears of the Kingdom runs on an ancient relic like the Tegra X1, in a system that has 4GB of RAM and still manages to feel smooth as butter when playing it. Does the game look bad by today's standards? Depends on who you ask, but to me it looks very good considering the hardware running it, and the fact that Nintendo managed to squeeze all of it onto an 18GB cartridge of all things while these newer games are requiring 4-5x that amount.. it's astounding. This is part of the reason I respect Nintendo--they don't fuck around with their first party, or even second party titles when it comes to optimization and squeezing as much as they can from what they have available and managing to make their games entertaining despite having dated graphics.
Probably because it's a 720p game, not 1080p. Also, it uses FSR. Yep, that FSR. So is FSR still bad, or nah?
Any number of games running the UE4 Engine. Just look at Batman Arkham Knight, that game was released in 2015 and gives games released 8 years later a run for their money.
? That game was notorious for how bad the PC port ran. I'm surprised you mentioned it.
A lot of these games aren't living up to the hardware cost they demand just to run them optimally. Doom Eternal is an example of a non-UE4 game that looks phenomenal, yet runs on a potato,
Also? Doom Eternal requires more than 8GB to run at the higher resolutions. Hardware Unboxed did a write-up on it, specifically on how the 3080 was the better card largely because it had more VRAM than the 3070.

Also, it's running on an engine developed by easily the best game developer of our lifetime.
then there's both Witcher 3
That game had low-quality textures and required an HD pack to even bring it up to the level of what we're talking about.
and CP2077 running on the RED engine that just blow every UE4 title out of the water.
Yup and it doesn't "run on a potato" either and had one of the most troubled launches that I can remember.
Then you got RE4, which is running on Capcom's engine, and that game looks and runs great on all manner of hardware. UE4 games just look so dated, hence my remark about games looking like they belong back in 2018.
RE4 also received complaints, so I'm not sure what you're talking about.
 
Tears of the Kingdom runs on an ancient relic like the Tegra X1, in a system that has 4GB of RAM and still manages to feel smooth as butter when playing it. Does the game look bad by today's standards? Depends on who you ask, but to me it looks very good considering the hardware running it, and the fact that Nintendo managed to squeeze all of it onto an 18GB cartridge of all things while these newer games are requiring 4-5x that amount.. it's astounding. This is part of the reason I respect Nintendo--they don't fuck around with their first party, or even second party titles when it comes to optimization and squeezing as much as they can from what they have available and managing to make their games entertaining despite having dated graphics.


Any number of games running the UE4 Engine. Just look at Batman Arkham Knight, that game was released in 2015 and gives games released 8 years later a run for their money. A lot of these games aren't living up to the hardware cost they demand just to run them optimally. Doom Eternal is an example of a non-UE4 game that looks phenomenal, yet runs on a potato, then there's both Witcher 3 and CP2077 running on the RED engine that just blow every UE4 title out of the water. Then you got RE4, which is running on Capcom's engine, and that game looks and runs great on all manner of hardware. UE4 games just look so dated, hence my remark about games looking like they belong back in 2018.
Arkham knight was a UE 3 game. Albeit a heavily modified one. #rocksteadymagic
 
Probably because it's a 720p game, not 1080p. Also, it uses FSR. Yep, that FSR. So is FSR still bad, or nah?
Never said FSR was bad, point it out and I’ll gladly own up to it, but afaik I’ve never said it was bad. Just not as good as DLSS.
? That game was notorious for how bad the PC port ran. I'm surprised you mentioned it.
Still looked good, both on PC and consoles. Never said anything about its PC port. It did get optimized eventually, and it does look better than 3/4 of the UE4 games available.
Also? Doom Eternal requires more than 8GB to run at the higher resolutions. Hardware Unboxed did a write-up on it, specifically on how the 3080 was the better card largely because it had more VRAM than the 3070.
You left out a minor detail—at highest in-game settings with Ray Tracing enabled. At the next level of detail, it can run even at 4K on a 3070.
Also, it's running on an engine developed by easily the best game developer of our lifetime.
I agree, and more publishers should be asking them for advice.
That game had low-quality textures and required an HD pack to even bring it up to the level of what we're talking about.
Yes, and even with a texture pack it didn’t require 15-20GB of VRAM. Now we have the updated next-gen version, while still a bit buggy, it isn’t demanding 15-20GB of VRAM just to be playable.
Yup and it doesn't "run on a potato" either and had one of the most troubled launches that I can remember.
I didn’t say CP2077 could be played on a potato, that was Doom Eternal. Yea, CP2077 was troubled at launch due to investors getting impatient, and gamers sending death threats to CD Projekt Red, so they just released it as it was, but are still continuously fixing it—again something that should be noted by game publishers.
RE4 also received complaints, so I'm not sure what you're talking about.
What complaints? The only complaint I’m aware of is from the VRAM Doomer club. If the game had any issues, Capcom was super quick to fix them, unlike EA, or Naughty Dog.
 
Never said FSR was bad, point it out and I’ll gladly own up to it, but afaik I’ve never said it was bad. Just not as good as DLSS.
Well if you can't use DLSS on your own hardware then I don't even agree with the last part.
Still looked good, both on PC and consoles. Never said anything about its PC port. It did get optimized eventually, and it does look better than 3/4 of the UE4 games available.
We've been talking about the PC ports this entire time. Literally, it's in the title of this thread, and no, it does not look better than 3/4 of the UE4 games available.
You left out a minor detail—at highest in-game settings with Ray Tracing enabled. At the next level of detail, it can run even at 4K on a 3070.
Yup, just slower because of the lack of VRAM.
Yes, and even with a texture pack it didn’t require 15-20GB of VRAM. Now we have the updated next-gen version, while still a bit buggy, it isn’t demanding 15-20GB of VRAM just to be playable.
It requires more than 8GB though at high settings.
I didn’t say CP2077 could be played on a potato, that was Doom Eternal. Yea, CP2077 was troubled at launch due to investors getting impatient, and gamers sending death threats to CD Projekt Red, so they just released it as it was, but are still continuously fixing it—again something that should be noted by game publishers.
And EA is as well with Jedi Survivor.
What complaints? The only complaint I’m aware of is from the VRAM Doomer club. If the game had any issues, Capcom was super quick to fix them, unlike EA, or Naughty Dog.
Nope. Digital Foundry complained about it not being optimized and how it used too much VRAM.
 