4060 Ti 16GB due to launch next week (July 18th)

You wouldn't happen to have a link to these videos?
A quick search brought up this one which shows the AMD cards consistently allocating at least 10% more.

View: https://www.youtube.com/watch?v=-aCx1BUpQbQ&ab_channel=DanielOwen

Nvidia does some interesting stuff driver-side to offset VRAM usage, including creating a buffer space in system RAM for storing compressed assets that it can pull up as needed via driver command. That's why you'll see Nvidia systems using slightly more system RAM than their AMD counterparts; it's part of their Magnum IO tech.
Nvidia is also very fast at garbage collection, and it uses its texture-compression work to keep assets partially compressed in VRAM instead of fully decompressing them when they will be used consistently, such as the materials for the onscreen playable character. That's part of their per-game optimization process, but the drivers also have some means of detecting such assets on their own.
Interesting stuff.

8GB still isn't enough though, and Nvidia pushing it as a means of pressuring developers into adopting some of their compression algorithms to "solve" the VRAM issue is BS.
 

It does seem to use more RAM on the AMD card, but not 1GB to 4GB worth, more like a 300MB to 1GB difference, with ray tracing widening the gap. A Plague Tale: Requiem actually seems to favor AMD, using about 100MB less than the 3060, while Forza Horizon 5 shows roughly a 1.5GB difference in favor of Nvidia. Since this is a difference in how the drivers handle memory, I do wonder how it is on Linux? A quick search and I can't find a comparison of memory usage.
 
It's not terribly different. At the end of the day the games load the same assets and the drivers do the same job; the differences come down to how Proton and the like handle the translation, and that's more consistent between the two than not.
You will run into the standard Linux kerfuffle of AMD's open drivers versus Nvidia's proprietary ones, depending on your Linux flavor, long before the memory differences are meaningful enough to be worth mentioning.
Even if we use the extreme cases where AMD is using 10-15% more VRAM, their cards (mid-range and up) more than cover the spread, but the low end gets hammered harder for sure. 8GB, no matter your color of choice, is just not enough, and all the GPU parties involved need to stop pretending it is. Even the consoles get 10GB or more dedicated to GPU assets, and they should be considered the baseline, not a middle tier.
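To put rough numbers on "covering the spread", here is a quick illustrative sketch. The per-game allocations are made-up examples; only the 10-15% overhead figure comes from the discussion above.

```python
# Illustrative only: how a ~15% driver-side allocation difference plays out
# against common VRAM capacities. The per-game budgets below are made-up examples.
def fits(allocation_gb: float, capacity_gb: int, overhead: float = 0.15) -> bool:
    """True if the allocation, inflated by the assumed overhead, stays within VRAM."""
    return allocation_gb * (1 + overhead) <= capacity_gb

for allocation in (6.0, 7.5, 9.0):      # hypothetical allocations on the leaner driver
    for capacity in (8, 12, 16):        # common card capacities in GB
        verdict = "fits" if fits(allocation, capacity) else "over budget"
        print(f"{allocation:>4} GB + 15% overhead on a {capacity:>2} GB card: {verdict}")
```

A 7.5GB allocation plus 15% already spills past 8GB, while 12GB and 16GB cards don't notice it, which is the point about the low end getting hammered.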
 
Lakados
It's like saying every car needs a 50-gal tank so you can drive from one side of the country to the other without refilling, while that's not the case for 50%+ of car buyers,
so why should they do it?


VRAM use depends largely on the resolution used and the games played.
My 2080S with 8GB is fine for all the stuff I play, as the "newest" game is 2 years old, ignoring that I can run a lower res in-game (vs native UHD) and easily make up for not having more.

More than 50% of the planet's gamers are running something like a 1060 3GB at 720p/1080p, so any upgrade with 6/8GB would still be an improvement,
so lower-tier stuff is fine using 8.
 
I'll find the videos I was referring to soon. They're buried in a 90-page thread at Guru3D, so I'll probably look through it tonight.
 
Lakados
It's like saying every car needs a 50-gal tank so you can drive from one side of the country to the other without refilling, while that's not the case for 50%+ of car buyers,
so why should they do it?


VRAM use depends largely on the resolution used and the games played.
My 2080S with 8GB is fine for all the stuff I play, as the "newest" game is 2 years old, ignoring that I can run a lower res in-game (vs native UHD) and easily make up for not having more.

More than 50% of the planet's gamers are running something like a 1060 3GB at 720p/1080p, so any upgrade with 6/8GB would still be an improvement,
so lower-tier stuff is fine using 8.
While I appreciate and agree with you on this, there should be a card marketed for that use case with 8GB or, hell, even less. But it should not be a $350 card. At $200 or less, yeah, I agree with you, but if Nvidia and AMD want to push this narrative that an entry gaming GPU is upward of $300, then having more than 8GB should be the norm.
 
A sub-$200 graphics card should never go below 8GB, because we already have a plethora of choices that are 8GB and below $200. For example, the RX 6600M can be found for less than $200 on AliExpress. Also, AMD and Nvidia need to realize the main market segment is still under $300. They keep pushing for higher prices and wonder why nobody is buying them. For example, the RX 7700 XT is a really good GPU for $400, except that nobody is buying them because it's not within people's budget. It's only good because Nvidia's offerings are so bad. For nearly a decade the top graphics cards have been sub-$300 ones, like the RTX 3060, GTX 1060, GTX 970, and so on. You don't see AMD anywhere near that list because, in the end, an Nvidia GPU is just not that much more money than AMD. The 7700 XT doesn't even offer more VRAM than the 6700 XT or a 4060 Ti 16GB.

What's even more perplexing is that both AMD and Nvidia offer strange VRAM sizes like 10GB and 12GB, because they limit the bandwidth to these cards through memory. Why is the RTX 4060 8GB? Because its 128-bit memory bus would only support 8GB or 16GB, and you know damn well that Nvidia isn't giving the 4060 more VRAM than a 4070. Except they did, and it costs too much. Nvidia really tried to make that 128-bit memory bus work with 24MB of L2 cache. If the 4060 were a 192-bit GPU, then 12GB of memory would have made sense and the graphics card would have been faster, which would have made it more attractive for its price, but then it would approach the performance of the RTX 4070.
 
IMHO the $300 price tag on the 4060 isn't bad. It's the cheapest "60" since the GTX 1060, which launched at $200 for the 3GB model and $250 for the 6GB model in 2016. The RTX 2060 launched at $350 and the 3060 launched at $330 officially... but that was during the Covid+crypto video card crisis and you couldn't get one, at least not for $330. If I punch the 1060 prices into an inflation calculator with the launch dates I get $254 and $317. Those had 3/8 and 3/4 of the RAM on a 1080, while the 4060 has 1/2 the RAM of a 5080. So if I just wave my hands and fudge things a bit I'm thinking $275, so $300 isn't way off, and everyone likes round numbers, right? The problem is the lack of a significant performance improvement compared to the 3060, plus it sometimes gets beat because the 3060 has 12GB. Same with the 4060 Ti: it's barely faster than the 3060 Ti. So pricing is fine, engineering messed up.
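The inflation adjustment above is just a ratio of price levels; here is a rough sketch of that arithmetic. The factor is back-calculated from the $200 → $254 result quoted above, not from an official CPI table, so treat the outputs as approximate.

```python
# Back-of-the-envelope version of the inflation adjustment described above.
# The factor is derived from the post's own $200 -> ~$254 result, not an
# official CPI series, so the outputs are approximate.
INFLATION_FACTOR = 254 / 200  # ~1.27, mid-2016 dollars to roughly today

launch_prices_2016 = {"GTX 1060 3GB": 200, "GTX 1060 6GB": 250}

for model, price in launch_prices_2016.items():
    print(f"{model}: ${price} at launch ~ ${price * INFLATION_FACTOR:.0f} adjusted")
```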

While I appreciate and agree with you on this, there should be a card marketed for that use case with 8GB or, hell, even less. But it should not be a $350 card. At $200 or less, yeah, I agree with you, but if Nvidia and AMD want to push this narrative that an entry gaming GPU is upward of $300, then having more than 8GB should be the norm.
There are, they're just not current generation NV and AMD cards. They don't make a budget part every generation anymore. 4060 is midrange. The "6" part has been midrange for 20 years. You can still get a $200 8GB 3050 or RX6600XT plus there are a few more slower and cheaper options, particularly from AMD and Intel. Intel has stuff all the way down to $99, AMD has stuff like the RX 6500XT and 6400 and if you really want one NV has things like a GTX 1630 or GT 710. Other than that I partially agree with you on the ram. They ought to offer a 16GB 4060. I just wouldn't eliminate the 8GB model. The 8GB model is just fine for a lot of e-sports games.

Why is the RTX 4060 8GB? Because its 128-bit memory bus would only support 8GB or 16GB, and you know damn well that Nvidia isn't giving the 4060 more VRAM than a 4070. Except they did, and it costs too much. Nvidia really tried to make that 128-bit memory bus work with 24MB of L2 cache. If the 4060 were a 192-bit GPU, then 12GB of memory would have made sense and the graphics card would have been faster, which would have made it more attractive for its price, but then it would approach the performance of the RTX 4070.
This should get better with the next generation of cards. NV is switching to GDDR7 for most of the lineup, and memory manufacturers have 3GB GDDR7 modules coming next year. They're the usual 32-bits wide, so 4 will put 12GB on a 128-bit bus and 6 will put 18GB on a 192-bit bus. If the leaks are correct the 5070 will be 192-bit, so it's likely we'll see an 18GB 5070. I'm not sure what they'll do with the 5060 and all the "Ti" parts, and maybe we'll get a 5050 too. Over on the AMD side, word is they're sticking with GDDR6 for one more generation and not chasing NV at the high end. I expect the result will be a wider bus compared to NV at a given price or performance level. There probably won't be a 5080 or 5090 competitor from AMD, so I think we'll see 16GB 256-bit cards going up against the 5070 and 5060Ti.
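The capacity arithmetic behind that is just the number of 32-bit memory channels times per-module density. A quick sketch using the densities mentioned above (2GB modules today, 3GB GDDR7 modules next year), with clamshell treated as a separate doubling:

```python
# Capacity = number of 32-bit memory channels x per-module density.
# Densities: 2 GB modules (current GDDR6/GDDR6X) vs 3 GB modules (upcoming GDDR7).
def vram_capacity(bus_width_bits: int, module_gb: int) -> int:
    """VRAM in GB with one module per 32-bit channel (no clamshell doubling)."""
    return (bus_width_bits // 32) * module_gb

for bus in (128, 192, 256):
    print(f"{bus}-bit bus: {vram_capacity(bus, 2)} GB with 2GB modules, "
          f"{vram_capacity(bus, 3)} GB with 3GB modules")

# 128-bit: 8 or 12 GB, 192-bit: 12 or 18 GB, 256-bit: 16 or 24 GB.
# Clamshell (two modules per channel, as on the 4060 Ti 16GB) doubles these figures.
```

That's why 128-bit today means 8GB or 16GB and nothing in between, and why 3GB GDDR7 modules would unlock 12GB on 128-bit and 18GB on 192-bit.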
 
So pricing is fine, engineering messed up.
Can't say pricing is fine when the performance isn't. The GTX 970 was $330, but people bought them like crazy because it performed better than a 780 Ti. That would be like the 5070 being $350 but performing like a 4090, which is just not going to happen today.
This should get better with the next generation of cards. NV is switching to GDDR7 for most of the lineup, and memory manufacturers have 3GB GDDR7 modules coming next year. They're the usual 32-bits wide, so 4 will put 12GB on a 128-bit bus and 6 will put 18GB on a 192-bit bus. If the leaks are correct the 5070 will be 192-bit, so it's likely we'll see an 18GB 5070. I'm not sure what they'll do with the 5060 and all the "Ti" parts, and maybe we'll get a 5050 too. Over on the AMD side, word is they're sticking with GDDR6 for one more generation and not chasing NV at the high end. I expect the result will be a wider bus compared to NV at a given price or performance level. There probably won't be a 5080 or 5090 competitor from AMD, so I think we'll see 16GB 256-bit cards going up against the 5070 and 5060Ti.
It's likely that Nvidia will stick with a 128-bit bus for the 5060, which again is going to give a huge performance disparity against the 5070. The GDDR7 memory is going to give it a performance boost over the 4060 and the 3060, but that's likely going to drive up prices. Nvidia isn't exactly in a position to care what anyone thinks of their offerings when they're busy being an AI company, and they know very well you'll buy what they have and like it. As much as I mock the 4060, it's in the top 10 of the Steam hardware survey, which means people are actually buying these terribly valued GPUs. You see basically nothing on that list from AMD until you get really far down, where the RX 580, 6600, and 6700 XT finally show up. AMD is just not appealing to consumers, and Nvidia knows it. Even if AMD were to make a competent graphics card for a low price, it probably wouldn't matter, as Nvidia is cooking up DLSS Infinity or DLSS Beyond, which now makes ray tracing look even shinier when you look at water puddles. Meanwhile, AMD's new FSR 3.1 just made ghost trails look even worse.
 
Since I haven't been as active here as on other platforms, I'll say it before people start reading between the lines: in no way do I find GPU tiers/pricing acceptable.

That said, many (including here) need to stop buying model numbers (e.g. xx70/xx80/xx90) and treat it like every other purchase: buy what the funds allow, not what you "want" to have.

No one goes to buy a house and tells the seller they want a mansion with 5 bedrooms, a pool, and a hot tub because that's what they expect to get for 250K.
No one goes to a "luxury" car dealer and says they want their 700HP 2-door sports car for 50K because that's what other brands charge for their 4-door family sedan.

If "you" have a product that sells globally in large numbers, name ONE reason why you should lower the price that isn't connected to lowering end-user cost.
NV, like almost everyone else on this planet, wants to sell their product for the highest price and make their shareholders happy, not us, and tiers/pricing is up to THEM.
Anyone not happy with that: don't buy their product, it's that simple.
How many of you have a GPU in your main rig that's more than two generations old? Right..

To me, anything above ~$350 is overpriced when looking at the price/perf ratio. I might get a 5060 Ti, maybe an LC xx70 (to get newer tech for de/encoding, not primarily for gaming improvements), even though I was mainly getting xx80 or higher cards in the past.

But I won't start complaining about tiers/prices or what should cost how much, as I (like others) am guilty of being part of the problem by still buying their stuff..


The 460 1GB cost $230; 14 years later, the 4060 8GB starts at $280, and it's definitely much faster than the ~$50-100 difference in price would suggest..
 
I got a 6700 XT 12GB because the answer is still more VRAM. Get an RTX 3060 12GB since they're so cheap now. If you visit r/pcmasterrace they eviscerate anyone who claims to have bought a 4060. They want people to boycott the 4060 so Nvidia learns its lesson.

This is only tangentially related at best, but I want to note that, while the 4060/Ti desktop version has a terrible reputation (and mostly for good reason), the 4060 on laptop is arguably one of the best $/perf deals as far as laptop dGPUs go. Just in case anyone happens to be looking for a basic gaming laptop at the moment, like I was a month or so ago.

I wanted a no-frills laptop to do basic gaming, with a priority on price to performance ratio, <$900. My search naturally started at the 4070 (as I assume many will intuitively do), based on desktop reputation. Then I realized the laptop space was just a different ball game entirely. Most of what you get for going up to the 4070 in laptops is a 1440p screen, if that. That can be valuable for some, but for me at that physical size the 1440p matters for diddly squat, and you pay a very decent chunk more for it. My 4060 laptop has kind of impressed me. Played Baldur's Gate 3 at 4k resolution (I decided to hook it up to 42" LG C3) with DLSS Balanced and mostly high settings, and kept it above 60-80 FPS almost all the time. It's only 8GB, but since the screen on most of these 4060 laptops is 1080p anyway it's plenty sufficient... and with DLSS apparently still sufficient even for 4k in many games. For native res gaming, I think that the 1080p screen is right-sized for both the 4060 and 4070 performance envelope anyway. I was able to upgrade it to 32GB RAM, still for less than $980 total with the sticks. The laptop itself was <=$900 after tax.
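For anyone curious what DLSS Balanced at 4K actually renders internally before upscaling, here is a quick sketch using the commonly cited per-axis scale factors. Treat the exact percentages as approximate; games can override them.

```python
# Approximate per-axis render scales for DLSS quality modes (commonly cited values).
DLSS_SCALES = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

def internal_resolution(width: int, height: int, mode: str) -> tuple[int, int]:
    """Internal render resolution before DLSS upscales to the output resolution."""
    scale = DLSS_SCALES[mode]
    return round(width * scale), round(height * scale)

# 4K output with DLSS Balanced, as described above:
print(internal_resolution(3840, 2160, "Balanced"))   # roughly (2227, 1253)
```

So "4K with DLSS Balanced" is really rendering not far above 1440p, which is why an 8GB laptop 4060 can hold up there.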
 
The problem is "budget" GPUs are $400+. AMD is of course better with VRAM amount, but that is still around $350 to get more than 8GB. If these were $200-250 GPUs I could understand your point, but they aren't. You also have to keep in mind some of the issues occur at 1920x1080.
None of this changes anything.

You can buy a new or old, $100, $300, $500, $700, or $1700 GPU, and if the game you are playing isn't reaching the FPS you want, you turn down a setting or 2, check performance, repeat, until you get the performance you want.

All of the games in the previous list are the titles with issues: very likely all of the games of any note that have performance problems on budget GPUs. There are thousands of games (A or AA, not AAA) that do play at max settings even on 4-year-old cards or new budget cards. This has never been the expectation for new AAA games... remember Crysis?

There has never been a time when a new AAA game played at maximum settings on old or budget GPUs, with the only possible exception being Doom 2016, which was so well coded that it was sufficiently fast even on GPUs a generation or two older.
Every other game was coded by less-than-AAA (aka budget) developers... you can't have a budget developer plus a budget GPU and expect AAA performance. Doom had a AAA developer, so you could use a budget GPU and get AAA performance.
All other developers are somewhere below AAA (whichever A stands for game-engine performance and optimization), so you should not be surprised when you need to either lower settings or skip the budget GPU lineup for game X. Generally speaking, of course; another unicorn game could come along someday.
In a lot of cases DLSS will help, so if the game supports DLSS that is an option, although it introduces other issues.

If Nvidia's upcoming "budget" $600 GPUs are forced to use DLSS to not run out of VRAM in some games that would be quite a shortcoming, IMO.
Price is a different conversation from the performance of budget GPUs. $300 top-end GPUs do not exist anymore, and likely never will again.
 
All of the games in the previous list are the titles with issues: very likely all of the games of any note that have performance problems on budget GPUs. There are thousands of games (A or AA, not AAA) that do play at max settings even on 4-year-old cards or new budget cards. This has never been the expectation for new AAA games... remember Crysis?
The 2000s were a different era of gaming because things moved so fast. Crysis, though, did work just fine on older GPUs, just not GeForce FX and Radeon 9700 cards. The problem was that both ATI and Nvidia were raising prices, which moved people to consoles. So when people did try to play Crysis with their budget hardware, of course the game was slow. I had an ATI Radeon X1950 PRO and Crysis played just fine for me.
There has never been a time when a new AAA game played at maximum settings on old or budget GPUs, with the only possible exception being Doom 2016, which was so well coded that it was sufficiently fast even on GPUs a generation or two older.
Since the introduction of Nvidia's Maxwell you've seen this a lot. The reason the GTX 970 was popular for such a long time was that it could play new AAA games at maximum settings at 1080p. The same goes for the GTX 1060, which only just got replaced by the RTX 3060. The GTX 1060 didn't have a problem at 1080p until the PS5 was released, and even then it took a few years for PS5 games to push GTX 1060 owners to upgrade.
Price is a different conversation from the performance of budget GPUs. $300 top-end GPUs do not exist anymore, and likely never will again.
Then Nvidia and AMD better get ready to see the RTX 3060 dominate Steam for the next five years.
 
I'm sure testing 1080p cards at 1440p doesn't help either...
Surely not, but the 16GB model does turn in some respectable frame rates with DLSS, and it's harder to run out of VRAM at 1080p. It's pretty clear that the 16GB card can play several of these games decently well, while the other card, which is identical other than VRAM, falls off a cliff.
 
@Marees
And really irrelevant for the product itself.

Running a PC that still uses PCIe 2.0 will have a serious CPU bottleneck anyway (apart from some rare exceptions: servers, multi-socket, etc.); besides, anyone spending more than $300 on a GPU for a rig without PCIe 3.0 deserves it.
 
Not irrelevant


This video was done because many users who were running this card on PCIe 3.0 complained to HUB
 
I will wait for the TechSpot blog to post the extracts here, but meanwhile this is from a year ago:


https://www.neogaf.com/threads/rtx-4060ti-is-even-worse-on-pcie-3-0.1657123/



The RTX 4060ti utilizes only 8 PCIe lanes, which does not make a difference for PCIe 4.0 systems. However, for older PCIe 3.0 systems, this acts as a limitation to the card's performance.

In practice, the performance of the RTX 4060ti on these systems is even closer to that of the RTX 3060ti.

Der8auer did a video:


View: https://www.youtube.com/watch?v=uU5jYCgnT7s
 


TL;DR: when the GPU is VRAM-limited, PCIe bandwidth makes a huge difference.

https://tech4gamers.com/rtx-4060-ti-pcie-3-0-bottleneck/
 
@Marees
Which is an issue with users and their setup, not the card.
Or is the card 3.0? Right.

Again, it's irrelevant, the same way it's useless when someone tests the latest 2-door supercar and says it performs badly while using the suspension/brakes/wheels from previous models.

It's up to the people buying something to do their research prior to purchase, e.g. at least check the requirements to use it properly, no one else. If they don't, that's their problem, not the product's.
 
The purpose of the forum is to provide the outcomes of such research.

The verdict is:

If you have PCIe 3.0, then buy the 16GB card.
 
PCIe Gen3 came out in 2010. It's fairly common and still in use, but PCIe Gen2 is really old. Anyone still running PCIe Gen2 would likely still get an "upgrade" in performance if they went with a 4060 of any kind. How much, who knows. PCIe Gen2 is 1/4 the bandwidth of PCIe Gen4; however, the cards don't use the full bandwidth anyway. Tests on 40-series cards comparing PCIe Gen4 to Gen3 performance found minimal differences. It's probably more noticeable on Gen2, but that is so old I don't know if anyone has even tested it. The card with 8 PCIe lanes needs to be tested on Gen4 vs Gen3 to see if it has more of a performance hit than the 1.5% tested elsewhere.
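For reference, a quick sketch of the bandwidth numbers behind that comparison. The per-lane figures are the usual approximate post-encoding rates, so treat them as ballpark; real-world transfers see a bit less.

```python
# Approximate usable bandwidth per PCIe lane, in GB/s, after encoding overhead.
PCIE_GBPS_PER_LANE = {2: 0.5, 3: 0.985, 4: 1.969}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate one-direction link bandwidth in GB/s."""
    return PCIE_GBPS_PER_LANE[gen] * lanes

# The 4060/4060 Ti are x8 cards, so:
print(f"x8  Gen4: {link_bandwidth(4, 8):5.1f} GB/s")   # ~15.8
print(f"x8  Gen3: {link_bandwidth(3, 8):5.1f} GB/s")   # ~7.9, half of Gen4
print(f"x8  Gen2: {link_bandwidth(2, 8):5.1f} GB/s")   # ~4.0, a quarter of Gen4
print(f"x16 Gen3: {link_bandwidth(3, 16):5.1f} GB/s")  # what a full-width Gen3 card gets
```

That's why an x8 card on a Gen3 slot ends up with the same link bandwidth as an x16 Gen2 card, and only matters much when the card is spilling out of VRAM.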

1.5% is a bit nitpicking, when the newer cards are still an upgrade. People buying the low end cards are not likely to be upgrading every generation, so the comparisons make more sense if it's a 1060 or 2060 compared to 4060, than 3060 to 4060.
But I would be interested in a performance comparison of PCIe Gen3 vs Gen4 on the 8 lane card (one or both of the 4060's iirc).

Edit: Someone did a multi-game comparison here:

View: https://www.youtube.com/watch?v=koTVR0fH6KA
Warzone ranged from 12 to 15% slower on PCIe Gen3 and was about the worst example in the bunch.
Spider-Man ranged from 7 to 14% slower on PCIe Gen3.
The Last of Us performed quite well, only a 5% difference.
Starfield was about a 10% difference: 51 fps vs 55 fps.
Assassin's Creed, 4% difference, 4 fps.
Cyberpunk 2077 and Alan Wake 2 were the best performers, with only about a 3% difference, or 1 to 2 FPS. Resizable BAR was on, no ray tracing or DLSS. PCIe Gen3 was a little hitchy in Cyberpunk; DLSS Performance might help with that.

He didn't graph the 1% lows, but the run-throughs looked pretty good. All were 1080p runs, which is consistent with expectations for low-end cards in my opinion.

I would have no reservations about recommending a 4060 to someone even with an older system. If they are coming from a 9xx, 10xx, or 20xx card, it is still a nice performance bump. I would of course recommend the 16GB card if they can afford it, as I believe that card will have longer legs and last for many years.

If you are running an 8GB card, the sky isn't falling after all.
 
Marees
Nope, the forum is a place to exchange information for people with problems/solutions/tips, not to show how people with ancient hardware won't get the full performance out of a certain product.

That's YOUR verdict, nothing more.

Of the almost 2B gamers (read: Steam isn't the source), 50%+ are running 720p, maybe 1080p, so like GoodBoy said, anything will be an upgrade for them,
and as long as you aren't giving people the additional funds so they can buy the 16GB instead of the 8GB card, it's a moot point.

I bet 99% of the people that don't care to do research prior to buying will never even get to this forum anyway (let alone actually read this)..


GoodBoy
Even if I ignore for a moment that we have zero info on the environment (how accurately room/case temps were held, type of cooling setup, etc.), I'm still not seeing this as a proper test for comparison.

I would like to see this on an xX70 board with a big CPU, so there's no chance of bottlenecking, not the 7600 (even if that's more likely to be the CPU in a rig using a xx60 Ti), and using 16GB of RAM, as it's usually faster (at the same price)
and more common for gamers upgrading on a budget.
But as soon as I see them using settings like ReBar, it's useless for a PCIe comparison, as systems with 2.0/3.0 don't have it.

Test with everything else identical, on a system with no (other) bottlenecks, as that will make for more accurate results, and you can always "add" another test suite with the rest of the hardware more closely matching
what people buying a xx60 (Ti) chip would have, to show how that differs.
 
Yeah a test on an older system could make more sense for users also on an old system wanting to learn what a more involved upgrade would get them.

As far as that reviewers results, the only way to keep everything the same (CPU, Ram speeds, HDD speed, etc) except for the PCIe speed (bandwidth) is to test on a newer PCIe Gen4 mobo, and just toggle it down to PCIe Gen3 for those tests.

So it's a good test in that it shows the difference between PCIe 3 and 4 on an 8-lane graphics card. I think the perf difference is smaller on the 16-lane GPUs for Gen3 vs Gen4...

My system is only PCIe Gen3 but has ReBar support. My CPU is getting kind of old (10900x), but I can still learn some useful information in that test. Many users who want to review his results will be on at least PCIe Gen3 and a good portion of those probably have Rebar capability anyway. People already running PCIe Gen4 don't care as they are already on a better system.

A test with an older CPU and a PCIe Gen3 mobo without ReBar, compared to a newer PCIe Gen4 system, would tell us the total uplift for an across-the-board upgrade. I think that would also be useful information, but I doubt anyone is going to go to the trouble of doing all that work, as the initial tests already show the difference is 3% to 15% and varies by game. Users running an older system can match the reviewer's settings in those games, since he shows all the settings prior to each run, compare their own results to his Gen4 results, and learn what the total upgrade would get them.
The reviewer who did that test could run just the PCIe Gen3 tests again with ReBar turned off. ReBar really only helps slower CPUs, IIRC, but it's been some years since I read up on that. So while it might have some impact, since that CPU is newer I suspect it's negligible in the tests the reviewer ran. So it's back to users on old systems comparing whatever performance they see on their existing system against the results the reviewer got, to determine what a total upgrade would get them.

The tests are mainly for us nerds who are fact-checking the click-bait headlines of the big YouTube reviewers; some of them have a rep for click-bait titles. Der8auer's review was worthless without a comparison to PCIe Gen4... HUB likes to drive views with click-baity headlines. And many of the smaller reviewers are even worse...
 
Comparison of 8GB vs 16GB cards in Unreal Engine 5 games

This is from a year ago, so someone might have already posted this

I partially take that back

Not a significant difference in average frame rate, but a significant impact on 1% lows, causing spikes in the frametime graph

Also a major increase in system RAM allocation when VRAM-limited (this is not an issue unless you have a faulty CPU)


View: https://www.youtube.com/watch?v=Ji4pv_OLCj0
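For anyone wondering what "1% lows" mean mechanically: one common way to compute them from captured frame times is to average the slowest 1% of frames and convert back to FPS. A minimal sketch (capture tools differ slightly, e.g. some report 1000 divided by the 99th-percentile frame time instead):

```python
# One common definition: average the slowest 1% of frames and convert to FPS.
def one_percent_low(frame_times_ms: list[float]) -> float:
    """1% low FPS from a list of per-frame render times in milliseconds."""
    worst = sorted(frame_times_ms, reverse=True)
    count = max(1, len(worst) // 100)          # slowest 1% of frames, at least one
    avg_worst_ms = sum(worst[:count]) / count
    return 1000.0 / avg_worst_ms

# Example: mostly ~16.7 ms frames (60 fps) with a few 50 ms hitches thrown in.
frames = [16.7] * 990 + [50.0] * 10
print(f"avg fps: {1000 * len(frames) / sum(frames):.1f}, 1% low: {one_percent_low(frames):.1f}")
```

The example shows the pattern described above: the average barely moves while the 1% low drops hard, which is exactly what VRAM-induced hitching looks like in the graphs.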
 
Testing the 8GB card under extreme conditions (PCIe 2.0/3.0) effectively turns it into an integrated graphics card


View: https://www.youtube.com/watch?v=ecvuRvR8Uls

I was waiting for the TechSpot blog version.

But there seems to be a glitch in the universe

So here is my manual summary:

(No UE5+ games in the list below.
Apparently UE5 can handle VRAM well, but there will still be frametime gaps in the 1% and 0.1% lows depending on PCIe and I/O latency/bandwidth under VRAM-limited conditions.)


Game (engine), settings, then results for the 4060 Ti 8GB and 16GB on PCIe 3.0 and 4.0:

1. Doom Eternal (id Tech 7) - 1440p, Nightmare (2nd highest), RT on, DLSS Quality
8GB: 35 fps | 16GB: 160 fps
2. The Callisto Protocol (Unreal Engine 4) - 1440p Ultra, RT on
8GB PCIe 3.0: 38 fps avg, 13 fps 1% lows | 8GB PCIe 4.0: good avg, stuttering 1% lows | 16GB PCIe 3.0: 58 fps avg | 16GB PCIe 4.0: 59 fps avg
3. Resident Evil 4 (RE Engine) - settings: ?????
8GB PCIe 3.0: 60% of the 16GB | 8GB PCIe 4.0: better, but not as good as the 16GB | 16GB PCIe 3.0: stable | 16GB PCIe 4.0: 15% faster avg, 22% better 1% lows
4. Horizon Forbidden West (Decima) - 1440p Very High, DLSS Quality, frame gen on
8GB PCIe 3.0: broken mess | 8GB PCIe 4.0: worlds better but occasional spikes | 16GB PCIe 3.0: 3 times faster than the 8GB | 16GB PCIe 4.0: double the 8GB avg
5. TLOU Part 1 remake (Naughty Dog custom engine) - 1440p, Ultra preset
8GB PCIe 3.0: smooth frametime | 8GB PCIe 4.0: same as PCIe 3.0, closer to 50 fps | 16GB PCIe 3.0: 20% faster than the 8GB | 16GB PCIe 4.0: same as PCIe 3.0, closer to 60 fps
6. Star Wars Jedi: Survivor (Unreal Engine 4) - 1440p Epic, RT on, DLSS Quality, frame gen on
8GB: closer to playable but still horrific frame times | 16GB: 2/3 faster than the 8GB on avg
7. Cyberpunk 2077 (REDengine 4) - 1440p Ultra, medium quality RT, DLSS Quality, frame gen on
8GB PCIe 3.0: not bad but still spotty | 8GB PCIe 4.0: same as PCIe 3.0 | 16GB PCIe 3.0: 2.5 times better 1% lows | 16GB PCIe 4.0: same as PCIe 3.0
8. Ratchet & Clank: Rift Apart (Insomniac Engine v4.0) - 1440p Very High, RT on, DLSS Quality, frame gen on
8GB PCIe 3.0: stuttery dumpster fire, single-digit 1% lows | 8GB PCIe 4.0: 40% faster on avg, 20+ fps 1% lows | 16GB PCIe 3.0: ran rather well, Nvidia's gift to gamers | 16GB PCIe 4.0: same as PCIe 3.0
9. Halo Infinite (Slipspace) - 1440p, Ultra textures, RT on
8GB: a blurry mess | 16GB: the way the game is meant to be played
10. Forspoken (Luminous) - ultra quality textures
8GB: takes you back 20 years | 16GB: thank god the 16GB version exists
 
BMW (Black Myth: Wukong): A new low

1% lows are poor even at 1080p with 75% DLSS and FG enabled for the 8GB card when compared to the 16GB version

The mitigating factor is that this is the NV version of UE5, with special crippleware RTX software enabled

https://www.techspot.com/review/2883-black-myth-wukong-benchmark/

 
1% lows are poor even at 1080p with 75% DLSS and FG enabled for the 8GB card when compared to the 16GB version
Not sure if that is new? It's not like the 16 GB can run it anywhere close to a playable frame rate (even a 3090 Ti can't); is it even uncommon when running heavy path tracing?

Phantom Liberty as well, at 1080p at least:
https://tpucdn.com/review/cyberpunk...ance-analysis/images/min-fps-pt-1920-1080.png

Alan Wake 2 was even more brutal: https://www.techpowerup.com/review/alan-wake-2-performance-benchmark/7.html

1% low, 16GB 4060 Ti: 29.1 fps
1% low, 8GB 4060 Ti: 5.4 fps
 
not like the 16 GB can run it anywhere close to a playable frame rate
Isn't 55 avg & 47 1% low considered playable?

I don't remember the 8GB card failing this badly at a reasonable resolution compared to the 16GB before.
  1. Reasonable playing configuration
  2. 8GB is juddery/muddy/slow
  3. 16GB is smooth & playable
Have we had such a scenario before ?
 
Isn't 55 avg & 47 1% low considered playable?
Not with frame generation turned on; that probably means you cannot sustain 30 fps properly. Well, it depends for whom: according to the HUB video, with those numbers they say 80 fps was around the strict minimum to look for (4070 Super), and at least 100 for those that are at all fps/latency sensitive (4070 Ti Super). 100 fps with FG probably means you are getting close to 60 fps on average.

16GB is far from being smooth here, and it is far from a reasonable playing configuration; we are talking about cinematic-quality graphics plus full ray tracing, which is usually 4070 Ti Super and above territory.
 
Not sure if that is new? It's not like the 16 GB can run it anywhere close to a playable frame rate (even a 3090 Ti can't); is it even uncommon when running heavy path tracing?

Phantom Liberty as well, at 1080p at least:
https://tpucdn.com/review/cyberpunk...ance-analysis/images/min-fps-pt-1920-1080.png

Alan Wake 2 was even more brutal: https://www.techpowerup.com/review/alan-wake-2-performance-benchmark/7.html

1% low, 16GB 4060 Ti: 29.1 fps
1% low, 8GB 4060 Ti: 5.4 fps
In most of these examples they are ultra quality with RT maxed or even path tracing: scenarios where the 16GB 4060 Ti is getting much better lows, but still sub-40 fps averages.

You can make 8GB work at that performance level with modern games using a little common sense. That, or just play games a couple of years old; likely there are dozens of titles you missed, and many of higher quality than today's DEI-developed messes.
 
Marees
Graphic quality doesn't really have much to do with fps/refresh rates. 24/25 fps was mainly about having more film to shoot on (vs 50/60), one reason 70mm IMAX stuff is limited to ~45-60 min, as the rolls get too large with 70mm negatives.
 
It was a joke alluding to the fact that 24fps is enough to create the impression of smooth video in our brain

(But game response can respond at rates as high as 120fps)
 
lol, next time add "sarcasm mode: on" for old folks like me :D
Yeah, personally I don't care about it as long as there is no motion stutter when the camera is panning, which is why I stopped watching many drone videos shot at stock settings (wrong shutter speed/motion rate).
 
BMW (Black Myth: Wukong): A new low

1% lows are poor even at 1080p with 75% DLSS and FG enabled for the 8GB card when compared to the 16GB version

The mitigating factor is that this is the NV version of UE5, with special crippleware RTX software enabled

https://www.techspot.com/review/2883-black-myth-wukong-benchmark/

Why do you expect to run at max settings on an 8GB card?

Cinematic, Ultra = no.
Set it to High, then it's fine.

With low-end cards, this is expected. You want Cinematic, get a 4070 or better.
 
Yeah... some people are pushing cards above what they are actually capable of. In the Wukong thread I posted some suggested optimized settings that let my sig rig do 55-75 fps at 4K (FG on, 60% scaling). It uses 6.2GB of VRAM with a mix of Cinematic and High; only global illumination is dropped to Medium (which supposedly makes very little visual difference).
 