Nazo (2[H]4U)
Joined: Apr 2, 2002 · Messages: 3,672
I just recently did a system upgrade, basically all major components: CPU, motherboard, RAM, and GPU. I even tossed in a new PCIe M.2 SSD for good measure. The hardware should be able to handle pretty much anything at reasonable quality settings, but I'm having issues where things sometimes aren't as smooth as they should be. (This is at 1080p, just going for 60 FPS, not 4K at 144 or anything.) It's actually really hard to pin down or even describe exactly what it's doing. The best I can say is: micro hitches in games, lower framerates than there should be even with a lot of settings lowered, etc. It's hard even for me to pin things down 100%, as sometimes it's not a visibly obvious thing like stutters; sometimes it just feels, well, "wrong" somehow. Sometimes it actually bothers me visually (I don't generally get motion sickness or anything, but lately it's something almost close to that at times.) Having vsync on for maximum smoothness generally helps a lot, but there are still issues (especially where framerates are dropping.) I most suspect the video card, but it really should be more than good enough for 1080p60 as long as settings aren't maxed out or otherwise set ridiculously. The odd thing is, while the same games weren't running buttery smooth on the system I had before (Ryzen 5 2600 with a GeForce GTX 1060) and definitely dropped below 60 many times, they didn't have things like microstutters. First, here's the hardware:
CPU: Ryzen 5 5600X, "overspecced" (more on that in a moment) to a constant 4.4GHz
Motherboard: ASUS ROG Strix B550-A Gaming (99.99% sure this isn't the culprit.)
RAM: 2x16GB Mushkin Redline Lumina PC4-32000 (4000MHz) CL18
GPU: Zotac GeForce RTX 3060 Ti Twin Edge OC (LHR), with or without slight extra overclocking (this is my main suspect.)
SSD: Western Digital 1TB WD Blue SN570 NVMe (99.9999% sure this isn't the culprit either; it's not the absolute fastest by any means, but it's decent.)
On the SSD front, I'll mention that I actually have multiple SSDs in this system. The OS is on a cheap SATA SSD (still better performing than most HDDs, but not meant for gaming), low-end games are on a slightly better SATA SSD, and the most demanding games are on the good NVMe drive. (This is mostly because I'm not a rich person, so each drive is smaller than ideal and bought years apart, but I do think it's a good idea to give the OS its own dedicated drive where reasonably possible, and a really small lower-performance SSD is quite cheap.) The two SATA drives are on separate buses from each other, but the higher-demand games run from the NVMe drive anyway, so it shouldn't really matter. Task Manager doesn't show heavy disk usage in any game, for whatever that's worth (which is to say, not much.)
First, the CPU. I say "overspecced" because I'm not exactly overclocking it, but I am running it outside of specifications. Instead of running at 3.8GHz or lower depending on load, boosting to 4.6GHz, and backing off as heat builds up, I have it set to a fixed 4.4GHz. That means no frequency shifting from the CPU governor (so no switching latency) and pretty consistent performance. (As for why: it's less about performance and more that I could manually set a fixed voltage which, up to that speed, can be extremely low, for far less heat production and generally an easier life for the chip long term. The voltage required for anything above 4.4GHz is insane and the temperature climbs steeply with it; at full stock it can hit the thermal throttling limit within minutes under encoding or Prime95-type loads, with a negative voltage offset in PBO only helping a little.) Gaming-wise this has very little effect, since 4.4GHz on a six-core is plenty for today's games (I rarely see more than 20-something percent CPU usage during gaming; the most I think I've ever seen was around 40-something), and the missing 200MHz makes no visible difference, but it seemed worth mentioning just in case. If anything, it should actually decrease latencies, since there's a lot less switching around. Most likely not enough to be visible, but the point is, it would be more likely to reduce micro-stutters than cause them. As far as I can tell, the CPU pretty much blows away anything any game is throwing at it right now. (Also worth noting: when thermal throttling really kicked in at stock, I could easily see it drop as low as 4.2GHz. The 5000 series has a real temperature problem, and I hope they fixed that with the 7000 series.)
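To put a number on how little that missing 200MHz should matter, here's a quick back-of-the-envelope sketch (my own arithmetic, not measurements from my system):

```python
# Rough upper bound on what the missing boost clock could cost in a
# purely CPU-bound frame. Back-of-the-envelope numbers only.

fixed_clock_ghz = 4.4   # my locked all-core clock
boost_clock_ghz = 4.6   # stock single-core boost ceiling

# Best case, a CPU-bound workload scales linearly with clock speed.
max_speedup = boost_clock_ghz / fixed_clock_ghz  # ~1.045, i.e. ~4.5%

# At a 60 FPS target the frame budget is about 16.67 ms. Even a fully
# CPU-bound 16.67 ms frame would only shrink to roughly 15.9 ms.
frame_budget_ms = 1000 / 60
best_case_frame_ms = frame_budget_ms / max_speedup

print(f"max speedup: {(max_speedup - 1) * 100:.1f}%")
print(f"frame time:  {frame_budget_ms:.2f} ms -> {best_case_frame_ms:.2f} ms")
```

In other words, even in the worst (fully CPU-bound) case, the fixed clock can't account for more than a fraction of a millisecond per frame, nowhere near a visible hitch.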
Now, on the memory, there's an interesting point. I initially kept my old RAM (2x8GB at 3200MHz) but decided to upgrade to 32GB because I do a lot of things that are very RAM-hungry (namely modding tools like Unity Asset Studio, which I guess must load everything into RAM or something.) Curiously enough, I saw a huge jump in general gaming smoothness. This is especially strange because games today just don't utilize 32GB at all. Even with 16GB, I don't think I ever caught a game using more than roughly 8GB, and I still don't with 32. The new kit is faster by a decent margin (and still with decently low latencies), but the difference really shouldn't be so incredibly huge.
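For what it's worth, the raw bandwidth jump is easy to quantify. A quick sketch of the theoretical peak numbers (my own arithmetic; this ignores timings and whatever the memory controller's clock ratio is doing):

```python
def ddr4_peak_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    """Theoretical peak bandwidth: transfers/s times 8 bytes per
    64-bit channel times channel count. Real sustained bandwidth
    is always lower than this."""
    return mt_per_s * bus_bytes * channels / 1000  # MB/s -> GB/s

old = ddr4_peak_bandwidth_gbs(3200)  # the 2x8GB DDR4-3200 kit
new = ddr4_peak_bandwidth_gbs(4000)  # the 2x16GB DDR4-4000 kit
print(f"old: {old:.1f} GB/s, new: {new:.1f} GB/s, +{(new / old - 1) * 100:.0f}%")
```

So on paper it's roughly a 25% peak bandwidth bump (51.2 to 64 GB/s), which is real but still doesn't obviously explain a "huge" smoothness difference on its own.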
Now, the GPU is a different matter. I didn't properly research it and possibly shouldn't have gone with this particular brand. (You know how tricky GPU availability was until just recently.) The 3060 Ti is supposed to be roughly on par with the 2080 Super, so in theory it should be no slouch even at extremely high settings in most games. However, just as an example: if I run Cyberpunk 2077 with otherwise pretty low quality settings but ray tracing turned on, my framerates max out in the high 40s. (So much for the RT cores on the 3000 series being so much better, if the 3060 Ti really does roughly equal the 2080 Super otherwise and I'm seeing such a huge drop with it on.) The moment I turn ray tracing off, it positively flies, and honestly I don't see the huge visual differences I'm supposed to be seeing (mind you, I'm gaming, not screenshotting), so I don't much mind turning it off and just using normal ambient occlusion. I have tried adding a bit of overclocking (specifically following this post) but didn't see a whole lot of difference. (Well, with the undervolting it runs a tiny bit cooler, I guess, and it's certainly not hurting anything.) On the subject of temperatures, I changed the thermal paste and saw at least a 2C drop for my troubles. It typically stays far below what should be its thermal throttling temperature. However, I did once run a game with a bug of some sort that pinned the GPU at 100% utilization and ran the fan up to maximum, yet the temperature stayed about the same, so I've been wondering whether something in the firmware handles throttling incorrectly (such as the limit being set internally lower than it's supposed to be.)
Now, the Zotac model may not be the best of the best, but generally speaking the only real complaint most people have is how noisy it tends to be, and I can deal with that. At the same time, it's also the most likely culprit here. (One of my bigger fears is that the whole LHR limiting thing could go wrong somehow and affect gaming, such as if it misdetected something and thought I was mining instead of gaming. LHR presumably shouldn't affect gaming normally, but I do worry something went wrong and applied the limiter by accident.) As far as I can determine, it should be plenty for 1080p60 even with ray tracing on, as long as the settings are reasonable (and by reasonable I don't even mean all high, or necessarily even all medium; I've actually set a few things to low and still seen issues.)
I've also been wondering if it could be video driver related. I noticed in particular that the shared memory available to the card doubled when I increased my RAM (presumably it's half the system RAM.) Of course, that means it was already at least equal to the GPU's own 8GB of VRAM even before the upgrade, so it really should have been enough. Actually, I'm not quite clear on why it uses shared RAM to begin with; I presume it's some sort of efficiency thing, staging textures and such in shared RAM before the GPU actually needs them. It's worth noting that this is with the full features of PCI-Express 4.0 enabled and working (as far as I can tell), and even Resizable BAR is enabled and working. (I doubt this or really any other current system can truly saturate either, and it's unlikely either adds a noticeable performance increase, but at least they're there adding whatever tiny bit they add; maybe something like 5% at best.)
Along the way, I've also had to reinstall Windows once since I got all this hardware, and it behaved the same before as after, so I think it's unlikely that something in Windows is the culprit, or any viruses or miners or anything. (I even reformatted the partition Windows was on.) One game in particular I'm having issues with is Empyrion. It's not the most optimized game by any means, but when I'm not seeing any bottlenecks (CPU, GPU, and RAM all less than 100% utilized as far as I can tell) and it's running off the good SSD, I have to wonder why the framerates drop at times. (Even if it's poorly optimized, when things drop there must be a bottleneck somewhere! It's not as if framerate drops are an intentional feature, after all!) There must be a bottleneck somewhere, and I'd like to track it down and at least minimize it.
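Since the problem is more about frame pacing than average FPS, my next step is to log frametimes with a capture tool and look at the spikes rather than the average. Here's a rough sketch of the kind of analysis I mean, using made-up sample numbers rather than a real capture (the hitch threshold of 2x the median is my own arbitrary choice):

```python
from statistics import median

def frametime_stats(frametimes_ms, hitch_factor=2.0):
    """Summarize a frametime trace: average FPS, 1% low FPS, and a
    count of 'hitch' frames taking more than hitch_factor times the
    median frametime. All inputs are in milliseconds."""
    n = len(frametimes_ms)
    avg_fps = 1000 / (sum(frametimes_ms) / n)
    # 1% low: mean FPS over the slowest 1% of frames (at least one frame).
    worst = sorted(frametimes_ms, reverse=True)[:max(1, n // 100)]
    low_1pct_fps = 1000 / (sum(worst) / len(worst))
    med = median(frametimes_ms)
    hitches = sum(1 for t in frametimes_ms if t > hitch_factor * med)
    return avg_fps, low_1pct_fps, hitches

# Made-up trace: a steady 16.7 ms (60 FPS) with two 40 ms hitches mixed in.
trace = [16.7] * 98 + [40.0, 40.0]
avg, low, hitches = frametime_stats(trace)
print(f"avg: {avg:.1f} FPS, 1% low: {low:.1f} FPS, hitches: {hitches}")
```

The point being: two 40ms frames barely dent the average (it still reads as 58-ish FPS) but show up clearly in the 1% lows and hitch count, which matches the "feels wrong even when the counter looks fine" thing I'm seeing.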