Trying to pin down a strange performance issue.

Nazo · 2[H]4U · Joined Apr 2, 2002 · Messages: 3,672
I just recently did a system upgrade. Basically all major components: CPU, motherboard, RAM, and GPU. I even tossed in a new PCIe M.2 SSD for good measure. For the most part the hardware should be able to handle pretty much anything at reasonable quality settings, but sometimes things just aren't as smooth as they should be. (This is 1080p and just going for 60 FPS, not 4K at 144 or something.)

It's actually really hard to pin down or even describe exactly what it's doing. The best I can say is micro hitches in games, lower framerates than there should be even with a lot of settings lowered, etc. It's hard even for me to 100% pin down, because sometimes it's not a visibly obvious thing like stutters and it just feels, well, "wrong" somehow. Sometimes it sort of bothers me visually (I don't generally get motion sickness or the like, but lately it's something almost close to that at times.) Having vsync on for maximum smoothness does generally help a lot, but there are still issues (especially where framerates are dropping.)

I most suspect the video card, but it really should be more than good enough for 1080p60 if settings aren't maxed out or otherwise set ridiculously. The odd thing is, while the same games weren't running buttery smooth on my previous system (Ryzen 5 2600 w/ GeForce 1060) and definitely had framerates lower than 60 many times, they didn't have things like microstutters. First, here's the hardware:

CPU: Ryzen 5 5600X overspecced (more about that in a moment) to constant 4.4GHz
Motherboard: ASUS ROG Strix B550-A Gaming (99.99% sure this isn't the culprit.)
RAM: 2x16GB Mushkin Redline Lumina PC4-32000 (4000MHz) CL-18
GPU: Zotac GeForce RTX 3060 Ti Twin Edge OC (LHR), with or without slight extra overclocking (this is my main suspect.)
SSD: Western Digital 1TB WD Blue SN570 NVMe (99.9999% sure this isn't the culprit either, but anyway, while it's not the absolute fastest by any means, it's decent.)

I will mention on the SSD thing that I actually have multiple SSDs in this system. The OS is on a cheap SATA SSD (still better performance than most HDDs, but not meant for gaming,) low-end games are on a slightly better SATA SSD, and the most demanding games are on the good NVMe SSD. (This is mostly just because I am not a rich person, so each is smaller than ideal and bought years apart, but I do consider it a good idea to give the OS its own dedicated drive where reasonably possible, and a really small lower-performance SSD is actually quite cheap.) The two SATA drives are on separate buses from each other, but I am running the high-demand games from the NVMe drive anyway, so it shouldn't really matter. Task Manager doesn't show heavy disk usage in any game, for whatever that's worth (which is to say not much.)

First, on the CPU. I say "overspecced" because I'm not exactly overclocking it, but I am running it outside of specifications. Instead of running at 3.8GHz or lower depending on load, then boosting to 4.6GHz and dropping as heat builds up, I have set it to a fixed 4.4GHz. This means no CPU governor shifting frequencies (so no switching latency) and pretty consistent performance.

(As for why I did this, it's less for performance and more because I could manually set a fixed voltage that, up to that speed, can be extremely low -- far less heat production and generally easier on the chip's long-term lifetime, so it isn't really related. The voltage required for anything over 4.4GHz is insane and the temperature rises steeply with it; at full stock it can hit the thermal throttling limit within minutes if I run something like encoding or Prime95 testing, with a negative voltage offset in PBO only helping a little.)

Gaming-wise this has very little effect, since 4.4GHz on a six-core is plenty for today's games (I rarely see more than 20-something percent CPU usage during gaming -- I think the most I ever saw was around 40-something) and the extra 200MHz doesn't make any visible difference, but it should be mentioned just in case. If anything, it should actually decrease latencies, since it means a lot less switching around. Most likely not enough to be visible, but the point is, it would be more likely to decrease the micro-stutters if it did anything at all. As far as I can tell, the CPU pretty much blows away anything any game throws at it right now. (Also worth noting that when thermal throttling really kicked in at stock, I could easily see it drop as low as 4.2GHz. The 5000 series has a real temperature problem and I hope they fixed that with the 7000 series.)

Now, on the memory, there is an interesting point. I initially kept my old RAM (2x8GB of 3200MHz) but decided to upgrade to 32GB because I do a lot of things that use RAM very heavily (namely modding tools like Unity Asset Studio, which I guess must load everything into RAM or something.) Curiously enough, I saw a huge jump in general smoothness in games. This is especially strange because games today just don't really utilize 32GB at all. Even with the 16GB I don't think I ever caught a game using more than roughly 8GB, and I still don't with 32. The new RAM is faster by a decent margin (and still with decently low latencies,) but the difference really shouldn't be so incredibly huge.

Now the GPU is a different matter. I didn't properly research it and possibly shouldn't have gone with this particular brand. (You know how tricky GPU availability was up until just recently.) The 3060 Ti is supposed to be roughly on par with the 2080 Super, so theoretically this should be no slouch even at extremely high settings in most games. However, just as an example, if I run Cyberpunk 2077 with otherwise pretty low quality settings but RTX turned on, my framerates max out in the high 40s. (So much for the RT units on the 3000 series supposedly being so much better, if the 3060 Ti truly does roughly equal the 2080 Super otherwise and I'm seeing such a huge drop with it on.) The moment I turn off RTX it positively flies, and honestly I don't really see the huge differences I'm supposed to be seeing (mind you, I'm gaming, not screenshotting,) so I don't much mind turning it off and just using normal ambient occlusion anyway.

I have tried adding a bit of overclocking (specifically using this post) but didn't see a whole lot of difference. (Well, with the undervolting it's a tiny bit cooler, I guess, and it's certainly not hurting anything at least.) On the subject of temperatures, I did change the thermal paste and saw at least a 2C drop for my troubles. Typically it stays far below what should be its thermal throttling temperature. However, I did once run a game with some sort of bug that pegged the GPU at 100% utilization at all times and ran the fan up to max, yet the temperature stayed about the same, so I've been wondering if there may be something in the firmware that handles throttling incorrectly (such as the limit being internally lower than it's actually supposed to be.)
Now, the Zotac model may not be the best of the best, but generally speaking the only real complaint from most people is supposed to be how noisy it tends to be, and I can deal with that. But at the same time, it's also the most likely culprit here. (One of my biggest fears is that the whole LHR limiting thing could go really wrong somehow and affect gaming -- such as if it misdetected something and thought I was mining instead of gaming. Normally LHR shouldn't affect gaming, presumably, but I do worry something went wrong somehow and it's applying limiting by accident.) As far as I can determine, while it's not the best of the best by any means, it should be plenty for 1080p60 even with RTX on as long as the settings are reasonable (and by reasonable I don't even mean all high, or necessarily even all medium. I have actually set a few things on low and still seen issues.)

I've actually been wondering if it could even be video driver related. I particularly noticed that the shared RAM available to the card doubled when I increased my RAM (presumably it's half the system RAM.) Though, of course, that means it was still at least equal to the GPU's own 8GB of VRAM even before the upgrade, so it really should have been enough. Actually, I'm not quite clear on why it's using shared RAM to begin with, but I presume it's some sort of efficiency thing, loading textures and such into shared RAM before the GPU actually needs them. It is worth noting that this is with the full features of PCI-Express 4.0 enabled and working (at least as far as I can tell,) and technically even resizable BAR is enabled and working. I doubt this or really any other system can truly utilize them, and it's unlikely either adds any noticeable performance increase -- maybe like a 5% improvement at best -- but at least they're there adding whatever tiny bit they add.

I have also, along the way, had to reinstall Windows once since I got all this, and it did the same thing before as after, so I think it's unlikely the culprit is something in Windows, or any viruses or miners or anything. (I even reformatted the partition Windows was on.) One game in particular I'm having issues with is Empyrion -- not the most optimized game by any means, but when I'm not seeing any bottlenecks (CPU, GPU, and RAM all less than 100% utilized as far as I can tell) and it's running off the good SSD, I have to wonder why the framerates drop at times. Even if it's not the best optimized, when things drop there must be a bottleneck somewhere -- it's not as if they've implemented framerate drops as an intentional thing! I'd like to track it down and at least minimize it.
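One concrete way to put numbers on this kind of "feels wrong but nothing is maxed out" problem is to log per-frame times (tools like PresentMon or CapFrameX can export them) and compare the average against the worst 1% of frames, since micro hitches barely move the average. Here's a rough stdlib-only sketch of that analysis -- the log format and thresholds are assumptions, so adjust to whatever your capture tool actually exports:

```python
# Average FPS hides micro stutter; the worst 1% of frames and big
# frame-to-frame jumps expose it. Input: per-frame times in milliseconds,
# as exported by a frame capture tool (format assumed here).
def analyze(frame_times_ms):
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)
    worst = sorted(frame_times_ms, reverse=True)[: max(1, n // 100)]
    low_1pct_fps = 1000.0 * len(worst) / sum(worst)
    # Count "hitches": frames taking more than double the previous frame.
    hitches = sum(1 for a, b in zip(frame_times_ms, frame_times_ms[1:])
                  if b > 2 * a)
    return round(avg_fps, 1), round(low_1pct_fps, 1), hitches

steady = [16.7] * 300                  # a clean 60 FPS run
spiky = ([15.0] * 9 + [32.0]) * 30     # same average, periodic spikes
print(analyze(steady))  # good 1% low, zero hitches
print(analyze(spiky))   # near-identical average FPS, poor 1% low, many hitches
```

Two runs can report the same average FPS while one of them has a far worse 1% low, which matches the "lower smoothness without obviously lower framerates" symptom described above.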
 
I did specify it wasn't just the one game, just that that one is where I see it the most. It's definitely not less than 15 FPS though. They're describing a far more extreme issue than what I'm seeing, and one specific to a single game, not something that shows up in many. As for "more concise," which detail should I leave out? Which thing do you think people need to not know? The CPU settings? The GPU? Or maybe leave out the fact that the games are on a fast SSD, or the RAM upgrade? Which one? If the issue were a simple "just run a troubleshooter" kind of thing that didn't require details, I wouldn't need to post here. This might be hard to believe since I've only been on here for twenty years (geez, has it been that long?) and have been using computers since the 80s, but I actually am vaguely familiar with how computers work and how to troubleshoot simple problems (especially now that you can google some crazy obscure things and find actual results.) In my experience, if I don't provide details I spend a week with people asking about them and having to fill them in post by post. Besides, it has only been a couple of hours since I posted and already there was one answer.

HardForum is a forum. It is not Twitter or IRC. One line statements without any real detail are not necessary or useful.
 
Have you used HWiNFO to examine all onboard sensors? I had a similar issue several months ago when I added a third NVMe drive, and HWiNFO helped me find the issue. My chipset temp was hitting almost 70C and causing intermittent sluggishness. Maybe your chip is struggling to run at 2000 on the FCLK. Have you tried 1900 to see if it's more consistent?
 
Ok, ngl, it didn't occur to me to check the chipset given that on a board like this that shouldn't be an issue. I'll take a look at that. Thanks.

EDIT: Seems FCLK is running at 1800MHz. None of the temperature sensors available in HWiNFO are high enough to be a problem. The chipset is only 38C. I remember digging into FCLK a while back when looking over over/underclocking options, and the general consensus seemed to be to just leave it alone, so I did. At least it definitely can't be overheating.

On a whim I'm actually trying something a bit... unorthodox: I've turned off SMT. This is a six-core processor, so that should probably be enough for gaming in general. I did see some small hitches in Empyrion several times when loading terrain and such, but I would swear most of my games as a whole are running more smoothly. Could be placebo. I'm not 100% sure yet.


EDIT2: Ok, I looked into FCLK more. It's actually supposed to match the RAM. At the time I looked it up, I only saw people saying that overclocking it can cause desyncing and thus isn't recommended, but now that I understand it better I see that underclocking would also result in desyncing... I'm not sure why auto put it at 1800 then, since that would match 1800MHz (DDR4-3600) RAM, yet this is 2000 (4000.) I will definitely try setting it to 2000 and see how things go. Temperatures are likely to be a non-issue for me in general.
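For anyone else puzzling over the same numbers: DDR4 transfers data twice per memory clock, so the 1:1 "coupled" FCLK is just the MT/s rating divided by two. A tiny sketch of that arithmetic (the function name is purely illustrative):

```python
# DDR transfers on both clock edges, so the actual memory clock (MCLK)
# is half the MT/s rating. Zen's 1:1 "coupled" mode wants FCLK == MCLK.
def fclk_for_sync(ram_mts):
    return ram_mts // 2

print(fclk_for_sync(3600))  # DDR4-3600 -> 1800 MHz FCLK
print(fclk_for_sync(4000))  # DDR4-4000 -> 2000 MHz FCLK
```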

EDIT3: So one interesting thing I found was that my page file was filling up. Turning it off seems to have helped a bit. I can't really figure out what was filling it up, because nothing is actually using more than about 11GB absolute max -- my assumption is the GPU drivers are at fault, caching a lot of something, but I'm not sure. The swap isn't super fast, so it really isn't good for such things. It's still inconsistent though. Interestingly, turning SMT off sometimes produces better results and sometimes worse, so I guess it's best to keep it on overall with six cores. Very tricky to pin down anything exact there though.
 
So far with no swap files I've only ever gotten a crash once, in the most extreme situation ever (two things with basically zero optimization and generally not-so-great design running at the same time,) and things are indeed generally running better. Empyrion had an interesting issue where, after a short while, it would fail to load some things like icons for some interfaces, and this seems to have gone away too. To me this points to a potential issue with the way the drivers are loading things. But I can't really tell what's going on there or how to fix it if so. Ideally I'd like to find a way to fix whatever is going on, because it is best to have at least some swap space, since software is designed to assume it's there. (So much so that, long term, if I find no other way, I may even need to set up a RAM drive with a small swap file on it, as stupid and inefficient as that is, because some things just assume swap exists and can crash in extreme -- albeit rare -- scenarios if it isn't...)

There still seems to be a bit of a lower performance ceiling than there really should be though. Either benchmark listings and reviews are just wrong in implying the 3060 Ti should be where it is performance-wise and it's not even that much better than a 1060 (non-Ti,) or something is just up here. I just can't believe it could possibly be even remotely CPU limited, and the RAM is good speed and good latency (I'm sure there's lower out there, but it's pretty decent, and I'm running it stable with a 1T command rate at that,) so that shouldn't be an issue either. One thing I've been wondering is whether the whole LHR thing could be triggering and affecting gaming more than it's officially supposed to. Of course I have a monitor connected and I don't do any mining, and LHR supposedly doesn't particularly affect actual gaming, but it's enough to make me wonder if there could be a bad limiter mechanism on this particular card. After all, Zotac is not exactly what you'd call the high-end brand... Is the 3060 Ti just really not as impressive as all the reviews seemed to say?

One trick I've come up with, since I really want to be able to use vsync, is to use vsync "fast" mode but set an FPS limit slightly over 60, so it only goes a little over instead of running fully uncapped. This strikes a pretty good balance so I can at least have vsync on. I really feel like a 3060 Ti should be able to run full vsync when I'm not exactly going nuts with graphics settings, but this works pretty well at least.
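The cap-slightly-above-60 trick above is basically what an in-driver or in-game frame limiter does: pace the render loop to a fixed interval so fast sync always has a fresh frame without the GPU running flat out. A minimal sketch of that pacing logic (the 63 FPS cap and the no-op "render" are stand-ins):

```python
import time

# Minimal frame limiter sketch: pace a loop to slightly above 60 FPS
# (e.g. 63) so fast sync has a fresh frame without running fully uncapped.
# Deadlines accumulate from the start time so timing error doesn't drift.
def run_capped(render_frame, fps_cap=63.0, frames=10):
    interval = 1.0 / fps_cap
    next_deadline = time.perf_counter()
    for _ in range(frames):
        render_frame()
        next_deadline += interval
        sleep_for = next_deadline - time.perf_counter()
        if sleep_for > 0:
            time.sleep(sleep_for)

start = time.perf_counter()
run_capped(lambda: None, fps_cap=63.0, frames=10)
elapsed = time.perf_counter() - start
print(f"{10 / elapsed:.1f} FPS")  # lands at or just under the cap
```

Accumulating deadlines (rather than sleeping a fixed amount after each frame) is the part that matters for smoothness: a late frame gets a shorter sleep so the average rate stays pinned to the cap.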
 
I'll admit I haven't read your entire novel, but have you made sure your mobo BIOS is fully up to date?

There was a known issue of AMD chipsets suffering from micro stutter when TPM 2.0 was enabled. This has been solved in newer BIOS revisions. (Not sure if you are on Windows 10 or 11, but worth checking if you haven't already.)

Edit: I also see a lot of details involving in-depth tweaking. Have you checked whether the performance issues are still present when everything is at stock settings?
 
Always go Intel. I belonged to the Intel Retail Edge club from 2014-2017, and the Steam forums complaints mainly come from AMD rigs. Sure, some games work good, but not everything.
 
Always go Intel. I belonged to the Intel Retail Edge club from 2014-2017, and the Steam forums complaints mainly come from AMD rigs. Sure, some games work good, but not everything.
That is just about as unhelpful and inappropriate a response as it gets. To be perfectly frank, it's borderline actual trolling and I was tempted to outright report it. Besides, to answer you more directly, I don't like the direction Intel is going anyway. AMD has been striking a balance of efficiency and performance that suits me far better for a while now. And the reason you see more complaints from "AMD rigs" (I think maybe it's the people, not the computers, doing the complaining, but whatever) is because people who have AMD made a conscious choice more often than not (Intel being the default for quite a wide range of prebuilts and such) and more frequently know what they're doing enough to actually tweak things, mess with settings, overclock, etc.

I have been using both AMD and Intel processors since the 80s and can assure you that each has had its ups and downs, with Intel having quite its own share of issues. In fact, Intel's latest (the Raptor Lake core) is exactly the opposite of the sort of profile I want out of a processor, just to give you a very immediate example: far too much power usage (which likely translates directly to heat, for that matter) and incompatibility (it can outright crash software on Windows 10 or lower because the OS doesn't know not to send advanced instructions to the economy cores that can't do them, and I don't want to use Windows 11 right now.) Over the years each has had good options and bad options, and right now AMD suits me better. Perhaps my next processor will be Intel if they fix their design (just as an example, those economy cores -- if properly utilized -- could also translate to insanely low power usage when the system is not fully loaded, but right now it's tuned only to push higher benchmark numbers rather than taking advantage of that potential.) Perhaps not. Maybe AMD will copy the idea and make it more compatible and efficient before Intel does.
Maybe they'll come up with something entirely different. Who knows? Neither is inherently better; one has to make the choice using the information available at the time, and at the time I bought this processor it was the better option for my needs.

It's fine to have an opinion, and everyone is welcome to one, but to try to force your opinion on others almost religiously is going too far and doesn't help anyone, so please try to restrain that in the future. Besides, right now I'm broke, so I won't be buying any new processors for a while. Even IF you were right (and I will frankly state outright that you simply are not,) it wouldn't be very useful or helpful because I can't do anything about it.

Oh. And as I already mentioned, there is a very very distinct possibility the issue actually lies in the GPU itself or its drivers rather than anything to do with the CPU. Oops. Too much fanboyism actually hurts everyone. Be adaptable.

I'll admit I haven't read your entire novel, but have you made sure your mobo BIOS is fully up to date?

There was a known issue of AMD chipsets suffering from micro stutter when TPM 2.0 was enabled. This has been solved in newer BIOS revisions. (Not sure if you are on Windows 10 or 11, but worth checking if you haven't already.)

Edit: I also see a lot of details involving in-depth tweaking. Have you checked whether the performance issues are still present when everything is at stock settings?
Sorry for the slow response. I can't really answer right now, because I opened the latest BIOS update file just to check the version numbers against each other to make sure I was on the latest (it turns out I was) and it just starts flashing without confirmation, lol. Not sure why they chose to have it straight-up flash when you open a BIOS update file, even for the same version number, since everything else about this BIOS seems very well made. As a side effect it has completely reset the BIOS settings (except the date, which bugs me, since that means it is at least capable of keeping some settings if it wants. The basic clock functionality may be independent, but the date is not.)

Along the way of going through every single section redoing all the settings, I found out something interesting: hidden in the overclock section is an option to actually set a negative sign on the maximum boost. Thus I can have the benefits of PBO tweaking (namely the lower idle frequencies and fully sleeping cores) while still limiting the maximum frequency to keep the CPU always on the good side of the voltage curve. (Over 4.4GHz or so, the voltage required to maintain stability shoots up sharply, so limiting to that speed keeps temperatures and everything incredibly low. Up to now I've been using a manual speed override and voltage setting with boosting disabled entirely.) It's funny, though, because the basic "non-overclock" page for PBO settings can only go positive on the max boost setting (for actual overclocking,) whereas the only way to "underclock" is in the overclock section.

Anyway, I'm running through the process of testing PBO curve optimizer settings on a per-core basis. (Results so far suggest it was core 4 more than anything else that binned this CPU as a 5600X instead of something higher; all but one core do -30 and possibly could go lower, with the one exception doing -29.) Thus I can't really game while testing, but the results when this is done should be excellent.

It is interesting about the fTPM issue though. I see the latest BIOS update does actually confirm what you said. I forgot I had already done the update, or I wouldn't have gone through this process. I am using Windows 10, so yes, that issue potentially would have affected me if not for that. Honestly I feel MS kind of screwed everyone over with the whole TPM thing (they should at least have been working with manufacturers on it for several years first,) but it seems I already have the BIOS revision fixing that, so I guess that isn't it. I'm actually kind of hoping that having completely reset all the BIOS settings and gone through them again will have some effect on all this (e.g. if something was set wrong before.) Fingers crossed, but there's still more testing to go to get the right numbers for stability while lowering as much as possible. (On the plus side, Prime95 going all out with AVX2 and everything maxes out at around 74C for me right now. Actual gaming will probably be around 40C or lower at the rate things are going. This processor is going to last me a long time -- and I need it to.) I should be able to test gaming soon.
 
Test the power supply.
yeah, and the ram. 16gb is enough for most games. when I was playing warzone with 16gb my ram was maxed out and my SSD was getting used, gameplay was still smooth though. I upgraded to 32gb and my avg fps increased by 4.
 
Well, I don't know for sure about a whole brand specifically doing such a thing, but FWIW, the RAM I am using right now is indeed Mushkin. I don't think you can look at brand alone here though -- especially since Mushkin doesn't make the actual chips (off the top of my head, I think the chips on this model are Hynix.) It's worth noting that this RAM actually ran things more smoothly than what I had before.

Test the power supply.
Hmm. I don't have a PSU tester. Not sure how I'd test it. Officially it's supposed to be a pretty decent quality PSU -- the EVGA 650 GQ (210-GQ-0650-V1,) which reportedly uses a good manufacturer (afraid I forget who off the top of my head) and quality components. I'm not really sure if I could use my old 450W PSU to test with or not. (It's a bit of a misconception that computers need PSU wattages quite as high as people have been led to believe -- the situation is actually a lot more complicated -- but it would be pushing things a bit, especially since it's pretty old at this point.) I can say that the voltages seem to be well within tolerances as much as I've watched them so far. It may be worth looking into a bit more though, and I'll have to fire up a monitor during gaming and see if anything jumps especially.
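Short of a dedicated tester, one software-side check is to log the rail voltages during gaming (HWiNFO can log sensors to CSV) and flag anything outside the roughly ±5% tolerance the ATX spec allows on the main rails. A sketch of the flagging step -- the dict layout of the samples is an assumption, not any tool's real export format:

```python
# The ATX spec allows roughly +/-5% on the main rails. Given logged samples
# per rail (e.g. exported from a sensor logger; the dict layout here is an
# assumption,) flag any reading outside tolerance.
NOMINAL = {"+12V": 12.0, "+5V": 5.0, "+3.3V": 3.3}

def out_of_tolerance(samples, pct=0.05):
    bad = []
    for rail, readings in samples.items():
        nominal = NOMINAL[rail]
        for v in readings:
            if abs(v - nominal) > pct * nominal:
                bad.append((rail, v))
    return bad

log = {"+12V": [12.1, 11.9, 11.2], "+5V": [5.02, 4.98], "+3.3V": [3.31]}
print(out_of_tolerance(log))  # the 11.2 V sample is out of spec
```

One caveat: motherboard sensor readings themselves aren't very accurate, so a brief sag under load is more meaningful as a relative signal (it dips when the stutter happens) than as an absolute pass/fail.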

yeah, and the ram. 16gb is enough for most games. when I was playing warzone with 16gb my ram was maxed out and my SSD was getting used, gameplay was still smooth though. I upgraded to 32gb and my avg fps increased by 4.
The RAM tested stable in Prime95 large FFT sizes and Memtest. But the chances of two entirely different sets of RAM having such a specific and unusual issue in a row seem pretty low to me anyway.
 
Well, I don't know for sure about just a brand as a whole so specifically doing such a thing, but FWIW, the RAM I am using right now is indeed Mushkin.


Hmm. I don't have a PSU tester. Not sure how I'd test it.


The RAM tested stable in Prime95 large FFT sizes and Memtest.
I think stress testers and Memtest can miss things, since what you really want to test is your gaming issue. It would be super easy to just take one of the sticks out and see if the issue is gone; whichever stick stays in, just put it closest to the CPU. For testing the PSU, I would buy one off Amazon and return it, or if you're not OK with that, then sell it.
 
I don't really have enough money even to buy a PSU to return. It's also risky: something could go wrong with the return process and I could get stuck with the cost and no refund (rare, I know, but still, the possibility is non-zero.) I guess at some point I really do need to buy a real PSU tester. I may add that to my wishlist for someday, but that doesn't help today.

I'm not sure about the RAM testing 100% stable and then not actually being so. And again, the fact that I had the issue before (worse, in fact) with completely different RAM makes it seem pretty unlikely that two completely different sets could produce the same issue. I can try what you suggest later though. (I have Prime95 going full tilt at the moment and don't want to stop it until it has run long enough. I may have my final CO numbers here.) I'll be honest in that I don't expect it to get any better -- especially since I suspect it may be more GPU or driver related -- and running either stick individually may actually make it worse. Remember, I had 16GB before and the true upgrade was to 32GB, plus I've put more load onto the RAM by disabling the swap files, yet gotten smoother results in doing so. Especially with the disappearing texture issue going away after that, I really, really suspect the GPU (probably drivers, but who knows what's going on with this firmware.)

Actually, I almost forgot, but I do have something to add. I've been messing around with Stable Diffusion, mostly just for the heck of it (with the right settings even a 6GB VRAM GPU can run it, so this 8GB card generally handles most tasks without crashing unless I really go nuts.) Mostly just playing around, but anyway, one time I randomly noticed it was going significantly faster for no apparent reason. My suspicion is that it has been running afoul of the LHR mining limits -- it may well look like a miner to the firmware or whatever -- and then on one bootup it randomly didn't. More testing is needed to truly confirm this though; it may be that I had some settings different and didn't realize it somehow. I still frequently suspect it is somehow getting stuck in limiting mode in normal games. (And it does run suspiciously cool even at 100% GPU usage...)
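One way to turn that "it felt faster one boot" hunch into data is to time an identical workload each session and log the rate, so runs across reboots can be compared directly. A sketch of the timing harness -- the workload here is a trivial stand-in; in practice it would be the same Stable Diffusion prompt and settings every time:

```python
import time

# Time a fixed workload and report iterations/second, so the same run can
# be compared across reboots. The lambda below is a stand-in workload;
# substitute the real repeatable task (same prompt, same settings).
def measure_rate(workload, iterations=5):
    start = time.perf_counter()
    for _ in range(iterations):
        workload()
    return iterations / (time.perf_counter() - start)

rate = measure_rate(lambda: sum(range(100_000)))
print(f"{rate:.1f} it/s")  # write this down and compare between boots
```

If the logged it/s clusters into two distinct bands across boots with identical settings, that would support the "sometimes limited, sometimes not" theory; if it's continuous, it's more likely ordinary thermal or clock variation.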

That said, I'll still try these suggested tests when I have the time to do so. My old PSU will probably hold up long enough to do a bit of quick testing as long as I don't go nuts and Prime95 it or something.
 
If you suspect the GPU or drivers, I would start by nuking the GPU drivers with DDU and doing a fresh install of the latest ones, and make sure all your other drivers are up to date. If you have not already tried it, run the CPU at its normal settings; testing has shown that messing with the default CPU behaviour can have adverse effects. It's always a good idea when looking for issues to run your machine as close to stock as possible and see if that fixes/improves things.

I doubt the LHR thing you keep coming back to is the issue, or we would have heard about it a lot more already.

(btw it is possible I suggested something you already tried, your explanations are very long and I may have missed it somehow)
 
If you suspect the GPU or drivers, I would start by nuking the GPU drivers using DDU and doing a fresh install of the latest drivers, make sure all your other drivers are up to date
Yeah, I did do that already. It's more that I suspect some bad design, or something that just plain doesn't work right in this specific setup somehow. For example, since turning off swap seemed to make a difference both in performance and in actual in-game visuals (namely that texture loading issue in Empyrion,) it makes me wonder if it is caching things strangely. Of course, most people playing the same games don't report the same issues, so it would have to be a combination of things that I was somehow unlucky enough to hit upon.

and if you did not already try so, run the CPU at its normal settings
Perhaps not as much as you'd like. I did accidentally test with no undervolting, just the boost limited (but again, the CPU seems to be very, very underutilized as things stand,) when I forgot to set the PBO numbers after a BIOS reset. I will admit I haven't tested all-stock -- I've basically almost never used this CPU significantly at pure stock, since it ran so badly on the stock HSF, but with this HSF it should at least be reasonably possible to test. I'll be honest though: I'm in no hurry, and if undervolting isn't an issue (as confirmed by accidentally not undervolting,) I'm really inclined to say it just doesn't point to that. (Of note here, for clarity: I'm basically running my CPU a tiny bit lower rather than overclocking. Most, if not all, of the issues that bring about instabilities and cause stutters when overclocking simply don't apply, because I'm running it cooler and at lower power levels than stock.) Well, I will still test when I reasonably can, just to tick off the standard testing process. (BTW, my computer is plugged in, just to go ahead and tick that one off too, lol.)

I doubt the LHR issue you seem to think about is an issue or we would have heard about it a lot more already.
Very possibly. It's a paranoia of mine -- though not entirely without reason. For context, I bought this videocard in a hurry when I saw prices dropping right at a time I actually had the money, and all I had before was a 1060 (which was definitely starting to show its age). I didn't research it well, and I get the feeling Zotac isn't the highest-quality brand and that this card isn't the best option. In retrospect, I regret not springing for the 3070 instead (or really the 3070 Ti, probably the optimal balance for my needs), which isn't even that much more than the 3060 Ti.

That said, part of why I'm paranoid about the LHR thing is that one of the known triggers for limiting (at least in the past) was not having a monitor plugged in, and this card seems determined to fight with me over my monitor. (I think it doesn't like that it's HDMI.) Occasionally when I boot, it beeps to indicate no monitor is plugged in and I get nothing on screen until Windows starts up (which resets the GPU as the drivers load). But the biggest problem is possibly my own doing: I like to leave a DisplayPort cable unplugged because it goes to my secondary screen (my primary one doesn't have DP), and no matter how I order the cables, the card will not put my primary monitor as display 1 at bootup if the DP cable is plugged into the second screen. So I have to turn sideways to the secondary monitor to do anything in the BIOS, which is frustrating and makes my neck hurt -- it's positioned for glancing, not full turns, and I can't position it any better than it is. It's dumb, I know, but it adds up to a situation that triggers my paranoia that the card could think a real monitor isn't connected (apparently they make dummy dongles for this) and is limiting things on purpose.
Which is, I'm sure, just me being paranoid, but at least it's not completely without reason.


Remember also that one of the issues I mentioned was that turning RTX on in Cyberpunk 2077 -- with everything in the RTX section off except the RTX lighting itself -- dropped it to the low 40 FPS range. Unless a 3060 Ti legitimately isn't supposed to be enough for that, something seems fishy. (Well, I need to test again with my newer settings; I haven't since.) Supposedly the 3060 Ti benchmarks in the general vicinity of a 2080 Super, which really seems like more than what that game should expect people to have, and supposedly the 3000 series has even better RT cores than its predecessors, so something is fishy no matter how you look at it. It seems unlikely to be the CPU (and not terribly likely to be RAM either); it feels like a GPU issue. Well, unless it really is supposed to perform that badly -- but then I don't see why the game even has RTX, since that would make it useless for probably 75% of the market (everything older or lower than whatever it requires for the bare minimum) and would mean it was basically unplayable with RTX when it came out. Presuming minimum RTX settings should run well on this card, it definitely points to a limiting factor somewhere that shouldn't be there. More testing needed, though, since I've changed a bunch of stuff around, so I'll get back on that when I can.


Honestly, if I ever get the money I'm going to upgrade my videocard, though sadly that's probably not happening any time soon -- even though prices have (thankfully) come down further and are now much closer to where they're supposed to be, now that the mining craze seems to have cooled off a bit. And though I also kind of regret not getting a higher-end CPU, since I've found myself pushing it more than I thought I would, I definitely regret the GPU more. Well, it was a bad market for anything silicon a while back thanks to recent events.



EDIT: I got curious and ran Cinebench R23 a few times just to see what would happen. I don't know how it works internally, so I'm curious whether it's supposed to fill different cells at different rates (more noticeable in single-thread mode), but the actual results were basically what I'd expect: 1460 single-thread, 11400 multi.

The single-thread score was a tad lower than stock for a 5600X (roughly on par with a 5600 which, coincidentally, runs at 4.4GHz -- go figure). That's expected with it running at 4.4 instead of the 4.65 stock boost, since single-thread loads generally won't force downclocking on stock setups. The multi-thread score actually came in a bit higher than most reviewer-posted 5600X scores, but it seems their CPUs end up around 4.3GHz when heat forces them to downclock during the sustained all-core test, so this is consistent with my cores all running a bit faster than where an all-stock 5600X lands in this test. Theoretically this confirms the issue isn't the CPU.

I'll do more real testing (including gaming at all stock) later when I have enough time to truly do so, but my test with manual PBO settings and a 4.4GHz boost limit showed basically the same gameplay but more hitching on tasks that have to wake sleeping cores and scale them up, like terrain generation (presumably because they take longer to hit full boost). It was good to find a benchmark that runs long and hard enough to test CPUs in closer-to-real-world situations where heat-related downclocking is a thing. (BTW, fun fact: it still doesn't push it as hard as possible -- I guess because it's only AVX instead of AVX2. I saw peak temperatures about 10C below what Prime95 can manage, though that's still almost 15C above what most games end up doing.)
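As a quick back-of-the-envelope check on those single-thread numbers (assuming Cinebench single-thread scales roughly linearly with clock speed -- an approximation, not a rule):

```python
# Rough sanity check: if Cinebench R23 single-thread scales roughly linearly
# with clock speed, a score measured at a fixed 4.4 GHz can be extrapolated
# to the 4.65 GHz stock boost to see if it lines up with stock results.

def scaled_score(score: float, freq_from: float, freq_to: float) -> float:
    """Linearly scale a single-thread score between two clock speeds."""
    return score * (freq_to / freq_from)

measured = 1460  # single-thread score measured at a fixed 4.4 GHz
stock_equiv = scaled_score(measured, 4.4, 4.65)
print(round(stock_equiv))  # implied score at the 4.65 GHz stock boost
```

That implied ~1543 at stock boost is in the right ballpark for a 5600X, which is consistent with the CPU behaving normally.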
Maybe I'm wrong, but I feel like this verifies the CPU isn't the culprit here, since I'd expect much more inconsistent results, or something else odd, if it were.
 
Last edited:
So I tested at all stock and still got the issues. In fact, the closer to stock, the more hitching and such I seem to see. I may actually have pinned something down. Along the way of messing with all this, I found an option in the overclocking menu that lets me limit the maximum frequency PBO uses, so I can set the CPU almost to stock while keeping it no higher than 4.4GHz (right below the large voltage jump on the curve) and even benefit from the curve optimizer being configurable per core (no more lowest common denominator). Interestingly, the hitching in things like terrain generation was actually much worse than with a fixed frequency, which put me onto something: there may be two separate issues here I need to address rather than one.

While I still can't explain maximum framerates being as low as they are with things like RTX enabled at all, I may have caught onto what the hitching is. Ironically, it may be the CPU actually being too far ahead most of the time. Because it's not heavily utilized, several cores usually run very low (2GHz or lower) or outright sleep, if I look at Ryzen Master. Then when something suddenly needs a heavy burst from the CPU (terrain generation when entering an area, for instance), it has to wake and upclock cores, and I think there's just enough delay before they fully wake (and scale up, in the case of PBO) that it effectively adds notable latency. Very tiny, but enough to be noticeable and irritating when load is variable (i.e., gaming) -- especially if these processes spin up multiple threads so it has to do this for multiple cores. Even with PBO disabled via a manual frequency override, it still does this (but then, that's effectively the other side of the same equation: instead of setting how to scale up, it sets how to scale down).

I wanted to try disabling this. However, strangely, I seem to be hitting a wall. I tried the Windows "high performance" power profile, but as I type this one core is running at maybe 2300MHz (probably averaging a lot lower) and the others are all asleep. I believe Windows just has no real say here and the CPU (or specifically the SoC) is effectively governing itself and completely ignoring the OS. So, as a bit of an extreme just to see if it works, I tried disabling c-states in the BIOS, which should disable all of that. Except it didn't. I also disabled PSS (seemingly the modern equivalent of the old "Cool'n'Quiet"), which should also disable it. Except that didn't either. Either Ryzen Master is lying to me (which seems unlikely on its own -- the fact that it produces consistently changing results implies it's not just a snapshot or something) or something is outright bypassing these settings and still lowering core clocks. Which would perhaps explain a lot.
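For what it's worth, Windows' own processor power management has knobs for this too. A sketch using the documented powercfg aliases (this won't help if the SoC really is ignoring the OS, but it would at least rule Windows out; run from an elevated prompt):

```shell
:: Raise the minimum processor state to 100% on the active power plan
:: (SUB_PROCESSOR and PROCTHROTTLEMIN are documented powercfg aliases)
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100
:: Require all cores to stay unparked (core parking minimum = 100%)
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR CPMINCORES 100
:: Re-apply the scheme so the changes take effect immediately
powercfg /setactive SCHEME_CURRENT
```

If Ryzen Master still shows cores parking and dropping to 2GHz after this, that's fairly strong evidence the firmware/SoC side is overriding the OS.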

I don't necessarily want all that disabled permanently, but I do at least want to test with no downclocking/sleeping to see what happens performance-wise -- whether it truly eliminates those micro-hitches that are bugging me so much. Obviously, if this succeeds, the disadvantage is that this computer will use more power at idle and low usage, but this is my heavy-lifting machine anyway (I have a mini computer that's basically low-end laptop hardware, maxing out around 40 watts with 20 being more typical for light tasks, so it's already more efficient for low-power tasks than this system can be even at its best). I'm not worried about temperatures, as I have a good cooler.

Any idea what I'm missing here? What else could be responsible for downclocking and parking the cores, if not c-states and/or PSS? This may not be the culprit, but it really does seem worth trying just to see what happens. What I really wish is that I could get at the CPU governor and override its profile with something snappier. Watching Ryzen Master as I fire up something like Prime95, the cores seem to go through a number of steps before hitting full speed, even with that instant 100% load. (It's very fast, of course, but the fact that I can see it at all means it's too slow.) Something more extreme -- allowing low idles but ramping straight to 100% -- would be more ideal for how I use my computer.
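To actually put a number on that ramp-up delay, here's a rough sketch of my own (not a standard tool): it idles briefly so the governor can clock down, then times consecutive fixed chunks of busy work. If boost takes a while to kick in, the first chunks should take measurably longer than the later ones.

```python
import time

def busy_chunk(n: int = 200_000) -> None:
    """A fixed amount of CPU-bound work."""
    x = 0
    for i in range(n):
        x += i * i

def measure_ramp(chunks: int = 20, idle: float = 1.0) -> list:
    """Sleep to let the core clock down, then time consecutive work chunks."""
    time.sleep(idle)  # give the governor time to drop the clocks
    times = []
    for _ in range(chunks):
        t0 = time.perf_counter()
        busy_chunk()
        times.append(time.perf_counter() - t0)
    return times

if __name__ == "__main__":
    t = measure_ramp()
    # If boost ramps slowly, the earliest chunks run slower than the last ones.
    print([round(x * 1000, 2) for x in t])  # per-chunk time in ms
```

On a CPU that ramps effectively instantly, the first and last chunk times should be nearly identical; a visible downward trend across the first few entries is exactly the latency being described.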
 
Last edited:
(It's very fast of course, but the fact I can see it at all means it's too slow.)
The monitoring software is feeding it to you relatively slowly, so that you can see the steps. In reality, it's so fast it's like every step happens at once.
 
Got any specifics behind that claim? First, I can't replace it -- no money. Second, it's a pretty decent board, so it really shouldn't be beyond hope. Third, I don't like "solutions" where the problem isn't actually addressed or fixed, merely traded for another problem or bypassed in a way that doesn't guarantee it won't just happen again. If it is the board, it might be a fixable situation, and I'd like to at least try that before assuming it's hopeless. So if you have a specific reason why the board is apparently so utterly hopeless that it should be tossed in the trash right now, I'd at least like to know what it is, in case we're talking about something fixable even if you're right.

Either that, or send me the money to go along with that careless statement.

The monitoring software is feeding it to you relatively slowly, so that you can see the steps. In reality, it's so fast it's like every step happens at once.
Putting aside that you kind of ignored my fundamental question, and that this isn't very constructive, that's a rather large claim with no apparent basis in reality, and I'd like further explanation. The monitoring software checks the status of the hardware once per interval (once per second for that one, off the top of my head) and has no way of knowing what the hardware did between checks. (If it polled constantly enough to know, it would use enough resources to be a problem in itself.) So to "feed you relatively slowly" it would have to invent numbers on the assumption that the CPU must have stepped through them. That doesn't make sense -- why would anyone write a program to do that? Don't get me wrong: it isn't that the ramp can't be fast, just that whatever governs the frequencies doesn't seem to be configured to be. Notably, this would be consistent with me seeing more hitching with PBO -- which is designed to go through more steps, with more variables, when deciding where to put the CPU at any given moment -- than with the fixed frequency setting, where I'm only fighting power-saving features.
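The sampling point can be shown with a toy simulation (purely illustrative -- these step values are made up and don't reflect Ryzen Master's actual behavior): a ramp that steps through intermediate frequencies over a few milliseconds is invisible to a poller on a 1-second interval, which is also why actually seeing the steps at that refresh rate implies the real ramp spans multiple polls.

```python
def freq_at(t_ms: float) -> int:
    """Toy model: the core steps 2200 -> 3000 -> 3800 -> 4400 MHz
    over the first 6 ms of load, then stays at 4400."""
    steps = [(0, 2200), (2, 3000), (4, 3800), (6, 4400)]
    current = steps[0][1]
    for start, mhz in steps:
        if t_ms >= start:
            current = mhz
    return current

# A 1000 ms polling interval only ever sees the initial and final values...
polled = [freq_at(t) for t in (0, 1000, 2000)]
# ...while a 1 ms interval catches all the intermediate steps.
fine = sorted({freq_at(t) for t in range(0, 10)})
print(polled)  # [2200, 4400, 4400]
print(fine)    # [2200, 3000, 3800, 4400]
```

In other words, a once-per-second monitor cannot "slow down" a millisecond-scale ramp for display; it simply never observes the middle values.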

But regardless, my question was: is there a way to lock it to a completely fixed frequency, given that whatever is doing this bypasses both the motherboard and the OS telling the CPU to run at a fixed speed? Whatever the monitoring software may or may not be doing, the point is to test this and see what happens.
 
Which, as I said, I already did. Something seems to be overriding it.
 
Already did. Again: I am asking what it could be. C-states are disabled. PBO is disabled. Frequency and voltage are manually configured. PSS is disabled. This is why the so-called one-and-a-half-paragraph "wall of text" exists -- I already stated all this.
 
I think I found the solution, and it's (indirectly) connected to the lower-load-means-lower-frequency thing. There's an option in my BIOS called "uncore OC," which is a bit of a misnomer since it doesn't actually overclock anything: it forces the uncore frequency to remain fixed at its base instead of underclocking under light load. Since I enabled it, things are generally smoother, and situations like flying off a planet and back on in games like Empyrion now barely hitch at all. Apparently it was being too slow about ramping the uncore/memory frequency back up in low-load situations.

Still disappointed about the GPU. 3DMark puts it slightly above average, so I suppose that means a 3060 Ti just isn't as capable as reviews imply, because I still find it limiting in things like turning on RTX lighting (and only RTX lighting) in Cyberpunk 2077 for example. I guess that must just be normal for it, given the results.
 