X3D (or is it AMD?) single core torture

kalston

[H]ard|Gawd
Joined
Mar 10, 2011
Messages
1,477
Hello!

I accidentally stumbled upon an interesting X3D torture scenario: a single-threaded game loading screen that pushes temperatures similar to or higher than the CR23 multi-core test (which until now has been the hottest test for my 7800X3D).

What you need: the game Age of Empires 2 DE (I have all DLCs and was testing this in Return of Rome, but I highly doubt it matters) and an X3D CPU (well, any CPU I guess, but nothing special happens on my Intel rigs).

Setup:
-enable and install the free UHD graphics DLC
-host a single player or multiplayer skirmish match with random map, standard settings
-ALL slots filled with a random AI
-extreme difficulty
-random map type with maximum size (Megarandom+ludicrous is perfect for this)


Then press start and watch your temps and fan speeds; it will last a few seconds. I'm running the game off a Gen 4 NVMe drive but I'm pretty sure it doesn't matter one bit.


I don't know how this works, but it ramps my CPU fans up to practically 100% and HWiNFO records CPU temperatures in the 86c range. (Fractal Torrent and PA 120 SE cooler, 5600MT / CL28 RAM with SOC voltage of 1.2)
The fan speed and temperature will of course vary depending on silicon and your preferred curve, but you get the idea. The really crazy part is that wattage is really low, in the 35w range, and if you check Task Manager one core is pegged at 100% but that's it!


I am not crashing or erroring or anything; I'm just baffled by this unique behaviour and would love to hear results from other people. I also really wonder how this is even possible. Is it just the V-Cache getting absolutely trashed in this test or something?!

Thanks!
 
I observed the same thing with Hogwarts Legacy doing the shader precache (which takes decades, millennia, EONS even for some ungodly reason). A single core spiked to 85C and sent all the fans into a frenzy.
 
So? What am I to do about this? This will happen on any CPU - AMD or Intel.

Hogfart legacy can utilize more than one core though (based on what I observed on my X3D). Shader compilation is typically faster on a higher core count CPU (based on my experience downgrading from a 5900X to a 5800X3D). With more cores, shader compilation was usually faster on the 5900X (e.g., COD MW2, Horizon Zero Dawn, etc.)
 

I saw the exact opposite, both the 7800X3D and my 13900K only pegged out one core when doing it and it took like 10 minutes the first time. I only noticed because I opened the control panel thinking the game had crashed.
 
Maybe you are right then. I used to just dick around a bit when Hogfart was loading. But definitely on the other games when I had 5900X it was faster.
 
I observed the same thing with Hogwarts Legacy doing the shader precache (which takes decades, millennia, EONS even for some ungodly reason). A single core spiked to 85C and sent all the fans into a frenzy.
Interesting, thanks. I don't have that game to test it myself. Does shader compilation not use the GPU as well though? So it would dump more heat in the case maybe? But still this behaviour seems crazy, on my 11900k some loading screens were hot and loud, but that was multi-threaded ones like BF V, so it made "sense" to me.
 
Shader compilation is mostly a CPU bound activity based on what I see on the clock speed, temps and core usage perspective.
 
I've noticed some games doing this weird behavior. It's no worse than a Prime95 or Cinebench load; it's just really bad utilization of resources on the programmer's part. I've left Prime95 running for 48 hours on my builds as break-in testing, so a game doing this for a few minutes on startup doesn't really bother me, save for the increased fan speeds.
 
I did not say that this was a problem, or that it bothered me, or that it caused any issue whatsoever. OK, sure, it gets louder than I'd like, but I can find my way around that; it's just a detail and totally subjective.

It is just surprising and I'm trying to understand and wondering how common it is and if other people are experiencing it too.

It's not a type of torture that seems to be part of the standard suite of tests run by reviewers or overclockers, but I feel like it might be interesting to include for X3D CPUs. My best guess is that this type of workload hammers the cache, which is what is generating the heat. But I'm still surprised this can get as hot as a multi-core torture test with so few watts going through the chip. And what would happen if some software managed to hammer both the cores and the cache at the same time? I'd guess massive throttling, but I would be curious to see it in action and test my cooling against it.
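For anyone who wants to poke at the cache-hammering theory without the game, here is a minimal, hypothetical sketch of that kind of workload: a single-threaded pointer chase over a table sized near the 7800X3D's 96 MB L3 (an assumption about where the working set lands), so the serial dependent loads defeat the prefetcher while one core sits at 100%. Python interpreter overhead will blunt the effect compared to the game's native code, so treat this as an illustration of the access pattern, not a calibrated torture test.

```python
from array import array
import random

def build_cycle(n, seed=0):
    """Link n slots into a single random cycle, forming a pointer-chase table."""
    order = list(range(n))
    random.Random(seed).shuffle(order)
    nxt = array("q", [0]) * n          # contiguous 8-byte slots
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b                     # each slot points at the next shuffled slot
    return nxt

def chase(nxt, steps):
    """Follow the chain; every load depends on the previous one, so the
    prefetcher can't help and the core just waits on cache."""
    idx = 0
    for _ in range(steps):
        idx = nxt[idx]
    return idx

# To run the torture: size the table near the 96 MB L3 (assumption) and loop:
#   table = build_cycle(96 * 1024 * 1024 // 8)
#   while True:
#       chase(table, len(table))       # one core pegged; watch temps and fans
```

Since every access is a dependent load, the core does almost no arithmetic, which would be consistent with the low package wattage reported here, if the cache really is what's heating up.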

P95 does throttle clocks a bit but it's nothing drastic compared to what I've seen on Intel chips for example. It's clearly watts/amps/voltage limited (normal for X3D) in this workload; and does not get that hot or loud. CR23 multicore is way hotter than P95, as is this single core load I'm talking about in this thread.

I am sensitive to noise so yes I like to tune my rigs to avoid loud outbursts. To do that it's nice if I can test my cooling setup easily from the get go. Traditionally running P95, Aida64 and Cinebench (throw in Furmark to dump more heat in the case) has been more than enough for me to tune my cooling setup, but I got caught off guard by this instance.
 
The only reason I noticed this random hammering of single cores was that by default the motherboard fan headers were set to ramp up based on CPU temp, so having an initial game load ramp my case intake fans to max surprised me. I've since switched those fans to an ambient sensor, and my AIO cooler is set to respond to whole-package heat, which won't go crazy if one core heats up. It's weird, but I don't think it's that concerning. I've managed to play games with benchmarks running in the background; the scheduler takes care of that, priority goes to what the user is doing, and the background stuff gets shafted.
 
Linpack Xtreme.. 10GB load.. that should get your PPT up there..

If single core boosts are running hot for you, you have to improve your cooling.
 
I've said this since Ryzen 3000 series and was ostracized for even mentioning it here.
Their loss. I'm pretty sure the CPU runs cooler with SMT off too (which is awesome for this specific application). I haven't used AMD in a while, but Intel chips run hotter with HT on, and it also works against you in gaming: not just in frames per second, it also causes microstutter.
 
Linpack Xtreme.. 10GB load.. that should get your PPT up there..

If single core boosts are running hot for you, you have to improve your cooling.
Missed the point. I clearly explained that multi-threaded workloads are not running "hot". I just tried Linpack Xtreme since you mentioned it, and it's not even close to the scenario I mentioned in this thread: I can't even reach 80c with it! With slower fans.

And single thread run is below 60c.
 
Missed the point.
I do smoke a large amount of cannabis; I do miss the point occasionally.

Maybe check your TIM? Some guys say there is no such thing as too much, but there is.

Can't hit 80c? Really? What frequency are the cores running at when it's loaded?

Kind of impressive.
 
X3D CPUs are special and have a lot of limits (frequency, voltage, amps, temperature...) in place to avoid blowing up the fragile cache on them.

But here you can see the difference:
2023-06-03_175013.png

2023-06-03_175151.png


The single core load I'm talking about in this thread looks more like CR23... but with 35w and only one core under load.
 
I gotcha now.

Interesting that it is aiming for 4500MHz with Linpack. That is what I aim for on my 5900X; my 5600X does it at 4650, and my 5800X3D does it at 4250, I think.

I forgot that they took power limits away from you newer X3D users.. just like they did on the old X3D.
 
Here's what happens in that game I mentioned:

Screenshot 2023-06-04 002927.png


Looking at Task Manager, it loaded 2 cores but one after the other, so it still seems totally single-threaded.

Look at those watts and the temps!
 
I've said this since Ryzen 3000 series and was ostracized for even mentioning it here.
I'm curious (and tempted to try) how this interacts with an all-core OC. I don't need 32 threads for gaming on my 5950X, but turning SMT off would not speed up any cores, and Windows scheduling has plenty of cores to choose from with SMT on. I've always read that leaving SMT on is generally preferred even for gaming, and that only a handful of games might benefit from turning it off, and by a small percentage at that.

I'd think leaving it on allows a game to keep all its tasks on a single CCD, thus lowering latency and microstutter if assigned correctly.
 
The cores boost higher if cooling is sufficient for the workload. EDC requirements per core are lower, thus giving each core more of the total power budget. I can only think of a few games that would utilize 32 threads, and none that wouldn't benefit from the added MHz. I run my 7800X3D like this and it never really exceeds 50% utilization in any of the games I play, and it stays at or below 30 watts in most. You can potentially increase the negative offsets in your Curve Optimizer settings as well; it doesn't hurt to give it a try.
 
I'm curious (and tempted to try) how this interacts with an all-core OC. I don't need 32 threads for gaming on my 5950X, but turning SMT off would not speed up any cores, and Windows scheduling has plenty of cores to choose from with SMT on. I've always read that leaving SMT on is generally preferred even for gaming, and that only a handful of games might benefit from turning it off, and by a small percentage at that.

I'd think leaving it on allows a game to keep all its tasks on a single CCD, thus lowering latency and microstutter if assigned correctly.

If you look up actual testing you will see that SMT in gaming performs worse in every way while introducing microstutter. But don't take my word for it; there are tests everywhere, not just the one I posted. I would like to see where you read that it's preferred to leave it on.
 
This is where Intel is better.

AMD has divided computer users into gamer class, and production class.

Intel can do both.
 
Yeah, I'm really happy with the 7800X3D and its performance at very low power consumption. Summer heat is just around the corner and any way I can minimize latent heat in my game room is always welcome. I only hope that GPUs can reduce their heat output to the same level someday, but I think I'll be too old to care by then.
 
If you look up actual testing you will see that SMT in gaming performs worse in every way while introducing microstutter. But don't take my word for it; there are tests everywhere, not just the one I posted. I would like to see where you read that it's preferred to leave it on.



Really depends on the game, but even with a 3900X, 1% lows were 1% faster with SMT on. As for tests, there are some strange ones out there: I saw one claiming that if you game on the same CPU with a 50% heavier 7-Zip workload running in the background, the game runs worse than with a lighter background workload, as if that told us anything about HT on or off.
 
They really haven't. My 7900x plays any game just fine.
My 5900X does too. I personally thought X3D was a waste of money because I like my 5900X better in the end.
Huh? 7950X3D
That is new, like AM5.. so you know it wasn't always like that. Can you run your 7950X3D in Win10 and still have roughly the same performance, or do you need 11's scheduler?

I only ask because I don't know :D
 
I'm just going to settle the debate for myself now and turn SMT off and run a bunch of tests, knowing how it already tests with SMT on... lol. Difference is, I changed to an all-core OC a while back which made a HUGE difference in microstutter, so we will see what this does. Being a 16-core processor, I will not be losing much with it off; the question will be whether I can still keep games on a single CCD just fine.
 
As a follow-up... I turned SMT off on my 5950X, which allowed me to set a new personal record with my 4090. Basically, if you have a Ryzen 5000 series, go with an all-core OC, CPPC off, and SMT off. These CPUs fly that way compared to PBO with SMT on, it seems. All my games so far show marked improvements as well (although not by as much as when I made the jump from PBO to all-core OC). So, for anyone looking to push FPS but also eliminate microstutter, this is the way... I can confirm from personal use. I game at 4K max settings too, for anyone wondering. Tested SOTTR, RDR2, Cyberpunk 2077, FH5, BF2042, COD:MW2, 3DMark, MSFS2020 (did not notice much of a difference here) and Jedi Survivor (which also has its own issues, but it seemed to still run just fine for me).

http://www.3dmark.com/spy/39061839
 
X3D CPUs are special and have a lot of limits (frequency, voltage, amps, temperature...) in place to avoid blowing up the fragile cache on them.

But here you can see the difference:
View attachment 574379
View attachment 574380

The single core load I'm talking about in this thread looks more like CR23... but with 35w and only one core under load.

I'd honestly say you just need better cooling. These may not be high wattage CPUs but they're extremely dense.

1685933629583.png
 
That is beside the point. And your CPU is a different model, with heat probably spread over a bigger surface area (2 CCDs, each with fewer cores than mine).

I can increase my fan speeds further or slap a custom loop on it but the situation I'm talking about in this thread will continue to happen.

The tl;dr of this thread is: how can a single-threaded workload get my CPU as hot as a multithreaded one, with only about 35w going through the chip vs up to 90w? I have no interest in how good your cooling is for multi-threaded workloads, because my cooler is already handling those just fine at 25c ambient (and not at 100% fan speed, so I have headroom if my ears can bear it), and there is more than enough data out there anyway if I had any questions. So far my results are perfectly in line with other people's and reviews, and there is no problem.

If you look at my last screen the cores aren't even hitting 69c so it's something else getting hot in the CPU, presumably the cache but hard to tell from those sensors. That's the puzzle and why this thread exists.

This is a discussion rather than a troubleshooting thread. Unless someone can run the same single thread torture (or an equivalent that I can also run) with much better results and prove that there is a problem on my end.
 

Does Prime95 running on 2 cores reproduce it? I'll run that test as well if so. No need to get defensive.
 
Nah, I haven't managed to reproduce it with anything except that game so far. Since LigTasm mentioned the shader generation in Hogwarts Legacy, I figure Jedi Survivor, which I do own (thanks AMD), may exhibit the same behaviour, but I did not get around to installing it yet. But for science I might do it sooner now.

Limiting threads in Prime95 or CR23 or Linpack or Y-cruncher does not cut it; it runs super cool and quiet. These X3D CPUs are odd beasts and it's not always easy to figure out what's limiting you. For example, it's striking how cool P95 runs vs CR23, while on Intel P95 is hot as hell ("unrealistic", people like to say) but CR23 feels somewhat "real", in line with some heavy applications one might run.
 
As a follow-up... I turned SMT off on my 5950X, which allowed me to set a new personal record with my 4090. Basically, if you have a Ryzen 5000 series, go with an all-core OC, CPPC off, and SMT off. These CPUs fly that way compared to PBO with SMT on, it seems. All my games so far show marked improvements as well (although not by as much as when I made the jump from PBO to all-core OC). So, for anyone looking to push FPS but also eliminate microstutter, this is the way... I can confirm from personal use. I game at 4K max settings too, for anyone wondering. Tested SOTTR, RDR2, Cyberpunk 2077, FH5, BF2042, COD:MW2, 3DMark, MSFS2020 (did not notice much of a difference here) and Jedi Survivor (which also has its own issues, but it seemed to still run just fine for me).

http://www.3dmark.com/spy/39061839
Out of curiosity, does your motherboard have functionality to switch between a manual all-core OC (for many/all-core loads) and PBO2 (for single/few-core loads) depending on load, heat and the like? I know that one of the advanced features of the Asus ROG Crosshair VIII Dark Hero was its ability to do this, but I didn't know if any X570 boards from other manufacturers could do the same. I know Asus has equipped X670E boards (at least all the ROG-named ones) with similar functionality, and I think some other manufacturers had something similar, but it may not have arrived until the X670E era.
 
I don't believe so, no. It's either manual OC or PBO on my board. However, I could OC each core individually if I wanted to, but I am just doing a blanket all-core OC because there is still only one voltage setting that I can find without using PBO and curve offsets. Honestly, I have another thread somewhere about it, but in short, on Ryzen 5000 series PBO is a stuttery mess with lower performance and gimped memory performance. All-core OC on top of everything else I mentioned is 100% the way to go. The performance increase is very measurable and stuttering is non-existent now.
 
So, obvious fun fact... SMT off also decreased my temps by almost 10~15C when stress testing... lol. Now I can probably push this all-core OC past 4.725GHz to 4.8GHz+ maybe... Interestingly, my MT scores (not that they matter much for gaming) were not lowered by as much as I would have guessed. Still got over 25K in CB23 (vs 31K) and the CPU-Z quick test was 10.5K (vs. 13.6K). However, after a few days here I can confirm gaming is even smoother. Why this is not well-known knowledge is beyond me. Honestly, the debate is over in my eyes from my testing.
 
I suppose I can test SMT OFF on my end too, but based on my experience with HT on my past Intel systems I'm skeptical.
 
I think some of the microstutter is caused by the main render thread(s) constantly transitioning between different cores. Fewer threads = fewer transitions, resulting in less microstutter. I think some people are more sensitive to this than others, and newer GPUs demand lower CPU frame times, exacerbating the issue.
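One way to test the thread-migration theory is to pin a process to a fixed set of logical CPUs so its threads can't wander across CCDs. Here is a small, hypothetical helper; the topology it assumes (CCD0 = logical CPUs 0..15 with SMT on, which matches Ryzen's usual enumeration but should be verified on your own system) and the `ccd_cpus` name are mine, not anything from the thread. The pinning call shown is POSIX-only; on Windows you'd apply the equivalent mask via Task Manager, `start /affinity`, or `SetProcessAffinityMask`.

```python
import os

def ccd_cpus(ccd_index, cores_per_ccd=8, smt=True):
    """Return the set of logical-CPU numbers belonging to one CCD,
    assuming logical CPUs are enumerated CCD by CCD (verify against
    your own topology tools before relying on this)."""
    width = cores_per_ccd * (2 if smt else 1)  # SMT doubles logical CPUs per CCD
    start = ccd_index * width
    return set(range(start, start + width))

# POSIX only: pin the current process (pid 0) to CCD0 so its threads
# stop migrating across CCDs. Commented out so importing this is harmless.
# os.sched_setaffinity(0, ccd_cpus(0))
```

Whether this actually reduces microstutter is exactly the open question in this thread, but it at least makes the SMT-on-vs-off comparison controlled.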
 
I think it also helps that I still have 16 threads to work with... not sure I would recommend this on 8-core or smaller CPUs, as when I tested moving everything to a single CCD with SMT off (so 8 cores for me), performance in a few games did go down (likely because they are coded to use more than 8 cores). The difference was not as large as when I moved from PBO to all-core OC with CPPC off, but it was measurable and consistent nonetheless. Pretty sure even at 4K my 4090 is pushing the limits of this CPU... lol
 