New build, Ryzen 3600 getting 95.5c max temps

You are not allowing the chip to breathe with the stock cooler. It can't come close to its performance potential. The 3600 is far, far more powerful than the old Intel you referenced: it has 32MB of L3 cache, and nothing in Intel's desktop lineup comes within a mile of that.

The 3600 should easily pound your old Intel into dust. Either it's your cancerous cooling solution (hah, I know, not yours; AMD's pathetic stock cooler) or your GPU is just outdated and is the bottleneck.

Also your RAM, which I have no idea what you're running, is a MAJOR factor in Zen 2 performance. You get top-tier performance from Zen 2 (all of the chips, even the Threadripper 3000 series) when your RAM is at 3600 MT/s and your Infinity Fabric is set to 1800 MHz. Past that point, the only thing that improves performance further is running more aggressive timings to lower real latency. We have threads floating around all over the forums about this. I also just posted a thread which might help you achieve even faster performance once your cooling is figured out.
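To make the 3600/1800 sweet spot concrete, here is a small sketch (not an official AMD tool; the formulas are the standard DDR relationships, and the numbers are just the ones discussed in this thread):

```python
# Sketch of the Zen 2 1:1 "sweet spot": DDR transfers twice per clock, so
# 3600 MT/s RAM runs a 1800 MHz memory clock, and the best latency comes
# from setting FCLK (Infinity Fabric) to match it 1:1.

def fclk_for_one_to_one(dram_mts: int) -> int:
    """Infinity Fabric clock (MHz) for 1:1 MEMCLK:FCLK coupling."""
    return dram_mts // 2  # DDR = double data rate

def true_latency_ns(dram_mts: int, cas: int) -> float:
    """First-word CAS latency in nanoseconds: 2000 * CL / (MT/s)."""
    return 2000 * cas / dram_mts

print(fclk_for_one_to_one(3600))            # 1800 (MHz)
print(round(true_latency_ns(3600, 16), 2))  # 8.89 ns at CL16
print(round(true_latency_ns(3200, 16), 2))  # 10.0 ns at CL16
```

This is also why tighter timings keep helping after 3600/1800: the latency term shrinks even when the clocks are already at their sweet spot.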

Link: Ryzen DRAM Calculator 1.7.0

Let the forum know if you need help using that app or understanding how to apply the settings. Again, I have no idea of your level of expertise, Verado, so forgive me if I seem to be holding your hand; that's not my intention. I also try to post responses in a way that is informative for anyone to use as a point of reference. Not always, but sometimes.

"Yeah, I know" is the short answer to everything. But OP was slow and I was bored, so I just went out, bought the stuff, and experimented a little myself.
After doing all the shit my wife wanted me to do today, I set all the fans to 100%, set the memory/IF to 3600/1800 with timings of 16-16-16-36, and performance was vastly improved. The CPU still gets hot as hell: it hit 81C and all cores got stuck at 3.9GHz after just ~10 minutes of gaming. The stock cooler is just as crappy as an Intel cooler.
Arma 3 KOTH also has about 60% higher minimum fps, and that's actually a big deal to me; we're talking from ~30 fps to ~50, so I guess I won't be selling this off again.
Nothing left to do but wait for that cooler bracket to arrive.

Update: lots of replies to keep up with, but there was one basic issue, and possibly more still.

Bottom line: I am an idiot. The bracket fell off the back of the mobo during install and I didn't realize it. The cooler was not mounted at all.

Easy fix but a pain. I reseated it, but even then one of the four screws didn't catch until the first three were 100% tight, which is probably another issue. I am not super experienced, but this setup has been a big hassle. I plan to get another cooler since I am still hitting 80 degrees in PUBG. Much better than 95, but still a bit warm for me.

My BIOS says E7C02AMS.350 11/07/2019
VCore 1.462V
DDR Voltage 1.380V

My friend also said the voltage is too high. I have never undervolted and don't particularly like playing around in there, but if this is definitely wrong and needs to be fixed, I will do it. I just don't understand why it happens. Why not just do it right the first time? lol. Never had these issues with my very old CPU.

Yeah, that's an aggressive Vcore. Mine has never passed 1.369V. It's a known thing that Ryzen motherboards vary wildly in the voltage they apply.
 
"Yeah, I know" is the short answer to everything. But OP was slow and I was bored, so I just went out, bought the stuff, and experimented a little myself.
After doing all the shit my wife wanted me to do today, I set all the fans to 100%, set the memory/IF to 3600/1800 with timings of 16-16-16-36, and performance was vastly improved. The CPU still gets hot as hell: it hit 81C and all cores got stuck at 3.9GHz after just ~10 minutes of gaming. The stock cooler is just as crappy as an Intel cooler.
Arma 3 KOTH also has about 60% higher minimum fps, and that's actually a big deal to me; we're talking from ~30 fps to ~50, so I guess I won't be selling this off again.
Nothing left to do but wait for that cooler bracket to arrive.



Yeah, that's an aggressive Vcore. Mine has never passed 1.369V. It's a known thing that Ryzen motherboards vary wildly in the voltage they apply.

Interesting. Seems like a whole other thread is needed for my voltage issues.
 
Yeah I hijacked yours :p
Yeah, I assumed you'd figure out what I meant right after I pressed enter lol... I meant the voltage issue, not you, haha. The more info the better for everyone. A lot of this stuff is beyond my knowledge, but now I'm thinking my RAM sucks and I need a new cooler.
 
Offset Vcore by -0.10V; then Vcore will be around 1.36V and the chip should be performing and boosting the same without all that extra heat. Just adjust the Vcore, nothing else. Your temps should be in the 70s at load instead.
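The arithmetic behind that suggestion, as a quick sketch (the voltages are the ones posted above in this thread; real board behavior varies):

```python
# A negative Vcore offset subtracts from whatever voltage the board would
# otherwise apply. Numbers here are the ones reported earlier in the thread.

def offset_vcore(board_vcore: float, offset: float) -> float:
    """Resulting Vcore after applying a BIOS offset (volts)."""
    return round(board_vcore + offset, 3)

print(offset_vcore(1.462, -0.10))  # 1.362 -- roughly the ~1.36 V mentioned
```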
 
Offset Vcore by -0.10V; then Vcore will be around 1.36V and the chip should be performing and boosting the same without all that extra heat. Just adjust the Vcore, nothing else. Your temps should be in the 70s at load instead.
Worked as stated. After getting my cooler on the right way, I think my max was 87C one time. I only got a couple of quick games in, but after the -0.1V offset the max was 71C.
 
I just built a new ryzen 3600 system.

In a Fractal Meshify C: 2 front intake 140mm Noctua fans, 1 top-mounted 120mm Noctua intake, 1 120mm Noctua exhaust, and a U14S heatsink. I don't see anything north of 65C during gaming. (I used Noctua's NT-H2 paste in an X pattern.) I have noticed it fluctuates A LOT in temps compared to my OCed FX-8320: anywhere between 35C and 55C depending on random loads, then the fans ramp up and it drops. Not really an issue; I think Tjunction max is, what, 95C or something? I know these new 7nm Ryzen chips can run a little hot. I haven't stressed it yet, just some 4-hour gaming sessions.
 
Had a similar issue with my 3970X. Even with a loop using a known-good Bykski waterblock (from my previous 1950X setup), a 3x120mm rad, and a DDS pump, I was shocked to be hitting 95C way too easily. The TRX40 Aorus Master motherboard comes stock with PBO enabled, and it seems PBO on these chips will let the CPU run as fast as the cooling system allows. Turned off PBO and I barely hit 80C. I will be popping a Raystorm sTR3 waterblock on as soon as it gets to me from Australia.
 
Had a similar issue with my 3970X. Even with a loop using a known-good Bykski waterblock (from my previous 1950X setup), a 3x120mm rad, and a DDS pump, I was shocked to be hitting 95C way too easily. The TRX40 Aorus Master motherboard comes stock with PBO enabled, and it seems PBO on these chips will let the CPU run as fast as the cooling system allows. Turned off PBO and I barely hit 80C. I will be popping a Raystorm sTR3 waterblock on as soon as it gets to me from Australia.

Is your GPU on that loop? Also, a 3x120mm rad is too small for an over-300-watt CPU, especially OC'd, if you have a GPU in the loop.

I have 7x120mm in my case and my 2080 Ti at full load is about 45C. My CPU (a 3960X) at full load never surpasses 65C.
 
Is your GPU on that loop? Also, a 3x120mm rad is too small for an over-300-watt CPU, especially OC'd, if you have a GPU in the loop.

I have 7x120mm in my case and my 2080 Ti at full load is about 45C. My CPU (a 3960X) at full load never surpasses 65C.
Just the CPU on the loop. Mid-tower case, so no room for 7x120mm. Rad fans are push only. I do have 2x 200mm fans: one in front, the other on the door.
 
Had a similar issue with my 3970X. Even with a loop using a known-good Bykski waterblock (from my previous 1950X setup), a 3x120mm rad, and a DDS pump, I was shocked to be hitting 95C way too easily. The TRX40 Aorus Master motherboard comes stock with PBO enabled, and it seems PBO on these chips will let the CPU run as fast as the cooling system allows. Turned off PBO and I barely hit 80C. I will be popping a Raystorm sTR3 waterblock on as soon as it gets to me from Australia.
Yeah, I have a Bykski big block on my 3600 and I also hit around 82C while gaming. I have a 360mm + 120mm rad with Corsair HD120 fans on it, and my GPU is in the loop too. In short, these chips run very hot no matter how good your cooling setup is. I've given up tweaking this chip already and am running it stock.
 
Have not; TBH it's kind of scary and I don't want to. But my BIOS says version E7C02AMS.350 11/07/2019, and the box had the Ryzen 3000 sticker on it.



Have not looked on there. New to Ryzen and will need to look into that.
The F50 BIOS is the latest for the B450 Gigabyte boards (Nov 2019).

Might want to try this thread at Reddit. AMD posted about high temps on the 3600. It's not a paste or cooler problem; it's an AMD problem, and they have their heads up their asses. Check out the YouTube video offsetting Vcore by -0.1V and getting the same synthetic scores with lower temps, which AMD says not to do.

Pretty amazing how badly AMD fucked this CPU control up.
 
Pretty amazing how badly AMD fucked this CPU control up.
Are we reading the same thing? That post talks about monitoring tools keeping the cores alive and thus drawing higher voltage, and he says not to under-volt; the -0.1V offset is not the same as under-volting. The high voltage and idle temps are due to the BIOS being aggressive and the monitoring tools keeping the cores active.
 
Or you can try different offsets between 0 and -0.1 and bench something like Cinebench to find a sweet spot. I ended up with -0.0375. It gave a lot lower temps and higher single- and multi-threaded scores.
 
The F50 BIOS is the latest for the B450 Gigabyte boards (Nov 2019).

Might want to try this thread at Reddit. AMD posted about high temps on the 3600. It's not a paste or cooler problem; it's an AMD problem, and they have their heads up their asses. Check out the YouTube video offsetting Vcore by -0.1V and getting the same synthetic scores with lower temps, which AMD says not to do.

Pretty amazing how badly AMD fucked this CPU control up.

Learn to read. It's observer effect and 99% of ppl cannot seem to get that thru their heads.
 
Learn to read. It's observer effect and 99% of ppl cannot seem to get that thru their heads.

Exactly. That doesn't sound like an AMD problem at all. That sounds like a program not designed to work with the AMD boosting algorithms. It even specifically says in the reddit post that the AMD firmware is doing what it is designed to do in light of aggressive monitoring...
 
Are we reading the same thing? That post talks about monitoring tools keeping the cores alive and thus drawing higher voltage, and he says not to under-volt; the -0.1V offset is not the same as under-volting. The high voltage and idle temps are due to the BIOS being aggressive and the monitoring tools keeping the cores active.
That sounds great, but why is this happening on the 3600 and no other CPUs? Some report high temps at idle with NOTHING running, using AMD's own monitoring tool.

Did you read this?

"We have determined that many popular monitoring tools are quite aggressive in how they monitor the behavior of a core. Some of them wake every core in the system for 20ms, and do this as often as every 200ms. From the perspective of the processor firmware, this is interpreted as a workload that's asking for sustained performance from the core(s). The firmware is designed to respond to such a pattern by boosting: higher clocks, higher voltages. "

So, why is the firmware doing that, when it has never been an issue before?
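For what it's worth, the numbers in that quote work out like this (a back-of-envelope sketch, not AMD's actual algorithm):

```python
# The quoted behavior: some monitoring tools wake every core for 20 ms,
# as often as every 200 ms. That keeps each core "busy" 10% of the time
# even at idle, which the boost firmware can read as a sustained workload.

wake_ms, period_ms = 20, 200
duty_cycle = wake_ms / period_ms
print(f"{duty_cycle:.0%}")  # 10%
```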
 
I read that and had the exact opposite conclusion that you do. It's not the CPU's fault. It's the software's fault. In fairness, it probably wasn't written with the Ryzen 3000 series boost in mind.

In the OP's example, he's hitting 95C, but the voltage is also 1.475V. That's not a normal load voltage. From your own reddit post, the OP isn't using CPU-Z, he's using hwinfo which is tripping the boost when it isn't needed and then giving the higher temps and voltage associated with the boost. FWIW, my 3600X under stock conditions hits 1.35V and ~70C under 100% sustained load conditions.
 
That sounds great, but why is this happening on the 3600 and no other CPUs? Some report high temps at idle with NOTHING running, using AMD's own monitoring tool.

Did you read this?

"We have determined that many popular monitoring tools are quite aggressive in how they monitor the behavior of a core. Some of them wake every core in the system for 20ms, and do this as often as every 200ms. From the perspective of the processor firmware, this is interpreted as a workload that's asking for sustained performance from the core(s). The firmware is designed to respond to such a pattern by boosting: higher clocks, higher voltages. "

So, why is the firmware doing that, when it has never been an issue before?
Again, what are you reading?! That post is titled "3rd gen Ryzen" and the only chip mentioned is his example system's 3900X.
Like kirby just posted, the chip is doing what it's supposed to do: react to usage.
 
Or you can try different offsets between 0 and -0.1 and bench something like Cinebench to find a sweet spot. I ended up with -0.0375. It gave a lot lower temps and higher single- and multi-threaded scores.
And this! Someone gets it, here.

Yes, it is about incorrect reporting, but it is also about high temperatures during normal operation.

It's not just about idle temps and voltages being reported incorrectly; it's about high temps under normal stress situations, whereas other 65-watt CPUs never see those kinds of temperatures, and the only way to tame it is to negatively offset 3-series CPUs. That's not a fuck-up?

But look, the idle temps and voltage are going up, and it isn't just "observer effect." AMD actually states this clearly, but tries to make it sound like it really isn't happening:

"From the perspective of the processor firmware, this is interpreted as a workload that's asking for sustained performance from the core(s). The firmware is designed to respond to such a pattern by boosting: higher clocks, higher voltages." "THE FIRMWARE IS DESIGNED TO RESPOND. . . ."

It's never been a problem for any CPU at any time, until now.

"By now, you may know that 3rd Gen Ryzen heralds the return of the Ryzen Balanced power plan (only for 3rd Gen CPUs; everyone else can use the regular ol' Windows plan). This plan specifically enables the 1ms clock selection we've been promoting as a result of CPPC2. This allows the CPU to respond more quickly to workloads, especially bursty workloads, which improves performance for you. In contrast, the default "Balanced" plan that comes with Windows is configured to a 15ms clock selection interval.

Some have noticed that switching to the Windows Balanced plan, instead of the Ryzen Balanced Plan, causes idle voltages to settle. This is because the default Balanced Plan, with 15ms intervals, comparatively instructs the processor to ignore 14 of 15 clock requests relative to the AMD plan.

So, if the monitoring tool is sitting there hammering the cores with boost requests, the default plan is just going to discard most of them. The core frequency and clock will settle to true idle values now and then. But if you run our performance-enhancing plan, the CPU is going to act on every single boost request interpreted from the monitoring tool. Voltages and clock, therefore, will go up. Observer effect in action!" Reddit post
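An illustrative model of the quoted power-plan difference (hypothetical numbers; the 1 ms and 15 ms intervals are from the quote above):

```python
# With a 15 ms clock-selection interval (Windows Balanced plan), roughly
# 14 of every 15 one-millisecond boost "requests" from an aggressive
# monitor are ignored; with the 1 ms Ryzen Balanced plan, all are acted on.

def requests_acted_on(poll_interval_ms: int, selection_interval_ms: int,
                      window_ms: int = 150) -> int:
    """Count polls that land on a clock-selection boundary in a window."""
    polls = range(0, window_ms, poll_interval_ms)
    return sum(1 for t in polls if t % selection_interval_ms == 0)

print(requests_acted_on(1, 1))   # 150 -- Ryzen plan: every request counts
print(requests_acted_on(1, 15))  # 10  -- Windows plan: 1 in 15
```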

So, the voltages and clocks are going up based on the CPU's aggressive boost firmware: the CPU is boosting, using higher voltage, and heating itself up. And it isn't a "boost request"; the firmware just interprets it as one. How much boost do you need to report back to monitoring software?

CPU-Z, he goes on to say, reports the actual idle voltages.

OK, so monitoring software will need to change its game to stop the 3-series firmware from interpreting polling as a boost request. Fair enough. But let me be clear: it is NOT a boost request, or ALL CPUs would be boosting. It's how the AMD 3-series firmware interprets the request.

Now the other problem is high temperatures during normal workloads and stress testing. If this were just an observer effect, there wouldn't be higher temps and voltages. People are reporting 95C in these situations with well-ventilated coolers rated for 65 watts and above, and high temps during normal business-related tasks. Why are they getting these huge temps relative to other CPUs, including Intel's? Why, indeed?

It seems the new 3-series CPUs are using far too much voltage to achieve their new and improved ability to boost. In fact, the AMD spokesperson DEFINITELY says this in the next quote:

"I anticipate that many people are now trying Ryzen processors for the first time (because they're awesome), and may not understand what to expect versus whatever CPU they had previously. You want to know if what you're seeing is "normal," but may not know what "normal" looks like. I get it! I want to assure you that the CPU needs voltages to boost, and voltages of 1.2-1.5V are perfectly ordinary for Ryzen under load conditions (games, apps, whatever). Even at the desktop, Windows background tasks need love too! You'll see the CPU reach boost clocks and voltages, too. But if your voltage is well and truly stuck, that's what I'm trying to troubleshoot. "

LOL, WTF? AMD needs to go into full 1.5V boost to run a background task? Well, that's a first in CPU development: boosting to 1.5 volts to get a background task done. Okay, so how much faster is it relative to undervolting or offsetting? Does it really degrade your performance? In fact, does the AMD boost really give you that edge?

Wait a minute! Here he states that it is a problem:

"EDIT 7/18/19 As a temporary workaround, you can use the standard Windows Balanced plan. Edit this plan to use 85% minimum processor state, 100% maximum processor state. (Example). This will chill things out as we continue to work this issue. "

OK, is it observer effect, or is there really a problem? YES, AMD fucked up and they know it! Whoa, so I'm going to be idling at 85% minimum processor state? Holy shit, can you say "SUCK THAT POWER, AMD"?!

"Please note that it is totally normal for your Ryzen to use voltages in a range of 0.200V - 1.500V -- this is the factory operating range of the CPU. It is also totally normal for the temperature to cycle through 10°C swings as boost comes on and off."

Ahhh, so I can see now why this little 65-watt wonder is really using 85 watts under stress. It's boosting to 1.5V?! Holy Jesus Mother Mary, 1.5V! Yeah, it's a 65-watt CPU, suuuuuure. Only on paper. And that's why you need HUGE cooling to cool this puppy down, because it's realistically not even close to 65 watts. And that's why 65-watt CPU coolers are running this little 65-watt CPU at 95C and thermal throttling. And that's why you can offset .1v to .05v and get normal temp results without degrading performance.
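There is a simple physical reason a small voltage offset matters so much for heat: dynamic CPU power scales roughly with frequency times voltage squared. A rough sketch, using the Vcore values posted earlier in this thread (illustrative, not measured):

```python
# P_dynamic ~ C * f * V^2, so at the same clock, power scales with the
# square of voltage. Dropping from 1.46 V to 1.36 V cuts dynamic power
# by roughly 13%. Numbers are from this thread, not a measurement.

def power_scale(v_new: float, v_old: float) -> float:
    """Relative dynamic power at the same frequency."""
    return (v_new / v_old) ** 2

print(f"{1 - power_scale(1.36, 1.46):.1%}")  # 13.2% less dynamic power
```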

See this YouTube video discussing the AMD Reddit post:
 
Again, what are you reading?! That post is titled "3rd gen Ryzen" and the only chip mentioned is his example system's 3900X.
Like kirby just posted, the chip is doing what it's supposed to do: react to usage.
You need to reread the post and do some research on the 3-series CPUs and high temps. It's all over the internet. The chip is using too much power for no reason at all.
 
TBH, regardless of who to point the finger at, I had a 3600 in an Ophion with a Scythe Big Shuriken on it. Similar temps (40+ idle with mega-spikes to 50+, and 90ish at load). Tried all the low-volt/underclock, PBO-off, and fan-profile mods I could handle for a few weeks, and...

Pretty much gave up, went AIO (240mm), and replaced the Ophion proper with the Ophion Evo (so I could add more cooling).

Idle as well as stressed temps are where I'd expect them: 30s at idle, 60s under load.

Went back to turning PBO on, -0.1V vcore, etc., and it still hovers where I'm comfortable (Intel for years got me antsy about higher idles, etc.).

For complete transparency, I built a 2600 machine (in a Lone L5) simultaneously and it never made me second-guess the cooling (same Scythe Shuriken cooler). That box idles in the 30s and peaks in the low 70s (just like the Intel chips I was used to). Both builds used the same Gigabyte Aorus B450I ITX board.

Additional info... my 3900x (other machine) is under a 2x120 + 3x120 custom loop with zero temp issues.

For me (YMMV), I just had inadequate cooling for my 3600 with the "big" Shuriken, as well as the Wraith Prism and others before it.

Regardless of your specific scenario or prerequisites, IMHO adding cooling capacity will fix it. Right or wrong (I'm not in a place to blame a company and seek a resolution), it soothed my OCD about it and I'm happier with it now.
 
You need to reread the post and do some research on the 3-series CPUs and high temps. It's all over the internet. The chip is using too much power for no reason at all.

I read the post and watched the video. But your rant about power draw completely ignores the recent trend in CPUs. For example, Intel's 9900K is rated for "95W" but in reality pulls something like 200-250W under load. So complaining about an extra 20W under load seems extremely naive. Yes, the stock cooler is inadequate on the 3600. Yes, it isn't exactly the easiest to mount evenly, so it is prone to user error. Yes, even a cheap 120mm tower cooler would likely do wonders for most people who have issues.

A -0.1V offset can help. I don't think anyone is denying that it could. Not every chip is the same and boards are designed for the worst chip possible. So some CPUs might require the full boost voltage and others won't. Given that the boosts are controlled by temperature, using less voltage while boosting can result in better temps and therefore better boosts. But it is very chip dependent, and would require testing on each specific CPU. My 3600 might take an offset better than yours for example. This isn't really any different than the discussion about 5700 video card clocking. You can undervolt the GPU and actually get better performance because of the temps/boost algorithms but once again very GPU dependent. Also, there is a diminishing return after a certain point as GN showed in their video. You can't just offset the voltage to something extremely low and then expect miracles.

As for the Ryzen power plan vs the Windows power plan, use whatever works for you... if you like the Windows one better because you feel the polling rate fits your needs, then have at it. AMD adjusted the 3000-series CPUs to boost differently, polling CPU usage at significantly quicker rates, and the observer effect could be playing havoc with the boosting algorithm. My point was that if you use a different monitoring program that doesn't cause the CPU to boost all the time, maybe you'll have different results. I haven't seen locked boost voltages without something hitting the CPU (even a background task in many cases) on any 3000-series CPU I've used (3900X, 3800X, 3600X, and 3600).

Maybe I'll go back and play around with the Windows balanced power plan to see if the idle voltage and temps improve in my case (Edit: I did a little bit and it does improve voltage and temps to some degree).
 
I read that and had the exact opposite conclusion that you do. It's not the CPU's fault. It's the software's fault. In fairness, it probably wasn't written with the Ryzen 3000 series boost in mind.

In the OP's example, he's hitting 95C, but the voltage is also 1.475V. That's not a normal load voltage. From your own reddit post, the OP isn't using CPU-Z, he's using hwinfo which is tripping the boost when it isn't needed and then giving the higher temps and voltage associated with the boost. FWIW, my 3600X under stock conditions hits 1.35V and ~70C under 100% sustained load conditions.
That sounds about right. Reporting errors aside, people have been reporting a lot worse temps than yours, though. Maybe a combination of BIOS updates and chipset driver updates will solve that problem. It still seems like 1.35V is pretty high. Have you tested your temps running the CPU stress test in Prime95?
 
That sounds about right. Reporting errors aside, people have been reporting a lot worse temps than yours, though. Maybe a combination of BIOS updates and chipset driver updates will solve that problem. It still seems like 1.35V is pretty high. Have you tested your temps running the CPU stress test in Prime95?

Honestly, I haven't run Prime95 in a long time. I'll see if I can dig it up to test with.

What I have noticed is that the core speeds never drop to the low numbers at idle like I'd normally be used to with older Intel CPUs. A lot of the time it sits in the mid-3000MHz range, but at ~1.05V or so with temps in the low 40s. It seems like it's always bouncing around, even under low-intensity workloads (like typing this in Chrome), on the Ryzen balanced plan. On the Windows balanced plan it will drop to the low-2000MHz range at ~0.9V or so, with temps in the mid 30s.
 
For me (YMMV), I just had inadequate cooling for my 3600 with the "big" Shuriken, as well as the Wraith Prism and others before it.

Regardless of your specific scenario or prerequisites, IMHO adding cooling capacity will fix it. Right or wrong (I'm not in a place to blame a company and seek a resolution), it soothed my OCD about it and I'm happier with it now.

Yet the stock AMD cooler will result in thermal throttling at stock BIOS settings under a stress test like P95, because it cannot cool the CPU well enough.

Yes, if you have enough cooling capacity, you can fix almost anything, but why should we have to use a 280mm liquid cooler to keep a 65-watt-TDP CPU from overheating? To me, that is not the technical definition of a 65-watt CPU. And when you compare it to other 65-watt CPUs like the 2600, using the same test setup, the temperatures are never a problem. So why does the 3600, with a TDP of 65 watts, overheat while the 2600 doesn't, all things equal? It's because under normal stress the draw is nowhere near 65 watts, or it wouldn't be generating that much heat. Is it because it's using more voltage than it needs, again generating more heat than necessary?

Many have posted that offsetting voltage completely solved their problem with no degradation in performance.
 
I read the post and watched the video. But your rant about power draw completely ignores the recent trend in CPUs. For example, Intel's 9900K is rated for "95W" but in reality pulls something like 200-250W under load. So complaining about an extra 20W under load seems extremely naive. Yes, the stock cooler is inadequate on the 3600. Yes, it isn't exactly the easiest to mount evenly, so it is prone to user error. Yes, even a cheap 120mm tower cooler would likely do wonders for most people who have issues.

A -0.1V offset can help. I don't think anyone is denying that it could. Not every chip is the same and boards are designed for the worst chip possible. So some CPUs might require the full boost voltage and others won't. Given that the boosts are controlled by temperature, using less voltage while boosting can result in better temps and therefore better boosts. But it is very chip dependent, and would require testing on each specific CPU. My 3600 might take an offset better than yours for example. This isn't really any different than the discussion about 5700 video card clocking. You can undervolt the GPU and actually get better performance because of the temps/boost algorithms but once again very GPU dependent. Also, there is a diminishing return after a certain point as GN showed in their video. You can't just offset the voltage to something extremely low and then expect miracles.

As for the Ryzen power plan vs the Windows power plan, use whatever works for you... if you like the Windows one better because you feel the polling rate fits your needs, then have at it. AMD adjusted the 3000-series CPUs to boost differently, polling CPU usage at significantly quicker rates, and the observer effect could be playing havoc with the boosting algorithm. My point was that if you use a different monitoring program that doesn't cause the CPU to boost all the time, maybe you'll have different results. I haven't seen locked boost voltages without something hitting the CPU (even a background task in many cases) on any 3000-series CPU I've used (3900X, 3800X, 3600X, and 3600).

Maybe I'll go back and play around with the Windows balanced power plan to see if the idle voltage and temps improve in my case.
Fair enough, but then the definition of TDP has no meaning anymore, and CPU power has skyrocketed. It's not naive of me to rant about a 95-watt-TDP CPU using 250 watts! That's a 250-watt CPU. Or, like I said, TDP has no meaning anymore. If that's true, imagine how it affects everything around the CPU, including coolers: you buy a cooler rated for 95 watts, your chip pulls 250, and the cooler fails.

My Core i7 Bloomfield OCed to 3.8 used 1.25-1.35V and ran 78C in any CPU heat test I threw at it, with just an old Cogage 120mm tower cooler. Are we going backwards in time as far as power usage goes?

The bottom line is that people having problems with the 3600's heat are all over the internet, not just "in my mind's eye." I'm only talking about heat issues here.
 
stock AMD cooler will result in thermal throttling ... like P95 because it cannot cool the CPU well enough.
Because P95 is an unrealistic load and will overwhelm the stock cooler; that's normal. In normal usage they run well within spec.
Many have posted that offsetting voltage completely solved their problem with no degradation in performance.
And where do they set that offset? The BIOS. The chip doesn't set the voltage.

all of this seems to be work-arounds for bios and software issues.
 
Fair enough, but then the definition of TDP has no meaning anymore, and CPU power has skyrocketed. It's not naive of me to rant about a 95-watt-TDP CPU using 250 watts! That's a 250-watt CPU. Or, like I said, TDP has no meaning anymore. If that's true, imagine how it affects everything around the CPU, including coolers: you buy a cooler rated for 95 watts, your chip pulls 250, and the cooler fails.

My Core i7 Bloomfield OCed to 3.8 used 1.25-1.35V and ran 78C in any CPU heat test I threw at it, with just an old Cogage 120mm tower cooler. Are we going backwards in time as far as power usage goes?

The bottom line is that people having problems with the 3600's heat are all over the internet, not just "in my mind's eye." I'm only talking about heat issues here.

I didn't mean it as derogatorily as it sounded...sorry.

But yes, TDP is really just an arbitrary number nowadays. There were tests done last year where the 2700X and 9900K actually performed very similarly (even in games) when they were hard-capped at their respective 105W and 95W limits. The difference now is that the "Turbo Boost" from your old Bloomfield is far more complex and takes the CPU's temperature into account instead of just the CPU load.
 
Honestly, I haven't run Prime95 in a long time. I'll see if I can dig it up to test with.

What I have noticed is that the core speeds never drop to the low numbers at idle like I'd normally be used to with older Intel CPUs. A lot of times it is sitting in the mid 3000Mhz range but at ~1.05V or so and temps in the low 40s. It seems like it is always bouncing around even under low intensity workloads (like typing this in Chrome) in the Ryzen balanced plan. In the Windows balanced plan it will drop to the low 2000Mhz range at ~0.9V or so and temps in the mid 30s.
My old Core i7 Bloomfield would idle at 1200MHz, and OCed to 3.8 it idled at about 2200MHz. There's an old post about it here somewhere, but under a different name. Not really relevant, except that AMD can and should do better.

I can see and understand the TDP being lower than a stress-test draw, since everyday computing - business apps, 3D modeling, ray tracing, etc. - won't run the CPU wide open, but may need more power for a spike or to complete an instruction without slowing down. But calling a CPU 65 watts should mean that, overall, it generates the same heat as other 65-watt CPUs, and doesn't use more than 65 watts on average doing it.
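That "average, not peak" reading of TDP is easy to express as arithmetic. The power trace below is entirely made up to illustrate the point, not a measurement from any real chip:

```python
# Made-up per-second power samples for a hypothetical "65 W" CPU:
# mostly light load, with a couple of short boost spikes.
samples_w = [35, 40, 38, 120, 42, 36, 110, 39, 41, 37]

avg = sum(samples_w) / len(samples_w)
peak = max(samples_w)

print(f"average: {avg:.1f} W, peak: {peak} W")  # average: 53.8 W, peak: 120 W

# Under the "average" interpretation this chip honors its 65 W rating
# even though individual spikes far exceed it.
print("within 65 W average:", avg <= 65)
```

Whether a rating defined that way is honest marketing is exactly the argument above: the cooler still has to survive the spikes, and a sustained all-core load looks nothing like this trace.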

Please do run P95 and see what you can cook up (pun intended).
 
Because P95 is an unrealistic load and will overwhelm the stock cooler - that is normal. Under everyday use they are well within spec.
And where is that offset set? The BIOS. The chip doesn't set the voltage.

All of this seems to be work-arounds for BIOS and software issues.
I agree with the P95 comment. But the stock cooler, to me, should keep the CPU from throttling, even if it runs it 1C cooler than the throttle temp.

Agreed, but the AMD person was talking about the CPU FIRMWARE interpreting the polling of software as a request to boost. If so, in my mind, that's a fuck up in the firmware, not the software - that, ironically, performs perfectly on all other series CPUs.

I'm going to disable PBO and other BIOS boost options when I get my rig up (3600) and see how that affects delta temps vs. leaving everything on.

I can't find the link, but there was a performance test done that was eye-opening: PBO and the other boost features of the 3600 don't necessarily mean better performance, while generating a lot more heat.
 
Just as an aside: a couple of days ago, when I said my new rig posted without any errors, I looked quickly through the BIOS to see if anything looked strange, and I remember that sitting in the BIOS the reported temp was 36C. That was with the Noctua NH-L12S cooler, in an open case with the tower on its side and no fans blowing on the motherboard area. Still seemed hot to me; I was expecting 32-33C at BIOS idle.
 
That's not hot, that's normal.
 
So much going on in this thread for simple overheating? Not going to be able to read all of this.

OP, did you get your heat issues hashed out? What have your temps settled at now?
 
And this! Someone gets it, here.

Yes it is about incorrect reporting, but it is also about high temperatures during normal operations.

It's not just about idle temps and voltages being reported incorrectly; it's about high temps under normal stress, where other 65-watt CPUs never see those kinds of temperatures - and the only way to tame it is a negative voltage offset on the 3 series CPUs. That's not a fuck up?

But look, the idle temps and voltage are going up, and it isn't just "observer effect." AMD actually states this clearly, but tries to make it sound like it really isn't happening:

"From the perspective of the processor firmware, this is interpreted as a workload that's asking for sustained performance from the core(s). The firmware is designed to respond to such a pattern by boosting: higher clocks, higher voltages." "THE FIRMWARE IS DESIGNED TO RESPOND. . . ."

It's never been a problem for any CPU at any time, until now.

"By now, you may know that 3rd Gen Ryzen heralds the return of the Ryzen Balanced power plan (only for 3rd Gen CPUs; everyone else can use the regular ol' Windows plan). This plan specifically enables the 1ms clock selection we've been promoting as a result of CPPC2. This allows the CPU to respond more quickly to workloads, especially bursty workloads, which improves performance for you. In contrast, the default "Balanced" plan that comes with Windows is configured to a 15ms clock selection interval.

Some have noticed that switching to the Windows Balanced plan, instead of the Ryzen Balanced Plan, causes idle voltages to settle. This is because the default Balanced Plan, with 15ms intervals, comparatively instructs the processor to ignore 14 of 15 clock requests relative to the AMD plan.

So, if the monitoring tool is sitting there hammering the cores with boost requests, the default plan is just going to discard most of them. The core frequency and clock will settle to true idle values now and then. But if you run our performance-enhancing plan, the CPU is going to act on every single boost request interpreted from the monitoring tool. Voltages and clock, therefore, will go up. Observer effect in action!" Reddit post
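The 1 ms vs. 15 ms difference that quote describes is easy to picture with a toy model. This is purely illustrative counting, not AMD's actual scheduler logic: a monitoring tool firing a boost request every millisecond gets acted on every time under a 1 ms clock-selection interval, but only about once per fifteen requests under the 15 ms one.

```python
# Toy model of clock-selection intervals - NOT AMD's real firmware behavior.
# A monitoring tool generates one boost request per millisecond; the power
# plan only acts on one request per selection interval.

def acted_requests(total_ms, interval_ms):
    """Number of boost requests acted on over total_ms of 1 ms polling."""
    return total_ms // interval_ms

window = 1500  # 1.5 seconds of a monitoring tool polling every 1 ms

ryzen_plan = acted_requests(window, 1)     # 1 ms interval: every request lands
windows_plan = acted_requests(window, 15)  # 15 ms interval: ~1 in 15 lands

print(ryzen_plan, windows_plan)  # 1500 100
print(f"ignored under Windows plan: {ryzen_plan - windows_plan} of {ryzen_plan}")
```

Which is exactly why switching plans makes the reported idle voltage "settle": the requests are still arriving, the plan just discards most of them.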

So the voltages and clocks are going up because of the CPU's aggressive boost firmware: the CPU is boosting, using a higher voltage, and heating itself up. And it isn't actually a "boost request" - the firmware merely interprets it as one. How much boost do you need just to report back to monitoring software?

CPUZ, he goes on to say, reports the actual idle voltages.

OK, so monitoring software will need to change their game in order to stop the 3 series firmware from interpreting that request as a boost request. Fair enough. But let me be clear. It is NOT a boost request, or ALL CPUs would be boosting. It's how AMD 3 series CPU's firmware interprets the request.

Now the other problem is high temperatures during normal workloads and stress testing. If this were just an observer effect, there wouldn't be higher temps and voltages. People are reporting 95C in these situations with well-ventilated coolers rated for 65 watts and above. They are reporting high temps during normal business-related tasks. Why are they getting these huge temps relative to other CPUs, including Intel's? Why, indeed?

It seems the new 3 series CPUs are using far too much voltage to achieve their new and improved boosting. In fact, the AMD spokesperson says as much in this next quote:

"I anticipate that many people are now trying Ryzen processors for the first time (because they're awesome), and may not understand what to expect versus whatever CPU they had previously. You want to know if what you're seeing is "normal," but may not know what "normal" looks like. I get it! I want to assure you that the CPU needs voltages to boost, and voltages of 1.2-1.5V are perfectly ordinary for Ryzen under load conditions (games, apps, whatever). Even at the desktop, Windows background tasks need love too! You'll see the CPU reach boost clocks and voltages, too. But if your voltage is well and truly stuck, that's what I'm trying to troubleshoot. "

LOL - WTF? AMD needs a full 1.5 V boost to run a background task? Well, that's a first in CPU development - boosting to 1.5 volts to get a background task done. Okay, so how much faster is it relative to undervolting or offsetting? Does the offset really degrade your performance? In fact, does the AMD boost really give you that edge?

Wait a minute! Here he states that it is a problem:

"EDIT 7/18/19 As a temporary workaround, you can use the standard Windows Balanced plan. Edit this plan to use 85% minimum processor state, 100% maximum processor state. (Example). This will chill things out as we continue to work this issue. "

OK, is it observer effect, or is there really a problem? YES, AMD fucked up and they know it! Whoa - so I'm going to be idling at an 85% minimum processor state? Holy shit, can you say "SUCK THAT POWER, AMD?!"

"Please note that it is totally normal for your Ryzen to use voltages in a range of 0.200V - 1.500V -- this is the factory operating range of the CPU. It is also totally normal for the temperature to cycle through 10°C swings as boost comes on and off."

Aaahhhh, so I can see now why this little 65-watt wonder is really using 85 watts under stress: it's boosting to 1.5 V?!?! Holy Jesus Mother Mary! 1.5 V! Yeah, it's a 65-watt CPU, suuuuuure - only on paper. And that's why you need HUGE cooling to tame this puppy, because it's realistically not even close to 65 watts. That's why 65-watt CPU coolers are running this little 65-watt CPU at 95C and thermal throttling. And that's why you can offset 0.1 V to 0.5 V and get normal temps - without degrading performance.
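Those voltage numbers matter more than they look, because dynamic CPU power scales roughly with V² × f. A quick sketch of that scaling - the capacitance constant here is an arbitrary illustrative value, not a measured figure for any Ryzen part:

```python
# Dynamic power scales roughly as P ~ C * V^2 * f.
# C is an arbitrary effective-capacitance constant chosen only for illustration.

def dyn_power(c, volts, freq_ghz):
    return c * volts**2 * freq_ghz

C = 10.0  # arbitrary constant; only the *ratio* below is meaningful

base = dyn_power(C, 1.25, 3.9)   # running at 1.25 V
boost = dyn_power(C, 1.50, 4.2)  # boosting at 1.50 V

# Voltage alone accounts for a (1.50/1.25)^2 = 1.44x increase;
# the extra clock multiplies that further.
print(f"relative power: {boost / base:.2f}x")  # relative power: 1.55x
```

Which is why even a modest negative voltage offset pays off so disproportionately in watts and degrees: the square term works in your favor going down, too.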

See this YouTube video discussing the AMD Reddit post:


TDP does not indicate total CPU power draw; it is specifically a rating for cooling the CPU.
 
And under normal use the stock cooler is fine; it won't throttle. P95 is NOT normal use. It is not a problem with the CPU - the CPU is doing what it is supposed to do. The monitoring tools are keeping the cores active; that is the problem. Why are you hobbling your system instead of setting it up properly? You are not building an SFF rig - your case is mATX, so just do it properly.
 
Yeah, we'll just agree to disagree on the software polling the CPU and the CPU thinking it needs to boost to answer. I've already stated my argument against that type of firmware behavior: it's too aggressive. The dude from AMD even said any task will put the 3 series into boost mode, which is not good for power usage over time. That's why he said to use the Windows power plan, not the AMD power plan - for now. And I'll bet they relax their power-plan algorithms, because they don't need full boost to start a service that takes 5 ms. That's ridiculous.

The OP said he was getting 95C while gaming. That's a "normal" type of usage for a PC. Although he did say his case is not airflow-optimized, so there is that.
 

You can agree to disagree except that the video you posted appears to agree with what you are disagreeing with.
 