How many 3900X silicon lottery losers are going to upgrade to the 3950X?

Your explanation sounds like it's from AMD's PR machine. While it's technically true up to a point, it's actually not that hard to test your system to see whether it can achieve the advertised boost clocks. This isn't rocket surgery: all you really need to do is fire up the single-core benchmark in Cinebench, POV-Ray, or any number of other tests and watch the system boost to the advertised frequency, or near it anyway. Unfortunately, this doesn't work in all cases or in all configurations. Many people are having issues, and it's something that can vary from processor to processor and even motherboard to motherboard. In most cases a UEFI BIOS update will resolve the issue, but I've seen situations where it doesn't. And as I said, in cases where it doesn't, the benchmarks back up what I'm seeing in Ryzen Master, which is as close to real-time monitoring as you will get for these CPUs.
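A rough sketch of the kind of check described above, for anyone who'd rather script it than run Cinebench by hand. This is my own hypothetical illustration, not anything from the post: it assumes a Linux box exposing the cpufreq sysfs interface, loads exactly one core, and samples every core's clock to find the peak.

```python
# Hypothetical boost check (assumes Linux with cpufreq sysfs exposed):
# spin one thread and watch how high any single core clocks.
import glob
import threading
import time

def read_core_khz():
    # One file per core; each holds that core's current clock in kHz.
    vals = []
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq"):
        with open(path) as f:
            vals.append(int(f.read().strip()))
    return vals

def peak_mhz(samples):
    # Flatten all per-core kHz samples and return the highest, in MHz.
    return max((khz for sample in samples for khz in sample), default=0) / 1000.0

def boost_test(duration_s=2.0):
    stop = time.time() + duration_s
    def busy():
        # Integer spin; with the GIL this loads roughly one core, which is
        # exactly what a single-core boost test wants.
        x = 0
        while time.time() < stop:
            x += 1
    t = threading.Thread(target=busy)
    t.start()
    samples = []
    while time.time() < stop:
        samples.append(read_core_khz())
        time.sleep(0.1)
    t.join()
    return peak_mhz(samples)

if __name__ == "__main__":
    print(f"Peak single-core clock observed: {boost_test():.0f} MHz")
```

If the peak lands well below the advertised boost across several runs, that's the same symptom Ryzen Master would show.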

You bring up Precision Boost Overdrive, so let's talk about that. It doesn't really do anything on the Ryzen 3000 series. All it does is adjust the PPT, EDC, and TDC values; you can even input your own values, which override the CPU's presets in favor of the motherboard's. Even with PBO+AutoOC, boost clock behavior, on the 3900X at least, doesn't really change. Gamers Nexus found that it essentially didn't work at all. I've only tested it on the 3900X, and so far PBO+AutoOC doesn't do anything; in fact, it often hurts performance. I'm not the only one who has experienced this, either. It doesn't matter how aggressive the algorithm is: PPT, EDC, and TDC values aren't what's holding these chips back from clocking higher. The same algorithm ran on 2nd-generation Threadripper CPUs and actually worked well there. On Ryzen 3000, not so much.

Just to note, the CB20 benchmark actually loads every core present from the time you click start until you see the first block filled in. While this is a short period, it still loads the CPU to 100% instantly, which causes it to drop to its lower all-core rated speed.

As far as adjusting the tables goes, there is some nice benefit to be had; the catch is that you need to adjust them down tighter, not up looser. Results will vary based on your cooling, but there is a ton of great information in the Strictly Technical: Matisse thread over on overclock.net...

Basically, if you have good to great cooling: drop the PPT from 142W down to a window of 124-134W (reduces package power for better efficiency and thermals), drop the TDC from 95A down to 85A (allows higher levels of sustained boost), and bring the EDC down to 110-115A.

In addition, reducing the boost override (say 25-100MHz instead of the max of 200MHz) while increasing the scalar to 10X seems to give a higher overall boost speed, according to a lot of folks.
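To put the suggested tightening in perspective, a tiny arithmetic sketch. The stock values are AMD's published limits for 105W-TDP parts as I understand them (PPT 142W, TDC 95A, EDC 140A); the "tight" values are just the midpoints of the windows suggested above, so treat both as illustrative assumptions.

```python
# Hypothetical illustration: how much each PBO limit gets tightened,
# as a percentage of the stock 105W-TDP defaults.
STOCK = {"PPT_W": 142, "TDC_A": 95, "EDC_A": 140}
TIGHT = {"PPT_W": 130, "TDC_A": 85, "EDC_A": 112}  # midpoints of suggested windows

def headroom_pct(stock, tight):
    # Reduction of each limit relative to stock, in percent.
    return {k: round(100 * (stock[k] - tight[k]) / stock[k], 1) for k in stock}

print(headroom_pct(STOCK, TIGHT))
# -> {'PPT_W': 8.5, 'TDC_A': 10.5, 'EDC_A': 20.0}
```

The point of the tuning is visible in the numbers: the package power cut is modest, while the transient current limit (EDC) drops the most.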

Ultimately, the incredible heat density of the 7nm chiplet is really starting to make itself known. Intel seems to be dealing with this as well on their 10nm WL uarch: they went with a wider core, like AMD did with Zen 2, and they are seeing a clock speed ceiling...

Now, I realize these are TDP-constrained SKUs, and we will need to see what a 65-95W desktop TDP allows, but the general trend is going to be lower clock speeds on these newer nodes.

In the future, the enthusiast will be better off going with an upper-midrange board and great cooling (high-end 360mm AIO to custom loop) rather than a lower-end AIO/HSF and a "high end" board. This is just my guess, but I have a feeling we are going to see that trend continue if core counts increase again next year or the year after.
 
I'm looking forward to seeing some heatpipe-infused water blocks. Heatpipes are pretty much the only thing currently manufactured that can move heat across the few mm needed measurably better than solid copper. We'd just need much smaller ones than you find on air coolers currently, and they'd have to make direct CPU contact, preferably right over the cores. Sounds very manual-manufacturing-centric.

Just some numbers to back up the idea that it could make a difference even over such a small distance of copper to water: a heatpipe has roughly 90 times the thermal conductivity of solid copper. That would probably make up for the 40% smaller die size, if they can be fabricated at the right quality for the size needed.

$200 waterblocks, here we come.
 
You're contradicting yourself. Is it the motherboard holding back the CPU, or the CPU itself? You go on at length about how the CPU couldn't hit the frequencies it theoretically could on one motherboard, but switch to a different one and it did. The CPU chiplets didn't magically get better. That implies the VRM, and the settings related to it, play a significant part in how far the CPU can overclock itself.

I'm not trying to imply anything. I'm simply stating that some people may not be seeing the advertised boost clocks in single-threaded applications in some configurations. I think that comes down to firmware, not necessarily settings or VRM quality. Those could be factors in some cases, but I don't think they are here, and neither does ASUS. I'm sure the Crosshair VIII Hero can achieve the advertised boost clocks on a 3900X. The interesting thing is that on the GODLIKE board, single-threaded tasks were always assigned to the cores Ryzen Master marks with a gold star, which are the best cores in the CPU. On the ASUS Crosshair VIII Hero, the single-threaded task always goes to core 0 no matter what; it's never assigned to the better cores. With a 3700X, I see the behavior I expected: the single-threaded tasks are assigned to the better cores, and it boosts correctly.

The rest of what you stated just backs up the fact that AMD isn't misleading people about what the CPUs can do. People are trying to do more than what's advertised, or totally skipping the reading part and assuming things that aren't being stated. It was never in question, even before these CPUs went to market, that peak frequencies above the 4.2-4.3GHz range would be limited to a single core. Feeling gypped by that is kind of stupid.

The silicon lottery is real, don't get me wrong. I think the motherboard variables play a bigger role than that, though. Even on high-end boards, VRMs are tuned pretty conservatively. Upping the VRM switching frequency and giving it higher current limits may let your CPU spread its wings on the boards where you're not seeing it hit the mark.

I agree with you in that I don't think AMD is misleading anyone. I think it's what people know from previous Intel processors that has them expecting otherwise. AMD never says that all its cores are capable of achieving the maximum boost clock, or even that those frequencies are necessarily guaranteed. In contrast, Intel says all the cores on a 9900K can hit 5.0GHz, but it says nothing about achieving that as an all-core overclock. Because we can, people generally seem to assume this is how it should be with AMD. Intel also never says it will boost on more than one core at a time. I'm not sure why so many seem to misunderstand the situation with AMD; if they do, I think it's because of preconceived notions based on what Intel processors have been doing for years.

Personally, as long as a single core can hit the advertised boost clocks in actual single-threaded applications or benchmarks as advertised, I think AMD is covered. The boost clocks advertised were always about single-core clocks since the dawn of Intel's Turbo Boost. That's all AMD has guaranteed. I think cases like the one I'm experiencing are actually influenced by firmware or even Windows scheduler issues. There are certainly issues where that's concerned as evidenced by what reviewers saw in some game testing.

Just to note, the CB20 benchmark actually loads every core present from the time you click start until you see the first block filled in. While this is a short period, it still loads the CPU to 100% instantly, which causes it to drop to its lower all-core rated speed.

Fair enough, but after the test gets going, what you normally see is a single core boost to its maximum clock speed and then fluctuate up and down from that point. It won't necessarily maintain 4.6GHz through the duration of the test, but it will hit it repeatedly and sustain it for a time. I've also been able to test boost clock behavior using POV-Ray's single-thread test and other software applications as well.
 
I've seen two extremes on the AMD boost clock issue.

I've seen some folks saying AMD is horrible, there needs to be legal action and huge fines. Basically end-of-the-world type stuff. And I've seen those who are completely excusing it, coming up with various excuses for why AMD is perfectly fine doing this.

The truth is somewhere in between, IMHO. der8auer managed to get 4.575 to show up (stock) on his 3900X after a BIOS update. A lot of other reviewers I've seen were able to do likewise (4.55 - 4.575). 4.6 doesn't seem to happen, except for very specific code loops (somebody in the AMD subreddit demonstrated 4.6 with a NOP loop). Some people, as Dan has said, are having trouble with specific motherboards/configurations even with the BIOS updates, and are way lower.

My $.02 is that this is mainly a BIOS/teething-issue problem, and that it will get better with mature BIOS and AGESA revisions. More people will see the 4.55-4.575 clocks out of a 3900X (and correspondingly near-correct boost clocks on other CPUs). They might even get 4.6 to work outside of useless special code. But AMD should be held accountable for the release problems: for this, and for the RDRAND bug (which was also fixed recently, as I understand it). They don't get a pass. It's not the end of the world, and they weren't blatantly dishonest, but they stretched things a little further than they should have, and they released the CPUs into the wild with poorly tested BIOSes. It's a problem of their own making. IMHO, AMD would have been better off claiming ~50MHz less on the max boost and spending another month or so ironing out BIOS bugs. But it's not a big deal, either.
 
I'm not trying to imply anything. I'm simply stating that some people may not be seeing the advertised boost clocks in single-threaded applications in some configurations. I think that comes down to firmware, not necessarily settings or VRM quality. Those could be factors in some cases, but I don't think they are here, and neither does ASUS. I'm sure the Crosshair VIII Hero can achieve the advertised boost clocks on a 3900X. The interesting thing is that on the GODLIKE board, single-threaded tasks were always assigned to the cores Ryzen Master marks with a gold star, which are the best cores in the CPU. On the ASUS Crosshair VIII Hero, the single-threaded task always goes to core 0 no matter what; it's never assigned to the better cores. With a 3700X, I see the behavior I expected: the single-threaded tasks are assigned to the better cores, and it boosts correctly.

I'd be very surprised to see any correlation between which cores get assigned a task and the motherboard. The operating system dictates which cores a task gets scheduled on, and the motherboard really doesn't play any part in that decision at all. While there is some abstraction in how the cores are labeled, since we can turn SMT on and off and affect that, there isn't any real-time abstraction mapping the core the OS sees to the core that gets used at the hardware level, which is what would have to exist for the behavior you're suggesting to be possible.

At least, that's something I've never heard discussed, and it would certainly be something that gets talked about on kernel mailing lists, since it would impact CPU schedulers and the ability to pin processes to specific cores.
 
I'd be very surprised to see any correlation between which cores get assigned a task and the motherboard. The operating system dictates which cores a task gets scheduled on, and the motherboard really doesn't play any part in that decision at all. While there is some abstraction in how the cores are labeled, since we can turn SMT on and off and affect that, there isn't any real-time abstraction mapping the core the OS sees to the core that gets used at the hardware level, which is what would have to exist for the behavior you're suggesting to be possible.

At least, that's something I've never heard discussed, and it would certainly be something that gets talked about on kernel mailing lists, since it would impact CPU schedulers and the ability to pin processes to specific cores.

That's my point and why I don't think the motherboard is the factor in the one case I've seen on the test bench. I think that's either an AGESA code issue, or a problem with the Windows scheduler specifically.
 
The way I look at overclocking on these new Ryzen chips is that the time you'll spend trying to squeeze out an all-core overclock (with the side effect of losing the higher single-core boost) is simply not worth it. You will never get that time back. Your best bet is to let the chip do its thing: leave PBO and Auto OC off, and leave the defaults alone in the BIOS. Just overclock your RAM and be done with it. While you're at it, turn off ALL CPU monitoring software besides Ryzen Master. Hell, I even turn off MSI Afterburner's CPU reporting. Just enjoy the CPU for what it is: an amazing performer.

I personally am very happy with my 3900X. I've seen it boost to 4575MHz on a single core; sure, not 4600MHz, but I'm not losing any sleep over it. This 3900X was a massive upgrade over my 2600X. I really didn't expect it to be such a big jump; minimum and average fps in games have skyrocketed for me.
 
The way I look at overclocking on these new Ryzen chips is that the time you'll spend trying to squeeze out an all-core overclock (with the side effect of losing the higher single-core boost) is simply not worth it. You will never get that time back. Your best bet is to let the chip do its thing: leave PBO and Auto OC off, and leave the defaults alone in the BIOS. Just overclock your RAM and be done with it. While you're at it, turn off ALL CPU monitoring software besides Ryzen Master. Hell, I even turn off MSI Afterburner's CPU reporting. Just enjoy the CPU for what it is: an amazing performer.

I personally am very happy with my 3900X. I've seen it boost to 4575MHz on a single core; sure, not 4600MHz, but I'm not losing any sleep over it. This 3900X was a massive upgrade over my 2600X. I really didn't expect it to be such a big jump; minimum and average fps in games have skyrocketed for me.

They are a much bigger performance increase than a lot of people realize at first, since people mainly look at peak/average FPS and not the huge improvement in minimum frame times.


I went from a 4.3GHz all-core 2700 with 3200c14 RAM back to my trusty ole [email protected]/3200c16 (it didn't have the best IMC, but it clocked like crazy), and the jump to my 3700X with 3600c14 across the board is insane. The smoothness factor went up a ton, even with an adaptive sync LCD.

If I had never owned the 3700X, I would have looked quickly at the numbers and figured the better frame times were nice, but I probably would not have noticed a difference.

The fact that the system is so smooth is what got me looking at frame times in the first place. I initially thought it was due to replacing my VII with a 5700 XT 50th Anniversary Edition, but the average FPS were right in the same ballpark. Digging further had me comparing the frame times.


I really wish my 2700 could have done at least 3533c14 across the board, even if it were only stable for a few games. I would really love to compare GTA V with the huge Natural Vision mod that was released earlier this year. I really feel like my 3700X shines in that game, but it would be nice to compare the 2700, with its similar clock speed and the faster RAM, against the results I had at the 3200c14 RAM speed.
 
Curious how many silicon lottery losers like myself are going to roll the dice on a 3950X, hoping for better luck and an additional 4 cores, when they are released? I was hoping the next BIOS update would fix some of my issues with boost and exceptionally high idle voltages, but everything appears identical. I'm lucky to hit 4.3GHz on my 3900X, and very lucky to hit 4.4GHz for about half a second. My plan is to see what the general consensus is with the 3950X; if I see a good number of users having better luck and getting closer to advertised boost clocks with it, I'll likely upgrade.

Thoughts?

Loser, how so? Your board might not have the necessary power delivery. What about AGESA? What about BIOS version? There are so many variables involved. Cooling?

I hit 4550 peak on my chip. I'd call that a winner!
 
Loser, how so? Your board might not have the necessary power delivery. What about AGESA? What about BIOS version? There are so many variables involved. Cooling?

I hit 4550 peak on my chip. I'd call that a winner!

Running an ASUS ROG Strix X570-E. PLENTY of available power delivery, and I'm on the latest BIOS (1005), which includes the latest AGESA. 280mm AIO for cooling.

4550MHz is definitely better than what mine does, but a winner? Would we have called it a winner a month ago when these were released? Let's face it, the only reason 4550 seems impressive now is that we've learned AMD was quite liberal with its boost estimates and very few chips are hitting 4.6GHz.
 
Keep in mind that the base clock on these motherboards tends to run just shy of 100MHz. Do the math on that and it adds up; you may fall a little short of the boost clock as a result. This happens on Intel CPUs too.
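The math here is simple enough to sketch. The 99.80MHz base clock below is an example value (one reported elsewhere in this thread), not a universal figure:

```python
# Effect of a base clock (BCLK) a hair under 100MHz at a high multiplier.
def effective_mhz(bclk_mhz, multiplier):
    return bclk_mhz * multiplier

print(effective_mhz(99.80, 46))   # ~4590.8 MHz, not the nominal 4600
print(effective_mhz(100.00, 46))  # 4600.0
```

So even with a "4.6GHz" 46x multiplier bin, a 99.8MHz BCLK leaves you about 9MHz short on paper before any boost behavior is involved.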
 
Keep in mind that the base clock on these motherboards tends to run just shy of 100MHz. Do the math on that and it adds up; you may fall a little short of the boost clock as a result. This happens on Intel CPUs too.

Just out of curiosity, why do they run the clock just shy of 100? I always thought that was kind of odd.
 
Yeah, mine is at 99.80. I'd be pretty happy if I got 46x99.80, but my multiplier typically tops out at 44, and not for very long either.
 
I did some tests with PBO enabled (all PBO settings set to auto) and with PBO disabled (PB still enabled, though).

With auto settings on PBO, it does appear to slightly reduce performance and clock rates on my motherboard.

https://openbenchmarking.org/result/1908208-HV-INDIVIDUA24

In Linux, cores 0-11 are the "first" thread of each real core; cores 12-23 are the second threads of the first 12, in the same order.

I started naming them incorrectly, so taskset 01 is actually core 0 and 02 is actually core 1... then I corrected them. I kept the same naming for the non-PBO runs. I didn't feel the need to run all the non-PBO tests, as the pattern was clear.

The cores that hit 4.5GHz stay there for the duration they're active; the cores that don't seem to hover between 4.3 and 4.4. These are short-term single-thread tests I'm forcing onto specific cores. I can't say for sure whether what I'm measuring in Linux is an average between samples or an instantaneous snapshot of frequency. Let's assume I never do touch 4.6GHz, though. I have to assume it's either the silicon lottery or my 0.1V undervolt not jiving with the Precision Boost algorithm. Either way, I'm still well on par for performance, and I'm not giving up the 10% drop in temps for no gain. I ran tests before and after my undervolt, and actual performance has consistently been better after.
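For anyone who wants to repeat this kind of per-core sweep without juggling taskset numbering by hand, here's a rough stdlib-only sketch. It's my own hypothetical illustration, not the poster's script: Linux-only, and it assumes the kernel exposes the cpufreq sysfs files.

```python
# Hypothetical per-core boost sweep: pin this process to one logical CPU
# (the stdlib equivalent of `taskset -c N`), load it briefly, then read
# that core's clock back from sysfs. Linux-only.
import os
import time

def khz_to_mhz(khz):
    # sysfs reports clocks in kHz.
    return int(khz) / 1000.0

def core_clock_mhz(cpu):
    # Returns None when the kernel doesn't expose cpufreq for this core.
    path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_cur_freq"
    try:
        with open(path) as f:
            return khz_to_mhz(f.read().strip())
    except OSError:
        return None

def pin_and_spin(cpu, seconds=0.2):
    os.sched_setaffinity(0, {cpu})  # restrict this process to one core
    stop = time.time() + seconds
    x = 0
    while time.time() < stop:
        x += 1                       # simple integer spin load
    return core_clock_mhz(cpu)

if __name__ == "__main__":
    # Materialize the full CPU set first, since pinning shrinks affinity.
    for cpu in sorted(os.sched_getaffinity(0)):
        print(cpu, pin_and_spin(cpu))
```

The cores that print near the advertised boost are the "magic" ones; the rest will sit noticeably lower, matching the behavior described above.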
 
It can, and you can try, but PBO is virtually useless on the 3000 series in my experience so far, at least where the 3900X is concerned. I think some people have reported better results on other Ryzen 3000 series CPUs.


Some brief testing of my 3700X on an X570 Steel Legend showed it maintaining a higher all-core boost with manual PBO settings than with auto or off; auto and off seemed the same. With some manual settings applied it held closer to 4.2 all-core in Cinebench, and ran about 5C hotter. At defaults it dipped to 4.1 all-core as temps started to climb.
 
I did some tests with PBO enabled (all PBO settings set to auto) and with PBO disabled (PB still enabled, though).

With auto settings on PBO, it does appear to slightly reduce performance and clock rates on my motherboard.

https://openbenchmarking.org/result/1908208-HV-INDIVIDUA24

In Linux, cores 0-11 are the "first" thread of each real core; cores 12-23 are the second threads of the first 12, in the same order.

I started naming them incorrectly, so taskset 01 is actually core 0 and 02 is actually core 1... then I corrected them. I kept the same naming for the non-PBO runs. I didn't feel the need to run all the non-PBO tests, as the pattern was clear.

The cores that hit 4.5GHz stay there for the duration they're active; the cores that don't seem to hover between 4.3 and 4.4. These are short-term single-thread tests I'm forcing onto specific cores. I can't say for sure whether what I'm measuring in Linux is an average between samples or an instantaneous snapshot of frequency. Let's assume I never do touch 4.6GHz, though. I have to assume it's either the silicon lottery or my 0.1V undervolt not jiving with the Precision Boost algorithm. Either way, I'm still well on par for performance, and I'm not giving up the 10% drop in temps for no gain. I ran tests before and after my undervolt, and actual performance has consistently been better after.
That's beautiful and exactly what I hoped could be achieved with these. Nice work man!

The solution is less volts and more cooling, perhaps. I'm keen to play with Zen 2 and some different cooling solutions; I have a sacrificial board to test with, too...
 
It can, and you can try, but PBO is virtually useless on the 3000 series in my experience so far, at least where the 3900X is concerned. I think some people have reported better results on other Ryzen 3000 series CPUs.
I don't think it does anything useful on my 3700X. PBO just lifts the current and power limits, but the CPU is miles away from hitting those limits anyway unless I'm doing all-core OCing and start adding voltage.
 
I've run all-core benchmarks against my previous runs from before I disabled PBO (PBO was enabled and the individual settings were set to auto).

It made absolutely no discernible difference in the frequencies achieved across all cores.

About the only thing I'm noticing between those two settings in benchmarks is that PBO enabled with subsettings on auto means slightly higher temps and slightly lower performance.

I have not tested with PBO enabled and manual settings.

My magic cores hit 4.5GHz with PBO and without, and the rest fluctuate between 4.1 and 4.3GHz all-core depending on how they feel (it's not temperature related).

There's no difference in how long they stay at those frequencies.

It really looks like PBO is only worth enabling if you are setting the limits manually. The stock auto settings are functionally the same as the CPU's factory behavior with Precision Boost 2 enabled.
 
I've run all-core benchmarks against my previous runs from before I disabled PBO (PBO was enabled and the individual settings were set to auto).

It made absolutely no discernible difference in the frequencies achieved across all cores.

About the only thing I'm noticing between those two settings in benchmarks is that PBO enabled with subsettings on auto means slightly higher temps and slightly lower performance.

I have not tested with PBO enabled and manual settings.

My magic cores hit 4.5GHz with PBO and without, and the rest fluctuate between 4.1 and 4.3GHz all-core depending on how they feel (it's not temperature related).

There's no difference in how long they stay at those frequencies.

It really looks like PBO is only worth enabling if you are setting the limits manually. The stock auto settings are functionally the same as the CPU's factory behavior with Precision Boost 2 enabled.

I had a similar experience. Also, with PBO enabled on the GODLIKE board, the CPU boosts lower.
 
The way I look at overclocking on these new Ryzen chips is that the time you'll spend trying to squeeze out an all-core overclock (with the side effect of losing the higher single-core boost) is simply not worth it. You will never get that time back. Your best bet is to let the chip do its thing: leave PBO and Auto OC off, and leave the defaults alone in the BIOS. Just overclock your RAM and be done with it. While you're at it, turn off ALL CPU monitoring software besides Ryzen Master. Hell, I even turn off MSI Afterburner's CPU reporting. Just enjoy the CPU for what it is: an amazing performer.

In other words, bury your head in the sand! :ROFLMAO:

I personally am very happy with my 3900X. I've seen it boost to 4575MHz on a single core; sure, not 4600MHz, but I'm not losing any sleep over it. This 3900X was a massive upgrade over my 2600X. I really didn't expect it to be such a big jump; minimum and average fps in games have skyrocketed for me.

Wait, I thought you turned off monitoring? :confused:

You're openly lying to yourself and are so totally losing sleep over not being able to overclock! :LOL:
 
I think we all have a pretty good idea what these chips are about.

If you choose a 3900X over a 9900K, you're getting 8700K-level gaming performance and somewhat better than 9920X workstation performance for roundabout the same money as a 9900K.

If you choose a 3700X or 3800X over a 9900K, you're getting ~95% of the 9900K's performance for $100-$175 less, depending on the model.

All-core OC isn't really worth much, but paying attention to AGESA version, memory speeds, PBO/Auto-OC, and/or individual CCX overclocking may yield some fruit instead.

That's really about it!
 
I still think the issue with boost clocks is a problem with BIOSes and microcode, and all of these convoluted explanations will be moot once better BIOSes are released.
 
Gaming performance really depends on the game, it seems. Various gaming benchmarks show the 3900X and similarly clocked Zen 2 parts exceeding the 9900K, though others, notably Tomb Raider, show it losing by about 10%.

The 3700X is definitely not a bad choice for gamers, considering you'll be getting better overall performance, equal gaming performance, and roughly the same wattage as a 9900K, for 100 bucks less.
 
Gaming performance really depends on the game, it seems. Various gaming benchmarks show the 3900X and similarly clocked Zen 2 parts exceeding the 9900K, though others, notably Tomb Raider, show it losing by about 10%.

The 3700X is definitely not a bad choice for gamers, considering you'll be getting better overall performance, equal gaming performance, and roughly the same wattage as a 9900K, for 100 bucks less.

The 3900X wins some individual game benches, but the overall average is roundabout -6% vs the 9900k. In 1080p. With a high-end GPU. It's roundabout equal to the 8700k.

The 3800X offers more or less identical gaming performance to the 3900X. The 3700X is a little lower, but slap on PBO and it's basically the same as the 3800X.
 
I still think the issue with boost clocks is a problem with BIOSes and microcode, and all of these convoluted explanations will be moot once better BIOSes are released.

BIOS, yes. Microcode... maybe not. Some reviewers have done tests with the exact same CPU on multiple motherboards. Even with the same AGESA revision, some motherboards hit the boost clocks - a few even exceed them by 25-50MHz - and some aren't even close. So it may be something the motherboard vendors are doing in the BIOS, and not the AGESA revision itself.

I think it's been pretty well established, though, that the CPU *WILL* hit the boost clocks, provided the motherboard and BIOS support it. So it doesn't look like AMD is directly responsible for the problem. But perhaps they are indirectly responsible - maybe they did not give the motherboard vendors enough time to properly integrate the AGESA versions, or pushed the launch too quickly, resulting in these inconsistencies. It's a very AMD thing to do, really.

See below, tested with a 3800X:

https://www.guru3d.com/news-story/r...boost-clocks-with-different-motherboards.html
 
BIOS, yes. Microcode... maybe not. Some reviewers have done tests with the exact same CPU on multiple motherboards. Even with the same AGESA revision, some motherboards hit the boost clocks - a few even exceed them by 25-50MHz - and some aren't even close. So it may be something the motherboard vendors are doing in the BIOS, and not the AGESA revision itself.

I think it's been pretty well established, though, that the CPU *WILL* hit the boost clocks, provided the motherboard and BIOS support it. So it doesn't look like AMD is directly responsible for the problem. But perhaps they are indirectly responsible - maybe they did not give the motherboard vendors enough time to properly integrate the AGESA versions, or pushed the launch too quickly, resulting in these inconsistencies. It's a very AMD thing to do, really.

See below, tested with a 3800X:

https://www.guru3d.com/news-story/r...boost-clocks-with-different-motherboards.html

This is correct. Using the same Ryzen 9 3900X on two different motherboards, it boosts correctly on one and not the other: it works on the MSI MEG X570 GODLIKE and doesn't on the ASUS Crosshair VIII Hero. However, the 3700X does work properly on the ASUS board. I spoke to ASUS several times about this, and even they don't know why my 3900X doesn't boost on that motherboard. ASUS tells me that everything governing the clocks is in the AGESA code, which they don't touch, and that all motherboards should boost the same using PB2.
 
Interesting and relevant to this post: Steve at Hardware Unboxed just released a video in which he tested the same CPU (a 3800X in this case) and RAM in several different motherboards to try to see why people are reporting such hugely varied boost clock results.

tl;dw: every one of the 12 boards he tested had very different boost behavior, with the top board hitting the top boost clock on 4 cores, a couple hitting max boost on 2 cores, a couple more hitting it on 1 core, and most NEVER hitting it. Mind you, the only difference across every one of these cases is the motherboard; all other hardware remained the same. Also note that the vast majority of these boards are X570.

There is definitely some wonky stuff here the mainboard manufacturers still need to work out.
 
I don't watch video reviews... make a webpage, people :) ...

Precision Boost is a two-way function, meaning it requires the BIOS to tell it what the board's limits and capabilities are in relation to its specific VRM and cooling details. I'm sure a bunch of motherboard manufacturers are either being lazy and not mapping that data accurately, or purposely positioning some of their boards to perform better than others to upsell flagship/halo products.

Accurate PBO settings should be a way to somewhat work around those things, but that requires setting PBO manually, and incorrect settings can easily yield worse performance, more heat, or even hardware damage. They know most people will stick to auto/off and buy the motherboard that works out of the box.


So basically, the thread about the future of overclocking has moved to motherboards instead of CPUs; where, potentially, motherboard manufacturers are purposely gimping CPUs' ability to auto-overclock so that otherwise identical boards can be sold at different prices.

maybe :)
 
There’s absolutely zero reason for a board manufacturer to gimp overclocking intentionally, particularly on their higher end boards. I’m not sure how you even came to this conclusion.
 
There’s absolutely zero reason for a board manufacturer to gimp overclocking intentionally, particularly on their higher end boards. I’m not sure how you even came to this conclusion.

There's zero reason to artificially limit overclocking on, say, the Crosshair VIII Hero ($359) vs. the Crosshair VIII Formula ($659)? You must REALLY REALLY BELIEVE an integrated WB, an Aquantia 5G NIC, and an OLED screen on the I/O shield are all worth $300.
 
There's zero reason to artificially limit overclocking on, say, the Crosshair VIII Hero ($359) vs. the Crosshair VIII Formula ($659)? You must REALLY REALLY BELIEVE an integrated WB, an Aquantia 5G NIC, and an OLED screen on the I/O shield are all worth $300.

Only if you really believe the general public is too stupid to realize there are other board manufacturers that aren't doing that, if that were the case. And why would that only happen on the AMD platform? This conspiracy theory makes zero sense any way you try to tweak it.
 
There’s absolutely zero reason for a board manufacturer to gimp overclocking intentionally, particularly on their higher end boards. I’m not sure how you even came to this conclusion.


there's plenty of reason to gimp older boards.

There's plenty of reason to gimp lower-priced boards.

But the simplest conclusion is probably the correct one, which I also mentioned: that they just aren't accurately putting in the right values for the particular board and are instead copying and pasting from other motherboards in their own lineup.
 
there's plenty of reason to gimp older boards.

There's plenty of reason to gimp lower-priced boards.

But the simplest conclusion is probably the correct one, which I also mentioned: that they just aren't accurately putting in the right values for the particular board and are instead copying and pasting from other motherboards in their own lineup.

We aren’t talking about older or cheap boards
 
Only if you really believe the general public is too stupid to realize there’s other board manufacturers that aren’t doing that, if that were the case.


if people can set PBO manually (and bump other current-related limits up) with sane temps ...then the evidence would point to incompetence
We aren’t talking about older or cheap boards


So were all 12 boards the same price? Or was there no correlation between the boards that performed best and their price? If there's no correlation at all, then it's unlikely it's intentional.

if we find a lot of people can set PBO manually (and bump other current-related limits up) with sane temps, then the evidence would point to incompetence, assuming the above proves to have no correlation. Incompetence was and is my first guess. It's not like companies spend a lot of effort to get BIOSes dialed in; it's been test-in-production for them for over a decade.
 
if people can set PBO manually (and bump other current-related limits up) with sane temps ...then the evidence would point to incompetence



So were all 12 boards the same price? Or was there no correlation between the boards that performed best and their price? If there's no correlation at all, then it's unlikely it's intentional.

if we find a lot of people can set PBO manually (and bump other current-related limits up) with sane temps, then the evidence would point to incompetence, assuming the above proves to have no correlation. Incompetence was and is my first guess. It's not like companies spend a lot of effort to get BIOSes dialed in; it's been test-in-production for them for over a decade.

So are you coming up with conspiracy theories about intentionally gimping boards without even doing any real reading? There are X470 boards which are not only significantly cheaper but also significantly older that are boosting higher than many X570 boards.
 
if people can set PBO manually (and bump other current-related limits up) with sane temps ...then the evidence would point to incompetence



So were all 12 boards the same price? Or was there no correlation between the boards that performed best and their price? If there's no correlation at all, then it's unlikely it's intentional.

if we find a lot of people can set PBO manually (and bump other current-related limits up) with sane temps, then the evidence would point to incompetence, assuming the above proves to have no correlation. Incompetence was and is my first guess. It's not like companies spend a lot of effort to get BIOSes dialed in; it's been test-in-production for them for over a decade.

The best board in the Hardware Unboxed investigation was an expensive one (but not the most expensive one). The second best board was a cheap one.
 
So are you coming up with conspiracy theories about intentionally gimping boards without even doing any real reading? There are X470 boards which are not only significantly cheaper but also significantly older that are boosting higher than many X570 boards.

dude, ironic regarding reading. You can read my original comment that got you all butt hurt and see that I'm being facetious about motherboards taking over the overclocking game with artificial "binning". I didn't "read" the video because it's a video, and I make it a point not to watch review videos because they almost always feature an annoying, self-important tool who needs attention and makes a video about everything, which makes it harder to reference when needed.

The only reason a CPU would PB2/PBO better on one motherboard vs. another that should otherwise perform the same, because it physically has the same or better capabilities (talking about VRM capabilities), is if the manufacturer was crap at making their BIOSes or did it on purpose. Full stop. There's no need for microcode or special AGESAs. These are basic settings for AM4 boards and Zen 2 in general that, if your Zen 2 CPU boots, should be present and working as designed.

So either the manufacturers are crap at BIOS quality control (a point I have repeatedly stated is most likely) or they did it on purpose. This is not an issue with AMD or them being unprepared, etc., as evidenced by the fact that the CPU functions exactly the way it's supposed to on the boards that are coded correctly.

edit: the way Precision Boost works is not via some magic algorithm running on the motherboard. The CPU just reads a couple of values from the motherboard that designate certain hardware limits in the VRM, more or less. Those values will vary from board lineup to board lineup based on whether they have different VRMs with different cooling capacities, and on marketing choices.

It's not CPU-dependent. It's not something they would need to test and check with new CPUs. It's just going to be a fixed (probably conservative) value related to their own motherboard hardware. The CPU does with it what it will and boosts/overclocks based on its own coding, which the video apparently proved is working just fine.
 
dude, ironic regarding reading. You can read my original comment that got you all butt hurt and see that I'm being facetious about motherboards taking over the overclocking game with artificial "binning". I didn't "read" the video because it's a video, and I make it a point not to watch review videos because they almost always feature an annoying, self-important tool who needs attention and makes a video about everything, which makes it harder to reference when needed.

The only reason a CPU would PB2/PBO better on one motherboard vs. another that should otherwise perform the same, because it physically has the same or better capabilities (talking about VRM capabilities), is if the manufacturer was crap at making their BIOSes or did it on purpose. Full stop. There's no need for microcode or special AGESAs. These are basic settings for AM4 boards and Zen 2 in general that, if your Zen 2 CPU boots, should be present and working as designed.

So either the manufacturers are crap at BIOS quality control (a point I have repeatedly stated is most likely) or they did it on purpose. This is not an issue with AMD or them being unprepared, etc., as evidenced by the fact that the CPU functions exactly the way it's supposed to on the boards that are coded correctly.

edit: the way Precision Boost works is not via some magic algorithm running on the motherboard. The CPU just reads a couple of values from the motherboard that designate certain hardware limits in the VRM, more or less. Those values will vary from board lineup to board lineup based on whether they have different VRMs with different cooling capacities, and on marketing choices.

It's not CPU-dependent. It's not something they would need to test and check with new CPUs. It's just going to be a fixed (probably conservative) value related to their own motherboard hardware. The CPU does with it what it will and boosts/overclocks based on its own coding, which the video apparently proved is working just fine.

I'm not butt hurt about anything; you just aren't making any sense. There's literally no evidence to support your gimping theory and, in fact, there's direct evidence that counters it. I'm really not sure where you come up with this crap; this is worse than your theory that the only reason the dot method for thermal paste exists is to save thermal paste. More thinking and less impulsiveness would suit your future posts well.

Buggy BIOSes are nothing new and have happened since the beginning of BIOSes. You could've just stopped there, and while you'd simply be pointing out the obvious, it at least wouldn't have been so nonsensical.
 
it was never a theory, just a tongue-in-cheek statement made in relation to what made CPU overclocking a thing back when it mattered, when manufacturers like Intel would bin CPUs not by capability, since they were all the same silicon. I even put a smiley face after it to show you it wasn't serious.

It's not my fault you took it seriously. And I responded to your statement about there not being any reason for such behavior from manufacturers because it was stupid: there's 100% good reason to obsolete old hardware, and we see companies try to do it all the time. I never said they were doing it, and I specifically said I had no idea whether the video provided such evidence, as I hadn't watched it and had none of my own.

Really, I don't know how much clearer I could have made things. I guess I should have made a video?

this is not a buggy BIOS problem. If it's incompetence, then it's just crap work from whatever little team or person is responsible for setting the values. These are literally just number fields that the CPU accesses, and they have to be correct for the PB2/PBO behavior to work as designed.
 
I'm not trying to imply anything. I'm simply stating that some people may not be seeing the advertised boost clocks in single-threaded applications in some configurations. I think that comes down to firmware, not necessarily settings or VRM quality. Those could be factors in some cases, but I don't think those are factors here. Neither does ASUS. I'm sure the Crosshair VIII Hero can actually achieve the advertised boost clocks on a 3900X. The interesting thing here is that on the Godlike board, single-threaded tasks were always assigned to the cores in Ryzen Master marked with a gold star. These are the best cores in the CPU. On the ASUS Crosshair VIII Hero, the single-threaded task always goes to core 0 no matter what. It's never assigned to the better cores. Using a 3700X, the behavior I expected is what I see. The single-threaded tasks are assigned to the better cores and it boosts correctly.
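On Windows the easiest way to see this is simply watching per-core load in Ryzen Master, but the idea of "which core does the scheduler park a single-threaded load on" can be sketched in a few lines. This is a Linux-only illustration using /proc (the function names are my own, nothing here is an official API):

```python
# Rough sketch of checking which core the OS scheduler keeps a
# single-threaded load on (Linux-only, reads /proc; on Windows you'd
# watch per-core load in Ryzen Master or Task Manager instead).
import time

def current_core():
    # Field 39 of /proc/self/stat is the CPU this task last ran on.
    with open("/proc/self/stat") as f:
        return int(f.read().split()[38])

def sample_cores(seconds=2.0, interval=0.05):
    """Busy-loop (the single-threaded load itself) and tally which cores
    we observe ourselves running on."""
    seen = {}
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        spin_until = time.monotonic() + interval
        while time.monotonic() < spin_until:
            pass  # keep exactly one thread fully loaded
        core = current_core()
        seen[core] = seen.get(core, 0) + 1
    return seen

if __name__ == "__main__":
    # If the scheduler prefers one "best" core, a single core dominates the
    # tally; if it always lands on core 0, that shows up here too.
    print(sample_cores())
```

The same busy-loop-and-observe approach is how you'd sanity-check whether the gold-star core is actually the one getting the work.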



I agree with you in that I don't think AMD is misleading anyone. I think it's what people know from previous Intel processors that has them thinking this is the case. AMD never says that all its cores are capable of achieving the maximum boost clock frequencies, or even that those frequencies are necessarily guaranteed. In contrast, Intel says all the cores on a 9900K can hit 5.0GHz, but it says nothing about you being able to achieve this as an all-core overclock. Because we can, people generally seem to assume this is how it should be with AMD. Intel also never says that it will boost on more than one core at a time. I'm not sure why so many seem to misunderstand the situation with AMD. If they do, I think it's because of their preconceived notions based on what Intel processors have been doing for years.

Personally, as long as a single core can hit the advertised boost clocks in actual single-threaded applications or benchmarks, I think AMD is covered. The advertised boost clocks were always about single-core clocks since the dawn of Intel's Turbo Boost. That's all AMD has guaranteed. I think cases like the one I'm experiencing are actually influenced by firmware or even Windows scheduler issues. There are certainly issues where that's concerned, as evidenced by what reviewers saw in some game testing.



Fair enough, but after the test gets going, what you normally see is a single core boost to its maximum clock speed, and from that point it fluctuates up and down. It won't necessarily maintain 4.6GHz through the duration of the test, but it will hit it repeatedly and then sustain it for a time. I've also been able to test boost clock behavior using the POV-Ray single-thread test and other software applications as well.
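Because the peaks are short, polling is how you catch them. A minimal sketch of that sampling loop, assuming a Linux box that exposes "cpu MHz" in /proc/cpuinfo (on Windows you'd watch the per-core clocks in HWiNFO or Ryzen Master instead; the function names here are my own):

```python
# Quick-and-dirty way to catch short boost peaks: repeatedly sample the
# highest per-core clock while a single-threaded benchmark runs elsewhere.
# Linux-only sketch; not every platform reports "cpu MHz" in /proc/cpuinfo.
import re
import time

def max_core_mhz():
    """Highest 'cpu MHz' value reported right now (0.0 if unavailable)."""
    with open("/proc/cpuinfo") as f:
        mhz = [float(m) for m in re.findall(r"cpu MHz\s*:\s*([\d.]+)", f.read())]
    return max(mhz) if mhz else 0.0

def peak_over(seconds=10.0, interval=0.1):
    """Highest clock seen on any core during the sampling window."""
    peak = 0.0
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        peak = max(peak, max_core_mhz())
        time.sleep(interval)
    return peak

if __name__ == "__main__":
    # Start the single-threaded benchmark first, then run this to see
    # whether any core actually touches the advertised boost clock.
    print(f"peak clock seen: {peak_over(2.0):.0f} MHz")
```

A coarse sampling interval will miss the shortest spikes, which is exactly why a single glance at a monitoring tool can make boost look lower than it really is.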

You are absolutely right. I have the same thing on my Crosshair VIII Hero. Core 3 for me is the gold-star core according to Ryzen Master. I see in HWiNFO64 it actually goes up to 4541.6-4566.66 max. But running Cinebench or any other program, it just defaults to another core.

I do think some sort of fine-tuning has to happen, or AMD really needs to put a tight grip around stock behavior: hey, if the BIOS is on Auto, these parameters must be hit, no excuses. I think board manufacturers might be tweaking too much, and that messes up how the cores behave on Auto depending on the board.

Now, for me to get these boosts I had to do a shitload of tweaking:

PBO enabled with +200; I did see +200 raise the all-core boost.

ASUS Performance Enhancer changed to Default from Auto (not sure why ASUS would customize this setting); on Auto it doesn't seem to sustain boosts.

Memory at 3533; had to drop it from 3600. Not much performance difference, but it did help me get boosts over 4500, where I was stuck at 4491 before.

everything else on auto.

Now, I shouldn't have to do all this for what should be stock behavior, lol.
 