How many 3900x silicon lottery losers are going to upgrade to 3950x?

https://openbenchmarking.org/result/1908234-HV-1908220HV55&obr_nor=y&obr_hgv=3900x0.1+8-22-19

So many other settings changes didn't even make it onto that test run.

All the tweaking and changes amount to basically no change in performance, but do impact heat output significantly.


Where your processes get assigned when you run them is decided by the operating system's process scheduler (the kernel). It would have to be updated to be aware of the heterogeneous distribution of core capability in modern (Zen 2 and beyond) CPUs. I assume there's some identifier in the CPU marking these golden cores ... either that, or these programs assign the label to whichever core shows the highest frequency in some initial test they run. Either way, the kernel has to be specially coded for it, since nothing like this has existed before on x86.

ARM has something similar with its big.LITTLE designs, where high-performance cores are paired with low-power cores and the kernel uses an energy-aware scheduler to decide when to push processes to the high-powered cores (however, that's Linux, not Windows). A similar system could be set up for x86, though.

I wouldn't expect Windows or even Linux to put a lot of work into this until Intel jumps on board with individual core binning ... and if they have and just haven't been very vocal about it, then it would need to become much more common and impactful to performance to justify the effort.

The motherboard bios/firmware/agesa and cpu have absolutely nothing to do with how processes get assigned to cores to run. Nothing in any future update will change that.

For now, seeing that one CPU core hit 4.55GHz rather than 4.3 or 4.2 is more for bragging rights than for serious boosts to single-threaded application performance. Basically eye candy for hardware enthusiasts and benchmark reviewers.

Until then, you'll have to manually pin processes you definitely want running on the fastest core(s) yourself, after identifying which ones those are, every time you run the application.
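On Linux, a rough way to identify the faster cores is to read each core's reported maximum frequency from sysfs. A minimal sketch (my own illustration, assuming a system that exposes cpufreq; where it doesn't, the function simply returns nothing):

```python
import glob
import re

def max_freqs_khz():
    """Map core number -> reported max frequency (kHz) from sysfs.

    Returns an empty dict on systems without cpufreq support.
    """
    freqs = {}
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq"):
        core = int(re.search(r"cpu(\d+)", path).group(1))
        with open(path) as f:
            freqs[core] = int(f.read().strip())
    return freqs

if __name__ == "__main__":
    # Cores sorted fastest-first. Note: on current desktop Ryzen parts
    # sysfs doesn't expose per-core binning, so these may all read the same;
    # the "golden" cores still have to come from tools like Ryzen Master.
    for core, khz in sorted(max_freqs_khz().items(), key=lambda kv: -kv[1]):
        print(f"cpu{core}: {khz / 1e6:.2f} GHz")
```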

this can be done in the Windows Task Manager (set CPU affinity on the Details tab) ... and on Linux via `taskset -c <core number> <command>` to launch a process, or `taskset -cp <core number> <pid>` for one that's already running
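The same pinning can also be done programmatically. A minimal sketch using the Linux affinity calls exposed by Python's standard library (core 0 is used only because it always exists; substitute whichever core your tools flag as fastest):

```python
import os

def pin_to_core(core: int) -> set:
    """Restrict the calling process to a single core and return the
    affinity mask the kernel now reports for us."""
    os.sched_setaffinity(0, {core})   # pid 0 = the current process
    return os.sched_getaffinity(0)

if __name__ == "__main__":
    print("before:", sorted(os.sched_getaffinity(0)))
    print("after: ", sorted(pin_to_core(0)))  # now pinned to core 0
```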
 
This rings true until you see accounts of some motherboards, with the same CPU, assigning threads differently from one another. That is, some give the best cores the single-core workload to achieve max boost, while others just assign threads down the line in core order. There IS something else to this.
 
The motherboard bios/firmware/agesa and cpu have absolutely nothing to do with how processes get assigned to cores to run. Nothing in any future update will change that.

I wouldn't be so sure about that.

Using my Ryzen 9 3900X as an example, it behaves differently on two different motherboards. On the MSI MEG X570 GODLIKE the 3900X boosts correctly, while on the ASUS Crosshair VIII Hero it doesn't. They both use the same AGESA code. The rest of the system configuration is identical: same RAM, same SSD, fresh OS install, same patch level, OS build, drivers, video card, and everything else. On that same ASUS Crosshair VIII Hero, the 3700X boosts correctly. On the MEG X570 GODLIKE, any single-threaded task like Cinebench R20 loads on core 2, which is the one with the gold star. On the ASUS, it goes to core 0. On the 3700X, the gold star is on core 1 (counting from 0), and when you run the same test with the same OS install and other hardware, the threads still go to the core with the gold star.

You would think the motherboard has nothing to do with it, but the same CPU can behave differently on different motherboards. That's definitely one of the variables. Oddly, a different CPU works correctly, so there doesn't seem to be any rhyme or reason to why things work this way. ASUS doesn't even know why this is happening; I brought it to their attention right away. That's all contradictory, but the point is that we really can't say what the problem is or how it might be corrected in the future, if it can even be corrected. You can't really say definitively that firmware or AGESA code updates have nothing to do with this, as the AGESA 1.0.0.3 patch AB successfully addressed a boost clock issue for many people, including me on the GODLIKE motherboard.
 
That's less an indicator that the motherboard matters, and more that the Windows scheduler may already be aware of "preferred" cores for high-priority / CPU-heavy tasks.

When it comes to assigning which CPU gets what task, that's 100%, definitively, the kernel's (Linux or Windows) decision. I know Linux has no code that references a preferred CPU - it has no way of automatically scheduling high-priority tasks to the fastest core. It sees them all the same and simply goes by which one has free time or, depending on which scheduler you use, whose time slice has been exceeded so it can be preempted. Windows may be looking at whatever identifier lets those applications give some cores a gold star, then giving processes it knows are heavy loads preferential treatment on them.

The power management cooperation Microsoft has done with AMD may play into those changes. Unfortunately, I've seen none of that going on on the Linux side of things.

The boosting differences do have a motherboard factor for sure. The CPU only boosts according to what the motherboard exposes to it (the various numeric fields that set limits and bounds for current / voltage / wattage) ... and that will vary mobo to mobo.

edit: these motherboard manufacturers probably have one or two people responsible for porting the standard BIOS firmware to each new motherboard. You'd almost certainly never communicate directly with those people ... so even if they're aware of how Precision Boost works, nobody you actually talk to likely is, or even knows what their own product does under the hood. Annoying, I'm sure.

I saw one comment on reddit about communication with ASUS, where they were told that AMD gives them the values. Hah. These are literally values defined by the given motherboard; AMD would have no way of providing them. They just provide the minimums required by spec.
 
That's less an indicator that the motherboard matters, and more that the Windows scheduler may already be aware of "preferred" cores for high-priority / CPU-heavy tasks.

When it comes to assigning which CPU gets what task, that's 100%, definitively, the kernel's (Linux or Windows) decision. I know Linux has no code that references a preferred CPU - it has no way of automatically scheduling high-priority tasks to the fastest core. It sees them all the same and simply goes by which one has free time or, depending on which scheduler you use, whose time slice has been exceeded so it can be preempted. Windows may be looking at whatever identifier lets those applications give some cores a gold star, then giving processes it knows are heavy loads preferential treatment on them.

The power management cooperation Microsoft has done with AMD may play into those changes. Unfortunately, I've seen none of that going on on the Linux side of things.

The boosting differences do have a motherboard factor for sure. The CPU only boosts according to what the motherboard exposes to it (the various numeric fields that set limits and bounds for current / voltage / wattage) ... and that will vary mobo to mobo.

edit: these motherboard manufacturers probably have one or two people responsible for porting the standard BIOS firmware to each new motherboard. You'd almost certainly never communicate directly with those people ... so even if they're aware of how Precision Boost works, nobody you actually talk to likely is, or even knows what their own product does under the hood. Annoying, I'm sure.

I saw one comment on reddit about communication with ASUS, where they were told that AMD gives them the values. Hah. These are literally values defined by the given motherboard; AMD would have no way of providing them. They just provide the minimums required by spec.

I don't think the scheduler is aware of anything beyond the basic topology of the CPU, and that's only on build 1903. All that does is try to keep threads on cores within the same CCX and CCD so that it doesn't incur any additional and unnecessary latency. This is per AMD's reviewer's guide. Logically, this should mean that core 0 or 1 would be the first ones loaded. How the processor is loaded isn't a mystery either, because we can see the utilization in Task Manager. We can also watch which cores are being used at a given time in real time (or as close to it as possible) via Ryzen Master. Yet, when the system is otherwise idle and when boost clocks are working properly, the core with the gold star is always the one that gets loaded. This leads me to believe there is some logic in either the CPU or the AGESA code that does this, possibly by forwarding the task to the core with the gold star. This would also be independent of the OS, which would make sense.

The power management stuff you brought up is CPPC2, and it requires the balanced AMD Ryzen power plan in conjunction with the proper chipset drivers, a Ryzen 3000 series CPU, and Windows 10 build 1903. All it does is shorten frequency ramp-up time for "burst" workloads in order to improve task opening and general system responsiveness. This would impact any task with a short transient spike of CPU usage which might otherwise not last long enough for the CPU core to hit its maximum boost frequency.

Also, according to AMD, boost clock behavior is governed by four variables alone. The first is the OEM preset, referring to AMD's clock speed limit for a given processor model - the boost clock ceiling for that particular SKU. The other limits are PPT, TDC, and EDC. That's it. The motherboard doesn't factor in when using PB2 at stock settings. This allows you to hit the proper boost clocks on any motherboard, regardless of design characteristics. This is from AMD's documentation:

Package Power Tracking (“PPT”): The PPT threshold is the allowed socket power consumption permitted across the voltage rails supplying the socket. Applications with high thread counts, and/or “heavy” threads, can encounter PPT limits that can be alleviated with a raised PPT limit.
  • Default for Socket AM4 is at least 142W on motherboards rated for 105W TDP processors.
  • Default for Socket AM4 is at least 88W on motherboards rated for 65W TDP processors.

Thermal Design Current (“TDC”): The maximum current (amps) that can be delivered by a specific motherboard’s voltage regulator configuration in thermally-constrained scenarios.
  • Default for Socket AM4 is at least 95A on motherboards rated for 105W TDP processors.
  • Default for Socket AM4 is at least 60A on motherboards rated for 65W TDP processors.

Electrical Design Current (“EDC”): The maximum current (amps) that can be delivered by a specific motherboard’s voltage regulator configuration in a peak (“spike”) condition for a short period of time.
  • Default for Socket AM4 is 140A on motherboards rated for 105W TDP processors.
  • Default for Socket AM4 is 90A on motherboards rated for 65W TDP processors.
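To make the relationship between these limits and boost behavior concrete, here's a toy model (my own simplification for illustration, not AMD's actual algorithm; the per-GHz power and current figures are made up): frequency rises until the first of PPT/TDC/EDC binds, and never past the OEM frequency preset.

```python
# Toy model of Precision Boost limit gating. The limit numbers are AMD's
# documented Socket AM4 minimums for a 105W-TDP part (e.g. a 3900X);
# the watts/amps-per-GHz scaling of a workload is invented for illustration.
LIMITS = {"ppt_w": 142.0, "tdc_a": 95.0, "edc_a": 140.0, "fmax_ghz": 4.6}

def boost_ceiling_ghz(watts_per_ghz: float, amps_per_ghz: float,
                      limits: dict = LIMITS) -> float:
    """Highest frequency allowed by whichever limit binds first."""
    by_ppt = limits["ppt_w"] / watts_per_ghz   # socket power cap
    by_tdc = limits["tdc_a"] / amps_per_ghz    # sustained current cap
    by_edc = limits["edc_a"] / amps_per_ghz    # short-spike current cap
    return min(limits["fmax_ghz"], by_ppt, by_tdc, by_edc)

# A light single-thread load draws little power -> the fmax preset binds:
print(boost_ceiling_ghz(watts_per_ghz=4.0, amps_per_ghz=6.0))    # 4.6
# A heavy all-core load draws far more -> PPT binds well below fmax:
print(boost_ceiling_ghz(watts_per_ghz=40.0, amps_per_ghz=25.0))  # 3.55
```

This is why the same chip can show its full rated boost on a one-thread Cinebench pass yet sit far lower on an all-core render, without the motherboard entering into it at stock PB2 settings.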

Essentially, the motherboard has little to do with it when using PB2 or default / automatic settings. Enabling PBO switches to the motherboard's preset values, which are defined by the motherboard manufacturer, or by the user when manual settings are used in conjunction with PBO. Thus, your assertion that boost clocks have a "motherboard factor" isn't necessarily correct. I'm sure certain conditions have to be met, but those conditions can or should be met by nearly any motherboard so long as it supports the total wattage of the CPU installed. In theory, if you had a motherboard that could only handle a 65W CPU (88W max per spec), then yes, I suppose it would prevent boost clocking of a 3900X. However, that's not what we've seen. You'd have to find some low-end board with a very poor VRM implementation for that to be the case, and even then, that's not what we are talking about here. We are talking about boost clocking on X570 motherboards being inconsistent. Per ASUS, CPU boost clocks are controlled by the AGESA code and nothing else. On motherboards that meet the power requirements for a given CPU, boost clocks should be identical across all of them. Again, per ASUS.

You are right about one thing. AMD would provide the minimums per the specification for PPT, TDC, and EDC values, and motherboards would need to be built to meet those requirements at a minimum. While ASUS has sometimes been misleading about its VRM designs, those designs have always been "good enough" to overclock CPUs and to run them properly at stock speeds. So the idea that an ASUS Crosshair VIII Hero would be incapable of meeting the requirements to boost a 3900X to 4.6GHz on a single core is, frankly, nonsense. Again, per ASUS, it absolutely can and does work in their labs. All we can do is speculate here, because frankly, none of us knows exactly why some people can achieve the correct boost clocks on some motherboard and CPU combinations while others can't. People have experienced all kinds of mixed results with changing RAM and messing with certain settings, which either makes things worse or allows things to work correctly.

Lastly, I've been a reviewer for 14 years. It's obvious you know nothing about that business. I've met and spoken to some of the engineers who design this stuff directly, many times over the years. It's not something you necessarily do day to day, but for you to say that I'd never communicate with these people is simply talking out of your ass. These guys get hauled out to do press events and talk to reviewers who have questions beyond the scope of what the PR people can handle or relay via third-party inquiries such as e-mail. I've spoken to plenty of the people responsible for designing the motherboards at ASUS and GIGABYTE, many times. I and other reviewers have even been asked for input on a given design while the boards were still early in development.
 
Which CPU gets loaded up can't be decided independently of what the kernel expects. If the process scheduler says run process xyz on CPU 3, then it will run on CPU 3. The CPU / BIOS etc. can't arbitrarily have it run on CPU 0. That's not how things work. That would require a hardware abstraction layer that would hide the actual CPU getting the load not only from the kernel, but from your monitoring software as well.


It's kernel-level software that manages what runs on which CPU at all times. Every single process is specifically told to run on a specific CPU. The hardware doesn't have a say.

The BIOS has a default set of values for PPT, TDC, and EDC, given by AMD for the various processors. This is what should be getting read from the motherboard by the CPU for PB2.

For PBO, the motherboard manufacturers were supposed to have a separate set of values specifically tuned to the capabilities of the particular motherboard. On my ASUS board, this is labeled "Motherboard" in the settings options.

You then have the option to manually set these values to whatever you want.

My suggestion was that if the manufacturer just used their motherboard values for both the default and their "Motherboard" setting, instead of using AMD's defaults at all, then we'd see exactly the behavior we're seeing, where there appears to be no difference between having PBO enabled and not.

The algorithm is the same, and the CPU is still reading from the motherboard with either option active ... it's just that with PB2 it should be the AMD defaults. But what if that's not the case on many boards?

In addition to those three settings, there are still the scaler option, the frequency limit option, and the temperature limit options that are specific to PBO being enabled. So a lot of what I'm seeing in testing could be explained by the default settings actually being motherboard capacities rather than AMD defaults. The main difference between PBO and non-PBO would then just be the scaler value that controls voltage boosting - which you'd see showing up differently based on the silicon lottery. (Which, if you're lucky, means the only difference between PBO and non-PBO is mostly heat ... since you aren't excluded from hitting high frequencies at the lower voltage used with normal PB2.)

As for your access to the devs and engineers ... you're the exception, not the rule. Any regular person emailing ASUS or another motherboard manufacturer about issues or problems is going to speak to a support representative, and it stops there. It's not like they are going to have their devs and engineers field questions and support tickets from the public ... or are you saying that's what ASUS does?


edit: you know, after re-reading this ... I could test the theory fairly easily. If I manually set the PBO values to the AMD defaults for the TDP of my CPU, I should see a decrease in performance that I haven't seen when enabling or disabling PBO.

This would also potentially bring my numbers down in line with what I'm seeing for the 3900X in other, more official Phoronix benchmarks; mine have always been higher in most tests, with or without PBO enabled.

I'll try that after work. If I see no difference in performance, then I'll give up my theories (at least in terms of PBO).

edit2: the AMD and Microsoft work on power management was only brought up to suggest that Microsoft and AMD do make AMD-specific changes to kernel features (like any OS would). With AMD's close work with Microsoft throughout all of Ryzen on frequency scaling, sleep modes, and such, they could have implemented something earlier on to show the scheduler how to identify "fast" cores and give them preferential treatment. Or Microsoft could have designed a more general feature into their scheduler that quickly tests the cores on bootup for differences in peak frequency and favors the highest ones. Then they'd be future-proof and not need any AMD/Intel-specific code. Either option is possible. Something similar could find its way into Linux too, if someone had the desire to make the effort. I'm not sure 200MHz on CPUs that are hitting 4.3GHz without it is worth it, though: under a 5% frequency difference for a feature that is technically warranty-breaking.

Technically, from what's been stated, PB2 gives you the peak frequency you see printed on the box, and PBO should give you up to 200MHz over that if your chip is capable. Has _anyone_ seen greater than 4.6GHz from a 3900X? Even on motherboards that appear to have properly functioning PBO? I thought someone even tried chilled setups, and all that did was let more cores reach near 4.6GHz as they approached 0C, never really exceeding it. Are the 3800/3700 types seeing more than the printed boost clocks, or just boost clocks for longer, or boost clocks at all where regular PB2 wasn't even reaching them?
 
Which CPU gets loaded up can't be decided independently of what the kernel expects. If the process scheduler says run process xyz on CPU 3, then it will run on CPU 3. The CPU / BIOS etc. can't arbitrarily have it run on CPU 0. That's not how things work. That would require a hardware abstraction layer that would hide the actual CPU getting the load not only from the kernel, but from your monitoring software as well.


It's kernel-level software that manages what runs on which CPU at all times. Every single process is specifically told to run on a specific CPU. The hardware doesn't have a say.

I'm simply making a statement based on what I've observed. Which is to say, it's consistent with a given configuration, but inconsistent across different CPUs and motherboards. I would have thought that only Windows, or whatever kernel you're using, would have exclusive control over this, but the behavior I see on these systems brings that into question. Why would Windows always use the preferred core on one configuration and not another? Why would it use core 2 on one CPU, and core 0 every time on another, when all system variables aside from the CPU are the same?

The BIOS has a default set of values for PPT, TDC, and EDC, given by AMD for the various processors. This is what should be getting read from the motherboard by the CPU for PB2.

I'm not sure where the disagreement is here. Where these values are defined is largely unimportant. The values are what they are for each CPU. I'm fairly certain these are actually defined in the AGESA code rather than somewhere else. These are AMD's OEM values, and the motherboard makers cannot alter AGESA code. The motherboard makers are told what those values are so they can build their VRMs accordingly. Therefore, all motherboards should be able to boost CPUs the same way.

For PBO, the motherboard manufacturers were supposed to have a separate set of values specifically tuned to the capabilities of the particular motherboard. On my ASUS board, this is labeled "Motherboard" in the settings options.

You then have the option to manually set these values to whatever you want.

I'm not sure why you are restating this. Again, what I said was in reference to PB2. What I said about PBO is the same thing: it does use motherboard-specific values, which on all the boards I've seen are actually labeled "motherboard", or are manual values the user can change.

My suggestion was that if the manufacturer just used their motherboard values for both the default and their "Motherboard" setting, instead of using AMD's defaults at all, then we'd see exactly the behavior we're seeing, where there appears to be no difference between having PBO enabled and not.

The default values used for PB2 are AMD's values, not the motherboard manufacturers'. AMD is clear on this. And we actually do see similar behavior between PB2 and PBO on the 3900X. Many people even report worse performance or lower boost behavior when utilizing the PBO option, and no one is really sure why. PBO does seem to work better on the lower-end CPUs, especially the 65-watt ones, but I haven't verified this myself. I'm still working on the 3700X and 3600X reviews, so I'm going to get into all of that tonight and this weekend.

The algorithm is the same, and the CPU is still reading from the motherboard with either option active ... it's just that with PB2 it should be the AMD defaults. But what if that's not the case on many boards?

Yes, AMD is clear in their documentation to reviewers that PB2 and PBO use the same algorithm. The only difference is that PB2 uses AMD's pre-defined values while PBO uses the motherboard manufacturer's values. This is the case with all AM4 motherboards.

In addition to those three settings, there are still the scaler option, the frequency limit option, and the temperature limit options that are specific to PBO being enabled. So a lot of what I'm seeing in testing could be explained by the default settings actually being motherboard capacities rather than AMD defaults. The main difference between PBO and non-PBO would then just be the scaler value that controls voltage boosting - which you'd see showing up differently based on the silicon lottery. (Which, if you're lucky, means the only difference between PBO and non-PBO is mostly heat ... since you aren't excluded from hitting high frequencies at the lower voltage used with normal PB2.)

Well, there are more options than those when using PBO in manual mode. So fair enough, but again, AMD's documentation makes it pretty clear that the four values I mentioned are the important ones.

As for your access to the devs and engineers ... you're the exception, not the rule. Any regular person emailing ASUS or another motherboard manufacturer about issues or problems is going to speak to a support representative, and it stops there. It's not like they are going to have their devs and engineers field questions and support tickets from the public ... or are you saying that's what ASUS does?

No, I am not saying that. I said what I said to tell you that when I say ASUS or another manufacturer says "XYZ", it's not coming from some low-level phone support person. Sometimes this information is relayed through their PR contacts, sometimes it's directly from an engineer in the e-mail chain. Sometimes I have spoken to engineers directly, even in person. I've met and spoken with a number of them from various companies over the years. I've also met the leadership in charge of those product lines.

you know, after re-reading this ... I could test the theory fairly easily. If I manually set the PBO values to the AMD defaults for the TDP of my CPU, I should see a decrease in performance that I haven't seen when enabling or disabling PBO.

This would also potentially bring my numbers down in line with what I'm seeing for the 3900X in other, more official Phoronix benchmarks; mine have always been higher in most tests, with or without PBO enabled.

I'll try that after work. If I see no difference in performance, then I'll give up my theories (at least in terms of PBO).

I'm not sure why your numbers are what they are. It could be the silicon lottery, or it could be that your configuration is somehow better. Have you messed with your FCLK and RAM clocks or timings? I know you're not doing anything exotic with cooling, but I also don't know what your ambient temperatures look like. I've had both good and bad on that front with my test setup and, frankly, they didn't impact the results much at all. My office is actually the coldest room in my house, but it's not exactly huge. If I just have one or two machines running, it's cold. If I have both test benches going, it gets hot as hell in the room and my ambient temps start to increase noticeably.
 
We've got the hardware guy (Dan_D) and the software guy (Darth Ender) hashing PBO out over and over across several different threads here now. I'm going with Dan_D, who speaks directly with the motherboard makers. PBO is a waste for almost all usage scenarios, based on my reading of countless forum posts and reviews. It is a head-scratcher that my freaking Hero VIII can't boost where it needs to, but right now I can't get the freaking RAM stable, so it's all moot. ;-) But credit to Darth Ender: I've been doing all my setup and testing on my machine with the CPU set to a -0.1 offset.
 
We've got the hardware guy (Dan_D) and the software guy (Darth Ender) hashing PBO out over and over across several different threads here now. I'm going with Dan_D, who speaks directly with the motherboard makers. PBO is a waste for almost all usage scenarios, based on my reading of countless forum posts and reviews. It is a head-scratcher that my freaking Hero VIII can't boost where it needs to, but right now I can't get the freaking RAM stable, so it's all moot. ;-) But credit to Darth Ender: I've been doing all my setup and testing on my machine with the CPU set to a -0.1 offset.

I don't have all the answers. I just know what AMD says about what it's done in conjunction with Microsoft as it relates to the scheduler and CPPC2. Nothing on that front supports the idea that anything but the scheduler can assign threads to specific CPU cores, so keep that in mind. On the hardware side, I'm working with ASUS to try to find a solution, but even they don't know what's going on. That tells me this is probably an AMD issue. We've already seen boost clocking fixed via AGESA code in some cases. To say this can't be solved by AMD via an AGESA code or firmware update is, frankly, premature. We don't actually know what the problem is. It's obviously not a simple fix, or we'd have seen one by now from whichever party is responsible. My guess is that it's a combination of factors that come together to determine whether or not you get the correct advertised boost clocks.
 
I'm simply making a statement based on what I've observed. Which is to say, it's consistent with a given configuration, but inconsistent across different CPUs and motherboards. I would have thought that only Windows, or whatever kernel you're using, would have exclusive control over this, but the behavior I see on these systems brings that into question. Why would Windows always use the preferred core on one configuration and not another? Why would it use core 2 on one CPU, and core 0 every time on another, when all system variables aside from the CPU are the same?

I'm confused now. I thought you were saying that Windows is preferentially scheduling processes to the golden cores wherever those golden cores happen to be ... so on some CPUs it's core 0, on some core 2, but Windows somehow knows and schedules the process there. Are you saying this behavior is sometimes not seen depending on the motherboard? That is, move a CPU where this occurs all the time to another mobo and it suddenly stops occurring?

In any case, the operating system can't execute a process on CPU 3 and have the CPU run it on CPU 0. The motherboard can't do that either. The kernel has complete control at all times over which CPU a process executes on. It's either identifying the faster CPUs itself, or it's been coded to look for something that tells it.

The only other alternative I can think of is pure chance, which seems unlikely. But I do know the motherboard or CPU can't switch it up on their own. And unless the CPU number seen by the kernel isn't the CPU number seen by your monitoring software, which would be odd, those are the options.
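On Linux you can actually watch this bookkeeping: the kernel records, per process, which CPU it last scheduled the process on, and exposes it through procfs. A small sketch (Linux only; field numbering per the proc(5) man page):

```python
def last_ran_on(pid="self"):
    """Which CPU the kernel last scheduled this process on.

    Parses field 39 ("processor") of /proc/<pid>/stat.
    """
    with open(f"/proc/{pid}/stat") as f:
        raw = f.read()
    # The comm field (2) may contain spaces; split after its closing ')'.
    rest = raw.rsplit(")", 1)[1].split()
    return int(rest[36])  # rest[0] is field 3, so field 39 is rest[36]

if __name__ == "__main__":
    print("kernel last ran us on cpu", last_ran_on())
```

This is the same per-process assignment the monitoring tools report, which is why the CPU or BIOS silently rerouting a thread would be visible as a mismatch.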

I'm not sure where the disagreement is here. Where these values are defined is largely unimportant. The values are what they are for each CPU. I'm fairly certain these are actually defined in the AGESA code rather than somewhere else. These are AMD's OEM values, and the motherboard makers cannot alter AGESA code. The motherboard makers are told what those values are so they can build their VRMs accordingly. Therefore, all motherboards should be able to boost CPUs the same way.

I'm not sure why you are restating this. Again, what I said was in reference to PB2. What I said about PBO is the same thing: it does use motherboard-specific values, which on all the boards I've seen are actually labeled "motherboard", or are manual values the user can change.

The default values used for PB2 are AMD's values, not the motherboard manufacturers'. AMD is clear on this. And we actually do see similar behavior between PB2 and PBO on the 3900X. Many people even report worse performance or lower boost behavior when utilizing the PBO option, and no one is really sure why. PBO does seem to work better on the lower-end CPUs, especially the 65-watt ones, but I haven't verified this myself. I'm still working on the 3700X and 3600X reviews, so I'm going to get into all of that tonight and this weekend.

It is my understanding that while AMD provides the values for PB2, they're still read by the CPU from the motherboard, from the same location as the PBO values, rather than from some internal CPU location. The motherboard is responsible for swapping that location between the default, PBO, or manual values based on the BIOS settings. So if the motherboard loaded its own values there instead of the defaults - perhaps on the assumption that AMD wouldn't use them unless some other overclocking function was activated - then PB2 mode would see the same limits we should only see with PBO enabled, but without the other PBO options, such as the scaler, in effect.

That could be completely wrong. I'm just thinking of ways to explain why PBO appears to have no effect over PB2 except increasing heat. However, it seems more and more likely that, rather than my already using PBO values, I'm just not seeing PBO at all.

Yes, AMD is clear in their documentation to reviewers that PB2 and PBO use the same algorithm. The only difference is that PB2 uses AMD's pre-defined values while PBO uses the motherboard manufacturer's values. This is the case with all AM4 motherboards.



Well, there are more options than those when using PBO in manual mode. So fair enough, but again, AMD's documentation makes it pretty clear that the four values I mentioned are the important ones.

Which again raises the question: what's going wrong when the values are changed but there's no difference? What is the algorithm seeing that keeps it from doing what it should, that none of us can see? Because the temps look good, the board has the ability, we know the algorithm is controlled by the CPU, and we set the values to what they should be for PBO to have room to work. Are we expecting way too much from it?

No, I am not saying that. I said what I said to tell you that when I say ASUS or another manufacturer says "XYZ", it's not coming from some low-level phone support person. Sometimes this information is relayed through their PR contacts, sometimes it's directly from an engineer in the e-mail chain. Sometimes I have spoken to engineers directly and even in person. I've met and spoken with a number of them from various companies over the years. I've also met the leadership in charge of those product lines.



I'm not sure why your numbers are what they are. Could be silicon lottery, could be a matter of your configuration being somehow better. Have you messed with your FCLK and RAM clocks or timings? I know you're not doing anything exotic with cooling, but I also don't know what your ambient temperatures look like. I've had both good and bad on that front with my test setup and, frankly, they didn't impact the results much at all. My office is actually the coldest room in my house, but it's not exactly huge. If I just have one or two machines running, it's cold. If I have both test benches going, it gets hot as hell in the room and my ambient temps start to increase noticeably.

I guess it comes down to this.

Has anyone seen 4.7GHz out of the 3900X ...much less 4.8GHz? That's what working PBO should allow single cores to hit. Simply being able to hold the boost clocks of 4.5-4.6GHz longer, or hold 4.3GHz all-core, is not PBO working, just adding voltage. PBO should be removing that frequency ceiling, and I haven't seen anyone really talking about breaking it. Just reaching/maintaining PB2 boost frequencies better or longer.

It's almost as if there is a flag the CPU expects to be set to "void the warranty" so that it unlocks and allows you to hit that up-to-200MHz-over-boost clock rate, and it's not seeing that flag from the motherboard even when we agree to the warning and enable PBO. (Again though, I'm assuming AMD has some way of knowing when PBO has been activated so they can void the warranty like they have stated they would - perhaps that's just a lot of legalese hot air.)
 
I'm confused now. I thought you were saying that Windows is preferentially scheduling processes to the golden cores regardless of where those golden cores are ...so on some CPUs it's core 0, on some core 2, but Windows somehow knows and schedules the process to it. Are you saying this behavior is sometimes not seen based on the motherboard? So move a CPU where this occurs all the time to another mobo and it suddenly stops?

What I've observed, I've stated many times already. First off, none of my CPUs have their golden cores marked as core 0. Not a single one. Let's use the Ryzen 9 3900X and 3700X on the ASUS Crosshair VIII Hero as an example one more time. The 3900X never gets a single-threaded task loaded on its golden cores in this configuration. However, if all I do is swap in a 3700X, then it does work correctly. A single-threaded task will be scheduled on the golden cores without issue. This happens each and every time. However, that same 3900X exhibits different behavior on the MSI GODLIKE board. On that one, single-threaded tasks are always scheduled on the golden cores. The only hardware change in each case is the motherboard or the CPU. The 3900X works properly on the MSI but not the ASUS board. The 3700X works fine on the ASUS board. I haven't tested it on the MSI.

In any case though, the operating system can't schedule a process on CPU 3 and have the CPU actually run it on CPU 0. The motherboard can't do that either. The kernel has complete control at all times over which CPU a process executes on. It's either identifying faster CPUs itself, or it's been coded to look for something that lets it know.

You say this, but Windows has no way of knowing which are the golden CPU cores. And if it does, why does it work in some cases and not others? I think this is the point where we are in speculative territory. I don't know what causes the process to land on the correct core with a particular CPU on one motherboard and not another. Again, changing only the motherboard in this equation yields different results. Also, changing only the CPU yields a different result. No one can answer why this is. Not you, not me and not even ASUS knows.

The only other alternative I can think of is complete chance, which seems unlikely. But I do know the motherboard or CPU can't switch it up on their own. And unless the CPU number seen by the kernel is not the CPU number seen in your monitoring software, which would be odd, those are the options.

I'll agree it's not random chance because this behavior is consistent. I can run Cinebench, POV-Ray, or any other single-threaded test on a given configuration and get the same results each and every time. Altering the hardware configuration is the only variable that yields a different result so far.

It is my understanding that while AMD provides the values for PB2, they're still read from the motherboard, from the same location as the PBO values, rather than from some internal CPU location.

It likely comes from the AGESA code, which the motherboard makers cannot modify. So it does come from the motherboard's UEFI BIOS, which is on the motherboard. However, the PBO values would have to be set somewhere else that the motherboard makers can modify. This is, again, per ASUS. All the values governing boost behavior are in AGESA code. Period, end of story.

The motherboard is responsible for swapping that location in memory out with PBO or manual values based on BIOS settings. So if the motherboard loaded its own values there instead of the defaults, perhaps assuming AMD wouldn't use them unless some other overclocking function was activated, then PB2 mode would see the same limits we should only see with PBO enabled, but without the other PBO options in effect, such as the scaler.

That could be completely wrong. I'm just thinking of ways to explain why PBO appears to have no effect beyond PB2 except increasing heat. However, it seems more and more likely that rather than my already running on PBO values, I'm just not seeing PBO take effect at all.

Again, per ASUS, that's not how things work. Anything governing boost behavior comes from AGESA code. However, motherboard makers cannot modify that code and therefore, the AGESA code probably has a flag which tells the PB2 algorithm to get its values from a table located in the UEFI that the motherboard makers can modify and do modify specifically for each motherboard model.

Which again raises the question: what's going wrong when the values are changed but there's no difference? What is the algorithm seeing that keeps it from doing what it should, that none of us can see? Because the temps look good, the board has the ability, we know the algorithm is controlled by the CPU, and we set the values to what they should be for PBO to have room to work.

Well, that is the $64,000 question. It's obvious that there is another limitation in play that prevents the CPU from boosting to either the correct maximum boost clock speeds or going over when the limits are raised, allowing a 200MHz offset through PBO+AutoOC. No one, as far as I know, has successfully gotten that to work on a 3900X. Therefore, the limiting factor has to be something we can't see or can't control. We just can't answer that question right now. It could come down to simple silicon lottery, and the ability to deliver current within certain ranges of ripple, transient response, etc. The VRMs on the MSI MEG X570 GODLIKE are beefier than those of the Crosshair VIII Hero, but I'm not sure that's coming into play given that each configuration sees different core assignments for those single-threaded tests.

What I need to do is try Process Lasso, move the Cinebench thread to the golden core, and see if it boosts correctly. That would help figure out what's going on here.
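Manual pinning doesn't strictly require Process Lasso either; the affinity bitmask can be computed and fed to the stock OS tools. A minimal sketch (the core numbers here are hypothetical — you'd substitute whichever cores your monitoring software flags as golden):

```python
def affinity_mask(cores):
    """Build a CPU affinity bitmask: bit n set = logical CPU n allowed."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

# Pin a single-threaded benchmark to logical CPUs 2 and 3 (mask 0xC):
#   Windows:  start /affinity C cinebench.exe   (mask given in hex)
#   Linux:    taskset 0xC ./benchmark
print(hex(affinity_mask([2, 3])))  # -> 0xc
```

Note that on SMT-enabled chips each physical core exposes two logical CPUs, so "core 2" in a monitoring tool may correspond to logical CPUs 4 and 5 in the mask.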

Are we expecting way too much from it?

Perhaps. I think some of the bitching that happens concerning these CPUs does come down to that. People feel cheated because you can't hit 4.6GHz on any core in a 3900X. Obviously, if you could, that would make this issue I'm seeing of core 0 getting the thread when it should be on core 2 or 3 a moot point. As long as the system were otherwise idle, it could boost that one core and there wouldn't be much of a discussion here.

I guess it comes down to this.

Has anyone seen 4.7GHz out of the 3900X ...much less 4.8GHz? That's what working PBO should allow single cores to hit. Simply being able to hold the boost clocks of 4.5-4.6GHz longer, or hold 4.3GHz all-core, is not PBO working, just adding voltage. PBO should be removing that frequency ceiling, and I haven't seen anyone really talking about breaking it. Just reaching/maintaining PB2 boost frequencies better or longer.

I don't think they have. No reviewer I know of got that to happen. I certainly haven't seen it, even when I was able to see nearly a 4.6GHz boost clock. It was actually a little lower, but that's because the base clock isn't actually 100MHz. It's usually 99.8MHz or so.
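That BCLK shortfall is easy to quantify: the observed core clock is just multiplier × reference clock, so the same 46x multiplier lands short of the advertised 4.6GHz on a 99.8MHz base. A quick sanity check (pure arithmetic, nothing board-specific):

```python
def effective_clock_mhz(bclk_mhz, multiplier):
    """Observed core clock = reference (base) clock x multiplier."""
    return bclk_mhz * multiplier

# Advertised boost assumes an even 100MHz reference clock:
print(effective_clock_mhz(100.0, 46))           # -> 4600.0 MHz
# With the ~99.8MHz BCLK many boards actually run:
print(round(effective_clock_mhz(99.8, 46), 1))  # -> 4590.8 MHz, ~9MHz short
```

Which matches the "a little lower" peaks monitoring tools report even when boost is otherwise behaving.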

It's almost as if there is a flag that the cpu expects to be set to "void the warranty" so that it unlocks and allows you to hit that up to 200Mhz over boost clock rate that it's not seeing from the motherboard even if we agree to the warning and enable pbo. (again though, I'm assuming amd has some way of knowing when pbo has been activated so they can void their warranty like they have stated it would be - perhaps that's just a lot of legalese hotair)

Well, it's probably just a legal agreement. The way things work, you have to agree to voiding the warranty to enable PBO. If using PBO fries your CPU, it simply means AMD is off the hook for replacing it or paying damages if your CPU takes out the motherboard. They know you agreed to this because you can't enable PBO without consenting to losing your warranty. If you said nothing about PBO, I doubt they'd actually know whether you did or didn't enable it without the motherboard, which you'd never give them and AMD can't ask for.
 
The Windows scheduler knowing about the golden cores could still be a motherboard-specific issue if it's data stored in the ACPI tables rather than something read directly from a CPU register. It would not be the first time motherboards have failed to adhere to ACPI-table standards and required future BIOS updates to correct it, or the operating system has had to work around quirks specific to a given brand or even a given board.

Side question though: is this behavior witnessed in real-world applications, or across any test, or did it happen to be witnessed only via a specific test application? Wondering if maybe the test software is picking the fastest core via some method of its own and simply setting the affinity manually itself.

If the AGESA code is the same on the boards being looked at, and one mobo has a 3700X working this way but not the 3900X, while another board with the same AGESA has the 3900X working this way ...that should eliminate the idea that AGESA is to blame, and that limits what could be the source of the discrepancy ...the only other thing I can think of that the motherboard is responsible for setting up is the ACPI tables.
 
This behavior has been witnessed in several benchmarks such as POV-Ray, Cinebench R20 (single-thread), WinRAR's ST benchmark, and so on. I highly doubt each of these applications, which are all based on versions released before the Ryzen 3000 series CPUs, would have any way of selecting the golden core. And again, these same tests work on the same processor using the MSI board, but not the ASUS. I haven't looked at boost clocks while gaming specifically, but it seems that the proper boost behavior isn't occurring using the 3900X on the ASUS board, as it benchmarks lower than the MSI does in most of the tests. That's in line with the 3900X not boosting correctly in those cases.
 
On the boards where the behavior is not seen, or the CPUs where the behavior is not seen on a given motherboard, do you see a difference in the frequency the "golden" cores can reach?

Because if the golden cores simply don't boost as high in those situations as when the scheduling works, then maybe it's a general behavior the scheduler tests for on bootup to identify preferential cores based on performance, rather than reading their IDs from some table. And it fails on some boards because those cores aren't enough faster than the others to be weighted any differently.
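That hypothesis is at least testable from userspace: pin a busy loop to each core in turn and compare timings. A rough Linux-only sketch (using os.sched_setaffinity; on Windows you'd need SetThreadAffinityMask via ctypes instead), and the results will be noisy unless the machine is otherwise idle and runs are averaged:

```python
import os
import time

def rank_cores(iterations=2_000_000):
    """Crudely rank cores by timing a busy loop pinned to each one.

    Returns {core: seconds}; lower is faster. Boost behavior, SMT
    siblings, and background load all add noise, so treat the result
    as a hint, not a verdict.
    """
    original = os.sched_getaffinity(0)
    scores = {}
    try:
        for core in sorted(original):
            os.sched_setaffinity(0, {core})   # pin this process to one core
            start = time.perf_counter()
            x = 0
            for i in range(iterations):
                x += i
            scores[core] = time.perf_counter() - start
    finally:
        os.sched_setaffinity(0, original)     # restore the original mask
    return scores

# fastest = min(scores, key=scores.get) would name the apparent golden core
```

If the "golden" core ranked this way differs between the two boards, that would support the idea that the scheduler is weighting cores by measured speed rather than by an ID in a table.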
 
I do not have an answer for that. The golden cores should be the same on both boards. The 3900X I have can reach 4.6GHz on its golden core or cores.
 
I do not have an answer for that. The golden cores should be the same on both boards. The 3900X I have can reach 4.6GHz on its golden core or cores.

Well, in the case where the CPU was getting scheduled on the golden cores on one motherboard but not the other, the golden cores identified by the monitoring application shouldn't show a difference in which cores are golden. That's not what I was suggesting. But the motherboards may not be letting those cores boost to the same frequencies. If the motherboard that doesn't schedule correctly to the golden cores shows a peak frequency lower than the board that does, that would point to a reason for the scheduling behavior.

Like if on an MSI GODLIKE my golden cores hit 4.6GHz and the non-golden ones 4.25-4.3GHz, and those golden cores are always picked by Windows to run single tasks, but I move my CPU to my ASUS board and it only hits 4.45GHz on the golden cores and 4.3GHz on the rest, and it's random which core gets the single task.
 
I meant via PBO, not via manual overclocks. :) I thought I saw a post somewhere that did an LN2 test specifically for PBO that either showed it being unable to break 4.6GHz, or that it didn't start doing so until it was near 0C. They just set PBO on, let the temps fall, and checked where the frequencies landed, stepping the temps down until it blue-screened well into the negative C's.
 
Good point. It's Friday - internet jerk day. Or maybe they've extended it to every day.
 
Well, in the case where the CPU was getting scheduled on the golden cores on one motherboard but not the other, the golden cores identified by the monitoring application shouldn't show a difference in which cores are golden. That's not what I was suggesting. But the motherboards may not be letting those cores boost to the same frequencies. If the motherboard that doesn't schedule correctly to the golden cores shows a peak frequency lower than the board that does, that would point to a reason for the scheduling behavior.

Like if on an MSI GODLIKE my golden cores hit 4.6GHz and the non-golden ones 4.25-4.3GHz, and those golden cores are always picked by Windows to run single tasks, but I move my CPU to my ASUS board and it only hits 4.45GHz on the golden cores and 4.3GHz on the rest, and it's random which core gets the single task.

I wouldn't put it past that. The ASUS rep is busy saying boost is not their problem. You are right: my ASUS board never selects the golden core that boosts the highest according to HWiNFO64, and my BCLK is stuck at 99.8. Okay, maybe not never; I guess it's more random. I think ASUS needs to get their shit together before blaming AMD for their lack of support. Shamino is basically saying it's all an AGESA issue, not their problem. I don't think it's the same ASUS anymore when it comes to BIOS optimizations.
 
How high are these things boosting in win7?

I'm hoping 4.7-4.9, but Win 7 users, being modest, wouldn't brag about it on the internet. It would have to be on a B450/X470 board, because AMD and the motherboard suppliers only provide drivers for Microsoft's latest data-mining edition on the X570.
 
So I ran with PBO manual settings for PPT/TDC/EDC all set to what should be the AMD defaults listed above. This resulted in the best performance out of the bunch of settings I've tried, but also the hottest temps.
But by best, I mean 1% better. So not really different at all. It literally looks like the only setting that does anything is the voltage scaler, causing it to produce more heat (roughly 5C under load).

https://openbenchmarking.org/result/1908231-HV-1908220HV88

3900x0.1 pbo_defaultmanual


Frequencies never ventured close to the peaks sustained. They don't really come close to sustained max boost even when the temps are in the 60s and 70s, well below thermal thresholds. But performance is still way up there compared to some other machines I've seen these benchmarks done on. So not sad about it, really.
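For anyone wanting to replicate: the values widely reported as AMD's stock limits for 105W-TDP parts like the 3900X are below (an assumption worth verifying in Ryzen Master on your own board before relying on them), plus a trivial sketch for checking how far telemetry readings sit under them:

```python
# Widely reported AMD stock limits for 105W TDP Ryzen 3000 parts
# (assumption - confirm against Ryzen Master on your own setup):
STOCK_LIMITS = {"PPT_W": 142, "TDC_A": 95, "EDC_A": 140}

def headroom(telemetry, limits=STOCK_LIMITS):
    """Distance between current telemetry readings and each limit."""
    return {key: limits[key] - telemetry.get(key, 0) for key in limits}

# Example: readings from a monitoring tool during an all-core load
print(headroom({"PPT_W": 141, "TDC_A": 92, "EDC_A": 138}))
# -> {'PPT_W': 1, 'TDC_A': 3, 'EDC_A': 2}
```

If headroom is near zero on any one of the three under load, that limit is the active governor and raising the others via PBO won't change much, which would be consistent with the ~1% result above.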
 
I'm happy with my 3900x. No need to change it.

But... I may adventure into the 64-core Threadripper realm. Depends on price to performance, etc.
 
I just don't see how the 3950X is going to really be any better; perhaps slightly better single- or dual-core clocks, but just barely, and all-core clocks are likely to be lower. To me it seems clear that the current process is pushed to the max for current clocks and we'll likely need the 7nm+ node to see better ones.
 
The 3950X has a boost clock of 4.7GHz. In theory, it's the best binned part. Well, one CCD is, anyway.
 
The 3950X has a boost clock of 4.7GHz. In theory, it's the best binned part. Well, one CCD is, anyway.

One CCD guaranteed to hit 4.7GHz at some point while idling at desktop using specific measurement techniques. Mission accomplished!

That said, I'm fine with my 3900x now. If they really bin the 3950s, it's not the single core that's going to make a difference, it's more equal CCDs than the one good and one bad typical for the 3900x.
 
The 3950X has a boost clock of 4.7GHz. In theory, it's the best binned part. Well, one CCD is, anyway.

Yeah, I have a feeling the 3950X will have the same "problem" as the 3900X, where you get one golden chiplet, and one shitlet.

But perhaps a Threadripper equivalent will use two golden chiplets, albeit at a higher price point. Just speculating, of course.
 
One CCD guaranteed to hit 4.7GHz at some point while idling at desktop using specific measurement techniques. Mission accomplished!

That said, I'm fine with my 3900x now. If they really bin the 3950s, it's not the single core that's going to make a difference, it's more equal CCDs than the one good and one bad typical for the 3900x.

Living up that poof eh? :ROFLMAO::ROFLMAO:
 
On my ASUS board, the star cores in Ryzen Master are the slowest-clocked cores, and cores 0 and 1 are the ones that boost to 4.516GHz. Which have no star or dot.

The newest official BIOS for the C6H, 7403, took out PCIe 4.0, which so far has been flawless with the 5700XT. Corsair was about to sell me an SSD with PCIe 4.0. That won't be happening now.
 
When people say games benefit from high single-core performance, they don't mean on only one core. Modern games need high per-core performance on multiple cores, and as soon as you start using multiple cores, that single-core boost is out the window.

You don't seem to understand how Ryzen performs.
The fully open dies have more of a chance of spreading that heat around, thus able to run cooler.... hitting max freq on more cores.
 
You don't seem to understand how Ryzen performs.
The fully open dies have more of a chance of spreading that heat around, thus able to run cooler.... hitting max freq on more cores.

My 3900X hits max boost on one core. Light multi-core loading of any type results in 4.25 to 4.3GHz boost. Heavy multi-core is 4.05 to 4.1GHz boost. For gaming, Intel still has a substantial lead in pure single-core and lightly threaded multi-core gaming.
 
My 3900X hits max boost on one core. Light multi-core loading of any type results in 4.25 to 4.3GHz boost. Heavy multi-core is 4.05 to 4.1GHz boost. For gaming, Intel still has a substantial lead in pure single-core and lightly threaded multi-core gaming.

Substantial? I’ve read and watched multiple reviews and at least at my resolution of 1440p the difference was marginal not substantial.
 
You don't seem to understand how Ryzen performs.
The fully open dies have more of a chance of spreading that heat around, thus able to run cooler.... hitting max freq on more cores.


You are the one that does not understand. The 7nm chiplet dies are incredibly dense, and to compound that, they are very small. That means it's much harder to pull heat away from them.
 
Substantial? I’ve read and watched multiple reviews and at least at my resolution of 1440p the difference was marginal not substantial.

"Substantial" is a loaded word. At 1080p the gap is ~6%. At 1440p, it is much less, due to GPU loading.

I don't regard this as "substantial" but do regard it as worth mentioning/factoring into a purchase decision.
 
Meh, I don't own AMD or Intel stock, so I could not care less what someone else bought.

The important thing is, aspects associated with either chip that have been true for nearly the last 10 years are no longer true. And when Intel joins the game sometime next year, everyone is going to be adjusting their knowledge base for sure on what chip is best in any given situation.

It will be an exciting time in 2020 for the pc landscape. Well, as exciting as fighting over performance hardly anyone needs or can make use of can get.
 
I guess the 3950X time frame is still a big mystery; searching around, I can hardly find even rumor sites talking about it. Either they have it wrapped up tight or it's not close to launch time.
 
I'd bet they make an appearance in time for Christmas, along with the Navi 14-based boards.

Dropping big Navi and the 3950X in time for Christmas, with NVIDIA and Intel not dropping equivalent products, would be a massive bonus win for AMD for 2019.
 