ccityinstaller
Supreme [H]ardness
- Joined
- Feb 23, 2007
- Messages
- 4,236
Your explanation sounds like it's from AMD's PR machine. While it's technically true up to a point, it's actually not that hard to test your system to see if you can achieve the desired boost clocks. This isn't rocket surgery; all you really need to do is fire up the single-CPU benchmark in Cinebench, POV-Ray or any number of tests to get the system to boost its clocks to the advertised frequency, or near it anyway. Unfortunately, this doesn't work in all cases or in all configurations. Many people are having issues with this, and it's something that may vary from processor to processor and even motherboard to motherboard. In most cases, a UEFI BIOS update will resolve the issue. However, I've seen situations where it doesn't. And like I said, in cases where it doesn't work, the benchmarks back up what I'm seeing in Ryzen Master, which is as close to real-time monitoring as you will get for these CPUs.
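If you'd rather script the check than eyeball Ryzen Master, here's a rough sketch of the idea on Linux: spin one thread to trigger single-core boost, sample the sysfs frequency files while it runs, and compare the peak against the advertised boost. The sysfs path, sample rate and the 50 MHz tolerance are my assumptions, not anything from AMD.

```python
# Hypothetical boost-clock check (Linux only). Assumptions: the cpufreq
# sysfs interface is present, and "within 50 MHz of advertised" counts
# as hitting the boost clock.
import os
import time
import threading

def spin(stop):
    # Single-threaded busy loop so the boost algorithm favors one core
    x = 0
    while not stop.is_set():
        x += 1

def sample_freqs_mhz(duration_s=5.0, interval_s=0.1):
    """Sample every core's scaling_cur_freq (kHz) for duration_s seconds."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        for cpu in os.listdir("/sys/devices/system/cpu"):
            path = f"/sys/devices/system/cpu/{cpu}/cpufreq/scaling_cur_freq"
            if cpu.startswith("cpu") and os.path.exists(path):
                with open(path) as f:
                    samples.append(int(f.read()) // 1000)  # kHz -> MHz
        time.sleep(interval_s)
    return samples

def hit_boost(samples_mhz, advertised_mhz, tolerance_mhz=50):
    """True if the observed peak came within tolerance of the advertised boost."""
    return max(samples_mhz) >= advertised_mhz - tolerance_mhz

if __name__ == "__main__":
    stop = threading.Event()
    t = threading.Thread(target=spin, args=(stop,))
    t.start()
    samples = sample_freqs_mhz()
    stop.set()
    t.join()
    # e.g. 4600 for a 3900X's advertised 4.6 GHz boost
    print(max(samples), "MHz peak, hit boost:", hit_boost(samples, 4600))
```

Note that sysfs reports what the governor requested, which on Ryzen doesn't always match the effective clock; Ryzen Master or the APERF/MPERF counters are more trustworthy, this is just a first-pass sanity check.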
You bring up Precision Boost Overdrive, so let's talk about that. It doesn't really do anything on the Ryzen 3000 series. All it does is adjust the PPT, EDC and TDC values. You can even input your own values, which override the CPU's presets in favor of the motherboard's. Even with PBO+AutoOC, boost clock behavior, on the 3900X at least, doesn't really change. Gamers Nexus found that it essentially didn't work at all. I've only tested it on the 3900X, and so far PBO+AutoOC doesn't do anything. In fact, it often hurts performance. I'm not the only one who has experienced this, either. It doesn't matter how aggressive the algorithm is; PPT, EDC and TDC values aren't what's holding these chips back from overclocking. The same algorithm ran on 2nd-generation Threadripper CPUs, and there it actually worked well. On Ryzen 3000, not so much.
Just to note, the CB20 benchmark actually loads every core present from the time you click start until the time you see the first block filled in. While this is a short period, it still loads the CPU to 100% instantly, which causes it to drop to its lower all-core rated speed.
As far as adjusting the tables goes, there is some real benefit to tweaking them; the catch is that you need to adjust them down tighter, not higher (i.e., looser). The results will vary with your cooling, but there is a ton of great information in the Strictly Technical: Matisse thread over on overclock.net...
Basically, if you have good to great cooling: drop the PPT from 142W down to a window of 124-134W (reduces package power, giving better efficiency and thermals), the TDC from 95A down to 85A (allows higher levels of sustained boost), and the EDC down to 110-115A.
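To put those cuts in perspective, here's the arithmetic on the limits above. The stock numbers are AMD's defaults for the 105W-TDP Ryzen 3000 parts (PPT 142W, TDC 95A, EDC 140A); the tuned values are the top of the windows from the overclock.net thread, not measured results of mine.

```python
# Illustrative arithmetic only: percent reduction of each limit when
# tightening from the 105W-TDP stock values to the tuned window's upper end.
STOCK = {"PPT_W": 142, "TDC_A": 95, "EDC_A": 140}
TUNED = {"PPT_W": 134, "TDC_A": 85, "EDC_A": 115}

for key in STOCK:
    cut = 100 * (1 - TUNED[key] / STOCK[key])
    print(f"{key}: {STOCK[key]} -> {TUNED[key]} ({cut:.0f}% lower)")
```

The point being that even a roughly 6% PPT cut, combined with the current-limit reductions, is enough to move the chip into a better spot on its efficiency curve when cooling is good.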
In addition, reducing the boost-override MHz (say 25-100 MHz vs. the max of 200 MHz) while increasing the scalar to 10X seems to give a higher overall boost speed, according to a lot of folks.
Ultimately, the incredible heat density of the 7nm chiplet is really starting to make itself known. Intel appears to be dealing with this as well on its 10nm WL uarch: they went with a wider core, like AMD did with Zen 2, and they are seeing a clock-speed ceiling...
Now, I realize those are TDP-constrained SKUs, and we will need to see what a 65-95W desktop TDP allows, but the general trend is going to be lower clock speeds on these newer nodes.
In the future, the enthusiast will be better off pairing an upper-midrange board with great cooling (a high-end 360mm AIO up to a custom loop) than pairing a lower-end AIO/HSF with a "high end" board. This is just my guess, but I have a feeling we are going to see that trend continue if core counts increase again next year or the year after.