The Future of Overclocking....

With the introduction of Ryzen 2, traditional overclocking seems to be going away - in that most of the chips we're getting are already pretty optimized out of the gate.

I believe the future of overclocking will be enthusiasts focusing mostly on cooling capabilities, to allow our CPUs to hit their maximum boost for as long as possible. I understand that cooling has always been one of the main factors in overclocking, but I believe we'll see a huge increase in cooling products in the future aimed at keeping not just the CPU and GPU cool, but the temps inside the case as well.

What do you guys think?
 
I'm thinking the first step needs to come from the chip manufacturers.

We need a better way to move heat from the ever-denser CPU cores to the heatsink, because just laying copper on top is no longer efficient enough at the transistor densities we're seeing.

That or they need to move away from pure silicon and into alternative materials, like gallium + silicon compounds, that generate less heat to begin with.

Unless something is done at that level, I'm not sure there's much we can do on the consumer end to improve cooling.
 
It's only AMD that's trying to kill it.

Intel's still thriving. Nvidia's still very overclockable too, despite the power limits lately.
 
I'm thinking the first step needs to come from the chip manufacturers.

We need a better way to move heat from the ever-denser CPU cores to the heatsink, because just laying copper on top is no longer efficient enough at the transistor densities we're seeing.

There is already a solution for this, but it's not cost-effective for consumer electronics... yet.
 
It's only AMD that's trying to kill it.

Intel's still thriving. Nvidia's still very overclockable too, despite the power limits lately.

I don't think AMD is actively trying to "kill" it. It's just less relevant with their boosting algorithms.
 
I'm thinking the first step needs to come from the chip manufacturers.

We need a better way to move heat from the ever-denser CPU cores to the heatsink, because just laying copper on top is no longer efficient enough at the transistor densities we're seeing.

That or they need to move away from pure silicon and into alternative materials, like gallium + silicon compounds, that generate less heat to begin with.

Unless something is done at that level, I'm not sure there's much we can do on the consumer end to improve cooling.

I am sure they are developing new alloys and tech to produce chips; they just haven't made it cost-effective, and they possibly have yield issues with the current process. You have to realize all the chip makers are working on shit that we might not even see or know about for another 10 years. Their roadmaps don't show all their projects.
 
They are, for sure. The processors that power 5G aren't pure silicon (at least the next-gen versions you'll see in phones).

But I think we'll need to see that kind of thing in CPUs before we see any real ability to overclock the top-end products built on 7nm and, in the future, 5nm.
 
I think it's a fab issue preventing overclocking on Ryzen 3000. TSMC's 7nm process isn't as optimized as Intel's 14nm.
 
I lost some boost on my 2600 with the last non-Zen 2 BIOS update.

I'll have to dig in and see what changed.

Static OCing might turn into just getting back whatever the firmware took away.
 
I don't think AMD is actively trying to "kill" it. It's just less relevant with their boosting algorithms.

From my perspective, I would say they are. Based on what I've read lately, it seems like in AMD's eyes the perfect chip design just runs at its absolute silicon max right out of the gate. It's already using the max safe voltage, and only better cooling would make a difference.

I mean, that's pretty much where Ryzen is right now, tbh. It has really put me off AMD, as for me overclocking is a key part of building any new PC.
 
From my perspective, I would say they are. Based on what I've read lately, it seems like in AMD's eyes the perfect chip design just runs at its absolute silicon max right out of the gate. It's already using the max safe voltage, and only better cooling would make a difference.

I mean, that's pretty much where Ryzen is right now, tbh. It has really put me off AMD, as for me overclocking is a key part of building any new PC.

I don't see a problem with this. It takes the guesswork and rigorous testing out of overclocking. I get that you're not getting "free" performance anymore like you could with Intel, where the best chips can clock 500+ MHz above the stock boost. Of course, if you're overclocking just for the fun of it, then you're losing a hobby.

In the same way, cooling isn't as complicated anymore. It used to be homemade waterblocks and car radiators if you wanted an overclocked and silent rig; now you can buy any hefty air cooler and it'll do the job. GPUs still benefit a lot more from slapping at least an AIO on them.
 
Both manufacturers are using overclocking to keep selling ever-faster CPUs... just like memory manufacturers have been doing for decades. You're no longer getting that free lunch playing the silicon lottery blind and hoping to get lucky. Overclocking isn't dying or going away; it's just being monetized by the companies that had been giving away what amounts to free money.

It's a win-win for companies who can use language such as "up to" and not be liable for yields that don't always measure up, or for operating conditions that greatly reduce the lifespan of the hardware when it's run at the voltages/thermal envelopes those overclocks bring.

But seriously, overclocking as a hobby has been dead or living on borrowed time since CPUs started hitting multiple gigahertz and the overclocks got measured in the low hundreds of MHz at best. Massively inefficient bumps in speed for what amounts to single-digit percentage increases in performance, which look more like dick-measuring contests than the practical ingenuity the practice started out on. Plus, if you're used to AMD's brand of CPUs... then you're not paying for overpriced hardware in the first place.

I'm not sure how much sympathy the hardcore overclocker crowd will get from the enthusiast community at large. There's still plenty of room to play the benchmark peacock game with sustained peak frequencies, undervolted setups for low power use, and, most importantly, temperatures at a given performance level.

So plenty of aspects will survive, and there are plenty of ways to show your hardware is the best for those who need/want/are compelled to.
 
AMD's latest Ryzen chips are already pre-overclocked as far as I'm concerned.

You pay the same for a given model, but one will boost higher than another due to better silicon.

That's right on the verge of false advertising. I would not be happy to pay the same as my peers but have a slower CPU because of bad luck. That's very much the realm of overclocking, but AMD have moved their mainstream right into that domain, as we can all see.
 
AMD's latest Ryzen chips are already pre-overclocked as far as I'm concerned.

You pay the same for a given model, but one will boost higher than another due to better silicon.

That's right on the verge of false advertising. I would not be happy to pay the same as my peers but have a slower CPU because of bad luck. That's very much the realm of overclocking, but AMD have moved their mainstream right into that domain, as we can all see.

I don't know why, but you seem a little butthurt because AMD already did the work of pre-binning CPUs. Intel isn't any more OC-friendly when they charge you more for chipsets just for the privilege of attempting an OC. Plus, their Turbo Boost is exactly the same concept. Do you think they aren't pre-binning CPUs too?

I don't understand the false advertising argument either. AMD sells their processors multiplier-unlocked. You are free to attempt anything your heart desires based on your cooling, etc.
 
I don't know why, but you seem a little butthurt because AMD already did the work of pre-binning CPUs. Intel isn't any more OC-friendly when they charge you more for chipsets just for the privilege of attempting an OC. Plus, their Turbo Boost is exactly the same concept. Do you think they aren't pre-binning CPUs too?

I don't understand the false advertising argument either. AMD sells their processors multiplier-unlocked. You are free to attempt anything your heart desires based on your cooling, etc.

They've taken the fun out of it, is my point. I'm coming from an overclocking hobbyist perspective.

Regarding false advertising, please correct me as I may be wrong, but as I understand it a good 3700X will boost higher than a bad one in the same config?
 
They've taken the fun out of it, is my point. I'm coming from an overclocking hobbyist perspective.

Regarding false advertising, please correct me as I may be wrong, but as I understand it a good 3700X will boost higher than a bad one in the same config?

I guess I see it from that perspective. Honestly, the most fun I've had overclocking recently was seeing how high my X5660 would go in an old X58 Sabertooth motherboard.

As for the 3700X (or any other Ryzen 2 part), my take is that all chips will hit a certain frequency (base) and then the boost algorithm takes over. How long different CPUs boost at or near the max boost, and how many cores will do it, is luck of the draw. I don't see how that is misleading unless the CPU is physically unable to ever hit the max boost. I don't think there is anything in the small print that promises more than one core will ever hit the max boost.

Looking at siliconlottery.com, it looks like AMD's process tech is slightly more consistent, as the difference in all-core OC between the best and worst samples is 200 MHz. On Intel, it's 300 MHz. Obviously, Intel parts hit higher all-core frequencies (at significant temps), but their Turbo Boost marketing is very similar. You're only guaranteed a single-core boost to 5 GHz with the 9900K, for example, and no mention is made of how long that boost will last, etc. Cooling and motherboard settings (overriding the power limits for boost) obviously come into play.

Generally speaking though, there are a lot fewer variables, and the gains to be had aren't as great as in days gone by, on both AMD and Intel. I think that's kind of inevitable as the limits of silicon come into focus. This is coming from someone who took a 1.6 GHz Core 2 Celeron (E1200, I believe) to 3.2 GHz, had a Barton-core mobile 2500+ at 3200+ speeds, and took a Q6600 right to the 200 MHz FSB for something like an 800 MHz boost. I like OCing, but I don't spend as much time doing it as I used to.
 
I managed to get my 3600 above 2700X and 3600X levels (according to CPU-Z at least). I was able to tune the volts and tighten the RAM timings a wee bit, but with the way the BIOS/AGESA currently are, any further push beyond what the CPU does automatically was anything but stable more often than not, giving me various issues like apparent lock-ups and other instabilities. If I left everything else alone after manually locking in the RAM settings (the same ones, just making sure they didn't spike above where they should be), it gave me the performance I expect.

The death of OC was the moment they "allowed" it in the first place... sadly, just like so many other things, there ain't no free lunch.

Early on it was absolutely possible to take the base chip and run extra volts through it, but now the makers (such as AMD) have had no choice but to lock things down, and the bleeding edge has to be far, far better controlled or nasty shit will likely ensue. If the issue way back with chips like the AMD Duron (on a much, much larger process) was how hot and how high the voltages got in such a tiny space, I can only imagine how tightly controlled they must be now, or the in-field failure rates would be disturbingly high. The voltages are effectively what they always were, but in a much, much smaller spot for this level of speed/heat.

I suppose AMD could do a TWKR version of Ryzen or something, but I'm thinking that if they could do something like that, they would have already. Instead Intel does their 5 GHz "all core" that requires crazy cooling to achieve, i.e. well beyond normal consumer usage/ability/budget.

Anywho, the end was there before the race even began, much like carburetors that promised as much performance and a vast reduction in fuel use: physics is one harsh bitch.

I daresay, however, that all the makers (CPU, GPU, etc.) have their work cut out for them big time (if it ever was easy) to stay in the game over the next few years. That much is sure, IMO.
 
They've taken the fun out of it, is my point. I'm coming from an overclocking hobbyist perspective.

Regarding false advertising, please correct me as I may be wrong, but as I understand it a good 3700X will boost higher than a bad one in the same config?
I can tell you, coming from an Intel 6800K to a 3700X, that I haven't had so much fun overclocking in many, many years. OCing the 6800K was a frustrating mess by comparison. It wasn't as simple as adding vcore to make a slightly unstable clock stable; with half a dozen supplemental settings and voltages it was a nightmare.

With the Ryzen Master utility it's almost too easy.
 
From my perspective, I would say they are. Based on what I've read lately, it seems like in AMD's eyes the perfect chip design just runs at its absolute silicon max right out of the gate. It's already using the max safe voltage, and only better cooling would make a difference.

I mean, that's pretty much where Ryzen is right now, tbh. It has really put me off AMD, as for me overclocking is a key part of building any new PC.

Memory overclocking sees huge gains on the platform, though. It's just shifted the focus of the OC/tinker time. Average and 1% low FPS on my setup in Shadow of the Tomb Raider improved ~15% just from the difference between the DDR4-3200 XMP profile and DDR4-3800 with tuned timings and subtimings.
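As a rough back-of-the-envelope aside (my own illustration, not something from the post above): part of that DDR4-3200 vs DDR4-3800 jump is just raw bandwidth, on top of the tuned timings and the fact that Zen 2 runs its fabric clock 1:1 with memory up to around DDR4-3800. A quick Python sketch of the bandwidth arithmetic only:

```python
# Peak theoretical bandwidth for dual-channel DDR4 at two data rates.
# Assumes 64-bit (8-byte) channels; real-world gains also depend on
# timings, subtimings and the Infinity Fabric clock, which aren't modeled here.
def peak_bandwidth_gbs(data_rate_mts, channels=2, bytes_per_transfer=8):
    return data_rate_mts * bytes_per_transfer * channels / 1000  # GB/s

for rate in (3200, 3800):
    print(f"DDR4-{rate}: {peak_bandwidth_gbs(rate):.1f} GB/s peak, dual channel")

print(f"Raw bandwidth gain: {(3800 / 3200 - 1) * 100:.1f}%")  # ~18.8%
```

So the ~15% FPS improvement reported above sits in the same ballpark as the raw bandwidth increase, with the tightened timings presumably accounting for the rest.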
 
Memory overclocking sees huge gains on the platform, though. It's just shifted the focus of the OC/tinker time. Average and 1% low FPS on my setup in Shadow of the Tomb Raider improved ~15% just from the difference between the DDR4-3200 XMP profile and DDR4-3800 with tuned timings and subtimings.

True. But all that memory tweaking is just getting back the performance cost of a chiplet design.

I guess I'm just a bit miffed: I expected to upgrade to Ryzen this summer and then found the investment, in particular the X570 motherboards, way too high, and it doesn't get you anywhere since overclocking the core is essentially dead. Might as well get the cheapest SKU from your favourite brand, but even that ain't exactly cheap 'cos of PCIe 4.0.

So there's not even the 'buy crazy cheap and overclock to madness' angle either. :bored:

/disillusioned overclocker
 
Overclocking started dying when CPUs got segmented by core count. We can't overclock our way to extra cores.

When overclocking could take a budget chip and make it as good as an expensive top-performance chip, it was nice.
 
Overclocking started dying when CPUs got segmented by core count. We can't overclock our way to extra cores.

When overclocking could take a budget chip and make it as good as an expensive top-performance chip, it was nice.

Funny, I was just thinking the same thing earlier today. Back in those days, overclocking a single-core CPU literally bought you performance.
 
Funny, I was just thinking the same thing earlier today. Back in those days, overclocking a single-core CPU literally bought you performance.
What? Overclocking doesn't bring you performance now? Of course the days of 60-80% gains are gone, but I take anything I can get.
 
I did a quick test in Cinebench R20 to see how much performance I "lose" by overclocking. It might be different for the 3900X, but for the 3700X it is still very much worth it to overclock.

Single core:
default settings: 495 (100.0%)
PBO: 494 (99.8%)
all-core OC: 502 (101.4%)

Multi core:
default settings: 4611 (100.0%)
PBO: 4626 (100.3%)
all-core OC: 5151 (111.7%)

So apparently I don't lose anything in single core, and, as I predicted, PBO does literally nothing on the 3700X. It actually clocks the chip slower in the single-core test: at default it ran at 4300 MHz, while with PBO the max boost clock was 4293 MHz. The funny bit is that my all-core OC uses less voltage than what PBO rams through the chip when it's left to its own devices.

Yes, 11% is less than what I was able to OC the 6800K by on all cores, but I buy CPUs to get better performance, not to have bigger percentage overclocks.

And 2993 -> 5151 is a huge 72% jump. I didn't expect that much out of this upgrade.
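For anyone who wants to sanity-check the percentages, here's the arithmetic spelled out (a trivial sketch using the scores quoted above; nothing more than score divided by the stock baseline):

```python
# Cinebench R20 scores from the post above; percentages are score / default.
single = {"default": 495, "PBO": 494, "all-core OC": 502}
multi = {"default": 4611, "PBO": 4626, "all-core OC": 5151}

for label, scores in (("Single core", single), ("Multi core", multi)):
    base = scores["default"]
    for name, score in scores.items():
        print(f"{label}: {name}: {score} ({score / base * 100:.1f}%)")

# The upgrade jump mentioned at the end (2993 presumably being the old setup's score).
print(f"2993 -> 5151 is a {(5151 / 2993 - 1) * 100:.0f}% jump")  # ~72%
```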
 
Just curious, how long does the Cinebench benchmark actually last? Does it do back-to-back re-tests a few times and average them, or just a single run? I tend to do all my benchmarks in Linux, haven't booted into Windows with the new CPU/mobo yet, and have never used Cinebench before.

A 10% boost over even regular Precision Boost all-core is intriguing. I haven't seen Precision Boost be that bad at auto-overclocking with settings that are fully sustainable over long periods (like with a full load lasting longer than 5 minutes straight, at least).
 
What? Overclocking doesn't bring you performance now? Of course the days of 60-80% gains are gone, but I take anything I can get.

What I'm getting at is that the relative ceiling was higher, and, critically, the lowest-end SKU had the same number of cores as the highest (1!).

So you could buy the cheapest CPU of a series and overclock it higher than the highest one and have essentially 'bought' a much more expensive chip by just putting effort into overclocking.

You can't do that to a 9900K. ;)
 
I did a quick test in Cinebench R20 to see how much performance I "lose" by overclocking. It might be different for the 3900X, but for the 3700X it is still very much worth it to overclock.

Single core:
default settings: 495 (100.0%)
PBO: 494 (99.8%)
all-core OC: 502 (101.4%)

Multi core:
default settings: 4611 (100.0%)
PBO: 4626 (100.3%)
all-core OC: 5151 (111.7%)

So apparently I don't lose anything in single core, and, as I predicted, PBO does literally nothing on the 3700X. It actually clocks the chip slower in the single-core test: at default it ran at 4300 MHz, while with PBO the max boost clock was 4293 MHz. The funny bit is that my all-core OC uses less voltage than what PBO rams through the chip when it's left to its own devices.

Yes, 11% is less than what I was able to OC the 6800K by on all cores, but I buy CPUs to get better performance, not to have bigger percentage overclocks.

And 2993 -> 5151 is a huge 72% jump. I didn't expect that much out of this upgrade.


Out of curiosity, have you run benchmarks using PB2 as well? PBO has been reported to actually work worse than plain PB2 in many instances.
 
Out of curiosity, have you run benchmarks using PB2 as well? PBO has been reported to actually work worse than plain PB2 in many instances.
I believe Precision Boost is active at default settings. But the CPU is hitting the 88 W power limit through the entire test, so it won't boost any higher. And with PBO the power limit is still enforced; it only lifts the current limits, so that's probably why it does nothing more.
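For context on that 88 W figure (my own aside, not something spelled out in the post): AMD's stock socket power limit (PPT) on Ryzen 3000 is commonly described as roughly 1.35x the rated TDP, which is where 88 W comes from on a 65 W part like the 3700X. A quick sketch of that relationship:

```python
# Stock PPT (package power tracking) limit as commonly cited for Ryzen 3000:
# roughly 1.35x the rated TDP. Values are approximate.
def stock_ppt_watts(tdp_watts, multiplier=1.35):
    return tdp_watts * multiplier

print(f"65 W TDP  -> ~{stock_ppt_watts(65):.0f} W PPT")   # ~88 W (e.g. 3700X)
print(f"105 W TDP -> ~{stock_ppt_watts(105):.0f} W PPT")  # ~142 W (e.g. 3900X)
```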
 
No idea how I missed the default settings scores in your post. Please ignore my stupidity, lol.
 