Philosophy of Overclocking and Stability


Jan 5, 2005
I was just wondering how people who OC their rigs deal with system stability. Every time I OC and something crashes, I just assume it's the OC and roll it back. How can I relatively easily OC my CPU without constantly worrying that it's compromising system integrity?

I want to get a 5.0 GHz Kaby Lake from Silicon Lottery, but how do I make sure it's the foundation for a reliable workstation?

I just want a stable system with as fast single-threaded performance as possible. Help?
My favorite stability tests vary depending on the purpose of the build. For my own gaming rigs I usually do a combination of Prime95 Blend plus a GPU benchmark or two, sometimes simultaneously if I know that particular build is thermally on the edge. Scientific computing builds have to pass a few hours of Small FFT, no ifs, ands, or buts. Small FFT is completely unrealistic: outside of cryptocurrency mining and maybe password cracking (both long since relegated to the domain of GPUs), there are no real-world tasks that fully utilize AVX2 while operating entirely inside L1/L2 cache, but the higher instruction throughput and hotter temps make it a good accelerated lifetime test of sorts.
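The reason torture tests like Prime95 catch instability that ordinary use misses is the known-answer principle: run a deterministic computation whose correct result is known, and flag any deviation as a hardware error rather than waiting for a crash. Here's a toy Python sketch of that idea (it is not Prime95, which uses hand-tuned AVX FFT kernels and runs one worker per logical core; the loop body here is just an arbitrary deterministic float workload I made up for illustration):

```python
import math

def workload(seed: float, iters: int = 100_000) -> float:
    """Deterministic floating-point loop standing in for one torture-test
    work unit. On correctly functioning hardware the result is bit-exact
    and reproducible every time."""
    acc = seed
    for _ in range(iters):
        acc = acc * 1.000001 + math.sin(acc)
        acc = math.fmod(acc, 4.0)  # keep the value bounded
    return acc

def torture(rounds: int = 10) -> int:
    """Run each work unit twice and count mismatches. A stable system
    returns 0; under a marginal overclock, a nonzero count means the
    CPU silently produced a wrong result, not just a crash."""
    errors = 0
    for seed in range(rounds):
        if workload(float(seed)) != workload(float(seed)):
            errors += 1
    return errors
```

The key design point is that an overclock can be "stable" in the crash sense while still corrupting results; self-checking workloads detect the latter, which is exactly why Small FFT matters for scientific builds.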

I've found that concurrently stressing other parts of the system (drives, network, USB) can sometimes help hunt down frustrating instabilities that would otherwise only surface later. Most notably, I've had several systems where heavy load on the PCIe controller (4+ GPUs) triggered extremely high SATA latencies, leading to poor system responsiveness despite solid CPU/GPU benchmark scores.
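The "high SATA latency under load" failure mode above is invisible to throughput benchmarks, but easy to expose if you time individual synchronous writes while something else hammers the system. A minimal sketch of that probe, assuming a POSIX-ish system with `os.fsync` (real setups would use a purpose-built tool like fio and native stress workers pinned to every core, not a Python busy-loop):

```python
import os
import tempfile
import threading
import time

def cpu_load(stop: threading.Event) -> None:
    """Busy-loop to keep the interpreter occupied while I/O is timed.
    Stands in for the concurrent CPU/PCIe load described above."""
    x = 1.0001
    while not stop.is_set():
        x = (x * 1.0001) % 10.0

def io_latency_probe(writes: int = 50, size: int = 4096) -> list:
    """Time synchronous 4 KiB writes under concurrent load and return
    per-write latencies in milliseconds. Latency *outliers* here, not
    the average, are what reveal storage stalls that CPU/GPU benchmark
    scores never show."""
    stop = threading.Event()
    worker = threading.Thread(target=cpu_load, args=(stop,))
    worker.start()
    latencies = []
    try:
        with tempfile.NamedTemporaryFile() as f:
            block = os.urandom(size)
            for _ in range(writes):
                start = time.perf_counter()
                f.write(block)
                f.flush()
                os.fsync(f.fileno())  # force the write to the device
                latencies.append((time.perf_counter() - start) * 1000.0)
    finally:
        stop.set()
        worker.join()
    return latencies
```

Plotting or eyeballing the worst few values from a run like this, with and without the background load, is usually enough to tell whether the platform (rather than the drive) is the bottleneck.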