My basis for that was that no one I saw even reached boost clocks as an OC. If someone tested a 9900K and it did not reach 4.7GHz single-core, or someone published a test with [email protected] all-core OC, I would not call that overclocking either. I would need to see 4.8GHz.

I was referring to the statement you made where you said that no one tested overclocked performance, which is untrue. All the reviews I've seen, including the one I wrote, had overclocked values. I had a 4.3GHz all-core overclock in the data set. Sure, you could try manually clocking a single core up to the boost clocks, but there really isn't a need for this. The figures where we saw 4.4GHz, and later 4.6GHz boost clocks in the update, pretty well cover both ends of the spectrum. The data is there. When you look at the Intel data, it's presented the same way: stock speeds, which include boost clocks, and all-core overclocks. That's exactly what I (and others) did.
Perhaps holding AMD to Intel's turbo standards is wrong, but I also feel it is wrong to call anything between base clock and boost clock an OC.
This really is just a myth perpetuated by ultra benchmarks and 60Hz screens. If you choose ultra and then play, you will hard-bottleneck the GPU and reach the conclusion that there's virtually no difference between CPUs. Some have even included an old Sandy Bridge 2600K in such benchmarks and gone "see, almost no difference!", but again, that is because 4K ultra will hard-bottleneck the GPU with overkill settings. It is hard to quantify, so I'll pull numbers out of my ass and say ultra generally looks 10% better than high, but 120 vs 60 fps is 100% more motion resolution.

I've used a wide range of processors, and while I agree that more is better, you're still primarily GPU-bound. Tests have been done in the past showing virtually no difference between many CPUs using a high-end graphics card at higher resolutions. This is basically well known and generally accepted.
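(A quick toy model of that point, since it comes up a lot: the frame rate you actually see is roughly capped by whichever of the CPU or GPU is slower for the current settings. All the numbers in this Python sketch are made up for illustration, not measurements.)

```python
# Toy model: observed fps is roughly min(CPU frame-rate cap, GPU frame-rate cap).
# All numbers are hypothetical, just to show why GPU-bound "ultra" runs hide CPU gaps.

def effective_fps(cpu_cap, gpu_cap):
    """The slower component sets the frame rate you actually get."""
    return min(cpu_cap, gpu_cap)

cpu_caps = {"older CPU": 70, "faster CPU": 140}       # hypothetical CPU-limited fps
gpu_caps = {"4K ultra": 55, "tweaked settings": 130}  # hypothetical GPU-limited fps

for preset, gpu_cap in gpu_caps.items():
    for cpu, cpu_cap in cpu_caps.items():
        print(f"{preset:>16} | {cpu}: {effective_fps(cpu_cap, gpu_cap)} fps")
# "4K ultra" prints 55 fps for both CPUs (GPU-bound), while the tweaked run
# shows the 70 vs 130 fps gap the CPU actually makes.
```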
Trying to reach ~120fps at 4K, both the GPU and the CPU will bottleneck at various settings and in various situations, and CPU-hungry and GPU-hungry settings will need to be tweaked separately.
It is the same with slightly weaker hardware at 1440p 144fps+, or at 1080p 240fps.
It has been the same since I started running 5760x1080@120Hz sometime around 2011-12.
If you look at my graphs above, you can see that the CPU briefly spiked and GPU performance dropped, because the CPU bottlenecked. And this is with the balance tweaked as perfectly as I can manage; I could increase all the CPU-hungry settings and have the CPU bottleneck more. If I bought a more powerful CPU, I could probably increase some settings, but maybe not, as things like shadows don't always offer the granularity needed (low/medium/high shadows, for example).
For people tweaking for higher fps, a 5% increase in CPU power can be a 5% increase in fps, or high shadows instead of medium. Or not. Now, 5% I agree is in the range where it is acceptable: gain 4 more cores for a tiny fps loss, I'll take it. But if it approaches 10-20%, not for me. Add optional overclocking to the CPU you compare against, and matters get worse. Add the possibly flaky 99th percentile min fps (which, IF true in the LTT test, probably gives clearly visible stutters and needs to be cleared up).
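(For reference, here is a minimal Python sketch of how a 99th-percentile minimum fps figure is typically derived from a frametime log. The frametime values are invented, and actual capture tools and reviewers may compute it slightly differently.)

```python
# Minimal sketch: derive average fps and a 99th-percentile "min fps" from a
# frametime log (milliseconds per frame). Sample data is invented; methods vary
# between reviewers and capture tools.

def percentile_low_fps(frametimes_ms, pct=99):
    """fps at the pct-th percentile frametime (i.e. among the slowest frames)."""
    ordered = sorted(frametimes_ms)                        # ascending: fast frames first
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return 1000.0 / ordered[idx]                           # ms per frame -> fps

frametimes = [8.3] * 990 + [25.0] * 10    # mostly ~120 fps, with a few heavy stutters
print(f"average fps:             {1000 * len(frametimes) / sum(frametimes):.1f}")
print(f"99th percentile min fps: {percentile_low_fps(frametimes):.1f}")
# A high average with a much lower 99th-percentile figure is exactly the
# "clearly visible stutter" pattern described above.
```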
Personally, I think I will just wait, see if it turns out to be nothing, and re-evaluate after the release problems are cleared up. (There are so many release problems when looking around; in addition, Destiny 2 can't even be started on the 3900, so I'd need to wait for that anyway, etc.)