Well, you and others objected to my pre-launch claim that, due to higher performance per core, the 10C Skylake "would be faster" than the 16C TR on workloads with 24 threads. I am talking about the 1920X for two reasons: (i) we don't have workloads with exactly 24 threads running on the 1950X, and (ii) the equation I used for my claim applies to both the 1950X and the 1920X, because it is a basic equation of computing.
Well then your equation is wrong, because it's giving too much value to SMT. I'm pretty sure that a 16c/16t TR will be faster than the 1920X in all workloads, unless AMD somehow (magically) increased SMT performance so much that it's actually giving over a 35% perf increase per core.
That is also the reason why 6c/6t Coffee Lake will be faster than 4c/8t Kaby. I'm gonna go as far as to say that a 6c/6t CFL at its stock 4.3 GHz all-core turbo is going to be faster than a 5 GHz 4c/8t Kaby when utilizing all available threads.
But why don't you post your equation so we can all see it and maybe understand how you arrive at these conclusions.
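For what it's worth, here's a back-of-envelope sketch of that kind of throughput comparison. This is my own simple model (cores × clock × SMT yield, assuming equal IPC across the parts), not necessarily the equation being referenced:

```python
# Crude throughput model (my assumption: perf ~ cores * clock * (1 + SMT yield),
# with per-core IPC taken as equal across the parts being compared).

def throughput(cores, clock_ghz, smt_yield=0.0):
    # smt_yield = fractional gain from the second thread per core (0.0 = SMT off)
    return cores * clock_ghz * (1.0 + smt_yield)

# SMT yield a 12c/24t part needs to match a 16c/16t part at equal clocks:
needed_tr = 16 / 12 - 1  # ~0.333, i.e. a bit over 33% gain per core

# SMT yield a 4c/8t Kaby @ 5.0 GHz needs just to tie a 6c/6t CFL @ 4.3 GHz:
needed_kbl = throughput(6, 4.3) / throughput(4, 5.0) - 1  # ~0.29
```

Under those (admittedly simplistic) assumptions, the 16c/16t claim holds unless SMT yields more than about 33% per core, which is roughly the threshold being argued about.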
Threadripper with 3200 MHz RAM
When the 10C SKL beats the 12C TR, it usually wins by a larger margin than when it loses. I have not computed the average, but I am sure it will be close to the average reported in the HFR review, even though the above benches are all encoding/rendering, whereas the HFR average covers a broader range of applications.
That is with unknown timings... and I know for a fact that timings (subtimings in particular) play a HUGE factor when mitigating Ryzen's latency penalties. They play such a big role that a 3 GHz Ryzen with fast memory and optimized subtimings is faster than a 4 GHz Ryzen with "standard" memory profiles. And the same applies to TR.
A 3 GHz Ryzen with 3466C14 + proper subtimings beats out a 4 GHz Ryzen on a 3200C14 "standard" profile.
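To put some numbers on those two profiles: absolute CAS latency in nanoseconds is CL cycles at half the transfer rate, i.e. 2000·CL/MT/s. A quick sketch (this only covers the primary CAS timing; subtimings, which the claim above says matter most, aren't captured here):

```python
# First-word CAS latency in nanoseconds for DDR memory:
# CL clock cycles at a memory clock of (transfer rate / 2) MHz.
def cas_ns(cl, mt_s):
    return cl * 2000.0 / mt_s

fast = cas_ns(14, 3466)  # ~8.08 ns for the 3466C14 kit
std = cas_ns(14, 3200)   # 8.75 ns for the 3200C14 profile
```

So even on primary CAS alone the 3466C14 kit is ~8% quicker, before any subtiming tuning.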
But I'll concede that it doesn't matter as much for workloads that don't have many inter-thread dependencies. Then again, you've been spouting for a while now that latency will be a problem for EPYC and TR... and from the tests I've read, they seem to be doing just fine with server/workstation loads.