Intel Core i9-9980XE vs AMD Ryzen Threadripper @ [H]

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
55,510
Intel Core i9-9980XE vs AMD Ryzen Threadripper

Today Intel is kicking off its newest High End Desktop processor, the Intel Core i9-9980XE Extreme Edition. This 14nm Skylake-X CPU boasts 18 Cores and 36 Threads and has an expected retail price of $1,979. We compare the i9-9980XE to AMD's entire line of Threadripper CPUs to see where the 9980XE sits in the HEDT stack.

If you like our content, please support HardOCP on Patreon.
 
Just buy it!! :D. Seriously though, looking forward to seeing how this thing performs.
 
Intel must have a nice stash of crack if they think this processor makes any sense seeing the light of day at that price.
They'll probably still sell at that price, for better or worse.

Are Intel's yields any better on these chips than the desktop 14nm parts?
 
Well Intel sure proved me wrong - there IS an upgrade path for x299.

Makes me wish I had jumped on that platform back when it launched. /s
 
It's based on Skylake, so it's still affected by Spectre and Meltdown, right? Just wondering...
 
Any thoughts on how HEVC/H.265/x265 would affect the multimedia encoding scores? It is my understanding that it is heavily optimized for AVX.
 
Kyle, were the placements in the single-threaded Hyper Pi and wPrime results reversed for Intel? Because it's showing the OC'ed chip as slower than the stock chip.
 
Benchmarks are great and all, but I would like to hear Kyle's impressions and opinion of daily desktop usage and gaming. Does your system just run so ungodly fast you could never use a laptop or older desktop again? Or is the difference MEH???

NOT complaining just asking,
Best, Earl.....
 
Kyle, were the placements in the single-threaded Hyper Pi and wPrime results reversed for Intel? Because it's showing the OC'ed chip as slower than the stock chip.

That's because overclocked it sat at 4.1GHz all-core all the time, whereas at stock the boost control still increases clock rates above 4.1GHz during single/double/quad-core usage.

Check the core clock scaling graph Kyle put on the second-to-last page; it explains it better.
 
Any thoughts on how HEVC/H.265/x265 would affect the multimedia encoding scores? It is my understanding that it is heavily optimized for AVX.
H.265 encode of the same workload in HandBrake - 277 seconds. 11.245% increase in encode time.
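To put that percentage in context, here is a quick back-of-the-envelope sketch, assuming the 11.245% figure is measured relative to the H.264 encode time of the same workload (the review's baseline; the variable names are my own):

```python
# Back out the implied H.264 baseline from the numbers quoted above.
# Assumption: the 11.245% increase is relative to the H.264 encode
# time of the same HandBrake workload.
h265_seconds = 277.0
increase = 0.11245  # 11.245% longer than the H.264 run

h264_seconds = h265_seconds / (1.0 + increase)
print(round(h264_seconds))  # ~249 seconds for the H.264 encode
```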
 
It's based on Skylake, so it's still affected by Spectre and Meltdown, right? Just wondering...

Yes, but all current fixes for Spectre/Meltdown/L1TF should be in that BIOS, so any impact should already be reflected in Kyle's benchmark results.

Kyle, why did CPU-Z say it was a 7980XE, but then 9980XE right below that?
 
Yes, but all current fixes for Spectre/Meltdown/L1TF should be in that BIOS, so any impact should already be reflected in Kyle's benchmark results.

Kyle, why did CPU-Z say it was a 7980XE?
Just an update issue with CPU-Z. Look a couple of lines under that title line.
 
Thanks for the review.

After a price match I was able to get my 9900K for $519. I'm more than happy with 8 Cores / 16 Threads.
 
Kyle were the placements in the single threaded Hyper Pi and WPrime reversed for Intel? because it's showing the OC'ed chip as slower than the stock chip.
Those numbers are correct. I updated the text so you do not have to read till the end for the discussion.
 
This one comes to mind first: https://www.grc.com/inspectre.htm

The tools will give a CPUID.

To view the microcode revision:
For first core, look at:

HKEY_LOCAL_MACHINE\HARDWARE\DESCRIPTION\System\CentralProcessor\0

For example:

"Update Revision" = 0xba - current latest microcode (from mcupdate_*.dll)

"Previous Update Revision" = 0xb3 - default original microcode version (from BIOS)

"Identifier" - Intel64 Family 6 Model 15 Stepping 11

"Platform Specific Field 1" - 0x80

Microcode is taken from C:\Windows\System32\mcupdate_GenuineIntel.dll (or mcupdate_AuthenticAMD.dll) using "Identifier" and "Platform Specific Field 1". For Intel, you can search for the "DataVersion" UTF-16 string in mcupdate_GenuineIntel.dll to see all included ucode versions. For the CPU ID from the example above: "6fb-80,ba" (the format is FamilyModelStepping-PF,ucRevision in hex).
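To make that key format concrete, here is a small sketch that builds the FamilyModelStepping-PF,ucRevision string from the registry "Identifier" text. The `cpuid_key` helper is my own for illustration, not part of any tool, and the digit layout is inferred solely from the "6fb-80,ba" example (CPUs with extended family/model values may encode differently):

```python
import re

def cpuid_key(identifier: str, platform_field: int, revision: int) -> str:
    """Build the FamilyModelStepping-PF,ucRevision hex key described above.

    Hypothetical helper: assumes family, model, and stepping each render
    as plain hex digits, matching the "6fb-80,ba" example in the post.
    """
    m = re.search(r"Family (\d+) Model (\d+) Stepping (\d+)", identifier)
    family, model, stepping = (int(x) for x in m.groups())
    fms = f"{family:x}{model:x}{stepping:x}"  # e.g. 6, 15, 11 -> "6fb"
    return f"{fms}-{platform_field:x},{revision:x}"

# The example from the post: Family 6, Model 15 (0xf), Stepping 11 (0xb),
# platform field 0x80, microcode revision 0xba
print(cpuid_key("Intel64 Family 6 Model 15 Stepping 11", 0x80, 0xba))
# -> 6fb-80,ba
```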
 
Inspectre 1.png
 
Too expensive. If AMD could up their encoding speeds for H.264 and H.265, there would really be no reason to get that Intel.
 
Sounds like Handbrake is broken for HEVC on that CPU.
EDIT: 100% on me. I did not have the right codecs on the test box to play those H.265 encodes back. I pulled them over to my personal machine to double-check and had no issues playing the files in VLC. As you might guess, our test boxes are pretty well stripped down to exactly what they need for testing. Again, fewer gremlins to chase during testing.
 
Interesting results. And I kind of agree Intel is smoking a big crack pipe here for the consumer market. Who will buy this? Enterprises that need a high-end workstation CPU and have been an Intel house for so long that the IT organization is reluctant to try the AMD solution.
 
Quite frankly, here at HardOCP we do not pay much attention to TDP specifications, as it is mostly not important to computer hardware enthusiasts.
I see what you did there. I guess those of us hanging out in "Small Form Factor Systems" don't count, heh. But in all seriousness, nice review.
 
If you are a content creator who also uses the same machine to game on and wants the highest FPS possible, get the Intel chip.

If you are only doing content creation/workstation-type workloads and looking to save a few bucks, get an AMD chip.

If you are a YouTuber and want the fastest rendering times, get the Intel chip.
 
This one comes to mind first: https://www.grc.com/inspectre.htm

The tools will give a CPUID.

To view the microcode revision:
For first core, look at:

HKEY_LOCAL_MACHINE\HARDWARE\DESCRIPTION\System\CentralProcessor\0

For example:

"Update Revision" = 0xba - current latest microcode (from mcupdate_*.dll)

"Previous Update Revision" = 0xb3 - default original microcode version (from BIOS)

"Identifier" - Intel64 Family 6 Model 15 Stepping 11

"Platform Specific Field 1" - 0x80

Microcode is taken from C:\Windows\System32\mcupdate_GenuineIntel.dll (or mcupdate_AuthenticAMD.dll) using "Identifier" and "Platform Specific Field 1". For Intel, you can search for the "DataVersion" UTF-16 string in mcupdate_GenuineIntel.dll to see all included ucode versions. For the CPU ID from the example above: "6fb-80,ba" (the format is FamilyModelStepping-PF,ucRevision in hex).
Not finding that in the dll.

DataVersion.png


That definitely clarifies things for the page.

But it raises another pair of questions. First, will HEDT Skylake motherboards offer the ability to dial in scaled multipliers based on the number of active cores, as Intel has offered at least up through Ivy Bridge (presumably later too, since I haven't bought anything newer from them)?

Second, if scaled multipliers are offered, to what extent is that chasing diminishing returns? It seems having the mass of cores at 4.1 and single-, dual-, or quad-core loads running at between 4.5 and 4.7 might have some real additional payoff.
You can set multiplier per core if you wish on the board I am using and all that I have seen. We do not get into this in reviews since you can surely go down a huge wormhole tuning and tweaking in such a way, and the fact is, we do not have the resources to overclock with such granularity. For this review we only ran about 100 different tests across 5 different hardware setups...and we did those tests multiple times each. Now add per core overclock scaling to that for funsies and you will see that it gets expensive in terms of time quickly.
 
If you are a content creator who also uses the same machine to game on and wants the highest FPS possible, get the Intel chip.

If you are only doing content creation/workstation-type workloads and looking to save a few bucks, get an AMD chip.

If you are a YouTuber and want the fastest rendering times, get the Intel chip.
I still think that is a fairly limited view, and to a lot of people $1,000 more for an equivalent CPU is a lot of money.

I doubt most content creators will be gaming at 1080p. And we only see CPU limitations in very few games at 1440p.

For the "Youtuber" thing I guess you are referring to H.264 encode? Even then the 2950X is on par with the 9980XE, certainly not $1100 better. I guess if you just have an extra grand in your pocket...but you are going to have to be a hell of a Youtuber to make that money back.
 
If you are a content creator who also uses the same machine to game on and wants the highest FPS possible, get the Intel chip.

If you are only doing content creation/workstation-type workloads and looking to save a few bucks, get an AMD chip.

If you are a YouTuber and want the fastest rendering times, get the Intel chip.

Yes and no. If you go back to the 2950X review and look at the overclock numbers, it's $1,000 less and should outperform the overclocked 9980XE while also using roughly 50W less to do so.
 
Nice review. I didn't expect it to be this far behind in both price/performance and performance alone. And that's without Dynamic Local Mode for the TR.
 
I'm not sure about y'all, but my budget usually targets the low end, not the top of the stack. If you look at the costs, the Basin Falls CPUs on the low end are competitive with AMD's low-end HEDT parts: fewer cores, but faster clocks and better IPC. I'm waiting to see some benchmarks.

cpus.jpg
 
Interesting review, thanks Kyle!

Few thoughts:
- Lost Planet shows such a disparity in results, and weird clock scaling too; can we be sure this is not Intel compiler fuckery at hand? The thing is, with that lurking in the background, it's always something that makes CPU reviews between manufacturers a little murky, as it's almost never mentioned.
- 100-105°C on water under OC P95 for hours, though? I would not want to do that for long. Kyle, you have bigger balls than I do; I'd be pulling out at that temp!
- Is this the same die as the 28-core? If so, why is the die not in the full 28-core configuration, or higher? Yields? Power? Clock scaling?
It will be very interesting to know. Funny seeing nearly no OC headroom left on either team at this level.
Also, one other thing: people go on about latency with TR, but it's not looking like there is a huge difference between that and the 9980XE in real-world operation with the proper scheduling patch applied.


If you are a content creator who also uses the same machine to game on and wants the highest FPS possible, get the Intel chip.

If you are only doing content creation/workstation-type workloads and looking to save a few bucks, get an AMD chip.

If you are a YouTuber and want the fastest rendering times, get the Intel chip.

Content creators and YouTubers are the same thing, and this reads like a product advertisement with all those buzzwords and positioning, Elmy. The answer actually depends on your workload for content creators: AMD is faster in Blender, slower in Premiere Pro.
 