Core i9-10980XE Review Roundup

the mitigation patches aren't mandatory and most home users aren't installing them. The average home user isn't massively security conscious.

Isn't Microsoft automatically adding in patches through Windows 10? I know that the microcode updates come through the BIOS and 99 out of 100 home users couldn't care less about doing a BIOS update, but I thought MS was also implementing some workarounds that impacted performance on some level with Intel CPUs. And starting with the 8 series, isn't the hardware mitigation built into the CPU? So even without the microcode updates, you'll still see performance impacted, albeit under limited scenarios.
 

Some mitigations are applied by Microsoft automatically, but not all of them. Additionally, Microsoft has updated some of the mitigation patches / workarounds to have less impact on CPU performance. Some flat out require a BIOS update. Very few of these mitigations are done at the silicon level; the designs are already finished, and mitigations can't be bolted on late in the development cycle. Of course, some are done in silicon on the 9th and 10th generation processors. It's believed that the 10980XE is slightly slower than the 9980XE in some tests because of these mitigations. I didn't catch this, but other reviewers who had 9980XEs did.
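If anyone wants to see exactly which of these mitigations are actually active on their own box, here's a rough sketch of how I'd check (nothing from the review itself; it assumes Microsoft's SpeculationControl PowerShell module is installed on Windows, and on Linux it just reads the kernel's own reporting):

```python
# Rough sketch: report which speculative-execution mitigations are active.
# Assumptions: on Windows, the SpeculationControl PowerShell module is installed
# (Install-Module SpeculationControl); on Linux, the kernel exposes
# /sys/devices/system/cpu/vulnerabilities. Neither is guaranteed on every setup.
import platform
import subprocess
from pathlib import Path

def report_linux():
    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        # Each file is one vulnerability (spectre_v1, spectre_v2, mds, ...)
        # and its text says whether/how it is mitigated.
        print(f"{entry.name:20s} {entry.read_text().strip()}")

def report_windows():
    # Shells out to PowerShell; the output lists branch target injection,
    # kernel VA shadow, MDS status, etc.
    cmd = "Import-Module SpeculationControl; Get-SpeculationControlSettings"
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", cmd],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)

if __name__ == "__main__":
    if platform.system() == "Linux":
        report_linux()
    else:
        report_windows()
```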
 
Interesting. How does this translate to the 99th and 95th percentiles?

Do you have a 3950X to compare? It seems to have an additional boost in games, maybe due to cache or something?

Tom's is basically what made me choose to go with a 9900KF (that was six months ago, so it'd be a 9900KS today). That and a 3900X/3950X being impossible to find. They essentially have OC vs. OC.

https://www.tomshardware.com/reviews/amd-ryzen-9-3950x-review/3

People are orgasming over Ryzen, and it is good and the best choice in most cases but for gaming and some productivity the 9900ks is best. I don’t think I’ve ever seen my 8 cores fully utilized and sometimes I encode (Adobe Premiere Elements) and game at the same time...
 
Interesting. How does this translate to the 99th and 95th percentiles?

Do you have a 3950X to compare? It seems to have an additional boost in games, maybe due to cache or something?

Not sure. I'd have to pull that data. As for the 3950X, I didn't get a sample. They only sent them to the largest sites and most well known YouTubers. TheFPSReview is still too new for that. I'm surprised I got a 10980XE for similar reasons.
 
Tom's is basically what made me choose to go with a 9900KF (that was six months ago, so it'd be a 9900KS today). That and a 3900X/3950X being impossible to find. They essentially have OC vs. OC.

https://www.tomshardware.com/reviews/amd-ryzen-9-3950x-review/3

People are orgasming over Ryzen, and it is good and the best choice in most cases but for gaming and some productivity the 9900ks is best. I don’t think I’ve ever seen my 8 cores fully utilized and sometimes I encode (Adobe Premiere Elements) and game at the same time...
I don't think it was ever questioned that the 9900 is the king of gaming. I'm curious whether Dan's findings are game [engine] specific and how rare the glitches are (99th percentile).

It would be interesting to test if these have any effect:
https://www.techpowerup.com/forums/threads/1usmus-power-plan-for-amd-ryzen-new-developments.261243/

and

[attached screenshot]


Dan_D, are you able to give it a spin?
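If it helps, here's the sort of quick A/B harness I'd use to check whether a power plan swap actually moves the needle on anything scriptable (just a rough Python sketch; run_benchmark is a stand-in for whatever repeatable workload you care about, not anything from Dan's test suite):

```python
# Rough A/B harness: time a repeatable workload several times per configuration
# and compare the distributions. run_benchmark() is a placeholder CPU-bound loop;
# swap in whatever scriptable benchmark you actually care about.
import statistics
import time

def run_benchmark():
    # Placeholder workload: a fixed chunk of CPU-bound work, timed.
    start = time.perf_counter()
    total = 0
    for i in range(5_000_000):
        total += i * i
    return time.perf_counter() - start

def sample(label, runs=10):
    times = [run_benchmark() for _ in range(runs)]
    print(f"{label}: median {statistics.median(times):.3f}s, "
          f"mean {statistics.mean(times):.3f}s, stdev {statistics.stdev(times):.3f}s")
    return times

if __name__ == "__main__":
    input("Set the baseline power plan, then press Enter...")
    sample("baseline")
    input("Switch to the 1usmus power plan, then press Enter...")
    sample("1usmus plan")
```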
 
First off, you make some good points. However, the mitigation patches aren't mandatory and most home users aren't installing them. The average home user isn't massively security conscious. Secondly, you are incorrect: even past 2560x1440, the processor you have still matters. It isn't 100% up to the GPU at that point. I learned this the hard way.

This behavior isn't isolated to Destiny 2 either. Destiny 2's engine is a weird one, and I'll give you the fact that it can be an outlier on many things. We see the same thing in Ghost Recon Breakpoint. You can see the 10980XE vs. the 3900X, and the latter again turns in higher maximum frame rates, but the lows and average rates are even worse. The average is just playable, and the lows indicate a less than stellar gaming experience. In contrast, the 10980XE never drops below 60FPS. Even a simulated 10920X with several of the 10980XE's cores disabled doesn't change that. A real 10920X has slightly better clocks, so it would be even better here.

I hadn't done any 4K testing on the game or the 10980XE, but that sort of stuff will be done in future articles. I'm doing a follow up on the 10980XE as well when Intel supposedly releases a microcode update that improves overclocking. I'm not sure that I believe that, nor do I have any idea how that's possible, but I'll put it to the test.

But the point is, your CPU matters whether you are at 1920x1080 or 3840x2160. It can still mean the difference between a good experience and a bad one. A 2nd generation Threadripper at a discount would serve all my needs outside of gaming. In my experience, they aren't good gaming CPUs and therefore, I won't use one.
Examples like this are usually why I always do some testing for myself. That's how I discovered that Hyper-Threading on Skylake-X completely nukes performance in FF14 compared to having it disabled, yet Sandy Bridge-E (3930K), Ivy Bridge-E (1680 v2), Haswell-E (5960X), and Broadwell-E (6950X) don't have this problem; only Skylake-X does (7900X, 7980XE, 9900X), lol. It's a bit odd and, from messing around with the various systems/CPUs that I have, it seems to get worse the more cores you have. FF14 also has some bad issues with Skylake-X turbo and whatever they call the downclocking now, where it stutters while doing certain things (like using a mount). When I got my 7820X setup, that really confused me for a while until I was playing around in the BIOS and finally flipped something that made it stop, since it happened even when running at stock. I was like, how is this new system slower than my old 3930K? lol

As for the mitigations, it seems to really depend on the workload, and for companies that do federal work these vulnerabilities are a huge deal since all systems need to be patched for them. :( Phoronix has done some pretty extensive testing of the performance impact of the mitigations if anybody is interested. They use server CPUs, but the results are applicable to desktop parts as well.
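For what it's worth, you can sanity-check the HT theory without a trip to the BIOS by pinning the game to one logical CPU per physical core. Rough Python sketch with psutil (it assumes SMT siblings are enumerated as adjacent pairs, which is typical on Windows but not guaranteed everywhere, and the process name is just an example):

```python
# Sketch: pin a running game to one logical CPU per physical core, approximating
# "HT off" for that process without touching the BIOS.
# Assumptions: psutil is installed, and SMT siblings are enumerated as adjacent
# pairs (0/1, 2/3, ...), which is typical on Windows but not guaranteed everywhere.
import sys
import psutil

def pin_to_physical_cores(process_name):
    logical = psutil.cpu_count(logical=True)
    physical_only = list(range(0, logical, 2))  # every other logical CPU
    for proc in psutil.process_iter(["name", "pid"]):
        if proc.info["name"] and proc.info["name"].lower() == process_name.lower():
            proc.cpu_affinity(physical_only)
            print(f"Pinned {proc.info['name']} (pid {proc.info['pid']}) to {physical_only}")

if __name__ == "__main__":
    # e.g. python pin_ht_test.py ffxiv_dx11.exe
    pin_to_physical_cores(sys.argv[1])
```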
 
First off, you make some good points. However, the mitigation patches aren't mandatory and most home users aren't installing them. The average home user isn't massively security conscious. Secondly, you are incorrect: even past 2560x1440, the processor you have still matters. It isn't 100% up to the GPU at that point. I learned this the hard way.

Let me show you what I mean. Below is a graph outlining the averages for Destiny 2 at 4K resolution. This is with everything maxed out, motion blur disabled, chromatic aberration disabled, and no V-Sync. As you can see, the averages favor AMD here and they are largely the same across the board. The 9900K actually loses ground at stock speeds. The processors here are the 3900X, 9900K, and the Threadripper 2920X. I'm an avid player of the game, and the experience between these isn't the same, I assure you. This is what the data said, but it doesn't tell the whole truth.

View attachment 203087



View attachment 203088

The graph above shows the data between the 3900X and the 9900K. As you can see, we are still at the same settings, but now I'm showing minimum, average, and maximum framerates. The picture is entirely different. You can see the AMD Ryzen 9 3900X shoot way out ahead with a much higher maximum. Its average appears the same once again, but the minimum frame rates are in the toilet. That's not a joke, and not a typo. These are manual runs in the game which I repeated multiple times. The Intel system shows a drop to 56.7FPS, but at no point did I ever see it on the FPS counter in game. Nor did I ever feel it. On the AMD side, I not only felt it, but saw FPS dips on the FPS counter. Now, I never saw drops that low, but the mid-40FPS range was a common sight on the AMD side. It's only by virtue of the excessively high frame rates that the averages come up. The Threadripper data is even worse. I didn't include it because I misplaced the data at the last minute, and while I remember some of the numbers, I wasn't going to include data I can't back up.

Suffice it to say, the 2920X achieved a minimum of 26FPS on an all-core overclock of 4.2GHz and a maximum somewhere past 189FPS. Using PBO it jumped to a minimum of 36FPS with a maximum over 200FPS. I remember the minimums precisely, but not the rest. The averages were again about the same as everything else. Today, Destiny 2 performance is better on AMD hardware than it was. But keep in mind, initially, we had the 3900X lose to the 2920X. 1st and 2nd gen Threadripper CPUs are about the worst there are for gaming in the modern era.


View attachment 203089

This behavior isn't isolated to Destiny 2 either. Destiny 2's engine is a weird one, and I'll give you the fact that it can be an outlier on many things. We see the same thing in Ghost Recon Breakpoint. You can see the 10980XE vs. the 3900X, and the latter again turns in higher maximum frame rates, but the lows and average rates are even worse. The average is just playable, and the lows indicate a less than stellar gaming experience. In contrast, the 10980XE never drops below 60FPS. Even a simulated 10920X with several of the 10980XE's cores disabled doesn't change that. A real 10920X has slightly better clocks, so it would be even better here.

I hadn't done any 4K testing on the game or the 10980XE, but that sort of stuff will be done in future articles. I'm doing a follow up on the 10980XE as well when Intel supposedly releases a microcode update that improves overclocking. I'm not sure that I believe that, nor do I have any idea how that's possible, but I'll put it to the test.

But the point is, your CPU matters whether you are at 1920x1080 or 3840x2160. It can still mean the difference between a good experience and a bad one. A 2nd generation Threadripper at a discount would serve all my needs outside of gaming. In my experience, they aren't good gaming CPUs and therefore, I won't use one.

The way I see it, in most tasks these CPUs are all quite close when both are optimised, unless it's MT-optimised, in which case AMD typically dicks the competition.

Destiny 2 is pretty obviously having an issue with AMD. Is it Intel compiler fuckery? Is it coding? Lazy optimisation? Who knows. If you are only playing that game, then yeah, big deal. Otherwise, it's an outlier, as you say. For something that's so close elsewhere to suddenly run at 30-40% of the speed, when it's not using heaps of threads crossing chiplets or hitting any potential latency issues from that, makes no sense. That doesn't happen in most other games and certainly not in many applications.
Just like Matlab, until people recently figured out they could change some flags so the Intel compiler bullshit wouldn't screw performance on non-Intel CPUs. I wonder how much of Intel's slight 'advantage' is because of this. I see very, very little investigation into it, which is pretty sad. Just 'AMD suxx in some titles' and no digging into why - that's not aimed at you, by the way.

The latency on the 9980XE/10980XE isn't great either... so it can't just be latency or uarch alone.
Windows scheduler? Do they have Destiny on Linux yet? Etc., etc. That's where I'd be looking, as it would be the easiest place to start.
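On the Matlab/MKL angle, it's easy enough to test yourself on any MKL-backed numpy install. A rough sketch (it assumes numpy is linked against MKL and that you're on an MKL version old enough to still honour the MKL_DEBUG_CPU_TYPE=5 override; newer MKL releases removed it):

```python
# Sketch: time an MKL-heavy matrix multiply so you can compare runs with and
# without the old AMD workaround. Assumes numpy is linked against MKL and an MKL
# version that still honours MKL_DEBUG_CPU_TYPE (it was removed in later releases).
#
# Run it twice on the same Ryzen/Threadripper box:
#   python mkl_test.py
#   MKL_DEBUG_CPU_TYPE=5 python mkl_test.py   (set MKL_DEBUG_CPU_TYPE=5 first on Windows)
import os
import time
import numpy as np

def _timed(a, b):
    start = time.perf_counter()
    np.dot(a, b)
    return time.perf_counter() - start

def time_matmul(n=4096, repeats=3):
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal((n, n))
    best = min(_timed(a, b) for _ in range(repeats))
    flag = os.environ.get("MKL_DEBUG_CPU_TYPE", "unset")
    print(f"MKL_DEBUG_CPU_TYPE={flag}: best of {repeats} runs = {best:.2f}s")

if __name__ == "__main__":
    time_matmul()
```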
 

Should be noted that for several decades the Intel compiler was the only compiler worth using that wasn't architecture agnostic. If software needed to be performance optimized, it's what was used. There wasn't anything else.

This is an instance of AMD being 'creative' with their architecture to the point that current software had trouble dealing with their non-standard engineering decisions. NUMA on consumer CPUs? Hard core and cache group delineations on top of that? While everyone else had been coding for Intel's and AMD's monolithic designs?

Yeah. No one had to worry about that stuff for consumer applications until AMD brought out Zen's goofy approach, which by and large was driven by their need for smaller CPU 'building blocks' that TSMC might actually be able to produce in volume.
 
Whoa man, don't bring up the Pentium 4, it's already been dead for years now!
 

Destiny 2 isn't on Linux, and the issues I've seen with the game are now better after several AGESA code updates and multiple game patches. Additionally, the 10980XE kind of sucks here at lower resolutions. It's similar to what I saw with Threadripper and AMD chips in the past. I'm actually investigating the performance of these CPUs under different conditions. What you see at 1920x1080 is often far different than what you see at 4K.
 

I own a 7940X, and the struggle is real. At 1080p it's all about the CPU frantically throwing data to the GPU. Mature, refined process or not, the 10980XE's clocks drop enough by dint of the sheer massiveness of the chip to keep it from hitting high-refresh bliss on a reliable basis for most games at sub-1440p resolutions. In this scenario it's better to have 8 cores thumping north of 4.5 GHz than 18 at 3.8. Whereas at 4K the bottleneck shifts over to the GPU in most circumstances, and things equalize a lot more. It does look like silicon mitigations for Cascade Lake-X are having a diffuse but noticeable effect on performance on top of Skylake-X's known clock scaling problems... It'll be interesting to see how these chips shake out after a few months of real-world use. I'm not upgrading, for the record.
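If you want to see how far the clocks actually sag under a game load, something like this will log it while you play. A rough psutil sketch (per-core frequency reporting is solid on Linux; Windows often only reports a single package-level value, so treat it as a rough indicator there):

```python
# Sketch: sample reported CPU frequency every second while a game runs, so you
# can see how far clocks drop under load. psutil's per-core reporting works well
# on Linux; on Windows it often reports only a single package-level value.
import time
import psutil

def log_clocks(duration_s=120, interval_s=1.0):
    for _ in range(int(duration_s / interval_s)):
        freqs = psutil.cpu_freq(percpu=True) or []
        current = [f.current for f in freqs]
        if current:
            print(f"min {min(current):7.0f} MHz  max {max(current):7.0f} MHz  "
                  f"avg {sum(current) / len(current):7.0f} MHz")
        time.sleep(interval_s)

if __name__ == "__main__":
    log_clocks()
```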
 
The mesh bus is bound to hurt them in gaming performance.
 

It does. I have done some testing at 4K with the 10980XE @ 4.7GHz. In Destiny 2, a game Intel is generally very strong in, the 10980XE has the same problems as the 2nd Generation Threadrippers do: very high maximum frame rates, solid averages, and low minimums.

[attached: FrameView results for Destiny 2 at 4K, 10980XE @ 4.7GHz]


Here is the data straight out of Frameview for those interested.
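For anyone who wants to slice the raw numbers the same way, here's roughly how you could pull the minimums, averages, and percentile lows out of a FrameView/PresentMon-style capture. This is just a sketch, not my actual processing script, and it assumes the per-frame time lives in a column named MsBetweenPresents, so adjust the column name to whatever your export uses:

```python
# Sketch: compute min/avg/max FPS plus 95th/99th-percentile lows from a
# FrameView/PresentMon-style capture. Assumes per-frame times (in milliseconds)
# are in a column named "MsBetweenPresents"; rename to match your export.
import csv
import statistics
import sys

def summarize(csv_path, column="MsBetweenPresents"):
    with open(csv_path, newline="") as f:
        frame_times_ms = [float(row[column]) for row in csv.DictReader(f) if row.get(column)]
    fps = sorted(1000.0 / ms for ms in frame_times_ms if ms > 0)

    def percentile_low(p):
        # FPS value that roughly p% of frames stay above (e.g. 99 -> "1% low").
        return fps[int(len(fps) * (1.0 - p / 100.0))]

    print(f"frames:       {len(fps)}")
    print(f"min FPS:      {fps[0]:.1f}")
    print(f"avg FPS:      {statistics.mean(fps):.1f}")
    print(f"max FPS:      {fps[-1]:.1f}")
    print(f"95th pct low: {percentile_low(95):.1f} FPS")
    print(f"99th pct low: {percentile_low(99):.1f} FPS")

if __name__ == "__main__":
    summarize(sys.argv[1])
```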
 