Core i9-10980XE Review Roundup

Gaming performance seems more than good enough on all of the new Intel CPUs. This is what blows my mind though:
[Screenshot from the video review]


The 10920x is using over 100w more than the 9920x...
[Screenshot from the video review]

...while actually getting worse performance.

You can call BS on AMD fine wine but it is looking like Intel is stale whiskey.
 
I have to wonder if they are using older numbers or a different motherboard for the 9920X and the 10920X. That could easily account for the difference in power consumption. They probably don't test every system every single time. The numbers are accumulated over a period of time and only redone when they have to be.
 
I think the above basically happens because very few motherboards actually cap TDP; GamersNexus talks about it at least with the 10980XE. My personal experience is that my X299 Carbon Pro AC capped a 7800X and a 7820X, but my 7940X has no TDP limit (by default). When you look at the above chart, it looks like the 9920X is following Intel's TDP limits.

My 7940X was limited by its core-based turbo speeds and chews about 210 watts rather than the 165 W TDP rating. It wasn't a problem even on a Hyper 212 EVO (since the 7940X uses the 18-core die, density isn't an issue). On my board you could see it using Intel Extreme Tuning Utility. I don't have any of the newer processors to test; I know my board had a BIOS available for Cascade Lake-X a while ago.
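For anyone who wants to sanity-check this on their own system, here's a minimal sketch (Linux only, assuming the intel_rapl driver and the usual powercap sysfs paths; XTU or HWiNFO will show the same thing on Windows) that reads the PL1/PL2 limits the board actually programmed and estimates current package draw:

```python
# Minimal sketch: read the long/short-term power limits the board programmed
# and estimate package power from RAPL energy counters. Assumes Linux with the
# intel_rapl driver; reading energy_uj may require root on newer kernels.
import time
from pathlib import Path

PKG = Path("/sys/class/powercap/intel-rapl:0")  # package-0 RAPL domain

def limit_watts(constraint: str) -> float:
    return int((PKG / f"{constraint}_power_limit_uw").read_text()) / 1e6

pl1 = limit_watts("constraint_0")  # long-term limit, i.e. the "TDP" the board enforces
pl2 = limit_watts("constraint_1")  # short-term turbo limit

e0 = int((PKG / "energy_uj").read_text())
time.sleep(1.0)
e1 = int((PKG / "energy_uj").read_text())
watts = (e1 - e0) / 1e6  # microjoules over ~1 second -> watts

print(f"PL1 {pl1:.0f} W, PL2 {pl2:.0f} W, package draw ~{watts:.0f} W")
```

If PL1 comes back way above the rated TDP, the board simply isn't enforcing it, which is exactly the behavior described above.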


That said, that's a huge variation in power, and I don't think a Hyper 212 could handle 100 watts on top of 165. I'd like to see more reviews with the other processors before I consider this power draw the norm.


*I'm not looking for a new processor; the 3900X and 7940X fulfill all my needs.*

None of these seem to be in stock.

Steve's pretty good about using new data, so no idea on that, @Dan_D.
 
The ASUS boards do not follow TDP limits. You can enable them, but they are not enabled by default.
 
Those two charts also really show how amazing the 3950X is, given its power consumption and where it ranks.
 
"Intel has done the right thing in cutting prices and the reality is that even though the Ryzen 9 3950X is a stunning processor, the Core i9-10980XE still beats it in plenty of tests and it's huge overclocking headroom means it leapfrogs the otherwise dominant AMD CPU in areas it's weak in at stock speed. This is assuming your average retail sample overclocks better than your typcial Core i9-9980XE as mine did - that's something only time will tell and is something to bare in mind."

Source: https://www.forbes.com/sites/antony...s-ryzen-9-3950x/?ss=consumertech#16d06b38405f

That's such a bad review. It's a dead-end platform. Buy that chip and there's nowhere to go, literally.
 
All platforms are dead platforms.

This is obtuse.

Obviously, they all hit a dead end eventually. However, when a socket / platform first appears, it should support two or three generations of CPU upgrades. The benefit of these upgrades vs. buying a new motherboard is certainly debatable. But to say they are all dead as if a platform's upgrade path shouldn't be considered is disingenuous.
 
Yeah, if you bought a higher end X370 board you could move to the 3950x now. That's decent lifespan for a platform. Moving from a 1700x to a 3950x is a HUGE upgrade. Heck, even just moving to a 3700x would give you fairly large gains even though the core count remains the same.

This is why I went with X470 over a year ago, and I'm glad I did. Went from a 2700X to a 3900X, which is still a massive upgrade all things considered.
 
All platforms are dead platforms.
The X370 had a hell of a lot more lifespan (3 generations) than the Z170 which was a dead end platform, even though it used Skylake, which ironically enough is what Intel is still pushing to this day...
Not saying Intel didn't have a few stopgaps in there (Sandy Bridge to Ivy Bridge, and Skylake to Kaby Lake on select platforms), so your statement really applies far more to Intel than it does to AMD.

You have exactly zero chance for this to stay true.
Still waiting for an answer on this - I was legitimately asking. ;)
 
The X370 had a hell of a lot more lifespan (3 generations) than the Z170 which was a dead end platform, even though it used Skylake, which ironically enough is what Intel is still pushing to this day...
Not saying Intel didn't have a few stopgaps in there (Sandy Bridge to Ivy Bridge, and Skylake to Kaby Lake on select platforms), so your statement really applies far more to Intel than it does to AMD.

Two versus three?

Okay.

And that's IF you bought a board that is both capable and supported.

More broadly speaking, very few of those that buy a decent system are actually going to be seriously considering upgrading just the CPU. They're far more likely to keep their current CPU until the platform is obsolete.

Still waiting for an answer on this - I was legitimately asking. ;)

What's your guarantee that vulnerabilities won't be found in Zen, or any other architecture for that matter, over more than half a decade and hundreds of millions of deployments?

Zero.
 
Yeah, if you bought a higher end X370 board you could move to the 3950x now. That's decent lifespan for a platform. Moving from a 1700x to a 3950x is a HUGE upgrade. Heck, even just moving to a 3700x would give you fairly large gains even though the core count remains the same.

This is why I went with X470 over a year ago, and I'm glad I did. Went from a 2700X to a 3900X, which is still a massive upgrade all things considered.

I'm really not arguing against this, just pointing out that situations like the enviable one that you find yourself in are rare and involve quite a bit of luck on top of prudent planning.

Even as I'm guilty of similar gambles myself, it's not something I recommend to others ;)
 
This is obtuse.

Obviously, they all hit a dead end eventually. However, when a socket / platform first appears, it should support two or three generations of CPU upgrades. The benefit of these upgrades vs. buying a new motherboard is certainly debatable. But to say they are all dead as if a platform's upgrade path shouldn't be considered is disingenuous.

I'm mostly saying that while upgrades may be possible, the luck involved in actually being able to make use of that upgrade path, along with the low likelihood that you'll upgrade while the platform is still relevant, makes planning around such upgrade potential somewhat ill-advised.
 
I'm mostly saying that while upgrades may be possible, the luck involved in actually being able to make use of that upgrade path, along with the low likelihood that you'll upgrade while the platform is still relevant, makes planning around such upgrade potential somewhat ill-advised.

Luck has nothing to do with it. Having a viable upgrade path has been the norm rather than the exception. As for people being able to make use of the upgrade path, that's a bit of a nonsensical statement. Of course they can. The 10980XE is a perfect example of that. It's faster at stock speeds than the 9980XE and 7980XE CPUs it replaced. It has a slightly improved feature set. It overclocks better. Of course, if you drop it in, your applications will run faster whether you are making full use of the cores and threads or not. A faster CPU always helps. Now, whether or not that upgrade is cost effective for the performance gained is another matter. In some cases the clear answer could be yes. In many cases, it isn't.

I'm someone who often buys a new CPU and platform when it first arrives. I may even upgrade the motherboard periodically given my access to them. However, I do not generally upgrade the CPU unless there are substantial gains to be had from doing so. When I was on X99 with a 5960X, I skipped Broadwell-E entirely. It clocked about 200MHz less and the IPC difference didn't make up for it. Not only that, but Intel jacked the price up to $1,500 or so vs. the $1,050 or so that the 5960X went for. The 6950X would have given me two additional CPU cores over the 5960X, overclocked worse, and not made up for the lost clock speed with IPC. The extra two cores would have done nothing for me at the time, so that was a hard pass. Was the chip faster? In certain situations, absolutely. But it wasn't cost effective or worthwhile in my opinion.

Obviously, in regards to the 10980XE, it isn't going to be worth the price of admission over the existing 7980XE or 9980XE processors. But if you can leverage it and you're running something like a 7740X, 7820X, etc., then it can make a lot of sense. You won't have to replace your expensive motherboard with another expensive X570, X399 or TRX40 motherboard. You also won't potentially be forced to change RAM, as X570, X399 and presumably TRX40 are fickle about memory compatibility. If you go with X570 you lose out on PCIe lanes and memory bandwidth, which may be important for some. The point being, upgrades can and do sometimes make sense. Not always, but sometimes. Luck has nothing to do with it.

Also, for some, $1,000 isn't really that much money to spend on a CPU. Those same people could be hardcore Intel zealots. I talked with a co-worker yesterday who refuses to use AMD because of one bad experience 15 years ago. So he's firmly against using AMD products. Even he acknowledges that it doesn't make sense and it's outdated information, but that's how he feels.

AMD told us AM4 would be supported through 2020. There are obvious caveats to using some older boards with today's Ryzen 3000 series CPUs, but AMD has had a history of using sockets longer than they should. No one had a reason to doubt AMD on the matter. Intel has usually enjoyed a fairly long life cycle for its HEDT parts and platforms, each platform lasting around 3 years with at least two generations of compatible CPUs to go in them. Intel does sometimes screw people on the mainstream side, so I'll give you that one. Even then, you normally get a compatible refresh, with every other release requiring a new motherboard.
 
I'm really not arguing against this, just pointing out that situations like the enviable one that you find yourself in are rare and involve quite a bit of luck on top of prudent planning.

Even as I'm guilty of similar gambles myself, it's not something I recommend to others ;)

Luck is kind of a poor word as AMD stated all along that AM4 would be supported until 2020. Not much luck involved.

Prudent planning always makes sense though, especially with timing new platforms. Intel releasing the Z370 so close to the Z270 is an example. There was virtually no reason to ever invest in a Z270 platform. The Z270 rebrand (Z370) is perfectly viable years later. I'm thinking of dropping a 9700k into my Z370 Taichi for a gaming machine.
 
10980?...they're going with full zip code names now??

I prefer this over recycling old names. Although they could restart if they get off the core i series and call it something different.

Maybe there is a chance of that when the 7nm EUV parts hit in 2021 or 2022.
 
Overclocked for a few more frames and pulling in excess of 400 watts. Amazon's top sellers, and not only in the U.S., clearly show which side the market is leaning towards. AMD has a clear winner in the 3900X and 3950X.
Competition is a good thing...:D
 
Two versus three?
I stand corrected, I didn't realize Z170 also supported Kaby Lake after a firmware update.
Good call.

Okay.

And that's IF you bought a board that is both capable and supported.
AMD has always been this way.
With Intel, it is very hit and miss, and sometimes only two generations are supported; Z270 was a complete joke and totally unnecessary, and rightfully phased out quickly.

More broadly speaking, very few of those that buy a decent system are actually going to be seriously considering upgrading just the CPU. They're far more likely to keep their current CPU until the platform is obsolete.
That is totally untrue.
I've known a lot of people that have upgraded different generations of just CPUs on both Intel and AMD boards, both personally and professionally, for decades now.

What's your guarantee that vulnerabilities won't be found in Zen, or any other architecture for that matter, over more than half a decade and hundreds of millions of deployments?

Zero.
I agree that there are no guarantees of anything of this nature.
Come tomorrow and there could be a CPU-killer exploit that is unpatchable, unfixable, and destroys a generation of processors.

However, AMD has only had a few (one layer of Meltdown and Spectre) in the last two years.
Intel, on the other hand, is at around 70 hardware vulnerabilities, all of which keep chipping away at performance, functionality (SMT on their CPUs is basically dead), and value.

Odds are, there will be quite a few more exploits in the coming weeks (or hours, at this rate) with Intel CPUs, and other products. :p
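If you're curious what your own machine is actually mitigating (and therefore paying for), here's a quick sketch for Linux; the kernel has exposed per-issue status under sysfs since the original Meltdown/Spectre disclosures. This is just an illustrative check, not anything from the reviews above:

```python
# List the speculative-execution issues the kernel knows about on this CPU and
# whether a mitigation is active. Read-only; no root needed on typical systems.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(vuln_dir.iterdir()):
    status = entry.read_text().strip()  # e.g. "Mitigation: PTI", "Not affected", "Vulnerable"
    print(f"{entry.name:20s} {status}")
```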
 
yeah and guess what .... these chips are also being held in the underverse somewhere in an alternate 7th dimension. Vaporware just like Threadripper 3.
 
Are you going to salt every thread because you can't find a Threadripper 3?
 
Intel, on the other hand, is at around 70 hardware vulnerabilities, all of which keep chipping away at performance, functionality (SMT on their CPUs is basically dead), and value.

Odds are, there will be quite a few more exploits in the coming weeks (or hours, at this rate) with Intel CPUs, and other products. :p

I think a lot of people outside of the [H] or enthusiast community forget about the performance penalties that keep getting applied to Intel's CPUs; the performance impacts are a bit frightening. And while the majority of users won't notice, and even then only under certain workloads, it hurts resale like you said, and it's in the back of your mind when deciding on a platform upgrade.
The only CPU I would consider from Intel is the 9900K/KS, simply because my rig is used for pure gaming, but even then after 1440p who cares... you'd have to be really brand loyal to buy Intel these days, or just an uneducated consumer (or both).
 
It’s more about Hz than resolution.

But after seeing Threadripper's performance I have to agree with you. The other problem is availability, though. Both vendors seemed to do basically a paper launch.
 
I think a lot of people outside of the [H] or enthusiast community forget about the performance penalties that keep getting applied to Intel's CPUs; the performance impacts are a bit frightening. And while the majority of users won't notice, and even then only under certain workloads, it hurts resale like you said, and it's in the back of your mind when deciding on a platform upgrade.
The only CPU I would consider from Intel is the 9900K/KS, simply because my rig is used for pure gaming, but even then after 1440p who cares... you'd have to be really brand loyal to buy Intel these days, or just an uneducated consumer (or both).

First off, you make some good points. However, the mitigation patches aren't mandatory and most home users aren't installing them. The average home user isn't massively security conscious. Secondly, you are incorrect. Past 2560x1440, the processor you have does matter. It isn't 100% up to the GPU at that point. I learned this the hard way.

Let me show you what I mean. Below is a graph outlining the averages for Destiny 2 at 4K resolution. This is everything maxed out, with motion blur disabled, chromatic aberration disabled, and no V-Sync. As you can see, the averages favor AMD here and they are largely the same across the board. The 9900K actually loses ground at stock speeds. The processors here are the 3900X, 9900K, and the Threadripper 2920X. I'm an avid player of the game and the experience between these isn't the same, I assure you. This is what the data said, but it doesn't tell the whole truth.

[Chart: Destiny 2 at 4K — average FPS for the 3900X, 9900K, and 2920X]

[Chart: Destiny 2 at 4K — minimum/average/maximum FPS, 3900X vs. 9900K]


The graph above shows the data between the 3900X and the 9900K. As you can see, we are still at the same settings, but now I'm showing minimum, average, and maximum framerates. The picture is entirely different. You can see the AMD Ryzen 9 3900X shoot way out ahead with a much higher maximum. Its average appears the same once again, but the minimum frame rates are in the toilet. That's not a joke, and not a typo. These are manual runs in the game which I repeated multiple times. The Intel system shows this drop to 56.7FPS, but at no point did I ever see it on the FPS counter in game. Nor did I ever feel it. On the AMD side, I not only felt it, but saw FPS dips on the FPS counter. Now, I never saw drops that low, but the mid-40FPS range was a common sight on the AMD side. It's by virtue of the excessively high frame rates that the averages come up. The Threadripper data is even worse. I didn't include it because I misplaced the data at the last minute, and while I remember some of the numbers, I wasn't going to include data I can't back up.

Suffice it to say, the 2920X achieved a minimum score of 26FPS on an all-core overclock of 4.2GHz and a maximum of somewhere past 189FPS. Using PBO it jumped to a minimum of 36FPS with a maximum over 200FPS. I remember the minimums precisely, but not the rest. The averages were again about the same as everything else. Today, Destiny 2 performance is better on AMD hardware than it was. But keep in mind, initially, we had the 3900X lose to the 2920X. 1st and 2nd gen Threadripper CPUs are about the worst there are for gaming in the modern era.


[Chart: Ghost Recon Breakpoint — minimum/average/maximum FPS, 10980XE vs. 3900X]


This behavior isn't isolated to Destiny 2, either. Destiny 2's engine is a weird one and I'll give you the fact that it can be an outlier on many things. We see the same thing in Ghost Recon Breakpoint. You can see the 10980XE vs. the 3900X, and the latter again turns in higher maximum frame rates, but the lows and average rates are even worse. The average is just playable, and the lows indicate a less than stellar gaming experience. In contrast, the 10980XE never drops below 60FPS. Even a simulated 10920X with several of the 10980XE's cores disabled doesn't change that. A real 10920X has slightly better clocks, so it would be even better here.

I hadn't done any 4K testing on the game or the 10980XE, but that sort of stuff will be done in future articles. I'm doing a follow up on the 10980XE as well when Intel supposedly releases a microcode update that improves overclocking. I'm not sure that I believe that, nor do I have any idea how that's possible, but I'll put it to the test.

But the point is, your CPU matters whether you are at 1920x1080 or 3840x2160. It can still mean the difference between a good experience and a bad one. A 2nd generation Threadripper at a discount would serve all my needs outside of gaming. In my experience, they aren't good gaming CPUs and therefore, I won't use one.
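To put a number on why the min/avg/max view tells a different story than an averages-only chart, here's a toy sketch with made-up frame times (these are not the numbers from my charts): two runs with the same average FPS, one flat and one that spikes high but stutters hard.

```python
# Toy illustration: the same average FPS can hide very different minimums.
# Frame times below are invented for the example, not benchmark data.
import statistics

def summarize(frame_times_ms):
    n = len(frame_times_ms)
    avg = n / (sum(frame_times_ms) / 1000.0)                 # frames / seconds, as reviews report it
    fps = sorted(1000.0 / t for t in frame_times_ms)
    one_pct_low = statistics.mean(fps[: max(1, n // 100)])   # mean of the slowest 1% of frames
    return fps[0], avg, fps[-1], one_pct_low

steady = [11.1] * 1000                  # flat ~90 FPS the whole run
spiky  = [7.0] * 900 + [48.0] * 100     # mostly ~143 FPS, with ~21 FPS stutters

for name, run in (("steady", steady), ("spiky", spiky)):
    mn, avg, mx, low = summarize(run)
    print(f"{name:6s}  min {mn:5.1f}  avg {avg:5.1f}  max {mx:5.1f}  1% low {low:5.1f}")
```

Both runs report roughly 90 FPS average, but only one of them would feel smooth in game.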
 
Appreciate you posting all that. Yes, I totally agree that often looking at the max FPS isn't the ideal way to gauge CPU gaming performance, because you don't see the whole story, as you've shown. This is where Intel does shine, and probably where it's worth considering the Intel side if you're primarily going to game. People too often just look at graphs and will state that the 9900K wins many game benches, or that maybe now the 3900X or 3950X is winning, when the most important number to keep track of is the minimum. I honestly never paid much attention to the minimums until recently, but I know now I'd rather take the CPU that gives me higher minimums than the one that provides higher max FPS. Smoother gaming = happy gaming. It's too easy to look at a chart and say that, well, after 1440P they all look to be within margin of error, or the 9900K is doing 10 FPS higher, or the 3950X is now 10 FPS higher in XYZ game, but those minimums are much more important.
So yes, I know that after 1440P the CPU is still important, but I guess I meant it's less important since you're most likely to be GPU bound. But for those minimums after 1440p, yeah, CPU does really matter.
 
You're GPU bound at 1440P and beyond, but that doesn't mean the CPU doesn't factor in. Before I figured this out, I trucked along ignorant to the reality that Destiny 2 was running like crap on my machine. I knew it ran worse than I had wanted, but I figured that was just how it is and I couldn't do anything about it. I had two other friends that were using a Samsung KS8500 48" 4K TV for display, same as me. I was working on one of their machines resolving some unrelated issue and I fired up Destiny 2. His settings were maxed out and his game felt smoother. His frame rates didn't spike as high as mine, but ultimately, his game was smoother. He had a 3770K at stock speeds and GTX 980s in SLI. I was rocking the 2920X and an RTX 2080 Ti. That was the moment when I knew something was up.

The CPU can bottleneck the graphics card. The problem is, people aren't clear on what that means or necessarily what that looks like. They often assume you need to be on some seriously low end or ancient chip for that to happen. The 3770K is ancient and the 2920X isn't. It's also far from low end, even though it was the baby Threadripper at the time. Sure, if you pop down to an i7 920 @ stock speeds or something like that, I'm sure you might see serious bottlenecking, and it's technically there on something like a 3770K, but it's not necessarily apparent. That is, you might still get a smooth experience where you are primarily GPU bound. The game in question matters too. Something with the Frostbite engine would be pretty awful on a really old CPU. Ghost Recon doesn't do well on older CPUs, and so on.
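A crude way to picture that: treat each frame as gated by whichever of the CPU or GPU takes longer to do its share of the work. A toy sketch with invented numbers (no relation to the benchmarks above) shows how a CPU that only occasionally runs long still drags the lows down while the average stays "GPU bound":

```python
# Toy model: a frame can't finish faster than the slower of the CPU and GPU.
# All timings are invented for illustration.
def frame_ms(cpu_ms: float, gpu_ms: float) -> float:
    return max(cpu_ms, gpu_ms)  # the slower side gates the frame

gpu_ms = 12.0                              # ~83 FPS worth of GPU work at 4K
fast_cpu  = [6.0] * 95 + [9.0] * 5         # always comfortably under the GPU
spiky_cpu = [6.0] * 90 + [25.0] * 10       # usually fine, but spikes past the GPU

for name, cpu in (("fast CPU", fast_cpu), ("spiky CPU", spiky_cpu)):
    frames = [frame_ms(c, gpu_ms) for c in cpu]
    avg_fps = len(frames) / (sum(frames) / 1000.0)
    worst_fps = 1000.0 / max(frames)
    print(f"{name:9s}  avg {avg_fps:5.1f} FPS   worst frame {worst_fps:5.1f} FPS")
```

Both look "GPU bound" on the average, but only the spiky CPU produces the 40 FPS-class dips you actually feel.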
 
Steve of GN covered this in depth, so I won't get into it too much. From the outset, TR3 gaming performance over TR2 is out-of-this-world different, so I think TR2 discussions should be left out as unrepresentative of the current state of affairs.

The other issue is that the 10980XE has more threads, more memory bandwidth, and runs at a higher frequency than the 3900X. Without isolating the core clocks, it's hard to really attribute it to anything other than the AMD CPUs performing about like any other 3.8-4GHz CPU would, showing core clock deficits. IF latency will also play a factor in older games.
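One rough way to "isolate the core clocks" is to normalize a score by the all-core clock it was run at. A hedged sketch with placeholder names and numbers (nothing here comes from the reviews above):

```python
# Crude per-clock comparison: score divided by the all-core clock it ran at.
# Names and numbers are placeholders for illustration only.
results = {
    # chip: (benchmark score, all-core clock in GHz)
    "chip_a": (7200, 4.6),
    "chip_b": (7000, 4.0),
}

for chip, (score, ghz) in results.items():
    print(f"{chip}: {score / ghz:.0f} points per GHz")
```

Here the lower-scoring chip is actually doing more work per clock, which is exactly the kind of thing raw FPS charts hide.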

All things considered, when I replaced my 4790K with a 3600, Battlefield V was hugely better, but Frostbite is the only game engine out there that leverages everything, not just core clock, and multiplayer murdered the 4790K.
 
I used 2nd generation Threadripper to help illustrate the point that your CPU matters even when you are supposedly GPU bound. Even so, I think TR2 is a very relevant CPU in the sense that there are lots of them out there and more will be bought because of the discounts coming to them. The 24c/48t 2970WX has already had its price slashed in places. I understand that the Zen 2 based TR3 is far superior to the 2nd generation TRs. That goes without saying. I knew that would be the case when I put the 3900X up against the 2920X and the latter lost almost every benchmark possible.
 
From the outside you wouldn't think a TR chip or even the 3900X would bottleneck any game, because it's new, has lots of cores, is fast, and is seeing FPS that are within margin of error or close to it. I didn't fully understand the full story until I recently watched the GN video on the 4/6/8-core RDR2 spikes that were shown in the FPS lows. Then it becomes very apparent how much the CPU matters even at the higher resolutions. It's harder to track down, as you said, and not every game responds the same way; some react differently to higher frequency, some to core count. Overall though, yeah, if I could re-do my gaming system today, I'd probably swap in a 9900KS just to keep the lows up. Ultimately, after 1440p, all your highs are going to be pretty close with most (recent) CPUs, but as you stated, the CPU will and can still bottleneck the GPU, just not in ways most people think of, or expect.
 
TR2 price cuts are aimed at selling off surplus. Their prime use cases make them good deals, but gaming was a case of having all those threads and not knowing how to use them.

AMD is pioneering the way through highly threaded desktops and how to use all of those threads, which is why TR3 is amazing. I call that working a problem, something I haven't seen Intel do since Sandy Bridge through Haswell.
 