Core i9-10980XE Review Roundup

Just about the only thing I've gotten out of this thread so far is that Linus posted a huge whiny rant complaining about the very thing his video was about. It was a six-hour delay, and he had already shot and edited his AMD review. He could have easily done a single head-to-head review video and simply released it six hours later than the rant.

I don't care much for Linus but he had every right to call Intel on their BS stunt. It was quite refreshing in this corporate suck up culture, actually.

Oh right, I also learned that Intel's new stuff beats AMD's new stuff in some ways while losing to it in others - and many comparisons actually could go either way depending on the allegiance of the one doing the benchmarking. Or, in other words, the two brands have achieved parity.
That's certainly...a take.
 
I use Vegas a lot; Humble Bundle does a deal on it every so often, and once you have a license I think it was 200 or 250 to upgrade to the newest version when they run their promotions. I've been meaning to switch to Resolve for everything, but the Vegas interface is just too easy and quick haha. Motion tracking and layers (or should I say nodes) look to be more advanced in Resolve. Older versions of Vegas used to crash a lot, but the newest has some decent stability improvements. I don't think it can make use of the 64 threads on the 3970X yet, though. Color grading in Resolve is way better than in Vegas.
Thanks for the excellent feedback! If it can handle the threads, I'd love to know, as that's something that really disappoints me about Adobe's current software when planning a new rig.
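One rough way to check that (a sketch only; it assumes the third-party psutil package, and the 50% "busy" cutoff and 60-second window are arbitrary) is to sample per-core load while an export is running:

```python
# Sketch: watch per-core utilization during a render/export to see how many
# of a 3970X's 64 logical cores the application actually keeps busy.
# Assumes the third-party psutil package (pip install psutil).
import psutil

SAMPLE_SECONDS = 60      # how long to watch the export (arbitrary)
BUSY_THRESHOLD = 50.0    # % load at which a logical core counts as "in use" (arbitrary)

peak_busy = 0
for _ in range(SAMPLE_SECONDS):
    per_core = psutil.cpu_percent(interval=1, percpu=True)  # one % reading per logical core
    busy = sum(1 for pct in per_core if pct >= BUSY_THRESHOLD)
    peak_busy = max(peak_busy, busy)

print(f"Peak logical cores in use during the sample: {peak_busy}/{psutil.cpu_count()}")
```

If that number stays well below 64 during an export but pegs the chip in something like Cinebench, the application, not the CPU, is the bottleneck.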

For the price of both, over two years you'd pay around what Adobe wants for a subscription AND own the software indefinitely. So it's a two-year payoff for two excellent tools, while taking advantage of their individual strengths.
Half of the more advanced Adobe stuff requires their other software anyway, so it's not like the workflow is majorly different. Just less integrated.
The sooner Adobe is shunted down to a quarter of the market share, the better. Their poor optimization just rubs salt in the wound.
 
Except, single-threaded performance doesn't favor Intel in this scenario unless clock speeds exceed those of a Ryzen 3000 CPU. The 3950X boosts to 4.7GHz, which is over the 10980XE's standard Turbo Boost 2.0 clocks. Turbo Boost Max 3.0 only kicks in on four cores at most, and even then, 4.8GHz is the best you'll see. AMD has slightly better IPC at this point with Zen 2. Also keep in mind that AMD can't just make a Threadripper optimized purely for overclocking; AMD is already pushing the limits of what can currently be done on its 7nm process. The 3900X, and most likely the 3950X, can't overclock for shit, and that's not likely to change much, if at all, in Threadripper form. AMD isn't letting Intel have squat either. The 3950X and 3960X do leave a gap, but one that can in some ways be filled by AMD's 2nd generation Threadripper parts. However, those parts have their downsides, leaving Intel a gap where the 10980XE can be useful.

A 360 AIO is also insufficient for a highly overclocked 10980XE. I saw temperatures hit 100°C easily many times during my review using a custom loop. My opinion is that the 10980XE would be more appealing and more competitive if it were closer in price to the 3950X. The 10980XE, when overclocked, can be faster in some cases, and it's on an HEDT platform, which is appealing. I also think that Intel should have thrown the 165W TDP out the window, as it's largely a lie anyway, and raised the clocks on these CPUs a bit more.



This makes absolutely no sense. The 3970X is nearly double the price of Intel's Core i9 10980XE. The 2990WX isn't really an alternative either. While better in some multi-threaded workloads for sure, there are cases where the 10980XE will perform better. Outside of applications that can utilize 32c/64t, the 2990WX is going to get beaten in a lot of areas as the 10980XE has better IPC and vastly superior clocks without the internal latency penalties the 2990WX incurs from being built the way it is.



AMD has passed this level of performance.

The last comment was a jest at some, like juanrga, who have insinuated that AMD is still at Sandy Bridge-level performance.


As for the rest, the 2990WX/2920X are irrelevant now. The GN review showcased how much third-gen Threadripper has improved in gaming; it exhibits performance similar to the regular desktop parts without needing cores disabled. In production apps, across every review, the TR3 chips are just workload destroyers. The 10980XE won in Adobe, and even that was barely. Per GN, the gaming uptick from gen 2 was something like 30-40%, with good playability.
 
I've stated previously how bad 2nd generation Threadrippers were at gaming. I found this out the hard way when I put one in my own machine briefly. Even the 2nd generation Ryzens were well behind the curve, and the improvement going to a Ryzen 3000 is pretty substantial in some cases. I figured those improvements would carry over into 3rd generation Threadrippers. I just didn't know they would virtually eliminate most, if not all, of their weaknesses outside of price.
 
From the notable benches I've seen, it looks like it delivers performance similar to the 3950X, with the benefit of added memory bandwidth for some games that leverage it well. Compared to second gen, which was still a work in progress, the reviews seem to support that these are now genuine all-round HEDT systems.
 
I just didn't know they would virtually eliminate most, if not all, of their weaknesses outside of price.
The latency improvements caught me off-guard too. I'd expected a small performance loss, as usual with TR/HEDT/X299, etc. Makes me wonder where exactly it comes from (the I/O chip, presumably; it can't just be the extra RAM channels or the cache increase) and whether that can be leveraged on the AM4 socket. That's another few percent of IPC right there.
 
Hardware Unboxed got their hands on the entire Cascade Lake-X lineup:



Those gaming results though, ouch:

[Attached screenshots: gaming benchmark results]
 


I'm seeing two things here:

  1. Since these are all modern games -- it pretty much doesn't matter what CPU you use as long as you're hitting 4.3GHz+ with eight cores for gaming, be it Zen2 or some Skylake refinement
  2. These games, despite being modern, don't really appear to be very CPU limited -- so they're representative of gaming performance today, but not necessarily going forward
I need to see more gaming reviews. I don't like Anandtech's spread just based on the specific games tested, but they did test more CPU-limited scenarios where a spread between CPUs can be seen. And there, you do see what we'd expect in theory: the faster clocked CPUs are at the top in the greater majority of the tests.
 
I don't care much for Linus but he had every right to call Intel on their BS stunt. It was quite refreshing in this corporate suck up culture, actually.


That's certainly...a take.

I think it would be better if AMD, Intel, NVIDIA, etc. simply wouldn't give out free stuff for reviews ahead of time.

Skip the launch day hype and get reviews in a day or two after reviewers have gone out and bought whatever it is they want to review.

This shit is honestly not that urgent.
 
I think it would be better if AMD, Intel, NVIDIA, etc. simply wouldn't give out free stuff for reviews ahead of time.

Skip the launch day hype and get reviews in a day or two after reviewers have gone out and bought whatever it is they want to review.

This shit is honestly not that urgent.

What if LTT had simply waited the six hours and gone directly to a head-to-head comparison? That would have made Intel's strategy backfire with zero negative impact on LTT. Without LTT being such a willing participant, the strategy can't work.

So, instead of breaking the strategy that he "hates," Linus did his damnedest to ensure the success of the strategy. Oh, and he did the Millennial thing of complaining about himself without realizing the irony.
 
What if LTT had simply waited the six hours and gone directly to a head-to-head comparison? That would have made Intel's strategy backfire with zero negative impact on LTT. Without LTT being such a willing participant, the strategy can't work.

So, instead of breaking the strategy that he "hates," Linus did his damnedest to ensure the success of the strategy. Oh, and he did the Millennial thing of complaining about himself without realizing the irony.
Ah, but those early clicks plus virtue signaling = $. Who needs real integrity when you can have the appearance of it?

Ian Cutress from Anandtech did exactly that and kudos to him.
 
I'm seeing two things here:

  1. Since these are all modern games -- it pretty much doesn't matter what CPU you use as long as you're hitting 4.3GHz+ with eight cores for gaming, be it Zen2 or some Skylake refinement
  2. These games, despite being modern, don't really appear to be very CPU limited -- so they're representative of gaming performance today, but not necessarily going forward
I need to see more gaming reviews. I don't like Anandtech's spread just based on the specific games tested, but they did test more CPU-limited scenarios where a spread between CPUs can be seen. And there, you do see what we'd expect in theory: the faster clocked CPUs are at the top in the greater majority of the tests.
I don't think the small differences warrant deeper testing. All these CPUs are great in gaming and those who need the extreme low visuals/high FPS would get a 9900K(S) anyway.
 
I don't think the small differences warrant deeper testing. All these CPUs are great in gaming and those who need the extreme low visuals/high FPS would get a 9900K(S) anyway.

You don't think the CPU performance should be tested...?

These tests are only showing GPU performance.
 
What if LTT had simply waited the six hours and gone directly to a head-to-head comparison? That would have made Intel's strategy backfire with zero negative impact on LTT. Without LTT being such a willing participant, the strategy can't work.

So, instead of breaking the strategy that he "hates," Linus did his damnedest to ensure the success of the strategy. Oh, and he did the Millennial thing of complaining about himself without realizing the irony.

I think the only strategy he cares about is maximizing views and ad revenue, which two videos with clickbait titles do much better than one. Does he need to put a close-up of his face on both thumbnails? If he wants to maximize clicks and view counts, he does haha.
 
If that were so there would be no FPS differences.

Well, relatively speaking, there are very few FPS differences. Only the old TR2 CPUs really stand out overall.

But the point is that if you can double GPU speed -- look at folks around here using five-year-old CPUs, or even older -- the differences in CPU performance become relevant.

Which is why this is simulated by dropping resolution and / or settings on current games below what one would actually play, which you see in reviews like those that the [H] used to do as well as the review that Anandtech published.
 


My only issue there is that everything's at stock speeds. Sure, there isn't much to be gained on the Ryzen 3000 lineup in terms of overclocking, and I'd assume the Threadrippers are the same way. However, when you overclock a 9900K or a 10980XE, their performance in games increases dramatically.

Evaluating stock performance is important. But I think overclocked performance is important as well. Enthusiasts are going to want to know how these things will perform when they are overclocked. I know not everyone does that, but all aspects of a CPU's performance should be considered.
 
Especially the 9900K -- the 10980XE seems to be thermally limited in terms of just finding something to cool it, but the 9900K can be overclocked so very easily.
 
I think the only strategy he cares about is maximizing views and ad revenue, which two videos with clickbait titles do much better than one. Does he need to put a close-up of his face on both thumbnails? If he wants to maximize clicks and view counts, he does haha.

He owns a growing business with plans to expand further, and YouTube ad revenue is shit; that, along with the other garbage YT constantly pulls, leads to creators needing to do that stuff in order to get engagement. Beyond that, his rant was well worth putting in there and putting out. Calling out chicken-shit tactics like that is something people in the tech press should do. If he only cared about the money, he'd have put out a normal review in his usual style so as not to piss off Intel and risk not being able to get products early going forward. As big and popular as Linus is, a rant like that is going to ruffle feathers at Intel and could easily push them to blacklist him.


Edit: By the way, that's not what "clickbait" is. Clickbait would be using something to lure people in that has nothing to do with the video itself.

Well, relatively speaking, there are very few FPS differences. Only the old TR2 CPUs really stand out overall.

But the point is that if you can double GPU speed -- look at folks around here using five-year-old CPUs, or even older -- the differences in CPU performance become relevant.

Which is why this is simulated by dropping resolution and / or settings on current games below what one would actually play, which you see in reviews like those that the [H] used to do as well as the review that Anandtech published.

1080p IS a CPU-limited scenario, especially above 60 fps. Those are the very same kind of tests you point at when you talk about the 9900K being better for gaming than Ryzen alternatives.
 
My only issue there is that everything's at stock speeds. Sure, there isn't much to be gained on the Ryzen 3000 lineup in terms of overclocking, and I'd assume the Threadrippers are the same way. However, when you overclock a 9900K or a 10980XE, their performance in games increases dramatically.

Evaluating stock performance is important. But I think overclocked performance is important as well. Enthusiasts are going to want to know how these things will perform when they are overclocked. I know not everyone does that, but all aspects of a CPU's performance should be considered.
I agree with you on the 9900K and that is THE gaming CPU, but it ain't a 3950X or HEDT competitor. The 10980XE is so limited by its power consumption that it prohibits overclocking for all but a small fraction of overclocking enthusiasts, and those are few in number as is. And even then you lose much more in other tasks than you gain in gaming compared to TR3.
 
As big and popular as Linus is, a rant like that is going to ruffle feathers at Intel and could easily push them to blacklist him.

He put out the rant about their marketing/PR tactic, but he also evaluated the CPU fairly. Intel's in a bit of a lose/lose position here, so they're clearly trying different stuff, and let's be clear, they have to. They have to produce returns for their investors. At the same time, I'd put the odds at half that the official at Intel who greenlit that NDA expiration did it knowing they'd probably get called out, but did it because they had to do something. Perhaps next time they'll be able to point to Linus' rant (and others) to argue against the tactic.

1080p IS a CPU-limited scenario, especially above 60 fps. Those are the very same kind of tests you point at when you talk about the 9900K being better for gaming than Ryzen alternatives.

It is somewhat CPU limited. But we can easily show, moving over to Anandtech's benchmarks, that it's still quite GPU limited too -- which means that there's room to test lower settings, and that if one had a significantly more powerful GPU, there'd be a larger delta.

And while I mentioned this above, it's worth putting it back into perspective: the overall point is that these top-end Zen 2 CPUs don't lose much in terms of gaming today in the same way that Zen and Zen+ SKUs did.
 
You don't think the CPU performance should be tested...?

These tests are only showing GPU performance.

There are two schools of thought regarding CPU performance as it relates to gaming. The first school of thought is that you want to lower the resolution and settings as much as possible to isolate the CPU from the GPU by making the latter do as little as possible. This allows the differences in CPU performance to be seen more clearly. However, these results are arguably academic as this isn't how anyone outside of competitive gaming would run their systems. You can also argue that the difference between 300FPS and 400FPS isn't worth talking about.

The second school of thought is to benchmark your CPU using the fastest GPU you can and use maximum quality settings to show what you will actually see in gaming with different CPU's. The biggest argument here is that it becomes more GPU dependent and that the CPU doesn't matter.

In a sense, both sides have merit. Both sides are right, and both methods do not provide the whole truth. When I found out my Threadripper 2920X was hitting lows that were behind a friend's machine with a 3770K at stock speeds or another friend's 4790K (both systems using 1080 Tis, not 2080 Tis), I realized something wasn't right. I dove into the problem and, to make a long story short, I found that averages do not tell the whole story. At the time I was running Destiny 2 at 4K. I play the game a lot. Probably more than a person should. In any case, I found that the 2920X using a 4.2GHz manual overclock (all cores) would hit a low of 26FPS in that game. I never saw anything that low watching the frame counter, but dips into the mid-40s were commonplace. On a 9900K, however, my minimums were 56FPS with all the other hardware being the same. This was a dip I never saw on the FPS counter in game, nor felt, but one that showed up in the FrameView data I gathered.

I never would have seen this using low settings alone. However, at 1920x1080, we are talking about a real-world resolution that is very much CPU limited. This is why it's our minimum testing resolution for CPUs now. I do reduce all the eye candy to isolate the CPU, but it seems to translate well. Basically, it just makes the differences between low, average, and maximum frame rates more obvious and removes the GPU from the equation to a large degree. For example, the results I see between a 2080 Super and a 2080 Ti are indistinguishable. And that's what you want to do if you can.

But benchmarking at 3440x1440 max settings or 3840x2160 is valid as well. You'd be wrong in thinking that your CPU doesn't make much of a difference in gaming at higher resolutions. Yes, the GPU is more important, but one CPU vs. another can be the difference between a good experience and a bad one. Destiny 2 isn't unplayable on a 2920X, but at 4K, it doesn't provide a good experience. At 3440x1440 using a G-Sync monitor, the problem was minimized, but the 9900K simply provided a way better experience. It provided that 60FPS minimum I needed at 4K. I'm now at 3440x1440 to have G-Sync and higher refresh rates in games, but what I learned going down the rabbit hole trying to get a decent 4K experience taught me a lot, and it shaped my CPU reviews from then on out.

I'd like to get a 4K monitor on my test bench so I can include higher-resolution gaming data alongside the lower-resolution data going forward.

Similarly, I think this type of performance should be tested at both stock and overclocked speeds. Many enthusiasts will do the latter, especially if it has tangible benefits. It also provides different context for a CPU if the gains are there. Overclocked or not, the 3900X is what it is. At stock speeds, the 10980XE is "meh". Overclocked, there are some caveats to using one, but its performance improves dramatically. This is something many reviews simply failed to capture, and it's why I was less harsh than others about it.
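For anyone who wants to pull the same kind of numbers out of their own captures, here is a minimal sketch of the frametime math (it assumes a PresentMon/FrameView-style CSV with an MsBetweenPresents column; adjust the column name to whatever your tool actually exports):

```python
# Sketch: compute average FPS, 1% low FPS, and minimum FPS from per-frame
# frametimes (in milliseconds), the numbers that expose dips an average
# or an on-screen counter will hide.
import csv

def fps_summary(frametimes_ms):
    """Return (average FPS, 1% low FPS, minimum FPS) for one run."""
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    slowest = sorted(frametimes_ms, reverse=True)            # longest frames first
    worst_1pct = slowest[: max(1, len(slowest) // 100)]      # slowest 1% of frames
    low_1pct_fps = 1000.0 * len(worst_1pct) / sum(worst_1pct)
    min_fps = 1000.0 / max(frametimes_ms)                    # single worst frame
    return avg_fps, low_1pct_fps, min_fps

def load_frametimes(path, column="MsBetweenPresents"):       # column name is an assumption
    with open(path, newline="") as f:
        return [float(row[column]) for row in csv.DictReader(f)]

# Example: avg, low1, floor = fps_summary(load_frametimes("capture.csv"))
```

The point of the 1% low and minimum figures is exactly the one above: two CPUs can post nearly identical averages while one of them dips hard enough to be felt.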
 
I agree with you on the 9900K and that is THE gaming CPU, but it ain't a 3950X or HEDT competitor.

Once you get to the 3700X or 9900K, you've already surpassed 'basic needs' and even gaming needs, and you're pushing into the content creation space, i.e. video editing for most consumers, or other professional workloads that can actually use more cores. The 9900K variants just represent the absolute best you can get for gaming, the 'no compromise' CPU.

The 10980XE is so limited by its power consumption that it prohibits overclocking for all but a small fraction of overclocking enthusiasts, and those are few in number as is.

The eighteen-core CPU is limited, sure. Read Dan_D's review (and Anandtech's for good measure), where the other new SKUs are outlined and the potential for better overclocking is mentioned. If that's what you're into.

And even then you lose much more in other tasks than you gain in gaming compared to TR3.

...and a MUCH lower price of entry. Also mentioned in reviews, and something that bears repeating.
 
I agree with you on the 9900K and that is THE gaming CPU, but it ain't a 3950X or HEDT competitor. The 10980XE is so limited by its power consumption that it prohibits overclocking for all but a small fraction of overclocking enthusiasts, and those are few in number as is. And even then you lose much more in other tasks than you gain in gaming compared to TR3.

I'm not sure I agree with this statement. I would agree that there is a big difference between overclocking a 10980XE and a 9900K. With the latter, you can get away with a good 360 AIO pretty easily. Even a top-end 280 AIO might get it done. However, the 10980XE certainly requires a custom loop, and a good one at that. You need a water block that can eke out every degree of cooling possible, and you need a large radiator to give you the capacity you need. If that's what you mean by its power consumption being prohibitive, then we are on the same page. Few people have custom loops for something like a 10980XE, and that's a fair statement.

But I totally disagree with your comment on the Core i9 10980XE vs. Threadripper 3. They do not compete directly with each other. Intel made sure of it by pricing the 10980XE at half the cost of the 3970X and $400 cheaper than the 3960X. The only CPU it kind of competes with is the 3950X, and admittedly, that's not a solid win for Intel either. But if you need the I/O and memory bandwidth (or can use AVX-512), then it's a solid choice. As one article put it, "Intel is winning the middle" with the 10980XE, and while that's hyperbole, I can largely agree with the sentiment.
 
There are two schools of thought regarding CPU performance as it relates to gaming. The first school of thought is that you want to lower the resolution and settings as much as possible to isolate the CPU from the GPU by making the latter do as little as possible. This allows the differences in CPU performance to be seen more clearly. However, these results are arguably academic as this isn't how anyone outside of competitive gaming would run their systems. You can also argue that the difference between 300FPS and 400FPS isn't worth talking about.

The second school of thought is to benchmark your CPU using the fastest GPU you can and use maximum quality settings to show what you will actually see in gaming with different CPU's. The biggest argument here is that it becomes more GPU dependent and that the CPU doesn't matter.

I'll add a third school of thought: I don't see these as 'sides', but data points that even together do not provide a complete picture. Essentially, you'd want to show where each particular CPU starts to limit the GPU in a broad spectrum of games. That means absolutely starting at highest settings and resolutions, even higher than one might actually use, to see if the CPU has an effect on frametimes when framerates are dropped to the lowest that might be considered to be playable, and then lowering resolution and detail settings until framerates stop increasing, i.e., you're actually limited by the CPU.
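As a toy illustration of that sweep (the figures and the 3% tolerance below are made up, purely to show the shape of the test), you keep lowering GPU load until the framerate stops climbing:

```python
# Sketch: given average FPS for one CPU at progressively lighter GPU loads,
# flag the first point where lowering the load no longer buys more FPS,
# i.e. where the CPU has become the limit. All numbers are hypothetical.
def cpu_limit_point(results, tolerance=0.03):
    """results: list of (label, avg_fps), ordered from heaviest to lightest GPU load."""
    for (label, fps), (_, next_fps) in zip(results, results[1:]):
        if next_fps < fps * (1 + tolerance):   # <3% gain: framerate has plateaued
            return label
    return results[-1][0]                      # never plateaued: still GPU-bound

sweep = [("4K Ultra", 71), ("1440p Ultra", 118), ("1080p Ultra", 162),
         ("1080p Medium", 166), ("720p Low", 168)]
print(cpu_limit_point(sweep))                  # -> "1080p Ultra"
```

Where that plateau sits, relative to other CPUs, is what tells you how much headroom is left for faster future GPUs.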

When I look at various CPU reviews, this is what I'm looking for, because I'm trying to answer a different question. I care about the difference in CPU performance for today's games at the resolutions and setting levels I'd use today, but what I really want to know is how well the CPU is likely to hold up over time.

To me, it was abundantly clear that Zen and Zen+ weren't going to hold up too well against Intel's Skylake revisions. Zen+, after months of updates to every level of the software stack, from games to microcode, started to get there, but IPC and core speed were and still are too far behind.

Zen 2, in Ryzen 3000 and now TR 3000 CPUs, gets far closer. One can expect these CPUs to hold up pretty well, given that they have headroom in terms of framerate, but at the same time for gaming they're still shown to be slower than the 9900K when the GPU is removed from the equation.


On the other hand, since I get in trouble if I don't put things into perspective, I'll add that I have almost no inclination to recommend the 9900K. Despite being the very best at what it does, just like TR3 puts a very high price of entry on HEDT, the 3600 and 3700 Ryzen CPUs make higher-end gaming far more accessible especially if you're not going for the highest-end GPUs.
 
I'm not sure I agree with this statement. I would agree that there is a big difference between overclocking a 10980XE and a 9900K. With the latter, you can get away with a good 360 AIO pretty easily. Even a top-end 280 AIO might get it done. However, the 10980XE certainly requires a custom loop, and a good one at that. You need a water block that can eke out every degree of cooling possible, and you need a large radiator to give you the capacity you need. If that's what you mean by its power consumption being prohibitive, then we are on the same page. Few people have custom loops for something like a 10980XE, and that's a fair statement.
Yes, that's what I was referring to.

But I totally disagree with your comment on the Core i9 10980XE vs. Threadripper 3. They do not compete directly with each other. Intel made sure of it by pricing the 10980XE at half the cost of the 3970X and $400 cheaper than the 3960X. The only CPU it kind of competes with is the 3950X, and admittedly, that's not a solid win for Intel either. But if you need the I/O and memory bandwidth (or can use AVX-512), then it's a solid choice. As one article put it, "Intel is winning the middle" with the 10980XE, and while that's hyperbole, I can largely agree with the sentiment.
I'm going by Ian's take on the matter of price. Even if we consider the price difference a deal breaker, the 3950X beats it in many workloads or trails by a small margin. That leaves the segment of buyers who are both price restricted and need high I/O and memory bandwidth, while forgoing the monster 48/64-core upgrade path. It's a real segment where the 10980XE is the best option, but that's a lot of qualifiers.
 
Yes, that's what I was referring to.


I'm going by Ian's take on the matter of price. Even if we consider the price difference a deal breaker, the 3950X beats it in many workloads or trails by a small margin. That leaves the segment of buyers who are both price restricted and need high I/O and memory bandwidth, while forgoing the monster 48/64-core upgrade path. It's a real segment where the 10980XE is the best option, but that's a lot of qualifiers.

There are definitely a lot of qualifiers before the 10980XE makes sense. I said as much in my review. It makes for the longest conclusion section I've written in 14 years of reviewing hardware. The short version: it makes sense if the trade-offs of Threadripper 2 (such as poor gaming performance) are unacceptable to you, or if you're already an existing X299 motherboard owner without either a 7980XE or a 9980XE.

The only other case I can come up with is one where you need more I/O and memory bandwidth than X570 can provide, and thus the 3950X isn't a more attractive option for you. Only in these outlier cases does the 10980XE make sense. Even then, I'd say you need to be willing to push the thing a bit for its performance to truly reach the level it needs to be at over a 3950X.
 
He owns a growing business with plans to expand further, and YouTube ad revenue is shit; that, along with the other garbage YT constantly pulls, leads to creators needing to do that stuff in order to get engagement. Beyond that, his rant was well worth putting in there and putting out. Calling out chicken-shit tactics like that is something people in the tech press should do. If he only cared about the money, he'd have put out a normal review in his usual style so as not to piss off Intel and risk not being able to get products early going forward. As big and popular as Linus is, a rant like that is going to ruffle feathers at Intel and could easily push them to blacklist him.


Edit: By the way, that's not what "clickbait" is. Clickbait would be using something to lure people in that has nothing to do with the video itself.



1080p IS a CPU-limited scenario, especially above 60 fps. Those are the very same kind of tests you point at when you talk about the 9900K being better for gaming than Ryzen alternatives.

Linus can only influence his fans, not Intel, so he did the obvious thing and gave the kids some meme-worthy cookies.

Every time a poor reviewer feels "wronged" the outrage meter hits the red zone and the clicks and views roll in.
 
I'll continue to be a fan of performance, thanks.
Intel's performance continues to dwindle every other week due to more and more security exploits.
Why pay $999 for an Intel CPU that in the next two months will have the performance, and thus the value, of a $700 Intel CPU?

I'm a fan of performance as well, but I'm also a fan of getting what I pay for.
At least with AMD (and all other CPU/chip makers), we get what we pay for; can't say that at all about Intel for the last two years, and LTT's benchmarks even show how the latest patches are starting to affect performance, even on these latest (Skylake-refresh) CPUs.

I get that these are corporations that don't give two fucks about their customers, but at least AMD has numerous products on all fronts with true value and performance that benefit nearly all users and industries.
Intel's performance is literally here today and gone next week; no thanks.
 
He owns a growing business with plans to expand further, and YouTube ad revenue is shit; that, along with the other garbage YT constantly pulls, leads to creators needing to do that stuff in order to get engagement. Beyond that, his rant was well worth putting in there and putting out. Calling out chicken-shit tactics like that is something people in the tech press should do. If he only cared about the money, he'd have put out a normal review in his usual style so as not to piss off Intel and risk not being able to get products early going forward. As big and popular as Linus is, a rant like that is going to ruffle feathers at Intel and could easily push them to blacklist him.


Edit: By the way, that's not what "clickbait" is. Clickbait would be using something to lure people in that has nothing to do with the video itself.

I disagree. Getting outraged in today's market will pull in views. He's also got a large enough subscriber base that even if he does piss off Intel, it won't stay that way for too long. Plus, he's probably got enough connections in the industry that he can work around them blacklisting him the way Kyle did.
 
There are two schools of thought regarding CPU performance as it relates to gaming. The first school of thought is that you want to lower the resolution and settings as much as possible to isolate the CPU from the GPU by making the latter do as little as possible. This allows the differences in CPU performance to be seen more clearly. However, these results are arguably academic as this isn't how anyone outside of competitive gaming would run their systems. You can also argue that the difference between 300FPS and 400FPS isn't worth talking about.

The second school of thought is to benchmark your CPU using the fastest GPU you can and use maximum quality settings to show what you will actually see in gaming with different CPU's. The biggest argument here is that it becomes more GPU dependent and that the CPU doesn't matter.

In a sense, both sides have merit. Both sides are right, and both methods do not provide the whole truth. When I found out my Threadripper 2920X was hitting lows that were behind a friend's machine with a 3770K at stock speeds or another friend's 4790K (both systems using 1080 Tis, not 2080 Tis), I realized something wasn't right. I dove into the problem and, to make a long story short, I found that averages do not tell the whole story. At the time I was running Destiny 2 at 4K. I play the game a lot. Probably more than a person should. In any case, I found that the 2920X using a 4.2GHz manual overclock (all cores) would hit a low of 26FPS in that game. I never saw anything that low watching the frame counter, but dips into the mid-40s were commonplace. On a 9900K, however, my minimums were 56FPS with all the other hardware being the same. This was a dip I never saw on the FPS counter in game, nor felt, but one that showed up in the FrameView data I gathered.

I never would have seen this using low settings alone. However, at 1920x1080, we are talking about a real-world resolution that is very much CPU limited. This is why it's our minimum testing resolution for CPUs now. I do reduce all the eye candy to isolate the CPU, but it seems to translate well. Basically, it just makes the differences between low, average, and maximum frame rates more obvious and removes the GPU from the equation to a large degree. For example, the results I see between a 2080 Super and a 2080 Ti are indistinguishable. And that's what you want to do if you can.

But benchmarking at 3440x1440 max settings or 3840x2160 is valid as well. You'd be wrong in thinking that your CPU doesn't make much of a difference in gaming at higher resolutions. Yes, the GPU is more important, but one CPU vs. another can be the difference between a good experience and a bad one. Destiny 2 isn't unplayable on a 2920X, but at 4K, it doesn't provide a good experience. At 3440x1440 using a G-Sync monitor, the problem was minimized, but the 9900K simply provided a way better experience. It provided that 60FPS minimum I needed at 4K. I'm now at 3440x1440 to have G-Sync and higher refresh rates in games, but what I learned going down the rabbit hole trying to get a decent 4K experience taught me a lot, and it shaped my CPU reviews from then on out.

I'd like to get a 4K monitor on my test bench so I can include higher-resolution gaming data alongside the lower-resolution data going forward.

Similarly, I think this type of performance should be tested at both stock and overclocked speeds. Many enthusiasts will do the latter, especially if it has tangible benefits. It also provides different context for a CPU if the gains are there. Overclocked or not, the 3900X is what it is. At stock speeds, the 10980XE is "meh". Overclocked, there are some caveats to using one, but its performance improves dramatically. This is something many reviews simply failed to capture, and it's why I was less harsh than others about it.

Agreed. Even at higher resolutions the CPU still matters in terms of keeping those minimums up.
 
Intel's performance continues to dwindle every other week due to more and more security exploits.
Why pay $999 for an Intel CPU that in the next two months will have the performance, and thus the value, of a $700 Intel CPU?

I'm a fan of performance as well, but I'm also a fan of getting what I pay for.
At least with AMD (and all other CPU/chip makers), we get what we pay for; can't say that at all about Intel for the last two years, and LTT's benchmarks even show how the latest patches are starting to affect performance, even on these latest (Skylake-refresh) CPUs.

I get that these are corporations that don't give two fucks about their customers, but at least AMD has numerous products on all fronts with true value and performance that benefit nearly all users and industries.
Intel's performance is literally here today and gone next week; no thanks.

Do you have a specific LTT benchmark in mind you can link? My understanding is that nothing recent really affects gamers.

What you speak of was a risk when I went 9900KF, but I saw it as a very low one. My initial plan was to go 3950X, but seeing 3900X availability being non-existent, I decided I didn't want to wait. I figured the 3950X would be even worse off.

When I did my research, OC vs. OC, Intel was 20% ahead in minimums when over 90Hz. I figured it was a very low risk that performance would be impacted to the point of parity.
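A quick back-of-envelope on why that felt low-risk (the 20% figure is from the research mentioned above; the rest is just arithmetic):

```python
# If one CPU leads another by 20% in minimum FPS, mitigations would have to
# cost it roughly 1 - 1/1.2, about 16.7%, of its own performance before the two reach parity.
lead = 0.20
loss_to_parity = 1 - 1 / (1 + lead)
print(f"{loss_to_parity:.1%}")   # 16.7%
```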

I am interested in those benchmarks. I build rigs regularly, mostly AMD, but for my VR rig I went Intel.
 