Cascade Lake-X 10980XE 5.1GHz boost all cores

So glad to see that Intel has responded to AMD on price. What a time to be alive.

Intel doesn't really respond to AMD as much as they respond to demand. Intel's biggest issue isn't that AMD has become competitive, but that they're having issues shipping their own designs in volume.
 
Intel doesn't really respond to AMD as much as they respond to demand. Intel's biggest issue isn't that AMD has become competitive, but that they're having issues shipping their own designs in volume.

Bullshit
[attached chart: Cascade-Lake-X.jpg]
 
AMD's Ryzen boost debacle is a much greater marketing fail than anything Intel has messed up lately.

Meh, it's all bitching. They resolved that with BIOS updates, and to be honest, if I asked 100 people "Did you buy these processors for single-core performance?", the answer would likely be no 90 percent of the time. I never did. I understand the bitching, but it's mostly fixed now for those who wanna stare at monitoring software lol.
 
For consumers? The workloads and level of threat-surface exposure needed to actually exploit these flaws are extremely rare in consumer use, so AMD's marketing/engineering repeatedly falling short of the company's claims is much more headline-grabbing, especially since they just lost a lawsuit over false marketing claims for their last architecture.
Yeah... that's why, after patching, my 6700K @ 4GHz suddenly felt like a 2500K @ 3.3GHz for gaming and video editing.
Gaming performance, depending on the game, tanked, and video renders that used to take 10 minutes now take 15-25 minutes for the same job.

I paid for 6700K-level performance, not 2500K-level performance - is Intel going to give me a refund?
There are other Intel CPUs I run, both personally and at work, and any that have the latest firmware and software patches installed for all/most of these exploits have had clear performance penalties.

VMs on both Type 1 and Type 2 hypervisors have taken performance hits as well, and essentially needing to disable HT/SMT on Intel CPUs to help mitigate Foreshadow has hurt performance on some workloads too.
Essentially, any Intel processor from Sandy Bridge (2011) to Kaby Lake (2016) has, depending on the workload, taken modest to large performance hits.

Heck, the later CPUs post-2017 still have the exploits in hardware; they were just (mostly) patched in microcode/firmware before being shipped.
So, loss of performance and loss of features in both consumer and enterprise are quite obvious, and the loss of value is really where it hurts the most.

Intel's trust, especially in enterprise, is basically garbage at this point.
I would take the loss of a few hundred MHz in boost clocks over the loss of features, performance, and security any day - not counting the work hours/time, energy, and money to patch all of Intel's bullshit corner-cutting mistakes on a system-to-system basis.


Perhaps the level of exposure at the consumer level is low, but are you willing to take that risk and go totally unpatched in microcode/firmware and software against 22 exploits?
I sure as hell wouldn't.
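
For anyone who wants to check what their own box reports, Linux (kernel 4.15+) exposes the kernel's view of mitigation status under sysfs. A quick Python sketch, purely for illustration (Linux-only; the formatting is my own):

Code:
# Dump the kernel's view of CPU vulnerability mitigations (Linux 4.15+).
# Illustrative sketch only; run with Python 3 on a Linux machine.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.iterdir()):
    # Each file is named for an exploit class (meltdown, spectre_v2, l1tf,
    # mds, ...) and holds one line like "Mitigation: PTI" or "Vulnerable".
    print(f"{entry.name:20s} {entry.read_text().strip()}")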
 
Yeah... that's why, after patching, my 6700K @ 4GHz suddenly felt like a 2500K @ 3.3GHz for gaming and video editing.
Gaming performance, depending on the game, tanked, and video renders that used to take 10 minutes now take 15-25 minutes for the same job.

While I don't doubt your experience, since it tracks in the same direction as the measured results, you are describing your own impressions here.

I paid for 6700K-level performance, not 2500K-level performance - is Intel going to give me a refund?

Ask a lawyer? AMD had to pay up.

Perhaps the level of exposure at the consumer level is low, but are you willing to take that risk and go totally unpatched in microcode/firmware and software against 22 exploits?
I sure as hell wouldn't.

The threat surface for the consumer is so small that the performance difference may matter more.
 
We're talking about pricing, move along.

:facepalm:

Try using your eyes: that chart is literally about performance per dollar, and the drastic increase in that metric is almost entirely due to the large price drop.

Oddly, the comparison is against AMD's HEDT offering. How strange.
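
To put rough numbers on it: the 18-core i9-9980XE launched at $1,979 and the 10980XE lists at $979, so even if performance were identical (an assumption for this back-of-envelope, not a benchmark result), perf-per-dollar roughly doubles from the price cut alone:

Code:
# Back-of-envelope perf/$: the price cut alone roughly doubles the metric.
# Performance normalized to 1.0 for both parts - an assumption, not a benchmark.
old_price, new_price = 1979.0, 979.0   # 9980XE vs. 10980XE launch MSRPs
perf = 1.0

improvement = (perf / new_price) / (perf / old_price)
print(f"perf/$ improvement: {improvement:.2f}x")   # ~2.02x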
 
Intel doesn't really respond to AMD as much as they respond to demand. Intel's biggest issue isn't that AMD has become competitive, but that they're having issues shipping their own designs in volume.

I guess you don't realize that when demand is so high that you cannot ship enough of a product, you can raise prices or possibly keep them the same; you definitely do not lower them. You can attempt to spin it all you want, but in this case you'll only end up dizzy from the attempt. Intel would not have dropped prices unless AMD was a serious threat. That is a fact.
 
While I don't doubt your experience as it tracks in the same direction as fact, you are quoting your own feelings here.
Well, to be fair, the datacenter admins I know have had to continuously patch thousands of systems on a system-by-system basis due to the firmware/microcode updates, and I know they are getting tired of it and have started migrating to AMD Epyc-based systems.
They did tell me that many VM workloads and CPU-intensive tasks are taking much longer to complete after the exploit patches, or are performing poorly in general.

So I guess it is nice to see that the hundreds of systems I manage aren't the only ones getting performance hits? :p
 
Intel doesn't really respond to AMD as much as they respond to demand. Intel's biggest issue isn't that AMD has become competitive, but that they're having issues shipping their own designs in volume.

It's lunacy to suggest that Intel's MASSIVE and NEVER BEFORE SEEN price cuts are not a response to AMD kicking their ass. If not for AMD, Intel could keep plodding along like it has for years with no concern. They wouldn't bother cutting prices because there would be no incentive to do so without competition breathing down their neck. AMD isn't the only reason they slashed prices like they did, but there is zero chance they weren't a significant part of it.

Meh, it's all bitching. They resolved that with BIOS updates, and to be honest, if I asked 100 people "Did you buy these processors for single-core performance?", the answer would likely be no 90 percent of the time. I never did. I understand the bitching, but it's mostly fixed now for those who wanna stare at monitoring software lol.

People explained, several times, in the thread about the boost clocks why they were talking about it and why they thought it was an issue. You willfully chose to ignore those people and that's on you. Don't pretend you don't understand when all you did was cover your eyes and pretend rational explanations didn't exist.
 
Well, to be fair, the datacenter admins I know have had to continuously patch thousands of systems on a system-by-system basis due to the firmware/microcode updates, and I know they are getting tired of it and have started migrating to AMD Epyc-based systems.
They did tell me that many VM workloads and CPU-intensive tasks are taking much longer to complete after the exploit patches, or are performing poorly in general.

So I guess it is nice to see that the hundreds of systems I manage aren't the only ones getting performance hits? :p

Which is why I'm focusing on the consumer side, where additional cores don't add much but faster cores do. On the server side, these new CPUs will have their place now that the mitigations have been handled, but Intel is still behind on core count, and that matters for many enterprise workloads.
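
That consumer-side point is basically Amdahl's law: if a workload is only partly parallel, extra cores buy you very little, while a clock bump speeds up everything. A toy calculation (the 40%-parallel figure is an illustrative assumption, not data):

Code:
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), parallel fraction p, n cores.
# The 40%-parallel workload is an illustrative assumption for consumer use.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.40
cores_gain = amdahl_speedup(p, 16) / amdahl_speedup(p, 8)
print(f"8 -> 16 cores: {cores_gain:.2f}x")   # ~1.04x
print("+20% clock   : 1.20x on everything")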
 
It's lunacy to suggest that Intel's MASSIVE and NEVER BEFORE SEEN price cuts are not a response to AMD kicking their ass. If not for AMD, Intel could keep plodding along like it has for years with no concern. They wouldn't bother cutting prices because there would be no incentive to do so without competition breathing down their neck. AMD isn't the only reason they slashed prices like they did, but there is zero chance they weren't a significant part of it.



People explained, several times, in the thread about the boost clocks why they were talking about it and why they thought it was an issue. You willfully chose to ignore those people and that's on you. Don't pretend you don't understand when all you did was cover your eyes and pretend rational explanations didn't exist.

I understand. I read that. It wasn't an issue. Like I said, to each his own. I personally don't give two shits about 25 or 50MHz of single-core boost, which was fixed anyway. I am running 4.4GHz all-core on a 3900X. I didn't turn a blind eye; I was there for the reviews, and I bought it based on the reviews as others did.
 
Yeah... that's why, after patching, my 6700K @ 4GHz suddenly felt like a 2500K @ 3.3GHz for gaming and video editing.
Gaming performance, depending on the game, tanked, and video renders that used to take 10 minutes now take 15-25 minutes for the same job.

I cannot speak to rendering or video editing performance, but gaming performance for me on a gently overclocked (4.8GHz all-core) 8700K has not measurably changed since I got the CPU at launch.
I can speak to big development compilation tasks. I work on a large C++ project with a few million lines of code and keep lots of metrics on compile time. It's within 5% of the performance at launch, prior to mitigations.

Please do not take this as discounting your data (a different CPU, mostly different tasks); I'm just providing my experience. To me, the issues have... not been an issue. That said, I'd like them to address the issues in hardware and avoid the penalties all the same.
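
(For what it's worth, my "metrics" are nothing fancy: I just time clean builds and append the result to a log. A minimal sketch of the idea, with a placeholder build command:)

Code:
# Time a clean build and append the result to a CSV for trend-watching.
# The build command is a placeholder - substitute your own.
import csv, subprocess, time
from datetime import datetime

BUILD_CMD = ["make", "-j12"]   # placeholder; use your real build invocation
LOG_FILE = "build_times.csv"

start = time.monotonic()
subprocess.run(BUILD_CMD, check=True)
elapsed = time.monotonic() - start

with open(LOG_FILE, "a", newline="") as f:
    csv.writer(f).writerow([datetime.now().isoformat(), f"{elapsed:.1f}"])
print(f"Build took {elapsed:.1f}s")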
 
I cannot speak to rendering or video editing performance, but gaming performance for me on a gently overclocked (4.8GHz all-core) 8700K has not measurably changed since I got the CPU at launch.
I can speak to big development compilation tasks. I work on a large C++ project with a few million lines of code and keep lots of metrics on compile time. It's within 5% of the performance at launch, prior to mitigations.

Please do not take this as discounting your data (a different CPU, mostly different tasks); I'm just providing my experience. To me, the issues have... not been an issue. That said, I'd like them to address the issues in hardware and avoid the penalties all the same.
No, I totally get what you are saying. :)
For the 8XXX CPUs we have, I haven't seen huge performance hits like I have with the 2XXX to 7XXX CPUs; I'm definitely with you on that one.

Granted, while most 2XXX to 5XXX CPU-based systems are being phased out at this point due to their age, both Skylake and Kaby Lake based systems are still around and, unfortunately, were probably hit the hardest.
Even with SSDs, lots of RAM, and OS/software optimizations, the performance hits are still there, depending on the task and software; some are hardly noticeable if at all, and others are blatant.
 
Its lunacy to suggest that Intel's MASSIVE and NEVER BEFORE SEEN price cuts are not a response to AMD kicking their ass.

Do you support this point with the price cuts seen during the Pentium 4 vs. Athlon 64 era? Intel will price according to market demand.

That's how pricing works.
 
That's only a debacle if you bleed blue. Most of it was an uneducated user base expecting Intel-like boost behavior.

I'm going to assume that many, if not most, expected boost clocks to sustain for more than a millisecond or two.

That's a statistically irrelevant boost, which is exactly the argument behind the statements dismissing the 'boost clock fixes': they don't provide a real performance benefit.
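
How long boost actually holds is easy enough to eyeball yourself on Linux: poll the kernel's reported frequency while a single-threaded load runs. A rough sketch (cpu0 and the 0.1s interval are arbitrary choices on my part):

Code:
# Poll cpu0's reported frequency for ~5 seconds to see how long boost holds.
# Linux-only sketch; kick off a single-threaded load in another terminal first.
import time

FREQ_FILE = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"  # in kHz

for _ in range(50):
    with open(FREQ_FILE) as f:
        khz = int(f.read().strip())
    print(f"{khz / 1_000_000:.3f} GHz")
    time.sleep(0.1)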
 
Well considering that Intel used illegal tactics in order to keep AMD from being a bigger threat.....

AMD rode the same core for longer than Intel has ridden Skylake -- whatever tactics Intel used, AMD failed to perform.
 
AMD rode the same core for longer than Intel has ridden Skylake -- whatever tactics Intel used, AMD failed to perform.
Derangel is talking about all of the anti-consumer and OEM-conning tactics that Intel did indeed pull, and was found guilty of, back in the 2000s during the Netburst era.
AMD was kicking ass and taking names with the Athlon 64/X2, and Intel was losing in nearly every market segment with no alternative (the Core and Core 2 CPUs hadn't quite been released yet). Unfortunately, even though AMD won in court years later, the damage was done.

Intel used Netburst for roughly 8 years, from 2000 to 2008, with the Pentium 4 and Pentium D CPUs - I think Intel has screwed the pooch a bit more than AMD has.
No, the AMD FX processors were not that great, running clock-for-clock at about 55% of the IPC of the then-current Sandy Bridge and Ivy Bridge CPUs, and about 95% of the IPC of AMD's own earlier Phenom II CPUs.

AMD was a bit too early in betting that software would go fully SMP and heavily threaded; that turned out not to be the case, at least at the time of the FX launch in Q4 2011. In 2019, we are singing a different tune.
Their FX CPUs will not be missed, nor will their CMT design, but AMD has a clear win with Zen 2. With ARM and RISC-V both on the rise, Intel's own 10nm debacle, and the continuously growing number of exploits and performance hits on Intel's CPUs, Intel has one hell of an uphill battle from a pit it dug itself with its own arrogance and the market stagnation it created this decade.

In short, competition is good (for us).
 
Intel used Netburst for roughly 8 years, from 2000 to 2008, with the Pentium 4 and Pentium D CPUs - I think Intel has screwed the pooch a bit more than AMD has.

AMD barely upgraded the Athlon architecture; they did less than Intel has done with Core. 64-bit? IMC? Shrink-based clock increases? More cores?

Great -- but those aren't really architectural improvements, and they followed that track record up with Bulldozer. Hopefully they do better with Zen, but it's going to need a lot of work to continue to compete.

Intel has one hell of an uphill battle from a pit it dug itself

They bet poorly on their route to 10nm. I'd love to see an accounting of that, but that's on the process side.

Essentially, Intel stumbled on their 10nm process by what looks to be four years. TSMC, perpetually behind Intel up to this point, actually leaped ahead to '7nm', and at the same time, AMD put out a CPU design that wasn't half-assed for production on it.

The stars couldn't be more aligned for AMD to snipe a few fractions of the market and ignite their underdog fanbase.

However, there's no certainty that AMD will continue to improve Zen beyond where it is now, and there's no certainty that TSMC will continue to deliver process improvements in volume.

Given that TSMC '7nm' isn't really that advanced and that the current Zen 2 is still only squaring off against Skylake, it's unlikely that AMD will pull ahead in the near future. All it will take is for Intel to keep executing their plan to keep AMD at bay, much like Nvidia does.
 
AMD barely upgraded the Athlon architecture; they did less than Intel has done with Core. 64-bit? IMC? Shrink-based clock increases? More cores?

Great -- but those aren't really architectural improvements, and they followed that track record up with Bulldozer.
Which "Athlon" architecture are you talking about?
The move from Socket A to Sockets 754 and 939/940 were massive, and AMD had the first true dual-core x86-64 CPU to market.

If it weren't for AMD's 64-bit extensions, we might still be on x86-32 or heaven forbid, IA-64.
Bulldozer was a whole new microarchitecture, and was built from ground up - it has nothing to do with K10 or earlier - so what exactly do you mean?

The differences, and improvements from K8 to K10 were pretty massive, and Phenom II X4/X6 CPUs were very competitive in 2009 and 2010.
Not to mention, the IMC in the original Athlon 64 removed the need for a FSB completely and lowered latency radically - how is that not a massive improvement architecturally???

Are you actually being serious with your statements? o_O

Hopefully they do better with Zen, but it's going to need a lot of work to continue to compete.
AMD fixed the memory architecture issues of Zen/Zen+ with Zen 2; those were a massive performance problem when more than 16 cores were in use.
I'm not saying further improvements aren't needed, but what point are you trying to make? Or at least, what "work" are you talking about?

However, there's no certainty that AMD will continue to improve Zen beyond where it is now, and there's no certainty that TSMC will continue to deliver process improvements in volume.
While I agree with this, primarily because we can't see the future, I would trust further improvements in Zen 2/3 over anything Intel has to offer at this point.
The 22+ hardware exploits, Intel's history of lying and anti-consumer practices, its market stagnation of the 2010s, its failed attempt at 10nm on the desktop, etc. - all of this is snowballing out of control, and Intel is publicly losing market share to AMD's offerings, to ARM (depending on the market segment), and perhaps to RISC-V within the coming years.

Intel has knowingly stagnated the market throughout this decade, has quashed outside innovation on x86-64 (thank AMD and competition for bringing it back), and has fallen short on price/performance for the last few years.
The only company keeping x86-64 alive at this point is AMD, and I believe that within the next decade, Intel will become a company similar to Oracle and IBM, floating on its x86 ISA license sales and existing as 10% technology and 90% attorneys.
 
AMD barely upgraded the Athlon architecture; they did less than Intel has done with Core. 64-bit? IMC? Shrink-based clock increases? More cores?
Gosh, can you ever admit when AMD does something good? The IMC was one of the biggest changes since FPUs were socketed... even Intel followed suit afterward.
Uarch aside, it gave a huge latency reduction. That's what they focused on instead, and it took Inhell five years to catch them.

I'm going to assume that many, if not most, expected boost clocks to sustain for more than a millisecond or two.

That's a statistically irrelevant boost, which is exactly the argument behind the statements dismissing the 'boost clock fixes': they don't provide a real performance benefit.

Yes, there were some small performance changes, usually back to or slightly above launch-BIOS levels. Are you intentionally trying to re-write history?
 
Current Intel ring bus vs AMD mesh, Intel leads slightly in gaming. Intel mesh vs AMD mesh looks to be a very different story:

https://www.tweaktown.com/news/6827...ntels-i9-10980xe-3dmark-firestrike/index.html
Yeup, although I'm not putting much stock in that bench yet. AMD's multi-threaded IPC is terrific, and it's why they are shitting on Intel in servers, even in Intel-extension-optimized benchmarks lol.
Latency is a large part of that.
AMD has lower intra-CCX latency than even the 6700K and 7700K, which are the best from Intel. Once you go inter-CCX it's slower, but still quite a lot lower latency than Intel's mesh. This is what is killing Intel this time around in server and HEDT.
This is also why AMD is in no rush to drop a high-core-count Threadripper... they don't need to, for once.
 
Intel will become a company similar to Oracle and IBM, floating on their x86 ISA license sales and existing as 10% technology and 90% attorneys.

It is WAY too early to predict Intel's demise. Oracle, IBM, etc. all fell not just due to product mistakes but also management. Intel seems keenly aware of their leadership issues and has been working to fix that and get things back on track. 10nm is a dead end at this point, but if they get things sorted out, 7nm could bring back good competition. And let's be realistic here: you want Intel to make some kind of comeback. I wouldn't trust AMD alone any more than I would Intel, and I sure as hell don't want to see things get even more dominated by companies like Qualcomm and the other major ARM producers. Intel falling would be bad for consumers. They needed to get their asses kicked and need their ego popped a little more, but I don't think they're out of the game for good.
 
What am I supposed to be gathering from that link? The Physics score?
Not really sure. Graphics and Combined use the GPU, and since two different GPUs were used, those scores are out the window. I just ran Fire Strike to compare, and my Physics score is 26,114.
 
From that article: “Keep in mind however that this is likely not the final sample of the 10980XE and the performance gap could close drastically with improved clocks ( in other words, pinch of salt as always).”

Doesn't mean it's not true, either. Standard disclaimer.
 
For these processors, if we don't know the coolers and the amount of thermal throttling, it's mostly meaningless.
There's just as much fact to it as most of the baseless conjecture that gets thrown around here. Plus, given each company's recent CPU performance, it makes total sense.
 
wccftech is usually wrong. Their clickbait "leaks" have been wrong so many times they were banned from many subreddits lol.

According to them, Ryzen OCs to 5GHz on all cores hahaha.

Who would have thought a couple of years ago that AMD would be spanking Intel everywhere? Intel has basically left the desktop market to AMD because they can't compete. Hahaha.
 
Who would have thought a couple of years ago that AMD would be spanking Intel everywhere? Intel has basically left the desktop market to AMD because they can't compete. Hahaha.

Yeah, and if the EPYC 7502P is any indication of what the next Threadripper will be like, Intel is in trouble in HEDT too.
 
Who would have thought a couple of years ago that AMD would be spanking Intel everywhere? Intel has basically left the desktop market to AMD because they can't compete. Hahaha.

That’s a bit dramatic. In the hands of an overclocker Intel still handily beats AMD in (high Hz) gaming and single thread. AMD has the best value in HEDT but I think that matters even less in that realm.

We’ll see how it pans out with actual reviews but given what we know for 9900k vs 3800x I am skeptical of a 16 core AMD beating a 18 core Intel with proper setups.
 
That’s a bit dramatic. In the hands of an overclocker Intel still handily beats AMD in (high Hz) gaming and single thread. AMD has the best value in HEDT but I think that matters even less in that realm.

We’ll see how it pans out with actual reviews but given what we know for 9900k vs 3800x I am skeptical of a 16 core AMD beating a 18 core Intel with proper setups.

Ah yes, gaming and single-thread. Welcome to the age of multi-core. What res are you playing games at? 640x480? AMD is better and more efficient at multi-core, plus single-core is so close that I can definitely see an AMD 16-core beating an Intel 18-core.
 