Intel 18 core: disappointing early indications.

Meanwhile, getting back on topic before we get yelled at by Kyle, anyone have any guesses as to where Intel could be getting the apparent IPC gains for this chip? The mystery chip would have to be nearly 30% better clock-for-clock if the reported specs are true.
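
For reference, here's the napkin math behind that ~30% figure as a quick Python sketch; the scores and clocks below are hypothetical placeholders (we only have the leaked clocks to go on), not the actual Geekbench results:

```python
# Back-of-the-envelope check of what leaked numbers would imply clock-for-clock.
# All scores and clocks below are made-up placeholders, NOT real Geekbench data.

def implied_ipc_gain(new_score, new_clock_ghz, old_score, old_clock_ghz):
    """Ratio of per-clock performance, assuming score scales linearly with clock."""
    return (new_score / new_clock_ghz) / (old_score / old_clock_ghz) - 1.0

# Hypothetical single-core scores at the reported 3.2 GHz boost vs. a 4.4 GHz part:
gain = implied_ipc_gain(5200, 3.2, 5500, 4.4)
print(f"Implied clock-for-clock gain: {gain:.0%}")  # ~30% with these numbers
```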

I'm sure there are a few more unreported vulnerabilities that can be patched for previous offerings after this chip is released so as not to make liars out of them... :whistle:
 
Catch up?
Yes, what you are saying about the memory bottlenecks of Zen (2017) and Zen+ (2018) is true.
However, the memory architecture for Zen 2 (2019) has changed dramatically and has removed the bottlenecks found in the previous architectures. What the others here are saying is just that you need to catch up with the latest information on Zen 2, since what you are saying is only correct for the previous generations, not the latest.

This isn't a cut on you or anything, so please don't take it that way. :)

This is why, with the original Threadripper architecture, the 16-core version would actually outperform the 32-core version under heavy memory loads. You are correct on that, and it has been demonstrated in both synthetic and real-world benchmarks.
However, that bottleneck has been redesigned away entirely with Zen 2's chiplet design, which is what the others here are trying to say.
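
For anyone who wants to see that asymmetry on their own box, here's a rough Python sketch (Linux only, standard sysfs layout assumed) that dumps the NUMA topology; on a pre-Zen 2 32-core part, half the nodes show CPUs with no local RAM, while Zen 2 presents a single uniform node:

```python
# Dump NUMA nodes with their CPUs and local memory (Linux sysfs).
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    with open(os.path.join(node, "meminfo")) as f:
        # First line looks like: "Node 0 MemTotal:  32768 kB"
        total_kb = int(f.read().split("MemTotal:")[1].split()[0])
    print(f"{os.path.basename(node)}: CPUs {cpus}, local RAM {total_kb // 1024} MiB")
```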

In other words, you aren't wrong, just a year behind, that's all. (y)
 
Will be interesting to see how this plays out. AMD crushes Intel in high thread benchmarks. This had me regretting my decision to go Intel on my build. After some further research, it turns out that AMD benchmarks well but falls flat when it comes to real workloads.

The problem with AMD’s chiplet approach is that the performance gets bottlenecked whenever the task has to utilize a resource outside of the chiplet. Synthetic benchmarks are fairly trivial tasks repeated quickly, so the chiplets score well there. Real media-laden workloads (think 50+GB RAM) end up cutting performance in half on AMD.

If AMD can get real workloads up to snuff, they’re well positioned to put a world of hurt on one of Intel’s highest margin segments.

You are a year behind. Welcome to 2019: Zen 2 has been released, and it addresses this, lol!
 
It's an engineering sample running at 2.2/3.2 GHz. I'm sure it won't run at those speeds in production silicon.

It's Cascade Lake with poor clock reporting by Geekbench:
https://www.tomshardware.com/news/intel-18-core-cascade-lake-x-cpu,40180.html

It may in fact get higher clocks as this is an engineering sample as stated.

God knows why Intel is taking this long to release HEDT on 14nm++. The 9000X series should have been Cascade Lake, but instead they released a soldered Skylake-X, most likely due to supply issues on 14nm++.

The oddball B365 motherboards, whose chipset went back to 22nm, are good evidence of this.
 
It's very well established on the user forums for the various rendering products that the 2990WX vs. the 9980XE ends up being a wash in real-world usage, even though the synthetic benchmarks tell a very different story. You can see a bit of this when you look at the built-in benchmarks for Corona. On top of that, there's also viewport performance, where TR2 typically gets completely smoked by Skylake.

Lmao, that's not real world, it's a damn script.

And since you didn't bother to read what the author is writing... pointless test is pointless.

Yes, Xeons are very low-clocked CPUs, so that is to be expected. Viewport performance in 3D apps, and especially Cinema 4D, relies heavily on high single-core clocks and IPC. Sorry!

You are a year behind. Welcome to 2019: Zen 2 has been released, and it addresses this, lol!

Meanwhile, Twitter, Google, Amazon, and Microsoft are all jumping on Epyc.
 
Part of the reason for that has to do with Intel's supply problems as of late.

And the other part may have to do with the newest AMD EPYC processors supporting more and faster memory out of the box, along with more and faster PCIe lanes per core, at a relatively lower TDP per unit of performance and a much lower cost per thread across the top of their product stack ;)

I mean, for datacenters those are all pretty important points.
 
Thunderdolt could not be more aptly named...

Meanwhile, getting back on topic before we get yelled at by Kyle, anyone have any guesses as to where Intel could be getting the apparent IPC gains for this chip? The mystery chip would have to be nearly 30% better clock-for-clock if the reported specs are true.
They are counting security mitigation fixes in their recent figures, I'd bet my rig on it.
 
Interesting article and even more interesting comments > https://www.androidauthority.com/amd-vs-intel-994185/

Terrible article; in the performance-per-watt comparison they implied actual parity in consumption and gave a nod to the 9900K as being power efficient.

Intel's TDP is misleading, to say the least, especially in comparison to AMD's. Intel goes way beyond the paper TDP during actual workloads, as shown in pretty much every benchmark from the serious review sites, while AMD actually stays within its TDP, smartly throttling itself so as not to exceed the allotted power even under heavy loads.
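
You don't have to take the review sites' word for it, either. Here's a minimal Python sketch that reads the RAPL package energy counter on Linux to estimate real power draw; the sysfs path and permissions are assumptions (they vary by kernel and platform), and it ignores counter wraparound, so treat it as a ballpark check, not a lab meter:

```python
# Rough package-power check via Intel RAPL on Linux (may require root).
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0 energy counter

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

SECONDS = 10
start = read_uj()
time.sleep(SECONDS)  # run the heavy workload during this window
end = read_uj()
watts = (end - start) / 1e6 / SECONDS  # microjoules -> joules -> watts
print(f"Average package power: {watts:.1f} W (compare against the rated TDP)")
```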
 
Terrible article; in the performance-per-watt comparison they implied actual parity in consumption and gave a nod to the 9900K as being power efficient.

Intel's TDP is misleading, to say the least, especially in comparison to AMD's. Intel goes way beyond the paper TDP during actual workloads, as shown in pretty much every benchmark from the serious review sites, while AMD actually stays within its TDP, smartly throttling itself so as not to exceed the allotted power even under heavy loads.
TDP especially goes out the window with AVX workloads; actual power draw is often double the rated figure.
The thing with this whole argument, bitching about Zen 2 desktop CPUs with high RAM amounts, is that, as mentioned earlier, the NUMA issues are mostly solved and the RAM limits only really matter for professional use cases. Yes, you can run 64GB on a Zen 2 desktop CPU with 16GB sticks, and maybe more in the future if you can find 32GB sticks. But really, that's what TR3 will be for (semi-pro/workstation), and for professional, workstation, or server workloads, that's what Epyc 2 is for. And we all know Epyc 2 basically wins every benchmark, quite often doubling the performance of its nearest Intel competitor, while using less power to boot, at a fraction of the cost.

edit: Thunderdolt, you may find this very interesting:
https://www.servethehome.com/amd-epyc-7002-series-rome-delivers-a-knockout/
And yes, they use much more than 50GB of RAM.
 
.........unreleased AMD chip..........

I'm pretty sure that Rome has been launched, as it has been reviewed by multiple websites already, so it's not like it's an unknown anymore.
Also, I'm pretty sure Rome dismantles anything Intel has out right now.

2nd half of username checks out real good.........
 
You are a year behind. Please catch up and show us where this is true with the processors AMD has released this year, as well as Rome, which is going to be released very, very soon along with the 3950X; that is what it is being compared to in the article in the OP. Nobody here is talking about AMD's performance from last year's processors.
Soon? They have already started benchmarking them, and they are killing Intel in threaded benchmarks.
 
TDP especially goes out the window with AVX workloads; actual power draw is often double the rated figure.
The thing with this whole argument, bitching about Zen 2 desktop CPUs with high RAM amounts, is that, as mentioned earlier, the NUMA issues are mostly solved and the RAM limits only really matter for professional use cases. Yes, you can run 64GB on a Zen 2 desktop CPU with 16GB sticks, and maybe more in the future if you can find 32GB sticks. But really, that's what TR3 will be for (semi-pro/workstation), and for professional, workstation, or server workloads, that's what Epyc 2 is for. And we all know Epyc 2 basically wins every benchmark, quite often doubling the performance of its nearest Intel competitor, while using less power to boot, at a fraction of the cost.

edit: Thunderdolt, you may find this very interesting:
https://www.servethehome.com/amd-epyc-7002-series-rome-delivers-a-knockout/
And yes, they use much more than 50GB of RAM.
He'll probably ignore it; it doesn't fit his narrative. :)
 
You are correct, it was actually released on August 7th. I thought it was going to be released in September along with the 3950X, and that the benchmarks were just a demonstration of what was coming.

From AnandTech. The audience reacted in unison to all the "when" questions, sighing until she said what they wanted to hear.

05:40PM EDT - Q: How is AMD approaching the workstation market? How does that pertain to threadripper? A: We love the workstation market, and yes there will be a next generation of Threadripper. Q: Can you say when? A: If I said soon, is that enough? Q: No? A: How about within a year? Q: Can you say if 2019? A: How about this - you will hear more about Threadripper in 2019.
 
If AMD can get real workloads up to snuff, they’re well positioned to put a world of hurt on one of Intel’s highest margin segments.

It sounds like the new CPU released a few days ago does this. Thanks for agreeing with me, everyone.
 
It sounds like the new CPU released a few days ago does this. Thanks for agreeing with me, everyone.

NO, the entire Zen 2 line does this, you dolt, not just Rome! (I am referring to AMD NOT falling flat, as you claim it does; nothing more.) It's time you pull your head out of the sand, buddy. Go look at your "proof" again... look where the Zen 2 line is... look where the 3800X is... Stop looking at last year's processors!!
 
NO, the entire Zen 2 line does this, you dolt, not just Rome! (I am referring to AMD NOT falling flat, as you claim it does; nothing more.) It's time you pull your head out of the sand, buddy. Go look at your "proof" again... look where the Zen 2 line is... look where the 3800X is... Stop looking at last year's processors!!

He's just being a troll now because he made up some complete nonsense, he knows he's wrong, and everyone called him out on it.
 
The last time AMD was ahead, it took Intel 4 or 5 years to right itself, so I expect the same this time, and it really looks like it. The A64 dominated both the consumer and server side while Intel's offerings were literally "on fire". Then Core 2 was released, and here we are again.
 
I think Intel is in a much better place than they were in those days, though. They've got an architecture in the pipeline that will compete very well and they're getting their process technology back on track already. I give it a year or two before things really normalize.
 
I think Intel is in a much better place than they were in those days, though. They've got an architecture in the pipeline that will compete very well and they're getting their process technology back on track already. I give it a year or two before things really normalize.

Nah, they are screwed. They are behind on multiple fronts, from an architectural flaw (speculative execution) to monolithic die designs that exacerbate an already beleaguered process failure. They failed on 10nm for a reason; it wasn't an accident.
 
Will be interesting to see how this plays out. AMD crushes Intel in high thread benchmarks. This had me regretting my decision to go Intel on my build. After some further research, it turns out that AMD benchmarks well but falls flat when it comes to real workloads.

The problem with AMD’s chiplet approach is that the performance gets bottlenecked whenever the task has to utilize a resource outside of the chiplet. Synthetic benchmarks are fairly trivial tasks repeated quickly, so the chiplets score well there. Real media-laden workloads (think 50+GB RAM) end up cutting performance in half on AMD.

If AMD can get real workloads up to snuff, they’re well positioned to put a world of hurt on one of Intel’s highest margin segments.

I can see large RAM-utilizing workloads performing poorly on the single-chiplet 3000 series chips. They have 1/2 the RAM performance of the 3900x (and presumably, the 3950x as well).
 
They have half the WRITE performance. Read (and copy, somehow) both perform the same on the 3900X and the lower SKUs.
 
I think Intel is in a much better place than they were in those days, though. They've got an architecture in the pipeline that will compete very well and they're getting their process technology back on track already. I give it a year or two before things really normalize.

Intel may be in a better place from an architectural standpoint, but they're definitely worse off process-wise than they were in the Pentium 4 days.
 
Thanks for agreeing with me, everyone.
We don't agree with you, and have directly pointed out your missteps again and again.

He's just being a troll now because he made up some complete nonsense, he knows he's wrong, and everyone called him out on it.
This, however, we do all agree with. :D
 
Intel may be in a better place from an architectural standpoint, but they're definitely worse off process-wise than they were in the Pentium 4 days.

Oh, definitely, but at least they were already well on their way to resolving their issues when AMD gut-punched them this time. Intel fumbled around for three years after the launch of the Athlon 64 before they got an effective response out the door with Conroe. Even the Athlon 64 X2 had been out for a year at that point.
 
I can see large RAM-utilizing workloads performing poorly on the single-chiplet 3000 series chips. They have 1/2 the RAM performance of the 3900x (and presumably, the 3950x as well).
Got any proof of this, or just guessing? Plenty of benchmarks are available, and they don't show these CPUs running into a wall as claimed.
 
I can see large RAM-utilizing workloads performing poorly on the single-chiplet 3000 series chips. They have 1/2 the RAM performance of the 3900x (and presumably, the 3950x as well).

Not really.

They have half the write performance, but per chiplet there is no difference.
Read is the same, and you read more than you write, so it's a non-issue. I was worried about this too, but there is only one place where you see it, and that's in AIDA64 and other memory benchmarks that show total throughput.
If you check the IPC tests around the web, there are no significant differences between single- and two-chiplet parts.
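
If anyone wants to sanity-check the write-vs-read split themselves, here's a crude NumPy sketch of my own (not AIDA64's actual method, and absolute numbers will differ); on a single-chiplet part, the write figure should come out well below the read figure:

```python
# Crude streaming read vs. write bandwidth probe. This is only a rough
# stand-in for AIDA64's memory tests; it shows the ratio, not exact numbers.
import time
import numpy as np

N = 128 * 1024 * 1024              # 128M float64 values = 1 GiB working set
a = np.empty(N, dtype=np.float64)
a.fill(0.0)                        # warm-up pass to fault in all pages first

t0 = time.perf_counter()
a.fill(1.0)                        # timed streaming write of 1 GiB
write_gibs = 1.0 / (time.perf_counter() - t0)

t0 = time.perf_counter()
_ = a.sum()                        # timed streaming read of 1 GiB
read_gibs = 1.0 / (time.perf_counter() - t0)

print(f"write ~{write_gibs:.1f} GiB/s, read ~{read_gibs:.1f} GiB/s")
```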
 
Got any proof of this, or just guessing? Plenty of benchmarks are available, and they don't show these CPUs running into a wall as claimed.

Synthetic benchmarks are (as noted by another poster above) showing half the write performance, and I've read a lot of people who moved from 1xxx and 2xxx chips up to 3xxx chips asking about it over on Overclockers.net. While I am personally running a 3900X (and don't see this discrepancy), my cousin has a 3600X. I've not noticed any kind of penalty in anything either of us runs, but if synthetic tests are noticing this difference, then there is a >0% chance that some niche-but-real-world workload out there will suffer from it.

This is also pretty much the same reason why, when people say "Well, my overclocked system only crashes in Prime95, but Prime95 does completely unrealistic things that no REAL workload would ever do," I tell them that their system is not stable. If even one program can trigger that condition, then it COULD, however unlikely it may seem, occur in other software, because nobody personally writes every piece of software they run on their PC. You have no way of knowing what is really in somebody else's code and what conditions will trigger it.

I was merely speculating that it would not surprise me, given the limited parameters given by Thunderdolt, that it *could* be a possible explanation for the discrepancy he is reporting in *his* high-RAM workload. I did not mean to imply anything other than that.
 
Synthetic benchmarks are (as noted by another poster above) showing half the write performance, and I've read a lot of people who moved from 1xxx and 2xxx chips up to 3xxx chips asking about it over on Overclockers.net. While I am personally running a 3900X (and don't see this discrepancy), my cousin has a 3600X. I've not noticed any kind of penalty in anything either of us runs, but if synthetic tests are noticing this difference, then there is a >0% chance that some niche-but-real-world workload out there will suffer from it.

This is also pretty much the same reason why, when people say "Well, my overclocked system only crashes in Prime95, but Prime95 does completely unrealistic things that no REAL workload would ever do," I tell them that their system is not stable. If even one program can trigger that condition, then it COULD, however unlikely it may seem, occur in other software, because nobody personally writes every piece of software they run on their PC. You have no way of knowing what is really in somebody else's code and what conditions will trigger it.

I was merely speculating that it would not surprise me, given the limited parameters given by Thunderdolt, that it *could* be a possible explanation for the discrepancy he is reporting in *his* high-RAM workload. I did not mean to imply anything other than that.
I understand; it was reported by another user earlier in this thread as if it were a known thing and very typical. In reality, there may be a specific task somewhere that suffers, but it's far from typical, and far from AMD falling flat on its face in heavily threaded workloads. He didn't actually report anything besides speculation, which is why we were questioning it and seeing if anyone had actually found evidence of it.
So his statement about >50GB makes me doubt he was talking about a 3600-class CPU; more likely Rome, or possibly TR?
 