cageymaru

Intel's Executive VP and GM of the Data Center Group, Navin Shenoy, has released a statement announcing the new Intel data center roadmap. First he announced new platform technologies: Intel Optane DC persistent memory, which adds a larger persistent memory tier between DRAM and SSDs; new "Cascade Glacier" SmartNICs; and laser-based silicon photonics connectivity. Finally, he laid out the plan for upcoming Intel Xeon processors.

Cascade Lake is a future Intel Xeon Scalable processor based on 14nm technology that will introduce Intel Optane DC persistent memory support and a set of new AI features called Intel DL Boost. This embedded AI accelerator is expected to speed deep learning inference workloads, delivering image recognition up to 11 times faster than the current-generation Intel Xeon Scalable processors did at their July 2017 launch. Cascade Lake is targeted to begin shipping late this year.

Cooper Lake is a future Intel Xeon Scalable processor that is based on 14nm technology. Cooper Lake will introduce a new generation platform with significant performance improvements, new I/O features, new Intel DL Boost capabilities (Bfloat16) that improve AI/deep learning training performance, and additional Intel Optane DC persistent memory innovations. Cooper Lake is targeted for 2019 shipments.

Ice Lake is a future Intel Xeon Scalable processor based on 10nm technology that shares a common platform with Cooper Lake and is planned as a fast follow-on targeted for 2020 shipments.
 
This roadmap is... confusing. Where is the benefit of these new CPUs for anything other than AI workloads? I don't see it, and I think it's clear there are no large performance gains, or they would have mentioned them.
 
This seems to just be more spin to try to cover up that they are up schitt's creek in the server market because of their 10nm process failure.

Let's see how long they can deceive the investors. I'd expect the stock to take a real drubbing at some point when investors finally realize how big of a deal this 10nm process issue is.

They aren't even talking about shipping 10nm until Ice Lake in 2020, and that's the gimped 10nm attempt to rescue the failed process; it performs nothing like what they were originally supposed to release back in 2015. Rumors suggest it is closer to the equivalent of a 12nm process.

I suspect investors have seen these reports but are uncertain whether this is real or just an attempt by a short seller to damage Intel stock. It could always be the latter, but I don't really think so.
 
 
Honestly, all I saw was some announcements of better accessories. Nothing about the new processors said "New," "Innovative," "Faster," or "I need to spend money now!"
 
Intel never-ever-ever dreamed that they'd be f0cked face first in the data center - cloud infrastructure - business by AMD. This is what happens when you fleece your customers.
 
Optane "RAM", for when you need to hold onto those pesky client decryption keys so the NSA can get their paws on them without anyone knowing.
 
Sounds like they couldn't improve performance or efficiency much, so they added some features to try and keep some of their customers from looking elsewhere. I don't sense much urgency from Intel, but if they don't get a solid 10nm launch in 2020 they may be in some seriously hot water.
 
Intel is doing a really good job of keeping buyer's remorse away from my new AMD EPYC purchases.
 
https://semiaccurate.com/2018/08/07/intel-has-no-chance-in-servers-and-they-know-it/

Intel has no chance in servers and they know it

Intel has a cunning plan there too, raise prices from Purley’s ~$13,000 to ~$20,000. No that isn’t a joke, a 6-8% performance boost almost completely due to TDP raises comes with an ~$7000 price increase. Did we mention AMD’s Epyc, which is about 15% slower on a per-socket basis, costs less than 1/4th as much? And has more PCIe lanes, more memory channels, more cores, but does take more energy. Over the service lifetime, SemiAccurate feels safe in claiming that an Epyc box won’t consume very much of the $15,000+ delta, per CPU mind you, in electricity even at the high rates in some countries.

...volume for Cascade won't start its ramp until ~1Q before AMD's monster Rome CPU.

It won't be a fair fight. Why? Rome will beat Cascade by more than 50% in per-socket performance, likely tie or win on a single-threaded basis, and more than double Cascade's core count. Please note that by more than 50% we don't mean a little more, we mean a lot more; think abusive rather than hair's-width margins.

So, yeah.
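To put rough numbers on the power-vs-price point in that quote, here is a quick back-of-envelope sketch in Python. The power delta, electricity rate, service life, and PUE are all assumed values for illustration, not figures from the article:

```python
# Back-of-envelope check on the power-vs-price claim above. Every input here
# is an assumption for illustration, not a figure from the article.

extra_watts = 60            # assumed extra per-socket draw for the Epyc box
hours_per_year = 24 * 365
years_in_service = 4        # assumed service lifetime
price_per_kwh = 0.30        # assumed "high rate" electricity, USD/kWh
pue = 1.5                   # assumed datacenter overhead (cooling, etc.)

extra_kwh = extra_watts / 1000 * hours_per_year * years_in_service * pue
extra_cost = extra_kwh * price_per_kwh

print(f"Extra electricity over {years_in_service} years: ~${extra_cost:,.0f}")
# -> roughly $950, a small fraction of the $15,000+ per-CPU price delta
```

Even if you triple the assumed power delta or the rate, the energy cost stays well short of the quoted price gap, which seems to be SemiAccurate's point.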
 
The real question that we tech people are not asking/answering:

But a data center CTO would ask: why replace current-gen Xeons if we can simply "upgrade" them with new Intel tech? Intel is offering upgrades to existing CPUs.
 
I'm tired of cores I can't use.
Give me 128 MB of L3 cache.
14nm is fine, 4 GHz @ 95 watts is fine.
If you hit a brick wall, go to L1, L2, and L3 cache.
Why is this so difficult to implement?
The i7-5775C @ 3.3 GHz got as much work done as my i7-4790K @ 4.6 GHz,
and that was due to 128 MB of "fake" L4 cache (eDRAM).
 
I can't see Intel keeping the performance crown unless they move to chiplets and eat the packaging costs to scale to insane core counts. Cooper Lake might be such a design, and Ice Lake-SP more than likely is. Cascade Lake is basically a fixed-up Skylake-SP, so the per-clock performance differences will mainly stem from not needing the Spectre/Meltdown patches.

The one thing Intel does have going for them is the package options: OmniPath, FPGAs, and soon Nervana machine learning acceleration. If your workloads can leverage those, then they'll still be worth considering. Ditto for Optane DIMMs, AVX-512, and >4-socket support.

IF AMD can produce the promised ~10 to 15% IPC increase alongside a core count increase with the Zen 2 based Rome model, there is very little reason to stick with Xeons on single and dual socket servers unless you can leverage some of the features I mentioned above.

AMD isn't alone here either, as IBM still offers POWER and there are a handful of ARM players lining up to take care of the low end. I can't say Intel hasn't been innovating here (their list of options/special features is quite extensive), but losing the performance crown has made the server market really interesting again.
 
I wonder if AMD will hold off on increasing core counts for the Zen 2 architecture. If it were me, I would make Zen 2 all about large, double-digit IPC increases and improved efficiency and thermals, then make Zen 3 the knockout blow to Intel with minor architectural tweaks and more cores.
 
I'm tired of cores I can't use.
Give me 128 MB of L3 cache.
14nm is fine, 4 GHz @ 95 watts is fine.
If you hit a brick wall, go to L1, L2, and L3 cache.
Why is this so difficult to implement?
The i7-5775C @ 3.3 GHz got as much work done as my i7-4790K @ 4.6 GHz,
and that was due to 128 MB of "fake" L4 cache (eDRAM).

I would love a 5775C successor. Mine does 4.2 GHz and uses a ton of power.
 
The real question that we tech people are not asking/answering:

But a data center CTO would ask: why replace current-gen Xeons if we can simply "upgrade" them with new Intel tech? Intel is offering upgrades to existing CPUs.
If you can get better performance over the lifetime of that upgrade, and it offsets the cost of the upgrade, then there's no reason not to (assuming there is no Intel-only tech that your software/hardware relies on).
 
If you can get better performance over the lifetime of that upgrade, and it offsets the cost of the upgrade, then there's no reason not to (assuming there is no Intel-only tech that your software/hardware relies on).

(Software upgrade + setup) vs. (new motherboard + CPU + hardware setup + software setup): the cost of labor to make the switch might be higher, and labor is usually more expensive than the sum of all the parts.
 
(Software upgrade + setup) vs. (new motherboard + CPU + hardware setup + software setup): the cost of labor to make the switch might be higher, and labor is usually more expensive than the sum of all the parts.
By cost of upgrade, I meant the total cost of the upgrade, which would include all of that (and they could determine roughly what that would be beforehand).

Of course labor would cost more; we're talking about a $3-15k system here (for a single server, depending on the amount of RAM and storage). But it's not like you have to customize each server in every situation. I'm sure some would be able to use them as configured from their supplier and would just need to hook them up, install the OS image, and test them. They'd have to do two of those steps even if they just upgraded the CPU, and additionally they'd have to tear down the rack to upgrade it (or send it off, but I doubt they'd do that).
 
The only server at the office that needs more CPU power is my SQL server.
It's currently running dual 10-core CPUs.

I don't want more cores; I need better performance per core.
SQL is licensed per core, so more cores mean more expensive SQL licenses.

I might be interested in a new server when I can get a couple of 10- or 12-core CPUs running higher than 4 GHz, and a couple dozen NVMe SSD drive slots, for a reasonable price.
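To make the per-core licensing pain concrete, a small sketch; the per-core price below is an assumed round number for illustration, not actual Microsoft pricing:

```python
# Why "more cores" hurts here: SQL Server licensing scales with core count,
# independent of how fast each core is. The price below is an assumption.

price_per_core = 7000     # assumed per-core license cost, USD
cpus = 2

dual_10_core = 10 * cpus * price_per_core   # the current dual 10-core box
dual_16_core = 16 * cpus * price_per_core   # a hypothetical "more cores" upgrade

print(f"Dual 10-core: {10 * cpus} core licenses, ~${dual_10_core:,}")
print(f"Dual 16-core: {16 * cpus} core licenses, ~${dual_16_core:,}")
# The extra ~$84,000 in licenses is why faster cores beat more cores
# for a per-core-licensed workload.
```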
 
I would love a 5775C successor. Mine does 4.2 GHz and uses a ton of power.

Supposedly some U/mobile variant has an upgraded version with a better-delegated cache.
Not sure who this helps out other than me, but it's a godsend for pro audio, DAWs, and live performance venues.
I crunch the numbers mostly in my DSP rack (18 Analog Devices SHARC processors), but for the native applications that L4 victim cache worked great; if it were redesigned or moved to L3, it would be an incredible performance boost.
I've been told real-time flight simulators are very similar to the real-time audio apps I use.
 
I don't want more cores; I need better performance per core.
SQL is licensed per core, so more cores mean more expensive SQL licenses.

I don't know if this still works with licensing, but you could get a higher-core-count CPU and disable some cores while keeping the full L3 cache. With cores disabled, the clocks tend to sit in the higher turbo bins fairly regularly.

Supposedly some U/mobile variant has an upgraded version with a better-delegated cache.

Skylake did change how the L4 works compared to Broadwell: L3 can snoop between sockets, but the L4 is now tied to the socket's memory controller. The implication when this was disclosed was that Skylake-SP might actually have some eDRAM options, which would have been excellent for some server workloads. However, it has only been found on consumer chips with integrated graphics.

Not sure who this helps out other than me, but it's a godsend for pro audio, DAWs, and live performance venues.
I crunch the numbers mostly in my DSP rack (18 Analog Devices SHARC processors), but for the native applications that L4 victim cache worked great; if it were redesigned or moved to L3, it would be an incredible performance boost.
I've been told real-time flight simulators are very similar to the real-time audio apps I use.

Biamp Tesira Server in that audio rack?
 
I don't know if this still works with licensing, but you could get a higher-core-count CPU and disable some cores while keeping the full L3 cache. With cores disabled, the clocks tend to sit in the higher turbo bins fairly regularly.

If you are running virtualized, you can count just the cores assigned to the virtual machine(s), so this licensing mode could work to allow more peak performance by only using some cores in turbo mode, but that's an expensive hardware solution.

The other licensing choice is to count the physical cores (not the hyperthreaded cores) on all the CPUs in the server.
I have dual 10-core CPUs, so I need 20 licenses. It doesn't matter how many virtual cores are assigned to each VM running SQL, as long as they are all on the same server.

This works better for us, since we have more than one virtualized SQL server and I can allocate the cores as needed between the VMs.
It also allows me to spin up a test SQL instance as needed without having to worry about the licensing.

FYI: we have enterprise licensing which is probably different than retail licensing.
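A quick sketch of how the two counting models described above compare; the host core count is the dual 10-core setup mentioned earlier, while the VM layout is hypothetical and per-VM core minimums are ignored for simplicity:

```python
# Comparing the two SQL licensing models described above.

physical_cores = 2 * 10                               # dual 10-core host
sql_vms = {"prod": 12, "reporting": 8, "test": 4}     # hypothetical vCPU counts

# Model A: license the vCPUs assigned to each SQL VM individually
per_vm_licenses = sum(sql_vms.values())

# Model B: license every physical core on the host once and run any number
# of SQL VMs on that host
per_host_licenses = physical_cores

print(f"Per-VM licensing:   {per_vm_licenses} core licenses")
print(f"Per-host licensing: {per_host_licenses} core licenses")
# With these three VMs (24 vCPUs total), licensing the 20 physical cores is
# already cheaper, and the test VM effectively comes along for free.
```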
 