AMD Wants to Retake the Datacenter

AlphaAtlas, [H]ard|Gawd (Staff member) · Joined Mar 3, 2018 · Messages: 1,713
In a presentation, AMD illustrated how their EPYC processors will meet the growing demands of the datacenter market. AMD says they're looking to shed the "incrementalism" that's been plaguing the industry by implementing radical changes in server design. More specifically, they are targeting 3 key use cases: Enterprise Hosting and SaaS Providers, Memory Intensive High Performance Computing, and the Virtualization Market.

AMD claims that 12% of Enterprise Hosting and SaaS providers don't have their needs met. 1-socket EPYC servers provide more performance than competing 2-socket solutions in this highly I/O-bound role, all while offering a lower TCO. Meanwhile, about 18% of HPC customers are limited by memory or PCIe bandwidth and core count, while about 55% of the virtualization and cloud market is hampered by security vulnerabilities, I/O capacity, or CPU core counts. On top of that, as current datacenter servers age, performance drops while support costs increase dramatically.

In light of these performance deficiencies and increasing costs, IDC predicts that "6.5M 2-socket servers need to be upgraded in the next 12 months." Thanks to the EPYC platform's relatively low total cost of ownership and high performance per socket, as well as a solid roadmap, AMD seems to think EPYC will account for a large portion of those upgrades. AMD already has 15 system partners, along with some high profile customers like Tencent, Microsoft, The University of Notre Dame, and Dropbox deploying first generation servers, but they claim that "we've only just begun."

It's been a monumental year and a half for AMD EPYC. Since launch, we've accumulated more than 15 partners around the world that have created more than 50 server platforms to support all types of workloads, and impressive customer wins across hosted service providers, HPC and public cloud. But now we're attacking a new market. This opportunity is massive. According to IDC, 6.5M 2-socket servers need to be upgraded in the next 12 months. It is time to declare war on the aging, outdated infrastructure of the on-premise, virtualized datacenter. It's time to disconnect those old SANs, move to a hyperconverged infrastructure, get more secure VMs, and get up to a 45% reduction in 3-year TCO by using AMD EPYC.
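To make the quoted "up to 45% reduction in a 3-year TCO" claim concrete, here is a trivial arithmetic sketch; the baseline dollar figure is a made-up example, not a number from AMD's presentation.

```python
# Illustrative arithmetic only: the baseline TCO below is a hypothetical
# example figure, not a number from AMD's presentation.
baseline_3yr_tco = 100_000      # hypothetical 3-year TCO on the old platform, in dollars
claimed_max_reduction = 45      # AMD's headline "up to 45%" figure, as a percent

# At the claimed maximum, the new platform costs (100 - 45)% of the baseline.
epyc_3yr_tco = baseline_3yr_tco * (100 - claimed_max_reduction) // 100
savings = baseline_3yr_tco - epyc_3yr_tco

print(epyc_3yr_tco)  # 55000 -> 55% of the original cost at the claimed maximum
print(savings)       # 45000 saved over three years in this example
```

Note the "up to" hedge in the original quote: the realized reduction depends entirely on what the baseline infrastructure looks like.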
 
We've had about 30 servers on order for Intel products, and they're now 60 days late on delivery due to supply issues. We're about to make the switch to EPYC if they don't ship within the next 2 weeks. I can definitely see AMD gaining some market share with the show Intel is putting on.

Reps say they can ship out EPYC within 7 days, hmmm.
 
Wonder if Intel has an ice-pack for their nards yet? They have, after all, taken an epic kick to the nuts.
 
We've had about 30 servers on order for Intel products, and they're now 60 days late on delivery due to supply issues. We're about to make the switch to EPYC if they don't ship within the next 2 weeks. I can definitely see AMD gaining some market share with the show Intel is putting on.

Reps say they can ship out EPYC within 7 days, hmmm.
There are some reports that Intel is giving discounts to companies that have EPYC on order to get them to switch, even if Intel can't supply in the short term..
 
Hyperconverged is an absolute pain in my balls these days.

My company has a customer whose small datacenter is:

isolated and not connected to any outside network, with no connections out of the building;
actually three separate networks at different classification levels that don't match each other;
full of old equipment nearing EOS, with a 1/3 tech refresh target that they consistently fail to follow through on with an actual purchase;
and run by a senior engineer who is convinced VMware is the only way to go and is following VMware's drive toward HCI without any justification other than "VMware is going HCI, so we have to go HCI."

He will not step back and consider retaining Converged Infrastructure as an option. He is trying to make the decision for our customer instead of developing options and letting the customer choose for themselves how to spend their money. Not the way I think you keep a contract long term.
 
I hope Intel comes back and comes back hard. The public perception is that Intel is now doing really bad. Not sure how true this is.

I'm happy for AMD. My hope is that they are able to vastly increase their profits and that they might, in turn, be able to help fund their graphics division even further.
 
While I am very pro-FreeBSD, what are the areas that AMD needs to improve in CPU support? I really just don't know, and would like to.

From what I've read (take that for what it's worth), AMD does very little development for FreeBSD. Support for new hardware frequently lags way behind, and as a result it's up to the FreeBSD developers to figure out how to fix the bugs.
 
I hope Intel comes back and comes back hard. The public perception is that Intel is now doing really bad. Not sure how true this is.

I'm happy for AMD. My hope is that they are able to vastly increase their profits and that they might, in turn, be able to help fund their graphics division even further.

In the enthusiast community it's 50/50. In the idiot world of people that don't know anything, AMD still doesn't exist to them. If you ever want to see how uninformed people are, go into a Best Buy or Fry's in the prebuilt desktop/laptop area and mention AMD around them; they won't have a clue what you're talking about.
 
We've had about 30 servers on order for intel products and they're now 60 days late on delivery due to supply issues, we're about to make the switch to EPYC if they don't ship within the next 2 weeks, I can definitely see AMD gaining some market share with the show intel is putting on.

Reps say they can ship out EPYC within 7 days, hmmm.

why-you-are-not-trying-you-can-do-it-meme.jpg
 
I was looking at Dell servers recently and they have almost no EPYC options. I think I saw a couple 2U rack servers with them and that was it. I was really hoping to get some tower server with EPYC processors.
 
Free is all well and good but it has no place in the Server Room. As a customer, if I pay for a product, and if I pay for support agreements, I can expect support. With free products, I can expect all I want to, but if it's free who gives a fuck?
We've had about 30 servers on order for intel products and they're now 60 days late on delivery due to supply issues, we're about to make the switch to EPYC if they don't ship within the next 2 weeks, I can definitely see AMD gaining some market share with the show intel is putting on.

Reps say they can ship out EPYC within 7 days, hmmm.


Is that an Intel issue, or a Dell or Cisco or whatever vendor issue?

I mean, I get it: in a ground-truth way it doesn't matter. You can either get them or you can't, and if the difference is just which product you order, and it's costing you money, buy what you can get.

But there are other issues too. It's not just a simple swap, and you have to look at those things in your use case. But I do wonder: is the problem that all the vendors are behind because of Intel availability, or is it just one vendor, while other vendors that use Intel can ship in a reasonable time?
 
Most of the servers at both my offices are 2U rack-mount servers with dual Xeon CPUs, and are 3-6 years old.
Most are still populated with 6-12 spinners and running Hyper-V for virtualization. (The Microsoft developer program provides a good number of server licenses, so there is no way I could justify VMware.)

This competition to Intel's lock on server CPUs is a good thing. Hopefully there will be even better AMD products coming out.
I've held off replacing the servers, as moving to virtualization has allowed me to eliminate older servers by virtualizing them.
I've been waiting for SSDs to come down in price (they are finally cheap enough for some uses), but also for faster CPUs.
Give me enough CPU cores, enough RAM, 10Gb network cards, and fast SSD storage, and I could eliminate even more hardware.

I'm also waiting for a faster SSD interface than SATA at a reasonable price. A SATA RAID controller is now a bottleneck if you add enough SSDs.
Can't see spending big $$$ on a server that is still using SATA connections.

Maybe in a few years I'll be able to buy a server with a couple of 32-core CPUs, 1TB of RAM, and 20TB of fast NVMe SSD storage.
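The SATA-bottleneck point above can be checked with some back-of-the-envelope math. All throughput figures below are rough, assumed numbers for illustration, not vendor specs.

```python
# Rough, assumed throughput figures (not vendor specs) showing why a SATA
# RAID controller becomes the bottleneck as SSDs are added.

SATA3_MBPS = 600        # SATA III is 6 Gb/s, roughly 600 MB/s usable per drive
PCIE3_X8_MBPS = 7880    # a typical x8 PCIe 3.0 uplink on a RAID controller
NVME_X4_MBPS = 3500     # a common PCIe 3.0 x4 NVMe SSD sequential-read figure

def drives_to_saturate(uplink_mbps: int, per_drive_mbps: int) -> float:
    """How many drives it takes for combined throughput to fill the uplink."""
    return uplink_mbps / per_drive_mbps

# Around 13 fast SATA SSDs can saturate the controller's x8 uplink, so past
# that point adding drives adds capacity but no more bandwidth.
print(round(drives_to_saturate(PCIE3_X8_MBPS, SATA3_MBPS), 1))  # 13.1

# A single NVMe drive, by contrast, carries several SATA drives' worth of
# bandwidth on its own dedicated x4 link, with no shared RAID uplink.
print(round(NVME_X4_MBPS / SATA3_MBPS, 1))  # 5.8
```

The latency gap between AHCI/SATA and NVMe is arguably the bigger win for virtualization hosts, but even on raw sequential bandwidth the shared controller uplink is the choke point.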
 


Recently Dell came by and gave us the sales pitch on their VxRail products, their HCI solutions, which are all Intel with no EPYC option that I can see. Maybe the HCI push has Dell shoving all their Intel products into them, and AMD's timing with EPYC is allowing them to grab some of the traditional server market?

HCI, and those outfits that can't live without VMware, has got a lot of datacenters buying a lot of new hardware outside of their normal tech-refresh cycles. Smaller datacenters that go HCI kind of have to go all-in and do a complete tech upgrade, swapping it all out at once. A daunting task for sure. Just a thought; see what you guys think on it.
 
And none of those are tower servers. I realize that tower servers aren't in the datacenter, but I was hoping they'd have some EPYC choices for towers too.

Tower servers still have a place in the world or they wouldn't be around. Nothing wrong with them at all when they fill appropriate use cases.
 
Most of the servers at both my offices are 2U rack-mount servers with dual Xeon CPUs, and are 3-6 years old.
Most are still populated with 6-12 spinners and running Hyper-V for virtualization. (The Microsoft developer program provides a good number of server licenses, so there is no way I could justify VMware.)

This competition to Intel's lock on server CPUs is a good thing. Hopefully there will be even better AMD products coming out.
I've held off replacing the servers, as moving to virtualization has allowed me to eliminate older servers by virtualizing them.
I've been waiting for SSDs to come down in price (they are finally cheap enough for some uses), but also for faster CPUs.
Give me enough CPU cores, enough RAM, 10Gb network cards, and fast SSD storage, and I could eliminate even more hardware.

I'm also waiting for a faster SSD interface than SATA at a reasonable price. A SATA RAID controller is now a bottleneck if you add enough SSDs.
Can't see spending big $$$ on a server that is still using SATA connections.

Maybe in a few years I'll be able to buy a server with a couple of 32-core CPUs, 1TB of RAM, and 20TB of fast NVMe SSD storage.


SAS doesn't do it for you?
 
Is that an Intel issue, or a Dell or Cisco or whatever vendor issue?

I mean, I get it: in a ground-truth way it doesn't matter. You can either get them or you can't, and if the difference is just which product you order, and it's costing you money, buy what you can get.

But there are other issues too. It's not just a simple swap, and you have to look at those things in your use case. But I do wonder: is the problem that all the vendors are behind because of Intel availability, or is it just one vendor, while other vendors that use Intel can ship in a reasonable time?
A little bit of both: one vendor is horribly behind, while the other big one our company interacts with is only a little delayed in shipping Intel orders.

And none of those are tower servers. I realize that tower servers aren't in the datacenter, but I was hoping they'd have some EPYC choices for towers too. I wasn't commenting on the article so much as saying that I really want to be able to try out an EPYC processor in a customer project sometime.

This is the hang up for us at the moment, we don't really want racks for our purposes and really need the towers. If EPYC towers were available we'd probably switch right now.
 
Recently Dell came by and gave us the sales pitch on their VxRail products, their HCI solutions, which are all Intel with no EPYC option that I can see. Maybe the HCI push has Dell shoving all their Intel products into them, and AMD's timing with EPYC is allowing them to grab some of the traditional server market?

HCI, and those outfits that can't live without VMware, has got a lot of datacenters buying a lot of new hardware outside of their normal tech-refresh cycles. Smaller datacenters that go HCI kind of have to go all-in and do a complete tech upgrade, swapping it all out at once. A daunting task for sure. Just a thought; see what you guys think on it.

The traditional server market in the enterprise has already fallen off the ledge and isn't coming back. Dense compute/HCI is all of the big server players' market now. If you placed an order for 20 Dell R630s or whatever equivalent, they would be like, OK, we will sell you them... but are you sure that's the solution you want? There is fast-falling interest in pushing the marketing behind them now, especially with the majority of enterprises building out hybrid cloud solutions going forward and shrinking the datacenter footprint massively. More compute, more storage, fewer tiles.
 
Alright, I've been half wondering this, and there are enough DC guys/gals in here, so I figure I'll complete the thought and ask:

Is Intel's shortage due more to a production issue on their side, or to a market-demand issue that they were not prepared for due to their continuing 10nm struggle?

Either way, AMD is damn lucky, of course.
 
Alright, I've been half wondering this, and there are enough DC guys/gals in here, so I figure I'll complete the thought and ask:

Is Intel's shortage due more to a production issue on their side, or to a market-demand issue that they were not prepared for due to their continuing 10nm struggle?

Either way, AMD is damn lucky, of course.

It is related to the 10nm struggle to a degree, but not in the negative way many like to paint it. It's purely a supply/demand issue: the ability to meet massive new demand was hindered by the conversion of 14nm manufacturing capacity to 10nm. Intel saw highly increased demand for CPUs during the summer, plus ever-growing demand from hyperscale cloud providers, e.g. AWS, Azure, GCP. The production factories they use are expected to be back on schedule by October, which should bring relief to the market in November. Hence it put everything off schedule, including the delay in the next-gen Intel procs.

It is good timing for AMD, but IMO the dry spell in supply isn't necessarily going to be long enough for AMD to take over like it did for a couple of years in the old Opteron days.
 
Thanks. I keep picking up the implication that Intel's manufacturing is breaking down or something, based on how the supply side of things is being reported and, in some circles, celebrated.

I'll celebrate when AMD outcompetes Intel; not because I care for either, but because a faster option will have become available ;).
 
I'm getting the feeling that the security issues haven't been seen as enough of a problem, even though the mitigations hit datacenters worse than most.
 
It is good timing for AMD, but IMO the dry spell in supply isn't necessarily going to be long enough for AMD to take over like it did for a couple of years in the old Opteron days.

With the datacenter TAM at $25B, AMD only needs a little piece of it to have massive growth; e.g. a 4% share is $1B of it. Some are predicting 10% by next year. Also, the total cost of ownership for EPYC is superior, not just the initial purchase price.
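The share arithmetic above is easy to sanity-check; the $25B TAM and the share percentages come from the thread itself, not from any forecast of mine.

```python
# Sanity check on the TAM arithmetic in the post above. The $25B figure and
# the 4%/10% share levels are taken from the thread, not from a forecast.
tam_usd = 25e9                      # $25B datacenter total addressable market

share_4pct = 0.04 * tam_usd         # the "little piece" cited in the post
share_10pct = 0.10 * tam_usd        # the level some are predicting for next year

print(f"${share_4pct / 1e9:.1f}B")   # $1.0B -> matches the post's "4% == $1B"
print(f"${share_10pct / 1e9:.1f}B")  # $2.5B
```

Even the low end of those share estimates would be a material revenue jump for a company of AMD's size at the time, which is the post's point.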
 
I'm getting the feeling that the security issues haven't been seen as enough of a problem, even though the mitigations hit datacenters worse than most.

I bet the patches are still being applied and slowing Xeons down.
 
We have been soooo close to getting some of our customers to jump to EPYC. The problem is not the current, comparatively overpriced Xeons; it's the older Broadwell/Haswell servers that are the real competition. Probably not the case in large corporate "money is no object" environments, but for our mid-sized customers (500-1000 employees) a DL360 Gen9 with Broadwell is still a top pick... I think EPYC is going to really take off once availability dries up for these older servers.
 
Nice.

But the review says that even with the 'SQ: Super Quiet' fans in the power supplies it's simply too loud for being at someone's desk. I don't doubt it - all the Supermicro stuff I've worked with has always seemed to put noise at the bottom of the list of priorities.
Supermicro=Supernoisy
 
SAS doesn't do it for you?

There aren't many SAS SSDs out there, plus SAS isn't enough of an improvement over current SATA.
The newer standards we have on laptops/desktops are faster, with lower latency.
If I'm going to spend the big $$$ on a new server, I don't want the RAID controller and drive interface to be the bottleneck.
 