AMD Wants to Retake the Datacenter

The traditional server market in the enterprise has already fallen off the ledge and isn't coming back. Dense compute/HCI is where all of the big server players' business is now. If you placed an order for 20 Dell R630s or whatever the equivalent is, they would be like, OK, we'll sell them to you... but are you sure that's the solution you want? There is fast-falling interest in pushing the marketing behind them now, especially with the majority of enterprises building out hybrid cloud solutions going forward and shrinking their datacenter footprint massively. More compute, more storage, fewer tiles.

As a software company, we are rolling out a cloud version of our product due to customer demand.

At the same time, as the IT person, we will not be using any cloud applications if a non-cloud version is available.

As for servers, I've only purchased one new server this year.
I did add RAM (DDR3 for older servers is cheap), larger drives (non-Dell drives are cheap), and even upgraded some of the CPUs (older used Xeons are cheap).
Going from dual 2.5GHz quad-cores to dual 2.8GHz 10-core CPUs is a significant improvement, and it allows me to run more virtualized servers on the same box.

The next plan is to install 10Gb network cards in the servers and upgrade some of the switches to handle 10Gb.
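For a rough sense of what that CPU swap buys, here's a back-of-envelope sketch in Python. Aggregate "core-GHz" is a crude proxy for virtualization headroom (it ignores IPC differences, turbo behavior, and memory limits); the clock and core counts are just the ones quoted above.

```python
# Crude headroom estimate: sockets * cores * base clock.
def core_ghz(sockets: int, cores_per_socket: int, base_ghz: float) -> float:
    return sockets * cores_per_socket * base_ghz

old = core_ghz(2, 4, 2.5)    # dual 2.5 GHz quad-cores  -> 20.0
new = core_ghz(2, 10, 2.8)   # dual 2.8 GHz 10-cores    -> 56.0
print(f"old: {old} core-GHz, new: {new} core-GHz, ~{new / old:.1f}x headroom")
```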
 
The traditional server market in the enterprise has already fallen off the ledge and isn't coming back. Dense compute/HCI is where all of the big server players' business is now. If you placed an order for 20 Dell R630s or whatever the equivalent is, they would be like, OK, we'll sell them to you... but are you sure that's the solution you want? There is fast-falling interest in pushing the marketing behind them now, especially with the majority of enterprises building out hybrid cloud solutions going forward and shrinking their datacenter footprint massively. More compute, more storage, fewer tiles.


Oh, I understand the effect on the industry to a degree, or at least to the degree it impacts me. It doesn't fit every use case, though, and it never did. But it does work to force more small data centers to shut their doors and sign up for AWS or some other big player's services, because the alternative looks like a trip on the access road with no way to get on the freeway with the big boys.

My datacenter is just one of those, and what's worse is that we have no connection to the outside world, so we don't have the option of buying service from AWS or anyone else. We're Army and can't even pull from a larger Army datacenter because we literally are not connected outside of our building. Perhaps that is what needs to change, but that will be like pulling teeth... from a dragon. You might get the tooth, but someone is getting eaten or burned for it.
 
Nice.

But the review says that even with the 'SQ: Super Quiet' fans in the power supplies, it's simply too loud to sit at someone's desk. I don't doubt it - all the Supermicro stuff I've worked with has always seemed to put noise at the bottom of the list of priorities.

SERVERS go in data centers, data closets, etc. Or on the desks of annoyed admins trying to figure out very annoying problems while the fans blare. They aren't meant to be quiet.

WORKSTATIONS go on desks and will give up things like number of drive bays, redundant power supplies, etc., and thus leave more options for quiet fans.

Supermicro focuses on servers, and it doesn't make the compromises needed to make them desktop friendly - compromises it would be pilloried for.

The workstation market for Epyc seems to be small; maybe it will grow, maybe not.
 
There aren't many SAS SSDs out there, plus it's not enough of an improvement over current SATA.
The newer standards we have on laptops/desktops are faster, with lower latency.
If I'm going to spend the big $$$ on a new server, I don't want the RAID controller and drive interface to be the bottleneck.
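To put rough numbers on that, here's a quick sketch of the theoretical line rates of the interfaces in question (after encoding overhead). Real-world figures are lower, and NVMe's shorter command path helps latency as much as raw bandwidth does.

```python
# Theoretical per-drive line rates after encoding overhead.
INTERFACES_MB_S = {
    "SATA III (6 Gb/s, 8b/10b)":    600,   # 6e9 * 8/10 / 8 bits per byte
    "SAS-3 (12 Gb/s, 8b/10b)":      1200,
    "NVMe PCIe 3.0 x4 (128b/130b)": 3938,  # 4 lanes * 8 GT/s * 128/130
}

for name, mb_s in INTERFACES_MB_S.items():
    print(f"{name:32s} ~{mb_s:5d} MB/s")
```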


I'm sorry, I thought you were talking about a mass storage solution (SAN), not small servers with local storage only. I mean, that is how servers get access to fast storage: you connect to the SAN with Fibre Channel, 10Gb Ethernet, or even 40Gb Ethernet, and now you have more throughput. If you connect with iSCSI or FC/FCoE, your server sees the storage on the SAN as a local block storage device, just like a local HD. These storage systems can have SATA or SAS drives, and even their spinning disks are fast enough that virtual machines on ESXi servers have no problem running well. Even a demanding database server can be set up this way and run quite fast. But if you really want more speed, an all-flash SAN is the way to go, though those still mostly use SAS interfaces for the individual drives.

Recently there have been some new all-flash systems announced that use faster interfaces, and they look pretty sexy. I'd enjoy playing with them.

https://www.anandtech.com/show/12567/hands-on-samsung-nf1-16-tb-ssds

Now if I am just spouting shit you already know all about then I apologize. Knowledge alone doesn't solve problems and some problems don't fit in neat boxes.
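As a tiny illustration of the "looks just like a local HD" point above: once an iSCSI or FC LUN is logged in, the OS exposes it as an ordinary block device, so the same raw read works whether the disk is local or SAN-backed. A minimal Python sketch, Linux-only, needs root; the device name is just a hypothetical example.

```python
# Reads the first sector of a block device; works identically for a
# local disk and for a SAN-backed LUN presented over iSCSI or FC.
import sys

DEVICE = "/dev/sdb"  # hypothetical SAN-backed LUN; substitute your own

try:
    with open(DEVICE, "rb") as dev:
        first_sector = dev.read(512)
except PermissionError:
    sys.exit("run as root to read raw block devices")

has_boot_sig = first_sector[510:512] == b"\x55\xaa"
print(f"{DEVICE}: read {len(first_sector)} bytes, MBR/protective-MBR signature: {has_boot_sig}")
```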
 
I understand that AMD wants to increase its data center market share but saying it wants to "retake" the data center is a bit of a joke. You can't retake what you've never had. AMD's market share in the server space was never all that large to begin with even at the height of their popularity.
 
Threadripper is workstation, Epyc is server. Did I miss something?

Sort of. Threadripper is HEDT. Epyc is server and workstation. HEDT splits the difference between the enthusiast and workstation markets where they overlap. It's kind of a "prosumer" offering rather than a pure workstation part.
 
Sort of. Threadripper is HEDT. Epyc is server and workstation. HEDT splits the difference between the enthusiast and workstation markets where they overlap. It's kind of a "prosumer" offering rather than a pure workstation part.

Wouldn't that largely be based on what it's plugged in to? Enthusiast board with high-speed non-ECC DDR4 or workstation board with ECC?
 
Wouldn't that largely be based on what it's plugged in to? Enthusiast board with high-speed non-ECC DDR4 or workstation board with ECC?

You could look at it that way but the fact is most of the socket TR4 motherboards out there contain RGB LEDs and are clearly more gaming / enthusiast oriented. Most of the workstation style motherboards I've seen are for Epyc, not Threadripper.
 
You could look at it that way but the fact is most of the socket TR4 motherboards out there contain RGB LEDs and are clearly more gaming / enthusiast oriented. Most of the workstation style motherboards I've seen are for Epyc, not Threadripper.

That's why I'm leaning on your insight - I simply don't have a grasp on the offerings available, particularly when it comes to the workstation stuff.
 
Oh, I understand the effect on the industry to a degree, or at least to the degree it impacts me. It doesn't fit every use case, though, and it never did. But it does work to force more small data centers to shut their doors and sign up for AWS or some other big player's services, because the alternative looks like a trip on the access road with no way to get on the freeway with the big boys.

My datacenter is just one of those, and what's worse is that we have no connection to the outside world, so we don't have the option of buying service from AWS or anyone else. We're Army and can't even pull from a larger Army datacenter because we literally are not connected outside of our building. Perhaps that is what needs to change, but that will be like pulling teeth... from a dragon. You might get the tooth, but someone is getting eaten or burned for it.

Indeed, it's obviously not a blanket statement; I'm commenting on where the industry is heading as a whole. There will always be exceptions, no doubt.
 
Indeed, it's obviously not a blanket statement; I'm commenting on where the industry is heading as a whole. There will always be exceptions, no doubt.


My issue with our engineer is that he is justifying chasing this solution set solely because it's VMWare's direction, and he's going to follow it regardless of the actual need or appropriateness of the solution. No regard for what the customer really needs. He's a salesman; I bet he has stock.
 
You could look at it that way but the fact is most of the socket TR4 motherboards out there contain RGB LEDs and are clearly more gaming / enthusiast oriented. Most of the workstation style motherboards I've seen are for Epyc, not Threadripper.

I'd totally love to get an EPYC for a desktop build. I'm drooling at the prospect of all those PCIe lanes. No matter how much expansion I have, I always want more. This is why I can never go SFF again.

The downside is that the best CPU for this application would be the 7351P, and its max turbo clock is 2.9GHz.

The highest turbo you can get on any EPYC is 3.2GHz on the 7451 and the 7601, and those have way more cores than I need and quite a hefty price tag too.

Are there any EPYC workstation boards that support overclocking? Do the CPUs even support it? If so, I wonder if they have enough headroom to hit over 4GHz, or if they are binned for other things.
 
I don't know of any motherboards that support it. Supermicro has started offering Epyc motherboards aimed at workstations, but I don't know of any specific models where overclocking is an option. GIGABYTE, Supermicro, and others are offering Epyc workstation motherboards which may or may not support it. The problem is that if you want that, most companies will probably tell you to buy a Threadripper and be done with it, regardless of the compromises you endure going that route.
 
Would a high-end GPU work (properly), and would everything else not hiccup, on a 'real' server board? Like a 2- or 4-socket Epyc part?

Anyone with access want to check it out?

I'm wondering if a fully populated board (RAM-wise) would stutter and slow down in gaming the way Threadrippers do with two banks populated.
 
Would a high-end GPU work (properly), and would everything else not hiccup, on a 'real' server board? Like a 2- or 4-socket Epyc part?

Anyone with access want to check it out?

I'm wondering if a fully populated board (RAM-wise) would stutter and slow down in gaming the way Threadrippers do with two banks populated.

Well, therein lies the difficult part. Server and workstation motherboards go through different hardware compatibility testing than consumer boards do. They have far more complex OROM configurations than consumer boards due to their storage controllers. Additionally, populating all the RAM is fine on Epyc boards (or should be), but they won't clock RAM as high as Threadripper will. They usually use ECC/registered DIMMs, etc.
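Memory clock isn't the whole story at these channel widths, though. A rough sketch, assuming (these figures are not from the post) first-gen Epyc at 8 channels of DDR4-2666 RDIMMs versus a Threadripper build running DDR4-3200 on 4 channels; peak theoretical bandwidth still favors Epyc:

```python
# Peak theoretical DRAM bandwidth: channels * transfer rate * 8 bytes/transfer.
def mem_bw_gb_s(channels: int, mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    return channels * mt_per_s * bytes_per_transfer / 1000

epyc = mem_bw_gb_s(8, 2666)          # ~170.6 GB/s (8ch DDR4-2666 RDIMMs, assumed)
threadripper = mem_bw_gb_s(4, 3200)  # ~102.4 GB/s (4ch DDR4-3200, assumed)
print(f"Epyc:         ~{epyc:.0f} GB/s")
print(f"Threadripper: ~{threadripper:.0f} GB/s")
```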
 
It is related to the 10nm struggle to a degree, but not in the negative way many like to paint it. It's purely a supply/demand issue: the need to meet massive new demand hindered the conversion of 14nm manufacturing capacity to 10nm. Intel saw highly increased demand for CPUs during the summer, plus ever-growing demand from hyper-scale cloud providers, e.g. AWS, Azure, GCP. The production factories they use are expected to be back on schedule by October, which should bring relief to the market in November. Hence everything got pushed off schedule, along with the delay in the next-gen Intel processors.

It is good timing for AMD, but IMO the dry spell in supply isn't necessarily going to be long enough for AMD to take over like it did for a couple of years in the old Opteron days.

Epyc sells on its merits... if only punters would take the time to consider those merits.

As we see from prospects here, the current situation is yet another reason die-hard Intel-only shops are closely considering an AMD alternative.

That alternative is realistically a 7nm 64-core part in the first half of 2020, with a replaceable interim 14nm chip if you want it sooner. With 128 lanes and 8-channel memory, Xeon needs a lot of loyalty.

The cool thing is AMD can make Zen chiplets so cheaply, yet charge the high prices Intel has set. Lower prices, yet better margins.
 
There aren't many SAS SSDs out there, plus it's not enough of an improvement over current SATA.
The newer standards we have on laptops/desktops are faster, with lower latency.
If I'm going to spend the big $$$ on a new server, I don't want the RAID controller and drive interface to be the bottleneck.
24x NVMe is a fully populated Epyc configuration (96 lanes).
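A quick lane-budget sketch of that configuration, assuming a single-socket Epyc's 128 PCIe lanes and x4 per U.2 drive:

```python
# PCIe lane budget for a 24-drive NVMe Epyc build.
TOTAL_LANES = 128                        # single-socket Epyc
drives, lanes_per_drive = 24, 4
used = drives * lanes_per_drive          # 96 lanes for storage
left = TOTAL_LANES - used                # 32 lanes left for NICs, HBAs, GPU, etc.
print(f"{drives} x4 NVMe drives use {used} lanes, leaving {left} of {TOTAL_LANES}")
```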
 
Now if I am just spouting shit you already know all about then I apologize. Knowledge alone doesn't solve problems and some problems don't fit in neat boxes.

The company is not big enough (fewer than 100 employees) to justify anything like that. A good SAN setup would cost more than all the servers I currently have in my computer room. :eek:

It's more cost-effective to use servers with local storage. Most of the servers are Dell 2U units with room for 12 SAS/SATA drives (R510, R720XD, etc.).
Mostly 2TB and 4TB SATA drives in RAID 10 for better performance. The exception is my backup servers, which use 6TB and 8TB drives for D2D2T backups.

Currently running everything over gigabit Ethernet, although I'm hoping to move to 10Gbit next year.
If I had a major failure and had to restore TBs of data from tape, it would take way too long over gigabit Ethernet.
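For a rough sense of scale, here's a back-of-envelope sketch of the two numbers in play: usable capacity of a 12-bay RAID 10 of 4TB drives, and how long a full restore takes over gigabit versus 10Gb Ethernet. The throughput figures are rough sustained values I've assumed, not from the post.

```python
# Usable capacity of RAID 10 (mirrored pairs, then striped) and restore time.
def raid10_usable_tb(drives: int, drive_tb: float) -> float:
    return drives / 2 * drive_tb

def restore_hours(data_tb: float, link_mb_s: float) -> float:
    return data_tb * 1e6 / link_mb_s / 3600

usable = raid10_usable_tb(12, 4)         # 12 x 4 TB -> 24 TB usable
for link, mb_s in (("1 GbE", 115), ("10 GbE", 1100)):
    print(f"restore {usable:.0f} TB over {link}: ~{restore_hours(usable, mb_s):.0f} h")
```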
 
Epyc sells on its merits... if only punters would take the time to consider those merits.

As we see from prospects here, the current situation is yet another reason die-hard Intel-only shops are closely considering an AMD alternative.

That alternative is realistically a 7nm 64-core part in the first half of 2020, with a replaceable interim 14nm chip if you want it sooner. With 128 lanes and 8-channel memory, Xeon needs a lot of loyalty.

The cool thing is AMD can make Zen chiplets so cheaply, yet charge the high prices Intel has set. Lower prices, yet better margins.

Maybe in the consumer world, but not the enterprise datacenter. It isn't as fickle as the average PC user is. That's what I am talking about, because that's where the money is.
 
After picking up a Ryzen 5 for home ESX (I'd been with Intel for maybe 15 years or more?) and building a friend's Ryzen 7 gaming machine (he'd never owned an AMD), I really wanted to replace my work ESX test box with a dual Epyc 8C Dell server. The quote came in at around 20% more than the dual Xeon Silver 8C (also Dell, all other specs the same). Unfortunately I couldn't justify the cost bump, so I ordered the Intel. I figured they'd have been cheaper, or at least comparable in cost? Maybe I missed something?

Edit: CPUs were the Epyc 7251 vs Xeon Silver 4110s, for reference
 
I don't think Dell is very AMD-friendly at all, at least judging from what I am allowed to purchase. The choices I am allowed to purchase (quotes don't even work from the consumer side) as part of our Dell-only deal are not great and certainly not good for AMD. I would like to use Threadrippers as our workstations, but sadly I can't and instead have to purchase 4C/8T Xeons when I should be getting 8C/16T for the same price.
 
I don't think Dell is very AMD-friendly at all, at least judging from what I am allowed to purchase. The choices I am allowed to purchase (quotes don't even work from the consumer side) as part of our Dell-only deal are not great and certainly not good for AMD. I would like to use Threadrippers as our workstations, but sadly I can't and instead have to purchase 4C/8T Xeons when I should be getting 8C/16T for the same price.


Back in the Intel scandal days, Dell was big on the take. They have always been big pro-Intel, and I am surprised they even attempt to offer AMD.
 
Back in the Intel scandal days, Dell was big on the take. They have always been big pro-Intel, and I am surprised they even attempt to offer AMD.

They do offer AMD, I guess because they have to.

Still, it's something that "is there, somewhere, but if you ever find it, the offer will be outrageously awful and you'll go for Intel anyway".

Me likey Dell all in all, but they give no love to AMD at all, since forever.
 
Still, it's something that "is there, somewhere, but if you ever find it, the offer will be outrageously awful and you'll go for Intel anyway".

That is exactly how I feel about what I can select.
 
From what I've been seeing so far, it looks like HP is where it is at for AMD gear from OEMs.

I don't like HP, as they expect me to have a support contract in place to download the latest BIOS, even for a machine long out of any support contract (10+ years old). Dell has everything available for the oldest things ever, without any paywalls.
 
I'm sorry, I thought you were talking about a mass storage solution (SAN), not small servers with local storage only. I mean, that is how servers get access to fast storage: you connect to the SAN with Fibre Channel, 10Gb Ethernet, or even 40Gb Ethernet, and now you have more throughput. If you connect with iSCSI or FC/FCoE, your server sees the storage on the SAN as a local block storage device, just like a local HD. These storage systems can have SATA or SAS drives, and even their spinning disks are fast enough that virtual machines on ESXi servers have no problem running well. Even a demanding database server can be set up this way and run quite fast. But if you really want more speed, an all-flash SAN is the way to go, though those still mostly use SAS interfaces for the individual drives.

Recently there have been some new all-flash systems announced that use faster interfaces, and they look pretty sexy. I'd enjoy playing with them.

https://www.anandtech.com/show/12567/hands-on-samsung-nf1-16-tb-ssds

Now if I am just spouting shit you already know all about then I apologize. Knowledge alone doesn't solve problems and some problems don't fit in neat boxes.


I'm no enterprise guy (I just keep servers in my basement), but personally, if given a choice, regardless of budget restrictions, I'd always build my own storage server over using some storage appliance, consumer or enterprise.
 
The company is not big enough (fewer than 100 employees) to justify anything like that. A good SAN setup would cost more than all the servers I currently have in my computer room. :eek:

It's more cost-effective to use servers with local storage. Most of the servers are Dell 2U units with room for 12 SAS/SATA drives (R510, R720XD, etc.).
Mostly 2TB and 4TB SATA drives in RAID 10 for better performance. The exception is my backup servers, which use 6TB and 8TB drives for D2D2T backups.

Currently running everything over gigabit Ethernet, although I'm hoping to move to 10Gbit next year.
If I had a major failure and had to restore TBs of data from tape, it would take way too long over gigabit Ethernet.


I agree. Short term, get your 10Gb Ethernet up and running and keep doing what you're doing with local storage on the servers. Long term, give HCI a hard look. Price out three servers with VSAN integrated versus staying on your current road; three servers with VSAN can probably run everything you have going, and licensing from VMWare would probably be per socket for ESXi. It's expensive, to be sure, but it would also chop your footprint down to a small rack, lower your power draw, and give you an actual SAN. You'd probably want to keep the backup servers for your tapes, although VMWare and SAN storage give you new recovery options. That next server you need to build with a new OS on it can be virtual, built in an isolated virtual lab space that won't impact the production equipment. When you get it right, you just migrate it in and move the old one out. Too easy.

I'm sure these are things you wish you could do now. As much as I have reservations about going to HCI in our use case, yours sounds perfect for it. Either way, happy trails.
 
I'm no enterprise guy (I just keep servers in my basement), but personally, if given a choice, regardless of budget restrictions, I'd always build my own storage server over using some storage appliance, consumer or enterprise.

The use case rules all. Your use case isn't the same, and I know you get that. But even in our use case, we run virtual storage servers connected to the SAN via iSCSI over 10Gb Ethernet. The virtual machines for the file servers stay small that way; if you "integrate" the storage into the VM so that the VMDK holds the data, the VMDK will get huge and probably die miserably, taking your data with it. This way, if I have an issue with my file server, it's just a VM: I can fix it in minutes, and I don't even need to know what the problem is. I just roll it back to a previous snapshot; the VM itself holds nothing of value and is only a tool to manage the attached storage. The attached storage volume can also have a storage snapshot, which allows individual files to be restored easily.

Anyway, it's one solution and not always the best.
 
The use case rules all. Your use case isn't the same, and I know you get that. But even in our use case, we run virtual storage servers connected to the SAN via iSCSI over 10Gb Ethernet. The virtual machines for the file servers stay small that way; if you "integrate" the storage into the VM so that the VMDK holds the data, the VMDK will get huge and probably die miserably, taking your data with it. This way, if I have an issue with my file server, it's just a VM: I can fix it in minutes, and I don't even need to know what the problem is. I just roll it back to a previous snapshot; the VM itself holds nothing of value and is only a tool to manage the attached storage. The attached storage volume can also have a storage snapshot, which allows individual files to be restored easily.

Anyway, it's one solution and not always the best.


Understood, but you could do the same thing by having a bare-metal storage server connected to your VM server via iSCSI (or NFS) over 10G Ethernet.

This way you get more control over the process. You could even use the more flexible and secure ZFS file system, which has many benefits over a vendor-specific hardware RAID or appliance-manufacturer solution.

System goes down? Those disks can be read from pretty much anything regardless of what hardware it has. To me that is a huge benefit.
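A minimal sketch of that portability point, assuming a host with OpenZFS installed and root access; the pool name "tank" is just a hypothetical example. You scan the attached disks for foreign pools and import one read-only:

```python
import subprocess

def zpool(*args: str) -> subprocess.CompletedProcess:
    # Thin wrapper around the zpool CLI; doesn't raise, so zpool's own
    # messages (e.g. "no pools available to import") are still visible.
    return subprocess.run(("zpool", *args), text=True)

zpool("import")                               # scan attached disks for importable pools
zpool("import", "-o", "readonly=on", "tank")  # import hypothetical pool "tank" read-only
zpool("status", "tank")                       # confirm the pool and its vdevs are healthy
```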
 
I'm no enterprise guy (I just keep servers in my basement), but personally, if given a choice, regardless of budget restrictions, I'd always build my own storage server over using some storage appliance, consumer or enterprise.
The use case rules all

Exactly.

While I also prefer to build, and have built, I've looked at Synology appliances and others - ASUSTOR in particular - as single or combined devices. I've even considered using one of these devices as a backup target for a built server. They're just so simple, and hell, ASUS has a 4-bay unit with 1x 10Gbase-T and 2x 1Gbase-T.

[What I can't cheaply get out of these devices is 4+ bays with 10Gbase-T and at least one more Ethernet port, as well as some form of SSD read and write caching...]
 
licensing from VMWare would probably be per socket for ESXi. It's expensive, to be sure, but it would also chop your footprint down to a small rack, lower your power draw, and give you an actual SAN. You'd probably want to keep the backup servers for your tapes, although VMWare and SAN storage give you new recovery options.

As a Microsoft developer, Hyper-V is free for us. They give us enough free production licenses for me to run the office.
Going with ESXi would be a huge expense compared to free.

I can already create a virtual test environment inside Hyper-V, so I don't need VMWare for that either.

As for power usage, I already dropped it significantly when I got rid of our older servers by virtualizing them. The newer servers draw much less power over the day, even though they are running multiple virtual servers. Besides, we don't get a separate power bill, as it's included in our lease. :D
 
As a Microsoft developer, Hyper-V is free for us. They give us enough free production licenses for me to run the office.
Going with ESXi would be a huge expense compared to free.

I can already create a virtual test environment inside Hyper-V, so I don't need VMWare for that either.

As for power usage, I already dropped it significantly when I got rid of our older servers by virtualizing them. The newer servers draw much less power over the day, even though they are running multiple virtual servers. Besides, we don't get a separate power bill, as it's included in our lease. :D

The real problem with Hyper-V is that it lacks the scalability and some of the more advanced features VMWare has. For the price, and for smaller environments, it makes perfect sense to use Hyper-V.
 
As a Microsoft developer, Hyper-V is free for us. They give us enough free production licenses for me to run the office.
Going with ESXi would be a huge expense compared to free.

I can already create a virtual test environment inside Hyper-V, so I don't need VMWare for that either.

As for power usage, I already dropped it significantly when I got rid of our older servers by virtualizing them. The newer servers draw much less power over the day, even though they are running multiple virtual servers. Besides, we don't get a separate power bill, as it's included in our lease. :D

KVM is free and runs on top of a real operating system with a competent kernel, not any of that Microsoft garbage :p

And as a free bonus, you can run any Linux guest (that isn't too security conscious) inside an LXC container at the same time, with WAY less overhead (CPU and RAM) than a VM.
 

No? They're just 4U in a vertical package. Less density, but towers work in data centers just fine.

[Image: towers-not-a-datacenter.jpg]
 
KVM is free and runs on top of a real operating system with a competent kernel, not any of that Microsoft garbage :p

And as a free bonus, you can run any Linux guest (that isn't too security conscious) inside an LXC container at the same time, with WAY less overhead (CPU and RAM) than a VM.


I have a couple Unix servers running under Hyper-V. Happy with the performance.

Since I'm the only IT person, I need to keep it simple enough for someone else to support it when I'm out on vacation.
We sell a product that runs on Windows, and we have a lot of people who spend their day installing or supporting that software on Windows; they can step in and support the network while I'm out.
 