Spiceworks has conducted a survey of more than 500 IT decision makers in businesses across North America and Europe to determine current server trends, purchasing plans, and the components that will go into them. 85% of respondents said they were looking to purchase new servers in the next 3 years. Company growth was listed as the main driver of the need to purchase a new server, but performance degradation and the maintenance cost of aging hardware were close behind. Servers from Dell Technologies and Hewlett Packard Enterprise dominated the survey, but the size of the business determined which company was more popular. IBM led everyone in reliability, which was also the most important factor to IT professionals when choosing a server manufacturer.

AMD gained ground on Intel: 93% of respondents say they use Intel, 16% say they use AMD, and 4% say they use ARM. Some companies use a combination of manufacturers, so the percentages total more than 100%. The size of the business matters in the adoption rate of AMD servers, as 27% of enterprises use AMD, compared to 17% of mid-size companies and 11% of small businesses. AMD and Intel ranked closely in reliability, and AMD topped Intel in the second most important attribute: value for money. 77% of IT professionals associate AMD with "value for the money" while only 43% chose Intel on this metric. Spiceworks believes more businesses will be willing to give AMD a try as the company rolls out its 7nm EPYC processors.
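Since respondents could pick more than one vendor, each percentage is an independent share of respondents rather than a slice of one pie. A toy tally with made-up responses (not Spiceworks data) shows how a multi-select survey ends up over 100%:

```python
# Toy illustration (made-up responses) of why vendor percentages can total
# more than 100%: each respondent may pick more than one CPU vendor.

responses = [
    {"Intel"},
    {"Intel", "AMD"},
    {"Intel"},
    {"AMD", "ARM"},
    {"Intel", "AMD"},
]

for vendor in sorted({"Intel", "AMD", "ARM"}):
    share = sum(vendor in r for r in responses) / len(responses)
    print(f"{vendor}: {share:.0%}")
# AMD 60%, ARM 20%, Intel 80% -> 160% total, because of multi-vendor shops.
```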

But are SSDs ready for the server room, where uptime is critical? In fact, 62 percent of businesses use SSDs in their on-premises physical servers today, and that number is expected to increase to 72 percent by 2020. Currently, among businesses using local SSDs in on-prem servers, 51 percent are using SATA SSDs, 34 percent are using faster SAS SSDs, and 13 percent are using even faster NVMe SSDs.

SSDs are also increasingly finding their way into external storage. Currently, 18 percent of businesses use hybrid storage arrays that make use of both SSDs and spinning hard drives, while 14 percent use all-flash storage arrays. However, as SSDs become more popular, we expect usage of all-flash storage arrays to surpass use of hybrid flash arrays within the next two years.
 
Good. Make a CPU that can hit 5 GHz and I might just upgrade when DDR4 prices drop this year.
 
Makes sense that bigger businesses have a bigger percentage right now. They're still proving themselves, and the big guys have more money to spend on testing new hardware (and more to lose if they don't carefully consider their options).
 
From an enterprise standpoint, I need to consider for my next hardware order/refresh how AMD's EPYC CPUs are impacted by the side-channel vulnerability fix in ESXi, because that 'fix' disabled hyperthreading on my ESXi servers. Talk about piss me off!
 
All my servers are currently running Intel.

I've put off buying any server for at least another year or 2 to give time for the newer technology to mature and come down in price.

I want the next server I buy to come with all NVMe drives, at least 256 GB of RAM, and dual 12+ core CPUs running at 3+ GHz, while not costing more than I make in a year.
 
We're all Intel and Dell pretty much currently; the biggest thing that keeps us from really looking is the compatibility.
It's really hard to consider starting over with our ESXi clusters in the triple-digit host count, since EVC won't work cross-platform.
The options are all Intel, all AMD, or a slew of split clusters (which isn't ideal), so it's hard to stray from the status quo for a negligible cost difference compared to re-buying all the compute.
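If you want to see why in one screen, here's a rough pyVmomi sketch (the vCenter hostname and credentials are placeholders) that dumps each cluster's current EVC baseline. Intel clusters report an intel-* mode and AMD clusters an amd-* mode, and a single baseline can't span both vendors, so vMotion across them is out:

```python
# Rough sketch: list every cluster's current EVC baseline via pyVmomi.
# Hostname/credentials below are placeholders, not a real environment.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certs in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    # currentEVCModeKey is e.g. "intel-broadwell" or "amd-zen"; None means EVC is off
    print(f"{cluster.name}: EVC mode = {cluster.summary.currentEVCModeKey or 'disabled'}")
view.DestroyView()
Disconnect(si)
```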

We just classed the side-channel risk as acceptable so we don't have to halve our compute; none of the equipment is directly external facing, on top of it being an unlikely attack vector compared to easier targets.
 
Well, we're in the middle of a refresh tender right now and I am pushing for AMD-based ESXi hosts, as we think we can afford to replace all our current hosts.
I like the idea of denser compute hosts and fewer of them, while still maintaining an N+2 config.
 
lol you are talking about a personal purchase in a story about Enterprise IT.
I work for one of the big three cloud services providers doing hardware design. There definitely IS a place for ~5GHz in enterprise IT and cloud services. One that is very relevant to me is hardware EDA tools. Depending on what you are doing, run times are measured in hours, sometimes days. Significant parts of the jobs are still single-threaded, and the parallel parts typically can only effectively use around 8 cores at a time, so you want fast cores if you're going to get the job time down. Some tools also require several pseudorandom seeds to be run in parallel, as you might get only 20% of them to result in a timing-clean solution. My last job was at Intel, and we had a good chunk of the local server farm running high-clock-speed CPUs with 768+ GB of RAM to handle some of the beefier jobs with 24+ hour run times, as you could get them to complete 4-6 hours faster on those machines.
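To put rough numbers on why fast cores still win for these jobs, here's a back-of-the-envelope Amdahl's-law sketch. The 60% parallel fraction, the 8-core ceiling, and the 24-hour baseline are illustrative assumptions, not measurements from any real tool:

```python
# Rough Amdahl's-law sketch: why clock speed still matters for EDA-style jobs.
# The 60% parallel fraction, 8-core effective limit, and 24-hour baseline are
# assumptions for illustration only.

def speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup when only part of the job parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

base_hours = 24.0   # assumed single-core run time
p = 0.60            # assumed parallelizable fraction of the job

for cores in (1, 8, 32):
    s = speedup(p, cores)
    print(f"{cores:>2} cores: {s:.2f}x -> {base_hours / s:.1f} h")

# Past ~8 cores the serial 40% dominates, so a ~20% higher clock on every
# core cuts more wall time than piling on extra cores would.
clock_boost = 1.2
print(f"8 fast cores: {base_hours / (speedup(p, 8) * clock_boost):.1f} h")
```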
 

It's interesting. I work on a team responsible for alarm monitoring servers for an alarm monitoring company, so our servers are all considered critical. In my interview I was asked: can you handle being responsible for servers where, if they go down, people can die? Anyway, this whole idea of super-dense hosts drives me nuts. I don't mind a good bit of density as long as you maintain the N+2. But I watched another team put together a 2-host cluster with quad 22-core CPUs and my thought was, "that's too many damn eggs in one basket." It would never fly on my team.
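For what it's worth, the N+2 math is easy to sanity-check. Here's a tiny sketch with made-up capacity numbers; the peak vCPU demand, cores per host, and overcommit ratio are illustrative assumptions, not our real figures:

```python
# Quick N+2 sizing sketch with made-up numbers: how many hosts a cluster
# needs so the workload still fits with any two hosts down.

import math

peak_vcpu_demand = 600   # assumed total vCPUs the guests need at peak
cores_per_host = 88      # e.g. quad 22-core sockets with HT disabled
overcommit = 2.0         # assumed acceptable vCPU:pCPU ratio

usable_vcpus_per_host = cores_per_host * overcommit
hosts_for_load = math.ceil(peak_vcpu_demand / usable_vcpus_per_host)
hosts_needed = hosts_for_load + 2   # the "+2" in N+2

print(f"Hosts to carry peak load: {hosts_for_load}")
print(f"Hosts needed for N+2:     {hosts_needed}")
# A 2-host cluster can never satisfy N+2 (or even N+1 under real load),
# no matter how many cores each box has.
```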


Yeah, on your first comment I agree completely. That will be a growing pain for us, because with new hosts on AMD infrastructure we can't just migrate our VMs over and expect everything to work. My issue is we spent money expecting to get 36 logical cores a socket and we got hosed. Now, if that same patch (documented here https://kb.vmware.com/s/article/55806 ) applies to the AMD EPYC CPUs, then I have no real impetus to switch over, other than I like AMD's new architecture and dig having that many PCIe lanes.

And a quick Google search gives me this: https://www.amd.com/en/corporate/security-updates Which makes me feel even better about EPYC CPUs!
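If anyone wants to check where their hosts actually stand, here's a rough pyVmomi sketch. The vCenter hostname and credentials are placeholders, and VMkernel.Boot.hyperthreadingMitigation is, as far as I know, the advanced option tied to the side-channel-aware scheduler that KB covers; verify the option name against your own ESXi build before relying on it:

```python
# Rough sketch: report per host whether the L1TF side-channel-aware scheduler
# option is enabled. Hostname/credentials are placeholders; the option name is
# believed correct but should be checked against your ESXi build.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certs in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    try:
        opts = host.configManager.advancedOption.QueryOptions(name="VMkernel.Boot.hyperthreadingMitigation")
        for opt in opts:
            print(f"{host.name}: {opt.key} = {opt.value}")
    except vim.fault.InvalidName:
        # Builds without the patch won't expose the option at all
        print(f"{host.name}: option not present (pre-patch build?)")
view.DestroyView()
Disconnect(si)
```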
 
Oh I'm gonna keep N+2 for sure. Just want more compute without having to have loads more hosts
Actually going to mirror two sites for redundancy as well
 

My team doesn't do actual mirrors via storage mirroring, but we mirror the physical and virtual builds with an A side and a B side so we can do guest-level maintenance while moving our traffic. Currently we are doing that every two weeks.
 

That post above about EDA tools is a valid reason with a great example.

The post I quoted did no such thing and seems like it belongs in another thread.

You see the difference?
 
If Cisco starts selling AMD EPYC UCS blades then maybe. You still have CPU compatibility issues, but we like to keep the hardware families in the same chassis anyway. Losing HT hurt us, but our number one consumer is memory, so it wasn't the end of the world. Even our busiest hosts had CPU to spare.
 
Yeah, I run a few small 4+1 clusters... and before, we could run all of our guests on one host if it came down to it, so losing hyperthreading didn't hurt my team too badly.

Who I feel bad for are teams running 5k+ guests. I'm betting that losing hyperthreading is going to SUUUUUCK if they ever even turn those patches on.
 