Worth Upgrading Server to Ivy Bridge E5-2xxx V2?

Zarathustra[H]

What do you guys think?

Dual socket Supermicro boards for these are now affordable on eBay, as are the CPU's.

I currently have a dual-socket Westmere-EP setup with hexacore L5640's.

It's feeling a bit tired, but I am not yet ready to upgrade to a system with DDR4 and spend several thousand bucks on 256GB of registered RAM.

E5-2xxx V2 is as far as I can go without having to buy more RAM. Seeing as I can get a pair of power-saving octacore CPU's for about $100 and a Supermicro motherboard for about $160, is it a worthwhile interim upgrade until I am ready to spend the big bucks on lots of DDR4, or just a waste?

As I recall, Westmere wasn't hit as badly performance-wise by the hardware bug mitigations as Sandy/Ivy/Haswell were. Did this just eat up the IPC advantage, or are these CPU's from two generations later actually going to be an improvement?

Appreciate any thoughts.
 
Depends. How much RAM do you need, and what's the server used for?

What are the full current specs?

Also, I don't believe the hardware bugs really affect end users, mostly large-scale server operations, and to date I have not heard of nor seen an attack using them.
 
This is a home production server.

I have it running a ton of VM's for home stuff, including a MythTV backend VM and VM's for various servers (Unifi, OpenVPN, SFTP, etc.). It also runs a large ZFS storage pool for my network storage, and handles automated snapshots and off-site backups.

Currently I have the dual L5640's maxed out with 192GB of RAM.

As I see it, there would be three main reasons for doing this: more CPU power, more RAM capacity, and PCIe Gen 3 support.

It would be really nice to upgrade the old Intel SSD's I use for ZIL/SLOG to a set of Optane drives...
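(For reference, once the hardware is in, swapping the SLOG/cache devices is a quick operation. Here's a rough sketch of the idea in Python driving the zpool CLI; the pool name and device paths are placeholders, not my actual layout, and I haven't run any of this yet:)

# Rough sketch only: replace the old SATA SSD log device with an Optane SLOG
# and add an NVMe L2ARC. Pool name and device paths below are placeholders.
import subprocess

POOL = "tank"                                                  # placeholder pool name
OLD_SLOG = "/dev/disk/by-id/ata-OLD_INTEL_SSD_PLACEHOLDER"     # placeholder
NEW_SLOG = "/dev/disk/by-id/nvme-OPTANE_PLACEHOLDER"           # placeholder
NEW_CACHE = "/dev/disk/by-id/nvme-NVME_SSD_PLACEHOLDER"        # placeholder

# Log (and cache) vdevs can be removed and added while the pool stays online.
subprocess.run(["zpool", "remove", POOL, OLD_SLOG], check=True)
subprocess.run(["zpool", "add", POOL, "log", NEW_SLOG], check=True)
subprocess.run(["zpool", "add", POOL, "cache", NEW_CACHE], check=True)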
 
Without knowing how you have your VM's set up, I'd start with scaling back some of your CPU cores. One of the big mistakes I see made is people assigning way too many cores to VM's. Start with 1 and go up from there. Even if the host % utilization is not 100%, it makes a difference. Also scale back your memory to be slightly higher than what the VM is consuming.
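To put a rough number on the memory side, this is the kind of rule of thumb I mean; just a trivial Python sketch, and the 25% headroom figure is my own habit rather than anything official:

# Hypothetical right-sizing helper: take the guest's observed steady-state
# memory usage, add a modest headroom margin, and round up to a sane granularity.
def suggest_memory_mb(observed_mb: float, headroom: float = 0.25, round_to: int = 256) -> int:
    target = observed_mb * (1 + headroom)
    return int(-(-target // round_to) * round_to)  # ceiling to the nearest round_to MB

# Example: a guest that sits around 2.8GB used gets ~3.5GB allocated,
# not the 8GB it was handed "just in case". Same idea for vCPUs: start at 1.
print(suggest_memory_mb(2800))   # -> 3584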

After making those tweaks, see how it all runs.
 
Without knowing how you have your VM's set up, I'd start with scaling back some of your CPU cores. One of the big mistakes I see made is people assigning way too many cores to VM's. Start with 1 and go up from there. Even if the host % utilization is not 100%, it makes a difference. Also scale back your memory to be slightly higher than what the VM is consuming.

After making those tweaks, see how it all runs.

Yes, a lot of stuff in a VM environment tends to be over-provisioned with resources. I try to use resource pools for common-task hosts (all the DCs, for example) so I can simply carve out the maximum amount I want to expend on that task. Otherwise, you just gotta bare-minimum it and slowly scale it up until it's only using what it needs.

That being said... I'll never say no to MORE POWER. lol.
 
I don't know what you guys are using, but VMware is designed to allow over-provisioning of resources, and unless you have too many servers consuming those resources, it shouldn't be a problem. I agree you want to keep it minimal, but assigning two cores to a VM doesn't mean those cores are exclusive, or that you need more CPU cores to add more VM's.
 
I don't know what you guys are using, but VMware is designed to allow over-provisioning of resources, and unless you have too many servers consuming those resources, it shouldn't be a problem. I agree you want to keep it minimal, but assigning two cores to a VM doesn't mean those cores are exclusive, or that you need more CPU cores to add more VM's.

VMware here too. I wasn't assuming he was using VMware; I was just talking about provisioning.

Zarathustra[H], what is your HV?
 
VMware here too. I wasn't assuming he was using VMware; I was just talking about provisioning.

Zarathustra[H], what is your HV?

Well, the reason I said that is that I only work with VMware these days. I don't know how over-provisioning works in anything else.
 
I don't know what you guys are using, but VMware is designed to allow over-provisioning of resources, and unless you have too many servers consuming those resources, it shouldn't be a problem. I agree you want to keep it minimal, but assigning two cores to a VM doesn't mean those cores are exclusive, or that you need more CPU cores to add more VM's.
Errr... you do know about CPU Wait time, right?
 
CPU wait time is only an issue if it's an issue. Over-provisioning VM's doesn't immediately lead to high CPU wait times or large co-stop issues.

It can lead to them, but it's not some hard-and-fast rule. In general, it is best to assign the number of vCPUs a VM actually needs and not just give it an arbitrarily high count for giggles.
 
VMware here too. I wasn't assuming he was using VMware; I was just talking about provisioning.

Zarathustra[H], what is your HV?

I found a pretty sweet deal on some used hardware, so I just went ahead and did it.

I use a mix of KVM and LXC on a Debian-based Proxmox distribution.

I used to use VMware years ago, but got tired of the limitations and poor patching on the free edition, and I was never going to spend the money necessary for an enterprise license for a server that sits in my basement, so I switched.

New specs are dual E5-2650 v2's and 256GB of RAM. Thus far everything feels a bit smoother, but I think it's more due to improved I/O than the boost in CPU output. (SATA 2 and PCIe Gen 2 were really holding me back, as my workloads tend to be I/O heavy.)

Next step will be to replace my old SATA SSD's I use for cache with a mix of Optane and NVMe drives, but that will likely be a while. I have to spend money on the desktop first :p
 
I found a pretty sweet deal on some used hardware, so I just went ahead and did it.

I use a mix of KVM and LXC on a Debian-based Proxmox distribution.

I used to use VMware years ago, but got tired of the limitations and poor patching on the free edition, and I was never going to spend the money necessary for an enterprise license for a server that sits in my basement, so I switched.

New specs are dual E5-2650 v2's and 256GB of RAM. Thus far everything feels a bit smoother, but I think it's more due to improved I/O than the boost in CPU output. (SATA 2 and PCIe Gen 2 were really holding me back, as my workloads tend to be I/O heavy.)

Next step will be to replace my old SATA SSD's I use for cache with a mix of Optane and NVMe drives, but that will likely be a while. I have to spend money on the desktop first :p

Awesome. I use KVM for some smaller "don't want to buy more VMware" projects at work. It's definitely more useful long-term than free ESXi, which is more or less take it or leave it: a stripped-down gateway drug for buying licensing, lol.

I get so tired of all of it from work that my house is pretty much an analog affair with a few laptops and a flat-screen TV. I try to contain my work to work, or it quickly overwhelms my life. I am lucky to have plenty of resources at work that have been set up for me to test/explore things, so I can just VPN in and do that from my laptop at home. Haven't had a home lab in years... hell, I keep my laptop tucked into a bookshelf with my D&D books.
 
Awesome. I use KVM for some smaller "don't want to buy more VMware" projects at work. It's definitely more useful long-term than free ESXi, which is more or less take it or leave it: a stripped-down gateway drug for buying licensing, lol.

I get so tired of all of it from work that my house is pretty much an analog affair with a few laptops and a flat-screen TV. I try to contain my work to work, or it quickly overwhelms my life. I am lucky to have plenty of resources at work that have been set up for me to test/explore things, so I can just VPN in and do that from my laptop at home. Haven't had a home lab in years... hell, I keep my laptop tucked into a bookshelf with my D&D books.

I can see that. Who wants to bring work home with them?

I'm the opposite. I don't do this stuff professionally, so it's all hobby. It allows me to still enjoy it :p
 
All that said, my home server suffers from the same problem enterprise/cloud/datacenter/whatever servers suffer from everywhere.

By necessity it is scaled to handle the peak loads it sees once in a blue moon/never.

Most of the time it sits near idle with <2% load.

It seems like such a waste.

Outside of things that will cost me money by raising my electric bill for no reason (Folding@home, SETI@home, etc.), I wonder what else I could do with it that would bring me some benefit...
 
So, perhaps, don't state things as fact if you have nothing to back it up...
 
So, perhaps, don't state things as fact if you have nothing to back it up...

I just linked you to what backed it up: an article that suggests server utilization is an industry-wide problem...

...just a little bit of an old one.

Maybe things have changed since?
 
A lot of CPU goes unused in enterprise environments because of failover planning.
 
Dang all, dial it back a bit, it's just a friendly discussion...
 
I found a pretty sweet deal on some used hardware, so I just went ahead and did it.

I use a mix of KVM and LXC on a Debian-based Proxmox distribution.

I used to use VMware years ago, but got tired of the limitations and poor patching on the free edition, and I was never going to spend the money necessary for an enterprise license for a server that sits in my basement, so I switched.

New specs are dual E5-2650 v2's and 256GB of RAM. Thus far everything feels a bit smoother, but I think it's more due to improved I/O than the boost in CPU output. (SATA 2 and PCIe Gen 2 were really holding me back, as my workloads tend to be I/O heavy.)

Next step will be to replace my old SATA SSD's I use for cache with a mix of Optane and NVMe drives, but that will likely be a while. I have to spend money on the desktop first :p

Damn, you're rocking KVM and LXC and you don't do this stuff professionally? You could easily be murdering a monster salary with that skill set. What's stopping you?

Unless you want to build an application, teach yourself auto-scaling, or learn some automation language, you might actually be in a good spot with the hardware you are currently rocking. At the end of the day, you will always be chasing that bottleneck. If your current workloads only spike once in a while, stick with what you've got. If you just want to upgrade, then totally do it. Everyone needs a hobby, and this one, in your case, is pretty face-melting.

As for server utilization being low in a lot of environments? Yeah... 1-to-1 DR is still a thing for a lot of shops, mine included. It absolutely kills everyone on our team that our remote sites sit idle most of the time in standby mode. It's super easy for anyone to pop off and say something like, "Well, if things are in standby, why not run reporting services off of them?" or whatever. It's all about the workload, the applications sitting on top of the hardware, and whether your internally written critical apps can be used in a read-only state. Without giving away too much... I'm replicating around 400TB to remote sites. That data sits in an offline mode until the source site burns to the ground and we initiate a failover. And I don't do just failover... I'm doing point-in-time recovery of journaled volumes. I actually have to set aside an additional 10% of space minimum to cover the point-in-time recovery stuff, so if the last point-in-time copy I got is corrupted, I have 10 or more options to go back to. You could pop off and say DR stuff should run dev/test/QA workloads until the building burns down. Sounds awesome; it doesn't always work that way, due to "reasons" beyond my control.
 
Damn, you're rocking KVM and LXC and you don't do this stuff professionally? You could easily be murdering a monster salary with that skill set. What's stopping you?

Unless you want to build an application, teach yourself auto-scaling, or learn some automation language, you might actually be in a good spot with the hardware you are currently rocking. At the end of the day, you will always be chasing that bottleneck. If your current workloads only spike once in a while, stick with what you've got. If you just want to upgrade, then totally do it. Everyone needs a hobby, and this one, in your case, is pretty face-melting.

As for server utilization being low in a lot of environments? Yeah... 1-to-1 DR is still a thing for a lot of shops, mine included. It absolutely kills everyone on our team that our remote sites sit idle most of the time in standby mode. It's super easy for anyone to pop off and say something like, "Well, if things are in standby, why not run reporting services off of them?" or whatever. It's all about the workload, the applications sitting on top of the hardware, and whether your internally written critical apps can be used in a read-only state. Without giving away too much... I'm replicating around 400TB to remote sites. That data sits in an offline mode until the source site burns to the ground and we initiate a failover. And I don't do just failover... I'm doing point-in-time recovery of journaled volumes. I actually have to set aside an additional 10% of space minimum to cover the point-in-time recovery stuff, so if the last point-in-time copy I got is corrupted, I have 10 or more options to go back to. You could pop off and say DR stuff should run dev/test/QA workloads until the building burns down. Sounds awesome; it doesn't always work that way, due to "reasons" beyond my control.

Quoted for posterity & truth.

Seriously, KVM+LXC as a "side hobby". Most people's computer side hobby is dual-booting Linux and watercooling. lol
 
Damn, you're rocking KVM and LXC and you don't do this stuff professionally? You could easily be murdering a monster salary with that skill set. What's stopping you?

Unless you want to build an application, teach yourself auto-scaling, or learn some automation language, you might actually be in a good spot with the hardware you are currently rocking. At the end of the day, you will always be chasing that bottleneck. If your current workloads only spike once in a while, stick with what you've got. If you just want to upgrade, then totally do it. Everyone needs a hobby, and this one, in your case, is pretty face-melting.

As for server utilization being low in a lot of environments? Yeah... 1-to-1 DR is still a thing for a lot of shops, mine included. It absolutely kills everyone on our team that our remote sites sit idle most of the time in standby mode. It's super easy for anyone to pop off and say something like, "Well, if things are in standby, why not run reporting services off of them?" or whatever. It's all about the workload, the applications sitting on top of the hardware, and whether your internally written critical apps can be used in a read-only state. Without giving away too much... I'm replicating around 400TB to remote sites. That data sits in an offline mode until the source site burns to the ground and we initiate a failover. And I don't do just failover... I'm doing point-in-time recovery of journaled volumes. I actually have to set aside an additional 10% of space minimum to cover the point-in-time recovery stuff, so if the last point-in-time copy I got is corrupted, I have 10 or more options to go back to. You could pop off and say DR stuff should run dev/test/QA workloads until the building burns down. Sounds awesome; it doesn't always work that way, due to "reasons" beyond my control.

Quoted for posterity & truth.

Seriously, KVM+LXC as a "side hobby". Most people's computer side hobby is dual-booting Linux and watercooling. lol


I did realize that I was a little bit more extreme in my geeky server hobby than most, but I never thought of myself as THAT unusual.

I just kind of wound up here the same way the industry wound up here.

I started by building a NAS in ~2010. Then I bought a couple of Unifi AP's and needed a server for them, and figured, I already have a server that sits idle most of the time, why not use it? I was running FreeNAS at the time (which is BSD-based) on a little AMD E-350 dual-core Zacate board, and I needed Linux for the Unifi server, so I decided to give virtualization a try. I upgraded the server box to an AMD FX-8120, maxed it out with 32GB of RAM, and installed ESXi. I passed through SAS controllers to the FreeBSD VM for storage, and ran a dedicated Ubuntu VM for the Unifi software. Then as other things came along, I just kept adding dedicated VM's, figuring why not? It's more isolated this way and less likely to have dependency conflicts than if you run them all in the same Linux VM.

I never needed a ton of cores, so the 8 cores on the Bulldozer were plenty, but I quickly ran into RAM limitations. Eventually I bought a used Supermicro X8DTE dual-socket board and a couple of L5640's, primarily because they were cheap and the board would accept a boatload of RAM. I built them into a Norco RPC-4216 case. In the beginning I ran 64GB in there, but as my storage pool grew (necessitating more RAM for ZFS) and I added more VM's, I eventually went to 96GB, and finally maxed it out at 192GB a few years ago when I got a great deal on registered RAM on eBay.

At about this time I was starting to get really tired of dealing with ESXi's free release, and didn't have the budget or desire to pay for the Enterprise version. The final straw was that they kept leaving known bugs that were killing me unpatched for years, when they were patched in the Enterprise versions. I decided it was time for a change, and went shopping around for alternatives. I settled on KVM based on what I was reading at the time, and I really liked the management interface on Proxmox, so I decided to just go with that purpose-built Virtual Environment distribution.

This had some unexpected benefits. Since I could run my ZFS pool natively under ZFS on Linux, I no longer had to do that hack of passing through SAS HBA's and sharing the storage back to ESXi via NFS, which was great. I also discovered LXC containers, which I had not been familiar with before. The fact that they were so much more lightweight meant a ton of potential savings on system resources and efficiency. I wound up converting almost all of my Linux VM's to LXC containers (only the ones that absolutely needed the VM-layer separation stayed as VM's), and I got much more efficient memory and CPU usage out of this.
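To give an idea of how lightweight that is, spinning up one of these containers is basically a single pct call on the Proxmox host. A sketch only; the VMID, template, storage and bridge names below are placeholders rather than my exact setup:

# Sketch of creating a small LXC container via Proxmox's pct CLI.
# VMID, template, storage and bridge names are placeholders for illustration.
import subprocess

subprocess.run([
    "pct", "create", "120",                      # placeholder VMID
    "local:vztmpl/ubuntu-placeholder.tar.gz",    # placeholder container template
    "--hostname", "unifi",
    "--cores", "1",
    "--memory", "1024",                          # MB
    "--rootfs", "local-zfs:8",                   # 8GB root disk on ZFS-backed storage
    "--net0", "name=eth0,bridge=vmbr0,ip=dhcp",
    "--unprivileged", "1",
], check=True)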

Then the system went unchanged for a while until this most recent upgrade, which was triggered by a few different circumstances.

1.) I was using my SAS HBA's for my storage pool's direct-attach backplane, so for everything else I used consumer SSD's (mostly Samsungs) hooked up to the SATA ports on the motherboard. I felt the 3Gb/s SATA standard was holding me back in some cases.

2.) I eventually want to move to NVMe drives for the cache and log devices in my storage pool. PCIe Gen 2 was not going to cut it for this.

3.) Some of my VM's were maxing out single cores on the old L5640's, resulting in undesirable effects (like stuttering video), so I wanted faster cores, not necessarily more of them.

4.) I had been using an old HP DL180 G6 server in a friend's basement for nightly off-site backups of non-replaceable data using ZFS send/recv. The backplane in that case was starting to die, and I always hated that HP server, so I was poking around for better replacement cases and old spare boards, and it turned out that on eBay it was cheaper to get a Supermicro CSE846 barebones with a motherboard in it than it was to get the standalone case.

So, I bought the Supermicro barebones system with an X9DRi-F board in it, did some searching around for the best cost/performance ratio on used E5-26xx v2 Xeons on eBay, and wound up with a set of octacore E5-2650 v2's. I also picked up 64GB more RAM to make it a nice even 256GB on the new system. I used this to upgrade my home server. Then the old L5640 system in my Norco case will be rebuilt to be the new backup server. (I'm not done with this yet; I was foiled by the screws. The hard drive screws I have have heads that are too big, so the caddies won't slide into the case. I had to order more with smaller heads.)


Anyway, long story short, mine is more of a "home production system" than it is a home lab. The most important VM's/containers/functions on it are:
- MythTV Backend (for recording and playing back TV shows)
- ZFS Pool for NAS (and to support VM's)
- Unifi Server for WAP's
- VM for friend who lets me keep my backup server in his basement, for him to use for backups

But there are also some smaller ones running, like (non-exhaustive list):
- Teamspeak server
- Counter-Strike Server
- Chrooted SFTP server for sharing files with friends/family
- etc, etc.

The server is connected as a VLAN trunk to the 10-gig uplink port on my 48-port Aruba switch via fiber, where I have all sorts of VLAN's set up to try to isolate various things from each other, both for security purposes and so they don't bother each other. For instance, my MythTV server has two HDHomeRun Prime 3-tuner network tuner boxes. At first I put them on the open network, but it turns out that they announce themselves to every goddamned PC, mobile device, and smart TV on the network, and if any device other than MythTV grabs a tuner and starts using it, it interrupts what MythTV is doing. Yeah, so they got their own VLAN (and their own little container running a DHCP server to give them IP addresses, because in their wisdom HDHomeRun didn't give you a way to set static IP's on the devices).
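(That DHCP container is about as small as it gets; it's basically just dnsmasq with a few lines of config scoped to the tuner VLAN. Something roughly like the sketch below; the interface name, subnet and MACs are made-up placeholders, not my actual addressing:)

# Sketch: write a minimal dnsmasq config that only serves DHCP on the tuner VLAN.
# Interface name, subnet and MAC addresses are placeholders for illustration.
from pathlib import Path

DNSMASQ_CONF = """\
# Only answer DHCP on the tuner VLAN interface
interface=eth1
bind-interfaces
dhcp-range=10.10.30.100,10.10.30.120,12h
# Pin each tuner to a fixed lease so MythTV always finds it at the same address
dhcp-host=00:11:22:33:44:55,10.10.30.101
dhcp-host=00:11:22:33:44:66,10.10.30.102
"""

Path("/etc/dnsmasq.d/tuners.conf").write_text(DNSMASQ_CONF)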

So, the server sends backups to my remote backup server every night. It sends daily snapshots (automatically deleted after a week), weekly snapshots (automatically deleted after a month), monthly snapshots (automatically deleted after a year), and yearly snapshots (automatically deleted after 10 years; this hasn't happened yet :p ).
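(If anyone is curious, the retention side boils down to something like this. A stripped-down Python sketch of the idea, not my actual script; "tank/data" is a placeholder dataset name, and in practice a purpose-built tool handles this better:)

# Stripped-down sketch of the snapshot-and-prune idea (placeholder dataset name).
import subprocess
from datetime import datetime, timedelta

DATASET = "tank/data"          # placeholder
RETENTION = {                  # snapshot flavor -> how long it is kept
    "daily": timedelta(days=7),
    "weekly": timedelta(days=30),
    "monthly": timedelta(days=365),
    "yearly": timedelta(days=3650),
}

def take_snapshot(label: str) -> None:
    stamp = datetime.now().strftime("%Y-%m-%d")
    subprocess.run(["zfs", "snapshot", f"{DATASET}@{label}-{stamp}"], check=True)

def prune(label: str) -> None:
    out = subprocess.run(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", DATASET],
        capture_output=True, text=True, check=True)
    cutoff = datetime.now() - RETENTION[label]
    for name in out.stdout.splitlines():
        if f"@{label}-" not in name:          # names look like tank/data@daily-2019-01-05
            continue
        taken = datetime.strptime(name.split(f"@{label}-")[1], "%Y-%m-%d")
        if taken < cutoff:
            subprocess.run(["zfs", "destroy", name], check=True)

take_snapshot("daily")
prune("daily")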

Damn, you're rocking KVM and LXC and you don't do this stuff professionally? You could easily be murdering a monster salary with that skill set. What's stopping you?

How monster are we talking here? I mean, I'm already in a pretty lucrative field (Medical Device Product Development and QA Engineering), but I'd be lying if I didn't say that I might be getting bored with it. I have no idea how my home DIY experience compares with what it actually takes to work in an IT environment, though. I'd imagine I would need certifications and work experience, or I'd be starting at a very low level, which would probably be a huge step backwards for me.

It's my own little home network and I enjoy being Network Architect, Sys Admin and IT support all in one. I'm not sure if I'd enjoy doing it professionally. My theory is that if you find something you love to do, you should probably avoid ruining it by turning it into work. I am curious though.
 
You aren't alone in this hobby. I started collecting NetWare servers for home use when I was in high school, with sites like 2cpu for help when I moved to dual PII's and NT4, which was a long time ago. Going the IT route did make the hobby slightly less enjoyable; I'm doing other stuff now. Keep expanding if you enjoy it. I have multiple 30A 240V circuits and many E5 v1/v2 Supermicro systems now, with E5-2697 v2's in my main hypervisor. Hardforum is a good resource, and if you are looking for more people with massive home servers for help, check out Serve The Home.
 
I did realize that I was a little bit more extreme in my geeky server hobby than most, but I never thought of myself as THAT unusual.

How monster are we talking here? I mean, I'm already in a pretty lucrative field (Medical Device Product Development and QA Engineering), but I'd be lying if I didn't say that I might be getting bored with it. I have no idea how my home DIY experience compares with what it actually takes to work in an IT environment, though. I'd imagine I would need certifications and work experience, or I'd be starting at a very low level, which would probably be a huge step backwards for me.

It's my own little home network and I enjoy being Network Architect, Sys Admin and IT support all in one. I'm not sure if I'd enjoy doing it professionally. My theory is that if you find something you love to do, you should probably avoid ruining it by turning it into work. I am curious though.

It's a gray area, but guys like me are hungry for guys like you on the teams I'm on. I would rather have someone who messes with this stuff just because than have to fight and argue with someone to get them to learn it. You'd start out at $70k+ and could easily be murdering $150k+ in 5 years. You could break $200k in the right environment and industry, but you're gonna have the pressure and on-call hours to go with it. If you are starting out, certs are a great way to get noticed and get your foot in the door. However, I've been doing this stuff for almost 20 years, and I have zero certs and zero desire to get certs. I do fine without them. Anyone can memorize a test and get a cert. Go chat with the IT guys in your shop. Befriend them, go to lunch with them, and just hang out. They'll tell you the real deal and what's up. They will absolutely be impressed that you are messing with this stuff at home, and they'll give you insight into that world, which will help you decide if you want to switch career paths. I'm a die-hard infrastructure guy. I love it, and I love the people who do it. Some of the smartest, funniest, and most inspiring people I've ever worked with are IT people. Even with the on-calls, the weekend maintenance, the over-nighters, the 2am meltdowns, the failed upgrades, being awake for 48 hours fixing something, and the general train wrecks... I can't see myself doing any other kind of work but this. Infrastructure is where my heart is, and it's the only work I ever want to do.
 