Cost of electricity vs cost of hardware

dogbait

We're a small shop, but we've been running a pretty nice vSphere setup in our office server room: a few 16-disk Supermicro servers acting as storage nodes and two Supermicro 1U dual-socket servers running ESXi.

It's all fully loaded and man, when it's all cranking, the blue lights on those storage nodes twinkle something pretty... (seriously, everyone who sees them running remarks on it :) )

[Photo: SC836 chassis]


We've just received our power bill for this year, though, and it's looking to hit near enough £1 per watt per annum. To give an idea of the wattage, the three SC836 servers with 48 SATA disks plus the two ESXi hosts are pulling about 1600W at idle.

Then there's the cost of air conditioning on top, which is probably another 1200W (I haven't measured it), but as you can tell, after hitting the 3-5 year mark the cost of electricity becomes high enough that buying a more efficient setup starts to make economic sense.
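To sanity-check that, here's the rough back-of-the-envelope I'm working from (a sketch only; the ~£0.114/kWh tariff is simply what £1 per watt per annum works back to, not a figure off the bill):

```python
# Rough annual running-cost estimate for the rack (sketch, not a quote from the bill).
# Assumes a flat tariff of ~£0.114/kWh, which is roughly what £1/W/year works out to.

HOURS_PER_YEAR = 24 * 365          # 8760
TARIFF_GBP_PER_KWH = 0.114         # assumed flat rate

def annual_cost(avg_watts: float) -> float:
    """Annual electricity cost in GBP for a constant average draw in watts."""
    kwh_per_year = avg_watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * TARIFF_GBP_PER_KWH

servers_idle = 1600      # three SC836 storage nodes + two ESXi hosts, measured at idle
aircon_estimate = 1200   # rough guess, not measured

print(f"Servers alone: £{annual_cost(servers_idle):,.0f}/year")
print(f"With aircon:   £{annual_cost(servers_idle + aircon_estimate):,.0f}/year")
```

That comes out to roughly £1,600 a year for the servers at idle and closer to £2,800 once the aircon guess is added, which is what prompted this thread.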

Our storage nodes are running single Xeon 5160s on dual-socket Xeon boards. Each one has 8GB of FB-DIMM memory and 16x 1TB SATA disks. At idle they pull about 320W, and at load it hits 400W.

We just bought a 10-bay Synology server, just to see whether an appliance is a better option than rolling our own, and at idle it pulls maybe 36W and at load just 100W!

Now, the SC836 has redundant power supplies versus a single supply and is running 16 versus 10 SATA disks, so I'd expect higher power consumption. But why such a massive difference?

I mean it's at the point where it's feasible to scrap all three SC836 boxes and replace them with Synology boxes!
 
Those old Xeons on a dual-socket mainboard, equipped with fully buffered RAM (put your fingers on the FB-DIMMs; they get really hot), are not energy efficient, neither under load nor at idle.

You should also look at your disks.
Some disks are in the 5-7 watt range during activity; others need 15-20 watts, largely independent of capacity.
With newer disks, you can replace 4x 1TB with one faster 4TB disk at a quarter of the power consumption.
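As a rough sketch of what the disk side alone is worth (the per-drive wattages here are typical datasheet values, not measurements of your actual drives):

```python
# Rough comparison of one storage node's disks: 16x 1TB 7200rpm vs 4x 4TB.
# The idle wattages below are assumed, typical datasheet figures.

OLD_DRIVES, OLD_WATTS_EACH = 16, 8.0   # older 1TB 7200rpm, ~8W idle (assumed)
NEW_DRIVES, NEW_WATTS_EACH = 4, 6.0    # modern 4TB drive, ~6W idle (assumed)

old_total = OLD_DRIVES * OLD_WATTS_EACH
new_total = NEW_DRIVES * NEW_WATTS_EACH
print(f"Old array: {old_total:.0f}W, new array: {new_total:.0f}W, "
      f"saving ~{old_total - new_total:.0f}W per node")
```

At the £1 per watt per year mentioned above, that is roughly £100 per node per year from the disks alone.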

If you do not need extreme compute power, you can replace the mainboards with a current socket 1155 board, newer RAM, and a low-power dual-core Xeon. A dual-core Xeon E3-1220L, which is enough for a storage server, needs 20 watts max under load and much less at idle; the v2 (Ivy Bridge) version is 17 watts max.
(A mainboard such as a Supermicro X9-series board plus a Xeon-L and 8 GB of RAM comes to about 400 Euro.)


I would also think about replacing the old disks with newer, higher-capacity ones.

If you compare with Synology (look at the specs), they usually have no Xeons, mostly slow/low-power/low-performance Atoms or older dual-cores, and little RAM without ECC. With such hardware you are energy efficient but slow. With newer Xeons you can have nearly the same low wattage with much more performance.
 
I think Gea hit the nail on the head with his reply; I couldn't agree more with his recommendations. I would add that if you don't require these machines to be on the LAN, it may be worth getting some quotes for colocation. It can be surprisingly affordable if you educate yourself on the topic, and it comes with many other benefits.
 
Thanks for the replies. I'm very tempted to go for Synology all the way, but a part of me doesn't want to give up the flexibility of rolling our own NAS boxes.

Does a spec like this seem sensible?

  • Supermicro X9SCL+-F Motherboard
  • Intel Xeon E3-1220LV2 CPU
  • 2x4GB ECC Memory

Reading up on the subject, it seems that software RAID has come a long way and in most cases is superior to hardware RAID, especially as far as power management and spin-down are concerned.
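For what it's worth, this is the kind of spin-down control I mean: with the disks on a plain HBA under Linux software RAID you can set standby timeouts per drive with hdparm. A rough sketch (the device names are placeholders, and drives hidden behind a hardware RAID controller generally won't accept this):

```python
# Sketch: put idle member disks of a Linux md (software RAID) array into standby
# with hdparm. Assumes the disks sit on a plain HBA (e.g. an M1015 in IT mode)
# rather than behind a hardware RAID controller, and that hdparm is installed.
import subprocess

def hdparm_standby_value(minutes: int) -> int:
    """Translate a timeout in minutes to hdparm's -S encoding."""
    if minutes <= 20:
        return max(1, (minutes * 60) // 5)   # 1-240 = multiples of 5 seconds
    return 240 + min(11, minutes // 30)      # 241-251 = multiples of 30 minutes

def set_spindown(device: str, minutes: int) -> None:
    value = hdparm_standby_value(minutes)
    subprocess.run(["hdparm", "-S", str(value), device], check=True)

# Example: spin each data disk down after 30 minutes of inactivity (needs root).
for dev in ["/dev/sdb", "/dev/sdc"]:   # hypothetical device names
    set_spindown(dev, 30)
```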

We're using SAS expanders with Adaptec 3405 cards in a RAID 10 setup. I think that initially we'll keep the RAID card and disk array as they are with the new motherboard, CPU and RAM. However, is there much to gain from, say, swapping the 3405 for an IBM M1015?
 
What exactly are these servers doing?
Why so many disks? IOPS or raw storage?
Why 2 VM hosts?
Why dual core?
How much RAM do they need?
Why 3 storage hosts?

I can't really tell you where you can trim power without knowing the requirements of the systems you bought. For instance, I assume you have 2 VM hosts for redundancy, but maybe you don't need redundancy and, at the time, 2 hosts was simply the cheapest way to get enough RAM to run all the VMs you needed. Do you have 3 storage hosts because that's what you needed to meet your bulk storage requirements, or did you need triple-redundant storage? Do you need 48 drives for the IOPS of a transactional database, or did you just need 48TB for archival purposes? If it's IOPS you need, then a couple of SSDs, even enterprise grade, would smoke the 1TB drives at a fraction of the power requirements; if it's just archival, then 16x 3TB 5K rpm drives consolidated into one server would roughly halve your current power needs.

Telling us why you have what you have will go a long way toward identifying exactly what kind of "fat" you need to trim.

You might want to wait until WS2012 comes out before making any big decisions, because it's going to change the landscape quite a bit.
 
What exactly are these servers doing?
Why so many disks? IOPS or raw storage?
Raw storage for most of it, but IOPS for virtual machines over NFS.

Why 2 VM hosts?
Very handy being able to put one in Maintenance Mode and have all VMs transfer automatically to the other so there's nearly zero downtime.

Why dual core?
I assume you mean dual socket. We just bought the machines as specced by the vendor for storage purposes, so dual-socket server-grade boards with only a single socket occupied.

How much RAM do they need?
Our VMware ESXi servers need at least 12GB of RAM, if not 16GB and upwards.

Why 3 storage hosts?
2 storage servers for VMs (same failover concept as 2 ESXi hosts). You can do live transfers of VMs back and forth across them in case you have to take one down for maintenance. They also hold all media (6TB+ of RAW photos, camera footage, Windows/Mac roaming profiles), all our MSDN/Microsoft Action Pack ISO images, etc. The third host is simply a backup server which the other two sync to nightly.

2 hosts was simply the cheapest way to get enough RAM to run all the VMs you needed.
Good observation, I'd forgotten that this was one of the reasons. FB-DIMM memory modules were expensive (and still are!).

If it's IOPS you need, then a couple of SSDs, even enterprise grade, would smoke the 1TB drives at a fraction of the power requirements; if it's just archival, then 16x 3TB 5K rpm drives consolidated into one server would roughly halve your current power needs.
Good point, I guess the cost of hardware versus the cost of electricity would have to be weighed up before moving the VMs to SSD. Are SAS drives worth the premium for better IOPS?

Might want to wait til WS2012 comes out before making any big decisions because it's going to change the landscape quite a bit.
Thanks for the tip, I'll take a look at it. It's definitely becoming a chore maintaining three Debian-based storage systems; I miss the days of being an all-Windows shop. A quick search turned this up, and it seems to suggest that WS2012 and NFS performance are ready for the big time.
 
I meant dual socket (you were correct) for the VM hosts more than the storage. I was wondering if you needed the CPU cores/power that came with dual socket or if you needed the RAM capacity that came with it.

WS2012 (especially the new Hyper-V) is going to have some massive improvements for small shops such as yours: storage pools, replication, non-shared-storage live migrations, data dedup, etc. If you haven't played around with the WS2012 RC yet, you owe it to yourself to.

Add all that to the Xeon E5-1600 and E5-2600 chips that came out earlier this year, and a full system overhaul could give you some pretty massive savings; Sandy Bridge uses way less power than those old cores do. Dual Socket 2011 boards have some impressive memory capacity as well, and registered ECC RAM isn't as expensive as it used to be; it's actually quite a bit cheaper than unbuffered ECC these days.

High-RPM SAS versus SSD really depends on how much space your VMs need for their boot drives. As far as raw IOPS go you're not going to beat an SSD, but if the VMs need more room you can get more storage with SAS. Another thing SAS can't beat SSDs on is power.

Current exchange rates put £1600 at about $2500, so you don't really have that much money to work with if you want to see a reasonable ROI. But since you're not paying me and I've already given you plenty to think about, I'll show you what could be possible if you weren't trying to save money :p For funsies, this would give you 2 systems with a total of 24 cores, 48 threads, 128GB RAM, and 36TB of storage in a 2U unit, at I'd guess around 300-400W with the CPUs mostly idle. Almost half of that cost is from HDDs, though, which you could swap for cheaper consumer versions and save a few grand.

But to end on a serious note, your biggest power savings would be to replace the 1TB drives with 3TB drives (I honestly would stick with 7,200rpm; 5K rpm drives can be hell to work with under any kind of real load), add 2-4 SSDs (depending on whether you want to RAID-1 them) for VM boot drives to keep IOPS up, and even get rid of storage server #3 and throw a few more 3TB drives into #1 and #2 and have them back up to each other onto those dedicated drives.

Another thing to do would be to grab a few Kill-A-Watts and throw them on each server to see where the power draw is really coming from and make sure it actually IS the storage systems. If it's actually the VM hosts drawing the power, then a system upgrade to an E5-2600 might be something to put on the table to your superiors, mentioning of course that the £1600 figure doesn't include the AC cost to cool them.
 
I meant dual socket (you were correct) for the VM hosts more than the storage. I was wondering if you needed the CPU cores/power that came with dual socket or if you needed the RAM capacity that came with it.

It was probably overkill, since we're running Debian Linux and hardware RAID, so the CPU spends most of its time idling. That said, it does hit 50% usage when netatalk or smb are serving up files.


But to end on a serious note, your biggest power savings would be to replace the 1TB drives with 3TB drives (I honestly would stick with 7,200rpm; 5K rpm drives can be hell to work with under any kind of real load), add 2-4 SSDs (depending on whether you want to RAID-1 them) for VM boot drives to keep IOPS up, and even get rid of storage server #3 and throw a few more 3TB drives into #1 and #2 and have them back up to each other onto those dedicated drives.

Thanks, great suggestions. I was under the impression, though, that a SATA drive in sleep mode consumes close to 0W?


Another thing to do would be to grab a few Kill-A-Watts and throw them on each server to see where the power draw is really coming from and make sure it actually IS the storage systems. If it's actually the VM hosts drawing the power, then a system upgrade to an E5-2600 might be something to put on the table to your superiors, mentioning of course that the £1600 figure doesn't include the AC cost to cool them.

I took some pretty detailed measurements of the various systems, here we go:

System 1 - VM Host
OS: ESXi 4.1
MB: Supermicro X7DBR-8
CPU: 2x Intel Xeon E5430 @ 2.66GHz
RAM: 12GB FB-DIMM
HD: 4GB CF

Off: 14W
Idle: 180W
Load: 228W

System 2 - File Server
OS: Debian 6
MB: Supermicro X7DBN
CPU: 1x Intel Xeon E5310 @ 1.60GHz
RAM: 4GB FB-DIMM
HD: 16x 1TB SATA 7200rpm

Off: 36W
Idle: 330W
Load: 395W

System 3 - File Server
OS: Debian 6
MB: Supermicro H8DM3
CPU: 2x AMD Opteron 2214 @ 2.2GHz
RAM: 4GB
HD: 16x 750GB SATA 7200rpm

Off: 36W
Idle: 280W (Controller supports putting disks in sleep mode)
Load: 460W
 
We've just received our power bill for this year, though, and it's looking to hit near enough £1 per watt per annum. To give an idea of the wattage, the three SC836 servers with 48 SATA disks plus the two ESXi hosts are pulling about 1600W at idle.

Then there's the cost of air conditioning on top, which is probably another 1200W (I haven't measured it), but as you can tell, after hitting the 3-5 year mark the cost of electricity becomes high enough that buying a more efficient setup starts to make economic sense.

While you are looking at the power cost for operating the computers, you should look at all of your business costs. There might be a better return on investment by spending on something other than computers.

---

Your power costs seem to be in the range of petty cash - much below what I would worry about.

On the other hand, if you are going to need to upgrade in the future, now is as good a time as any.

---

I do like the suggestion of using larger disks, but that is an ongoing plan: whenever you run out of space, you replace several disks with one and keep the numbers down.
 
If you need a cheap short-term solution: replace your Xeon E54xx and E53xx chips with L54xx parts.
Your E54xx and E53xx are rated at 80W; the L54xx is rated at 50W, a 30W difference per chip.
Under load you would save about 60W on a dual-processor box or 30W on a single-processor one.

Replace your Opteron 22xx with the HE version as well to lower processor wattage.

I just checked on 3bay.com: a Xeon L5420 is ~$19 with free shipping, and an Opteron 2216 HE is ~$6 with free shipping.
Total cost: (3 x $19) + $6 = $63.
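A quick payback sketch using the figures above and the £1 per watt per annum from the first post (these are TDP deltas, so the real-world saving at idle will be smaller):

```python
# Payback estimate for the L54xx swap (sketch; assumes the boxes spend enough
# time loaded that the TDP delta roughly reflects the real saving).

PARTS_COST_USD = 63
USD_PER_GBP = 1.56              # roughly the £1600 ~= $2500 rate mentioned earlier
GBP_PER_WATT_YEAR = 1.0         # the OP's "£1 per watt per annum"

watts_saved = 60 + 30           # dual-Xeon VM host + single-Xeon file server
                                # (the Opteron HE swap would add a bit more)

annual_saving_gbp = watts_saved * GBP_PER_WATT_YEAR
payback_months = 12 * (PARTS_COST_USD / USD_PER_GBP) / annual_saving_gbp
print(f"~{watts_saved}W saved -> ~£{annual_saving_gbp:.0f}/year, "
      f"payback in ~{payback_months:.0f} months")
```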
 
Maybe ESXi is breaking power management?


I have a dual L5335 box with 20 disks that runs at about half the draw of your System 2.
 
Another thing that would be effectively free power savings is to just yank CPU #2 out of all the dual-CPU systems and move all the RAM to socket 1. If you're never really using over 50% CPU, why pay to power it? It's not like dual-CPU systems offer any form of redundancy; if either CPU crashes it'll take down the whole box. If you ever find yourself CPU-starved in the future you can always stick it back in.

I just checked out the SM boards. It looks like the Intel boards are non-NUMA, so you can just yank a CPU without worrying about the RAM. The AMD board is NUMA, so with 1GB sticks you're fine (you can move all the RAM to CPU 1's banks), but if you're using 512MB sticks you can't fit it all and you're screwed.
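If you want to double-check before pulling anything, here's a quick sketch for the Debian file servers (it just reads what the kernel reports per NUMA node; if only node0 shows up, the board is presenting uniform memory):

```python
# Sketch: list how much memory the kernel sees on each NUMA node.
# Linux only; run it on the Debian boxes, not on ESXi.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    meminfo = (node / "meminfo").read_text()
    total = next(line for line in meminfo.splitlines() if "MemTotal" in line)
    print(node.name, total.split()[-2], "kB")
```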
 