How much power is your server drawing?

bexamous

[H]ard|Gawd
Joined
Dec 12, 2005
Messages
1,670
Just wondering.

I've got 2x Xeon X5650, 12x 4GB DIMMs, 3x M1015, down to 20 drives, and still drawing 310 watts or so at idle. Spinning down drives only gets me into the 280s. At 310W 24/7, living in the CA Bay Area and well into PG&E's Tier 4 pricing of $0.35/kWh, that's about $80/month, lol.
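If anyone wants to sanity-check numbers like that, here's a quick back-of-the-envelope in Python (assuming a 30.4-day month; the wattage and rate are just the figures quoted above):

# Monthly cost of a constant draw at a flat $/kWh rate
def monthly_cost(watts, rate_per_kwh, hours=24 * 30.4):
    """Dollars per month for a constant draw of `watts` at `rate_per_kwh` $/kWh."""
    return watts / 1000 * hours * rate_per_kwh

print(f"${monthly_cost(310, 0.35):.2f}/month")  # ~$79 -- right around the $80 quoted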
 
All on our UPS:

P9D-E/4L + E3-1240V3 + 32GB RAM + 6x 4TB enterprise drives + 3x HBAs + external SAS card + 2x 2TB enterprise drives + 6x SSDs, Platinum PSU
Dual-G34 motherboard, 70GB DDR3, 2x 16-core Opterons, 1x SSD, Platinum PSU
P9D-WS, E3-1245V3, 16GB DDR3, 4x SSDs, Platinum PSU, 3x monitors

...the UPS draws 750W at the wall. Take away the workstation and three monitors and it's about 450W, from memory. HDDs spin down at night or when unused, which makes a big difference since each one draws close to 10W.

Electricity *starts* at $0.33/kWh around here (AU) - the Platinum PSUs sure do end up paying off at that rate.
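As a rough sketch of why the Platinum units pay off at that rate: the 92% vs. 85% efficiencies below are illustrative ballpark figures for Platinum vs. a lower-tier PSU, and the 400W DC load is assumed, not measured from these machines:

# Wall draw for a given DC-side load at an assumed PSU efficiency
def wall_watts(dc_load, efficiency):
    return dc_load / efficiency

load, rate, hours = 400, 0.33, 24 * 30.4  # assumed DC load (W), $/kWh, hours/month
saved = wall_watts(load, 0.85) - wall_watts(load, 0.92)
print(f"~{saved:.0f}W saved, ~${saved / 1000 * hours * rate:.2f}/month")  # ~36W, ~$8.62/month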
 
2x E5-2620s w/ 192GB RAM, Intel X540-T2, Adaptec 71605Q, 2x Synology DS1513+ in an HA cluster, 16 HDDs total, 2x Intel S3700 cache SSDs, Cisco SG500-28, Netgear XS708E, cable modem, 2x network cable tuners, Ubiquiti ToughSwitch with 5 PoE devices attached

All combined, 300-350W total; can't remember the exact number, and it obviously varies with load. Those X5650s are killing you.
 
I think my server is drawing about 150W, but I've never measured it accurately.

Xeon E3-1230v3, 2 DIMMs, 13 HDDs.

But my whole electricity bill is about $70/mo for a 2-person, 2-bedroom apartment.
 
Mine:
i5-760 OC'd to 3.6GHz and 10 HDDs (2x 7200rpm, 8x 5400rpm). Pulls ~180 watts at the wall at idle, according to the UPS.
 
Just wondering.

I've got 2x Xeon X5650, 12x 4GB DIMMs, 3x M1015, down to 20 drives, and still drawing 310 watts or so at idle. Spinning down drives only gets me into the 280s. At 310W 24/7, living in the CA Bay Area and well into PG&E's Tier 4 pricing of $0.35/kWh, that's about $80/month, lol.

Fellow Bay Area resident and I feel your Pacific Gas and Explosions pain. I'm running an old P4 in my WHS and it probably draws about the same ;)
 
My desktop-spec server only draws about 90-100W from the wall when it's serving files or media to my HTPCs. Drives are all set to always-on.

Specs in signature
 
I've got 4x E3s. 2 are on NAS duty, pulling about 100W each; 3 are on ESXi-host duty and pull (I think) about 70-75W each at around 30% utilization. My workstation pulls about 100W, I think.

Current numbers according to the UPS:
192W+254W = 446W

Under 500W for 5 computers. I like the current generation!

edit: That includes the monitor, too. Dunno what the power draw is on that. Oh, and a switch, and a wireless router. And come to think of it, I've got 3xSATA drives which aren't even in use. I could probably unplug those to bring the bill down...

edit2: Had an electrician come and re-wire our house. Perfect time to get some real numbers out of my boxes:

Switch: 10W
LED Light Strip: 1W
Monitor: 8W phantom, 100W load
Router: 2W
Workstation: 165W during boot, 80W idle
ESXi-01: 86W boot, 35W idle, 3W phantom
ESXi-02: 70W boot, 36W idle, 4W phantom
ESXi-03: 180W boot, 87W idle, 9W phantom (This is my backup NAS as well, thus the higher idle)
NAS: 157W boot, 91W idle, 12W phantom
 
:/ This is making me want to upgrade my server to an E3-1230V3 build or something.
 
Intel Xeon E3-1220, X9SCM-F motherboard, 3x M1015 HBAs, 24x HDDs (10x 2TB Samsung F4EG, 10x 3TB 7200 RPM Toshiba, 4x 750GB Samsung F1). Approximately 180 watts (I'm guessing). I think it was 140 watts when I had 10x 1TB Samsung F3s instead of the Toshibas.

When the drives spin down it uses about 40 watts.
 
Which one?

My VM server, a 1U "monstrosity" with 72GB RAM, 2x 2TB drives in RAID 1, and 2x L5520, is about 120W typical and peaks at about 150W with the CPUs pegged.
My file server, with more consumer hardware and an energy-efficient CPU (i3-4130), no video card, and 8x 2TB WD Reds + an SSD OS drive, is about 100-110W. I pay ~$0.09/kWh.
 
tGeIM9c.jpg

Q8WX6Bu.jpg
 
Just wondering.

I've got 2x Xeon X5650, 12x 4GB DIMMs, 3x M1015, down to 20 drives, and still drawing 310 watts or so at idle. Spinning down drives only gets me into the 280s. At 310W 24/7, living in the CA Bay Area and well into PG&E's Tier 4 pricing of $0.35/kWh, that's about $80/month, lol.

Fellow Bay Area resident and I feel your Pacific Gas and Explosions pain. I'm running an old P4 in my WHS and it probably draws about the same ;)

vote for a nuclear power plant
 
Do the HDDs still get enough airflow with those empty bays? Aren't you supposed to put the trays in with the sliders shut, or use the dummy plastic spacers?

They seem fine to me. But this weekend I'll clean things up and see if it improves my temps.

On another note, to be fair, my networking equipment is taking up half of that draw.

c2t0d0 rpool basic ONLINE S:1 H:0 T:0 VB0250EAVER sat,12 PASSED 25 °C
c5t5000C5004FA34CA9d0 ZFS raidz ONLINE S:1 H:0 T:0 ST4000DM000-1F2168 sat,12 PASSED 37 °C
c5t5000C5004FB8D4C1d0 ZFS raidz ONLINE S:1 H:0 T:0 ST4000DM000-1F2168 sat,12 PASSED 40 °C
c5t5000C5004FB917D0d0 ZFS raidz ONLINE S:1 H:0 T:0 ST4000DM000-1F2168 sat,12 PASSED 37 °C
c5t5000C5004FBBA37Fd0 ZFS raidz ONLINE S:1 H:3 T:2 ST4000DM000-1F2168 sat,12 PASSED 35 °C
c5t5000C5004FBBCACDd0 ZFS raidz ONLINE S:1 H:0 T:0 ST4000DM000-1F2168 sat,12 PASSED 38 °C
c5t5000C500606FD815d0 ZFS raidz ONLINE S:1 H:0 T:0 ST4000DM000-1F2168 sat,12 PASSED 38 °C
c5t5000C5006072404Cd0 ZFS raidz ONLINE S:0 H:0 T:0 ST4000DM000-1F2168 sat,12 PASSED 36 °C
c5t5000C50060724912d0 ZFS raidz ONLINE S:0 H:0 T:0 ST4000DM000-1F2168 sat,12 PASSED 36 °C
c5t5000C500607321F0d0 ZFS raidz ONLINE S:1 H:0 T:0 ST4000DM000-1F2168 sat,12 PASSED 35 °C
c5t5000C5006088312Cd0 ZFS spares AVAIL S:1 H:0 T:0 ST4000DM000-1F2168 sat,12 PASSED 39 °C
c5t5000C50065331CCBd0 ZFS raidz ONLINE S:1 H:0 T:0 ST4000DM000-1F2168 sat,12 PASSED 37 °C
c5t5001B449C7B79203d0 MacSSD basic ONLINE S:0 H:0 T:0 SanDisk SDSSDXP240G sat,12 PASSED 33 °C
c5t500A075108FF8696d0 VMWare mirror ONLINE S:1 H:0 T:0 M4-CT256M4SSD2 sat,12 PASSED 0 °C
c5t500A07510909439Fd0 ZFS cache ONLINE S:1 H:0 T:0 M4-CT256M4SSD2 sat,12 PASSED 0 °C
c5t500A0751090943B4d0 VMWare mirror ONLINE S:1 H:0 T:0 M4-CT256M4SSD2 sat,12 PASSED 0 °C
 
Power usage of my rack over the last week:

power-week.png


Power went up recently when I added some new SAS expanders, which needed additional cooling and meant running the chassis fans at 12V instead of 5V. This is powering my router box (a relatively low-power Core i3 with 4 HDs), the main rig in my signature, which spans 3 chassis (two of them holding 15x 3TB drives plus just a SAS expander each), a switch, a printer, and some other random stuff.

The drops in power usage are when my 30x 3TB drives are asleep and my 16 cores aren't busy with a 13.3-trillion-digit pi calculation.
 
They seem fine to me. But this weekend I'll clean things up and see if it improves my temps.

On another note, to be fair, my networking equipment is taking up half of that draw.

(drive listing and temps quoted above snipped)

Yeah, those are pretty hot, dude. For a bay like that they should be in the mid-20s.

And those are only the idle temp numbers!
 
Yeah, those are pretty hot, dude. For a bay like that they should be in the mid-20s.

And those are only the idle temp numbers!

I'm going to have to disagree with you, man. The only reason those bays are open is that I'd just pulled disks from my JBOD. I've owned this JBOD for over a year and a half and these disks for almost a year, and I assure you they have never run at 25 degrees Celsius in this enclosure under any circumstances. You're also not taking into account the temperature of the closet this gear is in.

These drives running under 40 degrees Celsius is beyond fine.

Edit: Check out the temps of houkouonchi's Seagates: http://www.webhostingtalk.com/showpost.php?p=8554333&postcount=23
 
2.3 MH/s Scrypt Mining

ltc4.JPG


My Synology NAS, on the other hand, is pretty low (I haven't measured it recently, but it's definitely under 100 watts with 4 HDs).
 
My AMD box: 2x Opteron 4310 EE with 128GB RAM, 4x Samsung 500GB EVOs in RAID 10 for VMs, and 4x WD 2TB Reds in RAID 5, both arrays attached to an Areca 1882 RAID controller.

At idle it runs around 96 watts, and that's with 4 VMs running. When the RAID 5 is active, it goes up to 135-140 watts. I'm in Germany, so power consumption is very important to me, and at the time this was what I went with.

Edit: Motherboard is the ASUS KCMA-D8.
 
Let's see; currently I'm drawing 357 watts.

This is through the UPS, powering a server (E5530, 24GB of RAM, and 13 hard drives), a 24-port network switch, wifi, and a cable modem.
 
I'm going to have to disagree with you, man. The only reason those bays are open is that I'd just pulled disks from my JBOD. I've owned this JBOD for over a year and a half and these disks for almost a year, and I assure you they have never run at 25 degrees Celsius in this enclosure under any circumstances. You're also not taking into account the temperature of the closet this gear is in.

These drives running under 40 degrees Celsius is beyond fine.

Edit: Check out the temps of houkouonchi's Seagates: http://www.webhostingtalk.com/showpost.php?p=8554333&postcount=23

The only time I have ever seen drives run under 30°C in any kind of enclosure with a lot of drives (12 or more) is in a data center where they're taking in <70°F air.

I would say anything up to the low 40s is fine for 24/7 usage. Now, if your drives were consistently running at 50°C or more, then I would say you have a problem.

My home machine's drives usually run mid-to-high 30s (Hitachi) and they have no issues.

My colo'd server in the DC typically saw temps in the 20-25°C range (because it was at the bottom of the rack), and the old array was a bunch of Seagate 1.5TB disks; out of 8 disks I saw 3 failures, and they all developed reallocated sectors and other issues over the years. Seagate sucks whether the temperature is low or not.
 
The only time I have ever seen drives run under 30°C in any kind of enclosure with a lot of drives (12 or more) is in a data center where they're taking in <70°F air.

I would say anything up to the low 40s is fine for 24/7 usage. Now, if your drives were consistently running at 50°C or more, then I would say you have a problem.

My home machine's drives usually run mid-to-high 30s (Hitachi) and they have no issues.

My colo'd server in the DC typically saw temps in the 20-25°C range (because it was at the bottom of the rack), and the old array was a bunch of Seagate 1.5TB disks; out of 8 disks I saw 3 failures, and they all developed reallocated sectors and other issues over the years. Seagate sucks whether the temperature is low or not.


houkouonchi,

Very few people have personally bought as many disks as you, but for what it's worth, I've had great success with Seagates. Of the 25+ I've owned in my lifetime, I've had maybe 2 failed drives. I noticed Backblaze had reliability issues with Seagate 1.5TB drives like you did; could it just be that series? Have you owned other large batches of Seagates and had similar experiences?

There is no doubt in my mind that Hitachi seems like the better brand right now, but I would not go so far as to say Seagates suck.
 
houkouonchi,

Very few people have personally bought as many disks as you, but for what it's worth, I've had great success with Seagates. Of the 25+ I've owned in my lifetime, I've had maybe 2 failed drives. I noticed Backblaze had reliability issues with Seagate 1.5TB drives like you did; could it just be that series? Have you owned other large batches of Seagates and had similar experiences?

There is no doubt in my mind that Hitachi seems like the better brand right now, but I would not go so far as to say Seagates suck.

I've also had various models of 1TB Seagate disks and personally seen horrible failure rates. We also have 10,000+ hard drives at work and have seen absolutely horrible failure rates there (worse than Backblaze's).

The lab expansion I did over a year ago at my current job used free, second-hand servers that came with the same bad Seagate disks... Right off the bat I had to replace 300 of the 1000+ drives (126 servers with 8 disks each).

Even after the initial replacement I had to replace another 150 or so, and since I was using RMA'd Seagate drives, the RMA replacements kept failing too... Finally I am replacing disks with new HGST drives; so far ~200 have been swapped in and none of the HGST drives have failed... Have you been keeping up with the numbers? Over half the drives in the lab have failed so far...
 
The server is running an E7200 processor with an Intel X25-M 80GB SSD for the OS and a 3ware 9650SE-16ML with 9x HGST 4TB disks (bought from B&H when they were like $150 each). Oddly enough, I received their new "Megascale 4000.B" disks (product number HMS5C4040BLE640). They're probably rebranded drives, but they run pretty cool at about 30-35°C depending on ambient temperature. My previous 16x ST31500341AS ran about 40-45°C, but all the bays were fully populated, whereas with the new disks I left empty trays between drives so heat wouldn't conduct to the neighboring drive in my server chassis.

Overall the server runs at about ~100W under disk load (currently migrating files over from an external). That's a good improvement from the 170-180W I was getting with the old 16-disk ST31500341AS array.

Even after the initial replacement I had to replace another 150 or so, and since I was using RMA'd Seagate drives, the RMA replacements kept failing too... Finally I am replacing disks with new HGST drives; so far ~200 have been swapped in and none of the HGST drives have failed... Have you been keeping up with the numbers? Over half the drives in the lab have failed so far...

Makes me feel good that I got HGST disks for my disk upgrade. :)
 
My home Linux box, consisting of a Celeron G530 + 8GB RAM, an LSI 2008 HBA, 7 SATA HDs, and 1 SSD, draws <80W at idle. Under typical low load, with the CPU at ~10% and the disks at ~5% busy time, power averages ~85W.
 
Before: 2x Xeon X5650, 12x 4GB DIMMs, 3x M1015 plus onboard SAS, 28 HDDs, 3 SSDs. Idled at ~310 watts, ~280W with as many disks spun down as possible (some were accessed too frequently to stay spun down at all).

Got a new server to cut down on power, and I get to reuse the 12-core/48GB box as my new workstation (which I put to sleep when not using it, so still saving power)... and the GF gets my old desktop, as hers keeps bluescreening and I'm too lazy to troubleshoot it.

After: E3-1220V3, X10SL7-F-O, 4x 8GB Samsung DIMMs, onboard SAS + two old M1015s, 660W Seasonic Platinum PSU. Down to 12x 4TB HDDs and 3 SSDs.

In addition:
-I swapped out the 2TB drives for 4TBs I had but hadn't gotten around to using, which let me lower the total number of disks.
-Got rid of raidz2. Still using ZFS for everything, but: VMs on RAID 1 SSDs, non-media files on RAID 1 HDDs, and all media on individual disks with snapraid for parity (a sketch of such a config follows below). This lets me spin down more disks more of the time.
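For reference, a minimal snapraid.conf along those lines -- the disk names and mount points here are hypothetical, not my actual layout:

# Hypothetical snapraid layout: one parity disk protecting individual media disks
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3

Since parity only updates when you run 'snapraid sync', the data disks can stay spun down whenever nothing is reading them.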

New setup:
PSU + mobo (SAS disabled) + CPU + RAM + 840 Pro = 22 watts idling
Same w/SAS enabled = 29-30 watts
Above w/2x M1015, 2 more SSDs, 12x 4TB drives, and a few fans = 110 watts
All 12 of the 4TB drives spun down = 61 watts
10 spun down (usual) = 67 watts

So ~300W -> ~67W

Cutting out 233 watts 24/7 at PG&E's Tier 4 rate of $0.35/kWh = ~$59.53/month in savings.
Spent ~$800 = ~13.4 months to break even.
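Same math as a sketch, for anyone checking the break-even (all numbers are the ones from this post):

# Break-even on the ~$800 upgrade from the ~233W reduction
saved_watts = 233
monthly_kwh = saved_watts / 1000 * 24 * 30.4   # ~170 kWh/month
monthly_savings = monthly_kwh * 0.35           # ~$59.50 at Tier 4
print(f"Break even in {800 / monthly_savings:.1f} months")  # ~13.4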
 
Core i3-3240, X9SCM-F-L, 16GB ECC RAM, 2x LSI HBAs, 3x internal HDDs (RAIDZ1) and 8x HDDs (RAIDZ2) in an SE3016 via SFF-8088; about 87-162 watts depending on load. Basically, when we're just streaming it hovers around 100 watts, but as soon as a scheduled scrub or rsync starts you see it jump up to 120-130 watts. When those things are going on plus regular everyday stuff (like streaming), it'll hit a high of about 162 watts.
 
Before: 2x Xeon X5650, 12x 4GB DIMMs, 3x M1015 plus onboard SAS, 28 HDDs, 3 SSDs. Idled at ~310 watts, ~280W with as many disks spun down as possible (some were accessed too frequently to stay spun down at all).

Got a new server to cut down on power, and I get to reuse the 12-core/48GB box as my new workstation (which I put to sleep when not using it, so still saving power)... and the GF gets my old desktop, as hers keeps bluescreening and I'm too lazy to troubleshoot it.

After: E3-1220V3, X10SL7-F-O, 4x 8GB Samsung DIMMs, onboard SAS + two old M1015s, 660W Seasonic Platinum PSU. Down to 12x 4TB HDDs and 3 SSDs.

In addition:
-I swapped out the 2TB drives for 4TBs I had but hadn't gotten around to using, which let me lower the total number of disks.
-Got rid of raidz2. Still using ZFS for everything, but: VMs on RAID 1 SSDs, non-media files on RAID 1 HDDs, and all media on individual disks with snapraid for parity. This lets me spin down more disks more of the time.

New setup:
PSU + mobo (SAS disabled) + CPU + RAM + 840 Pro = 22 watts idling
Same w/SAS enabled = 29-30 watts
Above w/2x M1015, 2 more SSDs, 20x 4TB drives, and a few fans = 110 watts
All 20 of the 4TB drives spun down = 61 watts
18 spun down (usual) = 67 watts

So ~300W -> ~67W

Cutting out 233 watts 24/7 at PG&E's Tier 4 rate of $0.35/kWh = ~$59.53/month in savings.
Spent ~$800 = ~13.4 months to break even.

Nice!

I did something similar going from a Q6600 to an i5-3570. The new one is <60W, where the Q6600 setup was more like 150+.
 
So ~300W -> ~67W

Cutting out 233 watts 24/7 at PG&E's Tier 4 rate of $0.35/kWh = ~$59.53/month in savings.
Spent ~$800 = ~13.4 months to break even.

I'm just trying to confirm what you're saying here. You're using 20 drives, and with 18 spun down you're only pulling 67 watts? What are you using to cool these drives? What are your drive temps?

Also, how many kWh are you using in total at your house to be hitting Tier 4 pricing? It almost seems like you need to cut down on electricity in other places.
 
Man, some of you guys use a lot of electricity. My whole power bill at my apartment is $60-70/mo, and even that's split between 2 people.
 

Doh, sorry, I wrote a few numbers incorrectly: the old setup had 20x 2-4TB drives when it was drawing 300 watts, and as mentioned, when upgrading I consolidated to just 12x 4TB drives (and 3 SSDs). When I then wrote the power numbers I reused the number 20 incorrectly. I've updated the post, but basically: 110 watts with 0 of 12 spun down, 67 watts with 10 of 12 spun down, and 61 watts with 12 of 12 spun down. This is just in a Norco 2024 (or whatever) case with 3x 120mm fans in the center, the retail CPU heatsink's fan, and an 80mm fan that gets some airflow over the M1015s... oh, and I guess the PSU has a fan too, but it's usually off; it only spins up if the PSU gets too warm.

Oh, and yeah, this isn't the only thing I'm doing to cut power, but it's not like it's hard to hit PG&E's Tier 4. We only use 1200-1600 kWh/month depending on the time of year; it just ends up being expensive because of local rates. That's not a tiny amount, but it's not hugely excessive either... the average US home draws around 900 kWh/month. I am looking to cut power everywhere, but 300 watts 24/7 is still 219 kWh no matter how you look at it, and the new setup saves 170 kWh of that... every month. I can, and did, get a new pool pump, but that only saves power 5-6 months of the year.
 
SirMister, in most cases the amount of power used is directly related to the size of the house.

Apartments are going to use less electricity; as you said, 2 people live there, so on average there are half as many people as in a house. Half the water heating, half the lighting, and even half the computers :)

Larger houses have a larger footprint: more pipes, more space to cool/heat, likely lots more vampire electrical devices plugged in, and larger freezers and fridges.
 
The thing I have that's closest to a server (OpenIndiana, ECC memory, 6-core AMD) doesn't draw much, as I only start it to copy a few hundred GB to it, then shut it down. For everyday "serving" I use a PC drawing about 150W with 5 HDDs and a Xeon X3210.

In the garage I have a rig with an FX-8350, 8GB ECC, an Enermax Platimax 1350W, and 3x Sapphire 290s mining; it draws 1150W and at the moment makes just enough money to pay for that power!
 
65 watts: Intel Celeron G540, ATX motherboard, 8GB memory, 21 assorted hard drives (6x 1TB, 13x 2TB, 1x 3TB, 1x 60GB SSD for cache) running unRAID. I never see more than 2-3 hard drives spun up at a time.
Runs in a Norco 4020
 