Refurb Dell C1100 vs HP DL 160 G6 - RAID support

hardware_failure

[H]ard|Gawd
Joined: Mar 21, 2008
Messages: 1,370
Right now there are some really good deals on eBay for the Dell C1100s and HP DL160 G6s. They typically have 2x quad-core L5520s or 2x hex-core L5639s, and 24-72GB of RAM.

I'm not going to list examples, but they are pretty easy to find if you do a search on eBay. The higher-end 72GB units are around $450 for the quad-core L5520s and $600 for the hex-core L5639s. A VM host's wet dream.

Anyway, if I wanted to run a 4-drive RAID5 on 4TB drives, which of the two would be better?

I have read that the HPs use the HP Smart Array B110i controller, which may not support 4TB drives and/or needs special licensing? Also, the C1100s are "custom pulled" units that might have locked BIOS setups and be finicky about any type of hardware reconfiguration (RAM changes, add-on cards, etc.). I don't want to use the Dell's onboard ICH10R for RAID5 (for obvious reasons). If I picked up one of the Dells I'd look into adding a card, but obviously it has to work.

Has anyone picked up either of these "specials" on eBay, and if so, can you share your experience with RAID, particularly with 4TB drives?

Thanks!
 
It's about half the price per CPU if you get the C6100, though it maxes out at 32GB of RAM per CPU instead of 36GB (64GB per node vs 72GB).

A friend of mine recently got the C1100 from them; he runs a RAID10 on it with no issues, though with >2TB drives I'd imagine you'd need a separate boot drive.
 
It's about half the price per CPU if you get the C6100, though it maxes out at 32GB of RAM per CPU instead of 36GB (64GB per node vs 72GB).

A friend of mine recently got the C1100 from them; he runs a RAID10 on it with no issues, though with >2TB drives I'd imagine you'd need a separate boot drive.
Being split into 4 nodes (C6100) is not exactly what I was looking for, though it would be pretty sweet to have 4 nodes maxed out at 64GB each. The current listings do not seem that favorable at the moment.

So your friend runs a RAID10 on the onboard C1100 SATA ports? I'm very curious about this setup. I'd like to have 12TB of space, but 8TB would suffice, plus it should obviously be faster.
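For reference, the space math I'm working from with 4x 4TB drives (a rough sketch in Python, before any filesystem/formatting overhead):

Code:
drives, size_tb = 4, 4
raid5_tb = (drives - 1) * size_tb    # one drive's worth of capacity goes to parity -> 12 TB
raid10_tb = (drives // 2) * size_tb  # striped mirrors, half the raw capacity -> 8 TB
print("RAID5: %d TB usable, RAID10: %d TB usable" % (raid5_tb, raid10_tb))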

I am leaning more towards the Dell. The proprietary weirdness of HP servers has always made me wary. It's too bad the C1100 isn't 2U (more room for local storage).

Thanks for the response.
 
I don't know if the 160 supports >3TB out of the box, but there is a PCIe slot right next to the SAS port on the mobo. I put an M1015 in so I could pass it through and it worked fine. Make sure you get the trays; some don't have them. There is also no onboard USB connector to put an OS on. I used a small SSD to load ESXi and a few important VMs and passed the card through. Tight fit, but it works.
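If it helps, a quick way to sanity-check that the host actually sees the card before you set up passthrough (a rough sketch run from a management box with SSH access to the host; the "root@esxi-host" address and the LSI/2008 search strings for the M1015 are placeholders, adjust for your setup):

Code:
import subprocess

def run_on_host(cmd):
    # SSH into the ESXi host and return the command's output as text
    return subprocess.check_output(["ssh", "root@esxi-host", cmd]).decode()

# The M1015 is an LSI SAS2008-based card, so look for it in the PCI device list
pci = run_on_host("esxcli hardware pci list")
print("\n".join(line for line in pci.splitlines() if "LSI" in line or "2008" in line))

# Before passthrough is enabled, the card should also show up as a claimed vmhba adapter
print(run_on_host("esxcli storage core adapter list"))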
 
I have a couple of C1100s with 4x 2TB drives, and I put a 120GB SSD inside the case just lying in there.
 
We bought several of the C1100s at my job to use for student project servers. I've installed RAID arrays in a couple of them with 1.5 and 2TB drives, and have added 10GbE cards to them without issue. I will say that if you get one, make sure to pull it apart and take a look at it before you run it. About half the ones I received had the heatsink for the second CPU in the wrong orientation, and removing the heatsink showed that the "refurb" team just slapped a bunch of crappy paste over the original gunk and bolted it back together. I'm sure it wouldn't have held up under any serious use.
 
I don't know if the 160 supports >3TB out of the box, but there is a PCIe slot right next to the SAS port on the mobo. I put an M1015 in so I could pass it through and it worked fine. Make sure you get the trays; some don't have them. There is also no onboard USB connector to put an OS on. I used a small SSD to load ESXi and a few important VMs and passed the card through. Tight fit, but it works.
Yes, I will def make sure they have trays. By onboard USB, do you mean something like a 9-pin header to add a flash memory reader to load ESXi from?

I have a couple of C1100s with 4x 2TB drives, and I put a 120GB SSD inside the case just lying in there.
This is what I was planning on doing (4x 4TB drives in the bays, and cram an SSD inside wherever it fits).

We bought several of the C1100s at my job to use for student project servers. I've installed RAID arrays in a couple of them with 1.5 and 2TB drives, and have added 10GbE cards to them without issue. I will say that if you get one, make sure to pull it apart and take a look at it before you run it. About half the ones I received had the heatsink for the second CPU in the wrong orientation, and removing the heatsink showed that the "refurb" team just slapped a bunch of crappy paste over the original gunk and bolted it back together. I'm sure it wouldn't have held up under any serious use.
Since the C1100 only has one PCIe slot, did you use the proprietary "mezzanine" slot for the 10GbE cards? Thanks for the heads-up on the heatsinks. In general, how happy are you with the servers you bought? Did you get any with hex-cores?

Thanks everyone for the replies/feedback.
 
Yes, I used the riser card for the 10GbE cards (Intel X540-T1) and had no issues getting them to work with vSphere. Considering the budget I was given for this project, I'm extremely happy with the servers. At a rough guesstimate, I've gotten 3x the processing power I expected to be able to purchase compared to the state contracts with Dell (though getting the order shepherded through Purchasing was anything but fun). We got all quad-cores, as I expect performance on the cluster will be limited far more by RAM availability than by CPU count, and so went with faster cores instead of more cores. Depending on your workload, you might have other expectations.
 
There is also no onboard USB connector to put an OS on. I used a small SSD to load ESXi and a few important VMs and passed the card through. Tight fit, but it works.

I used a small, almost flush-fit 8GB USB drive for the OS (ESXi) in the back USB ports of the C1100s. I have been using them for about 8 months without any issues.

Internally I use an M1015.
 
Yes, I will def make sure they have trays. By onboard USB, do you mean something like a 9-pin header to add a flash memory reader to load ESXi from?

No Type-A port on board (to plug a flash drive into). My Supermicros and other HP machines have them; super convenient.
 
Yes, I used the riser card for the 10GbE cards (Intel X540-T1) and had no issues getting them to work with vSphere. Considering the budget I was given for this project, I'm extremely happy with the servers. At a rough guesstimate, I've gotten 3x the processing power I expected to be able to purchase compared to the state contracts with Dell (though getting the order shepherded through Purchasing was anything but fun). We got all quad-cores, as I expect performance on the cluster will be limited far more by RAM availability than by CPU count, and so went with faster cores instead of more cores. Depending on your workload, you might have other expectations.
I am looking into the C1100s mainly as backup/rollover hosts for a current production environment that is set up on two boxes, both Supermicro X8DT6 with 2x X5650s and 48GB RAM. They each run 6-7 VMs on Server 2012 Hyper-V. The CPUs are under 10% usage 99% of the day, but both are at around 80% of max RAM due to the number of VMs. I'd like to set up probably two of the C1100s and see how they handle the less I/O-intensive VMs, and even do some test recoveries from backups, etc. These refurb C1100s are a fraction of the price of the original X8DT6s, which is why I'm looking into them; my budget is super tight. (I wasn't the one that purchased them/set them up originally.)

I used a small, almost flush-fit 8GB USB drive for the OS (ESXi) in the back USB ports of the C1100s. I have been using them for about 8 months without any issues.

Internally I use an M1015.
That's great to hear they have been working well off of a USB drive for that long. I will probably look into a similar-quality RAID card, but wouldn't necessarily need the driver compatibility of an M1015 since I'm using Windows rather than ESXi.

No Type-A port on board (to plug a flash drive into). My Supermicros and other HP machines have them; super convenient.
Ahh yes, I forgot about those. Very handy indeed; also great for security HASPs that you don't want out in the open, etc.
 
Could someone please confirm whether it is possible to use 4TB SATA hard drives plugged into the onboard Dell C1100 ICH10R controller? If not, what is the best option for an additional SATA RAID controller available on eBay (i.e., the best from a performance/cost point of view)?

I'm trying to get the C1100 ready to use with ESXi 5.5 (or 5.1 if 5.5 isn't possible).

Thanks!
 
Could someone please confirm whether it is possible to use 4TB SATA hard drives plugged into the onboard Dell C1100 ICH10R controller? If not, what is the best option for an additional SATA RAID controller available on eBay (i.e., the best from a performance/cost point of view)?

I'm trying to get the C1100 ready to use with ESXi 5.5 (or 5.1 if 5.5 isn't possible).

Thanks!

I cannot confirm 4TB, but I know 3TB drives work without issue. If you do need another controller, the M1015 SAS/SATA controller works well.
 
I cannot confirm 4TB, but I know 3TB drives work without issue. If you do need another controller, the M1015 SAS/SATA controller works well.

TType85, thank you for your answer. I will also test with 4TB tomorrow.
BTW, I installed ESXi 5.5 today using an external DVD drive. It starts fine, but can't get DHCP addresses. Could you please let me know what it could be?
 
The 160s are great as virtualization hosts, but shit for storage. The best way to utilize them is with a fibre/10GbE expansion card to your shared storage (or just local SSDs).
 
TType85, thank you for your answer. I will also test with 4TB tomorrow.
BTW, I installed ESXi 5.5 today using an external DVD drive. It starts fine, but can't get DHCP addresses. Could you please let me know what it could be?

Answering myself: I just plugged it directly into the switch and everything is working fine. Hm... I will check on that additional hub I tried to use.
 
I'm curious about support for drives larger than 4TB on the HP DL160 G6 as well.

We just picked one up on Amazon for $550 that has 70-something gigs of RAM and 16 cores (2 Xeon processors). It doesn't have the hot-swap drive bays. I think it does have the RAID 0/1 controller. We were testing it out today with Ubuntu 14, and it works great.

What sort of issues/drawbacks are we going to have with this machine? We are planning on using it as a web server and maybe a file server. So far I see nothing but good stuff on this guy.

Is it better to use the hardware RAID, or do software RAID, or what?
 
I'm curious about support for drives larger than 4TB on the HP DL160 G6 as well.

We just picked one up on Amazon for $550 that has 70-something gigs of RAM and 16 cores (2 Xeon processors). It doesn't have the hot-swap drive bays. I think it does have the RAID 0/1 controller. We were testing it out today with Ubuntu 14, and it works great.

What sort of issues/drawbacks are we going to have with this machine? We are planning on using it as a web server and maybe a file server. So far I see nothing but good stuff on this guy.

Is it better to use the hardware RAID, or do software RAID, or what?

My DL160 G6 ESXi servers have 3TB drives in them and recognize them fine. It should be OK for 4TB+ drives.

One that is used in production in our datacenter is at 194 days of uptime, and it would be longer, but they replaced some PDUs, and since it doesn't have redundant power supplies I had to shut it down for a few minutes. Very solid servers.
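If you do end up going software RAID under Ubuntu instead of the B110i, mdadm over GPT-labelled disks is the usual route. A rough sketch of the setup (the /dev/sdb through /dev/sde device names are assumptions, and this wipes those disks):

Code:
import subprocess

DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # adjust for your box

for disk in DISKS:
    # Drives over 2TB need a GPT label; MBR tops out around 2TiB
    subprocess.check_call(["parted", "-s", disk, "mklabel", "gpt"])
    subprocess.check_call(["parted", "-s", disk, "mkpart", "primary", "0%", "100%"])

members = [d + "1" for d in DISKS]  # first partition on each disk
subprocess.check_call(["mdadm", "--create", "/dev/md0", "--level=5",
                       "--raid-devices=4", "--run"] + members)

subprocess.check_call(["mkfs.ext4", "/dev/md0"])
# Persist the array definition so it assembles at boot
subprocess.check_call("mdadm --detail --scan >> /etc/mdadm/mdadm.conf", shell=True)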
 