This seems like an awesome deal for a VM server, am I missing something?

I have seen 2950 IIIs go for ~$300, so $270 for a II seems OK/normal. Pretty sure the II will not fit more than 32GB, and they suck quite a bit of power.
 
I'm not sure of the max RAM, but you will want to keep that in mind. Also, the listing states they are using 2GB sticks in 8 slots, so if you decide to upgrade past 16GB you will have to buy ALL new RAM for it (if you can even upgrade past 16GB).

A quick Google search says 64GB, but I'm not positive what the actual version etc. is in the auction.
 
This is probably a better deal overall: http://www.ebay.com/itm/Dell-Powere...-4xNODES-NO-HDD-96GB-Ram-Tested-/251266749895

While it's almost 3x the price, you get 4 independent motherboards with 2x L5520 CPUs and 24GB ECC on each node (total 8x quad-core Xeons, 96GB RAM). Still a bit old, but newer CPU technology than the E5345, more memory, more upgradeability, multiple nodes for more interesting virtualization experiments, etc. All in a cute little 2U chassis.
 
This is probably a better deal overall: http://www.ebay.com/itm/Dell-Powere...-4xNODES-NO-HDD-96GB-Ram-Tested-/251266749895

While it's almost 3x the price, you get 4 independent motherboards with 2x L5520 CPUs and 24GB ECC on each node (total 8x quad-core Xeons, 96GB RAM). Still a bit old, but newer CPU technology than the E5345, more memory, more upgradeability, multiple nodes for more interesting virtualization experiments, etc. All in a cute little 2U chassis.

Adding on here since I think PL has one, and I'm running 3x in colo + 2x in the home lab. These are two Intel Xeon generations newer, with a big architectural advantage. Feel free to see the big thread on these: http://forums.servethehome.com/proc...00-xs23-ty3-2u-4-node-8-cpu-cloud-server.html

It has part numbers for spares, etc. The last few posts there have sellers offering these under $700. Great way to build an inexpensive cluster.
 
Also, the 2950s were pretty loud, used quite a bit of power, and worked pretty well as a space heater ;) At least that was my impression from the couple I brought home from work thinking to use them as hosts. Not sure if they were the V1 or V2 models though. I do remember that the server noise from the one I brought home put my stock Norco 4220 to shame (no way I could have run it 24/7 in my office), and I think I remember calculating the cost to run it 24/7 to be about $15-$20 a month in electricity at 10 cents/kWh (I think it ran about 300 watts).

From what I've read, the power usage on the C6100 running all 4 nodes isn't much more than the power draw on the single 2950 I was testing.

Personally, I've really been looking at the C6100 eBay offers. Just not sure how much noise they produce. Granted, I've already got a Norco 4220 running (replaced the stock "jet engine" fans, but I have the first model that doesn't support the new fan template for 80mm fans)...
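
For anyone wanting to sanity-check that ballpark, here's a quick back-of-the-envelope sketch. The ~300 W draw and 10 cents/kWh are just my rough numbers from above, not measurements:

```python
# Rough monthly electricity cost from average draw and rate.
# 300 W and $0.10/kWh are the ballpark figures quoted above, not measured values.
def monthly_cost(avg_watts: float, price_per_kwh: float = 0.10, hours: float = 24 * 30) -> float:
    kwh = (avg_watts / 1000) * hours
    return kwh * price_per_kwh

print(f"PE2950 @ ~300 W: ${monthly_cost(300):.2f}/month")  # ~$21.60, roughly in line with the $15-$20 guess
```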
 
The C6100 linked looks pretty good, just not quite clear how those multi-node systems work, do you end up with 1x ESXi install or 4x ESXi installs?
 
The C6100 linked looks pretty good, just not quite clear how those multi-node systems work, do you end up with 1x ESXi install or 4x ESXi installs?

They work like 4 separate servers that happen to share some common equipment (PSU, chassis, fans, etc). Look at a picture of the rear - you have four separate IO panels each with its own VGA, USB, etc.
 
Ah I get it now, thanks.

For me to use these I'd have to buy quad-port NICs and a secondary PSU, which will probably add quite a bit of cost to the system.

I am currently using 3x 2950 III with dual Xeon E5405s, 32 GB of RAM, and 6x physical NICs each. What I really need is more RAM; sadly, the current servers have 8x4GB, so it's a matter of buying 8x8GB, which at over $1k per server isn't an option.

Adding quad NICs to the C6100 will probably add around $800, then however much more for the 2nd PSU, so that's about $1k to end up with as much RAM as I currently have. I could recycle the quad NICs I have in the 2950s if I buy low-profile brackets for them; then it's only ~$500 to end up with as much RAM as I currently have.

Either way I am doomed, I basically need a couple grand to do this right.
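
Laying the two paths out with those rough numbers (all guesses on my part, not quotes):

```python
# Rough add-on tally for one 96GB C6100, using the ballpark figures above.
# These are guesses from this thread, not real prices.
base_chassis = 700  # going eBay price for a 4-node C6100 with 96GB
addons = {
    "buy new quad-port NICs (~$800) + 2nd PSU": 1000,           # "that's about $1k"
    "reuse the 2950s' quad NICs w/ LP brackets + 2nd PSU": 500,  # "only ~$500"
}
for option, extra in addons.items():
    print(f"{option}: ~${base_chassis + extra} all-in")
```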
 
If your current server is a C2D with 8GB of RAM, you might not need that much of a server. I use a laptop as my ESXi server; it can take 32GB of RAM. Never had a problem, and the power savings/noise reduction are invaluable for the home.
 
32GB of RAM in a laptop? How did you manage that? The nice thing with a laptop is that it essentially has a built-in UPS... seems like it could work well, especially in a properly ventilated server room like mine will be.

Those 4-node units look pretty cool too... Too bad they won't ship to Canada.
 
The Lenovo W520 and W530 can both do 32GB of RAM, and there are many more that have 4 DIMM slots. Very, very low power consumption, like 30W. It can also take 3 internal drives: an mSATA, the main bay, and one in the optical bay.

You need to take 5 minutes to slipstream the NIC drivers into the ISO, but that's about it. I run 8 VMs no problem. No HVAC or UPS worries. As someone who used to test our Supermicros at home before putting them into production, I hated it. I heard it all the way in the bedroom. And the whirrrr noise from datacenters, I can live without it.
 
2950 was a good server... I still have 2900s in production whirring away...

The C6100s are great too, but there's no VMware-supported RAID onboard; you have to buy a ~$100/node daughter card for hardware RAID.
 
Not too worried about RAID, since I'll be using my SAN anyway. Might throw the OS on an SSD or even a USB stick depending on the OS I go with. The laptop idea almost seems fun too... but I think I'll stick with a proper rackmount system.

I'll wait till my SAN is built and paid off, then I'll start looking at my server options. Might even just give in and build something new. :p Though at this point I think the 2950 makes the most financial sense for me. I just need the extra RAM more than anything at this point. 32GB will be fine for my needs for now. If I build a new server in the future then I can go even higher. But those C6100s are also very tempting considering it's 4 nodes. Hmmm. Decisions, decisions.
 
I have a pair of C6100s in my colo and they are great
 
I mulled the C6100 over a bit and I think I may just go ahead and buy a couple of them. The lack of RAID cards is not an issue; I'll just run Win2012 as a VM, pass the disks through to it, and then use the disks as iSCSI targets served by Win2012.

Any flaw with that idea?

Actually, one flaw is that they don't come with rails. ReadyRails will add another ~$100 to the setup (in addition to quad NICs and a second PSU).
 
Some of the auctions do (I have a search in the STH forum post). For the rails I've bought, I've paid $50/set. There's also info on how people are installing LSI 9202-16e cards to give a node 16x external SAS ports for fast external storage, then using it to serve iSCSI, CIFS, etc.

BTW, PSUs I've been buying for around $50 each.
 
Look into the C1100s as well. They are badass machines!
 
OK, OK... I don't know... I've been on the fence about these C6100s, and now I need additional capacity so I'm gonna go for it... but my rack is loud enough... that's the next project... quiet the rack... my wife is gonna kill me... lol.
 
I agree with the minuses. RAM is ridiculous. I tried using cheaper RAM in ours, but couldn't find any that would work. It's a dead-end path, which is why they are cheap.
 
I agree with the minuses. RAM is ridiculous. I tried using cheaper RAM in ours, but couldn't find any that would work. It's a dead-end path, which is why they are cheap.

Sort of... so one server with 96 GB is $700. Another 96 GB would run you like $1k or some such. OR, you could buy two servers, pull out the RAM from one of them and put it into the other. So for $1,400 you end up with a 192 GB, 8-socket (8x quad-core Xeon) server that has dual PSUs (the 2nd pulled from the second server). Then you have 4 spare nodes; maybe sell them off parted out or keep them as spares.

Pretty decent deal when you go that route.
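
Spelled out, using the $700 / 96 GB listing figures from this thread:

```python
# Back-of-the-envelope for the "buy two C6100s, consolidate the RAM" idea above.
# $700 per chassis and 96GB per chassis are the listing figures from this thread.
chassis_price = 700
ram_per_chassis_gb = 96

total_cost = 2 * chassis_price              # $1,400
combined_ram_gb = 2 * ram_per_chassis_gb    # 192 GB in a single 4-node chassis
print(f"${total_cost} buys {combined_ram_gb} GB across 4 nodes, plus 4 bare nodes as spares")
```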
 
OR, you could buy two servers, pull out the RAM from one of them and put it into the other.

Most of the C6100s on eBay that have 96GB of RAM have 12 x 2GB sticks in each sled. So all the slots are already full with small sticks.
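
To put numbers on that, assuming the typical 12 DIMM slots per sled:

```python
# Why "move the RAM over" usually doesn't work on the 96GB listings:
# 12 DIMM slots per sled x 4 sleds, every slot already holding a 2GB stick.
slots_per_sled, sleds, stick_gb = 12, 4, 2

total_slots = slots_per_sled * sleds        # 48 slots per chassis
installed_gb = total_slots * stick_gb       # 96 GB -> matches the listings
populated_slots = total_slots               # listings show every slot filled
free_slots = total_slots - populated_slots  # 0 -- nowhere to put a 2nd chassis's sticks
print(total_slots, installed_gb, free_slots)
```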
 
Most of the C6100s on eBay that have 96GB of RAM have 12 x 2GB sticks in each sled. So all the slots are already full with small sticks.

Yup, but not this guy who sells a truckload of them, which is why it's a pretty good deal. Heck, if you are the gambling kind you could just buy the servers and part them out for profit.
 
OK, OK... I don't know... I've been on the fence about these C6100s, and now I need additional capacity so I'm gonna go for it... but my rack is loud enough... that's the next project... quiet the rack... my wife is gonna kill me... lol.

I have been drooling over the C6100 myself, but I am passing for now due to:
- the noise
- additional power usage
- the fact that you can't power on/off the individual nodes separately
- PSU not redundant even when you have dual PSUs

It is an awesome product though for those of us wanting to consolidate all our boxes into one.
 
I have been drooling over the C6100 myself, but I am passing for now due to:
- the noise
- additional power usage
- the fact that you can't power on/off the individual nodes separately
- PSU not redundant even when you have dual PSUs

It is an awesome product though for those of us wanting to consolidate all our boxes into one.

Agree on points 1 (noise) and 2 (power).

Noise can be managed with a fan mod, but out of the box the noise is bad.

You can power each node separately, but you can't do it through a wall socket or managed PDU. You certainly can do it via IPMI.

I don't understand your last point at all. With dual PSUs you absolutely do have a redundant PSU configuration. The PSUs are separate, with dual power-distribution control boards internal to the chassis, even down to using 6-wire fans so they are power redundant. You can safely do an active pull of one PSU on a live system, replace it, pull the other one and replace it, all without any impact to the platform's operation.
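
On the per-node power point, this is the kind of thing I mean - each node's BMC answers IPMI on its own address, so something like the sketch below works (the IPs and credentials here are made-up placeholders, not C6100 defaults):

```python
# Minimal sketch: control individual C6100 nodes through their BMCs with ipmitool.
# Node BMC IPs and credentials are made-up placeholders for illustration only.
import subprocess

NODE_BMCS = {1: "192.168.1.121", 2: "192.168.1.122", 3: "192.168.1.123", 4: "192.168.1.124"}

def node_power(node: int, action: str = "status", user: str = "root", password: str = "changeme") -> str:
    """Run 'ipmitool chassis power <action>' (status/on/off/soft) against one node's BMC."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", NODE_BMCS[node],
           "-U", user, "-P", password, "chassis", "power", action]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Example: gracefully shut down node 3 to service it while the other three keep running.
print(node_power(3, "soft"))
```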
 
Agree on points 1 (noise) and 2 (power).

Noise can be managed with a fan mod, but out of the box the noise is bad.

You can power each node separately, but you can't do it through a wall socket or managed PDU. You certainly can do it via IPMI.

I don't understand your last point at all. With dual PSUs you absolutely do have a redundant PSU configuration. The PSUs are separate, with dual power-distribution control boards internal to the chassis, even down to using 6-wire fans so they are power redundant. You can safely do an active pull of one PSU on a live system, replace it, pull the other one and replace it, all without any impact to the platform's operation.

1. On the noise front, I am not opposed to modding. However, reading the thread on the other forum, it seems that even then we are still looking at 50dB+ and a system that still needs to be in a garage.

2. On the powering side, cool that individual nodes can be powered on/off through IPMI.
We are not just talking about rebooting nodes, right? But actually being able to power down a node to service it. If so, that's cool.

3. On the PSU side, from my reading, in a dual-PSU config each PSU feeds two nodes.
So, if I am wrong, great.
 
Noise can be managed with a fan mod
Got to be careful with that though. I put an NMT-3 fan mod into one of the Supermicro boxes I had, and since it only senses ambient air temp it didn't speed up the fans when one of the RAID cards started overheating.

The Adaptec cards are passively cooled and require 200 (CFM or LFM) of airflow to stay cool, but the heat the card dissipated into the chassis didn't raise the ambient temperature enough to trigger enough airflow.

Fail on my part for putting a fan mod into that particular setup though.
 
PigLover said:
Noise can be managed with a fan mod
Got to be careful with that though. I put an NMT-3 fan mod into one of the Supermicro boxes I had, and since it only senses ambient air temp it didn't speed up the fans when one of the RAID cards started overheating.

The Adaptec cards are passively cooled and require 200 (CFM or LFM) of airflow to stay cool, but the heat the card dissipated into the chassis didn't raise the ambient temperature enough to trigger enough airflow.

Fail on my part for putting a fan mod into that particular setup though.

Good news is that the C6100 uses PWM fans (not temp-sensing fans) and has an active fan control board that reads all of the sensors in the chassis and manages the fans (BMC data from each motherboard plus an independent pair of sensors in the drive bay). Makes the fan mod very doable. The only trouble is modding the cables to adapt a 4-pin fan to Dell's 6-pin design.

Also, fan alarm data from the BMCs is still available even with modified fans.
 
Still considering the C6100 purchase since he seems to still have plenty of them.

How does disk access work? In the eBay description it says that the server will take 4 disks right now (lack of a controller is the issue, I'm sure). Does that mean every node gets its own disk?

Also it's not really clear whether disk trays come with the server; I may ask the seller about that on eBay.
 
Post up the seller / auctions with the 96GB that can handle another 96GB... I'd be interested in seeing who this seller is :D
 
Still considering the C6100 purchase since he seems to still have plenty of them.

How does disk access work? In the eBay description it says that the server will take 4 disks right now (lack of a controller is the issue, I'm sure). Does that mean every node gets its own disk?

Also it's not really clear whether disk trays come with the server; I may ask the seller about that on eBay.

You could just boot them all via USB flash drive and forgo the disks. You're probably going to present disk via an SMB NAS/SAN anyway? As a consolidated set of VMware hosts, I don't see the need to use them with disk drives unless you're considering something like Hyper-V. It would draw less power too. Not trying to take away from the overall question though.
 
Still considering the C6100 purchase since he seems to still have plenty of them.

How does disk access work? In the eBay description it says that the server will take 4 disks right now (lack of a controller is the issue, I'm sure). Does that mean every node gets its own disk?

Also it's not really clear whether disk trays come with the server; I may ask the seller about that on eBay.

There are 12 drive bays; each vertical set of 3 goes to one node... so the first set of three on the left goes to node 1, the second set to node 2, and so on.

It probably comes with 4 trays, so 1 disk per node.

But you can use the blank trays for drives, you just don't get the little light pipe for the status LED.
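
If it helps to picture it, here's that bay-to-node layout as a quick sketch (numbering the bays 1-12 left to right is just my assumption for illustration):

```python
# Bay-to-node mapping described above: 12 bays, each vertical group of 3
# belongs to one node. Bay numbering (1-12, left to right) is my assumption.
def node_for_bay(bay: int) -> int:
    return (bay - 1) // 3 + 1

for bay in range(1, 13):
    print(f"bay {bay:2d} -> node {node_for_bay(bay)}")
# bays 1-3 -> node 1, 4-6 -> node 2, 7-9 -> node 3, 10-12 -> node 4
```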
 
Not to thread-rob, but is there a competitor equivalent to the Dell C6100 using AMD/Opteron CPUs?
I just purchased a built server (AMD) and would like to experiment with more nodes. I just came across this thread and liked the price for the C6100.
 
Not to thread-rob, but is there a competitor equivalent to the Dell C6100 using AMD/Opteron CPUs?
I just purchased a built server (AMD) and would like to experiment with more nodes. I just came across this thread and liked the price for the C6100.

Yes, but...

The Dell C6145 is a similar design but with two Opteron 6200 motherboard "sleds". Each "sled" has 4 sockets.

Here's the "but"... they are not common, and good deals are rare. There are a few listings active on eBay right now, but the prices are not good. Also, the chassis is a bit under-provisioned - you can't really load it up fully without overloading the PSUs and probably the cooling fans too. It's kind of an "ugly stepsister" design and doesn't look like Dell took it too seriously.

It probably isn't nearly as good a deal as the C6100 (unless you shop victoriously and find one cheap). The C6100s are flooding onto the market mainly because a very large and very well-known web site with multiple very large data centers has thousands of them (literally) being retired right now. No such flood of C6145s or similar AMD platforms seems imminent.
 