4+ Node Servers -- How do they share disks?

VirtNewb
Hey guys,

Not new to the forum, but new to posting. I've been seeing these 4+ node servers on eBay and UnixSurplus and was wondering: how do they share the hard disks? I e-mailed UnixSurplus and the sales guy said that the disks operate on whatever node they're plugged into, but that didn't seem right, since the nodes I've seen slide in and out cleanly with no SATA cables attached.

I searched the forum for a similar thread but couldn't find anything. Thanks for your help.

Here's an example. Please excuse the large screenshot, but I wanted it to stay available for other newbs to see what I'm talking about.

[Screenshot: 2U four-node chassis with twelve 3.5" front drive bays]
 
It's true; in your case it's three disks per node. The disks are not shared, they're dedicated to a node. For example, the disks in slots 1-3 are permanently assigned to node 1, slots 4-6 to node 2, and so on. The exact assignment will be in the hardware manual.
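
The mapping is just fixed arithmetic, something like this little sketch for a 12-bay, 4-node box (the bay numbering here is my assumption, so the manual is still the authority):

Code:
# Hypothetical slot-to-node mapping for a 2U, 4-node, 12-bay chassis
# (bay numbering is an assumption; check the manual for the real layout).
def slot_to_node(slot, disks_per_node=3):
    return (slot - 1) // disks_per_node + 1

for slot in range(1, 13):
    print(f"bay {slot:2d} -> node {slot_to_node(slot)}")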
 
Thanks. Yeah, after I posted this I wondered whether it was handled in the BIOS / onboard RAID controller settings. So with 12x 3.5" HDDs and 4 nodes, each node could be assigned 3 disks, which could technically do RAID 5, but the boards only support 0, 1, and 10... why would somebody want those RAID levels with only 3 disks? RAID 0 is unsafe, so let's not even talk about that. RAID 1 only needs 2 disks... what do they expect you to do with 3 disks per node and no RAID 5?
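
To spell out the capacity math I'm grumbling about, here's a quick back-of-envelope sketch (the 2 TB drive size is made up, just for illustration):

Code:
# Back-of-envelope usable capacity with 3 identical drives per node
# (the 2 TB size is a made-up example figure).
DISK_TB = 2
N = 3

raid0 = N * DISK_TB        # stripe everything: max space, zero redundancy
raid1 = 1 * DISK_TB        # 2-way mirror + hot spare: one drive's worth
raid5 = (N - 1) * DISK_TB  # stripe with parity: lose one drive's worth
# RAID 10 needs at least 4 drives, so it isn't even an option here.

print(f"RAID 0:          {raid0} TB usable, no failures tolerated")
print(f"RAID 1 + spare:  {raid1} TB usable, 1 failure tolerated")
print(f"RAID 5:          {raid5} TB usable, 1 failure tolerated")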
 
My guess is that local disks are only used for scratch/temp/intermediate storage so that the data doesn't have to make a network round-trip. The local disks aren't meant for any production storage.

Now that there's SSD caching, it would also be worth using the local disks for that.

If you want to run RAID 5 you can always soft-RAID it.
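
If it helps, the reason RAID 5 only costs you one disk's worth of space is plain XOR parity. Toy sketch of the idea (nothing like md's real on-disk layout, just the math):

Code:
import os

# Toy illustration of the RAID 5 idea with three "disks": two data blocks
# plus an XOR parity block.
a = os.urandom(16)                            # block on disk 1
b = os.urandom(16)                            # block on disk 2
parity = bytes(x ^ y for x, y in zip(a, b))   # parity block on disk 3

# If disk 2 dies, its block can be rebuilt from the two survivors.
rebuilt = bytes(x ^ p for x, p in zip(a, parity))
print("rebuilt matches lost block:", rebuilt == b)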
 
The manual confirms it's a fixed three drives per node. I guess they just took the number of bays in one of their standard 2U chassis and divided by the number of nodes.

According to Supermicro, the board supports RAID 5 in Windows but not in Linux. It looks like it's Intel fakeraid anyway, so if you want RAID 5 under Linux you really wouldn't be losing much by just using Linux software RAID.

It looks like the connections inside the nodes are standard SATA cables, though, so if you don't mind it being a bit ghetto you may be able to run a SATA cable out of one node and into another.
 
The disk setup is more interesting in the version with 24x 2.5" drives, six drives per node.

Also, some multi-node servers do allow disk assignment/sharing. For example, the newer Dell 6220 has a SAS switch that can assign any drive to any node.
 
In C6100s there is some flexibility. I believe there were different initial configurations, but the most common is 3 drives per node. Moving to 6 drives connected to 2 nodes is as easy as moving a SAS cable, iirc. Putting more than that on one node required an add-on controller, also iirc.
 
We took a look at the Dell VRTX. It can hold 4 blades and up to 25 disks, and it can assign the disks to any node, all nodes, or a combination of nodes. Pretty neat kit, but it can only do write-through with redundant RAID controllers; write-back requires a single point of failure... :(

Pricing is almost a wash versus getting two 1U servers and a shared SAS array, but the VRTX is pretty quiet and easy to set up.
 
Another thing to think about with shared-chassis servers like the C6100: when you share power supplies and fans across 4 servers, you gain in efficiency.
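
Rough illustration of that point with completely made-up numbers (assumed per-node draw and assumed PSU efficiencies, nothing measured):

Code:
# Every number here is assumed for illustration, not measured.
node_dc_watts = 150      # assumed DC draw per node
nodes = 4

eff_light = 0.80         # assumed PSU efficiency at light load (one PSU per 1U box)
eff_sweet = 0.92         # assumed efficiency nearer the sweet spot (shared PSU pair)

separate_boxes = nodes * node_dc_watts / eff_light
shared_chassis = nodes * node_dc_watts / eff_sweet
print(f"4 separate boxes: ~{separate_boxes:.0f} W at the wall")
print(f"shared chassis:   ~{shared_chassis:.0f} W at the wall")
print(f"saved:            ~{separate_boxes - shared_chassis:.0f} W")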
 
RAID 1 with a hot spare would be three disks. Depends how paranoid you are about disk failure, really.
 
Like everyone said, the C6100/C6220 dedicate drives to each node.
But you can reroute cables to your heart's content. Want all the other nodes to have nothing and one node to have all the disks? Go at it; it's a pain and you'll need an HBA, but it is doable.

Beyond that, there's GlusterFS, Windows Storage Spaces, and VSAN/EVO:RAIL.
 
I used to have a C6100 chassis like the one above. Man, she got noisy after the 3rd node was fired up... for a basement lab, no bueno! As long as you're in a server room with cooling, these are nice boxes!

Keep in mind that you get 1x PCIe slot and 1x mezzanine card bay per node. If you want to do something like vSAN with it, use the mezzanine for a 2x 10GbE Intel add-on (about $250 each on eBay) and use the PCIe slot for something like an LSI 2308-based controller in each node; then you have to do some wire futzing to hook it up to the storage up front on the sister board.

One other thing about the 3.5" bays, at least in the 6100 I had: you couldn't put 2.5" drives in without an adapter (a 2.5"-to-3.5" casing; I used an Icy Dock). There was also no internal USB for an OS drive, so you could potentially burn a drive bay on that.

I had one and sold it! Too much noise for my liking; I went to custom 4U designs to get the noise handled.

Mine was like $300 shipped with all 4 nodes from a guy here on [H]. It's the memory that really chews up the price!
 
dsaint, seems odd that yours did not have internal USB ports, as the 6100 I have does have internal ports.
 
Modderman, are you sure you're not thinking of the 6105s? I thought that was something they pulled back on with the 6100s.
 
I believe my model of C6100 (TY3) has traces on the boards for USB, but no socket preinstalled. Someone on STH forums soldered one on successfully.

There are also at least 2 models of the C6100, maybe more, fwiw: XS23-SB, XS23-TY3, and possibly a newer one Dell is/was selling.
 