Multiple servers booting from SAS

cheche (n00b, joined Jan 13, 2011, 2 messages)
Hi

I am currently thinking about a new server room, and after some reading about SAS I have a simple question. My idea is to connect a few servers to a shared SAS disk array: the servers have no local drives, only SAS HBAs, and boot from logical drives on the array. I want to use RAID 5 for data security.

So my simple question: is this possible?
Fibre Channel is not an option.

Please be gentle.
 

You can boot remotely without local disks if your network adapter (IP or FC based) supports remote booting, e.g. Intel server NICs booting from an iSCSI SAN (see the small sketch at the end of this post). But why do you want to do this?

Look at my computing history:
First we had a lot of servers, each with its own local storage.
Then we moved to a lot of servers with shared storage (your approach),
but we always kept local boot drives because it is much simpler.
Nowadays we usually do not have a lot of servers at all. We virtualize them on at least two redundant VM servers to keep them hardware-independent and to use the hardware more efficiently, with shared SAN storage.

I suggest looking at virtualization and SAN. There are a lot of options between no-cost all-in-one setups (like my napp-it ZFS all-in-one concept) and a full-featured professional VMware ESXi installation with NetApp SAN storage.
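
Regarding the iSCSI boot point above, here is a minimal sketch of how you can check it afterwards. My assumption (not something you mentioned): a Linux server whose NIC firmware fills in an iSCSI Boot Firmware Table, which the kernel exposes under /sys/firmware/ibft. It just prints which SAN LUN the diskless box actually booted from:

#!/usr/bin/env python3
# Minimal sketch: read the iSCSI Boot Firmware Table (iBFT) that an
# iSCSI-boot-capable NIC leaves behind, to confirm which SAN LUN this
# diskless server booted from. Assumes Linux exposes it under
# /sys/firmware/ibft (iscsi_ibft support in the kernel).

from pathlib import Path

IBFT = Path("/sys/firmware/ibft")

def read(p: Path) -> str:
    # Return the file contents, or "n/a" if the firmware did not fill it in.
    try:
        return p.read_text().strip()
    except OSError:
        return "n/a"

if not IBFT.exists():
    print("No iBFT found - this machine did not boot from an iSCSI firmware table.")
else:
    print("Initiator:", read(IBFT / "initiator" / "initiator-name"))
    for tgt in sorted(IBFT.glob("target*")):
        print(tgt.name + ":", read(tgt / "target-name"),
              "at", read(tgt / "ip-addr") + ":" + read(tgt / "port"),
              "LUN", read(tgt / "lun"))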

Gea
 
Simple answer: yes, it's possible.

You've got lots of options ranging from ghetto to Tier 1 enterprise. The main limitation, assuming your basic needs, is going to be cable lengths - you can have up to 10m runs between your HBAs/RAID cards and your array.

If you want a supported solution, at the high end you have LSI (OEMed by IBM and Dell) SAS arrays (also available with iSCSI and FC) - the Dell version is the MD3200 (12x 3.5") / MD3220 (24x 2.5"). This can connect up to 8 HBAs (or 4 if you want redundant paths), and you can divide up the drives between your servers. You use dumb HBAs - the RAID controllers are actually in the array. The controllers are based on the LSI SAS2108 ROC, just like the high-end LSI RAID cards and the Areca 1880 series. This is not cheap, but it's probably the fastest way to connect an external drive array to a server - a single 4-lane SAS 2.0 cable can do 24 Gbit/s (4 x 6 Gbit/s), better than 10G iSCSI or 8G FC. The drawback is that you will be limited to the drives Dell/IBM supply, and have to pay their drive tax.

If you don't have Dell/IBM servers and want to be supported, look at Overland or see if you can get the LSI units non-OEM.

If you want to use RAID cards instead of HBAs, and use a JBOD enclosure instead of an array, you can build your own JBODs using SAS expanders and connect the SAS expanders to the RAID cards.

There is also a SAS switch from LSI which can connect multiple servers to multiple JBODs/arrays, but I'm not sure if you can use it to carve up a JBOD between multiple servers.

If you are only talking about a small number of drives, you may find it cheaper to attach a JBOD to one server and then share it out to the other servers by running an iSCSI target on that server. If you use 1G Ethernet, the cabling will certainly be cheaper, you can do longer runs, and you can do multiple runs for servers that need more bandwidth. 10G Ethernet is obviously preferable, but more expensive, and unnecessary if you don't need >100MB/sec transfer rates. This solution can be very cheap, but if you want full redundancy and high availability it can get very expensive.
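
If you go down that route, here's a rough sketch of what exporting one JBOD disk as an iSCSI LUN can look like on a Linux storage server, using the in-kernel LIO target and its rtslib-fb Python bindings (the same machinery targetcli drives). The device path, IQNs and portal address are placeholders for illustration, not anything from your setup; in practice you would probably export an md/LVM/ZFS volume built on the JBOD rather than a raw disk:

#!/usr/bin/env python3
# Rough sketch: export one JBOD disk as an iSCSI LUN using the Linux
# LIO target via the rtslib-fb Python bindings. Needs root and the
# target kernel modules loaded; /dev/sdb and the IQNs are placeholders.

from rtslib_fb import (BlockStorageObject, FabricModule, Target, TPG,
                       LUN, NetworkPortal, NodeACL, MappedLUN)

# Back the LUN with a whole disk from the JBOD (or an md/LVM volume).
disk = BlockStorageObject("jbod_disk1", dev="/dev/sdb")

# Create an iSCSI target with one target portal group.
iscsi = FabricModule("iscsi")
target = Target(iscsi, "iqn.2011-01.lan.storage:jbod-disk1")
tpg = TPG(target, 1)
tpg.enable = True

# Listen on all interfaces, default iSCSI port.
NetworkPortal(tpg, "0.0.0.0", 3260)

# Attach the disk as LUN 0 and allow one initiator (the diskless server).
lun = LUN(tpg, 0, disk)
acl = NodeACL(tpg, "iqn.2011-01.lan.server1:boot")
MappedLUN(acl, 0, lun)

print("Exported /dev/sdb as LUN 0 on", target.wwn)

The same thing can be done interactively with targetcli (and this sketch does not persist the config across reboots); the point is just that the "storage head" is an ordinary server, so carving up the JBOD for the other boxes is scriptable.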

Also, personally I'd avoid RAID-5. It may be the best value in terms of cost per GB, but I would go for RAID-1 or 10, or 6 in certain scenarios...
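
To put rough numbers on that trade-off, a quick back-of-the-envelope sketch (the 8 x 2TB figures are just an example, not your actual drive count):

#!/usr/bin/env python3
# Back-of-the-envelope comparison of usable capacity vs fault tolerance
# for a small array. Drive count and size are example figures only.

drives = 8          # number of drives in the array
size_tb = 2.0       # capacity of each drive in TB

layouts = {
    # name: (usable capacity in drives, which drive failures it survives)
    "RAID 5":  (drives - 1,  "any 1"),
    "RAID 6":  (drives - 2,  "any 2"),
    "RAID 10": (drives // 2, "1 per mirror pair"),
}

for name, (usable, survives) in layouts.items():
    print(f"{name:8} usable {usable * size_tb:5.1f} TB "
          f"({usable}/{drives} drives), survives {survives} drive failure(s)")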

hope this helps,

Aitor
 
Thank you, the solution with an Areca controller or similar is exactly what I need.
So thank you again.
Cheers.
 