SAN Server Build – RAID Configuration and Hard Drive Advice

pglover19

Several months ago, I purchased a data-center pull from eBay to build a SAN server.
http://www.ebay.com/itm/131274922976?_trksid=p2059210.m2749.l2649&ssPageName=STRK:MEBIDX:IT
The server consists of the following components:
SuperMicro 3U 16-bay CSE-836TQ-R800B chassis with dual 800W power supplies
SuperMicro X7DBE Motherboard
2x Intel Xeon L5420
16GB RAM
SuperMicro IPMI 2.0 card for remote access

The fans and power supplies in this server were extremely loud. Over the last month, I have quieted it down by replacing the 5 fans with quieter SuperMicro fans and swapping the dual 800W power supplies for dual 500W SuperMicro units. The server is now much quieter, though not silent.
I will be using the SAN server to provide datastores for my VMware virtual machines, to store my large collection of movie files for XBMC streaming, and to back up my 7 personal home PCs. I have a separate NAS server to back up the SAN server.
I have added the following components to the SAN Server so far:
Mellanox MHGH28-XTC Infiniband Dual 4X 20Gbps PCIe 2.0 Card
120GB Intel 530 Series SSD Drive with Windows Server 2008 R2 installed
120GB Intel 530 Series to be used as a Cache Drive
2x LSI 9260CV-8i with CacheCade Pro 2.0 hardware key

Now I need to populate 16 SATA bays. I currently have the following hard drives but am not sure which RAID configuration to use. Please advise whether I need to purchase new hard drives (same size and model) and what configuration I should use.
1x 3TB WD Caviar Green, 64MB cache, SATA 3.0Gb/s, WD30EZRS, 5400 RPM
1x 2TB WD Caviar Black, 64MB cache, SATA 3.0Gb/s, WD2001FASS, 7200 RPM
3x 2TB WD Caviar Green, 64MB cache, SATA 3.0Gb/s, WD20EARS, 5400 RPM
1x 2TB WD Caviar Green, 64MB cache, SATA 6.0Gb/s, WD20EARX, 5400 RPM
4x 1TB Seagate Barracuda, 64MB cache, SATA 6.0Gb/s, ST1000DM003, 7200 RPM
1x 250GB Seagate Barracuda, 8MB cache, SATA 3.0Gb/s, ST3250318AS, 7200 RPM
1x 1TB WD Caviar Green, 32MB cache, SATA 3.0Gb/s, WD10EADS, 5400 RPM
2x 2TB WD Caviar Green, 64MB cache, SATA 3.0Gb/s, WD20EURS, 5400 RPM
 
With different-sized drives, you could look into something like FlexRaid.
 
So FlexRaid is software-based RAID versus hardware RAID. I have already purchased the LSI RAID cards... Should I just buy new hard drives? If yes, what size and model?
 
So FlexRaid is software-based RAID versus hardware RAID?

No.
Software and hardware RAID like RAID 5/6 or ZFS Z1/Z2/Z3 is realtime striping over all disks, with one to three disks for redundancy. In such a setup, sequential performance scales with the number of data disks, while I/O performance (important for a datastore) scales with the number of RAIDs, so a RAID 60 or a Z2 pool with two vdevs has double the I/O performance of a single disk. (A RAID-Z pool with five vdevs has 5x the I/O performance of a single disk.)
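As an illustration of the vdev scaling point (a minimal sketch, not from this thread; the pool name and device names are placeholders):

    # a pool striped over two 6-disk RAID-Z2 vdevs; ZFS spreads writes
    # across both vdevs, so I/O roughly doubles versus a single vdev
    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

    # confirm the layout and health
    zpool status tank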

FlexRaid, SnapRAID, etc. are more of a backup solution. They take a snapshot on demand and create parity disks. On a disk failure, you can rebuild the failed disk to this snapshot state (not realtime). Sequential and I/O performance is always that of a single disk. This is a good solution, but only for static media files.
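To make the snapshot-style model concrete, here is a minimal SnapRAID sketch (paths and disk names are hypothetical; FlexRaid's configuration differs, but the idea is the same):

    # write a minimal /etc/snapraid.conf: any mix of disk sizes works,
    # as long as the parity disk is at least as large as the biggest data disk
    cat > /etc/snapraid.conf <<'EOF'
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/
    data d3 /mnt/disk3/
    EOF

    snapraid sync    # update parity on demand (the "snapshot"; not realtime)
    snapraid check   # verify data and parity; 'snapraid fix' rebuilds a failed disk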

One option may be to mix realtime RAID with snapshot RAID for the media files.

Regarding ESXi datastores, where you need I/O performance, snapshots, and data security, you should consider a ZFS solution. This is software RAID with "use all free RAM for caching", which gives better performance than your hardware RAID cards. As a plus, it avoids the write-hole problem of hardware RAID thanks to its modern copy-on-write filesystem. The best controller for such a solution is SATA in AHCI mode or an LSI HBA card without RAID, like an LSI 9207 or an IBM M1015 flashed with LSI 9211-IT firmware. I prefer Solaris/OmniOS SAN appliances for ZFS, but you can also use solutions based on BSD or Linux.

Regarding your disks, I would build a realtime RAID from your 2/3TB disks at 2TB per disk and trash the rest (or use them for backups in a second pool). A high-performance ESXi datastore is an SSD datastore with about 10% overprovisioning.
 
So are you telling me that I purchased the wrong LSI RAID cards? As for the ESXi datastores, I was planning to create iSCSI targets using the StarWind iSCSI SAN Free Edition software.

Please continue to provide comments...
 
Does it matter that some of my drives are 6.0Gb/s and some are 3.0Gb/s when building a SAN? If you were to buy new hard drives for a SAN, what model and size would you buy? Obviously, I would like to use my existing drives; however, I am willing to purchase new drives if necessary.
 
Windows + NTFS + a hardware RAID controller with cache and BBU is one way.

The other (more sophisticated) option is:

a commercial Oracle Solaris or free OmniOS-based storage appliance (managed via web browser),
without a hardware RAID controller (use SATA or a simple/cheap RAID-less LSI HBA),
where RAID is managed by ZFS, like my napp-it: http://www.napp-it.org/doc/downloads/napp-it.pdf

This offers much better data security with checksums, copy-on-write, snapshots, etc.,
and higher performance when using all free RAM as read cache.

As an alternative to Windows + StarWind, you can use Solaris Comstar. This is an
enterprise-ready iSCSI/FC software stack embedded in Solaris/OmniOS. (Google Comstar.)
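For a flavor of what that looks like (a minimal sketch on OmniOS/Solaris; the pool, volume name, and size are placeholders), exposing a ZFS zvol as an iSCSI LUN goes roughly like this:

    # enable the COMSTAR framework and the iSCSI target service
    svcadm enable stmf
    svcadm enable -r svc:/network/iscsi/target:default

    # create a 200G zvol to back the ESXi datastore
    zfs create -V 200G tank/esxi-ds1

    # register the zvol as a SCSI logical unit and expose it to initiators
    stmfadm create-lu /dev/zvol/rdsk/tank/esxi-ds1
    stmfadm add-view 600144f0...    # the LU GUID printed by create-lu

    # create an iSCSI target for the ESXi initiator to log in to
    itadm create-target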

Regarding disks, I would use 7200 RPM disks from Hitachi/Toshiba/HGST
(see https://www.backblaze.com/blog/hard-drive-reliability-update-september-2014/),
either 2TB ones to combine with yours, or 4TB ones.

You can mostly combine 3Gb/s and 6Gb/s disks without problems.
(In a RAID where data is striped over all disks, the slowest disk limits performance.)
 
Solaris + Comstar + ZFS is a good option.

However, Solaris (I mean the builds not currently maintained by Oracle) is starting to show its age, so FreeBSD + ZFS or Linux + btrfs may be a better option if Unix is your friend :)

I wanted to stay with Windows Server 2008 and use StarWind to create iSCSI targets for ESXi. My SAN server would be connected to the VM host server using InfiniBand. However, right now I am getting nowhere near a 10Gb connection.
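A quick way to see whether the IPoIB link itself is the limit (a minimal sketch; the host name and options are placeholders) is a raw TCP test with iperf before blaming the storage stack:

    iperf -s                          # on the SAN server
    iperf -c san-server -P 4 -t 30    # on the client: 4 parallel streams, 30 seconds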

Please keep the comments flowing, as I am open to suggestions...
 
I just found another 2TB Caviar Black WD2001FASS drive I had lying around. Now I have over 15TB of storage across drives from different manufacturers and with different speeds. What is the best configuration to make these drives usable, or should I buy all new drives? Please advise...
 
To make one thing clear: this is not a SAN.

Quoting Wikipedia:

"A SAN typically has its own network of storage devices that are generally not accessible through the local area network (LAN) by other devices."

I'd use the slow (5400 RPM) drives for bulk storage and the Black/7200 RPM drives for VMware (iSCSI/NFS + SSD cache).

Be aware that the Green drives are not suitable for RAID; consider replacing them with WD Red drives. I'd also be wary of putting older kit like this into production without a burn-in period.
 
For all your "slow storage" usages, FlexRaid or similar will be fine and can use drives of various sizes, which is an advantage considering what you've got.

Then you can buy a few hard drives, or better yet SSDs, for your VMs.
 
So can FlexRaid be used in conjunction with my LSI RAID card, or is it considered software RAID? Please explain...

When you talk about "slow storage" usages, are you referring to my movie files? So I could use FlexRaid to create a large storage pool for my movie collection, then create shared folders on that pool for my media streaming devices like my HTPC, Fire TV, Chromebox, etc. I only have 6 SATA ports on my motherboard. I assume that for FlexRaid the hard drives should not be connected to the LSI 9260-8i RAID card.

Then I would buy a few hard drives (or better, SSDs) and use my LSI RAID cards to create datastores for my VMs... Please let me know if I interpreted your comments correctly...
 
The comments about grouping like drives for specific purposes make the most sense.
 
OK. Based on all the feedback, here is what I have done and what I plan to do over the next month.

I have created an 8TB RAID 10 volume and a 2TB RAID 5 volume using the LSI 9260-8i controller card. These volumes will be used for VM datastores. I plan to use the StarWind iSCSI SAN Free Edition software to create iSCSI targets for VMware ESXi. For the 8TB RAID 10 volume, I am using four 4TB HGST 7200 RPM NAS drives. For the 2TB RAID 5 volume, I am using three 1TB Seagate 7200 RPM drives. For the cache drive, I am using a 240GB Intel 520 SSD.
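For reference, such volumes can also be created from the command line with LSI's MegaCLI instead of WebBIOS (a minimal sketch; the enclosure:slot IDs and adapter index are placeholders for whatever MegaCli64 -PDList reports on your system):

    # RAID 10 as two mirrored spans of two drives each
    MegaCli64 -CfgSpanAdd -r10 -Array0[252:0,252:1] -Array1[252:2,252:3] WB RA -a0

    # RAID 5 across three drives, write-back cache and read-ahead enabled
    MegaCli64 -CfgLdAdd -r5 [252:4,252:5,252:6] WB RA -a0

    # list the resulting logical drives
    MegaCli64 -LDInfo -Lall -a0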

For my media file storage, I plan to use FlexRaid because I have a lot of different-sized drives. I don't want to use the onboard SATA connections, so I am purchasing an IBM M1015 and crossflashing it to IT mode. This will give me 8 SATA ports for the 8 drives (12TB total) that will hold my media files.
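For reference, the IT-mode crossflash itself usually ends with LSI's sas2flash tool (a rough sketch, assuming the M1015's original firmware has already been cleared per one of the common crossflash guides; the file names come from the downloaded LSI 9211-8i firmware package):

    sas2flash -listall                            # confirm the controller is visible
    sas2flash -o -f 2118it.bin -b mptsas2.rom     # flash IT firmware (boot BIOS optional)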
For my boot drive, I am using a 120GB Intel 530 SSD connected to an onboard SATA port; this is the only onboard SATA port I am using on the motherboard.

I welcome feedback on my approach.
 