EMC NS120 For ESX

calvinj

[H]ard|Gawd
Joined
Mar 2, 2009
Messages
1,738
I might be working with an NS120 here soon and wondering if anybody else has one and is using it with ESXi?

Everything is pretty straightforward right now: 3 hosts, redundant fiber switches/HBAs, NS120 with 79x 146GB SAS disks.

What I'm wondering is what would be an optimum RAID config for this SAN? I wouldn't think 79 disks in a RAID 6 would be a good idea. I would like to stay within RAID 5 or RAID 6.
 

danswartz

2[H]4U
Joined
Feb 25, 2011
Messages
3,715
I'm confused. Looking at the spec sheet for the NS120, it talks about 4-16 drives?
 

mct

[H]ard|Gawd
Joined
Jul 14, 2004
Messages
1,194
RAID 5 would probably be fine for you. We have an NS480 and we are setting all of our RAID groups with FC disks to RAID 5. We do have some SATA disks that are set up for RAID 6. I would always recommend doing some testing to make sure that RAID 5 will meet the IO requirements for your applications.
 

danswartz

2[H]4U
Joined
Feb 25, 2011
Messages
3,715
79 drives sounds like an awful lot for a RAID5 array - seems to me like the odds of a second drive failing during the rebuild (and vaporizing all your data) would be unacceptably high. Even RAID6 sounds scary.
 

NetJunkie

[H]F Junkie
Joined
Mar 16, 2001
Messages
9,682
Uh....yeah. You don't put all those drives in one RAID group. You split them up. Usually we do a lot of 4+1 RAID5 groups. RAID10 if needs demand it. I only do RAID6 on archival type storage...usually 1TB and 2TB SATA. Stick to 4+1 and 8+1 RAID5 Groups.
 

mct

[H]ard|Gawd
Joined
Jul 14, 2004
Messages
1,194
You wouldn't have all 79 disks in one Raid 5 array. You would create raid groups that would contain a specific number of drives depending on the performance that you need out of it. You would then carve out your LUNs from each of the raid groups. In this case those LUNs would be presented to the ESX hosts.
 

mct

[H]ard|Gawd
Joined
Jul 14, 2004
Messages
1,194
Uh....yeah. You don't put all those drives in one RAID group. You split them up. Usually we do a lot of 4+1 RAID5 groups. RAID10 if needs demand it. I only do RAID6 on archival type storage...usually 1TB and 2TB SATA. Stick to 4+1 and 8+1 RAID5 Groups.

Beat me to it. :p What NetJunkie said.
 

danswartz

2[H]4U
Joined
Feb 25, 2011
Messages
3,715
Heh, no kidding. I looked at the spec sheet again. I was looking at the part that explains the disk configuration for the different RAID levels; the max drives/expansion bays figure was elsewhere. Sorry :) What is typically done in these cases? Say you set up N 3+1 RAID5 arrays and export them to the ESX hosts. I seem to recall that you can only have 2TB-512B per extent, so with 3+1 or 4+1 RAID5, that would give you 3-4 LUN extents per datastore? Just set up a crapload of datastores, and spread the VMs across them?
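For a rough feel for the sizes involved, here's a quick back-of-envelope sketch (illustrative only - 146GB is the nominal drive size, and real formatted capacity on the array will be somewhat lower):

```python
# Rough capacity math for carving RAID5 groups out of 146GB drives.
# Nominal sizes only; formatted capacity on the array will be lower.

DRIVE_GB = 146

def raid5_usable_gb(total_disks: int) -> int:
    """RAID5 usable capacity: one disk's worth of parity across the group."""
    return (total_disks - 1) * DRIVE_GB

for group in (4, 5, 9):  # 3+1, 4+1, 8+1
    print(f"{group - 1}+1 group: {raid5_usable_gb(group)} GB usable")
```

With 146GB spindles, even an 8+1 group (about 1168 GB usable) fits comfortably under the ~2TB extent limit, so each RAID group can be carved into one or more LUNs without ever needing multi-extent datastores.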
 

NetJunkie

[H]F Junkie
Joined
Mar 16, 2001
Messages
9,682
And FYI, the reason we do RAID5 4+1 (5 disks) and 8+1 (9 disks) is due to the disk enclosure (DAE) layout. They are 15 drives and you do a hotspare for every 30 fibre channel disks. So the first DAE may be 4+1, 4+1, 4+1 for a total of 15. The next would be 4+1, 8+1, hotspare, for a total of 15. You can absolutely span RAID groups across DAEs no problem...just most people don't and 4+1 and 8+1 gives a mix of varying I/O types.
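Applying those rules of thumb (15-slot DAEs, one hotspare per 30 FC drives) to the 79 drives in question, a quick sketch of the arithmetic - the particular mix of 8+1 and 4+1 groups shown is just one illustrative way to pack the remaining drives, not a recommendation:

```python
import math

TOTAL_DRIVES = 79
DAE_SLOTS = 15     # drives per enclosure
SPARE_RATIO = 30   # one hotspare per 30 FC drives (rule of thumb above)

hotspares = math.ceil(TOTAL_DRIVES / SPARE_RATIO)   # 3 spares
usable_drives = TOTAL_DRIVES - hotspares            # 76 data/parity drives
daes = math.ceil(TOTAL_DRIVES / DAE_SLOTS)          # 6 enclosures

# One way to carve 76 drives into whole RAID groups:
# four 8+1 groups (36 drives) plus eight 4+1 groups (40 drives).
groups = {"8+1": 4, "4+1": 8}
assert 9 * groups["8+1"] + 5 * groups["4+1"] == usable_drives
```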

There is a good "best practice" guide on RAID Group deployment on PowerLink.
 

calvinj

[H]ard|Gawd
Joined
Mar 2, 2009
Messages
1,738
I knew that I wouldn't be doing all 79 disks in a RAID 5 but wanted to know what might be the best way of going about this. It sounds like a 4+1 or an 8+1 would be the route to go. Just trying to make it as simple as possible.

Yes, you can only present a ~2TB datastore to your ESXi boxes, but these will not be just for datastores. There will be a handful of RDMs to specific machines such as SQL servers, and a few other RDMs as required by a vendor.

So in my case you think raid 5 would be the better way to go, instead of a raid 6?
 

NetJunkie

[H]F Junkie
Joined
Mar 16, 2001
Messages
9,682
No real reason to do RAID6 on 4+1 and 8+1 146GB drives. Rebuild times won't be that bad and the EMC array will "proactively" fail a drive and copy the data to a hotspare at the first sign of trouble.
 

calvinj

[H]ard|Gawd
Joined
Mar 2, 2009
Messages
1,738
So let me get this right.. Raid 5 + hotspare... Would you do a hotspare per shelf or per 8 + 1 setup?
 

NetJunkie

[H]F Junkie
Joined
Mar 16, 2001
Messages
9,682
Per 30 FC drives. For SATA it's every 15. So put a hotspare in every other shelf.
 

lopoetve

Extremely [H]
Joined
Oct 11, 2001
Messages
33,317
I might be working with an NS120 here soon and wondering if anybody else has one and is using it with ESXi?

Everything is pretty straightforward right now: 3 hosts, redundant fiber switches/HBAs, NS120 with 79x 146GB SAS disks.

What I'm wondering is what would be an optimum RAID config for this SAN? I wouldn't think 79 disks in a RAID 6 would be a good idea. I would like to stay within RAID 5 or RAID 6.

You need to study basic RAID performance docs. RAID6 is going to suck horribly for write performance - in fact, this is by far the most common performance mistake we see made, given the relatively small cache on the NS-120.

Don't group everything together. Build raid groups / luns for specific performance and size needs (performance first - size is easy!). Go from there.
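The write-penalty point can be put in rough numbers. A minimal sketch, assuming ~180 IOPS per 15K spindle (an assumed figure, not from the thread) and the textbook write penalties of 4 for RAID5, 6 for RAID6, and 2 for RAID10, ignoring array cache entirely:

```python
# Back-of-envelope random-write throughput per RAID group.
# SPINDLE_IOPS is an assumed figure for a 15K FC/SAS drive.
SPINDLE_IOPS = 180
WRITE_PENALTY = {"raid5": 4, "raid6": 6, "raid10": 2}

def effective_write_iops(disks: int, level: str) -> float:
    """Front-end write IOPS a group can sustain at 100% random writes."""
    return disks * SPINDLE_IOPS / WRITE_PENALTY[level]

# A 5-disk (4+1) group: ~225 write IOPS as RAID5 vs ~150 as RAID6.
print(effective_write_iops(5, "raid5"), effective_write_iops(5, "raid6"))
```

Cache absorbs bursts in practice, but for sustained random writes the RAID6 group delivers only two-thirds of what the same spindles would as RAID5, which is the mistake being warned about.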
 