SAS Enclosure / DAS storage

I'm trying to correctly set up an HA storage system for Hyper-V at the lowest possible cost. Speed is not a big issue because the Hyper-V hosts will be limited by 2x gigabit NICs when accessing the iSCSI targets.

Best practices for Hyper-V HA call for a minimum of 2 storage servers and 2 enclosures, with each storage server connecting to each enclosure and both enclosures storing the same data.

I've been considering HP's MSA50, 60, and 70, all of which can support 1 or 2 controllers. Each controller has 2 SAS SFF-8088 ports, but one is marked In and the other Out, and these are generally used to daisy-chain enclosures. The SE3016 is also popular on this forum, but it only has 1 I/O set (in and out). I'm assuming both the HP units and the Rackable are SAS 3Gb/s enclosures, which is fine in terms of speed for my needs.

Are there newer SAS enclosures that are designed differently so that two storage servers can both be attached? Or do the older HP MSA products support that? The HP docs mention something called dual domain, but as far as I know that's just a form of MPIO, designed to add redundancy between 1 server and 1 enclosure by running 2 cables to 2 separate HBA/RAID cards.

I would most likely use an HBA, because based on the Microsoft Kool-Aid I've been drinking it's better to handle all RAID as Storage Spaces, and I'd love to finally be able to read SMART data directly while in "RAID".

If anyone has experience with the MSA50, 60, or 70 enclosures and has tips on how to use them / what HBAs play well with them, I'd love to know.

Anyway, I have a lot to learn on this topic and am open to any information or suggestions.
 
As far as I can tell there's a big price difference between the MSA50/60/70 line and what I assume came after it: the D2600/D2700 (SAS 6Gb/s) line and the P2000 G3 (tons of I/O options). HP made the naming/numbering amazingly confusing on these...

Most likely, if I purchased something from the MSA50/60/70 line, I would use the P800 controller, since they're dirt cheap on eBay and it allows upgrading the firmware.

I can't figure out if Dell had an equivalent DAS to the MSA50/60/70 line, but eventually they came out with the MD1000, which is advertised as SAS 12Gb/s to the enclosure. Is 12Gb/s just code here for a 4-wide SAS 3Gb/s link? It only supports SAS 3Gb/s and SATA II for the individual drives, though. These can be purchased for around $200 on eBay, so they might be worth it for the SATA II support, and if they really do have a faster connection to the whole enclosure.
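If my math is right (back-of-the-envelope, assuming the external port is the usual 4-lane wide link and 8b/10b encoding), that "12Gb/s" is just 4x 3Gb/s lanes bundled together:

Code:
# Back-of-the-envelope: what a "12Gb/s" SAS1 wide port actually means.
# Assumptions: 4 lanes per external connector, 3 Gb/s per lane, 8b/10b encoding.
LANES = 4
GBPS_PER_LANE = 3.0          # SAS1 signaling rate per lane
ENCODING_EFFICIENCY = 0.8    # 8b/10b: 8 data bits per 10 line bits

aggregate_gbps = LANES * GBPS_PER_LANE
usable_MBps = aggregate_gbps * ENCODING_EFFICIENCY * 1000 / 8

print(f"Marketing number: {aggregate_gbps:.0f} Gb/s")            # 12 Gb/s
print(f"Usable payload:   ~{usable_MBps:.0f} MB/s per wide port")  # ~1200 MB/s

Even after encoding overhead that's roughly 1200MB/s to the shelf, which is way more than my 2x gigabit iSCSI links will ever push, so the uplink speed isn't really the thing to worry about.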

Some of the newer HP literature mentions "Configure with a single controller for low initial cost, or dual controllers for high availability for the most demanding entry-level situations." But I'm still confused about whether the older MSA50/60/70 line has the same HA capabilities with dual controllers.
 
You're in for some pain if you're really going to set up two of all that crap. That stuff is going to be so loud and use tons of electricity.

Anyway, I've been through the gamut of DAS, so here's stuff to keep in mind:

MSA 60/70 - Dirt cheap, tons of them on eBay. Might have some compatibility issues with non-Smart Array RAID cards, but I believe LSI works fine with them for JBOD. They're pretty loud. Also, if you use SATA, expect only 1.5Gb/s connections to each drive.

MD1000 - Dell's slightly later answer to the MSA 60. Quieter, but really picky with drives, and REALLY picky if you use SATA. It's almost a must to use interposers with SATA drives, which jacks up the cost. Not a concern if you use SAS drives. You can typically find a fully loaded MD1000 on eBay for around $450; probably the best route. LSI/PERC cards recommended here.

SE3016 - The fucking Frankenstein. Not super loud, and you can get it pretty quiet by swapping out the fans. Only a single power supply, only does SAS1 (just like the others), but confirmed 3Gb/s SATA. It looks awful, and while the trays are metal they still feel crappy in my opinion. Inconsistent LED states depending on which HBA/RAID card you use.

For clustering, I think almost all of these JBOD units are out. You need to be looking at the HP P2000 or Dell MD3000/MD3200. Those have redundant links and redundant controllers, but you'll be paying a minimum of $1,000 for them on eBay. You're correct on dual domain: it's only for redundant links to a single server, not clustering.

The D2600/D2700 are really the ideal right now in my opinion when it comes to aftermarket JBOD. They run SAS2 at 6Gb/s and are more efficient and quieter than their MSA predecessors. Plus, they've been out a while, so cost is usually in the $600-1200 range. The Dell versions, the MD1200 and MD1220, are still over $1,000, so not worth it in my opinion.

Honestly, I'd skip the multiple storage enclosures in your plan. Just use multiple servers and their onboard storage for your setup. If this is for business, then I'd go with a real NAS/SAN like EMC/Nimble/NetApp. Storage Spaces is still pretty crappy overall; the tiering in R2 helps, but any of the parity modes in Spaces still suck for performance. Mirroring is OK/usable, but I'd never use it in production.
 
Thanks for the great reply.

I'm kind of leaning back towards using internal storage on the storage server as well, especially now that you've made it clear the MSA50/60/70 can never be truly HA / redundantly controlled by 2 different storage servers. I was also originally planning on using a few large 3.5" SATA drives, which doesn't work with any of the newer HP servers that are affordable, because they all use 2.5" drives (DL360/380 G5/G6 and later). But people have made it pretty clear that even minimal Hyper-V usage will saturate the IOPS of 2-4 spindle drives. So I'm now thinking I'll just get 2 240GB SSDs and work with the 146GB 10k SAS drives I already have plenty of. (I should only need 500GB for VMs and 500GB for general file storage.)
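To put rough numbers on that (the per-drive IOPS figures are rule-of-thumb estimates, not benchmarks of my actual drives, and the read/write mix is a guess):

Code:
# Rough random-IOPS budget for a few possible layouts.
# Assumptions (ballpark only): ~140 IOPS per 10k SAS spindle,
# ~75 IOPS per 7.2k SATA spindle, 70/30 read/write mix, RAID1/10 write penalty = 2.
def frontend_iops(spindles, per_drive_iops, write_penalty=2, read_frac=0.7):
    backend = spindles * per_drive_iops
    return backend / (read_frac + write_penalty * (1 - read_frac))

print("2x 7.2k SATA mirror:", round(frontend_iops(2, 75)), "IOPS")
print("4x 10k SAS RAID10  :", round(frontend_iops(4, 140)), "IOPS")
print("6x 10k SAS RAID10  :", round(frontend_iops(6, 140)), "IOPS")
# A single decent SATA SSD does tens of thousands of random IOPS,
# which is why parking the VMs on an SSD mirror looks attractive.

So a couple hundred IOPS from a small spindle set vs. tens of thousands from even one SSD; that's the gap people keep pointing out.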

I would most likely have 2 Hyper-V hosts running around 6-8 VMs that could fail over back and forth between each other. That would consist of pfSense, Elastix (a Debian-based Asterisk PBX), a CentOS LAMP box, 2-3 Windows servers for AD, DNS, and DHCP, and 2 test VMs (which obviously don't need to fail over). Most likely I'd just use some DL380 G5s I already own for this because they're quiet and can handle this load fine with 2x quad-core L5335 and 24GB RAM. This is for a small business obviously, but I'm not that anal about perfect uptime or redundancy. I have a DL360 G6 I could use as the storage server, but it's a little overkill IMO with the 2x L5520 Xeons I have in it.
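Just to convince myself one 24GB host could carry the whole lot after a failover, here's my rough RAM tally (the per-VM sizes are purely my own guesses for this workload, not numbers from anywhere):

Code:
# Rough RAM budget if one host had to run every VM after a failover.
# Per-VM sizes are my own guesses, not measurements.
vm_ram_gb = {
    "pfSense": 1,
    "Elastix (Asterisk PBX)": 2,
    "CentOS LAMP": 2,
    "Windows AD/DNS/DHCP #1": 3,
    "Windows AD/DNS/DHCP #2": 3,
    "Windows AD/DNS/DHCP #3": 3,
    "test VM #1": 2,
    "test VM #2": 2,
}
host_ram_gb = 24
hypervisor_reserve_gb = 2   # rough allowance for the parent partition

total = sum(vm_ram_gb.values())
print(f"VMs: {total} GB, host: {host_ram_gb} GB, "
      f"headroom: {host_ram_gb - hypervisor_reserve_gb - total} GB")

Tight but workable on 24GB, assuming my guesses aren't way off.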
 
I think your train of thought is heading in the right direction. As long as you do mirroring with Storage Spaces you should see OK IOPS, and with 2012 R2 you can do SSD caching, which I've read works well. I don't think it's better than LSI's CacheCade or some real storage vendors' caching, but it's much better than nothing.

The only reason I'd recommend the G6 series for your hosts is DDR3. The G6 runs on DDR3, you can get PILES of DDR3 for cheap, and the maximum capacities are much higher as well. I run most of my stuff at home on DL380 G6s.

Also, just as an FYI, there is a DL380 G6 model with 3.5" drives. It's a bit rarer than the 8x 2.5" version, but it does exist, and the cost is usually similar on eBay ($300ish). It holds 6x 3.5" drives.

My current favorite, though, is the Dell R510 12-bay version. As long as you have a PERC H700 in it, you get 6Gb/s SATA, 12x 3.5" bays AND 2x 2.5" bays. It's the perfect cheap storage server in my opinion. I looked at the DL360 G6s, and they'd probably work really well too, but they were too loud for my uses (basement home lab).

So how are you going to connect your storage to your hosts? SMB 3.0? Get a few Intel X540s and direct connect them for some badass 10Gb/s connectivity. :) That's overkill though; for the plan you described above, gigabit links would be just fine.

Last thing I should mention: be wary of which brand of SSD you buy, particularly if you're going to use an HP Smart Array controller. Those damn things HATE Kingston SSDs. I've had decent luck with the Samsung brand, and Intel SSDs seem to be the most compatible. Good luck my friend.
 
I was either going to use SMB 3.0 or iSCSI. There's not a whole lot of info out there on performance or setup instructions for doing it via SMB 3.0, but luckily I have the "official guide" to Hyper-V 2012, which does cover it.

I have 1 NC380T card in each server, bringing them all to 4 gigabit ports. It would be nice if I had 5 gigabit ports in each server so each could have a team to the storage server (through a separate switch from the LAN), a team to the LAN, and a single WAN connection (for pfSense, necessary on both hosts for failover). Since I'd most likely be using 2U servers, I can easily purchase some more NC380Ts.
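For my own sanity, here's how I'm counting ports per host (the role labels are just mine; the NC380T is a dual-port gigabit card):

Code:
# Port budget per Hyper-V host under the plan above.
ports_needed = {
    "storage team (separate switch from the LAN)": 2,
    "LAN team": 2,
    "WAN (for pfSense, needed on both hosts for failover)": 1,
}
ports_onboard = 2          # built-in gigabit NICs
ports_per_nc380t = 2       # NC380T is a dual-port gigabit adapter

total_needed = sum(ports_needed.values())
have_now = ports_onboard + ports_per_nc380t   # one NC380T installed today
extra_cards = max(0, -(-(total_needed - have_now) // ports_per_nc380t))  # ceiling division
print(f"need {total_needed} ports, have {have_now}, so buy {extra_cards} more NC380T(s)")

So one more NC380T per host covers it.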

I was considering cheaper SSDs with decent IOPS like the Crucial M500 (performance is horrid in the smaller sizes) and the XLR8 Pro (same SandForce controller as the HyperX 3K; the 3K version only). Of course these aren't high-write-endurance drives, but I'll take the risk. I'm guessing the XLR8 Pro is a bad idea if you've tried the HyperX 3K.
 
Noise matters somewhat, since this is a small business and everything is just in an office mounted in a 22U rack. I do really like the DL380 G5s because they're super quiet. Though I also have a PowerEdge 2600 that I had to keep around for parallel-port support for some legacy software, and it's easily 60dB (it's missing 1 redundant PSU, which might be causing it to ramp the fans). I'm no longer running that, but I have a 1U DL160 that's almost as loud that I also want to get rid of, since I'm not really using it and it's loud :).
 
I believe the Crucials work, but no promises. Never tried a SanDisk.
 
I snagged a DL380 G6 really cheap on eBay (~$150 shipped) and got 2 Seagate 600 480GB SSDs (they have really good random IOPS in reviews). (Plan is to RAID 1 these and store VMs on them.)

I also have 6x 146GB 10k SAS drives. (Plan is to RAID 10, 5, or 6 these and store general file shares here.)

Do you recommend I use the integrated P410 RAID for the SSDs and spinning disks, or build both arrays in Storage Spaces? I might be a Microsoft guinea pig and try Storage Spaces + SMB 3.0 storage for the VMs. And should I do RAID 10, a 5-drive RAID 5 + 1 hot spare, or just RAID 6 with the spindle drives?
 
The P410 is pretty crappy in the grand scheme of things. Hopefully it doesn't have issues with the SSDs. If it were me, I'd get an LSI 9260-8i and use that instead, BUT the P410 may treat you just fine. I *think* it does 6Gb/s SATA, but you'll want to check.

As for your 6x 146GB drives, it depends on whether you want high IO or capacity. You'll need an OS drive though, right?

So:
2x 146GB R1 for OS
4x 146GB R5 for capacity, or R10 for IO
2x SSD R1 for VMs

I always hate sharing my OS drives with VM usage; I typically segment them off, per the above.
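If it helps the capacity side of the decision, here's the simple usable-space math for the 6x 146GB spinners under the layouts being discussed (pure arithmetic, ignoring formatting overhead):

Code:
# Usable capacity of 6x 146GB 10k SAS under the layouts discussed above.
DRIVE_GB = 146

layouts = {
    "2x R1 (OS) + 4x R10":     2 * DRIVE_GB / 2 + 4 * DRIVE_GB / 2,
    "2x R1 (OS) + 4x R5":      2 * DRIVE_GB / 2 + 3 * DRIVE_GB,
    "6x R10 (no separate OS)": 6 * DRIVE_GB / 2,
    "5x R5 + 1 hot spare":     4 * DRIVE_GB,
    "6x R6":                   4 * DRIVE_GB,
}
for name, usable in layouts.items():
    print(f"{name:26s} ~{usable:.0f} GB usable")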
 
Technically the DL380 G6 has an SD card slot and an internal USB header, so I could run Windows Server from that, though Microsoft doesn't recommend it for anything beyond Hyper-V Server, and even then it seems like they only added it to have feature parity with ESXi. The bigger issue is the complex set of instructions for accomplishing it. I probably should figure out how to use that tool that converts install ISOs into VHDs.

I take it you recommend the hardware RAID over doing any of that in Windows. I checked the QuickSpecs, and although the G6 supports 6Gb/s SAS, it appears to only support SATA at 3Gb/s. It also says the controller lowers everything to match the slowest device (normal behavior), but I don't know how that works when mixing SAS and SATA. My 146GB drives are 3Gb/s SAS as well, though, so almost any enclosure would have dropped the SSDs to 3Gb/s anyway when mixed with the SAS drives.
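To quantify what running at 3Gb/s actually costs (my own arithmetic, assuming 8b/10b encoding like the rest of SAS1/SATA II): it caps sequential throughput around 300MB/s, which would clip the Seagate 600's sequential numbers but shouldn't matter much for random IO, which is what the VMs mostly generate anyway.

Code:
# What a 3Gb/s vs 6Gb/s link caps sequential throughput at.
# Assumes 8b/10b encoding (true for SATA II/III and SAS1/SAS2).
def link_cap_MBps(gbps, encoding_efficiency=0.8):
    return gbps * encoding_efficiency * 1000 / 8

for rate in (3.0, 6.0):
    print(f"{rate:.0f} Gb/s link -> ~{link_cap_MBps(rate):.0f} MB/s max payload")
# 3 Gb/s -> ~300 MB/s: enough to bottleneck an SSD's sequential reads,
# but random 4K IOPS on a VM workload won't come close to that ceiling.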
 