What's the best out of the box RAID5 setup?

lachdanan

Hi,

Currently I have a 5-drive Drobo (RAID5?) setup, but I'm still out of space and have to use many other external drives scattered all over.

I was looking at the 8- and 12-drive Drobos; the 8-drive one seems to be about $1k, but I'm not sure about the 12-drive one. I bought a Drobo years ago because I don't know much about custom RAID setups and read that if you mess one up, you lose everything.

What's the best affordable out-of-the-box RAID (5, I assume) device I can buy that takes 4TB HDDs, has as many slots as possible, supports hot swapping, and maybe even mixed drive sizes?

I'm sure some people here have the ultimate setups, but I don't want to spend new-computer money on it :)




Thanks.
 
"out of the box" for 8 drive+ is going to cost what a mid/lower-end system is going to run.

Someone in the For Sale forum here has an 8-drive Synology that would probably do all you want, and it can be expanded without replacing the entire unit.
 
Well there are a couple of things:

First and foremost, I'd avoid RAID5 with massive hard drives; your chances of losing your data on a rebuild go way up with drives over 1TB, so if speed isn't the primary concern I'd stick with RAID6.
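To put a rough number on that rebuild risk, here's a back-of-the-envelope sketch (my own assumed figures, not anything measured in this thread). It uses the commonly quoted 1-per-10^14-bit unrecoverable read error spec for consumer drives and assumes errors are independent, which is optimistic:

[CODE]
import math

URE_RATE = 1e-14  # assumed unrecoverable read errors per bit (typical consumer-drive spec)

def rebuild_failure_probability(surviving_drives, drive_tb):
    """P(at least one URE) while reading every surviving drive end to end for a rebuild."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - math.exp(-URE_RATE * bits_read)

for drives, size_tb in [(4, 1), (4, 4), (7, 4)]:
    p = rebuild_failure_probability(drives, size_tb)
    print(f"{drives} surviving x {size_tb} TB drives: ~{p:.0%} chance of a URE mid-rebuild")
[/CODE]

Even if real-world URE rates are better than the spec sheet, the trend is the point: the more terabytes a rebuild has to read flawlessly, the worse single parity looks.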

As far as messing things up goes, I'm pretty sure that holds for any solution; if you screw up with a Drobo, your data will be just as gone if you don't have a real backup in place.


Now the important bit:

How much data in total do you need to store? Do you have a real backup solution? What are your priorities (lots of redundant storage, high-speed storage, etc.)? What level of tech support will you need when it eventually has a problem? How valuable is the data being stored: how much will it cost you to replace it if it disappears tomorrow?

Without those answers you won't be able to find a real solution.
 
Thanks a lot, guys. I did some research and someone suggested the Norco 24-bay units. I'm not sure how good the internal components have to be (i.e. mobo, CPU, etc.), but the purpose is to store MKV Blu-ray rips, so maybe a couple thousand files that are very large (4.5 to 25 GB each).

I was thinking of having, say, a 12-drive RAID with 2 drives for parity. Speed is not important, but protection is. I won't play the videos very often, maybe once a day or once every few days.

I think I have 16TB right now, and I don't have a real backup. As for tech support, I can handle it if it's not too complicated; things like changing a drive or rebuilding the array I should be able to do.

If I lose the files, the cost is just the time to find and download them all again, which would be a lot.

Also, after reading yesterday, I realized software RAID like FlexRAID or SnapRAID seems like the way to go.

What do you guys think?

Thanks :)
 
I think I have 16TB right now, and I don't have a real backup.

Maybe you should work on that.

A drive pool of 16TB mirrored to another drive pool of 16TB is much better than any sort of RAID setup, with or without parity.

In general, RAID sucks. I'd suggest drive pools in different machines that mirror your data.

I'd suggest a couple of really simple build-it-yourself NAS boxes in cheap cases with lots of bays, good power supplies, and a quality in-line AVR UPS, and then pool 4-6 drives of various sizes into a pool big enough to hold your data.
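Just to illustrate the mirroring part (a minimal sketch with made-up paths and hostname, not how anyone here necessarily does it): a nightly one-way rsync from the primary pool to the second box is usually all it takes.

[CODE]
# Minimal sketch of a nightly pool-to-pool mirror. The paths and the
# "backup-nas" hostname are hypothetical; it just wraps rsync.
import subprocess

SOURCE_POOL = "/mnt/pool/"               # hypothetical local drive pool
BACKUP_TARGET = "backup-nas:/mnt/pool/"  # hypothetical second machine, reached over SSH

def mirror():
    # -a preserves attributes/timestamps, --delete keeps the target an exact mirror
    subprocess.run(["rsync", "-a", "--delete", SOURCE_POOL, BACKUP_TARGET], check=True)

if __name__ == "__main__":
    mirror()
[/CODE]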

I helped a friend use this board, case, and supply, with 2GB of RAM, to build a pair of mirrored systems with drive pooling.

The systems clocked in at something like $200 each (including an Intel NIC, which solved the headaches FreeNAS has with Realtek NICs), plus the disks.
 
Yep, for cheap get that $300 unit + random-size drives + SnapRAID.

Benefits are:
It's cheap.
You can mix drive sizes.
When you need more space, you add whatever drive you can get, one drive at a time.
Lower power usage -- it's not RAID, so all your drives can spin down and use no power, and when you watch a movie only the drive that movie is on needs to spin up.
SnapRAID will let you recover from one failed drive (or two drives if you also use q-parity, akin to RAID6).
Worst case, if two drives fail you just lose those 2 drives; the rest are still usable (or 3 drives if you were using q-parity).
Also, the next version of SnapRAID is going to allow more than just 2 parity drives, so let's say you fill out a 24-bay case one day... you could have 4 parity drives and 20 data drives.

Also it works on Windows too. No need to use Linux if you don't know it.
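To make the "mix drive sizes, add one at a time" idea concrete, here's roughly what a SnapRAID config looks like. The paths are invented and the exact keyword for the second parity file has changed between SnapRAID versions ("q-parity" is the older spelling), so treat it as an illustration, not a copy-paste config:

[CODE]
# snapraid.conf sketch -- paths are made up; check the manual for your version
parity /mnt/parity1/snapraid.parity
q-parity /mnt/parity2/snapraid.q-parity

# content files hold checksums/metadata; keep copies on more than one disk
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content

# data disks can be any size and get added one line at a time
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/

exclude *.unrecoverable
exclude lost+found/
[/CODE]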

Wait, I should note that the $300 case is loud as shit. Like, way louder than you'd think. Almost certainly you'll need to replace the power supply and most likely also the 80mm fans in the case. Also, I'm not sure the SATA controllers that thing comes with support >2TB drives. So it's not an ideal setup unless you are deaf and only have <=2TB drives. When you start to add up the cost of a new PSU and fans and maybe a new SAS controller, it's no longer such a great deal, but it is still pretty good. A bare 24-bay Norco is $399. Even if you eventually throw out the included mobo/CPU/RAM/SATA controllers, just the case itself is nicer than a Norco, although it doesn't use MiniSAS connectors, so the wiring is a bit messier. Still, I'd take messy wiring over shitty Norco build quality.

Also keep in mind SnapRAID is ideal only for storing mostly unchanging data, i.e. media files. You don't really want to store anything else on it. If you want to put "My Documents" type crap somewhere, do not use SnapRAID. Use SnapRAID for media and then create a separate RAID1 array for other stuff, or if you use Linux you can use SnapRAID for media and ZFS on Linux for a small pool for non-media stuff. Best of all worlds.
 
Thanks a lot for the info. I want to use SnapRAID or FlexRAID; I'm not sure which, but I heard SnapRAID is not automated, so you have to run the commands yourself. It would be nice if they ran every day or something.
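From what I've read, people handle that by just scheduling SnapRAID's real "sync" and "scrub" commands from cron or Windows Task Scheduler. A minimal sketch of what I'm imagining (the wrapper and the log path are hypothetical, and option spellings may differ between SnapRAID versions):

[CODE]
# Sketch of automating SnapRAID's manual workflow from a scheduler.
# "snapraid sync" and "snapraid scrub" are the real commands; the wrapper
# and the log location are just illustrative.
import datetime
import subprocess

LOG = "/var/log/snapraid-nightly.log"  # hypothetical log file

def run(*args):
    with open(LOG, "a") as log:
        log.write(f"\n--- {datetime.datetime.now()} snapraid {' '.join(args)} ---\n")
        subprocess.run(["snapraid", *args], stdout=log, stderr=subprocess.STDOUT, check=True)

if __name__ == "__main__":
    run("sync")              # update parity for whatever changed since the last run
    run("scrub", "-p", "5")  # re-check a small slice of the array each night
[/CODE]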

After reading that thread, I realized people were replacing all 2+3 fans and the power supply, like you said. Also, when you said the SATA controllers, do you mean the ones on the mobo? I saw people using separate SATA cards, I think, to support 24 drives. Would those support >2TB drives?

Do you really think the Supermicro case is better and more robust than the Norco? If I buy that Supermicro from them, shipping included it will be about $530, but if I build the same thing with a Norco it costs me about $650. I didn't include the cost of new fans, power supply, etc., since that would be spent on both systems.

I plan to store 100% MKV files. All my other files are so small and insignificant that a 2TB drive outside the RAID (inside my PC) would be way more than enough for life.

Also, I don't need a graphics card, right? How will I see and install the OS, though, and control the system remotely? I've never done that. I have a brand new GTX 280, I think, from work.

Lastly, do you think the specs of that Supermicro build are more than enough? I can replace the CPU with a 2.7GHz one for $20, I think. But I don't know why some people in that thread have the very latest specs for similar servers, like 64GB RAM, the latest Xeon, an SSD for the OS, etc.

That seems like overkill for RAID storage, right? I won't use the server for other computation or anything.

Thanks again man :)
 
I've not tried FlexRAID, but I know many on the AVS forum use it. Although I'm not sure how free it is anymore? I think it's like $50 now or something; I think the author started charging a while ago. In any case, the two are very similar in how they work, and FlexRAID is probably just easier. I wouldn't say don't use it.

The $300 Supermicro setup comes with three PCI-X 8-port Supermicro SATA controllers; the 24 bays are attached to these. You don't even need to use the motherboard's onboard SATA controller, though you can put the OS drive on it if you want.

To use >2TB drives, though, you need to replace those SATA controllers. The popular option is to buy IBM M1015 cards on eBay for ~$130. Each supports 8 drives, so you can get one to start with but will eventually need three for 24 drives. Or you can get one to start and buy a SAS expander when you need more than 8 drives. Either way, the M1015 is a PCIe card and the included motherboard only has PCI-X, so now you need a new motherboard to do any of this, which means a new CPU and RAM too. Add a new PSU and now you're replacing everything except the case itself.

The $300 Supermicro setup is only a good deal if you touch nothing. If you're not happy with <=2TB drives, as soon as you replace anything you end up replacing everything, and it's no longer a great deal.

Many end up doing exactly that, which is why so many on the AVS forum have fancy setups: they're just buying the case for $300 and throwing out all the stuff in it. If the case is $520 for you with shipping, then even that isn't a good deal; a 24-bay Norco is $399.

While the quality of the Supermicro is better than a Norco's, it's not that much nicer. If they were the same price I'd take the Supermicro, but I wouldn't pay extra for it.

Oh, and yes, you'll almost certainly need some VGA device. Most server motherboards have an onboard VGA controller so you don't need to waste a PCIe slot (plus the extra power and noise), but if you end up with a desktop board you'll need a VGA controller. Technically you only need one to install the OS, at least with Linux; I'm not sure how Windows behaves if you pull it out afterwards, and they might make you buy Windows Server for headless setups. Linux doesn't require that. But most desktop boards will require a VGA device to be installed anyway, so it's all a moot point. Typically you either get a server board with onboard VGA or you get a desktop board and add a low-power GPU.

Also, again, basically any hardware will work for a basic file server. Many end up wanting to do more with it. E.g., if you have Roku devices in your house they cannot play MKV files off the network, but if you install Plex on your server it can transcode video files and send them to the Roku for you... but then you need a beefy CPU and such to do this. Or maybe once you have this server, why not also run torrents on it? Deluge has a web interface. Or just remote desktop into the server to control stuff. For a simple file server and nothing else, though, anything works. You can find people running Atom CPUs.

Anyway, if that $300 Supermicro isn't ideal for you, then you'll likely be looking at getting a Norco + some low-end server motherboard + a low-end Intel/AMD CPU + at least 4GB of RAM. Offhand I don't have any recommendation; maybe someone else can chime in, or you can look for recent similar-ish threads.
 
Thanks for replying. Some people over there seem to claim they got 4TB drives working without changing the mobo. I read some use the AOC-SAT2-MV8. Does that support >2TB drives?

Right now I don't have anything bigger than 2TB, but I was planning to get some larger ones.

Also since I am in Canada, even an empty Norco with shipping would cost me $500 (newegg.ca).

I understand it better now. I just want to quiet the system with different fans, just like everyone else there ends up doing. The ability to support 4TB drives would also be nice, so I could throw in a few new 4TB drives (1 for parity, 2 for more data) alongside all the other HDDs I have lying around.

Also, if you were to get a new server mobo to support 4TB drives, which one would you get? Does it have to be current generation?

Thanks again :)
 
Oh shit, I take it back: it comes with three AOC-SAT2-MV8s. I just assumed they didn't work with >2TB drives since they're so old. I happened to have a new 4TB drive sitting here, so I plugged it in and yeah, it shows up as 4TB.

So I guess only downsides to the $300 Supermicro system is:
1. Very loud - but you can fix this with PSU/fans
2. It is old and if anything ever breaks it'll probably be a pain to get working again.

As is, though, it's a working file server with ECC RAM and everything. You literally need nothing except HDDs (and a PSU/fans if you're not deaf).
 
Thanks for replying. I am glad that it supports it :)

Do you also use AOC-SAT2-MV8 yourself?

It also seems like the Supermicro case is 30 lb heavier than the Norco. I guess that should mean a better build, like you said?

I pulled the trigger on this one:
http://www.ebay.com/itm/151128996482

Do you know if I can use a quad-core with that mobo? If so, would it make sense to replace the CPU with a 2.7GHz quad-core Opteron instead? It costs $20, so if it would speed up building parity, calculating checksums, etc., I don't mind spending it.

Also, do you know which fan would be best for replacing the 3 fans inside (not sure what they're called, as opposed to the 2 case fans on the other side)? Some guy was suggesting this:
http://www.newegg.ca/Product/Product.aspx?Item=N82E16835233031&Tpk=XLF XLF-F1453

But I also heard some people recommending Noctua fans. The way this guy used those large fans was to butt them together with a zip tie and then secure them with all the SATA cables coming from one side, like this:

[photo of three large fans zip-tied together in place of the stock bracket]


Thanks again. You helped me clear up a lot of questions I had :)
 
Do those things just take desktop SATA drives? And if so, can you mix and match them up?
 
I'm not an expert, but yeah. You can mix and match if you use software RAID like FlexRAID, SnapRAID, etc.
 
Yeah, when I bought mine I paid the $20 extra or whatever it was at the time for a quad-core, because why not. I don't have any actual numbers from syncing SnapRAID or anything that show how much faster it is versus a dual-core, though.

If the guy says his three 140mm fans zip-tied together work, then that seems like a good plan. You'll have to figure out how to remove the stock fan bracket that takes three 80mm fans, which might require a bunch of screws or maybe drilling out a couple of rivets. But you won't have to buy a Norco 3x120mm fan bracket and ghetto-rig it to fit.

I don't know what the best 140mm fan is; anything from 1000-1400 RPM will probably be quiet enough and still move enough air. Noctua fans are so expensive I don't know if they're worth it.

To Jeffman: Yeah you can put any SATA drives in there and they just show up to the OS as a bunch of individual drives.
 
Thanks a lot man, I appreciate your insight. Btw, do you know what length of SATA cables I should get? Someone in that thread mentioned using different lengths to reduce the mess inside, but they didn't say what lengths they were. Since I've never had this kind of case, I'm not sure what would fit, though I should check whether it comes with SATA cables. Here is a picture of what I'm talking about:

[photo of the case interior wiring]




Thanks again :)
 
Mine came with literally everything minus HDDs. You shouldn't have to buy SATA cables.
 
Thanks a lot man. Where did you get your system, from the same company, Tams? I thought you built yours yourself.
 
I've got two systems. One is a fancy Norco with 2x 6-core Xeons, 48GB of RAM, 20x 2TB drives, and 6x 4TB drives. Then, about 6 months ago, I bought the Supermicro thing from Tams to use as a backup system. I put all my older (750GB-2TB) drives in it and plug it in once a week for a few hours to copy stuff to. It still has the stock PSU/fans/everything; all I did was add HDDs.

Usually the Supermicro system sits in the garage, but currently they're both sitting in a hallway closet:
[photos of the two servers in the closet]


You can see I only have the 20-bay Norco (they didn't make a 24-bay when I bought mine), so after I ran out of unused bays I just started putting the new 4TB drives on top of a piece of wood. That metal strip is from the hardware store, the kind used to mount shelves on a wall; I just put one screw into each drive on top so they don't move around too much.
 
Thanks man, that's a great setup you have :)

Are you using one single giant pool of all 24 drives? If so, how many do you set aside for parity? I heard some people suggesting 2 separate RAIDs (12 drives each) with 2 parity drives each. Not sure if that's extreme or sensible.
 
Wow man, your setup looks great. I bet the Norco cost like $3k, without the drives I assume?

Btw, are you using the SATA connections on the mobo for the HDDs sitting on top of the case? +1 for creativity in hooking them up like that :)

EDIT: Did you also carve the wood so the hard drives sit in it perfectly? It looks like they're recessed into the wood.
 
On the main server I have an 8x 2TB raidz2 array, a 12x 2TB raidz2 array, and at the moment a 6x 4TB SnapRAID array with 1 parity drive. Later this weekend that should become an 8x 4TB SnapRAID array with 2 parity drives. The backup system has 22 drives in it with no parity drives at the moment.

I'm sort of in the process of copying all the crap off that 12x raidz2 array so I can delete that pool and add all those drives to the SnapRAID array. I've just been pretty lazy about actually doing it.

With normal RAID5/RAID6 I don't really like going over 8 drives, even though I did make that one 12-drive pool. But with how SnapRAID works (worst case being I lose 3 drives and the rest still work), and the type of files you store on it (movies and stuff that can be replaced), I see no reason to make lots of 8-drive arrays or anything. My plan is something like a 20+ drive SnapRAID array with 2 parity drives, and maybe 3 parity drives when the next version comes out.
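For what it's worth, here's a rough sketch of the parity-count question with made-up numbers. It assumes drive failures are independent (they aren't entirely -- same age, same batch, same PSU) and a 1% chance that any given drive dies inside one repair window:

[CODE]
# Rough sketch: probability that more drives die in one repair window than
# parity can cover. P_WINDOW is an assumed number, not a measured one.
from math import comb

def p_exceed_parity(n_drives, parity, p_fail):
    return sum(comb(n_drives, i) * p_fail**i * (1 - p_fail)**(n_drives - i)
               for i in range(parity + 1, n_drives + 1))

P_WINDOW = 0.01  # assumed chance one drive dies before a failed one is rebuilt

print(f"1 x 24 drives, 2 parity: {p_exceed_parity(24, 2, P_WINDOW):.3%}")
print(f"1 x 24 drives, 4 parity: {p_exceed_parity(24, 4, P_WINDOW):.4%}")
print(f"2 x 12 drives, 2 parity each: {1 - (1 - p_exceed_parity(12, 2, P_WINDOW)) ** 2:.3%}")
[/CODE]

The other half of the argument is what exceeding parity actually costs you: with RAID6 it's the whole pool, with SnapRAID it's just the extra failed drives, which is why I'm comfortable with 2-3 parity drives across 20+ disks of replaceable media.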

Also, I got that Norco mostly for free, lol... I just bought the case and HDDs. The mobo/CPU/RAM were all free from a friend; someone sent them to him for some testing and never asked for them back. He didn't need them, so he gave them to me and just said not to sell them on eBay in case whoever it was asks for them back, but it's been like 2 years now. X5650 CPUs and such were pretty badass when new, and still nice enough for what I use them for.
 
Amazing setup :) All this file management takes time, but once it's done at least you can relax. So is there any advantage to having, say, 2x 12-drive RAIDs with 2 parity drives each vs. one 24-drive RAID with 4 parity drives? Does making it all a single RAID put it at greater risk as long as the number of parity drives is the same? Also, do you think 2 parity drives is enough for a 24-drive RAID? I read someone saying you need 2 for every 12, but I'm not sure there's a rule like that. I imagine it's highly unlikely for 4 drives to go down at once.

Also, why is SnapRAID limited to 2 parity drives? I heard FlexRAID can use more. I'm not sure if it has to be specifically implemented for each parity level.
 
Well one way or another, especially with lower flow fans, you want to make sure all the air they move is being pulled through the HDDs to cool them. The HDDs are a bit restrictive, and a gap is almost no restriction. Don't give air the choice of which path to take. ;)

Even with my setup, because I don't have the top of the case on, I used tape and foam to fill the gaps and make sure all the air has to be pulled through the HDDs.

With round fans, I mean, you could get a piece of cardboard or something and cut holes for the fans while still avoiding gaps. But it might just be easier to use square fans.
 
Thanks, that's a good point. I was thinking the same, but even though I'd never heard of the brand, people were showing Phanteks to be very good, and those are round fans. But of course they don't use them the same way we do, so that explains it.

Btw, what fans are you using? The colors look like Noctua?
 
You might want to see what fans people recommend for radiators, as that amount of restriction might be closer to what these cases see pulling air through the HDDs.

There were not many options for quiet heatsinks that fit those Xeon CPUs (I forget what socket they are); in any case I ended up with Noctua heatsinks/fans.

As for the 120mm fans on the HDDs, at first I had Slipstream fans, but I removed them after one failed and replaced them with three of these:
http://www.newegg.com/Product/Product.aspx?Item=N82E16835553001

I'm not sure if SPCR ever tested them, but they're good enough for me. I don't need silence, just quiet enough; the system sits in a hallway closet, not in my bedroom or office or something.
 
Thanks man. Those seem like pretty good fans, even if a little expensive :)

Also do you know about these cables?

[photo of flat blue SATA cables]


I really like that they're less cluttered, but I heard you need a different card than the AOC-SAT2-MV8.
 
The controller takes any standard SATA cable; it does not use MiniSAS connectors or anything. I actually have some of those blue cables at work. They seem to come with random stuff, but I'm not sure where you can buy them. I know they came with some Tyan 1U servers, and I think I've also gotten them with some LSI controllers, although only with a MiniSAS connector on one end. In any case, no, I'm not sure where to get them.

Actually, here they are with the MiniSAS connector on one side:
http://www.ebay.com/itm/HP-Compaq-5...t=US_Drive_Cables_dapters&hash=item3a8465bf40

But I know you can get them with normal SATA connectors on both sides.

Ah here you go:
http://www.ebay.com/itm/Supermicro-...t=US_Drive_Cables_dapters&hash=item19e291c2b6

Not sure about lengths and all, but they seem to be available on eBay at least.
 
Thanks a lot man, you rock :) I hadn't seen any of these before. So the MiniSAS ones have one connection to the card but break out into 4 HDD connections on the other end? That's why you need a different card? Or are you saying I can use the cable in the first link with the AOC-SAT2-MV8?

If I'm not wrong, the cable in the second link also breaks out into 4 connections? So shouldn't I be able to connect it to one port of the AOC-SAT2-MV8 and use 4 HDDs, and therefore max out 24 HDDs using only one AOC-SAT2-MV8 card?

If so, would there be any slowdown because 4 HDDs use 1 cable?

Thanks again, I will read more about these cables.
 
Actually, sorry about the second cables, I can see that they each have 1 input and 1 output. I couldn't see it in the small images, but now I can.

But I'm still a little confused about the first one :)

Also, is the whole point of these cables that they take up less space?
 
The first one, yeah, is a MiniSAS fanout to 4 normal SATA connectors. You'd need a controller that has MiniSAS connectors; the AOC-SAT2-MV8 does not. MiniSAS has kind of become the standard connector for SAS controllers/expanders/backplanes and such; it's effectively 4 cables in 1. You can look up SAS stuff in general if you want to see the advantages. For home server use, the only real advantage is less wiring, for the most part.

Also, if you're asking about these blue cables in general, I dunno if they're all that special. You initially linked to that picture and I thought you were asking where to get them, lol. I mean, they're nice, and I generally only see them with server gear, so I guess they may be slightly higher quality than most SATA cables, but I don't know for sure. Plus, as long as a SATA cable works, it's good enough. They do seem to be about as thin as you can get, though, if you're really trying to minimize wires.
 
Thanks a lot man, that clears things up. I was actually asking where to get them too, so that's useful :)

Gotcha, I might get some since they don't seem very expensive. The MiniSAS ones aren't that much better than the second blue cables, though I can see they would minimize the mess. But buying new cards just for that wouldn't be worth it.

Btw, do you know what kind of card I would need for the Supermicro server with the H8DME-2 mobo? I'm not sure if there's a single card with 6 SAS connections that each branch out to 4 HDDs, so 6x4=24?
 
I forget if there are 2 PCIe slots on that mobo or not?

If so, you'd want to get an IBM M1015 SAS controller and a SAS expander:
http://www.intel.com/content/www/us/en/servers/raid/raid-controller-res2sv240.html

You can search this forum for SAS expander threads.
(edit: http://hardforum.com/showthread.php?t=1484614)

Yes, the bandwidth isn't ideal, but even a single MiniSAS connection between the controller and the expander is 1200MB/sec. It's not like you need more than that when, in the end, people connect to the server over GbE anyway.
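Rough numbers behind that, if you want to sanity-check it (rounded, assumed figures, not benchmarks):

[CODE]
# Back-of-the-envelope bandwidth comparison (rounded, assumed figures)
lanes = 4            # a miniSAS (SFF-8087) cable carries 4 lanes
lane_gbit = 3.0      # 3 Gbit/s per lane on these SAS1/SATA2-era parts
encoding = 8 / 10    # 8b/10b line coding overhead

minisas_MBps = lanes * lane_gbit * 1e9 * encoding / 8 / 1e6
gbe_MBps = 1e9 / 8 / 1e6  # gigabit Ethernet before protocol overhead

print(f"miniSAS uplink: ~{minisas_MBps:.0f} MB/s")  # ~1200 MB/s
print(f"gigabit LAN:    ~{gbe_MBps:.0f} MB/s")      # ~125 MB/s best case
[/CODE]

So even with a dozen drives hanging off one expander uplink, the network is the bottleneck long before the cable is.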

If you got a different motherboard with more PCIe slots, I'd recommend getting 3x M1015 controllers and avoiding any potential issues with expanders.
 
Yes it has 2 :)

Damn, I need to read up on these SAS controller and expander things.

So if you access 1 HDD hooked to that cable vs. 4 of them at the same time, that 1200MB/sec gets shared between them?

Getting 3x M1015 seems like it will cost a lot, like $400 :)
 
Also, what fan would you recommend for a 2.7GHz quad-core Opteron? Do they get hotter as the number of cores increases? :)
 
4TB drives and RAID5 don't mix. Once one drive fails, you're looking at a long period of constant stress on the remaining ones, and their failure probability shoots up. What's worse, all those unreadable bad blocks you haven't noticed over the last 2 years will definitely pop up now, since you're reading the whole drive. To make the mess complete, I just had a massive RAID failure and had to wait on rebuilds from working but "wounded" drives that wouldn't give more than 35 MB/sec. That's a solid 24 hours to read a 3TB drive, and that's for every resync attempt, which might fail, plus time for re-reads on read errors.

Myself, I also partition the drives and build RAID inside the partitions, with the rule that no array is made of drive parts bigger than 1TB. In addition to letting you resync the most important x*1TB chunk first, you can place that most important chunk at the beginning of the drives, which makes its resync significantly faster than the later ones.
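For reference, the arithmetic behind those resync times (the healthy-drive speed is my own rough figure):

[CODE]
# Resync time = capacity / sustained read speed (decimal TB, MB/s)
def resync_hours(tb, mb_per_s):
    return tb * 1e12 / (mb_per_s * 1e6) / 3600

print(f"3 TB drive at 35 MB/s (wounded):  ~{resync_hours(3, 35):.0f} h")   # ~24 h, as above
print(f"1 TB partition at 35 MB/s:        ~{resync_hours(1, 35):.0f} h")
print(f"3 TB drive at 120 MB/s (healthy): ~{resync_hours(3, 120):.0f} h")
[/CODE]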
 