Flexraid Storage Server

Neme (n00b, joined Jan 19, 2010, 30 messages)
Looking to put together a large FlexRAID (or similar) storage server, mainly for Blu-ray storage and playback over a gigabit network to 1-2 streaming players (Popcorn Hour/PC).

Parts I have looked into and decided on so far (unless someone can give me some other good ideas):

Norco RPC-4224
3x IBM M1015 (flashed to LSI 9211-IT firmware)
Sandy Bridge CPU (for low idle power and more than enough processing power for what I need)
Windows Server 2008 R2 (I have good experience with it and availability through my line of work)

Will initially be looking at about 8TB of storage with RAID 6-level redundancy on FlexRAID (so 5-6 drives or so, depending on size), but I want good room for expansion by adding drives.

Parts I'm not so sure about:

A motherboard to run a Sandy Bridge CPU and 3x M1015 cards; I'm interested to hear from anyone running this sort of configuration without any issues.

What's the current pick of the crop, HDD-wise, for this kind of mass storage, where performance isn't so much of a concern compared to storage density/cost/reliability?

As I say, I'm especially interested in hearing from anyone already running a similar system, but thanks for any feedback.
 
ZFS (or any conventional distributed parity RAID) is a terrible choice for a media server.

Snapshot RAID, using either FlexRAID or the free, open-source SnapRAID is the best choice for a media server.
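For anyone unfamiliar with snapshot RAID, a minimal SnapRAID layout looks something like the sketch below. The paths are hypothetical; the idea is that one dedicated parity disk protects the data disks, and parity is updated on demand rather than on every write:

```text
# /etc/snapraid.conf -- hypothetical 3-data + 1-parity layout
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
```

After copying new media you run `snapraid sync` to recompute parity, and `snapraid check` / `snapraid fix` to verify and recover.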
 
A motherboard to run a Sandy Bridge CPU and 3x M1015 cards; I'm interested to hear from anyone running this sort of configuration without any issues.

What's the current pick of the crop, HDD-wise, for this kind of mass storage, where performance isn't so much of a concern compared to storage density/cost/reliability?

I have a Supermicro X9SCM-iiF motherboard with an Ivy Bridge Xeon CPU (E3-1270v2). I also have three IBM M1015 cards.

I am running linux and SnapRAID, but I think the same hardware should probably be fine with the setup you mentioned (Windows and FlexRAID).

Lately I have been buying 4TB Hitachi 5400rpm HDDs, which can be found for about $230-240. These days, $60 per TB seems a fair price.
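The $60/TB figure is just the drive price divided by capacity; a quick sanity check on the quoted street prices:

```python
# $/TB sanity check for a 4TB drive at the quoted $230-240 street price
capacity_tb = 4
for price in (230, 240):
    print(price / capacity_tb)   # 57.5, then 60.0
```

So $230-240 for 4TB works out to roughly $58-60 per TB, matching the figure above.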
 
ZFS (or any conventional distributed parity RAID) is a terrible choice for a media server.

Snapshot RAID, using either FlexRAID or the free, open-source SnapRAID is the best choice for a media server.

While I certainly agree that SnapRAID can be a good choice for a media NAS (not so sure about FlexRAID now, given the pricing... you can buy a full OS for less :eek:), I really wouldn't go as far as saying ZFS is a terrible choice. It isn't in many cases, and in some it's the smart choice (and ZFS isn't just distributed-parity RAID, BTW).

At the end of the day, it's horses for courses. Some people may not like the manual parity updates of SnapRAID/FlexRAID (OK, you can script/cron it), or may prefer a single volume to manage without worrying about what to put on which disk (I know there are "pooling" solutions, but they all have certain issues or "features"). Some may want the increased bandwidth that striping gives, or the increased IOPS that mirroring gives (or both, if they need to support many clients simultaneously), or the snapshot/replication capabilities... and so on!
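The script/cron point can be as simple as a nightly crontab entry; this is just a sketch (the SnapRAID binary path, schedule, and log file are my choices, not anything from the thread):

```shell
# crontab -e entry: nightly SnapRAID parity update at 03:30,
# appending output to a log for later review
30 3 * * * /usr/local/bin/snapraid sync >> /var/log/snapraid.log 2>&1
```

That removes the "manual" part of manual parity updates, at the cost of a window between writes and the next sync where new files are unprotected.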
 
ZFS (or any conventional distributed parity RAID) is a terrible choice for a media server.

Snapshot RAID, using either FlexRAID or the free, open-source SnapRAID is the best choice for a media server.

I don't see what is terrible about it. At worst it will perform the same (single-HDD speed), and if he adds vdevs later his performance will increase, unlike with your suggestions.
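For readers unfamiliar with how ZFS grows: you expand a pool by adding a whole vdev at once, not single disks, and the pool then stripes across vdevs. A sketch of the workflow being described (pool and device names are hypothetical):

```shell
# Initial pool: one 6-disk raidz2 (double-parity) vdev
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Later expansion: add a second 6-disk raidz2 vdev;
# ZFS stripes new writes across both vdevs, raising throughput
zpool add tank raidz2 da6 da7 da8 da9 da10 da11

# Both vdevs now appear under the pool
zpool status tank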
 
ZFS is a terrible choice for a media server. This has been discussed before. Anyone who has looked into the subject knows that the best choice for a media server is snapshot RAID, either FlexRAID or SnapRAID. The OP has obviously looked into it. The ZFS zealots are wrong and poorly informed as usual (and off-topic -- if you want to argue about this, PM me, since I won't respond here about ZFS anymore).
 
I don't see what is terrible about it.

In the context of a home media library:

1) Most people have to learn a new OS to get ZFS. That's a big deal for people with no previous unix experience.

2) There is no way to easily add storage one disk at a time. Yes, you can add disks in mirrored pairs but raid1 is inefficient in both power (watts) and money for this application.

3) ZFS is complicated. Fine for professionals but tough for the average Joe.

4) The heavy hitting features of ZFS (scalability and data integrity) don't have a lot of value in a home media collection. You can always re-rip if a bit flips somewhere. High IOPS and bandwidth are not needed for media playback.

5) The OS's that have ZFS don't have the same level of hardware support as Windows or Linux. (I'm ignoring ZFS on Linux because building kernel modules is definitely not for the average user.)

ZFS is a fine choice for people who want to play around with ZFS but it's not the best choice for people who want storage with minimal hassle. They should go with a quality NAS, SnapRaid, or one of the other non-ZFS choices.
 
ZFS on Linux requires no building of kernel modules, at least for the most popular distros - just install the package!

Secondly, there are several made for purpose NAS operating systems which feature ZFS (as well as other raid schemes) - you really don't need to learn a new OS any more than you need to learn a new OS if you buy an off-the-shelf "quality" NAS.

Snapraid is great in many home media NAS configurations - I use it myself - but that doesn't mean anyone who doesn't agree or prefers a different approach, is then a zealot, wrong and poorly informed!!
 
Well, first, I'm not saying ZFS is for everyone; on the other hand, you, JoeComp, are saying it's not for anyone. We'll have to agree to disagree.

As for FlexRAID I used it until it became commercial and I wasn't impressed at all.
 
Ubuntu Server/mdadm has served me great for the past few years: RAID 10 of smaller drives for performance, RAID 5 for a large storage array, grown bigger, converted to RAID 6, grown bigger, etc. Performance is there; the only thing I don't care for is the lack of checksumming, but I figure since it's just a bulk media file server with some ISOs and other small stuff, it's not going to be the end of the world.
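The grow/convert path mentioned above looks roughly like this with mdadm. Device names and counts are hypothetical, and a reshape like this runs for many hours; the backup file is required for the level change:

```shell
# Grow a RAID 5 array by one disk
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=5

# Convert RAID 5 -> RAID 6, using another new disk for the second parity
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --level=6 --raid-devices=6 --backup-file=/root/md0-reshape.bak

# Finally grow the filesystem on top, e.g. ext4:
resize2fs /dev/md0
```

Monitor progress with `cat /proc/mdstat`; the array stays online (if slow) during the reshape.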
 
ZFS on Linux requires no building of kernel modules, at least for the most popular distros - just install the package!

RedHat - have to build: http://zfsonlinux.org/spl-building-rpm.html

Debian - have to build: http://zfsonlinux.org/spl-building-deb.html

I guess "the most popular distros" means Ubuntu for which there's a PPA.
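For the record, the Ubuntu route being referred to was (as of this thread) just the zfs-native PPA, something like the following; package and PPA names are as I recall them from that era, so treat this as a sketch:

```shell
# ZFS on Linux via the zfs-native PPA on Ubuntu (circa 2012/2013)
sudo apt-add-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs   # DKMS builds the spl/zfs modules automatically
```

So modules do get built, but by DKMS behind the scenes, which is why both sides of this argument have a point.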

Secondly, there are several made for purpose NAS operating systems which feature ZFS (as well as other raid schemes) - you really don't need to learn a new OS any more than you need to learn a new OS if you buy an off-the-shelf "quality" NAS.

They're not "special purpose operating systems." They're GUIs on top of operating systems that have ZFS. The user still has a Unix system on his hands, and he's on the hook for keeping it running. A pretty GUI for ZFS operations doesn't change that. It's not the same as buying a real NAS box, not even close.

Snapraid is great in many home media NAS configurations - I use it myself - but that doesn't mean anyone who doesn't agree or prefers a different approach, is then a zealot, wrong and poorly informed!!

ZFS is simply not the best tool for the job here. However, that doesn't make the people who run ZFS media collections misinformed or zealots. Maybe they enjoy running ZFS, think ZFS is cool, or simply want to learn the technology. Good for them! :) This is a hobby, and those are all perfectly valid reasons.

The problem comes when ZFS is promoted as the best solution to every storage scenario for everyone and all other solutions are treated as inferior. It hasn't happened in this thread but the forum is full of threads where it has.
 
RedHat - have to build: http://zfsonlinux.org/spl-building-rpm.html

Debian - have to build: http://zfsonlinux.org/spl-building-deb.html

I guess "the most popular distros" means Ubuntu for which there's a PPA.

That's right, Ubuntu and derivatives - if you wanted to build a Linux based home media NAS with ZFS and have no real Linux experience, why would you then pick RedHat or Debian? :confused:
BTW, unless you are running Windows, the situation is much the same with SnapRAID: there's an unofficial PPA for Ubuntu... but RedHat and Debian... :)

They're not "special purpose operating systems." They're GUIs on top of operating systems that have ZFS. The user still has a Unix system on his hands, and he's on the hook for keeping it running. A pretty GUI for ZFS operations doesn't change that. It's not the same as buying a real NAS box, not even close.

And what do these "real" NAS boxes use for an OS?
As for being not "special purpose operating systems" - what else would you use something like FreeNAS for? :confused:

ZFS is simply not the best tool for the job here.

And nobody has said it is, but that doesn't automatically make it a terrible choice either!
Your statement is simply your opinion - you are entitled to that of course, but so is everyone else.

At the end of the day, you pick what is, in your opinion, the best tool for the job at hand.
If others don't share that opinion, fine - doesn't make it wrong though!
 
The guy is asking for motherboard and hard drive suggestions, not for a ZFS war thread. PS: ZFS is not terrible for this, it is just not necessary.
 
ZFS is simply not the best tool for the job here.

The problem comes when ZFS is promoted as the best solution to every storage scenario for everyone and all other solutions are treated as inferior. It hasn't happened in this thread but the forum is full of threads where it has.

I would agree that every alternative can make sense given a particular need. However, when people give suggestions it's usually based on preference, and in reality, for the end user, everything usually comes down to cost.

Let me be perfectly blunt. Even the worst of storage solutions will hit 20-30 MB/s on its worst day, and no video content viewed in a home setting will outstrip that. Whether you pick software RAID, hardware RAID, SnapRAID, FlexRAID, or even ZFS, all have enough throughput to handle video. If a solution can at least deliver the performance, then it's easily in the running.
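To put numbers on that: the Blu-ray spec caps the transport stream at 48 Mbit/s, so even the pessimistic 20 MB/s figure has comfortable headroom:

```python
# Worst-case Blu-ray bitrate vs. the "worst day" array throughput quoted above
bd_max_mbit_s = 48                 # max BD transport-stream rate (Mbit/s)
bd_max_mb_s = bd_max_mbit_s / 8    # 6.0 MB/s
slow_array_mb_s = 20               # pessimistic array figure from the post
print(bd_max_mb_s)                 # 6.0
print(slow_array_mb_s / bd_max_mb_s)   # ~3.3x headroom even in the worst case
```

So a single stream needs at most ~6 MB/s, and even two simultaneous streams fit well within 20 MB/s.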

Everything else is a cost/preference/ease-of-setup issue when we are talking about home-brew solutions. Personally, I think mdadm strikes the best balance of features and performance, but if you aren't comfortable with Linux, then any of the other solutions might be someone's cup of tea. I might not agree with them, but when it comes to home-brew solutions there is no "perfect tool for the job". None that I've seen, anyway.
 
Looking to put together a large FlexRAID (or similar) storage server, mainly for Blu-ray storage and playback over a gigabit network to 1-2 streaming players (Popcorn Hour/PC).

Parts I have looked into and decided on so far (unless someone can give me some other good ideas):

Norco RPC-4224
3x IBM M1015 (flashed to LSI 9211-IT firmware)
Sandy Bridge CPU (for low idle power and more than enough processing power for what I need)
Windows Server 2008 R2 (I have good experience with it and availability through my line of work)

Will initially be looking at about 8TB of storage with RAID 6-level redundancy on FlexRAID (so 5-6 drives or so, depending on size), but I want good room for expansion by adding drives.

Parts I'm not so sure about:

A motherboard to run a Sandy Bridge CPU and 3x M1015 cards; I'm interested to hear from anyone running this sort of configuration without any issues.

What's the current pick of the crop, HDD-wise, for this kind of mass storage, where performance isn't so much of a concern compared to storage density/cost/reliability?

As I say, I'm especially interested in hearing from anyone already running a similar system, but thanks for any feedback.
Motherboard-wise, you can pick almost anything; just pick one that has direct PCI-Express links to its slots. Some motherboards use a PCI-Express switch/multiplexer to provide more slots, and the M1015 does NOT like switches/multiplexers.

HDD-wise... this is your choice. I would suggest picking 2TB-3TB drives.

The best bet is to pick up an entry-level server motherboard; you can search the many build threads in SSD & Data Storage.

Good luck with your build!!
 
I've played around with ZFS, have run hardware RAID, and am currently using FlexRAID. Based on what you are saying, this is my opinion:
- Firstly, what is your budget? I know you want the best bang/buck, but you won't get far unless you've got money. Also, while 3TB (or even 4TB) drives aren't the best bang/buck right now, the slight premium is definitely worth it, in my opinion, as you'll need fewer drives (vs. getting a bunch of more cost-effective 2TB drives), which may mean you don't need a 3rd HBA (e.g., 24x 2TB = 48TB vs. 16x 3TB for the same capacity with one less HBA). Pick a number, and get back to us.
- FlexRAID's biggest strength is its bare-knuckle, straight-up simple and yet effective data protection. You can throw it on pretty much any hardware and/or software, and it will run. If you are designing this specifically for FlexRAID, I suggest you get a more bare-bones setup (Atom processor, etc.) to save on hardware costs, and for even better power savings. I ended up with an i3 SB because I needed the horsepower to transcode, and even at idle it draws 60 watts from the wall; an Atom would do even better.
- If you are paying $10 or more for 2008 R2 over Windows 7 Home Edition, I suggest you go with Home Edition instead, unless you plan to actually use 2008 R2 features. Again, FlexRAID won't need them, and you can save the money for more hard drives.
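The drive-count point in the first bullet is easy to check; only the capacities and the M1015's 8 ports come from the thread, the arithmetic is mine:

```python
import math

# 48TB target, built from either 2TB or 3TB drives
target_tb = 48
drives_2tb = math.ceil(target_tb / 2)   # 24 drives
drives_3tb = math.ceil(target_tb / 3)   # 16 drives

# Each M1015 provides 8 ports, so the 2TB route needs a 3rd HBA
hbas_2tb = math.ceil(drives_2tb / 8)
hbas_3tb = math.ceil(drives_3tb / 8)
print(drives_2tb, hbas_2tb)   # 24 3
print(drives_3tb, hbas_3tb)   # 16 2
```

So the 3TB route saves eight drive bays and a whole HBA at the same capacity, which is the premium being argued for.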

Since ZFS was brought up, and the OP said "flexraid (or similar)", I'm going to put on my flame suit and say that you should be checking out ZFS before you try/pay for FlexRAID, because your situation could work well. My thoughts:
- The biggest "problem" with ZFS for a media server is that you can't "plug and play" new hard drives; rather, you add vdevs to the pool, which requires sets of hard drives. Since you are going big anyway, and you expect to go even bigger, I don't think getting another 3-7 HDDs at a time will be a huge problem, since you plan to go there anyway. If you were looking to run 4-5 HDDs and didn't expect to grow quickly, I wouldn't even bother considering ZFS, which is why I am bringing this up now (and before more people say "strictly NO to ZFS").
- You seem to know what you are doing, so figuring out ZFS really shouldn't be that hard. There is also a lot of support on these forums that can help you, just like how you are asking questions right now.
- ZFS is free; buy another HDD instead.
- A lot of people are working on ZFS, versus one developer (maybe he hired another?) for FlexRAID. One is being developed at breakneck speed and ported to Linux; the other is slowly plodding along.
- ZFS was designed from the bottom up for data protection. FlexRAID does a good job too, but ZFS is in a different league.
- ZFS does require much beefier hardware, but it scales amazingly well. FlexRAID does OK in this respect, but ZFS was designed for scalability, so again, two different leagues.

While I run FlexRAID now, which I purchased even after I got into a "small flame war" with the developer, I do plan to move to ZFS in the future when I build my Norco-type NAS. Why? Pretty much the last two points above.
 
ZFS is so good at almost everything when it comes to data storage that any opposing solution is always in competition, even if you simply wanted to know more about the opposing solution. This is why ZFS always comes up in every thread on here; it most likely solved whatever problem you are having and thus should be considered, regardless of your opinion on the system.

OP, If you haven't seriously considered ZFS, please look into it and if it doesn't fit, come up with very solid reasons as to why. Anything less than a perfectly sound opening post will result in the ZFS contention you see here.

The problem comes when ZFS is promoted as the best solution to every storage scenario for everyone and all other solutions are treated as inferior. It hasn't happened in this thread but the forum is full of threads where it has.

Agreed. I love what ZFS can do, and it really is powerful, but it's not the end-all-be-all, and unless you specify from the jump why you chose not to use it, there's no point in posting in this subforum, because all you'll get is "Why not ZFS?"
 