FlexRaid vs. SnapRaid, yes seriously...

Elpee

Weaksauce
Joined
May 6, 2013
Messages
69
I searched around the forum and found an older thread on this, but it's too old now. Time flies, things have changed, and I think both FlexRAID and SnapRAID have become more mature. With more and more people planning and building their own media servers at home, I think it's a good time to figure out which one is the best choice for Windows-based snapshot redundancy.
I've been testing FlexRAID + its storage pool and SnapRAID + StableBit DrivePool for two weeks (I didn't test Drive Bender because I was told SnapRAID and Drive Bender don't play well together due to Drive Bender's automatic balancing).
With such a short time for testing, IMHO, FlexRAID and SnapRAID are about the same in performance as well as ease of use on my 30+ drive server. So frankly speaking, it's hard for me to officially pick one for my server. Any suggestions, guys?
Much appreciated.
 
If you are happy with either of them, then SnapRAID is the obvious choice since it is free.
 
For your needs, I would give a very slight edge to FlexRAID if you are running 30+ drives, because you can have more parity drives than with SnapRAID (unless this was changed?). If you ever think you'll move to real-time RAID, that's another reason to go with FlexRAID. If this isn't that important to you, save the money and go with SnapRAID.
 
I'm in the same situation atm, upgrading from WHS v1 to WHS 2011 mostly for media storage/streaming, but mainly moving for 4 TB HDD support. I've been reading a lot but am still not sure.

StableBit DrivePool + Scanner - Seems very nice, at least for WHS; very easy, and it will give me the same functionality as WHS v1. Not sure how it would work with SnapRAID.

SnapRAID - Seems very nice for an open-source solution. I know it didn't have a storage pool in the past, but I read somewhere that 3.0 does... maybe I'm confused from reading so much this past week.

FlexRAID - This is what I'm inclining toward atm, mostly because a lot of friends have had good experiences with it; it also seems to be a very popular choice on AVS Forum for a pure media server. What worries me is development: the FlexRAID team seems to be putting a lot of effort into NZFS... not sure how FlexRAID will end up out of this.

I'm a little worried about mixing products, though, like StableBit + SnapRAID, and ending up with something that might not work or might create conflicts. With FlexRAID everything is done by the same vendor, so if there is an issue it should be resolved much more easily than with a cross-software solution... then again, I don't have any experience with either, and I'm about to venture into one.

Hope more people can post some thoughts. I'm not going with as many drives (16 is the plan atm), but it would sure help if more people could post their experiences or suggestions.
 
I prefer using FlexRAID because it combines the drive spanning and RAID features into one software package. Fewer points of failure, and if something goes wrong, fewer support contacts required. If something goes wrong with SnapRAID + DrivePool, you'll need to troubleshoot and figure out which software caused the issue, and if you require official support, you will likely need to contact both support teams.

FlexRAID also lets you add as many PPUs (Parity Protection Units) as you want. And with a 30-drive array, you want a LOT! I'd suggest a minimum of 3 parity units (probably more to be safe).

I've been using FlexRAID on my server for more than a month now, and I have zero regrets. There is even a feature to have FlexRAID automatically balance your drives if you want (disabled by default).
 
For your needs, I would give a very slight edge to FlexRAID if you are running 30+ drives, because you can have more parity drives than with SnapRAID (unless this was changed?).

You are correct that SnapRAID is limited to dual parity, while FlexRAID can do triple, quadruple, etc.

But if I had 30+ drives (I use SnapRAID), I would simply create two or more dual-parity sets.
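For the curious, here's a sketch of what that split could look like: two completely independent SnapRAID configs, each with its own content and parity files, synced separately. All paths and drive letters below are made up, and the second-parity directive name varies by SnapRAID version (q-parity in the older releases, 2-parity in newer ones):

```text
# snapraid-set1.conf -- first set: 16 data drives + 2 parity drives
parity   P:\set1.parity
q-parity Q:\set1.q-parity
content  C:\snapraid\set1.content
disk d1 E:\
disk d2 F:\
# ... d3 through d16 ...
```

You'd then run each set on its own schedule, e.g. snapraid -c snapraid-set1.conf sync, and a drive failure in one set never touches the other set's parity.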
 
SnapRAID - Seems very nice for an open-source solution. I know it didn't have a storage pool in the past, but I read somewhere that 3.0 does...

Yes, SnapRAID added some rudimentary read-only pooling support. It works well for pooling your data for reading purposes. But for adding new data you will still need to write it to a specific drive. I actually prefer it that way, since I can control which drive my data is sent to, and I add new data infrequently enough that it is not a big deal to me to open up the drive I want to write it to.
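Mechanically, that read-only pool is nothing exotic: it's a directory of symbolic links mirroring the data drives. A rough Python sketch of the idea (illustrative only, not SnapRAID's actual code; the function name is made up):

```python
import os

def build_pool(pool_dir, data_drives):
    """Roughly mimic what a symlink-based pool does: mirror each data
    drive's directory tree into pool_dir, with every file represented
    by a symbolic link back to the real file on its drive."""
    for drive in data_drives:
        for root, _dirs, files in os.walk(drive):
            rel = os.path.relpath(root, drive)
            target_dir = pool_dir if rel == "." else os.path.join(pool_dir, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in files:
                link = os.path.join(target_dir, name)
                if not os.path.lexists(link):  # don't clobber existing links
                    os.symlink(os.path.join(root, name), link)
```

A client that follows symlinks sees one merged tree; a client that doesn't just sees tiny link files, which is the player issue that comes up later in the thread.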
 
Yes, SnapRAID added some rudimentary read-only pooling support. It works well for pooling your data for reading purposes. But for adding new data you will still need to write it to a specific drive. I actually prefer it that way, since I can control which drive my data is sent to, and I add new data infrequently enough that it is not a big deal to me to open up the drive I want to write it to.
I really like 'pool' in SnapRAID. It's fast and very simple to create a storage pool wherever I want. But, as you said, the SnapRAID pool is just symlinks, and therefore I couldn't stream movies/TV shows to my HD media players over my home network. It's useful for those who just need a listing of names when the pool is shared, no more, no less. That's why I have to employ StableBit alongside SnapRAID.
 
FlexRAID - This is what I'm inclining toward atm, mostly because a lot of friends have had good experiences with it; it also seems to be a very popular choice on AVS Forum for a pure media server. What worries me is development: the FlexRAID team seems to be putting a lot of effort into NZFS... not sure how FlexRAID will end up out of this.

Hope more people can post some thoughts. I'm not going with as many drives (16 is the plan atm), but it would sure help if more people could post their experiences or suggestions.

FlexRAID runs itself. I leave my media server running 24/7, and I have FlexRAID scheduled to run nightly updates and monthly validates. It's a real set-it-and-forget-it type of software. If development does stop, you just won't get new features, but the software will basically run itself.

As far as FlexRAID and PPUs (aka parity drives), I'd go with however many you're comfortable with given your tolerance for data loss. More PPU drives make it more robust; for 16 drives, I would highly recommend 2-3 PPUs (think "RAID 6", but without the whole-array-failure/everything-lost risk). For 30 drives, depending on how robust you want it, I'd consider 4-6.

You are correct that SnapRAID is limited to dual parity, while FlexRAID can do triple, quadruple, etc.

But if I had 30+ drives (I use SnapRAID), I would simply create two or more dual-parity sets.

Meh, just shell out the money for FlexRAID then, because it'll make everything so much simpler. If you've got 30 drives and $60 for the software to run it all makes you hesitate, I don't know what to say really :p
 
But, as you said, the SnapRAID pool is just symlinks, and therefore I couldn't stream movies/TV shows to my HD media players over my home network.

Huh? Of course you can play video from the snapraid symlink pool. That is just the sort of thing that it is supposed to do and works well.
 
Meh, just shell out the money for FlexRAID then, because it'll make everything so much simpler.

That is debatable. For me, SnapRAID is much simpler to use than FlexRAID. SnapRAID is streamlined, easy to set up, and easy to automate however I like. And it may be better to have two 16-drive data sets with dual parity than one 32-drive data set with quad parity, if the dual-parity setup is more efficient and better tested. Do you know anyone who has thoroughly tested the performance and reliability of FlexRAID when operating with quad parity?
 
That is debatable. For me, SnapRAID is much simpler to use than FlexRAID. SnapRAID is streamlined, easy to set up, and easy to automate however I like. And it may be better to have two 16-drive data sets with dual parity than one 32-drive data set with quad parity, if the dual-parity setup is more efficient and better tested. Do you know anyone who has thoroughly tested the performance and reliability of FlexRAID when operating with quad parity?

It is not better to have 2 parity drives per 16 vs. 4 parity drives per 32 for RAID 4-style snapshot parity. Lose 3 drives in the same 16-drive set and you have lost a disk with no parity left to rebuild it; the other set's parity can't help. A 32-drive array with 4 parity drives could lose those same 3 (or even 4) drives and still be fine. You will have longer rebuild times, though, but the point of RAID is uptime, so just make sure your data is backed up regardless.

As for large FlexRAID arrays, there are examples on the FlexRAID forums. Performance-wise, reads and writes take no hit, since data is not striped across the array; only the parity drives contain parity information. Rebuilds and verifies do take a long time though, roughly 1-1.5 hours per TB.
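Taking that 1-1.5 h/TB figure at face value (it's an anecdotal rate, not a benchmark), the back-of-the-envelope math for a given drive size is simple:

```python
def rebuild_hours(drive_tb, lo_rate=1.0, hi_rate=1.5):
    """Estimated rebuild/verify time range in hours for one drive,
    using the rough 1-1.5 hours-per-TB figure quoted above."""
    return drive_tb * lo_rate, drive_tb * hi_rate

# e.g. a 4 TB drive would take roughly 4 to 6 hours
print(rebuild_hours(4))
```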
 
But, as you said, the SnapRAID pool is just symlinks, and therefore I couldn't stream movies/TV shows to my HD media players over my home network.
Huh? Of course you can play video from the snapraid symlink pool. That is just the sort of thing that it is supposed to do and works well.
Unfortunately, most standalone HD media players do not recognize, and follow, symbolic links. This is a glaring coding glitch that both the Realtek & Sigma Designs core packages were guilty of, and it seems that none (or very few) of their licensees (the player producers) have ever fixed it.

I had seen evidence of this on (standalone) players accessing *nix-based servers, but, it looks like symlinks on Windows-based servers are also a victim (as per Elpee's report).
 
Unfortunately, most standalone HD media players do not recognize, and follow, symbolic links. This is a glaring coding glitch that both the Realtek & Sigma Designs core packages were guilty of, and it seems that none (or very few) of their licensees (the player producers) have ever fixed it.

I had seen evidence of this on (standalone) players accessing *nix-based servers, but, it looks like symlinks on Windows-based servers are also a victim (as per Elpee's report).

You're right. To me, that's a big minus for the SnapRAID pool. :mad:
Another big minus for SnapRAID is that it doesn't allow users to span drives as FlexRAID does. See this.
To me, drive spanning is also a big point in FlexRAID's favor.
 
So how do you pool the storage with SnapRAID or FlexRAID? I want one tree with lots of directories; I don't want to handle separate hard disks manually - that would be unbearable with a lot of data.
 
Another thing I really like about FlexRAID is the proprietary recycle bin that brahim implemented.

It requires that you use the FlexRAID pool, but when it's enabled it protects your array from deletes. What I mean is that deletes will not compromise parity when the recycle bin feature is turned on.

How it works: when a file is deleted from the pool (deleted from anywhere, including over the network), the FlexRAID pool service, rather than deleting the file, moves it into a special protected hidden folder. So if you deleted files on drive A, and drive B died before you updated the parity, the needed files would still be available on drive A to reconstruct drive B from the parity.

The recycle bin is then only cleared and free space is regained after a successful parity update.
 
I much prefer Snapraid to Flexraid for parity. Unfortunately, however, I need the drive pooling from Flexraid. Liquesce is too buggy, and the new pool-lite features of Snapraid are good for some use patterns but not all.

I always thought there was a bug with FlexRAID's pooling and how it respects NTFS permissions, because about 50% of the time I get an insufficient-permissions error when downloading from Chrome to the drive pool. However, it seems to be limited to Chrome only - it doesn't affect other browsers. It's really annoying. :)
 
Another thing I really like about FlexRAID is the proprietary recycle bin that brahim implemented.

It requires that you use the FlexRAID pool, but when it's enabled it protects your array from deletes. What I mean is that deletes will not compromise parity when the recycle bin feature is turned on.

How it works: when a file is deleted from the pool (deleted from anywhere, including over the network), the FlexRAID pool service, rather than deleting the file, moves it into a special protected hidden folder. So if you deleted files on drive A, and drive B died before you updated the parity, the needed files would still be available on drive A to reconstruct drive B from the parity.

The recycle bin is then only cleared and free space is regained after a successful parity update.
Is this feature enabled by default, or do we have to enable it? It's great, right?

I much prefer Snapraid to Flexraid for parity. Unfortunately, however, I need the drive pooling from Flexraid. Liquesce is too buggy, and the new pool-lite features of Snapraid are good for some use patterns but not all.

I always thought there was a bug with FlexRAID's pooling and how it respects NTFS permissions, because about 50% of the time I get an insufficient-permissions error when downloading from Chrome to the drive pool. However, it seems to be limited to Chrome only - it doesn't affect other browsers. It's really annoying. :)
In fact, I tried coupling SnapRAID with the FlexRAID pool (only) via expert mode. No complaints at all, and it looks promising. I was just stuck on SnapRAID's lack of disk spanning. Damn...
 
It is not better to have 2 parity drives per 16 vs. 4 parity drives per 32 for RAID 4-style snapshot parity. Lose 3 drives in the same 16-drive set and you have lost a disk with no parity left to rebuild it; the other set's parity can't help. A 32-drive array with 4 parity drives could lose those same 3 (or even 4) drives and still be fine.

That is obvious, but it is only one component of the issue. I was talking about the entire situation. For example, the multiple snapshot RAID sets setup is superior in some ways, such as restoring a failed disk. If you have one gigantic set, the system will read from all of your drives for hours to restore a disk, but with multiple sets, a restore only needs to read from all the disks in one set to restore a drive. Also, quad parity is not widely used, so the reliability of it is certainly worth considering. And the computational load for restoring from quad parity is significantly higher than for dual parity. That is why I asked if anyone had tested the performance for quad parity. I would not be surprised to hear that it takes twice as long to restore a drive when you are using quad parity.

Anyway, whether to use two sets with dual parity or one set with quad parity is a complicated issue.
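To put rough numbers on the failure-tolerance side of that trade-off, here's a toy binomial model, assuming independent drive failures with a made-up per-drive probability; it deliberately ignores rebuild speed, correlated failures, and code maturity:

```python
from math import comb

def p_data_loss(n, tolerance, p):
    """Probability that more than `tolerance` of `n` drives fail,
    assuming each drive fails independently with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(tolerance + 1, n + 1))

p = 0.05  # assumed per-drive failure probability over some window (made up)

one_quad_set  = p_data_loss(32, 4, p)                 # one set, quad parity
two_dual_sets = 1 - (1 - p_data_loss(16, 2, p))**2    # loss if either set exceeds dual parity

print(f"one 32-drive quad-parity set:  {one_quad_set:.4f}")
print(f"two 16-drive dual-parity sets: {two_dual_sets:.4f}")
```

Under this simplistic model the single quad-parity set tolerates clustered failures better; the counterpoints about rebuild reads and quad-parity maturity are exactly what the model leaves out.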
 
Unfortunately, most standalone HD media players do not recognize, and follow, symbolic links. This is a glaring coding glitch that both the Realtek & Sigma Designs core packages were guilty of, and it seems that none (or very few) of their licensees (the player producers) have ever fixed it.

Thanks for explaining. That is certainly a glaring problem with those standalone players. It makes me wonder why anyone would even want to use them if their programming is so awful (with a bug this bad, I assume there are many other bugs in their programming as well).
 
Is that because Windows symlinks are (back in the XP days anyway) simply text files with a .lnk extension? Most media players won't handle that because they probably expect traditional SMB or NFS shares from a Unix server. Unix uses "real" symbolic links and will never present a text file to the client as one.

I wouldn't call it a bug. Windows links are pretty hacky. With that said, I think there is a way to use a hardlink in Windows... I forget what that's called, but you can have several disks show up as directories in the root of another disk and access them that way. However, I am not sure if file sharing is allowed on that kind of setup - but it probably is?
 
BTW the issue you cite is not a big deal with most standalone players - you'd just have an SMB or NFS share for each disk. And a lot of standalone players have front ends driven by a database so you don't even have to navigate them yourself. Just flip through the list of movies and hit Play.

I am surprised you were not aware of this, JoeComp. They've had this feature for more than half a decade now!

With that said, I'm still pretty old school - I *like* flipping through directories and choosing individual media files to play. The server I use is not windows-based so no worries about silly symbolic links not working.
 
Is this feature enabled by default, or do we have to enable it? It's great, right?

It is NOT enabled by default. I do not know why this is. But it's just a simple boolean value in the FlexRAID config and can be turned on and off at will.
 
I am surprised you were not aware of this, JoeComp. They've had this feature for more than half a decade now!

I'm surprised you feel the need to take a gratuitous personal shot at me. Does what I do or don't know intimidate you that much?

Anyway, I am certainly aware that many media browsers can do their own sort of drive pooling. I have utilized this feature myself in the past.
 
NTFS junction point.

Although I believe that is still just for folders, not links to files.

http://msdn.microsoft.com/en-us/library/aa365006(VS.85).aspx

There are three types of file links supported in the NTFS file system: hard links, junctions, and symbolic links....

In Linux, there are two types of links: symbolic links and hard links. You cannot create a pool with hard links because a hardlink cannot span filesystems (a hardlink just points to an inode).
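The inode distinction is easy to see in a few lines of Python (illustrative only; assumes a *nix-style filesystem where os.link is supported):

```python
import os
import tempfile

# A hardlink is just another directory entry for the same inode;
# a symlink is its own little file whose content is a target path.
d = tempfile.mkdtemp()
orig = os.path.join(d, "orig.txt")
with open(orig, "w") as f:
    f.write("data")

hard = os.path.join(d, "hard.txt")
soft = os.path.join(d, "soft.txt")
os.link(orig, hard)     # hardlink: same inode; cannot cross filesystems
os.symlink(orig, soft)  # symlink: separate inode storing the target path

print(os.stat(orig).st_ino == os.stat(hard).st_ino)   # hardlink shares the inode
print(os.lstat(soft).st_ino != os.stat(orig).st_ino)  # symlink has its own inode
```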
 
Joe, I am actually genuinely surprised! Sorry you took it the wrong way. Just as you were surprised with the media players having poor windows shares support. It's just so weird that most have good NFS support, even though it is not as common nowadays, and many media player owners end up installing NFS servers on their Windows PC to get around SMB issues. Mind-boggling! I am just happy that I don't have to go that far with my setup.

I did think like you at first - "why would I ever use a media player?" - but I gave them a try (Dune Smart H1) and it's great not having to worry about the latest PC drivers breaking something or dealing with custom resolutions and refresh rates. "It just works" <tm>

The media players are great for consistent playback - usually no stutter that can happen on a windows PC with all the stuff that hogs the CPU sometimes.
 
BTW the issue you cite is not a big deal with most standalone players - you'd just have an SMB or NFS share for each disk.
Makes no difference. The bug/oversight is at a level such that it is not precluded by the access-interfaces presented by SMB or NFS.
And a lot of standalone players have front ends driven by a database so you don't even have to navigate them yourself. Just flip through the list of movies and hit Play.
Yes, BUT ...
Most of those db/front-ends are so lame, rigid, ugly, slow, annoying, etc. that many users resort to traditional navigation (and lose access to symbolic link functionality).
And, some/many of those db/front-ends also make the same coding error (with respect to symbolic links).
With that said, I'm still pretty old school - I *like* flipping through directories and choosing individual media files to play. The server I use is not windows-based so no worries about silly symbolic links not working.
Read my lips ...
I had seen evidence of this on (standalone) players accessing *nix-based servers, but, it looks like symlinks on Windows-based servers are also a victim (as per Elpee's report).
"... no worries ..." you say?
And I reply, "Ignorance is bliss." :)

Symbolic links are not silly. It is a very powerful mechanism. But, as is often the case with "powerful mechanisms", it takes expertise and/or effort and/or enlightenment to grasp the capabilities/potential and/or the (implementation) implications of that mechanism.
 
Heh, when I said silly I meant the way windows does it. Yuck! :)

The only front-end on a media player that doesn't make me roll my eyes is 10muse - although it isn't perfect - it requires an ipad to use it properly. I suppose you could say it is the least lame of the ones out there now. And you'll be a few hundred dollars poorer for it. ;)

Guess we see eye to eye on the current state of things.
 
I really like 'pool' in SnapRAID. It's fast and very simple to create a storage pool wherever I want. But, as you said, the SnapRAID pool is just symlinks, and therefore I couldn't stream movies/TV shows to my HD media players over my home network. It's useful for those who just need a listing of names when the pool is shared, no more, no less. That's why I have to employ StableBit alongside SnapRAID.
To be fair, I should clarify the point here:
No, I couldn't play movies/TV shows on my PCH player via SMB streaming over my home network. All the links in the pool showed up as 0 KB.
However, when I set up an NFS server with SnapRAID and shared the SnapRAID pool folder via NFS - boom, I could see all the SnapRAID symlinks in the pool and play them fine on the PCH player. :D
 