Best Storage Solution?

HICKFARM
What is the best way to store media nowadays? I had a RAID 5 array of five 2TB hard drives, but I don't know if it is worth messing with a RAID card and all that anymore with how cheap 3TB and 4TB drives are getting. I store a lot of media that already exceeds 7TB and is still growing.

I really want to keep a server and not just get a NAS. I started using Plex and need the server to transcode all the media. Any advice is appreciated. The last time I did a lot of research was three years ago when I built the 5-HDD RAID 5 array.

EDIT: From what I have seen so far, SnapRAID paired with DrivePool to manage all the drives would be a good solution for my situation. Looking for a nice server case to fit all these drives in; they're hard to find or expensive. Been looking through the 10TB+ thread for case ideas. I will install Windows Server 2012 with virtual machines for Plex and other things to keep my server secure. I'll probably keep a backup image of the main install as well, so it's an easy fix if my OS decides to corrupt itself again.

What do people use to make a backup of an SSD, like a ghost image? That would make it easy to recover in case of failure. I have heard standard ghosting software doesn't play well with solid-state drives.
 
If you are only storing media, and don't care about data corruption of a few bits here and there, you can use Unraid/FlexRAID/SnapRAID/etc. All of them are suitable for media storage. You keep your old individual disks and just add another parity disk. If one disk crashes, you can restore it from the parity disk. No format, no nothing. But it has its drawbacks: if you add a new file, or edit an old file, the parity disk must be updated. That is not done automatically; you need to tell it to do so manually. I don't think you need a RAID card for this solution, just add the software. One advantage is that if you are streaming one movie, only that disk will be active, so the rest of the disks stay spun down. You can use Windows and NTFS.
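
For what it's worth, a snapshot-parity setup like SnapRAID boils down to a small config file plus a manual sync command. A rough sketch (the drive letters and paths here are just placeholders, not a recommendation):

Code:
# snapraid.conf
# parity file on a dedicated disk, at least as large as your biggest data disk
parity P:\snapraid.parity
# keep a couple of copies of the content (hash/index) file
content C:\snapraid\snapraid.content
content P:\snapraid.content
# your existing media disks, left as plain NTFS
data d1 E:\
data d2 F:\
data d3 G:\

After adding or changing files you run "snapraid sync" to bring the parity up to date; until you do, the new files are not protected.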

If you care about data corruption, and want one big data pool to hold all your data instead of separate disks, you can use ZFS RAID. ZFS has very heavy data protection: it detects and protects against all kinds of data corruption, with automatic self-repair, etc. Enterprise storage servers costing millions use this. It scales very well, powering large 55-petabyte supercomputer installations with 1TB/s read/write speeds (with Lustre). This is serious stuff. Search here for ZFS threads; there are tons of them. You don't need any hardware RAID card, just use FreeNAS or NAS4Free or FreeBSD or Solaris or Linux + ZFS. You cannot use Windows. One disadvantage is that all disks are active when you stream a movie. ZFS has very heavy data protection; read the Wikipedia article on "Data integrity".
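
To give you an idea, a basic ZFS setup is only a couple of commands on FreeBSD or Linux (the disk names below are hypothetical):

Code:
# create a single-parity raidz pool from four disks (roughly the RAID 5 equivalent)
zpool create tank raidz /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3
# make a filesystem for the media
zfs create tank/media
# check pool health, and scrub periodically to verify every block against its checksum
zpool status tank
zpool scrub tank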

Either way, you should use software RAID today and leave hardware RAID behind, as it is becoming obsolete.
 
I might have to look into software RAID. I don't know about getting into Linux to set it up, though. I like to use Windows for the ease of sharing with my other PCs in the house and with my roommates, who are technologically challenged. Windows also makes it easy for me to remote desktop into the server.

I know all of this can be done within Linux as well, but I am sure it is much more complicated for someone like me who has only dabbled in Linux here and there.
 
Abandon hardware RAID, sell the card.

Software RAID can be much safer (ZFS) and scales way better (ZFS). And it is free. And open source. But you must use a Unix derivative.

If you are going to use Windows, then check out Unraid/SnapRAID/FlexRAID/etc. JoeComp knows more about these solutions. Maybe he can help you with a good recommendation.
 
Which Windows solution would be the best? What happens if Windows crashes? Do I just reinstall it and the program automatically detects the RAID again?

Since I already bought this Adaptec 51645 before starting this thread, I might as well use it.
 
Then you should probably go with a hardware RAID solution, if you want to use it.
 
Yeah, that is my plan as of now. I just bought three 4TB Seagate drives in Newegg's sale today.

So with this new card I can set up two separate RAID 5 arrays for the time being.

Can you convert a RAID 5 array into a RAID 6, or do you have to start from scratch to make it a RAID 6 array? I want to mess with all the options of this new card before setting up my array with data, so I know how to rebuild an array without all my data on the line.
 
FlexRAID and SnapRAID both protect against data corruption. They can detect it through hashes on the data and repair it through parity. They only spin up one disk at a time, can use already-full disks, and can use disks of different sizes provided the parity disk is at least as large as your largest data disk. They can have multiple parity disks to protect against multiple disk failures, run on Windows or Linux, and if you have more failures than parity disks you only lose the data on the failed disks, not the whole array. You can stop using them at any time, take any disk, put it in any computer, and all data on the disk is intact and usable. Nothing special to do; it's just a disk with data on it.
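
For anyone curious, the day-to-day workflow for that hash/parity protection in SnapRAID is just a handful of commands (sketch only):

Code:
snapraid sync     # update parity and file hashes after adding or changing media
snapraid scrub    # re-read part of the array and verify it against the stored hashes
snapraid status   # report the array state, including any detected errors
snapraid fix      # rebuild bad or missing files from parity, e.g. after a disk failure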

SnapRAID is free, FlexRAID is paid. I've used both for quite some time and have never lost a single file, ever. 16TB and counting.

I'll be happy to help anyone who's interested in setting one up. It's very easy and well worth the time.
 
ZFS is extra, unnecessary complexity and risk if you're just storing home media files that almost never change. ZFS also has no one-disk-at-a-time online capacity expansion, and if you lose more disks than parity disks, you lose everything. Like hardware RAID, ZFS has advantages where uptime and time to recovery are important, like business and enterprise scenarios, but it's less ideal for home media unless you like futzing around in the Linux command line, or buy into the notion that running ZFS carries some inherent badge of distinction. And the GUI frontends like napp-it are great, but if things go wrong, troubleshooting will be a command-line affair.

SnapRAID is where the new thinking is in this area. It is purpose-built for home media storage with front-to-back data integrity features, including full hashing and bitrot detection, and even instant dupe detection. Plus it supports more parity disks than hardware RAID or ZFS (hardware RAID maxes out at 2, ZFS maxes out at 3, SnapRAID maxes out at a whopping 6 parity disks).

As the previous poster mentioned, SnapRAID also works on plain old NTFS-formatted disks, with no black-magic abstraction layer like ZFS, in the event you have to perform hard data recovery on a disk or move it to another PC. It also works like a pseudo-backup in the sense that you can undelete files (though like hardware RAID and ZFS, it's not a true replacement for a 1:1 backup and not recommended as such). The only real downside is that SnapRAID is not ideal for protecting volumes where lots of small files are constantly changing, like say your Windows system drive (C:\). It's still possible to do so, but it's not really where snapshot RAID excels.
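
As a concrete example of that pseudo-backup angle, undeleting a file in SnapRAID looks something like this (the file path is hypothetical, and it only works for files that existed at the last sync):

Code:
# restore only files that have gone missing since the last sync
snapraid fix -m
# or restore one specific file
snapraid fix -f "movies/SomeMovie.mkv"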

Lastly, SnapRAID is more power efficient and there's less wear and tear on your disks. With striping-based RAID like ZFS/hardware RAID/software RAID, all disks have to be spinning to read or write even just one file, but with SnapRAID, only the disk that contains, say, the movie you want to watch needs to spin up while the rest can remain spun down.
 
ZFS is extra, unnecessary complexity and risk if you're just storing home media files that almost never change. ZFS also has no one-disk-at-a-time online capacity expansion

It should be noted that because of the flexibility of ZFS, this is not completely true. What you cannot do is add another disk to the "RAID". For instance, if you set up 3x2TB in RAID 5, you cannot come back later and make it 4x2TB. However, what you can do is add additional drives to the overall "pool". Traditionally you would add another set of 3x2TB or whatever you want. But you could, if you wanted, tell ZFS to duplicate data on one drive and add that drive to the pool. You can even just add drives as independent single disks (JBOD style). That's not ideal, but ZFS is flexible enough.

But for me, being able to "add another disk" is only useful for 1 or 2 drives when you're talking parity, because you are most likely going to start out with a config that gets some benefit from parity rather than mirroring. So maybe something like 3x4TB in RAID 5. You can add another disk, no worries. And you can add a 5th disk. But after 5, you will probably start getting worried about how good that parity is for you, and thus you have to start a new pool. Well, the advantage of ZFS is that the new pool just gets absorbed by the original one, so you have two separate RAID arrays, but still one big pool.
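
In command terms, that "absorbed by the original pool" step is just adding a second raidz vdev to the same pool, rather than adding a single disk to the existing one (device names are hypothetical):

Code:
# original pool: one 3-disk raidz vdev
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
# later: grow the pool by adding a whole second raidz vdev (three more disks, not one)
zpool add tank raidz /dev/sde /dev/sdf /dev/sdg
# the pool now stripes across both vdevs but still shows up as one big pool
zpool status tank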

Now, that being said, if the OP wants Windows, then forget about all the ZFS stuff. Stick with hardware RAID. It's time-proven and the error rate is very, very low for normal home usage.
 
It should be noted that because of the flexibility of ZFS, this is not completely true.

Actually, it is completely true. Read again what you quoted. "ZFS also has no one-disk-at-a-time online capacity expansion". Also, he was obviously talking about some sort of redundant RAID, not simple spanning or pooling without redundancy.

The examples you gave are more than one disk at a time, or they were not capacity expansion (mirroring), or they were not redundant.

DPI is absolutely and completely correct.

And the rest of the example is showing the disadvantage of ZFS. When you get to too many data drives, with SnapRAID you simply add a single parity drive to the array. ZFS has no such ability.
 
Actually, it is completely true. Read again what you quoted. "ZFS also has no one-disk-at-a-time online capacity expansion". Also, he was obviously talking about some sort of redundant RAID, not simple spanning or pooling without redundancy.

The examples you gave are more than one disk at a time, or they were not capacity expansion (mirroring), or they were not redundant.

He did not specifically say parity, and many people use mirroring as a legitimate form of RAID so we can't discount it. And that's why I said it is flexible in how it can be used.

I wouldn't do it, you wouldn't do it, but it's an option.

That's not ideal, but ZFS is flexible enough.

EDIT: The actual setting is the "copies=2" property, which is set on the dataset/filesystem when you create it (or later with zfs set), not on the vdev itself.
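
Roughly, setting it looks like this (the pool and dataset names are just examples):

Code:
# keep two copies of every block on this dataset, even on a single disk
zfs create -o copies=2 tank/important
# or change it on an existing dataset (only affects data written afterwards)
zfs set copies=2 tank/important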
 
It should be noted that because of the flexibility of ZFS, this is not completely true.

You're right, but the non-parity modes of ZFS seemed irrelevant in responding to a guy asking about home media storage who also had some past familiarity with hardware RAID. I tried to keep it apples to apples and assumed something like raidz2 went without saying, but I should have clarified.
 
And the rest of the example is showing the disadvantage of ZFS. When you get to too many data drives, with SnapRAID you simply add a single parity drive to the array. ZFS has no such ability.

Another key differentiator for me when adding a disk to a SnapRAID raidset - whether it be an empty drive for data, a drive with data already on it, or an empty drive to serve as an additional parity disk - is that the existing data and parity disks are never placed in a degraded or vulnerable state during the integration (expansion) process. This is unlike striping-based RAID, where OCE is a destructive process with a moving point-of-no-return line as it writes new stripes across all disks, and your existing data is slightly more vulnerable until the process completes.

If you've ever had the experience of trying to expand a hardware raidset and the operation bombed out halfway through because, let's say, the drive you were adding to the raidset was bad, and suddenly your raidset status was showing as "degraded" while you wondered what to do next, you know what I'm talking about. It is a potential headache you avoid with JBOD+parity, which goes to my point about striping introducing unnecessary risk in this usage scenario, due to the increased interdependence created between disks and the resulting domino effect when failures arise.
 
He did not specifically say parity, and many people use mirroring as a legitimate form of RAID so we can't discount it. And that's why I said it is flexible in how it can be used.

I am not discounting mirroring. But mirroring is not one-drive-at-a-time capacity expansion, as I already said. You need to add two drives at a time in a mirror if you want to expand capacity by the amount of one drive. Or if you use multiple copies on a single drive, then you are not expanding the capacity by one drive at a time; you would be expanding capacity by half a drive at a time (or less if you use n-way mirrors with n > 2).
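
In other words, growing a mirrored pool means buying drives in pairs, e.g. (hypothetical device names):

Code:
# add another two-disk mirror vdev: usable capacity grows by one drive's worth,
# but you had to attach two drives to get it
zpool add tank mirror /dev/sdh /dev/sdi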
 
Yeah, but if you have a backup of your data (like you should anyway), then you can add 1 disk at a time to a ZFS RAIDZ array.

Just destroy the pool and recreate it with the additional disk and copy your data back from backup.
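
For completeness, that offline approach is essentially the following (device names and the backup path are placeholders, and the pool is gone until the copy back finishes):

Code:
zpool destroy tank
# recreate with the same disks plus the new one
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
# copy everything back from the backup
rsync -a /mnt/backup/ /tank/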
 
Yeah, but if you have a backup of your data (like you should anyway), then you can add 1 disk at a time to a ZFS RAIDZ array.

Just destroy the pool and recreate it with the additional disk and copy your data back from backup.

If your backup is the original optical discs, as is common for a media fileserver, then your proposal is a huge waste of time and resources compared to SnapRAID, where you just add a single data or parity drive as needed.

Also note that the original claim was about "online capacity expansion".
 
For simple flexibility and scalability I would say SnapRAID beats ZFS and a standard RAID 5 easily. What everyone is talking about is setting up a system and later wanting the ability to easily add drives and increase storage while maintaining redundancy.

How does any ZFS example that's been given here fit these criteria? Add multiple RAIDs? Destroy pools and restore from backup? Add single disks without redundancy? All of that sounds more like reasons not to go with ZFS. Honestly, no one has really argued FOR ZFS in my opinion. All the examples given so far seem overly complicated, much more involved to make work, and more than a bit unsafe for the protection of your data. I haven't heard anything yet that seems to be of real benefit for this situation regarding ZFS.

With SnapRAID you start out with X number of drives and, let's say, a single parity drive. You add more drives; the data still has redundancy. Double X, just add another parity drive. Still redundant, but now with double parity in case of a double drive failure. Triple X, add another parity drive, and now you're protected against 3 drive failures. Straightforward, easy, uncomplicated, and your data stays protected.
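
In config terms, that growth path is just extra parity lines in snapraid.conf (recent SnapRAID versions; the paths are placeholders):

Code:
# one parity disk protects against a single drive failure
parity   /mnt/parity1/snapraid.parity
# add a second and third parity disk as the number of data disks grows
2-parity /mnt/parity2/snapraid.parity
3-parity /mnt/parity3/snapraid.parity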
 
Well, that wouldn't be a very good backup if the restore time is so huge and/or takes a lot of work.

I'm just saying it's about balance and about priorities.

For example, I have all the data on my home media server backed up to offline drives that I keep in a drawer in a different building.

Because to me, my media collection is important enough, and enough work has gone into it, that it's worth having a backup. Recreating it would take months or years. Not to mention the added ability to easily change between data storage systems in the future and resize them as my needs change.

I am in no way saying SnapRAID is bad, it's great and I recommend it all the time. I used to use FlexRAID myself. But I decided to switch to ZFS and I like the change personally and think it was worth it.
 
Well, that wouldn't be a very good backup if the restore time is so huge and/or takes a lot of work.

Now you are just being ridiculous. The discussion was about a particular statement that was made in this thread: "ZFS also has no one-disk-at-a-time online capacity expansion". Someone came in and said it was not true, but gave no valid counter examples. Then you say that you can destroy the pool and create a new one. But that is not "online", since destroying the pool takes your data offline. And your backup is obviously "not very good" by your own strange standards, since to restore you have to travel to another building, pick up all the drives, travel back (risking damaging your backup on the trip), then spend hours copying all the data over to a new pool, all the while you have NO BACKUP -- single point of failure.

As compared to SnapRAID, where you just add a single data or parity drive and do a sync, while all your data is intact and accessible, and no time or effort is wasted restoring from a backup, and your data is protected the whole time.
 
Well, that wouldn't be a very good backup if the restore time is so huge and/or takes a lot of work.

I'm just saying it's about balance and about priorities.

For example, I have all the data on my home media server backed up to offline drives that I keep in a drawer in a different building.

Because to me, my media collection is important enough, and enough work has gone into it, that it's worth having a backup. Recreating it would take months or years. Not to mention the added ability to easily change between data storage systems in the future and resize them as my needs change.

I am in no way saying SnapRAID is bad, it's great and I recommend it all the time. I used to use FlexRAID myself. But I decided to switch to ZFS and I like the change personally and think it was worth it.


I'm not saying don't keep backups. I understand that, but this is about simply making a server with the ability to easily add drives and keep that data protected in some way. All the arguments presented so far for ZFS seem like both sides are giving good reasons against ZFS. I've heard nothing that remotely makes it sound better.
 
I run ZFS in a fairly simple FreeNAS environment. I originally started with 3x1TB disks in a single pool. When the time came to upgrade, I simply bought three 2TB disks and replaced each disk in the pool individually. Worked like a champ, and there is absolutely no backup/restore/destroy-pool stuff going on. The reason I did it this way is that I wanted all of the drives in my pool on the same warranty schedule.
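
For anyone wanting to do the same, that in-place upgrade is the zpool replace / autoexpand route (device names are hypothetical):

Code:
# let the pool grow automatically once every disk in the vdev has been replaced
zpool set autoexpand=on tank
# swap each old disk for a bigger one, waiting for the resilver to finish each time
zpool replace tank /dev/sdb /dev/sde
zpool status tank    # wait for the resilver to complete, then repeat for the next disk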

I have never used Snapraid myself, so I can't comment on that, but ZFS is not complicated at all.
 
I currently have a small storage server set up with Windows Server 2012 R2 with 5x2TB drives in a single Storage Pool. On a separate host, I run a Windows Server 2012 R2 VM as a Plex server. I map a network drive to the storage server from the Plex server and add all of my media. It works rather well.

This can all be done with Windows 8 as well.
 
All the arguments presented so far for ZFS seem like both sides are giving good reasons against ZFS. I've heard nothing that remotely makes it sound better.

Agreed. For a media fileserver, there is only one* way that using ZFS makes sense -- format each individual drive with ZFS and then use SnapRAID to add parity. And that only makes sense for people who are already set up to use ZFS. Most people thinking of setting up a media fileserver most likely already have drives formatted with NTFS or some common linux filesystem, in which case switching to ZFS does not make sense (just take those drives as data drives and use SnapRAID to add parity and checksums).

* Unless you count the occasional eccentric who wants to tinker with a ZFS RAIDZx media fileserver just for the hell of it
 
I run ZFS in a fairly simple FreeNAS environment. I originally started with 3x1TB disks in a single pool. When the time came to upgrade, I simply bought three 2TB disks and replaced each disk in the pool individually. Worked like a champ, and there is absolutely no backup/restore/destroy-pool stuff going on. The reason I did it this way is that I wanted all of the drives in my pool on the same warranty schedule.

I have never used Snapraid myself, so I can't comment on that, but ZFS is not complicated at all.

Your example there is just a nonstandard RAID with removal and replacement of the disks, something that SnapRAID, FlexRAID, and other nonstandard RAIDs can all easily do. Nothing ZFS-specific. It could be NTFS or ext4 or any other filesystem in that example and they'd all work the same way.

My point is that previous posts are talking about ZFS like it's the solution to the original question, and by those examples it's not; it's more complicating than problem-solving. It really has nothing much to do with the original question, as your own example demonstrates. ZFS has no real part in what you did other than that it's there; it doesn't make a difference in the function of what you described.
 
Agreed. For a media fileserver, there is only one* way that using ZFS makes sense -- format each individual drive with ZFS and then use SnapRAID to add parity. And that only makes sense for people who are already set up to use ZFS. Most people thinking of setting up a media fileserver most likely already have drives formatted with NTFS or some common linux filesystem, in which case switching to ZFS does not make sense (just take those drives as data drives and use SnapRAID to add parity and checksums).

* Unless you count the occasional eccentric who wants to tinker with a ZFS RAIDZx media fileserver just for the hell of it

Thank you very much. My point made even more clear.
 
I currently have a small storage server set up with Windows Server 2012 R2 with 5x2TB drives in a single Storage Pool. On a separate host, I run a Windows Server 2012 R2 VM as a Plex server. I map a network drive to the storage server from the Plex server and add all of my media. It works rather well.

This can all be done with Windows 8 as well.

Does that setup offer any parity or other redundancy on your data? Just wondering.
 
SnapRAID is a simple and free solution for those less willing to deal with sophistication. At a simple level, it resembles a RAID 4 configuration (assuming you start off with similar-sized drives).

One of the supposed benefits of SnapRAID is only losing data on the drive that fails, as opposed to losing the entire array if one more drive than the number of parity drives fails in a ZFS configuration. Whether that matters depends very much on whether you value uptime over the probability of having that many concurrent drive failures.

ZFS offers raidz1, raidz2, and raidz3, depending on how risk-averse you are and how comfortable you are with concurrent drive failures.

My opinion: if you are comfortable getting into more complex solutions, get ESXi and run FreeNAS while passing through your RAID controllers. Then set up a Windows configuration for your day-to-day needs, passing through the graphics card and other hardware you might require.
 
Now you are just being ridiculous. The discussion was about a particular statement that was made in this thread: "ZFS also has no one-disk-at-a-time online capacity expansion". Someone came in and said it was not true, but gave no valid counter examples. Then you say that you can destroy the pool and create a new one. But that is not "online", since destroying the pool takes your data offline. And your backup is obviously "not very good" by your own strange standards, since to restore you have to travel to another building, pick up all the drives, travel back (risking damaging your backup on the trip), then spend hours copying all the data over to a new pool, all the while you have NO BACKUP -- single point of failure.

As compared to SnapRAID, where you just add a single data or parity drive and do a sync, while all your data is intact and accessible, and no time or effort is wasted restoring from a backup, and your data is protected the whole time.

I don't really see how my backup is bad by my own standards... All my standards said was that the backup shouldn't take a lot of time and a lot of work to restore from. Which are both relative things.

You said the optical discs of the original media are the backup. It would take a very long time for me to re-rip hundreds of audio CDs and hundreds of Blu-ray movie and TV discs, then encode each movie to a smaller file, taking several hours per movie, and also split out the TV episodes from the TV show discs and encode and re-organize it all. Like I said, it would take months or even a year to restore all my data and hundreds of hours of my time.

It's far cheaper for me to keep the data in an offline mirrored backup, which I update regularly, since my time, like most people's, is not worthless. Restoring is as easy as picking up the disks, plugging them in, and copying them to the new array. The time it takes is limited by the disk speeds, but for my data it's probably about a day.

So my backups are a single point of failure only during the window in which I am restoring, so for a day or so. If that counts, then for a SnapRAID array the data is always at a single point of failure. Unless you count the optical discs as a backup, in which case I have two backups, so I only have a single point of failure when my main copy and my primary backup are both destroyed at the same time.

I agree that destroying and recreating the pool to add a disk is not what people would typically do and is not a very good idea. It's not something I actually do; I just said it's something that could be done, that's all. I grow my array by replacing the disks with larger disks, like someone already mentioned. The disks that I replace are where I get my backup disks from, so it works out really well IMO.

I guess I jumped into this thread at the wrong time... I only just mentioned a way that one could grow a ZFS array one disk at a time. Not trying to make any comparison to SnapRAID or argue over which one is better at what or which one is better for the OP's solution.

I don't want to ever tell someone what they should or shouldn't use. I want them to know what I do and what I found that works well and general information about all the options out there and I want them to make their own decision of what they choose themselves.
 
Like I said, it would take months or even a year to restore all my data and hundreds of hours of my time.

Which is another reason why ZFS RAIDZx is a terrible choice for a home media fileserver.

With SnapRAID, if you have 2 parity drives and are unlucky enough to have 3 drives fail at the same time, then you only need to restore the data on the failed data drives.

With ZFS RAIDZx, you have to restore your entire media collection.

There are many reasons why ZFS is a terrible choice for a home media fileserver.

I agree that destroying and recreating the pool to add a disk is not what people would typically do and is not a very good idea.

So your prior post was trolling. Well done.
 
Which is another reason why ZFS RAIDZx is a terrible choice for a home media fileserver.

With SnapRAID, if you have 2 parity drives and are unlucky enough to have 3 drives fail at the same time, then you only need to restore the data on the failed data drives.

With ZFS RAIDZx, you have to restore your entire media collection.

There are many reasons why ZFS is a terrible choice for a home media fileserver.


So your prior post was trolling. Well done.


If you can unintentionally troll, then yes? I say it's not trolling because I believe that to troll you have to be doing it intentionally. I was just talking about the subject at hand. If it was trolling, it wasn't intentional. Along the lines of The Big Lebowski: "Donnie, you're out of your element!"

You don't have to explain how snapshot RAID works. I've used both SnapRAID and FlexRAID before for a long time.

You can talk to me all you want about the differences between SnapRAID and ZFS, but at the end of the day, IMO, I am a regular guy with a regular home media server. I used to use snapshot RAID; eventually I was looking for something more powerful for my normal home media server. I moved to ZFS and it's been working out great. Do I like it? Yes. Do I like it more than SnapRAID plus some pooling software like AUFS? Yes. Would I do it again? Yes. Would I recommend it for a possible home media server? Yes. For everybody? No, of course not; that would be silly. Nothing is for everyone.

My story is a real story, just as real as anyone else's. These are real cases from a real user (me), and my opinions are JUST as valid as yours, so STOP trying to convince me your opinions are somehow more valid than mine... because that would be trolling...

I've used both, and ZFS is working better for me and makes me happier in the end. I'm assuming you have used both before and you find SnapRAID works better. Neither of us is wrong... That's how the world works. People have different experiences and different opinions. I decided to share mine and now I'm being condemned for doing so... What kind of world is it where I can't share my experience and opinion? I thought that was the point of the forum.

Note that I never told anyone to do anything or tried to claim one thing was better than another, just what I had done (and liked) or what theoretically could be done. It's called contributing information to a discussion, nothing more.
 
Thanks for the comments, guys. I will have to look more into this SnapRAID option. Sounds like that is the perfect solution for my situation. I have three 4TB drives coming in the mail to back up my existing files from my RAID 5 array.

I am still leaning towards hardware RAID at the moment since I have that Adaptec 51645 just sitting around. But I plan on consolidating all my drives into one array, and software RAID seems like the solution for that. SnapRAID seems to be the route I would want to go; I'm just wondering if streaming media through my house would lag, like a situation where I am watching 2-3 different 1080p TV shows that are all stored on one drive in the array. I definitely want to stick with Windows for now, no ESX or other complicated stuff. My other question with SnapRAID is about the parity drives. I have 5x2TB, 3x1TB, and 3x4TB drives. So if I want 2 drives for parity, would both of them have to be 4TB drives?


I currently have a small storage server set up with Windows Server 2012 R2 with 5x2TB drives in a single Storage Pool. On a separate host, I run a Windows Server 2012 R2 VM as a Plex server. I map a network drive to the storage server from the Plex server and add all of my media. It works rather well.

This can all be done with Windows 8 as well.

Why do you run Plex on a virtual machine instead of just on the host? To avoid having to reboot the server as much as possible?
 
It's called contributing information to a discussion, nothing more.

It is called trolling when you post something that you know is a bad idea and should not be done just to make a pointless argument.

STOP accusing me of trying to convince you that my opinions are more valid than yours when I have done nothing of the sort. I was just discussing SnapRAID and ZFS before you tried to make this personal. I have no comment on your "opinions". STOP your trolling and false accusations.

The fact is that ZFS RAIDZx is a terrible choice for a home media server. This has been explained time and time again in this forum, and in this very thread, where no advantages of ZFS RAIDZx for a home media fileserver have been mentioned.
 
Thank you, so at least I know I'm not trolling then since the things I said are not something I know to be a bad idea. I mean, it's how I do things myself and it works well in my experience compared to other things I've tried, so to me it's not a bad idea.

I guess you are referring to the statement I made about being able to expand a ZFS array by 1 disk by destroying and recreating the pool. I've already clarified that that was merely a hypothetical, and I never suggested anyone do it nor suggested it was a good way to expand a pool. So if that came across as trolling, I'm sorry. I certainly wasn't thinking that when I wrote it.

"The fact is that ZFS RAIDZx is a terrible choice for a home media server." That statement is not entirely a fact as it stands since it doesn't say for whom it applies to, so it must apply to every single person with a home media server. Well I actually have a home media server and I have actually used 3 types of RAID now including snapshot raid and zfs and zfs has worked best for me out of the 3. So you really need to add "for most people" to the end of that statement for it to be considered a fact.

Not everyone has the exact same use from a media server or demands the same performance and features, so one solution can't possibly always be the best for everyone.
 
Thank you, so at least I know I'm not trolling then since the things I said are not something I know to be a bad idea.

This contradicts your previous post where you wrote:

"I agree that destroying and recreating the pool to add a disk is not what people would typically do and is not a very good idea."

Which contradicts the suggestion that you made in your prior post:

"Then you can add 1 disk at a time to a ZFS RAIDZ array. Just destroy the pool and recreate it with the additional disk and copy your data back from backup."

The fact remains that ZFS RAIDZx is a terrible choice for a home media server FOR ANYONE.
 
I thought that clarifying that something I said before was a bad idea would make it not trolling anymore. I guess not, sorry. Maybe it would be better to delete the statement rather than clarify it next time. I just don't typically like deleting things I've written before.

I just like to deal with hypotheticals too much, I guess (I like to ask "what if?"), and I write things almost as a question for discussion before finally deciding whether or not it's actually a good idea or a bad idea. Or sometimes I might not even know whether something is a bad idea at first, which is why I might bring it up, so insight can be added and we can work out whether it's good or bad.

I think this is a classic example of how I tend to take things literally. Like, "you can't expand a ZFS array by 1 disk at a time." Well, technically you can, my brain said. But whether or not that was a good or bad idea never even came into my mind as part of the statement at that point in time. Later, when it came into question, I did think on it more and clarified that it was indeed a bad idea.

Again, I never intended it to be trolly. I hate trolls and the last thing I'd want to do is troll :eek:
 
"The fact is that ZFS RAIDZx is a terrible choice for a home media server."

I see this written by JoeComp so many times it really should be his .sig - to save him typing time. :D I think it's a form of OCD - which isn't a bad thing, as anyone dealing with storage needs to have some OCD (Asperger's?) to keep on top of things.

Basically, walk on eggshells when dealing with him. :p

One day I'll do the SnapRAID + ZFS thing - I really don't want to deal with Windows drive letters or weird-assed NTFS mount point stuff! Although perhaps SnapRAID has the ability to bypass all that with well-formed drive path links?

But then again I do run iSCSI and make regular use of snapshots and run some VMs with a sprinkle of home automation.
 