Inexpensive dedicated storage without ZFS?

I read every word. A made no sense; B and D assume we're all made of money, or should be, and that therefore huge expenditures for storage we don't need at the moment are trivial; D also states that if it's not trivial for you then you're poor and don't deserve storage; and with C your argument is that Newegg shipping sucks, to which I say no shit, Sherlock. That's why the official store of [H] is Amazon.

A makes complete sense if you know English. The person who responded to me after you seemed to figure it out. Again, actually read what I said and come up with valid reasons why my statements aren't valid.
 
A: You can have a 6 drive RAID6 and a hot spare, run out of room, and go buy 1 more drive to expand to a 7 drive RAID6. If something dies you have a hot spare. Doesn't matter how many drives you have, having a spare drive is an option. If you have a 6 drive RAID6 and buy 6 new drives to create a 2nd RAID6 array to add to your zfs pool and one drive fails, you are in the same position you describe, despite the fact you bought multiple drives. Doesn't matter how many drives you buy at once, having a spare drive is a separate thing.
A hot spare is an extra drive now, isn't it? Good god, if you need 6 drives and you only buy six you aren't buying extra. You are only buying what you need in the moment with absolutely no planning at all. If you need six and you buy seven, that's buying extra.

To top it all off, let's say you buy a 1TB drive today, then next year 2TB drives become the sweet spot. Well, if you really want to save money you'll buy the 2TB drive. Do you know what will happen if you install a 2TB drive in a RAID with a whole bunch of 1TB drives? That's right, you've basically wasted 1TB, because the array isn't going to use the extra capacity until all of the drives are at the higher capacity. I guess you'll just have to buy more than one. OH NOES!!!

B: Makes no sense. If you don't need the space, it makes no sense to pay for it. Just because prices can go up doesn't mean they will. Overall, prices go down over time. Even if they do not, if I need space, $160 gets me a 4TB drive, which might hold me over for another year. Either spend $160 now and $150-$170 a year from now... or spend $320 today. No reasonable logic exists to spend $320 today when it's not needed, much less spending $1200+ today for 8 new drives when I only need a few TB extra.
What you are advocating is that people shouldn't plan ahead or buy in bulk. Are you sure that what I said makes no sense?

C: I agree it's just a nice perk, but only really useful when you buy like 8-10 drives. You need one more drive for $150, but instead buy 8 for $1200 for better packaging?
No, like I said, I plan ahead and I don't like returning stuff. If you want to deal with it, deal with it. If you don't, don't. C: is in a larger context with the fact that you'll need to buy more than one drive anyway for a whole host of reasons.

D: That's stupid. When I was in school I always bought one drive at a time. And shit, even now, making 6 figures, I hate having to buy multiple drives for zfs. Buying HDD space is so damn boring. Lots more fun stuff to buy.
Not really. Not if your data is important to you. Your argument seems to be that there are other fun things to buy instead of hard drives. Well, I don't have an argument against that except we aren't talking about all of the other fun things someone can buy. We are talking about file servers.
 
Is there a comparison chart somewhere that lists out all the various design issues and compares all the different flavors that we all have been discussing vehemently here?

I was being snarky, but I think you know that. :)

But seriously - I am not sure. I may have seen something like a chart a while ago but for the life of me I cannot remember when and what.
 
A hot spare is an extra drive now, isn't it?

A spare is something you already have on hand, you buy it once. When you want to expand your array you don't need to buy it again, you can buy just one drive, assuming you're not using zfs.

Good god, if you need 6 drives and you only buy six you aren't buying extra. You are only buying what you need in the moment with absolutely no planning at ALL. If you need six and you buy seven, that's buying extra.

The point was, a spare is something you already have on hand, or do not, but either way it's totally separate from whether you expand your array by 1 drive or add another raid set.

What you are advocating is that people shouldn't plan ahead or buy in bulk. Are you sure that what I said makes no sense?

I do plan ahead; I know I need X TB/year, so I buy that. If I know I won't need space for 6 months, there's no reason to buy it now. And buying in bulk makes sense when you get some sort of discount. If you don't, you're just prepaying for something you don't yet need.

No, like I said, I plan ahead and I don't like returning stuff. If you want to deal with it, deal with it. If you don't, don't.

Having a spare on hand fixes this issue, well, I mean it fixes the problem of being down a drive while doing an RMA. The ability to expand an array by one drive vs having to add another raid set has nothing to do with having a spare on hand. Not sure how buying extra drives makes you not have to return stuff, unless when a drive fails you just toss it and take the loss.

Not really. Not if your data is important to you. Your argument seems to be that there are other fun things to buy instead of hard drives. Well, I don't have an argument against that except we aren't talking about all of the other fun things someone can buy. We are talking about file servers.

Data being important has nothing to do with expanding an array one drive at a time vs having to buy another entire raid set to add to a pool. Your data is equally safe and should be backed up anyway. The only difference between these two is the economics of buying small amounts of space as needed vs buying in large blocks up front. More money up front is less money on hand.
 
A hot spare is something you already have on hand, you buy it once. When you want to expand your array you don't need to buy it again, you can buy just one drive.
A hot spare isn't mandatory; it is, in effect, extra.

Spare Definition
Dictionary.com
spare [spair] verb, spared, spar·ing, adjective, spar·er, spar·est, noun
verb (used with object)

10. to have remaining as excess or surplus: We can make the curtains and have a yard to spare.


The point was, a spare is something you already have on hand and separate from whether you expand your array by 1 drive or add another raid set.
Again, a hot spare isn't mandatory; it is, in effect, extra. You don't need it for RAID 0 - 100. You can stand up RAID without it. It is by definition more than what's needed. It is extra.

I do plan ahead; I know I need X TB/year, so I buy that. If I know I won't need space for 6 months, there's no reason to buy it now. And buying in bulk makes sense when you get some sort of discount. If you don't, you're just prepaying for something you don't yet need.
Most of the bulk you'll buy comes with a discount. For example, this bulk pack of 50 (and I'm not recommending you buy a 50-pack; there are smaller 10-packs) comes in at $59.36 per drive. Buying it as a single is $79.99. There's your discount. That discount is larger than if you waited a year. It's probably larger than if you bought it anywhere else that's reputable.
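
For what it's worth, here's a rough sketch of that discount in code, using only the per-drive prices quoted above (the pack size and prices are the figures from this post, not a current listing):

#include <stdio.h>

int main(void)
{
    /* Prices quoted in the post above (illustrative figures, not a live listing). */
    const double bulk_price_per_drive   = 59.36;   /* 50-pack price, per drive */
    const double single_price_per_drive = 79.99;   /* one-at-a-time price      */
    const int    drives                 = 50;

    double bulk_total   = bulk_price_per_drive * drives;
    double single_total = single_price_per_drive * drives;
    double savings      = single_total - bulk_total;

    printf("bulk:   $%.2f\n", bulk_total);                       /* $2968.00 */
    printf("single: $%.2f\n", single_total);                     /* $3999.50 */
    printf("saved:  $%.2f (%.1f%%)\n", savings,
           100.0 * savings / single_total);                      /* $1031.50 (25.8%) */
    return 0;
}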

Having a spare on hand fixes this issue. The ability to expand an array by one drive, vs having to add another raid set has nothing to do with having a spare on hand.
Again, if you are buying a spare you are, in effect, buying more than you need. You are buying extra. Just because the effect of buying an extra hard drive is beneficial doesn't mean you can recreate the definition.

Data being important has nothing to do with expanding an array one drive at a time vs having to buy another entire raid set to add to a pool. Your data is equally safe and should be backed up anyway. The only difference between these two is the economics of buying small amounts of space as needed vs buying in large blocks up front. More money up front is less money on hand.
Really? So why are you buying that hot spare again?
 
A spare is something you already have on hand, you buy it once. When you want to expand your array you don't need to buy it again, you can buy just one drive, assuming you're not using zfs.

That makes perfect sense, and I would have thought it was obvious. I'm surprised you even needed to explain this, but clearly the explanation was not understood.

It is amazing all of the red herrings that are being thrown out by ZFS zealots to try to rationalize the inability of ZFS to expand by one drive at a time.
 
That makes perfect sense, and I would have thought it was obvious. I'm surprised you even needed to explain this, but clearly the explanation was not understood.

It is amazing all of the red herrings that are being thrown out by ZFS zealots to try to rationalize the inability of ZFS to expand by one drive at a time.

Yeah, bullshit. This is my rub with those who dislike ZFS, and really that's what this is about. You should just say that. You should just say, "I don't like the fact that you need to buy more than one drive to expand a pool." There's no argument there from anyone, even the ZFS zealots. They hate it too, I assure you. But don't sit here and make dumb excuses like "buying one drive at a time is better". It's not for anyone who has RAID, and that has nothing to do with ZFS. It's actually not for those who buy desktop drives for their file server either. Desktop drives don't have the performance characteristic consistency that enterprise drives do. A two-platter today could be a three-platter tomorrow.

Be an adult and say what you mean; don't play word-ninjutsu. By the way, there are FlexRAID freaks just as much as there are ZFS zealots. Is it just my imagination that every time someone is looking for a file server recommendation and ZFS is mentioned, you show up and talk about the same talking point? Be a man and just say you don't like it. That's better advice.
 
I wrote exactly what I meant, repeatedly. I don't "dislike" ZFS. It has its uses. But ZFS is a terrible choice for a home media fileserver.
 
I wrote exactly what I meant, repeatedly. I don't "dislike" ZFS. It has its uses. But ZFS is a terrible choice for a home media fileserver.
...
It is amazing all of the red herrings that are being thrown out by ZFS zealots to try to rationalize the inability of ZFS to expand by one drive at a time.

That's what you meant. But earlier you actually recommended that buying one drive at a time is preferable. I wasn't "rationalizing", because your recommendation isn't even best business practice. Go ahead, go to work and buy just one drive at a time as needed. Get close to filling up that array? Just buy one drive... and pay shipping and handling every time you need to expand the other servers. You'll save tons of money if the drive is still made... but that's not likely.

Hard drives are not like processors. Processors are almost guaranteed to drop in price and then EOL. Hard drives? Yeah, they will drop in price over the span of like two years. But if you see a deal like 20 or 30 dollars off, buy more than one. It's almost assured that the price will increase after the sale is over. If you are buying for work, also buy more than one. Hard drive prices decrease, but as supply becomes scarce the prices will rapidly increase.
 
Fuck, how are you even arguing this shit. Expanding arrays one drive at a time is nice for home users. Having to buy another raid set is a fucking expensive up-front cost when you don't need that amount of space right away. Home users, even if they are using ZFS, are not buying drives in quantities where they're getting any discount. It is always going to be cheaper to hold out until some sale is going on and then buy them. There is no huge bulk discount for buying 6-8 drives that makes any sense.

Also not sure why you keep bringing up this spare shit. It has nothing to do with the difference between being able to expand an array one drive at a time vs having to add an entire raid set to a pool.

Either you want a spare and you get one, or you don't and you don't. This applies equally whether you are using zfs or not. It has nothing to do with whether, when you run out of space, you can simply go buy another drive to expand the array or you need to go buy a full set.

Also FYI I use ZFS, got 40TB of storage on my FS. But I'm not going to try to argue expanding arrays one drive at a time isn't a huge feature for home users.
 
Fuck, how are you even arguing this shit.
Probably because someone decided to say that a spare of something isn't extra. If we didn't try to argue definitions that appear in dictionaries, we could have skipped that whole part.

It has nothing to do with the difference between being able to expand an array one drive at a time vs having to add an entire raid set to a pool.
Like I said then just say that.

Either you want a spare and you get one, or you don't and you don't. This applies equally whether you are using zfs or not. It has nothing to do with whether, when you run out of space, you can simply go buy another drive to expand the array or you need to go buy a full set.
Are you now arguing that people have free will? Did I say they didn't?

Also FYI I use ZFS, got 40TB of storage on my FS. But I'm not going to try to argue expanding arrays one drive at a time isn't a huge feature for home users.
Did I say it wasn't? Nope. Look up above. I said that you should just say that you don't like ZFS because you have to add vdevs in sets. Buying one drive when there's a sale or when prices are low is, well, not smart. If you can afford it, buy more than you need. You don't even have to take my word for it. You can stay here on [H] and look at what people will say or recommend when it comes to memory or hard drives. They will always say buy more than what you need. Plan ahead.

Hell, there are Aesop's fables that would disagree with you.
 
If you want a spare, you buy a spare and you have it. Later when you need more space, you do not need to buy extra because you already bought the damn spare and continue to have it. You also don't need to buy a raid controller and a motherboard and cpu either, because you already have them as well. All you need is one more drive. Whatever, this has nothing to do with anything. The discussion is about being able to expand an array vs having to add a set, and spares have nothing to do with this.
 
If you want a spare, you buy a spare and you have it. Later when you need more space, you do not need to buy extra because you already bought the damn spare and continue to have it.
Would you stop trying to argue the definition of the word spare? Good grief. You don't even have to say you are wrong; you can just ignore that part and not argue a verifiable definition. Otherwise I'll just paste the definition every time you try and argue that point.

You also don't need to buy a raid controller and a motherboard and cpu either, because you already have them as well. All you need is one more drive. This has nothing to do with anything. The discussion is about being able to expand an array vs having to add a set.
Um, no, the title of the thread is... "Inexpensive dedicated storage without ZFS?" Raid controllers and everything else that everyone uses would be a part of this discussion. We only need to limit the discussion if that's the only argument you are trying to make, and if it is then see above, because I suggested that you should just say that and not try to play Words with Friends and argue verifiable definitions.
 
ZFS is a fine choice for a home media server. Particularly if one isn't budgeting every megabyte and can afford an extra disk or two. It's great storage!
 
Yeah, this did get mighty off topic. My apologies for contributing to the noise, but I've learned so much about ZFS and its capabilities.

Through all this, did we ever answer the OP's question?
 
Currently I buy drives two at a time (2*3TB per month, or thereabouts), one for backing up the other. I plan to continue like that while having a ZFS storage server. Most of my data would be on the server, just not my "new data" until I have enough drives to add a vdev. Or maybe I could make a separate pool with those solo drives, adding them one by one, with no redundancy: not really a problem since I have backups. Things like personal photos/videos would always find space on the RAIDZ2 pool, and have an additional backup, anyway.

Going ZFS doesn't mean going out of options.
 
Through all this, did we ever answer the OP's question?

Yes, the answer was given before the ZFS zealots diverted the thread trying to rationalize away the fact that ZFS is a terrible choice for a home media fileserver.

For a home media fileserver, the best choice is snapshot RAID, with either SnapRAID or FlexRAID. Those programs support any hardware and most operating systems, so all the OP needs to do is buy any hardware supported by his OS of choice, and then run SnapRAID or FlexRAID.
 
I wrote exactly what I meant, repeatedly. I don't "dislike" ZFS. It has its uses. But ZFS is a terrible choice for a home media fileserver.

No, it's only a terrible choice because you are being a dramatic princess about it. It's not a terrible choice. It's not the best choice for all scenarios, including large static media collections, but it is FAR from terrible. Terrible would be a bunch of USB enclosures strung together on a USB hub with no backup or redundancy... But maybe YOUR ZFS server IS a bunch of USB drives strung together and that's why you've invested so many posts in this thread being a drama queen against ZFS for media storage.

Guess what, not all of us really want to or need to run two servers, one for media and one for other files, each with the "JoeComp approved RAID" for the data on that server. Many of us have more than just static media files that we want to store reliably. ZFS is better for more "types" of data than snapshot-based RAID, and conversely snapshot RAID is far far FAR worse than ZFS for many types of data besides static media files. I'm not going to trust my Lightroom catalog (which is changing continuously) to snapshot-based RAID, when it will never be 100% protected with snapshot RAID.

So you can rant and rave all you want about how "terrible" ZFS is for media files, but guess what... It stores those files, gives them redundancy, serves them out on a network. It works. If the downside is having to upgrade 8/(insert number > 1 here) drives instead of adding one at a time, most of us are fine with that tradeoff.

So get some common sense, realize that most of us don't live in your bubble, and stop being a drama queen troll.
 
Wrong, for a home media fileserver. It does not matter how many drives you have, ZFS is still inefficient at expanding a parity or mirror volume. Whether you have one drive or 50, if you have to add two or four drives (and only get 50% or 75% efficiency on the added space) rather than one drive, then your system is less efficient than other solutions. The only thing about ZFS that scales with more drives is the read/write speed, but that is irrelevant to a home media fileserver.

Also, SnapRAID and FlexRAID work great with a large number of drives. I do not know where you get the "may not scale well" from.
This is plain wrong. You clearly have not understood what "scalability" means. Scalability is the ability to go upwards, say 100 disks or 1000s of disks. How on earth do you think that any solution that handles single disks will cope with this many disks? What do you do after the C: D: ... Z: ... CC: CD: DC: ... ZZ: etc. disk letters run out? Which of these many disks has free space? Where is that particular movie you want to see? Is it on DC: or DX:, or did you even move it to another disk? You need huge lists with all the movies to navigate among 1000s of disks. And when you move files, you need to update the lists.


FACT 1) To handle 100s of disks you need some kind of pooled storage. You can not handle disks individually. Where did you put that movie? On disk DX or DY or ...?
When you use pooled storage you have one single directory /Movies and then
/Movies/Drama etc
This makes it easy to catalog and handle all the information. It is madness to handle lots of pieces of data individually. Obviously you are not a programmer, but if you were, you would understand the huge difference between declaring a lot of variables and using one vector.

int age1, age2, age3, age4, age5,...., age1000;
int ages[1000];

It is great dumbness to handle a lot of individual variables. You need the concept of pooled variables, to handle them as one. In fact, the first iteration of C did not work well for writing Unix, because it lacked structs. Structs allow you to collect together data that belongs together, so you can abstract it and think of it as one entity. After structs were added to C, the Unix kernel could finally be written in it. Go and learn some programming. Only a non-techie would believe that individual int variables are as good as pooled storage.
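
To make the pooling analogy concrete, here is a minimal C sketch along the same lines; the Movie struct and its fields are hypothetical, purely for illustration, but they show how grouped data can be handled with one loop instead of a separate variable per item:

#include <stdio.h>

/* Group the fields that belong together, instead of declaring
 * title1/size1, title2/size2, ... as separate variables. */
struct Movie {
    const char *title;
    double      size_gb;
};

int main(void)
{
    /* One "pool" of movies, handled as a single entity. */
    struct Movie movies[] = {
        { "Movie A",  8.5 },
        { "Movie B",  4.2 },
        { "Movie C", 15.0 },
    };
    const int count = (int)(sizeof movies / sizeof movies[0]);

    double total = 0.0;
    for (int i = 0; i < count; i++)   /* one loop covers the whole pool */
        total += movies[i].size_gb;

    printf("%d movies, %.1f GB total\n", count, total);
    return 0;
}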


FACT 2) When you have enough disks, they start to pop all the time. It is easy to calculate the probability that at least one disk crashes, using the well-known formula 1 - (1 - p)^n. Let me fill in some of the implications for you:

Say you have 1000 disks, and there is a 0.001 chance that each one crashes. Then the simple formula above tells us that there is a 0.63 chance that at least one disk will crash, that is 63%. If you have many 1000s of disks, they will start to crash all the time. For instance, the first computers used vacuum tubes in place of transistors, and because they had so many of them, very often one tube was broken. Repairs were needed all the time. The very word "bug" comes from a bug that had flown into one of these tubes, so the engineer "debugged" the system to find the error.
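
A minimal sketch of that calculation, assuming independent and identical failure probabilities per disk as in the example above (compile with -lm for the math library):

#include <stdio.h>
#include <math.h>

/* Probability that at least one of n disks fails, when each disk
 * independently fails with probability p: 1 - (1 - p)^n */
static double p_at_least_one_failure(double p, int n)
{
    return 1.0 - pow(1.0 - p, (double)n);
}

int main(void)
{
    /* 1000 disks, each with a 0.001 failure probability. */
    printf("%.3f\n", p_at_least_one_failure(0.001, 1000));  /* ~0.632, i.e. ~63% */
    return 0;
}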


FACT 3) You need some automatic repair mechanism when you scale upwards. You can not handle 1000s of disks individually. Staging area? Are you kidding?


FACT 4) http://en.wikipedia.org/wiki/Scalability
"A system whose performance improves after adding hardware, proportionally to the capacity added, is said to be a scalable system."
In the flex/snapraid case, your sysadmin work and bookkeeping increase the more disks you have. The more disks -> the more work, until you cannot cope with the sysadmin work and bookkeeping any more. Hence, it does not scale. You will spend more time managing and repairing the system than using it. Does not scale.

ZFS scales better. No need to bookkeep huge movie lists, because of pooled storage. If disks crash, you have raidz3 and hot spares, so you can still use the system. There are petabyte ZFS installations running with great success.

Are there any Petabyte Flex/snapraid installations? No. Why? Maybe because they do not scale.


So obviously, flex/snapraid does not scale. Learn about "scalability" and stop making yourself a laughingstock. Tech-savvy people are amused at your amateur claims. If I had many movies, I would not like to split them up between many disks and bookkeep all the movies. I would like pooled storage, and redundancy and safety. Obviously, you don't get this from flex/snap, making them terrible choices for people with many media files.
 
To be fair, Joe has limited his rhetorical assertions of preference to "Home media server", which is usually not the domain of (disk x >1000).


But one can make single disk pools and/or add single disks to one's home ZFS media server pools if one prefers.
 
So you can rant and rave all you want about how "terrible" ZFS is for media files, but guess what... It stores those files, gives them redundancy, serves them out on a network. It works. If the downside is having to upgrade 8/(insert number > 1 here) drives instead of adding one at a time, most of us are fine with that tradeoff.

As I already wrote, ZFS may "work" for a home media fileserver, in the sense that it can store data and serve it. Nevertheless, there are multiple downsides to using ZFS for a home media fileserver. I already listed eight of them in this thread. No matter how much you try to ignore the truth, the fact is that ZFS is a terrible choice for a home media fileserver.
 
To be fair, Joe has limited his rhetorical assertions of preference to "Home media server", which is usually not the domain of (disk x >1000).

That is obvious to any reasonable person.

Nevertheless, if some unreasonable person actually did have 1000 HDDs in their home media fileserver, they could still use SnapRAID or FlexRAID. You can easily divide the drives up into whatever size snapshot RAID units you like, and you can pool them together using any number of choices of pooling software.

I think ZFS is just fine for a home media server. Mine works great.

As I have written several times, of course ZFS "works" for a home media fileserver, in the sense that it stores your data and serves it. But that does not change the fact that ZFS is a terrible choice for a home media fileserver, as there are other choices that do the job so much better.
 
the other choices you mention are only better for someone who doesn't plan out their build ahead of time. if you want to add drives one at a time to expand, no, zfs sucks for that.

however if you plan your build ahead of time and you know what your current and likely future requirements are going to be then building a zfs setup is a great choice as long as you understand your limitations ahead of time.

as an example i have 8TB across 3 individual disks right now in my HTPC. I know how much is used and I know how much of that used is crap that i will never watch and will delete before i waste time sending it across the wire.

i am planning a build right now using 2TB (possibly 3TB if there is a deal when i decide to purchase) drives in a 6 disk raidz2. 8-12TB will handle my media and backup needs fine and if at some point i begin to exhaust the space i can increase to 4TB drives down the road.
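
for anyone checking the math there, the usable space is roughly (drives - 2 parity) * drive size for raidz2; a quick sketch, ignoring ZFS metadata/slop overhead and TB-vs-TiB differences:

#include <stdio.h>

/* Rough raidz2 usable capacity: (drives - 2 parity) * drive size.
 * Ignores ZFS metadata/slop overhead and TB-vs-TiB differences. */
static double raidz2_usable_tb(int drives, double drive_tb)
{
    return (drives - 2) * drive_tb;
}

int main(void)
{
    printf("6 x 2TB raidz2: ~%.0f TB usable\n", raidz2_usable_tb(6, 2.0));  /*  8 TB */
    printf("6 x 3TB raidz2: ~%.0f TB usable\n", raidz2_usable_tb(6, 3.0));  /* 12 TB */
    return 0;
}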

could i build a fresh snap/flex setup from scratch? sure, but it wouldn't be free and i would likely be buying all new drives anyway. further, i am extremely comfortable with the ZFS tech as i administer almost a PB of it on a daily basis. granted, not everyone is as familiar; however, in a situation where someone would need to learn either technology, ZFS being simple to learn is a big plus.

as long as the OP or anyone else properly understands what ZFS can and can't do well then pretty much every point you make is null and void. further as gea_ pointed out on the first page you CAN use ZFS and snapraid and get the advantages of both if expanding by a single disk is critical to your use case.

show us on the doll where the bad zfs touched you ...
 
That is obvious to any reasonable person.



Not so much, as your facts are a little light. Most of your asseveration has been with regard to your personal aesthetics and, as such, is not particularly compelling. "That which can be asserted sans evidence can be dismissed in similar fashion."

Nevertheless, I do support your privilege to assert as you wish. Cheers.
 
Not so much, as your facts are a little light. Most of your asseveration has been with regard to your personal aesthetics. "That which can be asserted sans evidence can be dismissed in similar fashion."

I listed eight specific downsides that ZFS has for a home media fileserver. Just because you missed that post or because ZFS zealots like to stick their fingers in their ears and hum does not make those any less factual.

No matter how much the ZFS zealots try to rationalize it away, the fact is that ZFS is a terrible choice for a home media fileserver.
 
I listed eight specific downsides that ZFS has for a home media fileserver. Just because you missed that post or because ZFS zealots like to stick their fingers in their ears and hum does not make those any less factual.

No matter how much the ZFS zealots try to rationalize it away, the fact is that ZFS is a terrible choice for a home media fileserver.


I was charitable and assiduously eschewed personal commentary in my remarks, in deference to your rhetorical stylings.

Personally, I found your much referenced 8-point argument less than compelling in light of the positive benefits of my ZFS systems. This reply was, in fact, composed on a Linux virtual machine running on a Solaris server. Linux on ZFS, as it were.

I'm more than pleased with the system, and don't find a thin list of trivial complaints rooted in economy of bits and another's personal preference to be much of a buzzkill.

Perhaps the mods will allow you to change your handle to "JoeBudget" in light of your bit parsimony?
 
As I already wrote, ZFS may "work" for a home media fileserver, in the sense that it can store data and serve it. Nevertheless, there are multiple downsides to using ZFS for a home media fileserver. I already listed eight of them in this thread. No matter how much you try to ignore the truth, the fact is that ZFS is a terrible choice for a home media fileserver.

And no matter how much YOU try to ignore reality, the fact is that ZFS is NOT a terrible choice. It is far better than many choices out there. It is not terrible for storing files of any type. It has some downsides versus niche forms of RAID in some use cases, but those downsides do not add up to some sort of terrible experience. Terrible is losing TBs of data. Using ZFS for media files is not going to result in TBs of lost data, regardless of the file extension.
 
Using ZFS for media files is not going to result in TBs of lost data, regardless of the file extension.

Actually, it very well could result in loss of all of your files. If you lose more drives than you have parity, with ZFS you lose all your data.

Yet another reason ZFS is a terrible choice for a home media fileserver.
 
The basic structural problem in JoeComp's argument is that the "ZFS zealots" he's trying to troll are ZFS users who have examined their own budgets and levels of Raid Greed and still chose ZFS as their volume manager.

He's made a pretty good haul of it. I think JoeComp is capable of a sticky-worthy discussion of RAIDs vs budgets in a TCO framework. He's a good writer and clear thinker, if a bit of a common address redundancy protocol-er. JoeComp needs his own thread.
 
Actually, it very well could result in loss of all of your files. If you lose more drives than you have parity, with ZFS you lose all your data.

Yet another reason ZFS is a terrible choice for a home media fileserver.
now you're arguing mathematical mean time failure probabilities that are extremely low and making it sound like something that is a high probability.

double parity vdev can sustain 2 failures. the typical scenario is one drive dies; now during the resilver you can sustain one more loss and still be fine. a third failure during the resilver results in total data loss.

with your preferred methodology in that same scenario you're going to still lose 3 drives worth of data, but not all the data, which may be a desirable scenario for some folks. however, you could still use zfs and the benefits of its data integrity underneath snap/flex raid and have the best of both worlds.

apparently you have some huge bone to pick with zfs in general though so w/e. have fun using snapraid on top of NTFS.
 
now you're arguing mathematical mean time failure probabilities that are extremely low and making it sound like something that is a high probability.


I'd wager against it happening on a rig with 5 disks. Oh heck, I already do. :D



double parity vdev can sustain 2 failures. the typical scenario is one drive dies; now during the resilver you can sustain one more loss and still be fine. a third failure during the resilver results in total data loss.

with your preferred methodology in that same scenario you're going to still lose 3 drives worth of data, but not all the data, which may be a desirable scenario for some folks. however, you could still use zfs and the benefits of its data integrity underneath snap/flex raid and have the best of both worlds.

apparently you have some huge bone to pick with zfs in general though so w/e. have fun using snapraid on top of NTFS.


QFT. Remember to defrag and keep your security software utd folks.
 
the other choices you mention are only better for someone who doesn't plan out their build ahead of time. if you want to add drives one at a time to expand, no, zfs sucks for that.

This appears to be the only really valid point. Well, maybe that and the fact that since data is spread across multiple disks you can't 'spin down' all but one disk when you are playing a single video.

In a "grow as you go" situation ZFS can be difficult. Does that make it a terrible choice? No. Does it make it the right choice for you? From the sounds of it, no. So go with FlexRaid, since you already seem dead set on it.
 
Has anyone tried the ZFS + Snapraid combo yet?

Although I guess it would work anyways since that is pretty trivial.

(JoeComp needs to make a .sig so he can save typing. And he should take out full-page ads in the New York Times and be a guest on 60 Minutes to warn us of the ZFS plague. I also suggest buying strawmen in bulk - you get a discount!)
 
This appears to be the only really valid point. Well, maybe that and the fact that since data is spread across multiple disks you can't 'spin down' all but one disk when you are playing a single video.
this isn't entirely true, the spin-down part. you CAN do it, however the only tool i know of that does it is called SANTools. The guys at santools are really good guys though, and if you explain to them that you're a home user and just want to tweak/play with your home setup they may give you a private non-commercial license.

I do know you can demo the product which would give you enough time to make the firmware tweaks you need to make.

Firstly, ZFS is not a fan of BIOS or anything else telling the disks to sleep. ZFS is a control freak. However, you can change the firmware settings on the drive, which ZFS doesn't mind. Now, this isn't for everyone, but if you are obsessed with drawing the least power, here is how you do it.

Again, you need santools and you will poll your drives; it spits out a boatload of information. Below is the relevant snippet from a seagate constellation es.2 3TB SAS drive.
Power Condition : Page [1Ah] (Factory, Current, Saved)
Idle (IDLE_A) : 1, 1, 1
Idle (IDLE_B) : 1, 1, 1
Idle (IDLE_C) : 0, 0, 0
Standby (STANDBY_Y) : 0, 0, 0
Standby (STANDBY_Z) : 0, 0, 0
Idle condition timer(IDLE_AT) : 10, 10, 10
Standby condition timer (STANDBY_ZT) : 36000, 36000, 36000
Idle condition timer (IDLE_BT) : 6000, 6000, 6000
Idle condition timer (IDLE_CT) : 18000, 18000, 18000
Standby condition timer (STANDBY_YT) : 18000, 18000, 18000
here we see the settings that the firmware supports. do note that on enterprise SAS drives idle C isn't enabled by default, nor is standby y or z. these settings turn off features of the drive and/or slow spindle speed. exactly what each condition does i forget offhand. none of these settings 'turn the drive off' though. zfs still sees the drive as online at all times. the real difference here is commands sent to the drive will have increased latency depending on what condition the drive is in. not ok for most enterprise deployments but great for home use.

you can change these power condition timers to be anything you want. as little as a few milliseconds of inactivity will change the condition (not recommended). for home media server use i would probably use something like 5s/30s/etc to cycle through the power conditions.
Accumulated power transitions to active: 74340
Accumulated power transitions to idle_a: 74340
Accumulated power transitions to idle_b: 76
Accumulated power transitions to idle_c: 0
Accumulated power transitions to standby_z: 0
Accumulated power transitions to standby_y: 0
here you can see that this particular drive rarely ever transitions to idle_b because it is in a high IO 24/7 environment and is rarely idle. if you take the time to tweak your firmware for your home setup though this report would show lots of transitions from a through c and some standby if you had that set.

so yes, it is possible to control power consumption with ZFS. it certainly isn't as friendly/easy as snap/flex but it CAN be done. also, once you do it once, you create a .map file and push that file to all your drives at the same time ... presuming all your drives are identical. if they aren't you have to change each drive individually.
 
The point is that if you are running ZFS RAIDZx and want to play a movie, the data is distributed across all the drives in the vdev. So no, you cannot play the movie while spinning down all the drives except for one. ZFS requires all the drives in the vdev to be spun up and reading in order to play the movie.

Yet another reason why ZFS is a terrible choice for a home media fileserver.
 
Depends on how much ram you have. if you have enough RAM to fit the entirety of the movie then it will pre-fetch all the blocks into ARC at which point the drives will spin back down.

again, you're either not entirely accurate or willfully ignorant.
 