8TB+ Storage

It's not about ZFS caring. It's about planning ahead in general. It's also not about ultimate performance, but about informing the OP of what the performance really is and not saying "it requires no additional hardware" when in reality it does. Computers are forgiving little things, and quite flexible. Just because a USB stick is storage doesn't mean it makes much sense for storing tons of files.

I never said it doesn't require additional hardware. Just that the hardware you use is flexible. And in certain cases it does make sense to store lots of files on a USB drive. It all depends on what the goal is. If you want to give someone a bunch of large video files, a USB drive may be the easiest and quickest way to do so.


And that's precisely the use case I'm using. 8TB, not less than that. Your RAM recommendation is correct. That RAM requirement is a lot more than what is required for a file server not running ZFS. That information is precisely the information the OP should walk away with, not putting in 2GB for 8TB of storage. BTW, 16GB-24GB of ECC will run anywhere from $159 to $230+. That's not cheap; that's the cost of an i5 2500.

2-4GB is a lot more than other file servers? I suppose that if you want to run a file server with 1GB, then 2GB is twice as much, and 4GB is four times as much. Cost-wise, though, that's not the case.

Really? Mind telling me how you're going to get 16-24GB on an Atom processor? Aside from the fact that most consumer Atoms don't support ECC either. That's probably the most expensive route, bar none. But if you want to make the OP purchase a server from Supermicro so he can have decent performance, instead of buying a better processor that would support more RAM, be my guest. I think the processor matters here for this setup.

I never said that you could get 16-24GB on an Atom processor. Just that it has enough processor power to do the necessary parity calculations to support a software RAID setup like ZFS Z2. And if you did want something like that, here's the AMD version that supports 16GB - http://www.newegg.com/Product/Product.aspx?Item=N82E16813157228

There are plenty of people getting good performance with low-power CPUs. The reason RAID cards used to have specialized onboard processors was that the main CPU wasn't powerful enough to do the RAID calculations as well as general-purpose processing. Modern processors are orders of magnitude faster and are capable of doing both easily. Also, since the machine is just a file server, even if it were to use most of the processing power, it wouldn't matter. There's nothing else competing for CPU resources.

Nowhere did I advocate buying a Supermicro server to run ZFS. I said you can spend what you want and there are many different paths you can choose. It depends on what the desired end point is. There are people running 4-8 drive setups on low-power CPUs with 2-4GB of RAM who are perfectly happy. That's not the route I'm going, but my goals are different. You also have people building monster 100TB+ systems because that's what they need/want/can afford.

I think I stated this as well. You can go cheap, but again it comes at a price that people should be fully aware of going into it.

It all depends on what the goal is. Define the goal and then build towards that.
 
I never said it doesn't require additional hardware. Just that the hardware you use is flexible. And in certain cases it does make sense to store lots of files on a USB drive. It all depends on what the goal is. If you want to give someone a bunch of large video files, a USB drive may be the easiest and quickest way to do so....snip

The OP is going for 8TB of storage; that's his goal. He wrote that in the name of the thread. He/she told us specifically what he was going for. Are we going on tangents just for the sake of argument?

He could use USB. But why? He doesn't have to. All of the external drives he has are likely SATA. We would constrain bandwidth to 480Mb per second for no reason when there's likely a 3/6Gb port on any motherboard he would purchase. Maybe he can plug in several USB drives and create a pool from those, but as I stated before... just because you can doesn't mean you should. Your pretending that there's no goal, and stating the obvious, is not helpful.

The Atom discussion is not required either. There are more than enough threads here which talk about the performance impact which you think isn't there. There's a performance hit. Period. The CPU matters, and so does memory. I'm not going to read a white paper to you for the sake of your ego.

Is there anything else you would like to respond to that matters?
 
The OP is going for 8TB of storage; that's his goal. He wrote that in the name of the thread. He/she told us specifically what he was going for. Are we going on tangents just for the sake of argument?

He could use USB. But why? He doesn't have to. All of the external drives he has are likely SATA. We would constrain bandwidth to 480Mb per second for no reason when there's likely a 3/6Gb port on any motherboard he would purchase. Maybe he can plug in several USB drives and create a pool from those, but as I stated before... just because you can doesn't mean you should. Your pretending that there's no goal, and stating the obvious, is not helpful.

The Atom discussion is not required either. There are more than enough threads here which talk about the performance impact which you think isn't there. There's a performance hit. Period. The CPU matters, and so does memory. I'm not going to read a white paper to you for the sake of your ego.

Is there anything else you would like to respond to that matters?

I don't think you actually read that thread. They were getting acceptable levels of performance, with encryption turned on (which we have never discussed and which adds a large amount of CPU load), on an Atom board with low RAM. It's stated in the thread more than once that an AMD equivalent would be up to twice as fast, or enough to saturate a 1Gb network link.

Stating 8TB of storage only addresses one aspect of the goal: size. That doesn't tell us the whole story. I have presented many different options that are working and in use currently and that fulfill many different goals. I have been attempting to provide data and options. I also never advocated that he use all USB drives, just that it's an option. Data and options.

I also never said that CPU or memory don't matter. How much they matter is relative to the task, though. So at this point, unless the OP wants more data or input, this discussion is purely academic, and I have no desire to be the subject of ad hominem attacks for simply presenting my viewpoint.
 
Possible Solution: Meeting the $600 Cap

Here's a crazy idea. Let's use your overpowered processor and memory. Why don't we virtualize your WHS?

You're running AMD, so that processor supports VM extensions (hell, you might even have IOMMU... I think ECC too). You could virtualize WHS, so no new processor. Depending on the size of your case, you could reuse that too. Keep the existing 8TB associated with WHS. Set up a second VM with ZFS or even Linux RAID. (Note: there are some caveats to running ZFS virtualized, but it can be done.)

You've got $600, so you can buy five 2TB drives. Take your 2TB external and add that; now you've got six. Either Z2 or RAID 6 with six 2TB drives gives you 8TB of storage for your second backup system, but it will all be in one box. You should have about $100 left over. Use that for memory... ECC if you can.
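
To make that concrete, here's a rough sketch of what creating such a pool could look like (pool and device names are made up; yours will differ, and this assumes a FreeBSD/Solaris-style ZFS setup):

#zpool create backup raidz2 ada1 ada2 ada3 ada4 ada5 ada6
#zpool status backup

RAIDZ2 is double parity like RAID 6, so six 2TB drives leave you with roughly 8TB usable.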

So you'll have WHS and backup in one box. I personally like to separate them, but given the amount of money you have, you have to make sacrifices somewhere.
 
Lot of misconceptions here.

OK, run ZFS with 1GB of unbuffered RAM. Sure, it may boot, but the performance impact would be pretty darn apparent with Z2 and Z3 implementations (if it didn't crash out first).
Why do you think ZFS would crash on 1GB RAM? Have you read it somewhere or are you imagining things? If you have read it, show me the links.

For your information, I have used a PC with 1GB RAM, a Pentium 4, and a raidz1 (RAID-5) with 4 disks for over a year with no stability issues at all. I don't understand why people think ZFS will crash on 1GB RAM. ZFS is ENTERPRISE ON SOLARIS. Solaris does not crash. Who has spread that rumour that ZFS needs huge amounts of RAM? It is only FUD. It works fine on 512MB RAM PCs too, without crashing.

I know that the early FreeBSD port of ZFS had high RAM requirements, but that was a bug in the port. After the bug was fixed, ZFS worked fine on low-RAM PCs. ZFS has always worked excellently on Solaris, even with low RAM. No crashes.

I can add that when I used ZFS on my 1GB RAM PC, I got something like 30MB/sec. But that is because the Pentium 4 is a 32-bit CPU, and ZFS is a 128-bit filesystem. You need 64-bit CPUs to get high performance. That 32-bit CPUs have low performance is well known in the ZFS community. The low 30MB/sec did not depend on low RAM; it depended on the 32-bit CPU.



If file servers could be built at the enterprise level with minuscule processors and amounts of RAM, don't you think we would do it? You don't think there's a need to save money?
You must be clear about what you're talking about. Are you talking about home usage, or enterprise servers? ZFS for home usage works fine on low-resource PCs, but it shines on large servers.

If you have a lot of RAM, then ZFS will utilize the very clever ARC disk cache and performance will skyrocket, because most data will be cached in RAM. On 8GB home servers, you sometimes see no disk activity because everything is cached in RAM.

If you have little RAM, then ZFS cannot use the ARC disk cache, and ZFS will need to hit the disks all the time. In that case, performance will not skyrocket; instead, performance will just be disk speed. But disk-speed performance is OK for some people. Enterprise servers, though, always use large disk caches.

It is possible to build a low-resource ZFS server. In that case, the limiting factor will be disk speed, because the RAM disk cache will be too small.
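
If you're curious how much RAM the ARC is actually using on a small box, you can just look at the arcstats counters - for example on FreeBSD (an informal check, not a benchmark):

#sysctl kstat.zfs.misc.arcstats.size
#sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

On Solaris the same counters are visible through kstat.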



The truth of the matter is that you get what you pay for. You can believe until the cows come home that you can implement ZFS in Z2 or Z3 without thinking about the CPU/RAM requirements, but the chances of it performing similarly to a server with 1GB of memory and a hardware RAID controller are quite low. I can assure you it won't.
I think we are talking about different things here. The creator of this thread wants a home PC, not a high-performing enterprise file server. I agree that you need more resources to reach high performance levels with ZFS: ideally 8GB RAM or so, and you must have a 64-bit CPU. Then ZFS will give you 430MB/sec with 7 disks; 48 disks will give you 2-3GB/sec.

But if you are a home user, then you don't need 430MB/sec. You need 100MB/sec or so. And ZFS will give you that on very modest PCs with very little RAM, as long as you have a 64-bit CPU. The reason is that a single disk gives 100MB/sec, and if you have several of them in a ZFS raid, you get more than 100MB/sec. Thus a home user can get away with 1GB or 2GB of RAM.

If you need an enterprise file server, then you need more RAM/CPU, yes. But please don't mix the two up.



Because you'll be port-limited using just the motherboard. Maybe that's the reason.
Again, he is a HOME USER. He will not have 24-36 disks. He might have 8 disks or so. Please don't mix home usage with building an enterprise file server.



Yes, I could get some cheap Rosewill SATA controller for 35 bucks. Aside from the performance impact (and yes, there is a performance impact), I prefer a high-quality JBOD HBA for a lot of other reasons, like reliability, warranty, compatibility, and expandability. You're paying the high cost of a good JBOD controller because most of them offer options far and away above those of some 35-buck SATA controller.
The good thing is that with a cheap controller, ZFS will detect any problems right away. This means it is risk-free to use a cheap controller with ZFS.

If you use a cheap controller with other solutions, you might get data corruption without knowing it. But ZFS will know, and ZFS will inform you. So go with the cheap controller if you don't have the money. You will get lower performance, maybe, but so what? He is a HOME USER and will accept 100-200MB/sec.
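
For example, a periodic scrub will read back every block and report any checksum errors the cheap controller (or the cables, or the disks) may have introduced - something like this, where "tank" is just an example pool name:

#zpool scrub tank
#zpool status -v tank

If the status output shows error counts climbing on a particular device, you know exactly where the problem is, which is more than a cheap controller on its own will ever tell you.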



For me that matters. I'm a build-it-once type of guy. I don't like cracking open a case because I bought some cheap 4-port SATA controller. I would much rather buy a JBOD HBA and have 16 ports, with room to expand to more if need be.
Normal home users don't have more than 16 disks. HOME USER. OK?



Huh? Dedup, no matter what, is costly. It doesn't have much to do with its maturity. Yes, different versions of ZFS perform better than others, but the RAM requirement for dedup is still present even in Solaris. By the nature of its implementation this will always be the case. Mature or not, you'll always need more RAM to implement it.
Might be true; I don't know about all dedup implementations. But I know that ZFS dedup is immature, and therefore you should avoid ZFS dedup.



You can always do things on the cheap. There's no denying that. But you'll get what you pay for. The threads are filled with people buying substandard parts because they were cheap and they were told they didn't need this or that. Later they realize that, while they were told they could do something with only 2GB of RAM, they were not told that it would run much worse than they expected.
Sure, but I am a HOME USER and I accepted 30MB/sec for over a year. I am not a power user with 20 disks in a rack.

If you need high performance with ZFS, then you need lots of RAM and CPU. If you have low requirements, then you can get away without investing in extra hardware. 8 disks is probably enough; you don't need to buy extra SATA controllers or anything. My claim is valid: you don't need extra hardware. ZFS is cheapest.

The upcoming world's fastest IBM supercomputer, at 20 teraflops, will have 55 petabytes of storage at 500-1000GB/sec. It will use ZFS as its filesystem, with Lustre managing all the ZFS servers. Thus, ZFS can be very expensive too. It scales. Extremely well.
 
For the use the OP is going to put his server to (i.e. storing/serving movies), ZFS is pretty well suited.

You don't really need ECC memory, or an ultra-fast CPU, or even that much memory, at least not for this type of use. You don't need any RAID cards either (though obviously you need enough SATA ports to connect all the HDDs).
The resources ZFS needs will vary hugely with the type of workload it's expected to perform.
Serving movies out to one or two client systems is vastly different from serving hundreds or thousands of users running e.g. small database queries/updates, and hence the requirements are vastly different too.


ECC?
Not really needed here - though, of course, it wouldn't hurt.
Just use copy with verification (e.g. md5) to load the movies onto the server, and then you can be sure that what is on the server's disks is the same as what was sent, and that you haven't suffered any bitflips on the way.
Even if you do suffer a bitflip on later playback - would you ever notice it anyway? - it's not bank or medical records here... and if you do notice any playback problems, look into it then!!
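
As a rough illustration of what a verified copy means (file names are made up; the checksum tool is md5 on FreeBSD, md5sum on Linux):

#md5 /source/movie.mkv
#cp /source/movie.mkv /tank/movies/
#md5 /tank/movies/movie.mkv

If the two checksums match, what landed on the pool is bit-for-bit what you sent, ECC or no ECC.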

Fast CPU?
Not really needed - you could even run this on an Atom CPU if you really wanted (though personally I would go for a bit more power myself :) ) .
That's not to say that you may not get better performance, especially on uploads, with a faster CPU (because you would), but it's not really needed as such - in any case, a basic modern dual-core desktop CPU can pretty much max out a single gigabit link, which is all you are likely to find on most home media servers.
RaidZ3 may need a bit more grunt than mirroring would, though!

Lots of memory?
Not really needed - for this type of single-user, large-file sequential IO, the ARC won't really help much - typically any data in it wouldn't be used again any time soon.
You do need enough memory so you don't starve the transaction group buffer on writes, but even then ZFS will adapt (and you can tune this) - 4GB is probably plenty for this type of use.
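
For what it's worth, if you ever do want to rein in ZFS's memory appetite on a small FreeBSD box, the usual knobs live in /boot/loader.conf - the values below are purely illustrative, and the exact tunable names vary between ZFS versions:

vfs.zfs.arc_max="2G"
vfs.zfs.txg.timeout="5"

The first caps the ARC so the OS keeps some RAM for itself; the second controls how often a transaction group is flushed. On most systems the defaults are fine and none of this is needed.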

Deduplication and encryption (two ZFS features which are CPU/memory resource heavy) are not needed here.
 
For the use the OP is going to put his server to (i.e. storing/serving movies), ZFS is pretty well suited.
Not really. As you've stated, video is pretty resilient to bit rot. Any other solution would be faster on lower requirements. That's not to say ZFS is without benefits, but storing video isn't the best example. It can withstand quite a bit.

You don't really need ECC memory,
Need it? No. But it's strongly advisable if you are going to sing the praises of ZFS's self-healing properties. That's like saying you should buy a Ferrari because it's fast and then, when you fill up the gas tank, putting in regular.
A while ago I read a couple of papers on ECC and ZFS. If I find them I'll link them.

Fast CPU?
Not really needed - you could even run this on an Atom CPU if you really wanted (though personally I would go for a bit more power myself
Most of us would. We aren't talking Xeons, but a dual-channel memory controller would be preferred over a single-channel Atom. Since the memory controller is on-die, you'll probably want dual channel vs. single. No?

Lots of memory?
Not really needed - for this type of single-user, large-file sequential IO, the ARC won't really help much - typically any data in it wouldn't be used again any time soon.
You do need enough memory so you don't starve the transaction group buffer on writes, but even then ZFS will adapt (and you can tune this) - 4GB is probably plenty for this type of use.

The amount of memory has a direct impact on I/O in all scenarios. The argument was never whether lower amounts of RAM worked or not; it was about the performance associated with them. Sacrificing performance just for the sake of it doesn't make any sense... if you can afford it. The memory requirements aren't static and increase with the number of drives/pools/storage added. Essentially performance decreases significantly past the point of ARC saturation, dedup or no dedup.

Deduplication and encryption (two ZFS features which are CPU/memory resource heavy) are not needed here.
Yes they are, but I don't believe I said that they were required at all.

ZFS is only as good as the components that are used with it. Start using single-channel memory instead of dual, or non-ECC instead of ECC, and you slowly start to lose the point of implementing it in the first place.
 
Not really. As you've stated, video is pretty resilient to bit rot. Any other solution would be faster on lower requirements. That's not to say ZFS is without benefits, but storing video isn't the best example. It can withstand quite a bit.

I didn't say anything about bitrot - I did mention in-memory bitflips, but they aren't the same thing.

Video is no more or less resilient to bitrot than any other type of data - the consequences may (or may not) be less severe, but that's all.


Need it? No. But it's strongly advisable if you are going to sing the praises of ZFS's self-healing properties. That's like saying you should buy a Ferrari because it's fast and then, when you fill up the gas tank, putting in regular.
A while ago I read a couple of papers on ECC and ZFS. If I find them I'll link them.


I didn't sing any praises, and ECC memory has nothing to do with ZFS "self healing"!


Most of us would. We aren't talking Xeons, but a dual-channel memory controller would be preferred over a single-channel Atom. Since the memory controller is on-die, you'll probably want dual channel vs. single. No?


On a home media server you'd never notice any difference between single and dual channel memory.


The amount of memory has a direct impact on I/O in all scenarios.

Err - no it doesn't!!
Why do you think it does?



The argument was never whether lower amounts of RAM worked or not; it was about the performance associated with them. Sacrificing performance just for the sake of it doesn't make any sense... if you can afford it. The memory requirements aren't static and increase with the number of drives/pools/storage added.


You'll have to define exactly what you mean by "performance" then.
On a zfs based home movie server, how does more memory improve performance?


Essentially performance decreases significantly past the point of ARC saturation, dedup or no dedup.

In some usage patterns that's true, but not in this case. The data you are reading most likely won't be used more than once, so cache is of little benefit.

Again though, you'd have to define what you mean by performance.
For a home movie server, serving a single client, all the server has to do is serve out sequential data at 5-6MB/sec max. Even a modest server should be able to manage 2-3 such clients simultaneously.


ZFS is only as good as the components that are used with it. Start using single-channel memory instead of dual, or non-ECC instead of ECC, and you slowly start to lose the point of implementing it in the first place.

Well I suppose you could say that about any system, but we are talking about a home media server here.
 
Kac77 is correct when he says that ZFS might be slower than expensive hardware RAID solutions - on low-end PCs.

If you are using a low-end PC with 1GB RAM, then ZFS might be slower. This is because ZFS is resource-heavy in terms of CPU. It is like doing an MD5 checksum on every file, constantly. It takes time and CPU power. You might lose performance, yes. But you don't need additional investments. You might lose some performance, but ZFS won't cost you extra money. And ZFS protects your data.

Hardware RAID cards essentially contain a complete PC (RAM, CPU, OS, BIOS, etc.) and can cost as much as an entire PC. You will get higher performance, for extra money, and your data might still get corrupted.

If you are on a budget, ZFS is the cheapest. It will cost you some performance, but no extra investments. And a home user will accept maxing out a gigabit NIC. Any 64-bit CPU can do that.
 
I didn't say anything about bitrot - I did mention in-memory bitflips, but they aren't the same thing.

Video is no more or less resilient to bitrot than any other type of data - the consequences may (or may not) be less severe, but that's all.

Of course I'm speaking about the consequences. Whether the file is accessible or not is the whole point. Why would you argue about data corruption if its impact weren't a consequence?

Files like pictures or documents can't really withstand any corruption at all. Movies in particular can. I can send you a corrupted video where there are 5 missing frames out of a 2-hour movie. I'll bet you a bridge, a million dollars, and a shoe if you can spot where those missing frames are while watching it.

I didn't sing any praises, and ECC memory has nothing to do with ZFS "self healing"!
That might have been an exaggeration, but you did recommend pairing ZFS with non-ECC, which is counterproductive to ZFS itself - or, more precisely, I think you said it "wasn't needed." I'm going to let you think about what I was saying, because you obviously couldn't miss the point I was making, and I don't think you did.

On a home media server you'd never notice any difference between single and dual channel memory.

Err - no it doesn't!!
Why do you think it does?
The CPU can't calculate anything unless it's read from somewhere. This is almost always system memory. In the case of ZFS, system memory is used for reads, writes, checksums, parity, dedup, compression, and encryption.

This is also the case for every system where the CPU is calculating parity: ZFS, software RAID, etc. The CPU can't calculate parity unless the data comes from somewhere, and in this case, after the file system, that's system memory. How fast the memory is, how much bandwidth you have, single/dual channel, amount, etc. will always play a role in I/O. If the CPU is involved, then so is system memory. Data usage patterns will affect how much of a role they play, but they all can in one way or another.

In the case of hardware raid it's got local memory, dc processors, and dc memory.... for a reason. Arguing with me over these concepts is like arguing over every RAID controller and computer on planet Earth.

The answers to all of your other questions are answered by the above. I really don't feel a need to argue basic computer concepts. It's one thing to argue whether a computer with low resources can accomplish a task (something I've never argued this entire time; it's always been about performance, not whether it was possible) and quite another to argue basic computer concepts by saying dual/single channel, amount of RAM, and type of CPU don't matter. They do, and they matter more in the case of ZFS. This will always be the case unless some major breakthrough of epic proportions occurs.

Now if you want to see the difference a processor + memory makes.

Here's an i7 920:
#CPU: i7 920 (8 cores) + 8GB Memory + FreeBSD 8.2 64-bit
#time dd if=/dev/zero of=./file.out bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes transferred in 6.138918 secs (1749073364 bytes/sec)

Here's an Athlon 4600:
#CPU: AMD 4600 (2 cores) + 5GB Memory + FreeBSD 8.2 64-bit
#time dd if=/dev/zero of=./file.out bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes transferred in 23.672373 secs (453584362 bytes/sec)

Does a roughly 4x difference in performance prove the point? Now granted, the memory and CPU both changed, so there's a magnifier effect in figuring out which affected the score more. But I just needed to show the rate of change.

Now, if you'll excuse me, we went off the rails into silly season long ago, and it's time to move on.
 
Of course I'm speaking about the consequences. Whether the file is accessible or not is the whole point. Why would you argue about data corruption if its impact weren't a consequence?

Files like pictures or documents can't really withstand any corruption at all. Movies in particular can. I can send you a corrupted video where there are 5 missing frames out of a 2-hour movie. I'll bet you a bridge, a million dollars, and a shoe if you can spot where those missing frames are while watching it.


That might have been an exaggeration, but you did recommend pairing ZFS with non-ECC, which is counterproductive to ZFS itself - or, more precisely, I think you said it "wasn't needed." I'm going to let you think about what I was saying, because you obviously couldn't miss the point I was making, and I don't think you did.


Firstly, I didn't recommend anything - please read what people write. Saying that something isn't really needed is not the same as recommending that it be avoided. ECC wouldn't hurt - all that's being said is that it's not strictly necessary here!
(please note that this is not the same as saying that this is always the case)

As to corruption, what are we talking about? On-disk corruption? Or in-memory corruption on playback? They are not the same thing; the causes are different, and hence the safeguards you might need are different!

You can protect against in-memory bitflips on uploads to the server, by doing verified copies. You'll know that what is on disk is exactly the same as what was sent. You do not "need" ECC memory in the server for this.

You may subsequently suffer a rare bitflip in server memory on playback (but note that the ondisk copy is still perfectly intact) - is that so bad?
Do commercial bluray players have ECC memory?
Does the OP's HTPC or WDTV or even DLNA TV (or whatever he uses for playback) have ECC memory?




The CPU can't calculate anything unless it's read from somewhere. This is almost always system memory. In the case of ZFS, system memory is used for reads, writes, checksums, parity, dedup, compression, and encryption.

Yes, all computers need some memory - that's not in dispute.
What is in dispute is your assertion that a home ZFS media server will have poor performance unless you throw shedloads of memory/CPU at it! I'm saying it won't!
You do need enough CPU and memory resources for the job at hand, but once that level is reached, throwing more at it won't make any real difference.


This is also the case for every system where the CPU is calculating parity: ZFS, software RAID, etc. The CPU can't calculate parity unless the data comes from somewhere, and in this case, after the file system, that's system memory. How fast the memory is, how much bandwidth you have, single/dual channel, amount, etc. will always play a role in I/O. If the CPU is involved, then so is system memory. Data usage patterns will affect how much of a role they play, but they all can in one way or another.

In the case of hardware raid it's got local memory, dc processors, and dc memory.... for a reason. Arguing with me over these concepts is like arguing over every RAID controller and computer on planet Earth.

The answers to all of your other questions are answered by the above. I really don't feel a need to argue basic computer concepts. It's one thing to argue whether a computer with low resources can accomplish a task (something I've never argued this entire time; it's always been about performance, not whether it was possible) and quite another to argue basic computer concepts by saying dual/single channel, amount of RAM, and type of CPU don't matter. They do, and they matter more in the case of ZFS. This will always be the case unless some major breakthrough of epic proportions occurs.


Now if you want to see the difference a processor + memory makes.

Here's an i7 920:
#CPU: i7 920 (8 cores) + 8GB Memory + FreeBSD 8.2 64-bit
#time dd if=/dev/zero of=./file.out bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes transferred in 6.138918 secs (1749073364 bytes/sec)

Here's an Athlon 4600:
#CPU: AMD 4600 (2 cores) + 5GB Memory + FreeBSD 8.2 64-bit
#time dd if=/dev/zero of=./file.out bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes transferred in 23.672373 secs (453584362 bytes/sec)

Does a roughly 4x difference in performance prove the point? Now granted, the memory and CPU both changed, so there's a magnifier effect in figuring out which affected the score more. But I just needed to show the rate of change.

Oh come on!!! - please read what the guy is saying.
He's talking about using compression in that comparison - of course a faster CPU will be quicker: compression (and encryption) are ZFS features known to be CPU-hungry, and you wouldn't use either of them on a home movie server.
Deduplication is a memory monster - this is also well known, but again, it's not a feature you'd ever use on a home movie server.

As to the actual comparison itself - can I ask whether you seriously think you can drive 1.6GB/s of sustained I/O through to 6 standard 2TB SATA drives? Or is "dd" perhaps not the most reliable way to measure true I/O performance?
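
If you are going to use dd for a rough number, at least write a file several times larger than RAM and then time reading it back, so the ARC can't flatter the result - something like this (path and sizes are illustrative; pick a count a few times your RAM):

#dd if=/dev/zero of=/tank/test.out bs=1M count=32k
#dd if=/tank/test.out of=/dev/null bs=1M

Even then it's only a crude sequential test, and with compression enabled a stream of zeroes shrinks to almost nothing, which is exactly why numbers like 1.6GB/s show up.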


As to real world performance - the OP has stated that he wants a home movie server.
As long as he can write to the server at 80-100MB/s, which is typically all you'll get from a gigabit connection, and possibly the source drive (and the OP may be happy with less), how are you going to improve performance?
On playback, even 1080p movies with lossless audio don't use more than about 5-6MB/s - even a modest server should be able to manage a few of those!
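
(As a quick sanity check on that figure: even a very high-bitrate 1080p rip runs at roughly 40-48 Mbit/s, and 48 Mbit/s divided by 8 is 6 MB/s.)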

It's irrelevant here to say a faster CPU and more memory in the server will mean better performance - better in what way?


Now fair enough, you are entitled to believe whatever you like about ZFS and what it needs/doesn't need, but I will say this - ZFS is highly configurable and scalable, and can cope with a wide variety of workloads. Some are more "difficult" than others, and the requirements for different types and levels of workload can vary dramatically. There is no "one shoe fits all" approach - there are indeed many situations where ZFS will benefit hugely from increasing main memory and/or CPU resources, and/or using read and/or write cache devices, and even faster HDDs etc. However that's not the same as saying that this is always the case - it isn't!
 
You ZFS fanbois are terrible. If you look at the big picture, you're both right. You just have different opinions on how to run a ZFS system. You guys have offered a solid suggestion on what the OP should take in college, but he'll have to pass it on his own. You guys are now fighting over what classes he should take, how to take notes, and how to study for his tests. You've given him a lot of solid, GREAT information and positive reasons for choosing ZFS, but it's up to him to go learn a little about it and formulate his own questions, not learn by listening to both of you fight. You both seem very experienced with ZFS, but you both do things differently. Save it for another thread, or for when you two are forced to work in a small server room together for 6 weeks. :)
 
Now if you want to see the difference a processor + memory makes.

Here's an i7 920:
#CPU: i7 920 (8 cores) + 8GB Memory + FreeBSD 8.2 64-bit
#time dd if=/dev/zero of=./file.out bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes transferred in 6.138918 secs (1749073364 bytes/sec)

Here's an Athlon 4600:
#CPU: AMD 4600 (2 cores) + 5GB Memory + FreeBSD 8.2 64-bit
#time dd if=/dev/zero of=./file.out bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes transferred in 23.672373 secs (453584362 bytes/sec)
What is this link? He says:

"Many people complain about ZFS for its stability issues, such as kernel panic, reboot randomly, crash when copying large files (> 2GB) etc. It has something to do with the boot loader settings."

I have never read many complaints about ZFS having stability issues, crashes, etc. I have been following ZFS since before it first arrived, and have been closely monitoring the Sun Solaris ZFS mailing lists. Of course there have been occurrences where people have had problems, but very seldom. Often it came down to faulty hardware, buggy drivers, etc. I cannot recall people having crashes copying large files. I have done it all the time.

What is this sheit? I want to see the links he talks about. Is he talking about ZFS on Linux, zfs-fuse? Or the port to FreeBSD? The FreeBSD port had bugs, but the Solaris ZFS mailing list has had stable reports, not many crashes. I don't believe what he says, because I have very closely followed the Solaris ZFS list, and there have not been many of the problems he describes. Sometimes there have been crashes, yes, but not often. It is like here, in this forum: sometimes we hear about people having crashes with Solaris and ZFS, but not often. Do you often see complaints here, for instance, that Solaris crashes when copying large files? Have you EVER seen such a complaint? I cannot recall seeing any such complaints.

And what is a "boot loader" he talks about? I have never heard about "boot loader" problems in Solaris. Solaris uses GRUB and can boot from ZFS. I dont know if FreeBSD can boot from ZFS yet. But everything points to FreeBSD, not Solaris.

I want to see the links he talks about. They are not found on the Solaris ZFS list. Maybe the zfs-fuse or FreeBSD lists. And his server is apparently a FreeBSD server; he runs FreeBSD, not Solaris. I have not followed the FreeBSD ZFS mailing lists, but I know FreeBSD has had ZFS problems.

But inferring from the FreeBSD mailing lists that ZFS in general is unstable or will crash with 1GB RAM is just plain wrong, and the logic is faulty.

This pisses me off. What if you read the ZFS-fuse on Linux mailing list? It is in beta. Would you claim that ZFS is buggy and unstable just because the Linux beta port is unstable? No, that is wrong. You could say that ZFS on Linux is unstable, but saying that ZFS in general is unstable is pure FUD. Or ignorance. :mad:
 