Questions about an ESXi all-in-one

I have a few questions about migrating to an ESXi all-in-one build:

Some Background

I'm running a small Linux file/web/FTP/media/torrent server at home. I have 4x1TB WD Green drives in a software RAID10, plus a 5th drive as the system drive.

I had a catastrophic hardware failure in December and replaced my low-power hardware with my ex-gaming rig, which is now the server hardware. My end goal is to build an ESXi all-in-one setup, but I can't afford new drives right now. I do have some money coming from my tax return, though, so I'm hoping I can afford a motherboard/processor/HBA/RAM.

Questions:

Nearly every all-in-one build uses ZFS. Is it possible to do this with Linux software RAID as well? Why is ZFS almost universally used?

Focusing on low power usage: AMD processors have a lower TDP than Intel processors, but I have read that Intel is more "accurate" with their measurements than AMD. Is there really a noticeable power consumption difference between the two?

If I buy a more powerful processor (such as a Xeon E3-1230) to ensure enough overhead for my rare processor-intensive activities, how much power will go to waste when it's not being used, compared to a more "just enough" processor?
 
You can use Linux software RAID if you want. ZFS is liked around here because it is the best filesystem available, period. It is flexible, relatively fast, and is the most resilient.

What you should do if possible is get a decent SAS/SATA controller and use VMDirectPath (VT-d - your system will need to support that) to pass it through to a guest OS that will use it directly. I'm doing that using Illumian and ZFS. Works great. Performance is good. If you can't do that, then I don't really recommend ESXi for use with software RAID. People have reported problems (several people reported corruption) when trying to use the built-in drive passthru in ESXi to do that.

The card that I use is an LSI-based IBM M1015. I installed the Solaris driver from LSI's website in my Illumian VM in order to make the card work without having to flash it with another card's firmware like most people recommend. (Why don't they just use the correct driver instead of the wrong firmware? I don't get it.)
 
^^^
"best" is subjective where depends on your consideration.


The big plus of ZFS is that it is all-in-one: filesystem, RAID manager, RAID software, and RAID tools.

Linux software RAID is legacy compared with ZFS :p.

With Linux software RAID you need multiple layers, for example mdadm (the Linux software RAID layer), LVM (logical volume manager), and a filesystem (ext4, XFS, or others).
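To make the layering concrete, here is a minimal sketch of the two approaches. The device names (/dev/sdb through /dev/sde), sizes, and pool/volume names are just examples, not anything from this thread:

Code:
# Linux: each layer is a separate tool
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
pvcreate /dev/md0                      # LVM physical volume on top of the md array
vgcreate vg0 /dev/md0                  # volume group
lvcreate -L 500G -n data vg0           # logical volume
mkfs.ext4 /dev/vg0/data                # filesystem
mount /dev/vg0/data /srv/data

# ZFS: pool, redundancy (striped mirrors, RAID10-style) and filesystem in one stack
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
zfs create tank/data

Either way you end up with a mounted filesystem; the difference is how many separate tools and layers you have to manage.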

Linux software RAID is really stable (based on my experience), but it doesn't have many features (ZFS has many features).
 
I'm not familiar with any standard Linux filesystems that are self-healing except for BTRFS which I have no interest in using as it is still very immature, IMO. ZFS supports self-healing and is far more mature than BTRFS. I would actually be willing to bet that ZFS is currently better than BTRFS will ever be. As for the EXT filesystems, I like the idea of using those about as much as I like the idea of using EXFAT for my Windows machines (i.e. not at all). It has zero features except journaling which basically everything other than FAT has. And XFS is pretty fast, but possibly the worst of all when it comes to data integrity.

Linux filesystems seem like a quantity over quality thing to me. And most variants of Unix are like Linux without the quantity part i.e. most of them have only 1-2 filesystems that are crappy instead of 34734905 of them. At least FreeBSD devs had the decency to admit that ZFS was better than what they had, and implemented it. No need to reinvent the wheel there. BTRFS is like a poor clone of ZFS. Funny that now Oracle 'owns' both of them. Though many (most?) of the Sun guys left after the acquisition, anyway.

BTW, I actually like Linux. But I like Illumian more (for general purpose server purposes - and for home use, at least).
 
I'm not familiar with any standard Linux filesystems that are self-healing except for BTRFS which I have no interest in using as it is still very immature, IMO. ZFS supports self-healing and is far more mature than BTRFS. I would actually be willing to bet that ZFS is currently better than BTRFS will ever be. As for the EXT filesystems, I like the idea of using those about as much as I like the idea of using EXFAT for my Windows machines (i.e. not at all). It has zero features except journaling which basically everything other than FAT has. And XFS is pretty fast, but possibly the worst of all when it comes to data integrity.

Linux filesystems seem like a quantity over quality thing to me. And most variants of Unix are like Linux without the quantity part i.e. most of them have only 1-2 filesystems that are crappy instead of 34734905 of them. At least FreeBSD devs had the decency to admit that ZFS was better than what they had, and implemented it. No need to reinvent the wheel there. BTRFS is like a poor clone of ZFS. Funny that now Oracle 'owns' both of them. Though many (most?) of the Sun guys left after the acquisition, anyway.

BTW, I actually like Linux. But I like Illumian more (for general purpose server purposes - and for home use, at least).

Oh well, BTRFS still has a long journey ahead. The problem is the parent (Oracle).

On Linux you can use many filesystems :) - that is a quality in itself :).
Before ext4 was released, I preferred XFS and ext3 (for the OS partition).
Once ext4 was released and merged into the kernel, I preferred ext4 - just my subjective preference.

The Linux community does not hate ZFS :p. It's just a license conflict, which is why the ZFS on Linux project was born: to provide native ZFS on Linux despite the GPL licensing issue.

BTRFS is totally different from ZFS :p.

That is the reason "best" is a subjective matter for us.
 
Nearly every all-in-one build uses ZFS. Is it possible to do this with Linux software RAID as well? Why is ZFS almost universally used?
One of the reasons ZFS is often used is that it does not need additional monetary investment. You don't need to buy hardware RAID cards or anything. Just plug your disks into the mobo and that is it.

A second reason is that ZFS is much safer than other storage solutions. For instance, hardware RAID can be unsafe and might corrupt your data. Ordinary Linux filesystems such as ext, XFS, JFS, etc. do not protect against data corruption very well. There is research on data corruption with Linux filesystems and hardware RAID, and research also shows that ZFS protects very well against data corruption:
http://en.wikipedia.org/wiki/ZFS#Data_Integrity
 
Just my 2 cents...

Yes - Linux mdadm works great in an ESXi all-in-one box. I had that running for nearly a year and it was the best experience I've had so far. I just recommend spending some money on good hardware. I have a SuperMicro X9SCM-F + Xeon 1230 CPU + 16GB ECC RAM + 2x Intel SASUC8i controllers.

ZFS is a great file system. I also thought that, and indeed I had it running for a few weeks before I moved to ESXi. I dropped it because I simply don't like the environment (Solaris, OpenIndiana). I'm too used to Linux, it seems. I miss great tools like saidar, a nice screen config (yes, this is available in Solaris 11, but I got some weird mode errors) and, what I missed most, cp -v (the verbose mode is not available in Solaris). However, I decided to give it one more try 3 weeks ago, because of stability concerns and some rare damaged photos, which is truly a mess and makes me angry.

So I set up a 2nd PC running Solaris and attached 6x1TB drives from WD/Hitachi. Everything looked fine, so I moved the data over - which was pretty quick - within a few hours, 4.5TB had been moved. A few hours later, I saw the first "too many errors". A faulty drive was shown. Hmm... luckily I had a spare one, so I replaced it and resilvered overnight. The next morning, I thought: now everything is fine and I can start putting my 6x 1.5TB HDs into my SuperMicro server tomorrow and move the data back to the real server (the 2nd one was just temporary).

The next morning, I grabbed my MacBook Air, logged into the Solaris temp server - and saw many, many errors. It looked like another HD was also damaged. Unrealistic, but possible. I had set up the ZFS vdev as one single RAIDz1, so sure - my fault - I should have chosen RAIDz2... But this was not the first time I had issues with ZFS. I had crashes now and then, but always on "cheaper" hardware like Gigabyte mainboards - the main reason I switched to some beefier stuff like SuperMicro.

The result: I lost about 1.3TB of data. Luckily it was just self-ripped movies from our DVD & BR collection - all the important data was safe. Of course I had no backup of the rips, which is not a problem. So I decided to switch back to Debian and to get the remaining data off the ZFS pool ASAP. I used "badblocks" & "smartctl" intensively, identified the 2 damaged drives among the 1TB HDs and sorted them out, then I created a new mdadm array - so far, it works fine. I don't say that ZFS is bad. No, it's not! But to my own surprise it killed a lot of my stuff, and that does not create a warm feeling of having my data "secure and safe" on a ZFS vdev.
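For anyone wanting to do the same kind of triage, a minimal sketch of how badblocks and smartctl are typically used (the device name /dev/sdb is just an example):

Code:
# SMART health, error log and reallocated-sector counters
smartctl -a /dev/sdb

# kick off a long self-test, then read the result later
smartctl -t long /dev/sdb
smartctl -l selftest /dev/sdb

# non-destructive read-only surface scan (-w would be a destructive write test)
badblocks -sv /dev/sdb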

To come back to the topic: just be sure the mainboard supports ECC RAM (and get it!), and be sure it supports VT-d so you can pass a controller through directly under ESXi.
 
Very strange story. It seems unrealistic that you had two disk errors in such a short time. I guess the problem is somewhere else; maybe a faulty power supply, or you did not insert the disks correctly, a cable was loose, etc. There are many reports of erratic disks where, in the end, it came down to faulty hardware or something similar. Here is one story where two disks go bad in a short time:
https://blogs.oracle.com/elowe/entry/zfs_saves_the_day_ta

And yes, you should always use raidz2 (raid-6). I hope you have raid-6 on your new system now.
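For reference, a minimal sketch of creating such a double-parity pool; the pool name and the six device names are just examples (Solaris-style names shown, Linux would use /dev/sd* or /dev/disk/by-id/* instead):

Code:
# RAID-Z2: any two drives in the vdev can fail without losing the pool
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zpool status tank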

A question: are you sure that your computer does not still corrupt your data on your Linux box? I mean, when you reinstall Linux on that box, maybe you still have data corruption - but Linux will not tell you, because ext3 might not notice the data corruption. I have heard about people getting error reports (but no data loss, because they used raidz2) who then switched to Linux and got no error reports any longer. Probably their Linux filesystem did not detect the ongoing errors.

ZFS is so sensitive that it detects the slightest error. No other filesystem does that. Just because you don't see error reports in Linux, it does not mean there are no errors. Or?
 
Hey brutalizer,

It's true. The risk is there that the new system still corrupts my files silently, and that is a bad feeling. That's the reason I switched to ZFS initially.

I also fully agree with you that just because I don't see the errors, they may still exist (dark matter exists, even if we don't see it ;)).

This "rescue" system is a temporary one - the goal is to move the stuff back to the SuperMicro server (as you can read in my signature). So far - I'd planned to continue to use Debian.. hmm you words make me thing - AND THAT IS BAD! ;)

The current RAID looks like this (RAID6):
Code:
root@scotty:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid6 sdb[0] sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
      3907045376 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      
unused devices: <none>
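If you want more detail than /proc/mdstat gives, mdadm itself can report per-member state; a quick sketch against the array above (the member device name is an example):

Code:
# overall array state, including failed/spare/rebuilding members
mdadm --detail /dev/md0

# superblock of an individual member disk
mdadm --examine /dev/sdb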

I will read the linked post above.

One question - what does your setup look like?
 
Very strange story. It seems unrealistic that you had two disk errors in such a short time. I guess the problem is somewhere else; maybe a faulty power supply, or you did not insert the disks correctly, a cable was loose, etc. There are many reports of erratic disks where, in the end, it came down to faulty hardware or something similar. Here is one story where two disks go bad in a short time:
https://blogs.oracle.com/elowe/entry/zfs_saves_the_day_ta

And yes, you should always use raidz2 (raid-6). I hope you have raid-6 on your new system now.

A question: are you sure that your computer does not still corrupt your data on your Linux box? I mean, when you reinstall Linux on that box, maybe you still have data corruption - but Linux will not tell you, because ext3 might not notice the data corruption. I have heard about people getting error reports (but no data loss, because they used raidz2) who then switched to Linux and got no error reports any longer. Probably their Linux filesystem did not detect the ongoing errors.

ZFS is so sensitive that it detects the slightest error. No other filesystem does that. Just because you don't see error reports in Linux, it does not mean there are no errors. Or?

ext3 is just a filesystem; it will only report when something happens at the filesystem level.
mdadm is in charge of detecting failures on the hard drives.
Errors will be thrown by mdadm when it sees something wrong on the RAID :D

I guess you should step back from the all-in-one ZFS view :) and look at the bigger picture of how mdadm and a filesystem (ext3, ext4, XFS, or others) work together on Linux :)

In my understanding, there is no "sensitivity" in the system; it is just the way we configure the system.

 
Hey brutalizer,

It's true. The risk is there that the new system still corrupts my files silently, and that is a bad feeling. That's the reason I switched to ZFS initially.

I also fully agree with you that just because I don't see the errors, they may still exist (dark matter exists, even if we don't see it ;)).

This "rescue" system is a temporary one - the goal is to move the stuff back to the SuperMicro server (as you can read in my signature). So far - I'd planned to continue to use Debian.. hmm you words make me thing - AND THAT IS BAD! ;)

The current RAID looks like this (RAID6):
Code:
root@scotty:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid6 sdb[0] sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
      3907045376 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      
unused devices: <none>

I will read the linked post above.

One question - what does your setup look like?

My suggestions for running software RAID on Linux (mdadm, ZFS, or others):
1) Pick an HBA card. Stay away from onboard SATA (I have twice seen motherboards develop random problems with their onboard SATA ports after 2-3 years of 24/7 use).
2) Pick a good motherboard, or an entry-level server motherboard.
3) Pick a motherboard that supports ECC.
4) You will enjoy many trouble-free years.
5) The hard drive is the main point of failure :p heheh. Set up your system to notify you when the software RAID manager/monitor sees errors on the RAID.
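For point 5 with mdadm, the built-in monitor mode covers this; a minimal sketch (the mail address is an example):

Code:
# /etc/mdadm/mdadm.conf - mdadm's monitor will mail on Fail/DegradedArray events
MAILADDR admin@example.com

# or run the monitor explicitly as a daemon
mdadm --monitor --scan --daemonise --mail admin@example.com --delay 1800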
 
Without wanting to hijack this topic...

My suggestions for running software RAID on Linux (mdadm, ZFS, or others):
1) Pick an HBA card. Stay away from onboard SATA (I have twice seen motherboards develop random problems with their onboard SATA ports after 2-3 years of 24/7 use).
Did! Intel SASUC8i w/ LSI FW :)

2) Pick a good motherboard, or an entry-level server motherboard.
3) Pick a motherboard that supports ECC.
Also done in the SuperMicro server. X9SCM-F + Xeon E3-1230 + 16GB ECC RAM

4) You will enjoy many trouble-free years.
5) The hard drive is the main point of failure :p heheh. Set up your system to notify you when the software RAID manager/monitor sees errors on the RAID.

So true :( I have seen so many HDs dying in the last few months. I got most of them between 2008 and 2011 - looks like they are reaching EOL now.
 
One question - what does your setup look like?
I have just an Intel Q9450 with a Gigabyte P45 mobo, and a PCI-X SATA2 HBA card with 8 disks of 1TB each. I only get about 150MB/sec because my mobo does not have PCI-X, which means the card degrades to plain PCI. I have not had any problems at all. Never.

Sometimes I read posts about several disk errors in a short time, and it always turns out to be some other hardware error: power supply, HBA, or something else. The chance of different disks going bad at the same time is very small. It is probably some common piece of hardware that has gone bad. Or cables that are not fully seated, or are loose. Or whatever.

My point is: when several different disks error, the problem is usually not the disks themselves.
 
Is the ZFS bit flip issue really that bad? I'm storing media files (movies, music) and user files (pictures, personal, important stuff).

Am I likely to see an issue on a 5TB array with 16GB of non-ECC memory?
 
ZFS is not prone to bit flipping. The issue is that it is kinda silly to have a FS that detects and corrects corrupted data it reads from disk, except that it can't when the memory was corrupted and it wrote garbage to disk to begin with.
 
If you used a 1.4 MB DOS floppy with a FAT16 filesystem on it, you would not care.
A simple power failure or eject can destroy the whole filesystem.

More robust filesystems like NTFS will mostly survive this, and only some data may be lost. One problem remains: on a large filesystem you get silent errors caused by magnetic fields, radioactivity, or pure random errors. Some of them may be discovered and repaired by an offline file check - others not, because there are no checksums on the data to discover these errors.

With ZFS all these errors are reported and repaired online, without you needing to do anything, either on access or via regular online scrubs. You trust this system, and you can trust it. But without ECC it is possible that a RAM bit flip changes data before it reaches ZFS; in the extreme case it can damage part of the filesystem. You should not expect a total failure like with FAT, but data loss or wrong data is possible - and avoiding this is the main reason you use ZFS in the first place.

So use ECC to complement the efforts ZFS makes on the disk side.
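For the scrub part, a minimal sketch assuming a pool named tank (the zpool path in the cron line may differ per OS):

Code:
# run a scrub now and watch its progress / any repaired errors
zpool scrub tank
zpool status tank

# crontab entry: scrub every Sunday at 03:00
0 3 * * 0 /usr/sbin/zpool scrub tank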
 
And if ECC is not possible? Is there a better alternative?

How likely IS bit flipping? - Sorry for the noobish questions, I am just struggling to find GOOD information for my very small needs.
 
You will see all kinds of different stats about bit flipping and modern memory - I am far from an expert on that. For modern chipsets and RAM, ECC is not that much more expensive, so if you are willing to spend on a CPU/mobo/RAM that support it, I'd recommend it; otherwise, if it's not mission critical, don't worry about it.
 
And if ECC is not possible? Is there a better alternative?

How likely IS bit flipping? - Sorry for the noobish questions, I am just struggling to find GOOD information for my very small needs.


It's like any statistical failure: it becomes more likely the more RAM you have.

And better options?
Don't think about it and hope for the best - there is no other option.
 
It's like any statistical failure: it becomes more likely the more RAM you have.

And better options?
Don't think about it and hope for the best - there is no other option.

Thank you gea, I really appreciate you helping me. I am going to bite the bullet and go for ZFS with napp-it on OI, and just make sure I use SkyDrive for the very important stuff.

Many thanks
 
ZFS is not prone to bit flipping. The issue is that it is kinda silly to have a FS that detects and corrects corrupted data it reads from disk, except that it can't when the memory was corrupted and it wrote garbage to disk to begin with.

Uhh... I know a lot of people here seem to believe that, but it's wrong. Hard drives are MUCH more likely to get corrupted than memory is. ZFS is a HUGE improvement over any other filesystem, even without ECC memory. ECC memory is a further improvement, but not NEARLY as important as ZFS itself is. Besides, disk corruption resulting from memory corruption is likely to be correctable by ZFS to begin with.

If I had to quantify it, I would say that having ZFS is one thousand times more important than having ECC.
 
Uhh... I know a lot of people here seem to believe that, but it's wrong. Hard drives are MUCH more likely to get corrupted than memory is. ZFS is a HUGE improvement over any other filesystem, even without ECC memory. ECC memory is a further improvement, but not NEARLY as important as ZFS itself is. Besides, disk corruption resulting from memory corruption is likely to be correctable by ZFS to begin with.

If I had to quantify it, I would say that having ZFS is one thousand times more important than having ECC.

wrong, how? you may think i'm overstating the case, but there is nothing factually incorrect in what i said. when you say 'likely to be correctable', that is, imo, mistaken. if ram is corrupt, zfs can only checksum what is in it - why would you think reading this back off disk will change that?
 
If we're talking data that is on the disk, ZFS is going to do its comparisons in memory, and it does not matter if corruption happens from the disk or in the memory itself. Yes, of course not every single memory operation is dealing with data that was read from the disk and ECC does help in additional cases. But ZFS itself will correct some memory corruption.

I meant that it was wrong to overstate it as such. My bad for using the 'wrong' wording to say that.
 
I have just an Intel Q9450 with a Gigabyte P45 mobo, and a PCI-X SATA2 HBA card with 8 disks of 1TB each. I only get about 150MB/sec because my mobo does not have PCI-X, which means the card degrades to plain PCI. I have not had any problems at all. Never.

Sometimes I read posts about several disk errors in a short time, and it always turns out to be some other hardware error: power supply, HBA, or something else. The chance of different disks going bad at the same time is very small. It is probably some common piece of hardware that has gone bad. Or cables that are not fully seated, or are loose. Or whatever.

My point is: when several different disks error, the problem is usually not the disks themselves.

It is rare on SAS hard drives.
But...
On consumer SATA... more than one disk getting corrupted is not rare.
I am assuming SATA hard drives that work very hard, mostly writing and some reading.

When we are talking about a homebrew server at home that mostly does reading... more than one disk failing/corrupting is rare (*just my observation*).

The culprit is the write cache, as far as I am aware.
I always set the HD write cache to off :D via the smartctl command or the hardware RAID controller (if I'm using one).
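On Linux, hdparm is the tool most people reach for to do this on SATA drives; a minimal sketch (the device name is an example, and many drives forget the setting after a power cycle):

Code:
# show the current write-cache setting
hdparm -W /dev/sdb

# turn the volatile write cache off
hdparm -W 0 /dev/sdb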

SAS drives always have the write cache turned off by default.
I have seen (I don't remember the model) some non-consumer SATA drives that turn the write cache off by default.

I have seen some consumer SATA drives where the write cache cannot really be turned off; the drive just lies (it says "OK, it's off now" after receiving the command) when the smartctl command is sent.

:D You can find a detailed explanation in my previous posting in the HP SAS expander thread.
 
So true :( I have seen so many HDs dying in the last few months. I got most of them between 2008 and 2011 - looks like they are reaching EOL now.

Speaking as a home consumer:
sadly true :(
There is no good reason for them to shorten their warranties - just my conspiracy theory :p

I have already had 6 failed SATA drives within 4 years (in my homebrew server) :D - mostly Seagate and 1 Western Digital.

Will I still buy Seagate? Why not... just give me a good bargain price and I will bite :D
 
Quick question regarding going with ZFS.

ZFS is how I will present storage to my ESXi 5 host.

When I create a VM, will it create a virtual hard disk on the ZFS host, with NTFS?
 
And if ECC is not possible? Is there a better alternative?

How likely IS bit flipping? - Sorry for the noobish questions, I am just struggling to find GOOD information for my very small needs.
Here is GOOD information on random bit flips, ZFS, and ordinary filesystems. :)
http://en.wikipedia.org/wiki/ZFS#Data_Integrity


ZFS will enable the write cache on each disk when it is given whole disks, because ZFS flushes the cache itself when needed. However, if the disk is partitioned, then ZFS will not touch the write cache, because there might be other OSes or filesystems on the disk that rely on the current setting.


For ESXi and ZFS, there is a huge thread about it and a good how to for beginners. Start from the beginning:
http://hardforum.com/showthread.php?t=1573272
 
Linux software RAID is really stable (based on my experience), but it doesn't have many features (ZFS has many features).

It's been 100% rock solid for me for about 2 years now. It was other hardware in the old server that died; I just plugged the hard drives into the new system, did an mdadm --scan --reassemble (something like that), and 10 seconds later I mounted /dev/md0 and all was online again.

It took longer to reinstall NFS. :p
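For reference, the exact command is mdadm's assemble mode; a minimal sketch (the mount point is an example):

Code:
# scan for array members and assemble any arrays found in mdadm.conf / on disk
mdadm --assemble --scan

# then mount the md device as usual
mount /dev/md0 /srv/storage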





One of the reasons ZFS is often used is that it does not need additional monetary investment. You don't need to buy hardware RAID cards or anything. Just plug your disks into the mobo and that is it.

A second reason is that ZFS is much safer than other storage solutions. For instance, hardware RAID can be unsafe and might corrupt your data. Ordinary Linux filesystems such as ext, XFS, JFS, etc. do not protect against data corruption very well. There is research on data corruption with Linux filesystems and hardware RAID, and research also shows that ZFS protects very well against data corruption:
http://en.wikipedia.org/wiki/ZFS#Data_Integrity

In my case, I'd need to replace my hard drives to use ZFS, hence why I'm hoping to use software RAID as an interim solution until I can afford the drives. I wasn't aware that ZFS had those extra safety features; that is very good to know for future use, since in my case that's the most likely source of errors.

That and the snapshots will be a godsend for me.


Quick question regarding going with ZFS.
ZFS is how I will present storage to my ESXi 5 host.
When I create a VM, will it create a virtual hard disk on the ZFS host, with NTFS?

The virtual hard disk can be stored on any filesystem, since it's virtual. I don't know the technical bits, but most likely it's NTFS emulation?
 
Generally, an esxi vmdk will be presented as a scsi disk to the guest (maybe ide if windows xp), so it's up to the guest to specify filesystem type.
 
In my case, I'd need to replace my hard drives to use ZFS, hence why I'm hoping to use software RAID as an interim solution until I can afford the drives.
You mean you need to have other disks to move your data to, and reformat your old disks with ZFS/software raid, and then move all data back to the raid?

Can't you borrow some disks from a friend over a weekend to do this?
 
Sorry for the noob question, but I don't understand how you guys are storing everything on ZFS filesystems within ESXi using passthru. Do you pass the physical disks through to a guest OS that supports ZFS, create the filesystem, and then re-export it via iSCSI or NFS to the ESXi host? Doesn't this create a lot of processing overhead?
 
Sorry for the noob question, but I don't understand how you guys are storing everything on ZFS filesystems within ESXi using passthru. Do you pass the physical disks through to a guest OS that supports ZFS, create the filesystem, and then re-export it via iSCSI or NFS to the ESXi host? Doesn't this create a lot of processing overhead?
As I understand it, your description is correct. Solaris is virtualized and gets access to all the disks via passthrough. Then Solaris creates storage on the ZFS raid, and every ESXi guest can access that storage.

What overhead? Not much overhead in ESXi. People are using this very successfully. Look at the biggest thread here, called "OpenSolaris napp-it" or something.
 
Sorry for the noob question, but I don't understand how you guys are storing everything on ZFS filesystems within ESXi using passthru. Do you pass the physical disks through to a guest OS that supports ZFS, create the filesystem, and then re-export it via iSCSI or NFS to the ESXi host? Doesn't this create a lot of processing overhead?

You pass a disk controller (IBM M1015 in my case) through to the VM by marking it as a passthru device in ESXi advanced server settings, then you edit the VM, "Add PCI Device" and select it. Your VM will now show the hardware controller as if it were physically part of that machine. ESXi will still see that it is there and configured for that VM, but it will be unable to use the device directly. You could set up an iSCSI or NFS share for VMWare so that your other VMs could be stored on your ZFS setup if you want, but note that VMWare will not let you create any VMs without a data store, so you will need at least one disk connected to the ESXi system so you can create your storage VM. Even if you're passing through a disk controller and don't need a VMWare virtual HD on your storage box at all, it's still too stupid to let you do that. Hopefully they will fix that bug (they'll call it intended behavior - I'll call it an intended bug) in the next version.

Since it shows as a physical device on the relevant VM, all of the disks connected to that controller will show up in the VM as if they were physically there. You will see your actual disk names and information, not VMWare stuff. (You can, however, use a combination of VMWare disks and the aforementioned 'physical' disks and then you would see both.)
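If you then want other VMs to live on that ZFS storage, here is a minimal sketch of looping it back to ESXi over NFS; the pool, dataset, IP and datastore names are all examples, and the exact sharing syntax can vary between Solaris/illumos releases:

Code:
# on the storage VM: create a dataset and export it over NFS
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore

# on the ESXi host: mount it as an NFS datastore
esxcli storage nfs add --host 192.168.1.10 --share /tank/vmstore --volume-name zfs-vmstore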
 
You mean you need to have other disks to move your data to, and reformat your old disks with ZFS/software raid, and then move all data back to the raid?

No, but my drives are 1TB WD Greens and everything I've read tells me that ZFS and these don't play well together, something about sector size or something like that (I read up on it a while back).
 
No, but my drives are 1TB WD Greens and everything I've read tells me that ZFS and these don't play well together, something about sector size or something like that (I read up on it a while back).

It depends - if you have 4K-sector disks, this will slow down ZFS performance A BIT. Honestly, it's not that bad and a bit overhyped.
 
No, but my drives are 1TB WD Greens and everything I've read tells me that ZFS and these don't play well together, something about sector size or something like that (I read up on it a while back).

For RAID-Z*, you want 128KB divided by the number of data drives (so n-1 for RAID-Z, n-2 for RAID-Z2, etc.) to come out to a multiple of the sector size of your hard drives.
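A worked example, assuming 4K-sector drives like the newer WD Greens: a 5-disk RAID-Z has 4 data drives, and 128KB / 4 = 32KB per drive, a clean multiple of 4KB. A 6-disk RAID-Z2 also has 4 data drives, so it works out the same. A 5-disk RAID-Z2, though, has only 3 data drives, and 128KB / 3 is roughly 42.7KB, which is not a multiple of 4KB, so you get padding and lose some space and performance.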
 