Will 4K sector drives take away some of that gloss, or can ZFS's data integrity mechanisms compensate for bad controllers and bad cables too?
hello all. I have been trying to understand ZFS for about a week or more now while building a server, and the abundance of information is a bit overwhelming. I am still trying to get my head wrapped around using ESXi and having multiple VMs. My two questions (I have many more, but these two apply to this thread) are:
1. A zpool is created with a single vdev (which can be a single disk or multiple disks) or with multiple vdevs. Correct? Are these benchmarks assuming one vdev (i.e. one vdev consisting of 4, 5, 6, 7... disks)?
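To check my understanding, I sketched what I think the commands look like (the pool name `tank` and the `ada*` device names are just placeholders I made up, so please correct me if the layout is wrong):

```
# one pool built from a single 4-disk raidz vdev
zpool create tank raidz ada0 ada1 ada2 ada3

# the same pool later grown with a second raidz vdev;
# writes are then striped across both vdevs
zpool add tank raidz ada4 ada5 ada6 ada7
```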
2. I am a bit confused about the magic number of disks, and this relates to my first question too (is the magic number per vdev, assuming multiple vdevs, or per pool, period?). I plan on building a server in the coming months (which should give me a bit more time to identify what I need/want). I read earlier, sub.mesa, that data should optimally be written in 32-128KB blocks, and yet going to a lot more disks offers better performance. I would have thought the optimal thing for writes is to decrease the per-disk size (going from 32KB to 16KB), since each disk then spends less time writing, as long as it divides evenly into 128KB?
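To make my confusion concrete, here is a little back-of-the-envelope calculation I put together (my own sketch, not from any official ZFS doc; the constant and function names are just my inventions). It splits a 128KB record across the data disks of a RAID-Z1 vdev and checks whether each disk's share is a whole number of 4K sectors:

```python
SECTOR = 4096  # bytes per physical sector on 4K ("Advanced Format") drives

def per_disk_bytes(recordsize: int, n_data: int) -> float:
    """Bytes each data disk receives for one full record."""
    return recordsize / n_data

def aligned(recordsize: int, n_data: int) -> bool:
    """True if the per-disk share is a whole number of 4K sectors."""
    per = per_disk_bytes(recordsize, n_data)
    return per.is_integer() and int(per) % SECTOR == 0

# RAID-Z1 with N total disks has N-1 data disks:
for total in range(3, 11):
    per = per_disk_bytes(128 * 1024, total - 1)
    ok = aligned(128 * 1024, total - 1)
    print(f"{total}-disk RAID-Z1: {per:8.1f} bytes/disk, aligned={ok}")
```

If I ran the numbers right, 3-, 5-, and 9-disk RAID-Z1 (2, 4, or 8 data disks) divide a 128KB record evenly into 4K sectors, while 4-, 6-, 7-, and 8-disk configurations do not. Is that where the "magic numbers" come from?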
Thanks for your help. I have other questions about ZFS, using virtual machines, and this and that hardware, but I am having a hard time finding the correct thread to read up on / post a question in. sub.mesa did a good writeup on ZFS in his ZFSguru thread. Thank you for that. Does anyone know a good thread that talks about using ESXi and VT-d, etc.? I know the saying that if you don't know what it is, you probably don't need it, but I am quite sure many of us didn't pop out of the womb knowing what it was either (and some didn't know what it was because it didn't exist yet). I would like to learn about it and figure out whether I can take advantage of it or not. Thanks again.
So I just wanted to clarify some things for myself. The hardware configuration for a server I'm currently building is as follows:
SuperMicro X8DTE-F
Intel Xeon E5530
12GB DDR3
3Ware 9650SE-16L
3x Hitachi 500GB SATA disks (HW RAID1 + hot spare) (for the base system, running UFS)
My system is currently running FreeBSD 8.2-STABLE with ZFS v15. I started planning on buying some data disks too, and before finding all this 4K sector stuff I ordered myself 7 WD20EARS drives. I planned on using them as 6 drives in one RAID-Z array + 1 hot spare. But now I keep reading here that a 6-drive RAID-Z configuration is not optimal? It would, however, fit so nicely into my remaining 13 free hotplug slots, since I plan to expand later with another 6-drive RAID-Z array while keeping that one hot spare. Should I go with a 6-drive RAID-Z or try something else?
Also I'm very confused about this gnop 4K sector thing. Can someone confirm whether I need to patch my sources to make it work, or will it simply be sufficient to use it prior to zpool create? And will the settings stick?
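Assuming the trick I have been reading about is the gnop one (please correct me if I misunderstand), my impression is that it goes roughly like this, with `tank` and the `ada*` device names just placeholders:

```
# overlay the first disk with a fake 4K-sector provider
gnop create -S 4096 /dev/ada0

# create the pool on the .nop device so ZFS picks ashift=12
zpool create tank raidz ada0.nop ada1 ada2 ada3 ada4 ada5

# the ashift value is stored in the pool metadata, so the
# gnop layer can be dropped afterwards and the setting sticks
zpool export tank
gnop destroy /dev/ada0.nop
zpool import tank
```

If that is right, no source patching should be needed, since the ashift is recorded in the pool itself, but I would appreciate confirmation.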
sub.mesa, what is your take on NCQ on those WD drives? They should be new ones. I don't even need very high transfer rates, since my server is basically for shell use. I would simply love steady operation and decent speeds.
Then a final note: has anyone experimented with wdidle.exe to disable WD power-saving features, like head parking? Or is this even an issue with ZFS?
Sorry for being a little messy, but this is my first post in a long time.
In reply to Zarathustra[H]:
Based on this, am I OK with 4 drives in my array, or should I add a 5th for performance, even though I don't really need the space?
Much appreciated,
Matt