I would like to share my current configuration, which has been running for more than 4 years.
On my server rack I run my own temperature and humidity control/sensors,
and I follow, with minor changes, the best practices for data center & server room monitoring.
Humidity range 40%-60%, 35%-40% is...
Totally true when you know what you are doing :)
A bit hard in the beginning... and easy once you know it better.
Gluster or Ceph :)
I am using Ceph because Proxmox prefers Ceph.
NFSv4 has better write performance; dare to look at the presentation and try Linux NFSv4 with a v4 client? You cannot compare it with ZFS, which serves totally different purposes.
I am trying to be objective in this thread.
I can push for clustering. Clustering with Gluster or Ceph is very fast and...
You should read the ZFS on Linux FAQ.
This is the important step:
/dev/disk/by-id/: Best for small pools (less than 10 disks)
or
/dev/disk/by-path/: Good for large pools (greater than 10 disks)
or
/dev/disk/by-vdev/: Best for large pools (greater than 10 disks)
I suggest you use /dev/disk/by-id...
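To make it concrete, here is a minimal sketch of building a pool with by-id names (the pool name "tank" and the ata-* IDs below are placeholders, your disks will show different names):

```
# List the persistent by-id names of your disks (ignore the partition entries).
ls -l /dev/disk/by-id/ | grep -v part

# Create a mirrored pool using the stable by-id names instead of /dev/sdX.
# "tank" and the ata-* IDs are placeholders for illustration only.
zpool create tank mirror \
  /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL_A \
  /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL_B

# Confirm the pool members are listed by their stable IDs.
zpool status tank
```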
A bit OOT:
Gluster and Ceph are complicated :D. There is no way to make them very simple, since they involve a lot of configuration.
Once the system is running, you will be happy with the performance of clustering!
If you are running an NFS v4 server and v4 clients, you will get better performance.
I think this thread is not about ZFS :P...
BTW, I am using ZoL and an NFS v4 server with all v4 clients :D...
Stay away from anything < v4...
2010 presentation ->...
This should be for compatibility; a v4 server can serve v4/v3/v2 NFS clients too...
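As a rough sketch of how to pin the client to v4 (the server name and export path here are placeholders, not from this thread):

```
# Mount with the dedicated nfs4 type...
mount -t nfs4 server.example.com:/export/data /mnt/data

# ...or pin an explicit minor version with the generic nfs type.
mount -t nfs -o vers=4.2 server.example.com:/export/data /mnt/data

# Verify which NFS version was actually negotiated.
nfsstat -m
```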
Since the OP talked about writes to the NFS server, I made the assumption that the workload is writing to the NFS server :).
async is a lot faster than sync.
async is not good because of the risk of write corruption on the NFS server, e.g. if the server crashes or is crippled.
I would not use async when the NFS server is mostly serving writes,
and
I would use async when the NFS server is mostly serving read-only files/data.
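A minimal /etc/exports sketch of that rule of thumb (the paths and client subnet are placeholders):

```
# /etc/exports
# Write-heavy export: keep "sync" so the server commits writes to stable
# storage before replying -- slower, but safe if the server crashes.
/srv/nfs/writes    192.168.1.0/24(rw,sync,no_subtree_check)

# Read-mostly export: "async" is acceptable here since almost nothing is written.
/srv/nfs/readonly  192.168.1.0/24(ro,async,no_subtree_check)
```

Then re-export with `exportfs -ra`.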
just my...
LSI HBA 9240 (IBM1015 OEM) P19 firmware :P...
This is the same as what my friend said: "oh yeah, never happened".
Once he got hit with the identical issue, he was just screaming, and I said "I told you": upgrading to 3TB or greater drives with many drives would cause the problem.
I am happy with the SAS2 backplane :P...
Yes and No...
I had a nightmare on SAS1 with an LSI HBA card; some drives were randomly not recognized, especially 2-3TB drives...
This is just me, but moving to a SAS2 backplane was smooth sailing.
I read it in an Intel PDF spec... before.
Basically, if it is not zero, you need to be cautious.
This is important in a single-SSD configuration.
Rule of thumb: if it is not zero, get a replacement ASAP if possible, and clone to a new SSD.
I am not really worried about waiting when it is on RAID 1.
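For reference, this is roughly how I check it with smartmontools (the device name /dev/sda is a placeholder, and attribute names vary by SSD vendor):

```
# Dump the SMART attributes of the SSD.
smartctl -A /dev/sda

# Watch the reallocated-sector and wear-related attributes; a non-zero
# raw value for reallocated sectors is the warning sign mentioned above.
smartctl -A /dev/sda | grep -i -E 'realloc|wear|percent'
```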