Any advice?

Decided two weeks ago to build my own ZFS file server to replace my aging Debian box. My old file server served as an iSCSI and NFS share for my vSphere lab plus a CIFS share for my family to store pictures, music, etc. Several iterations of ZFS-based file servers should easily be able to handle these tasks, so I set about doing some research and bought the following hardware:

AMD Athlon II X2 240 CPU
ASUS M4A89GTD PRO/USB3 motherboard
4x 4GB DDR3 RAM
8x WD5000AAKX 500GB hard drives (4k sector drives)
4x Samsung F2 1.5TB hard drives
2x Sil3124 SATA cards for extra SATA ports
2x Intel Pro/1000 PT dual port Gb NICs
1x Intel Pro/1000 GT single port Gb NIC

First up to bat is FreeNAS. The install went smoothly and I began to set up a zPool of the 8x 500GB drives. From there I created a volume through the command line to use for iSCSI and a ZFS dataset to use for NFS.
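
For anyone curious, the rough shape of that setup from the command line is something like the following; the pool, dataset, and device names here are just placeholders, not necessarily what I typed:

zpool create tank raidz1 da0 da1 da2 da3 da4 da5 da6 da7   # 8-disk RAIDz1 pool
zfs create -V 500G tank/iscsivol                           # zvol to export as an iSCSI extent
zfs create tank/nfs                                        # regular dataset to share over NFS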

Speeds with FreeNAS were pitiful. I could get good sequential read and write speeds, but after I migrated a single virtual machine to either the NFS or iSCSI datastore, the best performance I could get inside the VM was over iSCSI, to the tune of 30MB/s with 20ms latency in HD Tune. Sequential reads and writes within the VM were roughly 80MB/s, but that was still lower than what I got with my Debian setup.

NFS performance was terrible. The NFS datastore reported 200ms of latency from VMware and I could only get around 5MB/s of performance from it.

So I decided to install Nexenta Community Edition instead.

After determining that my 500GB drives needed to be wiped to erase all traces of the FreeNAS ZFS data, I finally got Nexenta to install. However, the GUI is total crap. Upon first connection it takes me through a wizard to set up my network interfaces, passwords, etc. and then wants me to make my first ZFS pool. I click on my 500GB drives and tell it to make a RAIDz1 pool and *poof* the GUI locks and will no longer respond. SSH and local login no longer work either.

After a reboot I can log in via SSH and the console and see that it created my ZFS pool; however, any attempt to connect to the GUI results in either the wizard re-appearing or an hourglass saying "waiting to establish NMS connection."

Quite frankly I'm getting sick of dealing with this and am close to blowing this all away and reinstalling Debian to replicate my previous setup. At least that was reliable and performed reasonably well.

What am I doing wrong? Why am I having such a hard time getting ZFS to work? Are FreeNAS and Nexenta just shit and I shouldn't have even bothered trying either of them? Are my 4k drives to blame? I read they can cause performance issues if not aligned properly, but FreeNAS reported ashift=12, which should mean the alignment was OK.
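
For reference, the ashift of an existing pool can be double-checked with zdb (the pool name below is just an example):

zdb -C tank | grep ashift   # ashift: 12 means 4K-aligned vdevs, ashift: 9 means 512-byte sectors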
 
I have been trying something similar recently on a similar system, though with only 2GB of RAM and my now-unlocked Limpron 140 "dual core" aka 4400e. My WHS v1 is acting up, and I generally don't like how it operates; it thrashes the crap out of the hard drives during the night. Yesterday it locked up completely for the first time ever (since June 2009) and I had to reboot it.

I've looked at and liked unRAID, but have only tested within the free version's 3-drive limit.

I tested briefly with FreeNAS as seen in the thread below (no replies). I basically opened up the box so I could connect from Win7; files transferred at 50-55MB/s. No iSCSI was set up on this.
http://hardforum.com/showthread.php?t=1626452

I installed Ubuntu 11.10 Alpha 3 server and wanted to try btrfs, but I was having problems and ended up just using ext4 for testing. I set up Webmin but didn't like using it to configure Samba; I was connecting via https://FileServer:10000, but then Webmin wouldn't connect unless I used https://192.168.0.107:10000.

I was thinking of using Solaris 11 Express, but I read that Oracle doesn't offer any updates for it unless you grease Larry Ellison's palm. Don't know if that's true or not.

I ended up wiping it and am installing Windows Server 2003 x64 right now to see how it goes. I had it sitting here and thought, what the heck. I want to see if/how iSCSI works on it, which I know zero about. If I don't like it, I might use WHS 2011 since it's cheap; it was only about $50 on Newegg the other day. I also have an Intel Pro/1000 PCIe x1 NIC installed, with a couple more if needed.

Sorry I can't be more help, but misery loves company!! :D
 
Anyone have any advice or insights? I've dropped $1,000 buying this new hardware and ZFS just seems to be a massive fail.
 
2x Sil3124 SATA cards for extra SATA ports

Are these PCIe cards or PCI? I ask because a single hard drive from 2009 or newer will need more bandwidth than the PCI bus can deliver.
 
You may still want to check the read bandwidth of each drive while they are all running at the same time. Does your version of dd display the transfer rate in MB/s at the end of the transfer? Like this:

http://hardforum.com/showpost.php?p=1037564636&postcount=3

In your case you would want to do a read test of all of your drives at the same time and look at the bandwidth of each. On Linux I use the screen command to create one console session per drive and then run a similar dd benchmark for each drive.

BTW, I used this method on a server with a 133MHz 64-bit PCI-X card and found out the card only provided PCI bandwidth of around 120MB/s instead of the roughly 1GB/s that the bus should have provided.
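
A quick-and-dirty alternative to one screen session per drive is to background one dd per disk and let them all run at once; the device names are examples, so substitute whatever your OS calls the drives:

for d in ada0 ada1 ada2 ada3; do
  dd if=/dev/$d of=/dev/null bs=1M count=4096 &   # raw sequential read from each disk
done
wait   # each dd prints its own transfer stats when it finishes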
 
When FreeNAS was installed I ran "dd if=/dev/zero of=/mnt/zPool1 bs=1M count=10240" and got 240-300MB/s. I'm sure cache had a big part to play in those numbers, but the theoretical maximum across the two PCIe x1 slots the SATA cards are installed in is 500MB/s, so raw bandwidth isn't the issue. The latency and horrid access times were the problem.
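
For what it's worth, one way to keep the cache from inflating dd numbers is to write a file a good deal bigger than RAM and then read it back; the path and size below are just examples for a 16GB box, and this assumes compression is off so the zeroes don't simply compress away:

dd if=/dev/zero of=/mnt/zPool1/testfile bs=1M count=40960   # ~40GB write, well past 16GB of RAM
dd if=/mnt/zPool1/testfile of=/dev/null bs=1M               # read it back; the cache can't hold it all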
 
Don't waste any more time with FreeNAS or Nexenta.

Use real SE11 or OpenIndiana and the Napp-it web-based UI from _Gea. Should work fine. I loaded it on an X2 240 when I was trying things out and it went very, very well.

Not sure if it will like those Silicon Image SATA cards or not, but they probably don't account for much of your total spend - pick up an LSI 9240 OEM card from eBay (e.g., an IBM M1015) for about $80, flash the firmware, and you're off to the races.
 
Last night I managed to delete the existing partitions on my 8x 500GB drives and write zeroes to the first couple of GB of each. Then I installed OpenIndiana and Napp-It and was able to create my 8-disk RAIDz1 pool.

iSCSI performance is screaming. NFS has good throughput but very high latency. From what I'm reading, this is due to the ZIL. Apparently the only ways to improve NFS-on-ZFS performance are to disable the ZIL (which I'd rather not do) or move the ZIL to its own drive.

Anyone else have any experience with improving NFS performance on ZFS? Are those my only options?
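
For anyone else following along, moving the ZIL onto its own device is apparently just a one-liner once you have something fast to put it on; the pool and device names below are placeholders for an OpenIndiana-style setup:

zpool add tank log c2t1d0   # dedicate a fast device (ideally an SSD) as a separate log
zpool status tank           # it then shows up under its own "logs" section
# on recent pool versions a log device can be removed again with: zpool remove tank c2t1d0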
 
What about getting an IBM BR10i or M1015 and flashing it with the IT firmware, instead of the crappy SYBA PCIe SATA controllers? You can pick these up cheap from Great Lakes Computers or eBay.

The rest of the hardware looks pretty good.
 
Running Bonnie++ from Napp-It gives me 355MB/s Seq Read and 205MB/s Seq Writes. Is it a pretty sure bet that replacing the SATA cards with a Br10i or M1015 is going to improve performance? Not only would I need to get the card, but also find a bracket and two breakout cables.
 
Running Bonnie++ from Napp-It gives me 355MB/s Seq Read and 205MB/s Seq Writes. Is it a pretty sure bet that replacing the SATA cards with a Br10i or M1015 is going to improve performance?
If you're connected via GigE, higher sequential rates won't matter. I'm also not sure whether a different controller would influence latency, etc.
When you say "NFS has good throughput but very high latency", did you measure read, write, or both? What numbers did you get? How was the test set up?

Not only would I need to get the card, but also find a bracket and two breakout cables.
Great Lakes Computers offers a PCI bracket for $5 extra (they also have the SFF-8087 cables, but might get them cheaper @ monoprice.com).

-TLB
 
NFS read latency has been acceptable at ~8ms. NFS write latency, however, I've seen spike up to 274ms. Right now I have a single Windows 2008 R2 domain controller virtual machine running on one of the NFS shares and it's averaging 2.5ms read and 25ms write as the machine idles.

Meanwhile, another VM is running VMware vCenter on an iSCSI share and is averaging 5ms read and 4ms write latency as it idles.

These latency numbers are coming from VMware's built-in performance monitoring.

I would be willing to get a different SATA card if I knew it would help. However, since I'm only seeing high latency from NFS and not iSCSI, I can only conclude that my SATA cards are fine and it's just an issue with NFS on ZFS.
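
For what it's worth, the same latencies can be cross-checked straight on the host with esxtop, roughly like this:

esxtop   # on the ESX/ESXi host (or resxtop from the vMA)
# press 'u' for the per-device view or 'v' for the per-VM disk view,
# then watch DAVG/cmd (device latency) and GAVG/cmd (latency as the guest sees it)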
 
Would a write cache or more memory help the write latency? You could disable write caching to check, but you probably don't want to leave it off.

NFS uses synchronous writes, which benefit from a dedicated log device (ZIL); not sure about iSCSI.
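
One quick (and strictly temporary) way to check that before spending money is to flip the sync property on the NFS dataset, assuming your ZFS version is new enough to have it; the dataset name below is a placeholder:

zfs set sync=disabled tank/nfs   # testing only - a crash can lose the last few seconds of writes
# ...re-run the NFS latency test...
zfs set sync=standard tank/nfs   # back to the normal, safe behaviour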
 
I've got 16GB of RAM so that should be fine.

I think I'll invest in a 64GB SSD drive to use as ZIL. That should help.
 