zfs storage system help

shnelson · Limp Gawd · Joined Feb 10, 2012 · Messages: 145
I'm at my wit's end here, trying to build a storage system, primarily for CIFS, to house system backups and media files for streaming across the home network.

I've tried Nexenta and FreeNAS, both with very disappointing read speeds of <1MB/s; we're talking 400-500KB/s tops on a 1Gb link.

Write speeds to the same share are a little better, but vary wildly between 20 and 35MB/s.

System specs:
Super Micro PDSME+ Board
Intel(R) Core(TM)2 CPU 6300 @ 1.86GHz
8GB ECC Memory
SUPERMICRO AOC-SAT2-MV8 64-bit PCI-X133MHz SATA II (3.0Gb/s) Controller Card
4x 1TB WD Green drives
2x 30GB Crucial SSD (intended to be ZIL mirror)

The board has two onboard Intel GbE NICs, and I've also tried an add-in dual-port Intel NIC.

I get the same performance results across any one of the interfaces, as well as when I carve each drive out as its own volume and CIFS share to rule out a faulty hard drive.
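For reference, here's roughly how that single-drive isolation test looks on a Solaris-derived system like Nexenta (the pool and device names are placeholders; check yours with `format`):

```shell
# Create a throwaway single-disk pool to isolate one drive
# (c2t0d0 is an example device name)
zpool create testpool c2t0d0
zfs create testpool/share

# Quick local sequential write/read test, bypassing the network entirely:
dd if=/dev/zero of=/testpool/share/testfile bs=1M count=4096
dd if=/testpool/share/testfile of=/dev/null bs=1M

zpool destroy testpool
```

If the local dd numbers are healthy but CIFS is still crawling, the disks are off the hook and the problem is in the network or protocol layer.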

Really not sure where to look from here, any suggestions would be greatly appreciated!
 
What's the NIC in the client PC?

Have you ruled out a network-related issue with a network benchmark program like iperf/jperf?

What do you get when running a benchmark test on the pool?

What's your RAIDZ level?
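A quick iperf run between the NAS and the client would rule the wire in or out (hostname is a placeholder; this is classic iperf2 syntax):

```shell
# On the NAS (server side):
iperf -s

# On the client: run the test in both directions sequentially (-r), 30s each
iperf -c nas.local -r -t 30
```

A healthy GbE link should report somewhere north of 900 Mbits/sec each way; a large asymmetry here points at the network rather than ZFS.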
 
Think I got it sorted out: I had a Zentyal box acting as a gateway that, once removed, alleviated all transfer speed issues.

Now I'm doing some performance testing against the ZIL, which might be a little excessive considering the link speed I'm limited to.

Thanks for the reply!
 
Watch those NICs on that board; for me, I was only able to load them up to 60MB/s and couldn't get them to go faster. Installing a new NIC corrected that.
 
I was going to say, my FreeNAS is running at speeds up to 100MB/s. Sounded like a network issue.
 
I've had a similar problem: write speed about 80 MB/s, read speed <1MB/s... :rolleyes:
 
Something ain't right.

My FreeNAS 9.2 on a slower system than yours gives me a pretty solid 100MB/s+ both ways over GbE and SMB2. 9.2.1 will support SMB3, but I don't see much room for performance improvement.

That's with 6x 2TB drives in a single RAIDZ1 vdev with no ZIL or L2ARC.
 
I have two 80GB HDDs striped, 4GB RAM, Athlon 64 X2 2GHz... I can copy files from the local PC to the NAS with ease (50-70MB/s), but when I copy files from the NAS to the local PC I get 1MB/s speeds. PCI Intel NIC; of course it is gigabit.
 
The root of my problem was running two network segments on the same switch, with Zentyal acting as the gateway for both. The server was 10.0.0.10 and the workstation was 10.0.10.10. This was a temporary setup to test a configuration; obviously something was flawed. After putting everything back on the same segment I'm seeing better performance, but not quite 100MB/s. I'll play around with iperf when I get time to see if there's anything else I can do.


It's worth noting that when I had my issues, they showed up across all protocols to this box: CIFS, NFS, FTP, etc.
 
When you are testing the speeds I would recommend you try it with and without the ZIL. If you don't have a fast ZIL it can slow you down.

Gea has done some benchmarks testing it. http://napp-it.org/doc/manuals/benchmarks.pdf
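Testing with and without the ZIL is just a matter of attaching and detaching the log vdev; a sketch with hypothetical pool and device names:

```shell
# Add the two SSDs as a mirrored log (ZIL) device:
zpool add tank log mirror c3t0d0 c3t1d0
zpool status tank              # note the log vdev's name, e.g. mirror-1

# ...run the benchmark, then remove the log vdev and re-run to compare:
zpool remove tank mirror-1
```

Only synchronous writes (NFS, iSCSI, databases) touch the ZIL, so an async CIFS workload may show little difference either way.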

Thanks for the link - I just bought a pair of 30GB SSDs with better specs because the previous pair of 16GB drives had terrible performance. This is my first intended test now that CIFS performance is sorted out.

I'll be running IOmeter against some VMs running on this storage across iSCSI and NFS. I'd like to use LACP, but don't have the switch ports available to do so yet :(

I had some QLogic and Emulex fibre cards that I was trying to get working for the 4Gb throughput (hence Nexenta), but ran into configuration issues and gave up.
 
What I would do first, is set sync=disabled on your NFS share. That will give you a reasonable baseline for IOPs and such. Then you can worry about the right kind of ZIL.
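Setting that looks like this (dataset name is hypothetical); just remember to flip it back to standard afterwards, since sync=disabled drops the sync-write safety guarantee:

```shell
zfs set sync=disabled tank/nfs   # all writes treated as async; ZIL bypassed
zfs get sync tank/nfs            # verify the setting took

# ...run the IOmeter baseline against the share...

zfs set sync=standard tank/nfs   # restore the default behavior
```

If sync=disabled is dramatically faster than your SSD-backed ZIL, that tells you the SSDs are the bottleneck for sync writes.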
 