Ask me for ZFS benchmarks

unhappy_mage

[H]ard|DCer of the Month - October 2005
Joined
Jun 29, 2004
Messages
11,455
I've been running my own server for a while now, and its hardware is getting a little long in the tooth for what I'm trying to do with it. So I ordered some new hardware:
  • Opteron 4170
  • 2*8GB DDR3 ECC
  • H8DCL
However, before I put this new machine in production, I figure it'd be a good idea to try out the alternatives and see what produces the best results (for some value of "best"). I'd like to try out a bunch of operating system distributions that support ZFS, and run some kind of benchmark suite over each of them. I have 6 2TB 7200 rpm Hitachi disks sitting unused that I can use as a target, an SSD or two that I can use as L2ARC, and an ACARD ANS-9010B that I can use as slog.
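For reference, once a pool exists, the SSD and the ACARD attach with one command each. A sketch, assuming a pool named tank; the device names are placeholders for whatever your OS assigns:

```shell
# "tank" and the device names below are placeholders; substitute
# the real /dev entries for your OS.
zpool add tank cache c1t0d0   # SSD as L2ARC (second-level read cache)
zpool add tank log c1t1d0     # ACARD ANS-9010B as slog (separate intent log)
```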

Here's where you come in. I know I plan to try a few combinations of OS and virtualization tech, and a few benchmarks. If there are things that you'd like to see me try, I can put them on the list as well. I'll update the lists that follow as I get details.

OS distros
  • Solaris 11 Express - the license says you can't use it for production, but there are people who might actually be willing to pay for a license for it, so I might as well make the measurements.
  • SmartOS - free, modern Illumos, KVM (although that won't work on this hardware... yet).
  • FreeBSD-latest - ZFS and a reasonable package system.
  • ZFS on Linux
  • zfs-fuse

Virtualization
  • ESXi 5 - Latest and greatest free hypervisor from VMware.
  • ESXi 4 - Older free hypervisor; fewer limitations but less capability
  • Bare metal

Disk layouts
  • 3 2-way mirrors
  • 1 6-disk raidz2
  • 1 6-disk raidz
  • 1 5-disk raidz
  • 2 3-disk raidz
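Each of these layouts is one zpool create away. A sketch with placeholder device names d0..d5 for the six Hitachis (run one at a time, destroying the pool between tests):

```shell
# Placeholder device names; one layout at a time, zpool destroy between runs.
zpool create tank mirror d0 d1 mirror d2 d3 mirror d4 d5   # 3 2-way mirrors
zpool create tank raidz2 d0 d1 d2 d3 d4 d5                 # 1 6-disk raidz2
zpool create tank raidz  d0 d1 d2 d3 d4 d5                 # 1 6-disk raidz
zpool create tank raidz  d0 d1 d2 d3 d4                    # 1 5-disk raidz
zpool create tank raidz  d0 d1 d2 raidz d3 d4 d5           # 2 3-disk raidz
```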

Benchmarks
  • bonnie++
  • Fill the filesystem 50% and watch scrub IO bandwidth
  • Homebrew benchmarks: simulate streaming files to many clients, whatever else I think of.
  • tar up a folder with some strange combination of files in it
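Rough sketch of how I'd run the first two (pool name, paths, and user are placeholders; bonnie++'s -s should be at least twice RAM so the ARC can't hide the disks):

```shell
# bonnie++ against the pool; with 16GB RAM, test with a 32GB dataset.
bonnie++ -d /tank/bench -s 32g -u nobody

# After filling the pool ~50%, kick off a scrub and watch the bandwidth.
zpool scrub tank
zpool iostat -v tank 5    # per-vdev bandwidth every 5 seconds
zpool status tank         # scrub rate and estimated completion
```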

Misc ideas
  • Turn down HT speed and see if there's a change in maximum scrub speed.
  • Buy more memory and repeat.
  • Plug in more disks and repeat.
  • Try different disk controllers: LSI SAS3442E-R, Supermicro 2008-based card, and onboard SP5100 are available.
  • Put SAS expander in the middle and see what happens.

So what would be interesting to see?
 
OS: Linux Distro (perhaps Ubuntu 11.10?) using zfsonlinux. If you are really ambitious zfs-fuse also. Zfsonlinux is currently RC6, but would be interested in how much optimization is left before it catches the leaders.

I've also read that a 6 disk raidz is not an "optimal" number. 3, 5, or 9 have "better" throughput. Raidz2 optimal are 4, 6, and 10. Don't know if you want to take that into consideration.
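The usual explanation for those numbers: a raidz vdev splits each 128 KiB record (the default recordsize) across its data disks, so power-of-two data-disk counts divide it evenly while other counts leave odd-sized chunks. A quick sketch of the arithmetic:

```shell
recordsize=$((128 * 1024))    # ZFS default recordsize, 128 KiB
for data in 2 3 4 5 8; do     # data disks = total disks minus parity
  echo "$data data disks: $((recordsize / data)) bytes/disk, remainder $((recordsize % data))"
done
```

So a 6-disk raidz has 5 data disks and splits unevenly, while a 6-disk raidz2 has 4 and splits cleanly, which matches the lists above. Whether it matters in practice is exactly what a benchmark can show.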
 
Benchmarks are sort of worthless.

You should have a performance goal and a way to measure it. Then you upgrade until you reach the goal.

An old Pentium with an old hard drive is fast enough for most of the serving I want to do.
 
OS: Linux Distro (perhaps Ubuntu 11.10?) using zfsonlinux. If you are really ambitious zfs-fuse also. Zfsonlinux is currently RC6, but would be interested in how much optimization is left before it catches the leaders.

I've also read that a 6 disk raidz is not an "optimal" number. 3, 5, or 9 have "better" throughput. Raidz2 optimal are 4, 6, and 10. Don't know if you want to take that into consideration.

If you're going to go this route, use a distro that's actually somewhat stable, like Debian. Ubuntu 11.xx is an enormous piece of shit on every level (in my unbiased opinion :D)
 
OS: Linux Distro (perhaps Ubuntu 11.10?) using zfsonlinux. If you are really ambitious zfs-fuse also. Zfsonlinux is currently RC6, but would be interested in how much optimization is left before it catches the leaders.
That's a good idea. I haven't played with zfs-fuse before, but I can probably figure out how to do that.
I've also read that a 6 disk raidz is not an "optimal" number. 3, 5, or 9 have "better" throughput. Raidz2 optimal are 4, 6, and 10. Don't know if you want to take that into consideration.
Yeah, I'd heard that before, but I'm not sure I believe it. I can try 5-disk raidz as well.
Benchmarks are sort of worthless.

You should have a performance goal and a way to measure it. Then you upgrade until you reach the goal.

An old Pentium with an old hard drive is fast enough for most of the serving I want to do.
Good for you. I like to get the most out of my hardware, and if I can do that by switching between one piece of free software and another one, that's free benefit right there.

This thread is not intended for my benefit. I'm gonna run a hypervisor and (most likely) SmartOS eventually. The question I hope to answer is "which OS should I run if I want the most bandwidth / lowest latency / lowest licensing cost?"
 
Why SmartOS instead of openindiana?

For interest's sake, if you are spending the time to do the benchmarks, a comparison between ZFS and BTRFS would be interesting.
 
SmartOS is a very stripped-down OpenIndiana with KVM. Basically it is only the Solaris kernel (ZFS, zones, CIFS) and the KVM hypervisor. No graphics, no nothin'. In a sense, similar to all the ESXi setups here, but with KVM as the hypervisor.

For those who want Solaris as the server to provide ZFS, but not much else, SmartOS is perfect. SmartOS is in beta, but seems to be very useful for ESXi people.

Because Solaris is used as the backend, with the aggressive caching ZFS has, you can get 10x higher performance in the virtual machines than running bare metal. Very cool reading here on SmartOS:
http://www.theregister.co.uk/2011/08/15/kvm_hypervisor_ported_to_son_of_solaris/

"Though Joyent has not released official benchmarks rating its new hypervisor, Hoffman claims some ample performance gains. With I/O-bound database workloads, he says, the SmartOS KVM is five to ten times faster than bare metal Windows and Linux (meaning no virtualization), and if you're running something like the Java Virtual Machine or PHP atop an existing bare metal hypervisor and move to SmartOS, he says, you'll see ten to fifty times better performance - though he acknowledges this too will vary depending on workload.

"If anyone uses or ships a server, the only reason they wouldn't use SmartOS on that box would be religious reasons," he says. "We can actually take SQL server and a Windows image and run it faster than bare metal windows. So why would you run bare metal Windows?"
...
"We're actually able to do instrumentation around Windows and Linux that Windows and Linux have never seen, not even at Microsoft or Red Hat," he says. "With DTrace and KVM, we have arbitrary observability at the hardware/software boundary. On the one hand, this doesn't deliver the total, up-the-stack visibility that we get with DTrace in [Joyent's OS-level virtual machines], but it does allow for unprecedented visibility into things like I/O latency, interrupt delivery, CPU scheduling."


If you have an old WinXP that does not support a 10gbit card, SmartOS might provide that. And WinXP only supports 3GB of RAM. With SmartOS and ZFS, you can use a 10gbit NIC and 10GB of RAM for the ZFS cache. Thus, you will have much higher performance with SmartOS than running bare metal or running ESXi.

I expect people to abandon ESXi and switch to SmartOS when it is released. The point is, SmartOS is much safer than ESXi, because of zones. No hacking allowed.
 
Why SmartOS instead of openindiana?
No particular reason. They're the same kernel AIUI, so it shouldn't really matter for performance.
For interest sake if you are spending the time to do the benchmarks a comparison between ZFS and BTRFS would be interesting.
I don't think it's a fair comparison. ZFS is stable, BTRFS is unstable and unloved (their wiki on kernel.org is down right now, for example). It's easy to be fast and wrong.
Smartos is a very stripped down openindiana with KVM. Basically it is only the Solaris kernel (zfs, zones, cifs) and the KVM hyper visor.
... and no support for AMD virtualization. Sigh.
 