My whacky (?) Microserver ZFS config and the slow writes.

eod

First off, some very, very impressive builds here. I recently went looking for a small home NAS (Drobo, QNAP, Synology). Unimpressed with the price and features, I decided to jump on an HP Microserver N40L (4 bays, 1.5 GHz AMD, 2 GB of RAM); I've seen plenty of folks mention it here. I threw in 2x 2 TB drives alongside the 250 GB drive it came with.

So here is where I think the wheels fall off my build. I put ESXi on a thumb drive (it boots fine), then installed OpenIndiana and napp-it on the 250 GB drive. I provisioned 1.5 TB from each 2 TB drive by presenting them to VMware as virtual disks (so each hosts one large VMDK blob) and set up a mirror.
While I'm not expecting great performance from it with the layers of virtualization, I am a bit taken aback by the Seq Write speed.
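For reference, what napp-it sets up boils down to something like this from inside the OI VM (the pool and device names here are just placeholders; yours will differ):

    zpool create tank mirror c2t1d0 c2t2d0   # mirror the two virtual disks
    zpool status tank                        # confirm the layout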

My Bonnie results look like so:
Seq-Write: 7.144 MB/s
Seq-Read: 158 MB/s
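(For anyone wanting to reproduce the numbers: napp-it's benchmark is basically a bonnie++ run. Done by hand it would look roughly like this, with /tank/test standing in for whatever dataset you point it at:

    bonnie++ -d /tank/test -s 4g -u root -f

The -s size should be at least twice the RAM so the ARC can't hide the disk speed.)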

Is my approach wrong, or is there some tuning I could do to improve this? I would like to run a light VM or two on this, but if my ESXi approach is wrong, well then.

Let me know!
 
How should that work? ESXi needs about 500 MB-1 GB for itself, plus about 1 GB for OpenIndiana's base OS functionality.
That leaves no RAM for caching and the like. Together with a virtualized disk controller and virtual disks, that is bound to
deliver horribly bad performance.

What you can do:
Use enough RAM (8 GB) and try again. But to be honest, virtualizing a storage server on top of virtual disks is a bad idea.
You may try ESXi raw device mapping (RDM) for somewhat better performance, but do not expect too much in terms of speed or stability.
ESXi + a virtualized storage server really needs VT-d and a dedicated disk controller to work properly, and that is not possible on the Microserver.
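If you do try the RDM route, the mapping file is created from the ESXi shell roughly like this (disk ID and datastore path are placeholders for your own values):

    vmkfstools -z /vmfs/devices/disks/t10.ATA_____<your_disk_id> /vmfs/volumes/datastore1/OI/2tb-rdm.vmdk

The resulting .vmdk is then attached to the OI VM as an existing disk, so ZFS at least talks to the real drive instead of a VMDK blob.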

The other option is to install OI directly on your 250 GB disk (again with 4 GB+ of RAM) for a reasonably fast NAS/SAN solution.
There are reports of up to 100 MB/s transfer rates with fast disks. On top of OI you can then try either VirtualBox or KVM for your VMs.
 
Yeah, all the logical thinking in my head kept telling me this would be a poor I/O performer, but I read a few blogs where people had success. Well, one blog:

His results:

http://www.livingonthecloud.net/2011/10/hp-microserver-san-in-box-benchmarks.html

His layout of VMDKs

http://www.livingonthecloud.net/2011/09/hp-microserver-building-san-in-box.html

Granted, he has an SSD for caching, etc., but I figured I'd see somewhat better numbers than I am.

I plan to max out the RAM in a week or so. I might explore OpenIndiana with VirtualBox on top, or just punt, go with FreeNAS, and revisit this at a later point.
 
Did you enable write caching in the BIOS on the N40L? If not, I would turn that on first and then re-check your performance.

[Attached screenshot: writecache.JPG, showing the BIOS write cache setting]
 
Definitely more RAM. Always more RAM.

If you change the sync setting on the zvol or filesystem you're testing to 'sync=disabled', what does that do to the test? That effectively disables the write log (ZIL) mechanics for writes to that dataset; if performance jumps a lot, you at least know where to start looking at tuning. If it doesn't affect it much, I'd suggest that something in ESXi must be to blame.
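For example, assuming your pool/filesystem is called 'tank' (substitute your own name):

    zfs get sync tank              # check the current setting
    zfs set sync=disabled tank     # test only - sync writes get acknowledged before hitting stable storage
    zfs set sync=standard tank     # put it back when you're done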
 
Well, enabling the cache in the BIOS drastically changed things.

Seq-Write: 30 MB/s
Seq-Read: 161 MB/s

I'm not sure how acceptable these speeds are. I do have more RAM on order which I can allocate, though I haven't tuned this much yet.
 
OK, so with write cache on in the BIOS and sync disabled as per Nex7, I get the following.

Seq-Write: 64 MB/s
Seq-Read: 159 MB/s

Though I assume I don't want to leave sync disabled. So with more RAM, might I see writes increase with compression on, sync on, and the write cache in the BIOS enabled?
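In other words, something like this once the RAM shows up ('tank' standing in for my pool name):

    zfs set sync=standard tank      # back to safe sync behavior
    zfs set compression=on tank     # lzjb compression
    zfs get sync,compression tank   # double-check the settings

and then re-run the benchmark.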
 
Write caching on the drives is also dangerous.

For a home setup you could get an Intel SSD 311 as a slog device for sync writes.
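Adding one later is a one-liner; roughly (pool and device names are placeholders):

    zpool add tank log c3t0d0

That gives sync writes a fast landing spot without having to disable sync.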
 
In the long run I plan to max out the RAM and throw an SSD in there. I'm just unsure whether my current approach is flawed given the multiple layers of virtualization. What sort of risk is there with write cache on when ZFS is running a mirror?
 
From the point of view of ZFS and the filesystem, it's always OK: a block write is either done correctly or not at all.
From the point of view of your VM, it may be anything up to a disaster, i.e. an unusable VM.
 
Unusable corruption to the VM? Are you referring to the guest OS, the mounted drives that hold the ZFS pool inside a VMDK, or both?
 
I believe the write caching option here just enables the write cache on the hard drives themselves, no?

I think in most setups these are enabled by default; having them disabled isn't common?
 
Obviously I'm not the most experienced person to answer this, but it was disabled by default on my system. Googling information about the N40L shows that most people enable it, but I don't see many presenting the disks to a VM as virtual disks the way I am.

Though it looks like most people just run FreeNAS, which seems a tad anticlimactic and possibly behind in its ZFS implementation.
 
eod, I have a very similar setup to yours. Did you give your VM 1 or 2 vCPUs?
 