Thanks for your help. I think I've narrowed it down to actual disk/zfs performance.
Even though CrystalDiskMark displays wonderful results, it occurred to me that it creates a 1GB test file, which ZFS then holds in ARC. Effectively, I'm benchmarking the ARC cache rather than the disks.
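One way to check that theory is to temporarily keep user data out of the ARC for the dataset under test (or just use a test file much larger than the VM's RAM). A sketch, assuming a pool/dataset named tank/nfsds — substitute your own, and remember to revert afterwards:

```shell
# Cache only metadata (not file data) in ARC for the test dataset
# (tank/nfsds is a placeholder - replace with your pool/dataset)
zfs set primarycache=metadata tank/nfsds

# ... rerun CrystalDiskMark or the file-copy test here ...

# Revert to the default caching behaviour when done
zfs set primarycache=all tank/nfsds

# If arcstat is available on your OmniOS build, you can also watch
# ARC hit rates while the benchmark runs (5-second intervals)
arcstat 5
```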
Thanks for your help.
I tried adding the disk as SCSI 2:0 - paravirtual. Same results, unfortunately.
Each disk is its own VMDK. I have not partitioned the drives added from ESXi. VMware Tools is installed and updated to the latest version.
So weird that the Ubuntu VM runs at ~200MiB/s...
The D: drive is a hard drive (VMDK file) mounted in ESXi, just like regular ESXi drives.
I'm trying to copy an MKV file about 5GB in size.
See below, green arrow is SSD and red arrow is ZFS datastore via NFS:
Thanks for your reply.
Unfortunately, disabling atime didn't help at all. It was actually enabled on my pool, though, so thanks for the tip.
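For anyone else following along, checking and disabling atime looks like this (the pool name tank is a placeholder; the property is inherited by child datasets unless they override it):

```shell
# Check the current access-time setting (replace tank with your pool)
zfs get atime tank

# Disable access-time updates; child datasets inherit this
zfs set atime=off tank
```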
I'm copying from a volume that resides on the NFS datastore being presented by OmniOS, to the SSD. I'm copying a 5GB file. See below:
Thanks for taking time to reply :)
Direct attached SSD is a VM datastore attached to the VM as SCSI 0:2.
SSD is connected directly to the motherboard, with the same cable. Never touched it.
I'm talking about internal VM to VM copy, from the ZFS datastore to the SSD...
Not sure if this is the right forum to post in, but I'll try anyway.
Running my ESXi 5.5 U2 (latest build) all-in-one, basically Gea's concept.
All my VM's use VMXNET3 NICs.
One OmniOS VM with 32GB RAM, 4 vCPUs, and an LSI 9211-8i (P19 firmware)
Napp-it runs MTU 9000 against the vSwitch, presenting...
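With jumbo frames it's worth verifying that MTU 9000 actually took effect on the OmniOS side, not just on the vSwitch. A sketch using dladm — the interface name vmxnet3s0 is an assumption (the usual VMXNET3 name on illumos), so substitute yours:

```shell
# Show the current MTU on the VMXNET3 interface
# (vmxnet3s0 is a placeholder - check "dladm show-link" for yours)
dladm show-linkprop -p mtu vmxnet3s0

# Set it to 9000 if it isn't already; on some builds the IP
# interface must be unplumbed first for this to succeed
dladm set-linkprop -p mtu=9000 vmxnet3s0
```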
I'm currently running ESXi 5.5 U1 on a Supermicro X9-SRL. I use an LSI 9260-8i RAID controller for all my datastores.
My main datastore is a 4 x 3TB RAID10, consisting of WD Red series drives.
For some odd reason I can't understand, I'm experiencing 3 times faster WRITE...
I ended up using NexentaStor v3.15 with a 5.5TB striped mirror (RAID10). Performance was slightly better with Napp-it, but NexentaStor has the better web interface and easier configuration.
Napp-it is a powerful tool, if one has the time to learn it. I use it for my much bigger standalone fileserver...
Thanks for all your replies. I'm still not sure whether I should move to ZFS+NFS yet, but I guess it's worth a try, considering I'll also get a bit more disk space out of my 4 drives (RAID10 vs RAIDZ1). At least ZFS is more flexible and scales better than a local RAID controller.
Input is still...
Thanks for your replies, I'll try and post some more info here:
Currently my performance is "good" when only one or two VMs are accessing the RAID10, but as soon as other VMs join in, disk performance is terrible.
Seems like the RAID10+BBU+Spindles+VMFS doesn't really scale all...
Before I repurpose all in one fileserver, I have a few questions for the experts.
I'm currently running hardware RAID10 VMFS with ESXi 5.5. Multiple VMs are used for a fileserver, streaming server, etc. I'm finding performance to be very limited on the I/O side, even though...
Can someone shed light on how to handle permissions when mounting a CIFS share from my OmniOS Napp-it server on an Ubuntu Server?
Every time I try to mount the CIFS share with an SMB user created in Napp-it, I don't get the same access rights as I do on e.g. a Windows Server. It...
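For comparison, this is the kind of mount I'd expect to behave sanely on the Linux side. A sketch — the server name, share, username, and uid/gid values are all placeholders for your environment. Forcing uid/gid and file/dir modes on the client is one common workaround, since mount.cifs maps everything on the share to a single local owner by default:

```shell
# Install the CIFS utilities if they're missing
sudo apt-get install cifs-utils

# Mount the Napp-it share; napp-it.local, media, smbuser and the
# uid/gid values are placeholders - adjust for your environment
sudo mount -t cifs //napp-it.local/media /mnt/media \
    -o username=smbuser,uid=1000,gid=1000,file_mode=0664,dir_mode=0775
```

Whether this preserves the ACLs set through Napp-it is a separate question; these options only control how the files appear to Linux.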