Thanks for your help. I think I've narrowed it down to actual disk/zfs performance.
Even though CrystalDiskMark displays wonderful results, it occurred to me that it creates a 1GB test file, which ZFS then holds in ARC. This effectively means I'm running the disk test from the ARC cache.
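One way to sanity-check this is to keep file data out of ARC for the dataset being benchmarked, so reads actually hit the disks. A sketch, assuming a dataset named "tank/bench" (substitute your own):

```shell
# Assumed dataset name; adjust to your pool layout.
# Cache only metadata (not file data) in ARC while benchmarking:
zfs set primarycache=metadata tank/bench

# ...run CrystalDiskMark / dd against tank/bench here...

# Restore the inherited default (primarycache=all) afterwards:
zfs inherit primarycache tank/bench
```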
Thanks for your help.
I tried adding the disk as SCSI 2:0 - paravirtual. Same results, unfortunately.
Each disk is its own VMDK. I have not partitioned the drives added from ESXi. VMware Tools is installed and updated to the latest version.
So weird that the Ubuntu VM runs at ~200MiB/s...
The D-drive is a hard drive (VMDK) file mounted in ESXi, just like regular ESXi drives.
I'm trying to copy an MKV file about 5GB in size.
See below, green arrow is SSD and red arrow is ZFS datastore via NFS:
Thanks for your reply.
Unfortunately, disabling ATIME didn't help at all. It was actually enabled on my pool, so thanks for the tip.
I'm copying from a volume that resides on the NFS datastore being presented by OmniOS, to the SSD. I'm copying a 5GB file. See below:
Thanks for taking time to reply :)
Direct attached SSD is a VM datastore attached to the VM as SCSI 0:2.
SSD is connected directly to the motherboard, with the same cable. Never touched it.
I'm talking about internal VM to VM copy, from the ZFS datastore to the SSD...
Not sure if this is the right forum to post in, but I'll try anyway.
Running my ESXi 5.5 U2 (latest build) All-in-one, Gea's concept basically.
All my VM's use VMXNET3 NICs.
One OmniOS VM w/ 32GB RAM, 4 vCPUs, LSI 9211-8i (P19 firmware)
Napp-it runs MTU 9000 against the vSwitch, presenting...
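For reference, on the OmniOS side the jumbo-frame MTU is set per link with dladm (the link name vmxnet3s0 is an assumption; list the real names with `dladm show-link`):

```shell
# Assumed link name; verify with: dladm show-link
dladm set-linkprop -p mtu=9000 vmxnet3s0
dladm show-linkprop -p mtu vmxnet3s0
```

The vSwitch and everything in between must be configured for MTU 9000 as well, end to end.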
I'm currently running ESXi 5.5U1 on a Supermicro X9-SRL. I use a LSI 9260-8i RAID controller for all my datastores.
My main datastore is a 4 x 3TB RAID10, consisting of WD RED series drives.
For some odd reason I can't understand, I'm experiencing 3 times faster WRITE...
I ended up using NexentaStor v3.15 with a 5.5TB striped mirror (RAID10). Performance was slightly better with Napp-it, but NexentaStor has the better web interface and easier config.
Napp-it is a powerful tool, if one has the time to learn it. I use it for my much bigger standalone fileserver...
Thanks for all your replies, still not sure whether I should move to ZFS+NFS yet. But I guess it's worth a try, considering I will also get a bit more disk space (RAID10 vs RAIDZ1) out of my 4 drives. At least ZFS is more flexible and scales better than a local RAID controller.
Input is still...
Thanks for your replies, I'll try and post some more info here:
Currently my performance is "good" when only using one or two VM's to access the RAID10, but when I start using other VM's, the disk performance is terrible.
Seems like the RAID10+BBU+Spindles+VMFS doesn't really scale at all...
Before I repurpose all in one fileserver, I have a few questions for the experts.
I'm currently running hardware RAID10 VMFS with ESXI 5.5. Multiple VM's are used for fileserver, streaming server, etc. I'm finding performance to be very limited on the IO side, even though...
Can someone shed light on how to handle permissions when mounting a CIFS share from my OmniOS Napp-it server on an Ubuntu Server?
Every time I try to mount the CIFS share with an SMB user created in Napp-it, I'm not getting the same access rights as I do on e.g. a Windows Server. It...
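A sketch of the kind of mount I mean on the Ubuntu side (host, share and user names are placeholders; the uid/gid and mode options map the mounted files onto the local Ubuntu user):

```shell
sudo apt-get install cifs-utils   # provides mount.cifs

# Hypothetical server, share and user names:
sudo mount -t cifs //omnios-host/datatank /mnt/datatank \
    -o username=smbuser,uid=$(id -u),gid=$(id -g),file_mode=0664,dir_mode=0775
```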
For a new file server build, I've been looking at the Supermicro X9-SRL-F, but they're hard to come by in my region.
The Asus Z9PA-U8, seems to be very similar and is easily available.
I own a Supermicro board, and I'm quite pleased with it, so I wanted to hear what you...
You are forgetting your "dpool" folder. Even though it's not a share, the initial authentication polls that folder (at least that's what I'm seeing). Try this from an SSH session/console.
Clean current ACL:
/usr/bin/chmod -R A- /dpool/
Allow everyone to access dpool and dpool/datatank and set...
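As a sketch, assuming the share lives at /dpool/datatank, the "allow everyone" step could look like this (full_set grants all permissions; fd inherits the ACE to new files and directories):

```shell
# Example paths; adjust to your pool layout.
/usr/bin/chmod A=everyone@:full_set:fd:allow /dpool
/usr/bin/chmod -R A=everyone@:full_set:fd:allow /dpool/datatank
```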
Is it possible to add the option of defining your own ACL Reset parameters?
I would like to reset all my ACL's with the following parameters(just example):
With VMXNET3, I'm getting between 600-800Mbps when copying to another random PC on my network. With E1000, the speed is what one would expect from a 1Gbit network.
Very weird but that's how it is here.
I'm aware of this, but nevertheless the amount of bandwidth it delivers is exactly 1Gbit, hence the need for aggregation. VMXNET3 runs slower than E1000 on OI, so that's not an option.
Thanks so far
Because I'm using E1000 vNICs, which only allow throughput of ~1Gbit/s
Aggregation on the ESXi side is already set up using IP Hash to my Cisco SG200 switch, using 3 x pNICs. But for the guest VMs to utilize this, they need to be able to Rx/Tx at more than 1Gbit/s, which the E1000 is not...
I have now successfully aggregated 2 x E1000 NICs (virtual) in OI. This works and connectivity is there, BUT with the aggregation I'm getting speeds of around 50MiB/s.
Without the aggregation(single NIC), I'm getting 100-110MiB/s.
Can anyone tell me what's wrong? My setup is an All-In-One...
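For reference, I created the aggregation roughly like this (link names and the address are examples, not my exact config). One thing worth noting: with IP-hash style balancing, a single TCP stream still only rides one physical link, so aggregation raises aggregate throughput across many connections, not the speed of one copy.

```shell
# Assumed link names; verify with: dladm show-link
dladm create-aggr -l e1000g0 -l e1000g1 aggr0
ipadm create-if aggr0
ipadm create-addr -T static -a 192.168.1.10/24 aggr0/v4
dladm show-aggr
```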
Nice to know it's working. I'm not sure what you mean by port group.
I have 4 x NICs on a single vSwitch in ESXi. They're load balanced via IP Hash to a Cisco SG200 switch. So AFAIK, ESXi should have ~4Gbit/s of throughput from the vSwitch out to my network. I just tried adding 2 x Intel E1000...
Has anyone messed with Link Aggregation on OI?
I'm running mine as an all-in-one on ESXi, but had poor network performance using the VMXNET3 adapter.
I then switched to E1000, but because of the lower bandwidth of the E1000, I'm not able to utilize the 2 x dual-port NICs I have in...
Can someone enlighten me as to why my ZFS pool parameter "ashift", concerning disk sector sizes, reports "ashift=9", when ALL my drives are Advanced Format drives that should report a 4K sector size and result in "ashift=12"?
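Worth noting: most Advanced Format drives emulate 512-byte logical sectors (512e), and that logical size is what ZFS queries when building a vdev, so it records ashift=9 unless overridden. To see what was actually recorded per vdev (the pool name "tank" is an assumption), zdb can dump the cached pool config:

```shell
# Assumed pool name "tank"; substitute your own.
zdb -C tank | grep ashift
# ashift=9  -> vdev was created assuming 512-byte sectors
# ashift=12 -> vdev was created assuming 4K sectors
```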
During the last week, I've been replacing 3 x WD 2TB Greens due to reported errors in Napp-it.
Yesterday I finished a Resilver with the disk "c6t9d0", brand new drive. Today I started another Resilver, replacing yet another faulty WD Green.
Just an hour into the Resilver, I'm again getting...
I need guidance on drives to purchase, can anyone please assist?
Criteria for drives:
1) Energy efficient
2) Performance is not an issue (1Gbit network only)
3) Must not run too hot; the room with the server in it is approx. 28 degrees Celsius.
4) CAN'T be WD Greens (have 16 already and I'm running back...
One after the other WD-Green 2TB drive is failing on me ATM :(
What are the general recommendations for 2TB / 3TB drives these days for ZFS?
I would like to avoid the special raid edition drives if possible, they're super expensive here.
WD RED Series perhaps?
Just purchased a Cisco SG200-26 (non-PoE version).
It works great, but since hooking it up to my network, my Synology NAS won't hibernate.
So I went and checked if the switch was polling, and indeed it is. Every 1-2 seconds, the activity LED on all connected ports...