This is my first post in this forum, but I have read tons and tons of threads about ZFS file servers (napp-it / OpenIndiana / Nexenta / etc.).
First, the hardware I used for this ESXi whitebox build:
- 1x Cooler Master HAF XB Evo Case
- 2x ICY Dock MB994SP-4SB-1 HDD Cages
- 1x Cooler Master B500W PSU
- 1x Supermicro X9SRH-7F Board
- 1x Intel Xeon E5-2620 CPU
- 1x Noctua NH-D15 CPU FAN
- 3x Enermax T.B. Silence 120mm CASE FAN
- 4x Kingston DDR3 16GB 1333MHz ECC-Reg RAM
- 8x Crucial MX100 512GB SSD
- 1x Kingston SSDNow 120GB SSD
After successfully modding the NH-D15 to fit the narrow-ILM socket of the Supermicro X9 board, I used my Kingston SSDNow SSD as the LocalDatastore for the ESXi 5.5 U2 hypervisor installation. I created two vSwitches: one connected to the physical GBit Intel NIC, one as a loopback interface for the vSAN later. vSwitch0 carries the management network and the physical interface; on vSwitch1 I changed the MTU to 9000 for vmk1 and the Virtual Machine Port Group. I also created a little Linux test client (Ubuntu 14.10 Server + iperf + nfs-common), likewise on the LocalDatastore, with a VMXNET3 interface attached to vSwitch1.
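For reference, this is roughly how the MTU change looks from the ESXi shell (a sketch; vSwitch1 and vmk1 are the names from my setup, adjust to yours):
Code:
# set MTU 9000 on the storage vSwitch and its VMkernel port
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
# verify both changes took effect
esxcli network vswitch standard list
esxcli network ip interface list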
napp-it:
I downloaded the newest napp-it 0.9f3 build from the napp-it.org page and followed the documentation to install the appliance (changed to 2 vCores, 8 GB RAM) on my ESXi hypervisor, onto the LocalDatastore. The E1000 NIC is connected to vSwitch0, the VMXNET3 NIC to vSwitch1. I patched the VMXNET3 driver and the TCP stack and modified the MTU and LSO according to Cyberexplorer's blog. The eight Crucial MX100s are passed directly through to the VM with DirectPath I/O. On the appliance I configured them as RAID10 (striped mirrors) and disabled sync on the pool; a sketch of the pool setup follows.
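Roughly what that looks like from the appliance console (a sketch; "tank" and the c#t#d# device names are placeholders for my pool name and the eight MX100s):
Code:
# four mirrored pairs striped together = ZFS "RAID10"
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
    mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0
# disable synchronous writes on the pool (fine for benchmarking, risky for real data)
zfs set sync=disabled tank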
dd gave me a result of around 1.2 GByte/sec write; read I have not really tested so far (test sketch below).
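The write test I used, plus the matching read test that is still on my list (a sketch; the tempfile path is assumed):
Code:
# write test: 4 GiB of zeros in 1 MiB blocks
dd if=/dev/zero of=/tank/tempfile bs=1048576 count=4096
# read test (still to do) -- the file may be served from the ARC,
# so without clearing the cache (e.g. export/import the pool) this mostly measures RAM
dd if=/tank/tempfile of=/dev/null bs=1048576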
Then I ran some tests between the napp-it appliance and the Linux test client and was able to achieve around 20 Gbit/sec through a single iperf stream (server output below; the client invocation is sketched after it).
Code:
root@napp-it-15a:/root# /opt/csw/bin/iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte (default)
------------------------------------------------------------
[ 4] local 10.254.1.2 port 5001 connected with 10.254.1.3 port 47192
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  22.2 GBytes  19.1 Gbits/sec
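The client side on the Ubuntu VM was a single TCP stream (a sketch; the IP is from the server output above):
Code:
# single stream from the Linux test client to the napp-it appliance
iperf -c 10.254.1.2 -t 10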
So I created an NFS share and mounted it on my Linux test client (mount sketch below), and I got quite a good 1.0 - 1.2 GByte/sec write speed. Read, again, I have not tested so far.
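The mount on the Ubuntu client, roughly (a sketch; the share path /tank/nfsshare is a placeholder and the rsize/wsize values are just my starting point):
Code:
# NFSv3 mount with 1 MiB transfer sizes over the MTU-9000 storage network
sudo mkdir -p /mnt/nfs
sudo mount -t nfs -o vers=3,rsize=1048576,wsize=1048576 10.254.1.2:/tank/nfsshare /mnt/nfs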
FreeNAS:
For FreeNAS I downloaded the latest 9.3 ISO from the FreeNAS page, uploaded it to the hypervisor, and created a new VM (2 vCores, 8 GB RAM, 10 GB HDD, the LSI2308 in IT mode via DirectPath I/O), again with two interfaces: one E1000 connected to the management network (vSwitch0), one VMXNET3 connected to the storage network (vSwitch1). The SSD array is again configured as RAID10 (layout check below).
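To make sure both appliances really run the same vdev layout, I check it from the FreeNAS shell (a sketch; "Storage" is the pool name from the dd runs below):
Code:
# should list four mirror vdevs for a RAID10-style pool
zpool status Storage
zpool list Storage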
NOW here comes the surprising part:
I ran dd and it gave me the following result:
Code:
[root@freenas] /mnt/Storage# dd if=/dev/zero of=tempfile bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 1.248864 secs (3439099468 bytes/sec)
[root@freenas] /mnt/Storage# dd if=/dev/zero of=tempfile bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 1.244599 secs (3450884130 bytes/sec)
[root@freenas] /mnt/Storage# dd if=/dev/zero of=tempfile bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 1.266117 secs (3392235449 bytes/sec)
I somehow get almost three times the dd write speed that I get from napp-it???
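One thing I still want to rule out before trusting these numbers: as far as I know FreeNAS 9.3 enables lz4 compression on new pools by default, and zeros from /dev/zero compress almost perfectly, so dd can report inflated write speeds. A quick check (sketch):
Code:
# if this reports lz4, the /dev/zero benchmark is mostly measuring compression
zfs get compression Storage
If it reports lz4 while the napp-it pool has compression off, that alone could explain most of the gap.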
NFS I also tried with my Linux client; it only gives me 550 - 650 MByte/sec... maybe some optimization is required (first idea sketched below)...
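The first knobs I would try for that are the FreeBSD TCP buffer limits on the FreeNAS VM (a sketch; the values are only a starting point, not tested):
Code:
# raise socket/TCP buffer limits for the fast vmxnet3 path
sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216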
So now my questions:
- Are these FreeNAS dd write speeds for real?
- Is it possible to achieve them over NFS as well?
- Can I also tune napp-it to get those speeds?
Thanks a lot guys for your help (especially _Gea, I have read tons of your posts!!!).
Best
Yves