Recent content by ARNiTECT

  1. OpenSolaris derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

    Hi again. As mentioned above in my overly long post about 10G speeds: CPU usage seems high when idle. ESXi is now reporting an average of 30% for 2x vCPU and 45% for 1x vCPU (network & disk near 0%); other ESXi VMs are near 0% CPU when idle. The pools are not currently being accessed: no VMs, file...
  2.

    Fixed! NFS-backed VM CrystalDiskMark on the C: drive: r: 1000MB/s, w: 700MB/s. As per my post in the STH VMware forum, I had messed up something in ESXi networking in my pursuit of 10GbE.
  3.

    I did a reinstall of the napp-it VM, which seemed to go smoothly, but unfortunately the NFS-backed VM drive is still capped at 100MB/s. Reinstall process: - In napp-it, I ran a backup job, stopped the services, removed the LUs and exported the pools; - I made a new VM using the current napp-it...
  4.

    Thanks Gea, I tried setting the napp-it VM's latency to low, but it hasn't helped. I have just tried CrystalDiskMark on an NFS-backed VM on my secondary server and I am getting much better speeds (r: 500MB/s & w: 400MB/s, with sync enabled). This secondary server was set up more recently, but on older...
  5.

    I have set 8x vCPU on OmniOS and I'm still struggling with slow VMs backed on NFS from the OmniOS 3x NVMe Z1 pool (CrystalDiskMark of the C: drive: r/w 100MB/s). During the VM CDM benchmark, OmniOS CPU load is low, pool use is low and wait is low. iperf3 shows 23Gb/s, which I would expect for vmxnet3 over software. The same...
  6.

    Hi Gea, I thought the pool benchmarks looked OK too; I compared them with a few of your previous benchmark documents. From what I can see in the benchmarks (maybe I'm reading them wrong), my pools should be able to get very close to 10Gb on reading and writing. iperf also reports 10Gb and much...
  7.

    Hi, I am currently setting up 10G and I am struggling to get near 10G speeds to/from the OmniOS AiO ESXi VM over SMB or NFS. I apologise for such a long post! tl;dr: - File transfer speeds are only 250MB/s r/w to all pools, as below in bold; - iperf3 shows 10G between physical NICs and 14G virtual; -...
  8.

    My tests seem to have worked as expected; I have noticed the following, though: after the first run of a replication with the -R send option, the source filesystem's SMB share is no longer accessible on the network. The source SMB share name is still visible in the list under ZFS filesystems. The new...
  9.

    Ah, OK. If I want the old snaps I will have to recreate the replication jobs using -R. For my Host1-poolA to Host1-poolB backup replication jobs (with old snaps), I understand I should set up the replication for each single ZFS filesystem with -R and -I. After the first successful run, should I...
  10.

    I have just tested upgrading a replication job from -i to -I. The target ZFS filesystem only shows intermediary snaps from the point the job was changed from -i to -I, and does not show older source snaps from before the change (Windows previous versions). I then tried creating a new...
  11.

    Thanks Gea, I will set up a few test runs with this, as there seem to be a few opportunities for accidents.
  12.

    Yes, I've resigned myself to this. I have now set up a VM boot order: ESXi > OmniOS 1 (local) > Windows Server > OmniOS 2 (AD) > other VMs. This seems to be working well. Next I want to set up the replications: Host 1 - primary pool, to Host 1 - disaster recovery pool. To change an existing...
  13.

    Thanks Gea, ah yes, it is SMB I am having issues with: SMB > Active Directory > secondary DC (Solaris 11 only). I read through the document and I agree clusters etc. look interesting, but a bit overkill for me. It looks like the best option is '1.2 Improved Availability Level2 (second standby...
  14.

    Hi, I would like our house to carry on functioning if my primary server goes down unexpectedly, or for maintenance/tinkering. I have looked into ESXi: vMotion, vSphere Replication and vSphere High Availability, but was wondering if there is a simpler option using OmniOS/napp-it instead. I have...
  15.

    So, the IOPS of RAID-0 is similar to RAID-Z, like a single disk? I thought the IOPS of RAID-0 would scale with the number of disks.
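On the RAID-0 IOPS question in the last post, the usual ZFS rule of thumb is that small random IOPS scale with the number of vdevs, not the number of disks: a striped pool of N single-disk vdevs gets roughly N times the IOPS of one disk, while a single RAID-Z vdev of N disks gets roughly the IOPS of one disk (its sequential throughput still scales). A minimal sketch of that rule; the per-disk IOPS figure is purely an assumed, illustrative number:

```python
def pool_random_iops(disk_iops, disks, layout):
    """Very rough small-random-IOPS estimate for a ZFS pool.

    Rule of thumb: IOPS scale with the number of vdevs.
    'stripe' = each disk is its own vdev (RAID-0-like), so IOPS
    scale with the disk count; 'raidz' = one RAID-Z vdev, which
    behaves like a single disk for random IOPS.
    """
    if layout == 'stripe':
        return disk_iops * disks  # one vdev per disk
    if layout == 'raidz':
        return disk_iops          # whole vdev acts like one disk
    raise ValueError(f"unknown layout: {layout}")

DISK_IOPS = 10_000  # assumed per-disk random IOPS, illustrative only
print(pool_random_iops(DISK_IOPS, 3, 'stripe'))  # 30000
print(pool_random_iops(DISK_IOPS, 3, 'raidz'))   # 10000
```

This is why a 3x NVMe Z1 pool can post high sequential numbers while still behaving like a single disk for small random I/O.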
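The -i versus -I behaviour discussed in the replication posts above can be modelled simply: an incremental `zfs send -i` carries only the delta up to the newest snapshot, so the target gains just that one snapshot, while `-I` also recreates every intermediate snapshot; neither can deliver snapshots older than the common base, which is why snaps from before the switch never appear on the target. A toy model (function and snapshot names are illustrative, not napp-it's API):

```python
def snaps_received(source_snaps, last_common, mode):
    """Which source snapshots appear on the target after one incremental send.

    source_snaps: snapshot names on the source, oldest first.
    last_common:  newest snapshot already present on both sides.
    mode:         '-i' sends only the delta to the newest snapshot;
                  '-I' also recreates every intermediate snapshot.
    Snapshots at or before last_common are never part of the stream.
    """
    newer = source_snaps[source_snaps.index(last_common) + 1:]
    if mode == '-i':
        return newer[-1:]  # target gains only the newest snapshot
    return newer           # target gains all intermediates too

snaps = ['daily-01', 'daily-02', 'daily-03', 'daily-04']
print(snaps_received(snaps, 'daily-01', '-i'))  # ['daily-04']
print(snaps_received(snaps, 'daily-01', '-I'))  # ['daily-02', 'daily-03', 'daily-04']
```

Note that in both modes the returned list starts after `last_common`: getting the older snaps onto the target requires a new full replication (e.g. with -R), as discussed above.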
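As a sanity check on the numbers in the 10G posts above: a 10Gb/s link tops out around 1,250MB/s before protocol overhead, so 250MB/s file transfers leave most of the link idle even though iperf3 shows full line rate. A minimal conversion sketch; the efficiency factor is an assumed, illustrative allowance for TCP/IP framing, and real SMB/NFS payload rates are typically a bit lower still:

```python
def link_ceiling_mb_s(gbit_per_s, efficiency=0.94):
    """Rough usable throughput of a network link in MB/s.

    efficiency is an assumed factor for TCP/IP and framing overhead.
    """
    return gbit_per_s * 1000 / 8 * efficiency

raw = 10 * 1000 / 8             # 1250.0 MB/s raw line rate
usable = link_ceiling_mb_s(10)  # 1175.0 MB/s with the assumed 94% efficiency
print(raw, usable)
# Observed 250MB/s transfers therefore use only about a fifth of the link,
# pointing at the storage/hypervisor path rather than the network itself.
```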