OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

OmniOS 151032 stable is out
This is the most feature-rich update for Open-ZFS and OmniOS ever.

download: https://omniosce.org/download.html

Release notes

https://github.com/omniosorg/omnios-build/blob/r151032/doc/ReleaseNotes.md

Update
http://www.napp-it.org/doc/downloads/setup_napp-it_os.pdf

New Open-ZFS features (example commands below):
- native ZFS encryption
- raw zfs send of locked and encrypted filesystems
- sequential/sorted resilver (can massively reduce resilver/scrub time)
- manual and auto trim for the SSDs/NVMes in a pool
- Allocation classes for metadata, dedup and small io (mixed pool from disk/SSD/NVMe)
  see https://www.napp-it.org/doc/downloads/special-vdev.pdf
  A warning at this point: a zpool remove of a special vdev with a different ashift than the pool crashes Illumos/ZoL.
- force ashift on zpool create/add
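
For reference, a rough sketch of how these features look on the CLI (pool, filesystem and device names below are placeholders; syntax as in current Open-ZFS, details may differ slightly per platform):

  zfs create -o encryption=on -o keyformat=passphrase tank/secure          # native encryption, prompts for a passphrase
  zfs send -w tank/secure@snap1 | ssh backuphost zfs receive bpool/secure  # raw send, data stays encrypted in transit and on the target
  zpool trim tank                                                          # manual trim
  zpool set autotrim=on tank                                               # auto trim
  zpool create -o ashift=12 tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0        # force 4k sectors (ashift=12) at pool creation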

OmniOS related
- updated NVMe driver (with basic support for NVMe/U.2 hotplug)
- updates for newer hardware
- installer supports UEFI boot

- SMB 3.02 (kernelbased SMB) with many new features
see https://github.com/illumos/illumos-gate/pulls?q=is:pr+SMB3
- improvements for LX/Linux zones, support for newer Linux distributions
- improvements for Bhyve
- improvements to the Enlightened Hyper-V drivers for running under Hyper-V or Microsoft Azure.

Napp-it 19.dev/19.h2.x supports the new features
 
Hello,

I am running a RAIDZ2 array with 4x 8TB drives on OmniOS 151030, hosted by ESXi 6.7.
During the last 3 months I have encountered 2 checksum errors that were repaired.

The pool view shows 2 checksum errors with no other errors present.
The checksum errors show up on 3 of the 4 disks.

I never had a checksum error in many years; a single error I would brush off, but two have already gotten my attention.

Is there any way to know which of the disks was holding the bad piece of data, so I might replace it if it's the same one in both cases?

Thanks in advance for any assistance.
 
Checksum errors are real errors (they have already happened) that ZFS detected. If they are on a single disk, replace the disk. If they are on several disks, it is more likely that you have a RAM, backplane, cabling or PSU problem. Make sure your backup is current before you start troubleshooting.
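
A quick sketch of where to look (pool name is a placeholder): zpool status -v shows per-disk READ/WRITE/CKSUM counters, so you can see which disks delivered bad data; repaired errors only show up in the counters, while files with permanent errors are listed below them.

  zpool status -v tank    # per-disk error counters plus a list of files with unrecoverable errors
  zpool clear tank        # reset the counters afterwards, so you can see whether the same disk collects new ones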
 
Hi,

If a scrub reported an error and corrected 256KB of data, and is showing 1 checksum error on 3 disks, does it mean there was a checksum error on all 3 disks, or that 3 disks were involved in correcting an error which resided on one of them?

If the record size is 128KB, how could there be 1 checksum error on 3 disks?

Anyway, it's a system that's been running stably for 4 years; only the disks are newish. The RAM is ECC. I don't think it makes sense for a problem with RAM, cabling, PSU or HBA to manifest as 2 scrub errors within 3 months when the pool is being scrubbed weekly?

Anyway, the important data is backed up; the rest of the data would be annoying to lose but can be obtained again with a little effort. It's not worth backing up that much media.
 
Hi,

I would like new hardware for a napp-it All-in-One, but I am on a budget. The guides on napp-it.org are very helpful.

I would appreciate any comments on the following build.

I want the speed, but I recently lost a lot of time recovering from backups when a second drive in a 2-way vdev mirror started producing errors during a resilver. This only happened once in 7 years, but I either need a faster recovery plan, or I go for 3-way vdevs.

Use:
CAD/BIM, 3D modelling & rendering, graphics applications, searching through photos, pdfs, emails etc. All of these are slow at the moment. For home we use Emby server, Squeezebox etc.

VMs:
2x windows server, exchange, 2x windows 10, vCenter, VEEAM, UPS appliances etc.

Existing parts:
13x 3TB WD Reds, 2x 8-port IBM ServeRAID M1015, HDD cages (non-hotswap), everything else will be new.

RAID:
I haven't decided between Safe 12TB (4x 3-way vdevs) or Fast/Larger 18TB (6x 2-way vdevs). With correctly selected RAM, L2ARC and SLOG, will there be much difference in speed?

Network:
2x 10G Cat6A sockets at each PC, existing D-Link DGS1510-52 (2x 10G) & later get a new Netgear XS708T-100NES (8x 10G) £432, I aim to setup 2x workstations with 20G connections to server.

Case (very limited space for a server):
£100 - Zalman MS1000-HS1
£200 - 3x (5x3.5" in 3x5.25") 6Gb/s Hot-Swap drive bays
or cheaper: £30 - BitFenix Shadow Midi Tower (no hot-swap)

Motherboard (Napp-it build example 3.5):
£470 - X11SPH-nCTF, add: 1x M1015 for 16-ports total
This would give me a lot of upgrade potential, but I am concerned about the USB issues mentioned in the build example.
Cheaper boards seemed a false economy.

CPU:
£325 - Xeon Silver 4108 (8-core)
or cheaper £190 - Xeon Bronze 3106 (8-core)

MEM:
£250 2x 32GB Hynix DDR4 2933MHz ECC RDIMM

ESXI:
USB drive

ZFS cache:
hopefully approx. 40GB RAM

OmniOS & L2ARC & SLOG on 1 drive:
1x Intel Optane 900P 280GB (£227). Is this big enough? What partition sizes? A 905P 480GB is £469, and I hoped I wouldn't need to spend this much.

Alternatively (does OmniOS need Optane speed?):
OmniOS: 2x £25 250GB SSD mirror datastore
L2ARC & SLOG: Intel Optane 900P 280GB (£227), what partition sizes?

Other VMs:
using main pool initially, maybe upgrade to NVMe later

Other parts like PSU, cables, etc. should be straightforward.

Total for main parts £1200 - £1900
 
ZFS allows you to optimize for every workload.

A pool from 4x 3-way mirrors (12TB) would be the fastest pool of disks, especially regarding io and reads, paired with an ultra-secure raid setup (any two disks are allowed to fail, max 4 disks). With a fast Slog like an Intel Optane you can even achieve decent sync write performance. As an Slog does not need to be larger than, say, 20 GB, your idea with an Optane 900 would allow using > 200GB as L2Arc with read-ahead enabled, assuming >= 32GB RAM (although I would not expect a big advantage from the L2Arc with enough RAM).
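
As a sketch, with the Optane split into a small Slog part and a larger L2Arc part (slice names are placeholders; in an AiO you would typically hand OmniOS two vdisks carved from the Optane datastore instead):

  zpool add tank log c2t0d0s0      # ~20 GB partition or vdisk as Slog
  zpool add tank cache c2t0d0s1    # the remaining ~200+ GB as L2Arc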

But given that a disk has around 100 physical iops whereas a good NVMe is > 100k iops, I would go a different path. If 12 TB is enough, I would create a Z2 pool from 6 disks (12 TB) and a second 12 TB removable pool for backup. This would also allow any two disks to fail, and you would have an external disaster backup of the whole pool.

A single Z2 pool has quite good sequential performance, but iops only like a single disk. This is why such a pool is only good for a filer and backup server with sync disabled. For your VMs, where you mainly need performance and want sync write, I would add a mirror of powerloss-safe NVMe. My suggestion would be a mirror of the new Intel 665P (M.2, 512GB or 1TB). They are quite affordable, fast and offer powerloss protection. The mainboard offers two M.2 slots for them. They are fast enough to allow sync write without an extra Slog.

For ESXi boot and a local datastore, I would prefer a Sata SSD (>80 GB). I love the small datacenter Intel SSDs for this, like the DC 35x0 80/120GB or newer datacenter drives. Much faster than USB and ultra reliable (I have never had a failure).
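
A minimal sketch of that layout, with placeholder device names:

  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0   # 6-disk Z2 filer/backup pool
  zfs set sync=disabled tank                                           # filer use, no sync write needed
  zpool create vmpool mirror c3t0d0 c3t1d0                             # NVMe mirror for the VMs
  zfs set sync=always vmpool                                           # or leave at standard; NFS from ESXi requests sync anyway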
 
Thanks _GEA!

I like your idea of a removable backup pool. I can set up a backup server from leftover parts; however, I can no longer upgrade ESXi beyond 6.5.0 U2 due to my CPU (Xeon X3450). Will this create future issues with keeping OmniOS up to date on the new hardware? Will it be quite straightforward to import a Z2 pool to a machine with an older release?

Would you suggest the Supermicro X11SPH-nCTF as a good mainboard for an AiO? I think you had a couple of issues with this board; have these been resolved?

Can the controller for the onboard 2x NVMe slots be passed through to OmniOS, or would I need ESXi to present these as virtual disks to OmniOS?

For ESXi boot & the OmniOS VM: 2x Intel D3 S4610 240GB SATA at £90 each appear to be the equivalent current models.

The Intel 665P looks like a great-value powerloss-protection NVMe, but doesn't appear to be available in the UK yet. I will have to wait.

With a mirror of 2x Intel 665Ps I expect the VM performance to be great; however, I am also keen for the main pool to be faster than our 2012-built server (4x 2-way 3TB RAID10, 8GB RAM for ZFS, 24GB SLOG, 128GB SSD L2ARC, 1G NIC, 5TB data... probably not tuned well). We find the following to be slow on our server: Revit file opening, incremental saving, synchronising with the central model; searching through folders of PDFs, including searching PDFs by contents; searching through emails; updating photo thumbnails; Photoshop opening large files; copying files from redirected folders to local drives.

For the above use, would a 4x 3-way RAID10 have 12x the IOPS of a Z2 pool? Would the Z2 pool still benefit much from an Optane SLOG? Or, for a similar price to an Optane 900P 280GB, I could increase RAM from 64GB to 128GB, leaving >100GB for cache.
 
You can stay some time with ESXi 6.5U2.
Most annoying is the html management interface, which needs a page reload prior to actions or it may crash quite often.

The X11SPH-nCTF is a perfect mainboard. Very fast and stable, enough PCI-e slots,
allows up to 10 directly connected NVMe, has 10 GbE and SAS 12G.

M.2, U.2 and PCI NVMe are all PCI-e devices, so you can pass them through.
Some NVMe are trouble-free, others may have problems working properly in pass-through mode.

The Intel P4510 seems to be one of the NVMe that does not work properly in pass-through mode.
There is work underway to get this working. Do tests in the newest OmniOS stable (or bloody).
If possible, test the NVMe in advance: https://illumos.topicbox.com/groups...1/esxi-omnios-passthrough-problem-intel-p4510

Your last system cannot be ultrafast: its 8GB RAM is too limited to deliver random requests and metadata from RAM.
As an estimate, 8GB would split into 2GB OS usage, 0.8GB RAM-based write cache and 3-4GB ARC read cache.
If sync is enabled, Slog performance is critical; "any" SSD is not good enough. An L2Arc is mostly never as helpful as more RAM.
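On OmniOS/illumos you can check what the ARC actually gets on a running system, for example:

  kstat -p zfs:0:arcstats:size     # current ARC size in bytes
  kstat -p zfs:0:arcstats:c_max    # upper limit the ARC may grow to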

Additionally, the old 3TB disks are slow. With a lot of RAM you can overcome this a little, but in the end they are slow.
A multi-mirror + RAM can help, but only for random re-access and file browsing. Writing can be much faster with more RAM.
In the end, you may think of fewer but newer 8-12 TB HGST/WD Ultrastar disks.

For regular filer use, you do not need or want sync write, so you do not need an Slog.
VM, database or mailserver-like use cases require sync. Then you want an Optane Slog or fast enterprise SSDs.

Enough RAM is essential. Using more than 64GB only makes sense for special workloads. With mainly sequential data, look for faster disks.
 
Thanks, the motherboard looks great on paper.

I might have to rethink the storage then.
I can return 5 of the 3TB WD Reds, as they were purchased recently.
I will keep to: 64GB RAM, 2x Intel D3 S4610 240GB, no Optane.

Fast pool for VMs and live projects, Read/Write:
2x Mirror with power loss protection:
- 2TB Intel 665P NVMe, approx £250 each when released? = £500
- 1.9TB Samsung PM983 NVMe £316 = £632 (maybe this is a better option for speed/durability/use near full?)
- 1.9TB Samsung PM883 SATA £324 = £648
- 1.9TB Intel S4510 SATA £350 each = £700

General pool for photos and pdfs, Mainly Read
It appears WD HC310 has 4x the read IOPS of my WD Red 3TB
12TB (6x RAID10) 4TB WD HGST Ultrastar HC310 £143 each = £858 (Read: 6x IOPS, 6x Mb/s)
12TB (5x Z2) 4TB WD HGST Ultrastar HC310 £143 each = £715 (Read: 1x IOPS, 5x Mb/s)
12TB (4x RAID10) 6TB WD HGST Ultrastar HC310 £176 each = £704 (Read: 4x IOPS, 4x Mb/s) (good balance of cost/speed/redundancy/expansion)
12TB (4x Z2) 6TB WD HGST Ultrastar HC310 £176 each = £704 (Read: 1x IOPS, 4x Mb/s)

Disaster recovery Pool for both main Pools
15TB (7x Z2) existing 3TB WD Reds, 1 spare

//Edit: for the boot drive (ESXi & OmniOS VM) I like the look of 2x (mirror) Samsung SM883 240GB SATA, which look durable with power loss protection and about the same price as the Intel D3 S4610 SATA
 
Has anyone tested if the 'Corsair MP510 M.2 NVMe' are compatible with OmniOS?
These have power loss protection, good performance, durability and price.
 
I have not tried it, but I would not expect problems in a barebone setup. Problems with some NVMe are mainly in an AiO setup with NVMe passthrough under ESXi.
 
Ah... I planned to use 3 no. 2TB SSD drives in the main pool in an ESXi/OmniOS/napp-it AiO setup.
I understand some NVMe drives do work; is there a list somewhere? Googling NVMe passthrough to OmniOS or Illumos wasn't clear.
Would compatibility be sensitive to the mainboard I choose? If I try one in an old system using a PCIe x4 to NVMe adapter and it works, is it safe to assume it'll work in a new board?

For very similar cost, I am deciding between 2 options (I prefer A)

Option A:
OmniOS: 1 no. Crucial MX500 240GB SATA
Fast Pool: 3 no. Corsair MP510 NVMe 2TB
Slow Pool: 6x 3TB HDD RAID10

Option B:
OmniOS/L2ARC/SLOG: 1 no. Optane 900P 280GB vdisk
Fast Pool: 3 no. Crucial MX500 2TB SATA
Slow Pool: 6x 3TB HDD RAID10
 
I went ahead and bought a Corsair MP510 2TB, an M.2 NVMe to PCIe x4 adapter and will test pass-through tomorrow in my old Supermicro X8SIL-F.
 
Woo hoo! - the NVMe drive seems to be working on my Napp-it all-in-one

NVMe M.2 drive: Corsair MP510 2TB
Mainboard: Supermicro X8SIL-F / X3450 / 32GB ECC
PCIe to M.2 adapter: StarTech PEX4M2E1

I'm not sure how best to test if it's running OK, so I'll go through what I've done so far.
  • Updated napp-it to 19.10, OmniOS r151032h, ESXi 6.5U2 (CPU limit) ...was on 18.01/151018/5.5
  • then turned off server, installed NVMe, turned on server
  • NVMe drive appeared in pass-through, enabled and rebooted
  • OmniOS now wouldn't boot because of a problem with my LSI HBA controllers (M1015); not sure if this is to do with the updates or the new NVMe drive. I removed them both for now and added the NVMe passthrough. Any idea what this could be?
  • created a new pool with the single basic NVMe drive, a new ZFS filesystem, SMB share and NFS share (rough CLI equivalent below)
  • SMB appears on the network from my Windows 10 PC, NFS share added as a datastore
  • created a new Windows 10 VM on the NFS datastore, which installed fast and is now running without any apparent issues
  • no errors on drive yet
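Roughly the CLI equivalent of those napp-it menu steps, with placeholder pool/filesystem/device names (share rights may still need adjusting, e.g. for ESXi root access):

  zpool create nvpool c4t0d0                 # single basic vdev on the passed-through NVMe
  zfs create -o sharesmb=on nvpool/share     # kernel-SMB share, shows up on the network
  zfs create -o sharenfs=on nvpool/vmstore   # NFS share, mounted in ESXi as a datastore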
As the test appears to be successful, is it safe to get a new mainboard and a few of these Corsair MP510 drives?

I'm now leaning towards getting an X11SCA-F for the faster-core Xeon E-2200 series (with an LSI SAS 9400-16i on PCIe Gen3 x8 for NVMe drives, an Adaptec ASR-71605 on x4 for HDDs, GPU on x8, 10G on x4, M.2 x4).
 
The OmniOS storage VM is on a local datastore (mostly Sata).
This should be independent of PCI-e cards in passthrough mode.

If you are looking for many NVMes you need a mainboard with many PCI-e lanes, and this means no socket 1151.
My favourite is the https://www.supermicro.com/en/products/motherboard/X11SPH-nCTpF where you can connect up to 10 NVMe over 1x M.2, 2x U.2 via Oculink and 7 NVMe via PCI-e adapters, plus 10G and an 8x SAS 12G HBA.

The 9400 is not really an interesting offer. You can either connect 4 x NVMe or 16 SAS/Sata disks or a mix - too expensive. Better use a mainboard with enough NVMe options and a 12G HBA.

The Adaptec is badly/not supported on Unix. This one is mainly for Linux/Windows. Use Broadcom/LSI HBAs all the way for ZFS.
 
OmniOS is still installed on a SATA drive on the mainboard's controller.

I previously passed through the 2 no. LSI HBAs to OmniOS, but now OmniOS won't boot unless I remove them; it might be linked to the update of ESXi.

I understand the Broadcom HBA 9400-16i can accept 8 no. NVMe at PCIe x2 speed each, which should be fast enough. I wanted to buy this HBA first to test, before committing to a mainboard. It was about £280 total new (still a few left on eBay); I will try it and put it back on eBay if it is no good. It was cheaper than I can currently get the 9305-16i used, and the 9201-16i is only Gen2, so on x4 with 16 no. HDD drives the speed might be limited. I have 3 no. trusty M1015 flashed as HBAs, but I am trying to get a system to work with the Xeon E-2200 series.

I read the ASR-71605 works in ESXi and OmniOS. I currently need 14 no. HDD, so if it doesn't work, I could revert to a M1015 for 8 no. SATA and another 8 no. SATA on the 9400-16i and still have 4-8 NVMe.
Perhaps the X11SCA-F has multiple onboard SATA controllers, allowing me to use 1 or 2 drives for boot and pass through the others?

The X11SPH-nCTPF is a great board and it's what I would go for with a Xeon Silver; but the E-series looks really fast for my AiO VMs and I don't think I need more cores; E3 is the next fastest, but doesn't have enough PCIe; Scalable has many lanes, but is much slower.

In my recent research, I did a comparison of possible mainboards (starting from those listed in your guides):
Each board is summarised at the end of its line with: CPU series / possible number of M.2 (minus an x4 if no 10G) / total SATA drives (including those connected to the mainboard controller).

Supermicro X11SPH-nCTF (10G,3x8,1x4,1xM2,2xOC,8xSAS) Scalable / 10xM.2 / 18xSA
Supermicro X11SPH-nCTPF (10SFP,3x8,1x4,1xM2,2xOC,8xSAS) Scalable / 10xM.2 / 18xSA
Supermicro X11SPM-TPF (10SFP,2x16,1x8,1xM2) Scalable / 11xM.2 / 12xSA
Supermicro X11SPL-F (6x8,1x4,1xM2) Scalable / 13xM.2 / 8xSA
Supermicro X11SSH-CTF (10G,1x8,1x2,1xM2,8xSAS) E3-1200 v6/5 / 3xM.2 / 16xSA
Supermicro X11SSZ-TLN4F (10G,1x16,2x4) E3-1200 v6/5 / 6xM.2 / 4xSA
Supermicro X10SRM-TF (10G,1x16,2x8) E5-2600/1600 v4/3 / 8xM.2 / 10xSA
Supermicro X10SRL-F (4x8,3x4,) E5-2600/1600 v4/3 / 10xM.2 / 10xSA
Supermicro X11SCA-F (2x8,1x1,1x4,1xM2) Xeon-E / 5xM.2 / 8xSA
Supermicro X11SCH-F (2x8,2xM2) Xeon-E / 5xM.2 / 8xSA
Asus P11C-M-10G-2T (10G,2x8,2xM2) Xeon-E / 6xM.2 / 6xSA
Asus ASMB9-iKVM
Asus Z11PA-U12-10G-2S (10G, 1x16,2x8,1x4,1xM2,2xOC,12xSAS) 12xM.2 / 13SA
ASRock C246 WS (2x8,1x4,1x1,1xM2) Xeon-E / 5xM.2 / 12SA
ASRock EPC621D8A (5x8,1x4,2xM2) Scalable / 12xM.2 / 13xSA

X11SPL-F seems to win, with 13 no. NVMe (allowing 1 no. x4 for a 10G NIC).

I also came across the Supermicro AOC-SHG3-4M2P, for 4 no. NVMe into PCIe Gen3 x8, which looked interesting. Or the AOC-SLG3-4E2P.
 
The Adaptec may work in ESXi but definitely not in OmniOS, see https://illumos.org/hcl/

About onboard Sata:
Pass-through means controller/PCI-e device pass-through. You cannot pass single Sata disks (ok, unsupported RDM is possible).
Only with SAS HBAs can you use disk pass-through (add physical raw disks to a VM) as a supported ESXi option.
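
For completeness, such an unsupported physical RDM of a single disk is created on the ESXi shell roughly like this (device and datastore paths are placeholders); the resulting vmdk is then added to the VM as an existing disk:

  vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/omnios/disk1-rdm.vmdk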
 
Shame, I'll return the Adaptec ASR-71605 when it arrives.
If I pass through the 9400-16i controller and start with 4 no. NVMe and 8 no. HDD on this card, plus 8 no. HDD on an M1015 (Gen2 x4), then I'll keep a lookout for a good price on a PCIe Gen3 16-port Broadcom/LSI HBA, which would allow me to move all HDDs onto it and have up to 8 no. NVMe on the 9400-16i in the future.
Although I understand the onboard AHCI controller for the X11SCA-F can be passed through from ESXi, I will instead just keep it for boot disks and ISO storage.
 
Does napp-it Home ever go on sale or have discounts/coupons? $300 for a perpetual license is still a bit too steep for my broke ass.
 
For home use, the free napp-it 19.10 homeuse edition should offer what's needed for a home server, as there is no restriction regarding capacity or ZFS functionality. Some special features may need CLI actions.

If you want the newest pro or dev software release for support reasons, plus a single pro feature like replication to a second backup server, you can use the single-extension pro/home edition for 50 Euro per 2 years. If you decide to go to the unlimited edition while the pro/home license is still valid, I will take it back for the full price.

The Twin complete home unlimited (300 Euro) is for two servers, like filer and backup, and offers all pro features like HA clustering, backplane maps, realtime monitoring or the upcoming file/webserver-based key management for ZFS encryption keys.
 
Well the realtime monitoring and backplane map look like useful features to me, but I can't justify spending 300 on it.
 
You can check realtime load at the console, and you can create a map using the online 2-day eval/test and print it out.
 
How do I turn off LZ4 compression in the GUI? Looks like it's on by default now. Almost all my content is already compressed, so I don't need it.

edit: it shows On in the pool details in the GUI (screenshot: YNjU7Rm.png), but Off in the zfs compression command (screenshot: UjwdWHe.png).
 
The feature list shows that lz4 is available and active as an additional option alongside the former compression options. To activate lz4 for a filesystem you can use zfs set compression=lz4, or activate it in the GUI in the menu ZFS filesystems by clicking on "off" under compression in the line of a filesystem (lz4 is the default for "on").

(screenshot: lz4.png)
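
On the CLI the same looks like this (filesystem name is a placeholder); note that changing the property only affects newly written data:

  zfs get compression tank/media       # show the current setting
  zfs set compression=off tank/media   # disable compression for that filesystem
  zfs set compression=lz4 tank/media   # or switch it (back) to lz4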
 
I recently finished building my server, but I was having trouble with a Supermicro AOC-SHG3-4M2P, which can connect 4x M.2 NVMe drives to a PCIe x8 slot on my Supermicro X11SCA-F.
I am currently only using 2x M.2 NVMe drives, passed through to OmniOS/napp-it (32).

In ESXi > Hardware > PCI Devices, the add-on card is listed as follows:

0000:00:01.0 | Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) | Not capable | Not capable
0000:01:00.0 | PLX Technology, Inc. PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch | Not capable | Not capable
0000:02:07.0 | PLX Technology, Inc. PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch | Not capable | Not capable
0000:06:00.0 | Phison Electronics Corporation E12 NVMe Controller | Not capable | Active
0000:02:06.0 | PLX Technology, Inc. PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch | Not capable | Not capable
0000:05:00.0 | Phison Electronics Corporation E12 NVMe Controller | Not capable | Active
0000:02:05.0 | PLX Technology, Inc. PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch | Not capable | Not capable
0000:02:04.0 | PLX Technology, Inc. PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch | Not capable | Not capable

The 2x installed NVMe are passed through to the OmniOS/Napp-it VM.
When I first booted up the VM, only 1 of the 2 NVMe drives was available; the other was listed, but marked as removed.
I restarted and the console says:

napp-it-32 console login: WARNING: /pci@0,0/pci15ad,7a0@17/pci1000,3000@0 (mpt_sas18): number of phys reported by HBA SAS 10 Unit Page 0 (21) is greater than that reported by the manufacturing information (16). Driver phy count limited to 16. Please contact the firmware vendor about this.​

I have not seen this before; I thought it was related, as I coincidentally had 16 physical drives connected.
I removed one NVMe drive, it worked fine - I swapped it for the other NVMe drive, it worked fine - tried both again, only one works - I tried removing other unrelated drives, still only one NVMe works.
I disabled pass-through and instead tried to mount a datastore on the drives; one worked, but strangely the other gave an error.
I recently followed this guide to 'Create VMFS5/6 Datastore on USB drives' and decided to try the same on the second problem NVMe drive. This worked and I could mount a datastore on both drives at the same time and upload files to them.
I then enabled pass-through on both NVMe drives again and added them to the OmniOS/Napp-it VM.
Both drives were available and I could add them to a Pool - restarted and they are still working fine.

So all fixed, I just don't understand how.

The above console warning is still there on every reboot though. What is this?
 
If everything works, I would suppose you can ignore the warning from the SAS HBA.

You may just check that the firmware and OmniOS are the newest.
If you want a more detailed answer, ask at Illumos-Discuss where driver developers are around:
https://illumos.topicbox.com/groups/discuss
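
To check/update the OS side on OmniOS, for example:

  uname -v          # shows the running release, e.g. omnios-r151032-...
  pkg update -nv    # dry run: list available updates
  pkg update        # apply them into a new boot environment, then reboot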
 
Yes, the HBA (9400-16i) firmware is up to date and OmniOS is at 151032. The Supermicro 4-port M.2 adapter doesn't seem to have updateable firmware.
Thanks for the link to Illumos-Discuss.
 
napp-it v20.x

I have uploaded a preview version of the next napp-it v19.12 noncommercial homeuse
and napp-it 20.01 pro to support the newest features of Oracle Solaris and especially OmniOS/OpenIndiana.

- ZFS encryption with web/file-based keys, an http/https keyserver with an HA option,
keysplit for two locations, automount after reboot and user lock/unlock via SMB
http://napp-it.org/doc/downloads/zfs_encryption.pdf

- special vdevs
https://www.napp-it.org/doc/downloads/special-vdev.pdf
- trim

- force ashift when adding a vdev

- protection against accidentally adding a basic vdev to a pool

more (all features from 19.dev)
https://napp-it.org/downloads/changelog_en.html
 

I don't fully understand 'special vdevs'.

I was considering adding an Optane 900P for SLOG, but after discussion here decided to go for a pool of NVMe drives for my VMs.

I currently have 2x NVMe drives (Corsair MP510 1.92TB), 4x 8TB WD HDD and 128GB of ECC memory.

Would adding a special vdev increase performance of my HDD pool only?
What sort of drive would be suitable for a special vdev? a good SATA-6G SSD, NVMe SSD, or Optane 900P?

The VMs in my AiO that I am most keen on tweaking the performance of are my Windows 10 desktops for games & 3D software with GPU passthrough (E-2278G iGPU, Quadro P620, Quadro P4000).
 
The fastest option would be creating a pool from an NVMe mirror.

Special vdevs are a new concept introduced into ZFS by Intel. The idea is to identify some performance-sensitive data types and place them on a special vdev, e.g. an NVMe mirror vdev, in a pool where the other vdevs are built from disks. These special data types are mainly metadata, dedup tables and small io. With the small-io concept you can force single filesystems onto the NVMe when you use a recsize setting smaller than the special vdev threshold.

Unlike Arc caching, which caches only random data, special vdevs improve performance in general for the selected data types. Special vdevs are a very intelligent alternative to data tiering, where some active data is copied to a faster part of an array.
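
As a sketch of how this looks on the CLI (pool, filesystem and device names are placeholders):

  zpool add tank special mirror c5t0d0 c5t1d0    # mirrored special vdev added to a disk pool
  zfs set special_small_blocks=64K tank/vms      # blocks up to 64K of this filesystem go to the special vdev
  zfs set recordsize=32K tank/vms                # recsize at or below the threshold: effectively the whole filesystem lands on the NVMe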
 
Thank you _Gea
So rather than 2 separate pools as I have now, I could set my 2x 2TB NVMe as a special vdev mirror for my 4x 8TB HDDs, for approx. 18TB of storage, and have similar performance for my VMs and much faster performance from the HDDs?
Can the special vdev also be a basic stripe of 2x NVMe? I hoped to stripe the NVMe to maximise the storage, as I have another 18TB pool of HDDs as disaster recovery. I would still raid10 the HDDs as I assume they are more likely to develop faults than the NVMe.
Is there guidance on how to set this up?
Is it stable to use now?
 
In a home environment I would not expect a major performance boost for your disk pool: only in a large multi-user, high-load environment would I expect an improvement due to metadata on the fast special vdev or small io in general. In a home environment I would use a second pool.

As a special vdev affects redundancy (a lost special vdev means a lost pool), you must use a mirror (2- or 3-way) for a special vdev. As the whole pool is affected, I would prefer NVMe with powerloss protection. Lastly, there is a known bug with special vdevs: they must have the same ashift as the pool or you cannot remove them again; the pool may even crash when you try.
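
To be on the safe side, check the ashift of the existing pool first and force the same value when adding the special vdev (pool and device names are placeholders):

  zdb -C tank | grep ashift                                    # shows the ashift of the existing vdevs
  zpool add -o ashift=12 tank special mirror c5t0d0 c5t1d0     # force a matching ashift on the new vdev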
 
Thanks, I'll keep to the original plan then.

I'm trying to get my 4x NVMe PLX switch to be more stable. It's the Supermicro AOC-SHG3-4M2P. It is fine with large sustained file transfers of hundreds of GB and with running benchmarks with a single GPU passed through to a VM, but when I push the system hard with multiple GPUs running benchmarks, I get a huge number of errors (e.g. a stripe of 2x 2TB Corsair MP510: Drive A - S:0 H:780865 T:0 / Drive B - S:0 H:629124 T:0, one corrupt file 'WIN10P02-flat.vmdk'), and the errors keep climbing even after the heavy load has finished. It doesn't look like the drives are even being subjected to much use during the benchmarking, so it must be something else.

I destroy the pool and recreate it, and all is well again until I push the system hard. I have tried each drive separately on the adapter with the same results. I have not yet tried a mirror of the drives. I ran MemTest86 for 24 hours with no errors. I then tried each drive in another adapter (a simple M.2 NVMe to PCIe) in the same PCIe slot, with no errors after many hours of heavy use. The drives are passed through as in my post above. I am in contact with Supermicro for help, as the manual is useless.

Is there any way ESXi/OmniOS would not be compatible with such a switch?
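
Side note: the per-drive S/H/T values napp-it shows look like the illumos soft/hard/transport error counters, which can also be watched directly on the OmniOS console:

  iostat -En     # per-device soft/hard/transport error counts plus details of the last error
  fmdump -e      # fault-management error log: timestamped error events (use -eV for full details)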
 
Hi. I have just installed napp-it on an Ubuntu server (with GUI) running as a VM on Proxmox. I have my RAIDZ1 pool, created in Proxmox, with all my data. How can I import this pool (or the disks from the pool) into napp-it to manage it?
 