OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Thanks for your response, _Gea - I think you are entirely correct about newer versions of ZFS being less lenient with remaining free space calculations.

The remaining space was approximately 1% - speeds were actually pretty good (2 vdevs of 10 drives each in RAIDZ2). I never had any performance problems, which may surprise some people. :)

I had to free up an additional 1 TB or so and everything was fine. I actually booted up my OI151a VM so I could monitor the amount of space freed.
 
Oh, another thing - I really appreciate that napp-it now includes a VMware ESXi template - it makes setup and maintenance so much easier.
 
Anyone know why/how to improve performance for the ESXi disk?

I have a ZFS pool:
Tank: 6x 2TB mirror

Tank can (locally, via dd):
read @ ~300 MB/s
file copy @ 110 MB/s
write @ 224 MB/s
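(Roughly how I measured, for reference - the file size and paths here are illustrative, not my exact invocation:)

# sequential write, then read back; use a file larger than RAM so the
# ARC doesn't mask the disks (and note that on a compressed dataset,
# /dev/zero will overstate throughput)
dd if=/dev/zero of=/tank/ddtest bs=1M count=20480
dd if=/tank/ddtest of=/dev/null bs=1M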

Tank exports storage via NFS @ MTU 9000 over vmxnet3 directly to ESXi 6.0; I am using _Gea's ovf template.

From inside the VM I can copy from (and back to) NFS @ ~110 MB/s - 100% network-based.

However, copying within the ESXi disk is rather slow:
50 MB/s. It's basically doing the same as above, except within a vmdk.

Could there be a block size mismatch, an NTFS cluster size issue, ESXi IO control, or anything like that preventing the VM from using the pool at 100%? NFS doesn't seem to be the limiting factor.

Thanks
 
Is the ESXi vdisk on an NFS or a local datastore?
If the vdisk is on NFS, is sync disabled, or enabled/default?

What test tool are you using?
I would prefer a Windows VM with tests to C:, e.g. with AJA or CrystalDiskMark.
 
The vdisk is on the NFS datastore on the pool described above; I was testing with 20 GB files using the Windows interface. It starts very fast but drops to the steady speed described above. I will retry with CrystalDiskMark but I expect the same results; also, CrystalDiskMark doesn't have a copy test (50% read/50% write).
 
_Gea, I've installed AJA on a beefy VM writing to the pool (sync=off).

I have also mapped Windows drives via NFS and SMB to the same pool.

AJA System Disk Test, 16 GB file:
vDisk: 64 MB/s write, 101 MB/s read
NFS drive: 211 MB/s write, 84 MB/s read
SMB drive: 656 MB/s write, 330 MB/s read (not sure why it's so high, but the network throughput seems to agree; transfer speed @ 6.3 Gbit)
 
Can you disable sync on the NFS share with the vdisk, as read is twice as fast as write?
The new SMB 2.1 stack on Solaris/OmniOS is very fast.
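(A minimal sketch of checking and toggling sync, assuming the filesystem behind the NFS datastore is tank/nfs:)

zfs get sync tank/nfs           # standard | always | disabled
zfs set sync=disabled tank/nfs  # fast but unsafe on power loss; set back to sync=standard to revert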
 
There are some options like
- OS tuning: tcp, nfs and vmxnet3s tuning (mainly increasing buffers),
- sometimes a smaller ZFS blocksize like 32K or 64K helps, as your client OS mostly uses a smaller blocksize than ZFS with its default of 128K,
- Windows NIC and TCP settings, e.g. disabling interrupt throttling.

A transfer to a vmfs/ntfs filesystem over NFS to ZFS is always slower than accessing SMB shares on ZFS directly, especially if you read and write from the same pool, but with some tuning the values can be better than your current results.
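(To make the first two concrete, here is roughly what those knobs look like; the dataset name and buffer values are examples, not recommendations:)

# smaller ZFS recordsize for the filesystem behind the NFS datastore
# (applies to newly written blocks only, so re-copy existing vmdks)
zfs set recordsize=64K tank/nfs

# larger TCP buffers on OmniOS/Solaris via ipadm
ipadm set-prop -p max_buf=4194304 tcp
ipadm set-prop -p send_buf=1048576 tcp
ipadm set-prop -p recv_buf=1048576 tcp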
 
Hello,
I am storing the data from the sensors of my servers (temperature, SMART values, UPS data, ...) in a custom SQLite3 DB using a Perl script I wrote.
Soon I would like to store also data from Arduino devices I am preparing.

Since SQLite has no simple and ready-to-use graphing tools I can access from an Android tablet, I thought about switching to OpenTSDB for the data storage and using Grafana for the plotting.

However, there is no Grafana binary for OmniOS or Solaris.

I wasn't able to get a Go dev environment running on OmniOS, so I have two options: 1) cross compile or 2) use a Linux VM.

For 1) I am a bit confused by Chris's Wiki :: blog/programming/GoCrossCompileNotes and gonative: Cross compiling Golang programs with native libraries - inconshreveable (sketch of the basic case below).
For 2) I'm not sure how to proceed. OmniOS should have Linux KVM, right? Or better, VirtualBox?
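(If I read those pages right, the simple case - no cgo/native libraries - is just environment variables; a sketch, assuming Go 1.5+ on the Linux side and that GOOS=solaris also covers illumos/OmniOS:)

# cross-compile a pure-Go program for OmniOS from a Linux box
GOOS=solaris GOARCH=amd64 CGO_ENABLED=0 go build -o myprog .
# native C dependencies are the hard part (that's what gonative is for);
# Grafana also has its own build scripts and a JS frontend, so it is
# likely more involved than one command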

Could you give me some recommendations? or, if you have a Go environment, could you please compile Grafana for OmniOS?

Thanks
 
I cannot comment about the options to compile Grafana as I have not used that.
If you want to virtualize, you can use the free ESXi below OmniOS, e.g. with an All-In-One setup.
see http://napp-it.de/doc/downloads/napp-in-one.pdf

OmniOS is also working on LX and KVM from SmartOS, and you can use VirtualBox on top of OmniOS.
 
I'm trying to set up Ubuntu 16.04.1 with napp-it as root
... but: sh /var/web-gui/data/tools/linux/napp-it
-> doesn't work:
"Can't open /var/web-gui/data/tools/linux/napp-it"

cd /var/web-gui/data/tools
=> there is no linux folder! :/
(but an ubuntu folder)

... so, it is impossible to start napp-it :'(
 
Are you on napp-it 2016.07f or newer?
The wget installer is on 2016.07f
 
I downloaded this afternoon @ 3pm with: wget -O - www.napp-it.org/nappit | perl
Is there anything else, newer, not mentioned in the tutorial?
 
Can you recheck the folder
/var/web-gui/data/tools/linux/

I have just rechecked 16.07f and the folder (with minihttpd and a start script) is there
(I checked the download itself, but not on Ubuntu 16.04)
 
no folder "linux" in /var/web-gui/data/tools/

"minihttpd" is in /var/web-gui/data/tools/ubuntu/etc
there is a file called "napp-it" in /var/web-gui/data/tools/ubuntu/etc_init.d/
 
Ok.. I was following : napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana, Solaris and Linux : Linux

On Ubuntu 14/16 and Debian 7/8, you can try the following to run napp-it:

- install the OS
- setup ZFS (on Ubuntu 16: apt-get install zfsutils-linux)
- run: wget -O - www.napp-it.org/nappit | perl

This downloads napp-it.
Start napp-it for the first time via
sh /var/web-gui/data/tools/linux/napp-it

After the initial start, you can start via the command "napp-it" and stop via "napp-it-stop".
There is no autostart enabled during boot.
 
I will need to try that on Ubuntu as the wget installer downloads the above 16.07f
 
After download and unzip: you just have to rename the extracted folder "data_16.07f" to "data", and then "sh /var/web-gui/data/tools/linux/napp-it" works fine!

I was able to import a pool which was created on OpenIndiana under napp-it in 2012! :)
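(For anyone else hitting this, a sketch of the workaround - I'm assuming the zip was unpacked under /var/web-gui next to the wget-installed data folder; adjust paths to where you actually extracted it:)

cd /var/web-gui
mv data data.orig    # set the wget-installed folder aside (name is arbitrary)
mv data_16.07f data  # the folder extracted from the downloaded zip
sh /var/web-gui/data/tools/linux/napp-it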
 
Hello All,
I'm currently working on my 3rd iteration of ZFS all-in-one Server.
This time I'll use an x10SDV-TLNF4-O board for 16 cores (HT) and up to 128 GB RAM.
Revising not necessarily for more space, but for energy savings, more RAM and CPU cores, and to do things "properly" this time.

By "proper" I mean
Instead of just using a random SSD and thumb drive, I have 2 100GB S3700 Intel SSDs that I will mirror and install esxi and the OmniOS+napp-it image on.
Then I plan on hosting my files / running additional VMs on a ZFS Pool which will consist of 4x10TB Drives (2 mirrored VDEV) (expanding to 8x10TB and 4 vdev once budget allows) on a LSI 9240-8i (flashed to 9211-it mode) via passthrough.

As I finalize my parts list, my main question is about the ZIL (SLOG) device.

2 questions about "proper procedure":
A) Should the SLOG be mirrored? Not sure if failure of the SLOG device is disastrous to the pool these days.
B) The motherboard's SATA controller will be running the ESXi boot drives, so as far as I can tell (without parts in hand yet) I can only pass through the 9240 HBA to the OmniOS+napp-it VM. Is there a way to pass through a single motherboard SATA port? I will eventually need all 8 ports on the HBA for large storage devices. If passing a single SATA port isn't possible, how disastrous would it be to instead pass a vmdk stored on the SLOG device to OmniOS for use as the SLOG?

Thanks for your help
 
Boot mirror
You need a hardware RAID controller if you want to mirror the ESXi bootdisk.
But as the Intel S3700 is very reliable and a reinstall from scratch (ESXi + napp-it from an ova template) is done in 20 min, this is not really needed. Another option beside a hardware RAID is a 3.5" SATA raid-1 enclosure for 2x 2.5" disks, or you can image your bootdisk, for example with Clonezilla, and use the second disk as a cold spare.

About the Slog
If an Slog fails together with a crash, you may encounter a dataloss of last writes - not very probable.
If the Slog simply fails, ZFS will use the onpool ZIL instead, with the effect of reduced performance.
A Slog mirror helps against both effects.

You can pass-through single disks in ESXi. This is called physical raw disk mapping. You can also use a vmdk as a ZIL device.
Both are possible, but as you add another layer with the ESXi disk drivers, this is not as perfect as direct connectivity over the Solaris disk driver.
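(For reference, a physical RDM is created from the ESXi shell roughly like this; the device identifier and datastore path are placeholders:)

# list physical disk identifiers
ls /vmfs/devices/disks/
# create a physical-mode RDM pointer vmdk for one disk, then attach
# the resulting vmdk to the storage VM
vmkfstools -z /vmfs/devices/disks/t10.ATA_____YOURDISKID /vmfs/volumes/datastore1/omnios/slog-rdm.vmdk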
 
First, how do you plan to mirror ESXi boot on chipset RAID? Reread _Gea's plan for 30-minute recovery instead of boot RAID.

S3700s are top-notch SSDs, and neither ESXi nor OmniOS boot volumes really need their performance. Consider a cheap drive (or 2) for ESXi & the NAS. Doing that will allow you to put the S3700s into a ZFS mirror, giving fast VM storage & removing load from your mechanicals.

2A) Not too critical. You'd have to experience multiple, simultaneous failures to lose data if a single SLOG breaks. Better to prioritize backup first.
2B) You must pass the controller, not individual ports. And using your SSDs for VM storage means you wouldn't benefit from separate SLOG.
 
Thanks for these great insights and suggestions. I had forgotten ESXi won't work with software RAID...
Re-evaluating....
Initial thoughts: if I boot ESXi via thumb drive, is it perhaps possible to pass the entire motherboard SATA controller to the ZFS VM and not even create a vmdk on a datastore for that VM - instead installing directly, via passthrough, to ONE of my S3700s? That would also put all the SATA ports where I need them, natively visible to the ZFS VM, so the second S3700 could hold the SLOG. Or does ESXi REQUIRE an initial datastore to put the ZFS VM onto?
 
So I've been playing in a virtualized environment, and I think I might be onto something cool (needs testing).

The Problem:
Lots of people attempt a ZFS server + ESXi all-in-one; despite having motherboards with many onboard SATA ports, and despite ZFS being happy with "dumb" controllers, many have to purchase an HBA or additional controller card because they need to pass through an entire controller device to the ZFS VM. And because users need an initial datastore, they have to use some of the onboard motherboard ports to boot and host it, which means they cannot pass through the motherboard controller to their ZFS VM.

My Solution:
Use a USB thumb drive to boot ESXi AND serve as the initial datastore (for VM configuration files only), pass through the entire mobo controller, and install directly to a boot drive on the passed-through motherboard controller (instead of to a vmdk on a datastore).

Procedure:
-Install ESXi to the USB drive (16 GB or greater is probably what you want here)
-Boot from said USB drive
-Once ESXi is booted, manually create a VMFS datastore on the ESXi boot drive itself (this worked for me on ESXi 6.0u2; rough commands after this list)
-Create a new VM on the USB datastore, and pass through your motherboard controllers to the VM
-Delete any small vmdk that you may have created in the first step (you don't actually want to run an OS from your thumb drive, it would probably degrade quickly)
-Copy the OmniOS (or whatever OS) iso over to the datastore
-Boot the VM, which should now see your motherboard's storage controllers with a real SATA storage device attached, and install your OS natively to that storage device rather than to a vmdk sitting on a datastore
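(The datastore-on-the-boot-USB step from the ESXi shell looks roughly like this; the device name, partition number and sector range are placeholders you must derive from partedUtil getptbl/getUsableSectors on your own drive:)

# identify the USB boot device (USB disks usually show up as mpx.vmhbaXX)
ls /vmfs/devices/disks/
partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0
# add a VMFS partition in the free space after the ESXi boot partitions;
# the long GUID is the VMFS partition type
partedUtil add /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0 gpt "10 15000000 30000000 AA31E02A400F11DB9590000C2911D1B8 0"
# format it as the new local datastore
vmkfstools -C vmfs5 -S usbdatastore /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:10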

Upsides to this configuration:
-You would not need to buy an additional disk controller for passthrough if your motherboard already came with enough ports.
-Native access to all your motherboard's SATA ports in OmniOS/napp-it
-If you had an 8-port HBA and were maxed out, this configuration would let you use more SATA ports from the motherboard.
-Installing SLOG and L2ARC devices becomes easier if your passthrough HBA is maxed out, and this configuration allows for native control of said devices.
-You can still serve drives on the motherboard controller from the ZFS VM back to ESXi as datastores if you wish, via NFS or iSCSI.

Downsides to this configuration:
-Your ZFS VM must run off a native storage device instead of a virtual storage device. (This might make copying or mirroring your ZFS VM more difficult)

I don't have all the parts yet that I would need to test this, but I have done it in VM Workstation with a USB drive, and it seems like it should be possible: I was able to run a VM with ESXi on it off a USB drive, install a Linux distro to the datastore on the USB, and run that VM-within-a-VM without having any drives attached to the initial ESXi VM at all.

Has anyone tried this or see any glaring issues with this setup?

EDIT: I'm realizing this would only be useful if the onboard SATA ports sit on a controller that can actually be passed through as a PCIe device, which I think isn't the case for most onboard ports anyway.
 
Yes, you can boot ESXi from USB, and with ESXi 6.0u2 and the new webconsole you can use USB as a local datastore where you can put your storage VM. Then SATA is free for pass-through.

BUT
- USB as a bootdevice for systems other than ESXi can be flaky and slow
- SATA can give problems with pass-through or hotplug
- mainboards with many SATA ports may use unsupported or badly supported chipsets
- disks on SATA ports are recognized by controller port, not by the disk's unique WWN

Using SATA for booting and an extra controller for storage, especially an LSI-based one, avoids all of these problems, and you can add as many controllers as needed. Using USB for a local datastore is ok for home use and can save 50-150 $/Euro.

You can use AiO setups for production use, but should then avoid USB as the storage boot device and use LSI HBAs for storage.
 
I'm not saying to use the USB datastore as the storage, though - I'm talking about just storing the configuration for the VM (the .vmx file, I think) there, with NO virtual disks on the USB datastore, and letting the VM use a real SATA-connected drive that's already been passed through to run OmniOS.

But yes, I see your points with regard to motherboard SATA controllers being much more flaky than LSI controllers. (I've seen so many systems go bad due to motherboard SATA ports going totally crazy and flaky.)
 
Getting some good speeds using Sol 11.3, SMB, Win 10, and Intel X540-T1. No jumbo frames or tuning.


[screenshot of benchmark results]
 
Hello... I wrote a fan control script for all-in-one users, to be run on the OmniOS server itself. It will automatically adjust fan speeds up and down based on the temperature of one target drive in your pool (I used my hottest drive).
DISCLAIMER:
  1. The IPMI commands are specific to my board (X10SDV-TLNF4), but may work for many other Supermicro boards; check and adjust them accordingly before using this script.
  2. The smartctl command is likewise specific to drives behind LSI PCIe HBAs, I think, but can be adapted to any drive.
  3. You will probably have to install ipmitool on OmniOS ("pkg install ipmitool" - it's pretty painless).
Thanks to _Gea and other people on the FreeNAS forums who have written similar fan control scripts. Mine is different because it adjusts fan speed very granularly rather than in min/med/max fan zones.

You can adjust the target temperature, set a max and minimum speed that you want the fans to run at, and choose an interval time for adjusting.





#!/usr/bin/bash
# Fan Control Script: steps the fan duty cycle up or down until the
# target drive sits at the target temperature.
clear

# Target drive temperature in degrees C
targettemp=42

# Seconds to wait between adjustments
intervaltime=180

# DO NOT SET MAXSPEED ABOVE 60! Speeds are out of 64, i.e. 32/64 would be
# 50 percent fan speed. The value is sent to IPMI as hex digits via the
# 0x prefix below, which is why it must stay between 00 and 60 -
# anything higher sends a strange value.
maxspeed=60
minspeed=24

# Set the BMC fan mode to Full so the raw duty-cycle commands stick
# (0x30 0x45 is Supermicro-specific; verify for your board)
ipmitool -H 192.168.1.2 -U ADMIN -P yourpassword raw 0x30 0x45 0x01 0x01

# Fan speed is controlled from 00 to 64
startspeed=48
ipmitool -H 192.168.1.2 -U ADMIN -P yourpassword raw 0x30 0x70 0x66 0x01 0x00 0x$startspeed
currentspeed=$startspeed

while true
do
    # Drive temperature; the cut field is position-dependent, so verify
    # it against your own smartctl output
    currenttemp=`smartctl -a -d sat,12 -T permissive /dev/rdsk/c3t50014EE20AA996EDd0s0 | grep -i temperature | cut -d" " -f 37`
    currentfanspeed=`ipmitool -H 192.168.1.2 -U ADMIN -P yourpassword sdr | grep -i FAN1 | cut -d'|' -f2`
    echo "Current Temp: "$currenttemp" Target Temp: "$targettemp" Current Fan Speed: "$currentfanspeed

    if [[ "$currenttemp" -gt "$targettemp" ]]
    then
        echo "HDD hotter than target, increasing fan speed"
        if [[ "$currentspeed" -ge "$maxspeed" ]]
        then
            echo "Fan at max, cannot increase more"
        else
            ((currentspeed=currentspeed+4))
            echo "Changing currentspeed to: 0x"$currentspeed"/64"
            # Set both fan zones (0x00 and 0x01) to the new duty cycle
            ipmitool -H 192.168.1.2 -U ADMIN -P yourpassword raw 0x30 0x70 0x66 0x01 0x00 0x$currentspeed
            ipmitool -H 192.168.1.2 -U ADMIN -P yourpassword raw 0x30 0x70 0x66 0x01 0x01 0x$currentspeed
        fi
        sleep 1
    elif [[ "$currenttemp" -lt "$targettemp" ]]
    then
        echo "HDD cooler than target, decreasing fan speed"
        if [[ "$currentspeed" -le "$minspeed" ]]
        then
            echo "Fan at minimum, cannot decrease more"
        else
            ((currentspeed=currentspeed-4))
            echo "Changing currentspeed to: 0x"$currentspeed"/64"
            ipmitool -H 192.168.1.2 -U ADMIN -P yourpassword raw 0x30 0x70 0x66 0x01 0x00 0x$currentspeed
            ipmitool -H 192.168.1.2 -U ADMIN -P yourpassword raw 0x30 0x70 0x66 0x01 0x01 0x$currentspeed
        fi
        sleep 1
    else
        echo "HDD at desired temp"
    fi

    sleep $intervaltime
    clear
done





Other possible future tweaks... This motherboard has two fan zones; I could assign one to CPU temp and the other to the drive fans if I find that I'm not getting enough CPU airflow under load. (Right now I just make sure my minspeed is high enough for any scenario.)

Anyways hope this is useful as a reference for someone.
 
Running into a brick wall here.

Trying to install OmniOS, but when I load it via USB, it'll get to "syncing file systems", then something flashes up and the system reboots.

I'm absolutely stuck at this and haven't found any way to proceed.
 
Current OmniOS supports USB 1 and 2.
If you boot from USB or want to install onto USB, avoid USB 3.

On some newer boards like the SuperMicro X11 you cannot boot OmniOS from USB on any port for that reason (only a USB keyboard is possible, if you enable keyboard support for Windows in the BIOS). In this case you must install from a SATA CD/DVD or use my cloned system image.
 
I have all USB 3.0 disabled in the BIOS. It's currently in USB 2.0 mode.
 
It's not the mode, it's the chipset and driver.
To be sure, try to install from a SATA CD/DVD (if you have one).

USB drivers for newer XHCI chipsets are on the way but not currently available in Illumos-based systems.
 
Open-ZFS Developer Summit 2016
There are videos available at OpenZFS
with info about the state of

- ZFS native encryption (a filesystem property, not on the underlying disks)
- faster sequential resilvering

These features are in Oracle Solaris, but Open-ZFS lacks them at the moment.
 
Hello,

I have not updated my OmniOS in over a year; currently on r151014. Should I update to r151018 (i.e., is it stable)?

Also, what is the latest stable firmware for the IBM M1015? I am still on P16 and wondering if it is worth updating. I recall seeing issues with one of the newer firmwares, but can't recall which version to avoid and whether there has been a fix since then.

Thank You!
 
151014 is a long term stable.

Current stable is 151018;
the main advantage is SMB 2.1.

Next stable 151020 is awaited soon,
with some improvements, e.g. improved NVMe support and LX containers.

For LSI 2008-based HBAs the current firmware is 20.00.07 (the last I had checked), with bugs in the 20.0 releases lower than 20.00.04.
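(For context, moving between OmniOS stables is a publisher switch plus a pkg update into a new boot environment; a sketch, assuming the OmniTI repo layout of that era - verify the URL and supported upgrade path in the r151018 release notes:)

# point the omnios publisher at the new release repository
pkg set-publisher -G '*' -g https://pkg.omniti.com/omnios/r151018/ omnios
# upgrade into a fresh boot environment, then reboot into it
pkg update
init 6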
 
Thanks for the quick response, _Gea!

You are correct, latest LSI 2008 based firmware is: 20.00.07.00

Just to make sure I understand correctly: it is fine to update to 20.00.07.00, since the bugs were fixed around 20.00.04.00?

Thanks again!

Update: flashed to 20.00.07.00 and everything seems fine so far. Going to hold off on the OmniOS update for a few days so that it will be easier to troubleshoot if I do encounter issues.
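(For anyone else doing this, the IT-mode flash is typically done from the UEFI shell or DOS with LSI's sas2flash; a sketch - the firmware/BIOS file names are from the 9211-8i IT package and may differ for your download:)

# advanced mode (-o) is needed on boards crossflashed from IBM M1015
sas2flash -o -f 2118it.bin -b mptsas2.rom
# verify the result
sas2flash -listall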
 
20.00.07 should be ok (I have not seen problem reports about it).

btw.
OmniOS 151020 should be available soon.
 
New:
OmniOS 151019+ now includes LX (Linux) containers (maintained by SmartOS, Joyent/Samsung).

read
[OmniOS-discuss] Bloody update -- NOW INCLUDES LX Beta SUPPORT

or follow the latest discussions about LX on
omnios-discuss

Example: using Plex on Linux as a VM (have not tested myself but seems a good start): Lights and Shapes
In the future I suppose all non-storage features and add-ons in napp-it will be based on Zones, KVM or especially LX.

Read the interview with Bryan Cantrill about containers:
All Things Containers From Solaris Zones to Docker

A very basic howto for setting up a Linux container (CentOS) on OmniOS:
http://www.napp-it.org/doc/downloads/zones.pdf
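(To give a flavor, LX zone creation follows the usual zonecfg/zoneadm pattern; a rough sketch - the vnic, kernel-version value and image path are placeholders, and the exact install arguments are described in the PDF above:)

# create a vnic for the zone on your physical nic
dladm create-vnic -l igb0 lx1net0
# define the zone
zonecfg -z lx1 << 'EOF'
create -b
set brand=lx
set zonepath=/zones/lx1
set ip-type=exclusive
add net
set physical=lx1net0
end
add attr
set name=kernel-version
set type=string
set value=3.16.0
end
commit
EOF
# install from a Linux image tarball and boot (flags per the howto)
zoneadm -z lx1 install -s /tank/images/centos-7.tar.gz
zoneadm -z lx1 boot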
 