OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

NVMe is flash storage with a faster interface.
In a multi-user scenario the performance degradation is smaller than with a conventional
enterprise SSD, but of course there is some.

The best you can expect is
- roughly double the performance for a single user compared to a conventional SSD
- with multiple users, about the same performance as a conventional SSD delivers for a single user.

Makes sense.
 
Are the 2012R2 VM results from a VM accessing the Samsung over NFS from napp-it, or via pass-through?
What controller/firmware are you using with napp-it?

Pass-through from the LSI 2308 on my X10SL7-F, exactly how napp-it was receiving the drives.

UPDATE: Tried it out on FreeNAS and speeds were very good. Could the problem lie with napp-it/OmniOS?
Code:
[root@freenas] /mnt/ssd/dataset# dd if=/dev/zero of=/mnt/ssd/dataset/dd.testfile bs=4M count=10000
10000+0 records in
10000+0 records out
41943040000 bytes transferred in 10.980550 secs (3819757645 bytes/sec)

UPDATE 2: After updating my LSI 2308 to P20 from P19, updating my Intel NICs (probably not the reason), and removing serviio from napp-it (it was broken and always in maintenance mode), I recreated the pool and everything seems to be fixed...somehow (probably the LSI 2308 update)


Code:
Memory size: 16384 Megabytes
write 12.8 GB via dd, please wait...
time dd if=/dev/zero of=/ssd/dd.tst bs=2048000 count=6250
6250+0 records in
6250+0 records out
12800000000 bytes transferred in 23.141027 secs (553130172 bytes/sec)
12.8 GB in 23.1s = 554.11 MB/s Write
 
It's no more than running the console dd command against a local disk,
so it's probably a problem around this particular disk, the HBA/firmware and the mpt_sas driver (OS release).

You would need to vary these three to be sure about the reason.

You can also check System >> Log for entries that indicate a problem.
Possible places where a setting can solve a problem are sd.conf
to change disk parameters and mpt_sas.conf to change controller settings,
like disabling MPxIO, which can be a problem with some SAS configs.
(Menu Disks > Details)
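As a rough illustration only (file locations and the MPxIO property as commonly documented for illumos; verify against your release before changing anything), checking the log and disabling multipathing for mpt_sas could look like:
Code:
# check the system log for disk/HBA related messages
tail -50 /var/adm/messages

# example controller setting (assumption): disable multipathing for mpt_sas
# add this line to /kernel/drv/mpt_sas.conf, then reboot
mpxio-disable="yes";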
 
It's no more than running the console dd command against a local disk,
so it's probably a problem around this particular disk, the HBA/firmware and the mpt_sas driver (OS release).

You would need to vary these three to be sure about the reason.

You can also check System >> Log for entries that indicate a problem.
Possible places where a setting can solve a problem are sd.conf
to change disk parameters and mpt_sas.conf to change controller settings,
like disabling MPxIO, which can be a problem with some SAS configs.
(Menu Disks > Details)

Sorry Gea, but I edited my previous post incorrectly (you can check the revisions). I'll fix it now.

I was able to fix it earlier today by updating my LSI 2308 from P19 to P20. Thanks for your help with everything.
 
First, the source of the above error is SSH.
You may stop SSH and check menu System > Log for other reasons.
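For reference, from the console that is roughly (standard illumos service name and log location):
Code:
svcadm disable svc:/network/ssh:default   # stop the SSH service
svcs -xv                                  # show services in maintenance state and why
tail -50 /var/adm/messages                # recent system log entries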

If you update from OI to OmniOS and the problem remains,
the reason may be located in your network or a client.

Is there any special setup or anything curious there?
You may also try an update to the newest OmniOS 151016, as there are continuous fixes.

(There is a huge step regarding SMB in 151017 bloody, available in the next stable around April.)

You may also switch between the e1000 and vmxnet3 vnics to check whether that makes a difference.
If you are not on ESXi 5.5u2 or 6.0u1, you may update to one of them.

Sorry for the late reply, I have a 4-month-old daughter that requires my attention all day long.

So I stopped the SSH service and the error message was gone. Great. But the SMB connection died anyway. So I tried to switch to vmxnet3, same results.

Made a new VM with OmniOS r151016 since for some reason I could not update through pkg on the r151014 machine.

The machine is now up and running, and almost immediately I get error messages. The SSH error was fixed easily. But my OmniOS machine doesn't show up in the Windows network browser. I get an smbd error pointing to dyndns, but I have not set up any dynamic DNS, and I don't use a domain for my home network. I am not an experienced Unix user, so I don't know what to do next.

Before switching to OmniOS, I ran OpenIndiana for over 2 years without problems, until suddenly SMB started to quit on me. That is why I switched to Omni, since it was the recommended OS for napp-it.

I have just 3 VMs on my ESXi machine (pfSense/Omni/Win7), and I haven't done anything special that I can think of that could cause this issue.

Any more advice? :(

 
I would check DNS settings, /etc/hosts for an entry mapping 127.0.0.1 to the hostname,
the wins_server setting and the workgroup name (which needs to be the same on the server and client side).
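A quick way to inspect those settings from the OmniOS console would be something like the following (standard illumos SMB tooling; the workgroup name is a placeholder):
Code:
sharectl get smb            # shows wins_server, system_comment and other SMB properties
smbadm list                 # shows the current workgroup/domain membership
smbadm join -w WORKGROUP    # example: join the workgroup "WORKGROUP" (placeholder name)
cat /etc/hosts              # verify 127.0.0.1 maps to the hostname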

Have you set up your basic IP settings as described here (best to start with DHCP)?
http://www.napp-it.de/downloads/omnios_en.html

To update OmniOS, you must remove the current publisher (014 repository)
and add the new 016 repository. Otherwise a pkg update will only check for newer 014 bits.
http://omnios.omniti.com/wiki.php/Upgrade_to_r151014
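A hedged sketch of those steps on the console (the 016 repository URL shown here is an assumption; take the exact one from the wiki page above):
Code:
pkg unset-publisher omnios
pkg set-publisher -P -g http://pkg.omniti.com/omnios/r151016/ omnios   # assumed repo URL
pkg update -v
# then reboot into the new boot environment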

For OmniOS and OpenIndiana you may also find some hints at
http://wiki.illumos.org/display/illumos/CIFS+Service+Troubleshooting
 
I would check DNS settings, /etc/hosts for an entry mapping 127.0.0.1 to the hostname,
the wins_server setting and the workgroup name (which needs to be the same on the server and client side).

Have you set up your basic IP settings as described here (best to start with DHCP)?
http://www.napp-it.de/downloads/omnios_en.html

To update OmniOS, you must remove the current publisher (014 repository)
and add the new 016 repository. Otherwise a pkg update will only check for newer 014 bits.
http://omnios.omniti.com/wiki.php/Upgrade_to_r151014

For OmniOS and OpenIndiana you may also find some hints at
http://wiki.illumos.org/display/illumos/CIFS+Service+Troubleshooting

In my /etc/hosts file I have only the following:
Code:
::1 localhost
127.0.0.1 localhost loghost
# next entry is done by napp-it agent-bootinit to avoid root console spamming, comment it out optionally with a #
127.0.0.1 Omni

I have set up the basic IP settings according to your explanation; it is currently static.

I checked the illumos wiki, but found nothing that specifically points to my problem.
 
I have one of my Windows machines always up and running as master browser.
The problem is that I cannot find \\omni or \\192.168.xxx.yyy on the network. Using \\omni in an explorer.exe window does not work, for example.
So something isn't playing well within Omni. I cannot ping other computers either.

Is there any other OS that works with napp-it that I could try, moving my ZFS pool over without losing all my data?
 
My main OS is OmniOS, but I support Solaris and the current OpenIndiana (Hipster).

But the master browser functionality should not hinder an SMB connect via \\ip
or even a basic ping. There must be another problem, as Solaris/OmniOS is a very stable solution.

- Is the web-UI working on port 81?
- Do you have more than one nic (potential routing problem)?
- Have you enabled the security panel/firewall settings?

As long as a ping to OmniOS is not working, there is no need to check the other items.
A ping to Windows is mostly blocked by the firewall.
 
The web-UI is working on port 81; I can log in etc.
I have two nics on my machine, one in and one out, and I have pointed the OmniOS VM at the LAN nic.
I have not enabled the firewall within napp-it.

I can ping omni from my Windows computer, and now I can ping my Windows computer from omni, but I cannot ping the gateway/router.
In my router, I cannot see that omni has been granted a lease on an IP, but omni is reachable over NFS from other computers using that address.
 
Are your two nics on different subnets?
Can you post the output of ifconfig -a or the menu System > Ethernet?

Is your Windows PC in the same subnet as one of the nics, or is there a router in between?
 
Don't worry.
Just import the pool.
This works even without a prior export, from OI 151 (older ZFS) to Omni (newer ZFS).
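For illustration, with a placeholder pool name, the import from the console boils down to:
Code:
zpool import            # list pools available for import
zpool import -f tank    # import the pool ("tank" is a placeholder); -f may be needed if it was never exported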

Works great :D The only issue I had was making a bootable OmniOS USB stick. I tried the USB imager on your website but couldn't get it to work.

I ended up using Tools for OSForensics – ImageUSB – Write an image to multiple USB Flash Drives

with the following steps

Code:
from metiche @ servethehome forum

format the USB stick as FAT (not FAT32 or NTFS)
Go to the OmniOS site and download the .usb-dd file
Go to Tools for OSForensics – ImageUSB – Write an image to multiple USB Flash Drives and download ImageUSB
Install the ImageUSB software
When open:
Step 1: choose the USB unit you want to make bootable
Step 2: SELECT THE ACTION TO BE PERFORMED. Choose the option WRITE TO USB DRIVE
Step 3: SELECT THE IMAGE. When browsing for the image file, choose the extension option ALL FILES to find your file with the *.usb-dd extension
Step 4: click WRITE and wait until the process finishes
 
Are your two nics on different subnets?
Can you post the output of ifconfig -a or the menu System > Ethernet?

Is your Windows PC in the same subnet as one of the nics, or is there a router in between?

I should clarify. My ESXi box has two nics, one for WAN and one for LAN. Every VM on my ESXi box points to the LAN nic.
There is only a switch between my Windows PC and the ESXi box hosting OmniOS. Everything uses the same subnet in my home network.

Code:
ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.11.238 netmask ffffff00 broadcast 192.168.11.255
ether 0:c:29:4f:d2:8d
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
e1000g0: flags=20002000840<RUNNING,MULTICAST,IPv6> mtu 1500 index 2
inet6 ::/0
ether 0:c:29:4f:d2:8d
 
As the Solaris SMB server is integrated into the OS and ZFS, it is very stable and zero-config. If the napp-it web-UI is working and the SMB service is online but still not working even after a reboot, I would either re-download the template or do an upgrade to the newer 151016.

You must unset the 014 publisher, set the 016 publisher and do a pkg update
http://omnios.omniti.com/wiki.php/Upgrade_to_r151014
 
I am on the r151016 release, so I cannot upgrade any further. I don't think the bloody release would help, would it?
 
Only on the assumption that your current setup is somehow corrupted.

But indeed, I would try
- another client, to rule out a Windows problem
- passing through the nic or trying a barebone setup, to rule out an ESXi problem
- a new OmniOS setup, to rule out a problem there.
 
Anyone know how I can shrink an ESXi system OS disk? My OmniOS vmdk under VMware seems to be using a lot more disk space than it should. I have it thin provisioned and would like to recoup some space if possible. Suggestions?
 
Only on the assumption that your current setup is somehow corrupted.

But indeed, I would try
- another client, to rule out a Windows problem
- passing through the nic or trying a barebone setup, to rule out an ESXi problem
- a new OmniOS setup, to rule out a problem there.

OK, so I installed r151014 again, and lo and behold, OmniOS shows up on the network.
As soon as I import the zpool again, Omni complains about "smbd dyndns: failed to get a domainname". But I can browse my share from my various Windows clients... for a while. The SMB side on Omni/napp-it somehow gets broken after a while and I have to restart it manually for it to become browsable again. Frustrating problem, this. Back to square one again.
 
I would try a new pool, maybe with a spare disk you have around,
to rule out a pool problem.
 
Anyone know how I can shrink an ESXi system OS disk? My OmniOS vmdk under VMware seems to be using a lot more disk space than it should. I have it thin provisioned and would like to recoup some space if possible. Suggestions?
If the vmdk is a lot bigger than what OmniOS reports, clone it to another thin-provisioned vmdk.
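A minimal sketch of that clone on the ESXi shell (the datastore paths are placeholders; afterwards point the VM at the new disk):
Code:
vmkfstools -i /vmfs/volumes/datastore1/omnios/omnios.vmdk \
  -d thin /vmfs/volumes/datastore1/omnios/omnios-thin.vmdk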
 
Gea,

What could cause CIFS performance problems?

I am experiencing an extremely odd issue.

If I copy to CIFS over Ethernet, the transfer rate is 110 MB/s.
If I copy from the same source over MoCA 2.0 adapters, the rate is 35 MB/s.
If I copy from the same source to another VM or target over the MoCA adapters, the rate is 110 MB/s.
If I download from CIFS over the MoCA 2.0 adapters, the rate is 110 MB/s.

So it seems that OmniOS + CIFS + MoCA have an issue together. Not sure why.
 
Problem fixed. After tuning the send/receive buffers and restarting CIFS, I now get full gigabit speed over MoCA 2.0.
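For reference only, since the exact values were not posted: the usual knobs on illumos are the ipadm TCP properties, something like the following (values are placeholders):
Code:
ipadm set-prop -p max_buf=4194304 tcp    # raise the maximum socket buffer size
ipadm set-prop -p send_buf=1048576 tcp   # default TCP send buffer
ipadm set-prop -p recv_buf=1048576 tcp   # default TCP receive buffer
svcadm restart network/smb/server        # restart the kernel SMB/CIFS service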
 
I currently have an Intel S3700 passed through to OmniOS via an M1015 for slog. My question is, is there a significant performance difference between being passed through or not? I was thinking of just making it part of the VM storage, slicing off a portion of it and provisioning that for slog.
 
Pass-through may be faster, as OmniOS gets direct disk access instead of going through the virtual disk driver and the ESXi driver. But why do you want to use a partition of the S3700 as an Slog and the rest as a VM datastore?

SSDs like the S3700 have powerloss protection. Simply create an SSD-only pool from disks like the S3700 or the similar S3610 and use it to store VMs. Enable sync and you are done, for best performance and security.

ZFS will then use the on-pool ZIL, and in this special case this may be even faster than a single dedicated SSD as an Slog. A dedicated Slog only makes sense if it is much faster than the pool regarding latency or write iops, or if it offers powerloss protection and the pool does not.

In this case the pool is as fast as or faster than an Slog with the same SSD. The plus when using SSDs with powerloss protection for the pool itself is that you are not in danger of data corruption on a power outage due to the background garbage collection that is continuously initiated by the SSD firmware.
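As a minimal sketch of the two variants (pool, filesystem and device names are placeholders):
Code:
# variant 1: SSD-only pool, rely on the on-pool ZIL and force sync writes
zpool create ssdpool mirror c1t0d0 c1t1d0
zfs create ssdpool/vm
zfs set sync=always ssdpool/vm

# variant 2: dedicated Slog device added to an existing pool
zpool add tank log c1t2d0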
 
I understand that HFS+ is very old and prone to bit rot. Because of that I decided years ago to not trust my valuable data to it. I'm now in the final stages of deciding on a storage solution. I deployed FreeNAS and it's working well but there are things that I think would be better running on a true HFS+ file system and shared by true Apple AFP (I run only Macs).

Anyway, besides my NAS I will be maintaining a Mac Pro server for other network services, so my temptation is to put OmniOS on the NAS and create an iSCSI target, then connect to it from the Mac Pro then share the resulting volume to the rest of my Mac (only) clients via true Apple AFP. The Mac Pro has a dual port 10GbE Ethernet and the NAS has 10GbE also (and 8 drives running as a mirror), so performance should be great.

I realize that by doing this I lose the ability to do file-level recovery from snapshots and I think I'm OK with that trade off. What I'm NOT OK with is if, in any way, I give up corruption resiliency by going that route. Do I? Is the solution just as resilient to bit rot as if I had just shared the files directly from the NAS?
 
If you use ZFS for storage, you are protected against bitrot and silent data errors (you should care
about ECC RAM, as RAM can produce undetectable errors even with ZFS); it does not matter whether you share
data via iSCSI, NFS or SMB.

One problem remains.
As HFS+ itself is not copy-on-write, a crash during a write can result in a corrupted HFS+
filesystem, for example when a data block is written but the metadata is not updated - while the ZFS filesystem always remains valid.

You can add some protection by disabling write-back caching, at the price of very slow write behaviour without
an extra Slog device.
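If you do go the iSCSI route, a rough sketch with the COMSTAR tooling looks like this (zvol name and size are placeholders, the LU GUID is the one printed by create-lu, and the stmf and iscsi/target services must be enabled):
Code:
zfs create -V 500G tank/maclun                     # backing zvol (placeholder name/size)
stmfadm create-lu /dev/zvol/rdsk/tank/maclun       # prints the LU GUID
stmfadm add-view <LU-GUID>
itadm create-target

# disable write-back caching on the LU (wcd = write cache disabled)
stmfadm modify-lu -p wcd=true <LU-GUID>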

In general
I would not go this path. Apple itself switched to SMB as the default sharing protocol, as it offers many advantages.
At the moment, TimeMachine support is the only remaining advantage of AFP.

You only need to care about SMB 2 and above for OSX. This is included in Oracle Solaris and in the next OmniOS, around April.
I have done some performance tests with SMB 2 on Solaris and the OmniOS 151017 beta, with excellent results on SMB2 and 10G.

see
http://napp-it.org/doc/downloads/performance_smb2.pdf
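For completeness, enabling the kernel SMB server and sharing a filesystem is roughly (standard illumos/Solaris commands; pool/filesystem names are placeholders):
Code:
svcadm enable -r network/smb/server     # start the kernel SMB service
zfs set sharesmb=on tank/data           # share the filesystem over SMB
# or give it an explicit share name: zfs set sharesmb=name=data tank/data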


btw
If you want to evaluate the OmniOS beta, the new version from today has a known bug with sudo, as it always displays a
"Last Login at .." message on CLI commands. If you use napp-it with this release, you must avoid any actions that
edit system files, like the Tuning panel. You must edit system files manually, e.g. with WinSCP.
 
_gea wins at tech support for his product and the products (Solaris et al) that napp-it runs on.
 
If you use ZFS for storage, you are protected against bitrot
In general
I would not go this path. Apple itself switched to SMB as the default sharing protocol, as it offers many advantages.
At the moment, TimeMachine support is the only remaining advantage of AFP.

You only need to care about SMB 2 and above for OSX. This is included in Oracle Solaris and in the next OmniOS, around April.
I have done some performance tests with SMB 2 on Solaris and the OmniOS 151017 beta, with excellent results on SMB2 and 10G.

see
http://napp-it.org/doc/downloads/performance_smb2.pdf


btw
If you want to evaluate the OmniOS beta, the new version from today has a known bug with sudo, as it always displays a
"Last Login at .." message on CLI commands. If you use napp-it with this release, you must avoid any actions that
edit system files, like the Tuning panel. You must edit system files manually, e.g. with WinSCP.

Thank you very much. And wow, that SMB2 performance test doc is extensive and very well done.

My thought to use iSCSI was only partly to use native AFP. There is also the fact that my server would have in essence a native HFS+ disk which I think may also offer some advantages over storing everything on a remote server over a file-sharing protocol. But maybe the advantages aren't so many.

Regarding SMB2, I was not aware that it is now in Solaris, 11.3 by the looks of it. So, to be clear, this is kernel-level SMB2, shared via 'zfs set sharesmb=on fsname' ?

Also, and I'm sure I could research this, but has there been clarity on whether it's OK for home users to use Solaris without paying? Or are we supposed to buy a support contract like anyone else?

And, aside from cost, is Solaris preferable to OmniOS? If that's too long of an answer, don't bother.

Gea, thanks for all your help for me and others, it is really appreciated.
 
About HFS
I would use SMB2 whenever possible and iSCSI/HFS+ only when absolutely needed, for example for applications that insist on local disks.

Solaris has the following main advantages over the free/Illumos-based systems:
- ZFS encryption, which allows encryption on a filesystem level with different keys
- sequential resilvering, much faster than in OpenZFS

and
- Commercial support
- SMB 2.1 (yes, the Solaris kernel-based and multithreaded SMB, not SAMBA)

The last two are also available in Illumos, as OmniOS offers commercial support as well. SMB 2.1 is now included in Illumos, as Nexenta upstreamed their SMB 2.1 kernel-server improvements last November. The kernel-based SMB 2.1 is already available in the OmniOS beta that I used for my tests. The next OmniOS stable, 151018 with SMB 2.1, is expected March/April.

Oracle Solaris 11.3 is the most feature-rich ZFS server at the moment and probably the fastest. It comes with its own ZFS version (incompatible with OpenZFS) and offers no support or bugfixes without a paid contract, so there are pros and cons. In my own setups I am on OmniOS, as besides encryption and sequential resilvering it is comparable, but larger companies are mostly on Solaris.
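For reference, the Solaris per-filesystem encryption mentioned above looks roughly like this (hedged: Solaris 11 syntax as I recall it, with a placeholder dataset; check the Solaris 11.3 docs before relying on it):
Code:
# create an encrypted filesystem with its own passphrase (Solaris 11 ZFS, not OpenZFS)
zfs create -o encryption=on -o keysource=passphrase,prompt tank/secure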

About the Solaris license: http://www.oracle.com/technetwork/licenses/standard-license-152015.html
License Rights and Restrictions
Oracle grants You a nonexclusive, nontransferable, limited license to internally use the Programs, subject to the restrictions stated in this Agreement, only for the purpose of developing, testing, prototyping, and demonstrating Your application and only as long as Your application has not been used for any data processing, business, commercial, or production purposes, and not for any other purpose. ...
 
- ZFS encryption, which allows encryption on a filesystem level with different keys

Which is absolutely useless if you can't see the source. There could be anything in it.

And also, friends don't recommend Oracle to friends. Oracle is an evil moloch that needs to die a painful death. Did they replace all those ZFS engineers that ran away? Watch 10 minutes of https://www.youtube.com/watch?v=-zRN7XLCRhc#t=2020
 
And also, friends don't recommend Oracle to friends. Oracle is an evil moloch that needs to die a painful death. Did they replace all those ZFS engineers that ran away? Watch 10 minutes of https://www.youtube.com/watch?v=-zRN7XLCRhc#t=2020

Don't hold back now, tell us what you REALLY think! :D

OK, OK, I'll stick with Omni. Gea, aside from that bug you mentioned in the recent build of OmniOS, assuming I just want to make a plain Jane file server config, should it be just as stable as the stable one?
 
I know you are counseling otherwise and I'm weighing that, but the thought of a single exposed server, with my Mac Pro using an "invisible" iSCSI connection to the storage, is still appealing. Pursuant to that, I have a question.

I know that probably the main thing I'd give up is file-level recovery from ZFS snapshots. My question is, assuming I use iSCSI and do enable periodic snapshots, what would the procedure be to actually recover a file? And, aside from the difficulty of that procedure, are there any other downsides to snapshots using this approach?
 
Problem fixed. After tuning the send/receive buffers and restarting CIFS, I now get full gigabit speed over MoCA 2.0.

What did you set them to?

Gea, do you recommend playing with the send/receive buffers for 10GBase-T with SMB 2.1? (Solaris 11.3).
 