OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

I have updated the SMB2 checks with results from OmniOS bloody.
http://napp-it.org/doc/downloads/performance_smb2.pdf

Main results for 10G Ethernet on OSX 10.11 and Windows 8.1

- OS version and client network driver are very critical for 10G:
on some configs or with some driver releases, 10G is not faster than 1G (mostly on reads)
- From Windows, performance to Solaris is similar to OmniOS (at a lower level than with OSX)
- From OSX, SMB2 to Solaris is faster than to OmniOS
- OSX is faster than Windows on SMB2 reads and writes out of the box
SMB2 performance on OSX goes up to > 600 MB/s on writes and > 800 MB/s on reads
SMB performance on Windows goes up to > 300 MB/s on writes and > 600 MB/s on reads

This is a quick "out of the box" check with SMB2 and Jumboframes as the only special settings on OSX.
On Windows 8.1, defaults + MTU 9000 are used. Maybe we need some additional tweaks on Windows.
 
Hi,

I have napp-it 15b, if I am not mistaken, running on ESXi 5.5 (U1).
On top of it runs one Windows 8.1 guest O/S using iSCSI, with 2x RAIDZ2 pools.

In the past I tried to tweak vmxnet3 adapters, but quickly went back to e1000 after inconsistent performance. Even after moving back to e1000, I sometimes see performance quirks over iSCSI or SMB, but mostly it's reasonable.

Otherwise everything works well.
Do you recommend switching over to ESXi 6 and the new napp-it 14d install and importing everything?

Thanks,
 
ESXi 5.5u1 has a bug with NFS.
I would either switch to 5.5U2 or 6.0.0U1

With ESXi 6 you need newer tools (included in current ova template).
Whenever possible, use the faster vmxnet3 over e1000

If you update to a newer napp-it on a newer ESXi, you need to
- update ESXi
- import napp-it ova template (ESXi)
- add pci device for pass-through (ESXi)
- import pool and share via NFS (napp-it)
- import NFS storage (ESXi)
- import VMs (ESXi file browser, right-click on the .vmx file)
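On the napp-it side, the pool import and NFS share steps above boil down to two commands; a minimal sketch (the pool name `tank` and the filesystem `tank/nfs` are assumptions):

```shell
# On the napp-it VM (Solarish): list pools visible on the pass-through HBA
zpool import
# Import the pool by name (add -f only if it was not cleanly exported)
zpool import tank

# Share a filesystem via NFS so ESXi can mount it as a datastore
zfs set sharenfs=on tank/nfs
# Verify the share property is active
zfs get sharenfs tank/nfs
```

After this, the NFS datastore can be added in ESXi with the napp-it VM's IP and the filesystem's mountpoint as the path.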
 
Thanks.

I already updated ESXi 5.5U1 to U2 a while ago and haven't touched anything else since.

I initially used iSCSI for the host due to horrible NFS performance with U1 even without sync, and stayed with e1000 due to inconsistent performance with vmxnet3.

My thought was to simply unplug the existing SATA drive holding ESXi and the datastore with the napp-it VM, and deploy ESXi 6 and the updated appliance to a new drive, keeping the old one just in case something fails.
After that, reconfigure and try again with vmxnet3 and jumbo frames.

Since everything is basically stored on the pool, importing it back should be pretty straightforward, I suppose? Any reason I shouldn't do it this way?
 
Team,

I'm still catching up from about post 120 but I have hit a huge issue.

My brother has somehow caught the CryptoWare ransomware, and it has encrypted about 800GB of files on the server with 2048-bit encryption. Luckily (ironically), the computer that caught the virus has an issue whereby it reboots every 40 minutes. He decided not to turn it back on after that initial 40 minutes (or so the story goes).

Now, these encrypted files are on my Napp-it 0.9f5 ZFS appliance.

Is there a way to recover these files from a filesystem snapshot?

I do have a backup, albeit slightly out of date.

Thanks,
dL.
 
This is why you use a versioning filesystem like ZFS with readonly snaps.
If you have snaps prior to the infection, you can restore files from them;
use Windows "previous versions".

No chance for CryptoWare to modify or destroy snaps via an
SMB share - they are readonly. Not even an admin or root can
modify or destroy them from Windows.
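Server-side, the same snapshots are also reachable through the hidden .zfs directory of each filesystem; a minimal sketch (the filesystem, snapshot and file names are assumptions):

```shell
# List snapshots of the affected filesystem
zfs list -t snapshot -r tank/data

# Snapshots are exposed read-only under the hidden .zfs directory
ls /tank/data/.zfs/snapshot/

# Copy a clean pre-infection version of a file back
cp /tank/data/.zfs/snapshot/daily-01/report.doc /tank/data/report.doc

# Or roll the whole filesystem back to a snapshot (destroys everything newer!)
zfs rollback -r tank/data@daily-01
```

The per-file copy is the safe option; `zfs rollback` is only sensible when everything written after the snapshot is known to be damage.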
 
Thanks for the quick response Gee.

I'm not sure I'm in luck as it appears snaps were only turned on for the rpool:
http://imgur.com/5W9OexO

The files that were affected exist in tank/dave and tank/software/daves

I'm currently reading through the manpages for snapshot restore to confirm.
 
Snapshots are only taken if you have enabled autosnap jobs or if you have created them manually.
No versioning, no undo, no restore if you do not have snaps.

You can simply check the napp-it menu Snapshots.
 
Can you tell from the screenshot I posted (the imgur link) if snaps were turned on for tank? Or were they just turned on for rpool?

I remember doing some work on the server about 18 months ago.

Apologies for the rookie questions, the server is so stable I leave and forget it for the most part, other than updates!
 
What is the output of the menu "Snapshots", or the
output of zfs list -t snapshot?
This will show your snapshots on the data pool.

Snaps on rpool are bootable system snapshots.
You need them to boot a former system state.

You need snapshots on your data pool.
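From the console, the same check plus manual snapshot creation look like this (pool/filesystem names are assumptions):

```shell
# Show all snapshots; nothing listed under tank means nothing to restore from
zfs list -t snapshot

# Snapshots on the data pool must be created explicitly (or by autosnap jobs)
zfs snapshot tank/data@manual-2015-12-22

# -r snapshots the filesystem and all of its children in one atomic step
zfs snapshot -r tank@backup
```
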
 
rpool/ROOT/napp-it-0.9a5@install 32.4M - 2.74G -
rpool/ROOT/napp-it-0.9a5@2012-05-19-14:43:46 126K - 3.21G -
rpool/ROOT/napp-it-0.9a5@2012-05-19-14:45:25 160K - 3.21G -
rpool/ROOT/napp-it-0.9a5@2012-05-19-14:47:49 29.6M - 3.24G -
rpool/ROOT/napp-it-0.9a5@2013-01-26-12:44:05 378K - 3.15G -
rpool/ROOT/napp-it-0.9a5@2013-01-26-12:44:25 446K - 3.15G -
rpool/ROOT/napp-it-0.9a5@2013-01-26-12:47:46 61.2M - 3.41G -
rpool/ROOT/napp-it-0.9a5@2014-11-27-08:58:23 117M - 5.08G -
rpool/ROOT/napp-it-0.9a5@2014-12-15-13:54:20 53.5M - 5.10G -
rpool/ROOT/napp-it-0.9a5@2015-04-15-09:56:06 69.0M - 5.10G -
rpool/ROOT/napp-it-0.9a5@2015-11-04-10:16:09 53.9M - 5.12G -

The output is above.

Thanks Gee.
 
Understood - as expected.

Thanks Gee. I really do owe you a beer, I think you must have helped me a handful of times over the years.
 
Just wondering: can you set recordsize for each folder, or would I need to create another pool? I'd like to increase efficiency by storing media files with a larger recordsize and small files with the 128K recordsize.

thanks
 
"Folder" is not a ZFS term.

Recordsize is a property of a single ZFS filesystem (some call it a dataset).
You can set it during creation, e.g. when using napp-it in menu ZFS Filesystems > Create, up to 1M.
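The distinction above can be sketched with one filesystem per use case (the filesystem names are assumptions):

```shell
# Large records for sequential media files (up to 1M on recent Solarish)
zfs create -o recordsize=1M tank/media

# Default 128K records for general small-file storage
zfs create -o recordsize=128K tank/files

# recordsize can also be changed later; it only affects newly written blocks
zfs set recordsize=1M tank/media
zfs get recordsize tank/media tank/files
```
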
 
TXG is a transaction group. I've seen it used where a pool gets corrupted: since ZFS is copy-on-write, it can basically roll back to a previous state. It's possible to roll back to a previous transaction group, although that is typically only useful when little has changed since the corruption; in this case, there has been a large amount of change since all the files were encrypted. Doing a little more looking, the system only keeps a small number of these TXGs (127 if my google-fu is correct), which on a mounted filesystem would be used up in minutes.

So probably not helpful here; it's really only useful where a pool gets corrupted somehow and, since it isn't mounted, the TXG count doesn't increment quickly.
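For reference, the rollback described above happens at pool import time; a heavily hedged sketch (the pool name is an assumption, and the -T rewind option is hidden and implementation-dependent):

```shell
# Recovery import: discard the last few transactions to reach an importable state
zpool import -F tank

# Dry run: report what -F would discard, without actually importing
zpool import -Fn tank

# Some ZFS implementations allow rewinding to an explicit TXG (risky; read-only)
zpool import -o readonly=on -T 1234567 tank
```
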
 
Thank you for the additional testing on Windows, _Gea. Btw, is there any way to determine the SMB version that's running in OmniOS or via the napp-it web UI?
 
You can use PowerShell to check the version; this only works on Windows 8/2012 and above.

Connect to the share, then run in PowerShell:
Get-SmbConnection
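On the server side, the Solarish SMB service can be queried with sharectl; a hedged sketch (the max_protocol property is assumed to exist on SMB2-capable releases):

```shell
# On OmniOS: show all SMB server properties
sharectl get smb

# Show only the maximum protocol version the server will negotiate
sharectl get -p max_protocol smb
```
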
 
I have modified some Windows network settings in my 10G performance tests on Windows

Server is OmniOS 151017 bloody, X540 MTU 9000

MacPro/OSX with 10G Sanlink 2, Jumboframes MTU 9000, SMB2 (smb://omnios):
about 600MB/s write and 800 MB/s read

Windows 8.1/10 with Intel X540, Jumboframes MTU 9014 and Interrupt Throttling deactivated:
about 600 MB/s write and 700 MB/s read

updated;
http://napp-it.org/doc/downloads/performance_smb2.pdf
 
Very nice, and over Samba too!

Is SMB2 multithreaded? If I remember right, SMB was single-threaded.
 
You should not mix up SMB and SAMBA.

SMB is the network sharing protocol from Microsoft,
now the default sharing protocol of Apple as well.

SAMBA is an SMB server for Linux/Unix. It is intended to run on every CPU
and Linux/Unix filesystem. This is a huge advantage and a huge disadvantage.
While it offers many Microsoft features up to AD server compatibility, it lacks
some features that are available only on the Solarish SMB server as an alternative
to SAMBA, like

- fully integrated into the Solarish kernel and ZFS as a ZFS property
- multithreaded
- out-of-the-box working ZFS snaps = Windows previous versions
- zero config (mainly on/off), no config file needed
- uses Windows SIDs instead of Unix uid/gid
- full support of Windows-style SMB groups
- support of Windows/NTFS-style ACLs with their fine-granular settings and inheritance
 
Saying Samba is not SMB isn't correct, either. Samba is a software implementing SMB. In a way, it is SMB, but you should use the protocol name and not the name of one implementation when you refer to the protocol.

Your computer isn't doing BIND requests, it does DNS requests. Same thing.
 
SAMBA uses the sharing protocol SMB, that is correct,
but they are different things.

SAMBA is the name of a software package;
SMB is the name of a protocol.

SAMBA=SMB is incorrect, as SMB is offered by different software packages
like Microsoft OSes (of course), Apple OSX, SAMBA and Solarish.
 
The napp-it installer comes with nearly everything that is needed, so Hipster should not be a huge problem. As there is now an ISO for Hipster, I will check whether it runs when I find some time.

For production use, I would use OmniOS.
Thanks Gea, I look forward to your testing!
 
So it looks like the Hipster branch of OI finally fixes the net-snmp issue that causes >2TB ZFS filesystems to be reported as 0 bytes!

Gea, I know you do not support napp-it on OI Hipster, but I was wondering if I could remove/replace the incompatible packages on my Hipster install to convert it to something napp-it supports? Is the reason you can't run napp-it on Hipster that it purges all of the opensolaris packages? If I re-enabled the opensolaris.org publisher, would napp-it work? Or is it more complicated than that?

Thanks!

I have done some tests with OpenIndiana Hipster
(20151003 from http://dlc.openindiana.org/isos/hipster/) and found the following

- gcc not installed
pkg install gcc-48

- Perl module Carp not installed; you must install CGI::Carp from CPAN, or
edit /var/web-gui/data/wwwroot/cgi-bin/admin.pl and comment out
use CGI::Carp at line 4

- Perl module Expect with Tty.pm not working
(e.g. menu User). You must recompile/install Expect

I will add these modules to the current 0.9f6 and the bugfix edition 0.9f7,
so OpenIndiana Hipster should run again - maybe tomorrow.


update:
OI Hipster (October) is supported from napp-it 0.9f6 (Dec 22, 2015)
http://napp-it.org/downloads/changelog.html
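For anyone hitting this before 0.9f6, the workarounds above roughly condense to the following (module names as stated in the post; the CPAN builds assume gcc is already installed):

```shell
# Install the compiler the napp-it installer expects
pkg install gcc-48

# Install the missing Perl modules from CPAN (compiled against gcc-48)
perl -MCPAN -e 'install CGI::Carp'
perl -MCPAN -e 'install Expect'
```
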
 
Will an Intel 750 PCI-E Drive work natively in Solaris 11.3? I'd like to use it as my L2ARC. Assuming 128GB of RAM, and a raw pool size of 80TB, is a 400GB L2ARC ok?
 
The PCI-E 750 works with Solaris 11.3.

Is 400GB enough for L2ARC? It's always enough; the more you have, the more can be cached. Unless you turn dedup on and your RAM is not enough to store the DDT - then you have to size the L2ARC so it can also hold the DDT.
 
Thanks, good to know. :D

Actually I wasn't asking if it was enough; I was asking if it was too much, lol. I've heard that every 100GB of L2ARC steals ~2GB of RAM from the ARC, so I just wanted to ask if a 400GB L2ARC : 128GB RAM ratio was good.
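The rule of thumb quoted above can be sanity-checked with a little arithmetic: each block cached in L2ARC needs a fixed header in RAM, so the cost depends on block size, not just cache size. A sketch (the ~180 bytes per header is an assumption for older ZFS releases; newer code uses less):

```shell
# RAM for L2ARC headers = (L2ARC size / block size) * header bytes
l2arc=$((400 * 1024 * 1024 * 1024))   # 400 GiB L2ARC

# Small 8K blocks (VM/iSCSI workload): roughly the ~2GB-per-100GB rule
echo $(( l2arc / 8192 * 180 / 1024 / 1024 )) MiB      # -> 9000 MiB
# Large 128K records (media workload): far cheaper
echo $(( l2arc / 131072 * 180 / 1024 / 1024 )) MiB    # -> 562 MiB
```

So with mostly large media records, a 400GB L2ARC costs well under 1GB of the 128GB RAM; only an all-small-block workload approaches the quoted ~8GB.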
 