OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Hi All,

If I switch from OI GUI to OmniOS (which does not have a GUI), what are my options to view/copy files within previous snapshots?
What software is available that is similar to Nautilus with integrated time-slider?

Currently I have a cron-based script (no auto snapshots) to create snapshots and sync them to a separate chassis that contains my backup volume (a duplicate of the primary). I use the Nautilus time-slider to retrieve deleted files, or to copy files from my backup volume in case I lose the primary volume.

I am sure the snapshot creation script will work on OmniOS, but I am more concerned about the time-slider-style, snapshot-based retrieval of individual files.

Any suggestions appreciated.

Thank You
 
You can view snapshots from the command line (or with something like WinSCP from Windows): go to the dataset root folder, move into the hidden .zfs folder, and there is a snapshot folder; inside that are all your snaps.
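
For example, from the shell on OmniOS (a minimal sketch; the dataset path "/tank/data" and the snapshot name are just examples):

ls /tank/data/.zfs/snapshot/                        # list all snapshots of the dataset
ls /tank/data/.zfs/snapshot/daily-2013-06-01/       # browse the dataset as it was at snapshot time
cp /tank/data/.zfs/snapshot/daily-2013-06-01/docs/report.txt /tank/data/docs/    # restore a single file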
 

Thanks dave99,

I suppose the nearest native software for that sort of thing on OmniOS would then be mc (Midnight Commander)?

Regards
 
Don't know; I don't use anything directly on my ZFS boxes, they are all managed from my Windows workstation via SSH & SCP.
 
Your easy-to-use options for snapshot access:

locally: midnight commander
remote scp: WinSCP
remote smb: Windows previous version

The last one is best.
 
I made some other SSD tests for ZIL, following your benchmark. Maybe they could be interesting for others:

I have added a screenshot to my benchmark document.
Hope it's ok for you

PS:
Anyone around with an Intel S3700 100 GB (or 200/400 GB) and 1 or 10 GbE Ethernet?
A CrystalDiskMark benchmark to compare the 800 GB vs the smaller versions would be nice
(800 GB for a ZIL, where only a few GB are needed...).

If possible:
- create a 50 GB volume (64k blocksize)
- create a LU/target/target group/view
- connect from Windows
- run CrystalDiskMark with sync=always and with sync=disabled

similar to http://napp-it.org/doc/manuals/benchmarks_5_2013.pdf (last page),
to answer the question: what is the best ZIL besides a ZeusRAM, without being as expensive?
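
A rough shell sketch of those steps on the OmniOS/Solaris side (the pool name "tank", the volume name and the GUID are placeholders; stmfadm create-lu prints the real GUID):

zfs create -b 64k -V 50G tank/benchvol                # 50 GB zvol with 64k blocksize
svcadm enable -r svc:/network/iscsi/target:default    # make sure the iSCSI target service runs
itadm create-target                                   # create an iSCSI target to connect to from Windows
stmfadm create-lu /dev/zvol/rdsk/tank/benchvol        # create the LU; note the GUID it prints
stmfadm add-view 600144f0xxxxxxxxxxxxxxxxxxxxxxxx     # make the LU visible (GUID is a placeholder)
zfs set sync=always tank/benchvol                     # first CrystalDiskMark run
zfs set sync=disabled tank/benchvol                   # second run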
 
No problem.

I will try to get a 100 GB version of the S3700 next week and do these tests again.

From the datasheets, the 400 GB seems identical to the 800 GB regarding performance.
The 200 GB is a little bit slower, while the 100 GB is quite slow in writes.

http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-dc-s3700-series.html

From these values, a 200 GB+ S3700 seems to be a perfect ZIL,
and best for enterprise-class SSD-only pools.

ps
I have added a fast 120 GB SLC SSD with high IOPS and a ZeusRAM to the sync vs async performance test
 
Do you think this setup would work (well)?
An S3700 200 GB as an ESXi install + a couple of startup VMs & vApps (AD, vCenter, OmniOS), and then use the rest of the space (approx. 100 GB) as a ZIL for 2 pools?
 
Your easy-to-use options for snapshot access:

locally: midnight commander
remote scp: WinSCP
remote smb: Windows previous version

The last one is best.

I have a related question. I screwed up something while tinkering with VNC to get a persistent remote session, and now the server won't boot with the napp-it boot environment. It boots with the backup one. So how do I copy the older file (I must first find it) over the newer one?
 
Do you think this setup would work (well)?
An S3700 200 GB as an ESXi install + a couple of startup VMs & vApps (AD, vCenter, OmniOS), and then use the rest of the space (approx. 100 GB) as a ZIL for 2 pools?

An Intel S3700 is a perfect enterprise-class SSD for your ZFS pool and your datastore.
It also seems a perfect ZIL (maybe the 200 GB+ models).

But I would never use a ZIL device for any other task.
You need its complete write performance for the ZIL, or the performance degradation on sync writes is heavy - even if you only use 8 GB of the 200 or 400 GB.
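
If the SSD is dedicated to the job, adding it as a log device is a one-liner (a minimal sketch; the pool name "tank" and the device name are assumptions):

zpool add tank log c2t1d0     # add the SSD as a dedicated ZIL (slog) device
zpool status tank             # the device now shows up under the "logs" section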
 
I have a related question. I screwed up something while tinkering with VNC to get a persistent remote session, and now the server won't boot with the napp-it boot environment. It boots with the backup one. So how do I copy the older file (I must first find it) over the newer one?

The intention of bootable BE snaps is: create them and you can boot from them.
If you need a file from one, look under /.zfs via mc or WinSCP.
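
Alternatively, you can mount the non-booting BE from the working one and copy the known-good file into it (a sketch only; the BE name and file path are examples, check beadm list for the real names):

beadm list                                   # list boot environments; find the broken one's name
beadm mount napp-it-1 /a                     # mount the non-booting BE under /a
cp /etc/X11/xorg.conf /a/etc/X11/xorg.conf   # copy the working file over the broken one (path is an example)
beadm umount napp-it-1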
 
Thanks, however I didn't find .zfs with WinSCP. In the end I realized the server still booted fine, just with an incorrect video setting leading to no video output, so I used PuTTY to fix the problem.

Also, to clear something up: the ALLOC size is counted against the RAW size, right?
 
I would love to see the SuperSSpeed S301 Hyper Gold in these benchmarks.

A SandForce controller with SLC is probably always a very good solution.

In my benchmarks I have included a SF+SLC SATA-2 model from Winkom. I was able to get them as a special edition with a supercap. Performance in 4k QD32 is about 50% of an Intel S3700-800 and about 30% of a ZeusRAM.

If you look at http://www.tweaktown.com/reviews/53...per-gold-slc-enterprise-ssd-review/index.html, they compare it with an S3700-800 on 4k QD32+ write values as well, with top results.
An S301 or similar with Intel SLC chips would be a great choice for a ZIL, but only if someone offers them with a supercap included.
 
I did build another box with an AMD Opteron 3350HE (AM3+) and with 8GB ECC memory.

The CPU features AES-NI but Solaris 11-Express would not recognise it.

However, Solaris 11.1 refuses to install.
The installer would hang forever "transferring" data to disk (using a S-ATA SSD).

OI, SolEx-11, Windoze, Ubuntu Server are all fine.
Is this a known issue?
How can I debug?

TIA,
Hominidae
 

Most likely a bad DVD; burn another one (optionally re-download the ISO).
 

Thanks for your answer.
It's the TXT install CD and it installs just fine on another system (tested in a VM).

I also installed Solaris 11 Express and then applied the pkg update procedure.
The upgrade process ran fine, but upon reboot the system hangs after the first kernel prompt.

Is there a driver issue ... with SATA/AHCI maybe? I noticed the HDD LED was on all the time (during the install hang and during boot after the update).
SolEX and SO11.1 installers report the SSD interface differently (SolEX == "Unknown", SO11.1 == "S-ATA") when selecting the local target disk.
The platform is a desktop ASUS board (with Realtek NIC, but during install it acquires an IP just fine).

The same SSD and SO11.1 install works OK on a SM H7SPA-F-D510 (Intel Atom).

Am I able to install on platform A and then move the disk over to platform B? Will there be drivers missing?
 
With napp-it to go, what is the recommended way to clear up disk space? After writing the USB image for Supermicro X9SCL(M)/1155 V.13a, you are instructed to go from bloody to stable. I had trouble doing this, as there wasn't enough disk space to perform the upgrade when I tried 'pkg update'. I ended up having to delete all the boot environment snaps except the default in order to get the room I needed to move to the stable branch.

Afterwards this leaves me with about 90 MB of free space. What should I get rid of to free some space on my USB drive?
 
Just finished the next storage server in the row, based on Solaris 11.1:
Intel S5000PSL
2x Xeon @ 2 GHz
16 GB ECC RAM
2x IBM M1015 flashed to LSI 9211-IT
16x Seagate ST4000DM000 4TB
2x 500 GB for the system mirror

A very simple test from Win7 to the Solaris box shows 60 MB/s transfer speed, I'm glad :)!

Need to add:
Under Solaris 11.1 you DON'T need Constantine's guide to mirror the rpool HDDs, and the napp-it options don't work either; just use the "zpool attach rpool cxxxx cxxxx" command! The correct disk labeling and the boot blocks are applied automatically.
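
A minimal sketch of that command sequence (the device names are placeholders; take the existing rpool disk from zpool status):

zpool status rpool                       # note the current rpool device, e.g. c0t0d0s0
zpool attach rpool c0t0d0s0 c0t1d0s0     # attach the second disk as a mirror
zpool status rpool                       # watch the resilver progress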
 
With napp-it to go, what is the recommended way to clear up disk space? After writing the USB image for Supermicro X9SCL(M)/1155 V.13a, you are instructed to go from bloody to stable. I had trouble doing this, as there wasn't enough disk space to perform the upgrade when I tried 'pkg update'. I ended up having to delete all the boot environment snaps except the default in order to get the room I needed to move to the stable branch.

Afterwards this leaves me with about 90 MB of free space. What should I get rid of to free some space on my USB drive?

There are not too many options:
- delete all BEs/boot snaps besides the active one
- delete installation files in /root (if any, like smartmontools or napp-it files)
- delete unneeded older napp-it versions in /var/web-gui (/var/web-gui/data is the active one)
- delete logs in /var/adm/messages
- optionally delete /opt/xampp (or move it temporarily to your data pool)
- optionally enable compression on rpool
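
A rough shell sketch of those cleanup steps (the BE and file names are only examples; check what actually exists on your system first):

beadm list                            # show boot environments and their space usage
beadm destroy -F omnios-backup-1      # destroy an unneeded BE (name is an example)
rm /root/smartmontools-*.tar.gz       # remove leftover installation archives, if any
cp /dev/null /var/adm/messages        # truncate the system log
zfs set compression=on rpool          # optionally enable compression on rpool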
 
Thanks, this makes sense. Is it okay to run with only 90 MB? I feel like I might screw myself later on down the line when I upgrade napp-it. I can't figure this out for OmniOS: how can I clear any pkg cache that may exist?
 
I just got the following info from Pat

Just want to let you know I succeeded in getting Gmail TLS authentication working with napp-it under OmniOS.

The piece that failed was the install of the downgraded IO::Socket::SSL version 1.68 via CPAN.

I got it to install and was able to send email via napp-it using Gmail by following the steps on this page: Reviews by WebHostingZone - Nagios Exchange

Specifically:
wget http://search.cpan.org/CPAN/authors/id/S/SU/SULLR/IO-Socket-SSL-1.68.tar.gz
tar xzvf IO-Socket-SSL-1.68.tar.gz
cd IO-Socket-SSL-1.68
perl Makefile.PL
make
make install
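
A quick way to confirm Perl now picks up the downgraded module (a hedged check, not part of the original steps):

perl -MIO::Socket::SSL -e 'print "$IO::Socket::SSL::VERSION\n"'    # should print 1.68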

If anyone would like to try, please report results and a step-by-step how-to.
 
This is maybe more of a network problem, but it involves OmniOS, so here goes.
I have an all-in-one ESXi box with a few vApps and Win2012 VMs. One of them is also a domain controller to which all of the other VMs are connected, including OmniOS (with napp-it). At first everything worked great (it still does, except..) until I noticed that I can't list computers on the network anymore, as if network discovery stopped working. Before, all of the computers/VMs (including ones that weren't part of the domain) could see everything. Now only the Win7 desktops are visible. I've tried this but it only enables the discovery of that particular Windows computer/VM. Access to them via \\computer_name works. Even a fully disabled firewall does nothing. I'm out of ideas. :confused:
 
Can you ping the other devices by hostname? Are you running DNS/WINS on your domain controller?
 
It has DHCP/DNS (but not WINS) running, yes, and I can ping everything from anywhere (by hostname or IP).
 
I just got the following info from Pat



If anyone would like to try, please report results and a step-by-step how-to.

Thanks a million to _Gea and Pat - worked like a charm on my Sol 11.1 box. Did exactly what you quoted, nothing more.
Appreciated!
Cheers,
Cap'
 
Do any of the Solaris-based ZFS operating systems support SMB2?

This is more a question of the SMB server in use.

I suppose the answer is no for the current Solaris CIFS server.
If you use Samba on Solaris, the answer can be yes.
 
Just upgraded from OI to OmniOS and updated to the latest bleeding packages but napp-it isn't happy:

Can't load '/var/web-gui/data/napp-it/CGI/auto/IO/Tty/Tty.so' for module IO::Tty: ld.so.1: perl: fatal: /var/web-gui/data/napp-it/CGI/auto/IO/Tty/Tty.so: wrong ELF class: ELFCLASS32 at /usr/perl5/5.16.1/lib/i86pc-solaris-thread-multi-64/DynaLoader.pm line 190.
at /var/web-gui/data/napp-it/CGI/IO/Tty.pm line 30.
Compilation failed in require at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
Compilation failed in require at /var/web-gui/data/napp-it/CGI/Expect.pm line 22.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/Expect.pm line 22.
Compilation failed in require at /var/web-gui/data/napp-it/zfsos/_lib/zfslib.pl line 1874.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/zfsos/_lib/zfslib.pl line 1874.
For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.

I tried both the bleeding and stable OmniOS CGIs but neither works. Just wanted to share with _Gea. I'll see if I can fix it by pointing to the Perl that comes with omni-perl.
 

I have not tried the newest bloody.
With the last stable/bloody I needed some different Perl modules.

You can try if they work:
copy /var/web-gui/data/tools/omni_bloody/CGI/ to /var/web-gui/data/napp-it/CGI/

or
copy /var/web-gui/data/tools/omni_stable/CGI/ to /var/web-gui/data/napp-it/CGI/

Maybe you have the wrong version and can try the other one.
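
A sketch of that copy from the shell, plus a check of the module's ELF class (paths are taken from the lines above):

cp -r /var/web-gui/data/tools/omni_bloody/CGI/* /var/web-gui/data/napp-it/CGI/
# or, for the stable repository:
# cp -r /var/web-gui/data/tools/omni_stable/CGI/* /var/web-gui/data/napp-it/CGI/
file /var/web-gui/data/napp-it/CGI/auto/IO/Tty/Tty.so    # should report the same ELF class (32/64-bit) as your Perl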
 
I tried both based on a previous suggestion you had made. It looked like both Tty.so's are the same class anyway.

I'll double check tonight when I get home.
 
Single drive or one drive of a mirrored set?

If mirrored, use detach, not remove. If it's a single device, remove should work.
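
A minimal sketch of both cases (the pool name "tank" and the device name are assumptions; take the real names from zpool status):

zpool status tank            # the log device is listed under the "logs" section
zpool remove tank c2t1d0     # single (non-mirrored) log device
zpool detach tank c2t1d0     # one side of a mirrored log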
 
Thanks for the reply. It's only a single drive.

I have this problem on two storage servers with OpenIndiana and different hardware, but both with an LSI host bus adapter.
On a third server with Nexenta (also with LSI) there was no problem removing the ZIL.

The only thing that I can do now with this device is zpool offline, but after that I still cannot remove the device from the pool, and the pool is in a degraded state.
 
I have never had or heard of such problems with current OI.
Do you have plain OI or napp-it? (napp-it optionally uses disk buffering for better performance with many disks;
delete the buffer with menu Disks - Delete disk buffer, needed if you did not remove the ZIL with menu Disks - Remove.)

What you can also try: reboot, or check zpool status at the CLI.
 