OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

So I've managed to log in using an older version of napp-it that was listed in the boot menu, but when I try to run the wget I now get:

"solaris express 11 is supported upto napp-it 0.8 - use napp-it 0.8!! or update to Solaris 11.1"


Since I can get into the GUI using an older version of napp-it, can I export the array, reload the machine with Solaris 11.1 and then import it with no issues?
 

Solaris 11.1 changed essential settings and behaviours.
If you use Solaris 11.0, you must use napp-it 0.8
You can install via the nappit08 installer: wget -O - www.napp-it.org/nappit08 | perl

If you use Solaris 11.1, you must use napp-it 0.9,
or basic functionality is broken.

Updating is not a problem, not with Solaris, not with pools, not with napp-it.


PS
I do not understand Oracle.
They have the chance to lead ZFS development,
but currently they only disturb...
 
Thanks _Gea - helpful as always!!

I think I'm going to get Solaris 11.1 instead of using Solaris Express if there will be no issues.
 
If you use Solaris 11.0, you must use napp-it 0.8
You can install via the nappit08 installer: wget -O - www.napp-it.org/nappit08 | perl

Because I wanted to stay on SOL11.0 (power management still working), that's exactly what I did.

@_Gea... with v0.8l3 I am unable to create an encrypted ZFS folder via GUI/napp-it.
The option remains unavailable, although the pools are created with ZFS v31.
(...it did work for sure in the "old days" with 0.6x). When I create the ZFS via CLI, unlocking via GUI/napp-it is working fine, however.
 
Hi. When I try to install VMware Tools it stops and says it's not working for my version of Linux. How do I get around that?

Edit: I thought I had all the info in my signature. I'm using OpenIndiana, not sure about the version :/ it says development release
 
If anyone has issues with ashift=12 on the WD 4TB Black, use:

sd-config-list = "ATA WDC WD4001FAEX-0", "physical-block-size:4096";
then reboot, then re-initialize the disk, then create the pool :)
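
For anyone trying this: that line goes into /kernel/drv/sd.conf (the quoted vendor/product string has to match the disk's inquiry data). One way to verify the result afterwards, with "tank" as a placeholder pool name:

zdb -C tank | grep ashift    # should report ashift: 12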
 
Parted hangs when you insert a disk without a valid partition table.
Try the napp-it menu Disk - Initialize to prepare the disks (rollover menu).

Hmmm, this would explain what I saw as well when replacing HDDs for my pool! At first I thought your reply was only for x-cimo, but now I can see it being relevant to me as well.

I will keep this in mind for next time, thanks!
 

It's funny that we got the same issue at the same time :)
 
@Gea:

Is it possible with napp-it or with OmniOS on-board tools to announce my services on the network via Avahi?
I am not speaking about Apple services; I want to announce NFS services.
I am not familiar with OmniOS and would be happy to know how to do this...

Thanks
 

I have not tried.
 
ZFS Raid + SnapRaid in a box

I am thinking about including SnapRaid by default with napp-it

Reason:
- ZFS is superior due to realtime checksums and unlimited snapshots
- ZFS software Raid is superior in data security and performance, and is realtime

Limitations:
For a pure media server, where data mostly stays the same, where the data is not too valuable and the performance of a single disk is enough, there are some limitations with any striped Raid (ZFS or otherwise).

- Striped Raid means all disks are active during reads or writes; none can sleep
- You can only expand a pool with further Raid sets like mirrors or Raid-Z if you want redundancy
- You cannot use different-sized disks, or the smallest determines the Raid size

This is where Snapraid can fill the gap and combine the best of both.
- Use ZFS Raid where you need the performance and the realtime Raid.
- Use Snapraid on ZFS for media data with different-sized disks where unused disks can sleep. You can expand with a single disk of any size.

How it can work:
- Use as many data disks as you like, size does not matter
- Build a ZFS pool from each disk (1 disk=1vdev=1pool)
- Use one or two disks (they must have the same size as the biggest data disk) for redundancy

Use your data pools as usual, create ZFS folders and share them. If one pool fails (since there is no ZFS redundancy), the data on this disk/pool is lost. This is where Snapraid comes in.

With Snapraid, you can store Raid-like redundancy information on one or two extra disks (similar to Raid-5/6), not in realtime but on demand. The consequence is that you can only restore the state of the last Snapraid sync run.

Snapraid is quite easy to use; it's only a small app.
You can install it as follows (similar to http://zackreed.me/articles/72-snapraid-on-ubuntu-12-04):

cd $HOME
wget http://sourceforge.net/projects/snapraid/files/snapraid-2.1.tar.gz
tar xzvf snapraid-2.1.tar.gz
cd snapraid-2.1
./configure
make
make install

The app then ends up in /usr/local/bin. I may include this in napp-it.
You need to create a conf file with settings in /etc.

To have it running nearly maintenance-free, I am thinking about using pool names like snapraid_p1, snapraid_p2, snapraid_d1..snapraid_dn, so the setup works without extra configuration together with a napp-it control menu and timer-based autojobs to sync.
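
A hedged sketch of what such an /etc/snapraid.conf could look like (pool names, mount points and the number of data disks are assumptions for illustration, not napp-it defaults):

# parity file lives on the dedicated parity pool
parity /snapraid_p1/snapraid.parity
# keep at least one copy of the content file, ideally on more than one data pool
content /snapraid_d1/snapraid.content
content /snapraid_d2/snapraid.content
# one entry per data pool
disk d1 /snapraid_d1
disk d2 /snapraid_d2

A timer-based autojob would then simply run "snapraid sync" (and occasionally "snapraid check").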

What I would like to know now:
- Are there any known problems with OmniOS or OpenIndiana?
 
Are there issues with passing through more than one controller to an OpenIndiana VM? I've tried passing through an M1015, a Sil3124 and an AMD SB750, and no matter which combination I try, OI always hangs at boot and goes into maintenance mode.

Had the same issue when my ESXi was installed on a P55 mobo.
 
Just finished a fresh installation of OmniOS and installed napp-it.

When I click in the web interface, it shows "processing" and nothing happens anymore.

On the host I get something like this:

Feb 3 16:29:12 omni-san sudo[6921]: [ID 574227 auth.alert] Solaris_audit getaddrinfo(omni-san) failed[node name or service name not known]: Error 0
Feb 3 16:29:12 omni-san sudo[6921]: [ID 574227 auth.alert] Solaris_audit mapping omni-san to 10.10.10.12: Error 0
Feb 3 16:29:12 omni-san sudo[6924]: [ID 574227 auth.alert] Solaris_audit getaddrinfo(omni-san) failed[node name or service name not known]: Error 0
Feb 3 16:29:12 omni-san sudo[6924]: [ID 574227 auth.alert] Solaris_audit mapping omni-san to 10.10.10.12: Error 0
Feb 3 16:29:12 omni-san sudo[6926]: [ID 574227 auth.alert] Solaris_audit getaddrinfo(omni-san) failed[node name or service name not known]: Error 0
Feb 3 16:29:12 omni-san sudo[6926]: [ID 574227 auth.alert] Solaris_audit mapping omni-san to 10.10.10.12: Error 0
Feb 3 16:29:12 omni-san sudo[6930]: [ID 574227 auth.alert] Solaris_audit getaddrinfo(omni-san) failed[node name or service name not known]: Error 0
Feb 3 16:29:12 omni-san sudo[6930]: [ID 574227 auth.alert] Solaris_audit mapping omni-san to 10.10.10.12: Error 0
Feb 3 16:29:12 omni-san sudo[6932]: [ID 574227 auth.alert] Solaris_audit getaddrinfo(omni-san) failed[node name or service name not known]: Error 0
Feb 3 16:29:12 omni-san sudo[6932]: [ID 574227 auth.alert] Solaris_audit mapping omni-san to 10.10.10.12: Error 0
Feb 3 16:29:12 omni-san sudo[6940]: [ID 574227 auth.alert] Solaris_audit getaddrinfo(omni-san) failed[node name or service name not known]: Error 0
Feb 3 16:29:12 omni-san sudo[6940]: [ID 574227 auth.alert] Solaris_audit mapping omni-san to 10.10.10.12: Error 0
Feb 3 16:29:12 omni-san sudo[6944]: [ID 574227 auth.alert] Solaris_audit getaddrinfo(omni-san) failed[node name or service name not known]: Error 0
Feb 3 16:29:12 omni-san sudo[6944]: [ID 574227 auth.alert] Solaris_audit mapping omni-san to 10.10.10.12: Error 0
Feb 3 16:29:12 omni-san sudo[6945]: [ID 574227 auth.alert] Solaris_audit getaddrinfo(omni-san) failed[node name or service name not known]: Error 0
Feb 3 16:29:12 omni-san sudo[6945]: [ID 574227 auth.alert] Solaris_audit mapping omni-san to 10.10.10.12: Error 0
Feb 3 16:29:12 omni-san sudo[6947]: [ID 574227 auth.alert] Solaris_audit getaddrinfo(omni-san) failed[node name or service name not known]: Error 0
Feb 3 16:29:12 omni-san sudo[6947]: [ID 574227 auth.alert] Solaris_audit mapping omni-san to 10.10.10.12: Error 0
Feb 3 16:29:12 omni-san sudo[6949]: [ID 574227 auth.alert] Solaris_audit mapping omni-san to 10.10.10.12: Error 0
Feb 3 16:29:12 omni-san sudo[6949]: [ID 574227 auth.alert] Solaris_audit getaddrinfo(omni-san) failed[node name or service name not known]: Error 0
Feb 3 16:29:14 omni-san sudo[6970]: [ID 574227 auth.alert] Solaris_audit getaddrinfo(omni-san) failed[node name or service name not known]: Error 0
Feb 3 16:29:14 omni-san sudo[6970]: [ID 574227 auth.alert] Solaris_audit mapping omni-san to 10.10.10.12: Error 0
Feb 3 16:29:14 omni-san sudo[6975]: [ID 574227 auth.alert] Solaris_audit getaddrinfo(omni-san) failed[node name or service name not known]: Error 0
Feb 3 16:29:14 omni-san sudo[6975]: [ID 574227 auth.alert] Solaris_audit mapping omni-san to 10.10.10.12: Error 0

Any hints for fixing this?
 
OK, I filled in the host name and corresponding IP in /etc/hosts and the messages are gone.
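
In case it helps anyone else, the entry I mean looks roughly like this (host name and IP taken from the log above):

10.10.10.12    omni-san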

Anyway, the web interface is still not responding...
 
This is where Snapraid can fill the gap and combine the best of both.
- Use ZFS Raid where you need the performance and the realtime Raid.
- Use Snapraid on ZFS for media data with different-sized disks where unused disks can sleep. You can expand with a single disk of any size.

How it can work:
- Use as many data disks as you like, size does not matter
- Build a ZFS pool from each disk (1 disk=1vdev=1pool)
- Use one or two disks (they must have the same size as the biggest data disk) for redundancy

Do you really mean one pool per disk, or just one vdev per disk added to the storage pool?
Sounds like quite an idea... Does ZFS stripe data across a pool's vdevs, or are files written to exactly one vdev?
 
OK, I filled in the host name and corresponding IP in /etc/hosts and the messages are gone.

Anyway, the web interface is still not responding...

Activate Edit (top menu) and click on Log (top menu) to display the last actions.
Mostly it's a parted problem with new disks without a valid partition table.

If this is the case, use the rollover menu to access Disk - Initialize.
 
Do you really mean one pool per disk, or just one vdev per disk added to the storage pool?
Sounds like quite an idea... Does ZFS stripe data across a pool's vdevs, or are files written to exactly one vdev?

Each pool consists of at least one vdev, and each vdev of at least one disk, so:
create pools, each built from one vdev that is built from one basic disk.
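
As a command, that would be roughly the following (the disk id c2t1d0 is just a placeholder):

zpool create snapraid_d1 c2t1d0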
 
- Use Snapraid on ZFS for media data with different-sized disks where unused disks can sleep.
So disk sleep/standby works in OI/OmniOS?
- Build a ZFS pool from each disk (1 disk=1vdev=1pool)
The main reason for using ZFS (besides file integrity, speed, etc.) is having one or two pools for all HDDs. With what you're suggesting I would have 24 pools (24 HDDs)? Or is there another drive pooling method used on top of that, so that you get one large drive over the network share instead of single ones?

Are there issues with passing through more than one controller to an OpenIndiana VM? I've tried passing through an M1015, a Sil3124 and an AMD SB750, and no matter which combination I try, OI always hangs at boot and goes into maintenance mode.

Had the same issue when my ESXi was installed on a P55 mobo.
What version of ESXi? 5.1 has a bug, so you have to patch to the latest. I have the onboard (main chipset) SATA controller and 3x IBM M1015 passed through to OmniOS and it works fine. There is a second onboard SATA controller (SCU) which cannot be passed through (the indicator keeps wanting a reboot). Don't know about your board, though.
 
So disk sleep/standby works in OI/OmniOS?

The main reason for using ZFS (besides file integrity, speed, etc.) is having one or two pools for all HDDs. With what you're suggesting I would have 24 pools (24 HDDs)? Or is there another drive pooling method used on top of that, so that you get one large drive over the network share instead of single ones?

I have not tried it with Omni (servers never sleep..), but you can set disk sleep in OI via power.conf (it should work in Omni as well).
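
A hedged sketch of the relevant /etc/power.conf entries (the device path and the 30-minute threshold are placeholders, not values from this thread):

autopm                  enable
device-thresholds       /dev/dsk/c2t1d0s0       30m

Run pmconfig afterwards to activate the changed settings.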

Using independent single-disk pools with Snapraid as a backup solution is not an option for most ZFS setups either. Personally, I would not want to miss the high speed and realtime features of ZFS pools. But there is demand from some home users for a media server solution in a write-once/read-many environment where unused disks can sleep and you can add single disks of different sizes (see the ZFS vs Snapraid discussions around). This should also be possible with ZFS as an extra option for this special use case (Snapraid supports Solaris).

I would like to offer this as an option in napp-it. As this is not my personal use case, I asked for experiences.
 
The main reason for using ZFS (besides file integrity, speed, etc.) is having one or two pools for all HDDs. With what you're suggesting I would have 24 pools (24 HDDs)?

Perhaps you missed the SnapRAID bit. Might wanna look that up first.
 
ZFS Raid + SnapRaid in a box

I am thinking about including SnapRaid by default with napp-it

[...]

How it can work:
- Use as many data disks as you like, size does not matter
- Build a ZFS pool from each disk (1 disk=1vdev=1pool)
- Use one or two disks (they must have the same size as the biggest data disk) for redundancy

Use your data pools as usual, create ZFS folders and share them. If one pool fails (since there is no ZFS redundancy), the data on this disk/pool is lost. This is where Snapraid comes in.

[...]

What I would like to know now:
- Are there any known problems with OmniOS or OpenIndiana?

This is a concept which looks tempting for home use, for sure.
Sorry, but I cannot offer any experience with snapraid ATM.

...some questions though:

How does the Solaris CIFS server fit into the equation with 1 pool = 1 vdev = 1 disk?
Assuming that I want to provide a single share to hold my media, which is larger than the largest physical disk...

...can I nest pools or construct ZFS folders across pools? ...do I need SAMBA to do that instead of native CIFS?
...is there another union-FS approach with Solaris, comparable to the AUFS mentioned in the other article you linked?
 

- From the ZFS view, you have regular pools (it does not matter how many vdevs/disks they are built from)
where you can create filesystems and share them

- SnapRaid is not a pooling solution; it's a Raid-5/6-like backup solution for independent disks/pools

- The CIFS server cannot nest shares; SAMBA, FTP or netatalk can
 
That, yes: normally you let ZFS do the RAID part and have one giant pool for many disks.

In the case of SnapRAID, ZFS is degraded to a mere filesystem provider without all the redundancy. That part is played by SnapRAID, so you benefit from advantages like sleeping disks and minimized data loss in case more than the allowed number of disks fails.

ZFS still checksums the data so you _know_ when something is damaged. It's actually pretty intriguing for disks filled with movies for example.
 
I know how snapraid works. I was just wondering if there is a way to still get one huge drive/folder/share/whatever_you_wanna_call_it like you get when using Drive Bender on Windows, for example, and not just individual drives. Other than that, I do think this is a great idea, and kudos to Gea for the innovative thinking :D
 
Over the weekend it seems the web server stopped, so how do I restart the web services in OI? I can SSH into the VM and it's serving up NFS just fine, but I can't hit the web GUI. Since this is now in production, I can't just reboot the box.
 

If you are asking about the napp-it webserver:
/etc/init.d/napp-it restart
 

Thanks _Gea. I just found it by poking through the Perl code of the installer. Now I just need to figure out why it stopped on me, but at least it's an easy fix for the problem.
 
Why do you need one pool per vdev/disk?
What prevents you from creating *one* pool consisting of many single-disk vdevs?
 

Individual disk spindown was a core idea of the concept. Plus one pool of non-RAIDed vdevs is essentially RAID0, not what you want.
 
I have 3 disks and can create 3 individual vdevs, where 2 are data disks and the third disk contains the snapshot parity. Then I create a pool containing the 2 data disks.
Why does this not work?
 

You have built a Raid-0 pool.
If any one disk fails, this pool is unavailable and all data is lost!

PS
SnapRaid is preinstalled in the newest napp-it 0.9a6
http://napp-it.org/downloads/changelog.html
 
I am sorry, but I really don't get it.
ZFS has no parity data on the pool that contains the 2 vdevs/disks, so data *would* be lost.

However, SnapRaid has the required parity data on the third disk,
which is *not* part of the ZFS pool. So why is everything lost??
 

ZFS has no parity across vdevs; the vdevs themselves must provide parity. ZFS does Raid-0 over vdevs, so if any vdev is lost, the whole pool is lost (in your case, all data vdevs).
You can of course stripe the disks into one pool and build a Snapraid from this larger pool, but one parity disk for one pool, whatever it is, is quite useless.

This is just a mirror-like backup. For such a solution, you do not need the extra effort of Snapraid.
 
I am running napp-it v. 0.9a5 nightly (Jan. 22, 2013) on OI151a7, and my encrypted pools will not connect after a shutdown or reboot when using the Encrypted Pools extension. (This also happens under 0.9a3-1 on OI151a5.)

If I disconnect the encrypted pool using the extension before rebooting or shutting down, I am able to reconnect it afterwards. However, if I reboot or shut down without disconnecting the encrypted pool through the extension, I get the following error when trying to reconnect the pool:

Pool poef_DOCUMENTMANAGEMENT already exists, cannot import!!

If I then go to the command line via PuTTY, I am able to use lofiadm -c aes-256-cbc -a /DATA/ENCRYPTED-POOLS/poef_DOCUMENTMANAGEMENT/001 to manually build the lofi device. I can then issue a zpool import -d /dev/lofi, which shows my pool DOCUMENTMANAGEMENT. Finally, I can use zpool import -d /dev/lofi/ DOCUMENTMANAGEMENT, which imports the pool correctly. This behavior appears to be repeatable.
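
For clarity, the same manual recovery sequence as separate commands (paths exactly as above):

lofiadm -c aes-256-cbc -a /DATA/ENCRYPTED-POOLS/poef_DOCUMENTMANAGEMENT/001
zpool import -d /dev/lofi                        # lists the pool DOCUMENTMANAGEMENT
zpool import -d /dev/lofi/ DOCUMENTMANAGEMENT    # imports the pool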

Any ideas as to why the Encrypted Pools extension requires me to disconnect my encrypted pools before a shutdown/reboot? Is this by design? If yes, could I create a shutdown script to automatically disconnect my encrypted pools on shutdown or reboot?
 

I suppose that can be fixed. I will check tomorrow.
 
I know how snapraid works. I was just wondering if there is a way to still get one huge drive/folder/share/whatever_you_wanna_call_it like you get when using Drive Bender on Windows, for example, and not just individual drives.


A quick and crude way to do it would be to create a directory containing symbolic links to all your zfs mount points, and then share out that directory.

Not sure if ZFS' CIFS server would like it, but Samba should work (you'd have to edit smb.conf to allow symlinks).
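
A rough sketch of that idea (pool names, link targets and the share name are made up for illustration):

mkdir /tank_media/all
ln -s /snapraid_d1/movies /tank_media/all/movies
ln -s /snapraid_d2/series /tank_media/all/series

And the matching smb.conf additions so Samba will follow links that point outside the share:

[media]
   path = /tank_media/all
   follow symlinks = yes
   wide links = yes

Note that "wide links" only takes effect if "unix extensions = no" is set in the [global] section.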
 