OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

I think I had set it up originally using Unix permissions, because I've never been able to get ACLs to work no matter how I tried.
 
I would like to test a few things on Solaris 11.1 (currently running the latest OI).

Do you guys see any issue with passing my controller through to Solaris for a while and creating a new pool with unused disks (all of this without importing the existing pool)?

I just don't want Solaris 11 to mess up my existing pool.
 
I would like to test a few things on Solaris 11.1 (currently running the latest OI).

Do you guys see any issue with passing my controller through to Solaris for a while and creating a new pool with unused disks (all of this without importing the existing pool)?

I just don't want Solaris 11 to mess up my existing pool.

No problem at all.
You can even import your pool and go back to OI/OmniOS, as long as you do not upgrade the pool beyond v28.
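For reference, a minimal sketch of checking the version and doing a non-destructive test import (the pool name "tank" is only an example):

Code:
# check the on-disk pool version (v28 is the last version shared between Oracle Solaris and OI/OmniOS)
zpool get version tank

# import on Solaris 11.1 for the tests - just never run "zpool upgrade" on it
zpool import tank

# export cleanly before moving the controller back to OI/OmniOS
zpool export tank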
 
I tried to install OmniOS_Text_r151004.iso as a VM but keep getting errors. When I reboot the VM I get BAD PBR SIG, which I can't get past. If it's relevant, I can't even select the Slovenian (38) keyboard layout, because it simply isn't available. And when I select option 1 to install the OS, I get a message that the log file can't be created because the disk is read-only.
vSphere 5.1, patched to the latest level, is installed on a 40 GB Intel SSD connected to an SCU port on a Supermicro X9SRL-F. The onboard (chipset) SATA and three IBM M1015 controllers are passed through. The same setup worked fine with OI + napp-it. Any thoughts?
 
I think I had set it up originally using Unix permissions, because I've never been able to get ACLs to work no matter how I tried.

ACLs are ultra flexible and powerful, and if you restrict yourself to
- owner@, group@ and everyone@ (the counterparts to Unix permissions)
they are as easy to use as Unix permissions.

The real power of ACLs is the fine-grained settings for an unlimited number of users and groups, with inheritance that can apply to the folder only and/or to subfolders, plus the option to restrict inheritance.

Once you understand the options, you cannot imagine working without them in a non-home environment.
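As a rough illustration of that fine-grained side, assuming the Solaris/illumos /usr/bin/chmod and ls (the share path and user name are only examples):

Code:
# show the current ACL of a shared folder
/usr/bin/ls -V /tank/share

# grant one extra user read access, inherited by new files and subfolders
/usr/bin/chmod A+user:alice:read_data/read_attributes/execute:file_inherit/dir_inherit:allow /tank/share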
 
No problem at all.
You can even import your pool and go back to OI/OmniOS, as long as you do not upgrade the pool beyond v28.

Would it break any of the CIFS/SMB file permissions?

I don't mind if they don't work under Solaris; I just would like them not to be broken when I get back to OI (after my test).

Thanks Gea
 
I would like to test a few things on Solaris 11.1 (currently running the latest OI).

Do you guys see any issue with passing my controller through to Solaris for a while and creating a new pool with unused disks (all of this without importing the existing pool)?

I just don't want Solaris 11 to mess up my existing pool.

Are you going to use iSCSI? If so, don't bother with 11.1; the stock COMSTAR is broken for iSCSI. There are many other bugs as well.

Unless you have a support contract with Oracle, don't use it.
 
Hmm, thanks :(

I'm not using iSCSI or L2ARC (also buggy).

I was going to try 11.1 hoping for better performance under VMware.

ESXi 5.1 officially supports Solaris 11.1, and I'm sometimes seeing poor network performance.

Especially on large SMB share writes (10 GB+), the write speed often drops from 100 MB/s+ to 20 MB/s and stays there...

I was also curious about ZFS v33, which is supposed to improve the SMB share.
 
While I'm at it, I might give OmniOS a try.

Is bloody pretty stable? Any reason not to use it? What's newer in it? I haven't seen release notes for it.

Thanks!

Edit: found it

omnios.omniti.com/wiki.php/StableVsBloody
 
I tried to install OmniOS_Text_r151004.iso as a VM but keep getting errors. When I reboot the VM I get BAD PBR SIG, which I can't get past. If it's relevant, I can't even select the Slovenian (38) keyboard layout, because it simply isn't available. And when I select option 1 to install the OS, I get a message that the log file can't be created because the disk is read-only.
vSphere 5.1, patched to the latest level, is installed on a 40 GB Intel SSD connected to an SCU port on a Supermicro X9SRL-F. The onboard (chipset) SATA and three IBM M1015 controllers are passed through. The same setup worked fine with OI + napp-it. Any thoughts?

For anyone who hits this issue: Dami PM'd me and we came to the conclusion that a 5 GB virtual disk was too small. OmniOS worked perfectly once Dami changed it to 10 GB.
 
I went to check the health of my disks today and this pops up:

[screenshot attachment]


I haven't seen a performance hit (I can hit 109 MB/s when transferring from the server to a 4 TB internal drive), so I don't think the drive died.

Any ideas? I searched for the error code on Google and found nothing. :confused:
 
Guys,
I just had to replace a faulty disk in my rpool mirror. I can see the new disk when I hit "replace" in napp-it, but when I click to do it, I get:
Could not proceed due to an error. Please try again later or ask your sysadmin.
Maybee a reboot after power-off may help.

cannot label 'c13t500000E017FA6D42d0': EFI labeled devices are not supported on root pools.

Also, the new disk is not shown as "AVAIL", and I don't know how to get it there... Any help for a noob, please? :) Thanks!
 
I am having a big problem with napp-it. I installed the latest version using the wget-piped-to-perl upgrade, and now when I try to list the pools in the menu I get a "Processing, please wait..." page and it just sits there... forever.

The version reported is "v. 0.9a5 nightly Jan.22.2013". I don't know why it chose the nightly version?!

This is with OI 151a1.
 
Would you guys put 4 TB drives in a mirror? Some people suggest only going with double parity with such huge drives.

I wonder how arrays will evolve when we have 10 TB+ drives. I don't expect the read/write speed to increase much, so those would likely require extreme care...
 
I am having a big problem with napp-it. I installed the latest version using the wget-piped-to-perl upgrade, and now when I try to list the pools in the menu I get a "Processing, please wait..." page and it just sits there... forever.

The version reported is "v. 0.9a5 nightly Jan.22.2013". I don't know why it chose the nightly version?!

This is with OI 151a1.

Hmm, the problem went away - I guess it just takes a long time on the first run; after that it takes only 10-15 seconds for a response instead of a minute or two!
 
Guys,
I just had to replace a faulty disk in my rpool mirror. I can see the new disk when I hit "replace" in napp-it, but when I click to do it, I get:


Also, the new disk is not shown as "AVAIL", and I don't know how to get it there... Any help for a noob, please? :) Thanks!

You cannot use EFI-labelled disks for rpool.
You must first reformat the disk to something else, like FAT, or try the napp-it menu Disks > Initialize before replacing.
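If you prefer the command line over the napp-it menu, a sketch of the usual manual route on OI (the healthy disk name is a placeholder; the new disk is the one from the error message):

Code:
# re-label the new disk with an SMI (VTOC) label:
# format -e, select the disk, run "fdisk" to create a Solaris partition, then "label" and pick "0. SMI label"
format -e

# copy the slice table from the healthy rpool mirror disk to the new disk
prtvtoc /dev/rdsk/cXtGOODd0s2 | fmthard -s - /dev/rdsk/c13t500000E017FA6D42d0s2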
 
I am having a big problem with napp-it. I installed the latest version using the wget-piped-to-perl upgrade, and now when I try to list the pools in the menu I get a "Processing, please wait..." page and it just sits there... forever.

The version reported is "v. 0.9a5 nightly Jan.22.2013". I don't know why it chose the nightly version?!

This is with OI 151a1.

Napp-it 0.9a5 is the default release. When bugs are reported, they are now fixed in the same release with a newer build date.

The newest features and Pro updates are in 0.9a6, also with updates under the same release number
but with newer build dates.

About your problem:
napp-it always reads all disks, partitions, pools and filesystems.
With a lot of them, or on slow machines, this can take a while.

To improve responsiveness, napp-it caches some information so the next access is faster.

About OI 151a1:
there are some bugs with the package installer in OI 151 < a3.
I would suggest updating to OI 151a7 and rerunning the napp-it wget installer.
 
I went to check the health of my disks today and this pops up:

I haven't seen a performance hit (I can hit 109 MB/s when transferring from the server to a 4 TB internal drive), so I don't think the drive died.

[screenshot attachment]


Any ideas? I searched for the error code on Google and found nothing. :confused:

The first disk with the errors does not even report SMART info.
If you use napp-it 0.9 with a Pro/eval key, you can activate "edit" in the top menu,
select menu Disks > Smartinfo and then click on "Log" in the top menu.

There you see the return values of all SMART checks.
Maybe you get some additional info there.

The next step I would take is replacing the disk (by the way: such a large pool as a single RAID-Z1 is only good for less important data,
due to resilver times of a day or more) and checking the disk with the manufacturer's tool.
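If you want to query SMART outside of napp-it, a quick sketch with smartmontools (assuming it is installed; the device name and the "-d sat,12" passthrough option are examples that may need adjusting for your controller):

Code:
# read all SMART data from a SATA disk behind an LSI HBA
smartctl -a -d sat,12 /dev/rdsk/c3t1d0

# start a long self-test and read the result afterwards
smartctl -t long -d sat,12 /dev/rdsk/c3t1d0
smartctl -l selftest -d sat,12 /dev/rdsk/c3t1d0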
 
Would you guys put 4 TB drives in a mirror? Some people suggest only going with double parity with such huge drives.

I wonder how arrays will evolve when we have 10 TB+ drives. I don't expect the read/write speed to increase much, so those would likely require extreme care...

You have three options:
- do not care at all (like most people with local TB disks)
- do a mirror, with the danger of a complete loss on errors during a rebuild

If your data is really important:
- create secure pools from RAID-Z2/Z3 or 3-way mirror vdevs, plus two additional systems,
at least one at a different physical location, and sync them as often as needed.
Keep enough snapshots to survive an "oh, someone or something copied wrong/damaged files over good ones last week/month/year".
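A minimal sketch of such a sync with snapshots and zfs send/receive (pool, filesystem and host names are only examples):

Code:
# snapshot the data on the primary system
zfs snapshot tank/data@2013-02-01

# initial full copy to the backup system
zfs send tank/data@2013-02-01 | ssh backuphost zfs receive -F backup/data

# a week later: take a new snapshot and send only the changes
zfs snapshot tank/data@2013-02-08
zfs send -i tank/data@2013-02-01 tank/data@2013-02-08 | ssh backuphost zfs receive backup/data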
 
You have three options:
- do not care at all (like most people with local TB disks)
- do a mirror, with the danger of a complete loss on errors during a rebuild

If your data is really important:
- create secure pools from RAID-Z2/Z3 or 3-way mirror vdevs, plus two additional systems,
at least one at a different physical location, and sync them as often as needed.
Keep enough snapshots to survive an "oh, someone or something copied wrong/damaged files over good ones last week/month/year".

Thanks, I will likely move to RAID-Z2 when I can afford more drives:
(4x 4 TB RAID-Z2) + (6x 2 TB RAID-Z2)

The remote location is nice, but my important data is only a small part of that huge array, so I will likely back it up to a USB drive regularly and store it elsewhere.
 
I just tried updating to 0.9 and got the following error, and now I cannot log in:


Can't locate UUID/Tiny.pm in @INC (@INC contains: /var/web-gui/data/napp-it/CGI /usr/perl5/5.8.4/lib/i86pc-solaris-64int /usr/perl5/5.8.4/lib /usr/perl5/site_perl/5.8.4/i86pc-solaris-64int /usr/perl5/site_perl/5.8.4 /usr/perl5/site_perl /usr/perl5/vendor_perl/5.8.4/i86pc-solaris-64int /usr/perl5/vendor_perl/5.8.4 /usr/perl5/vendor_perl .) at admin.pl line 719.
BEGIN failed--compilation aborted at admin.pl line 719.


Any ideas?
 
You cannot use EFI-labelled disks for rpool.
You must first reformat the disk to something else, like FAT, or try the napp-it menu Disks > Initialize before replacing.

Thanks, Gea. I tried formatting the disk, but it wasn't that easy to get rid of the EFI label... in fact, labelling alone doesn't seem to be enough. Finally, this article solved the problem for me; I had to create partition 2 a second time and then it worked.
Cheers,
 
Thanks, Gea. I tried formatting the disk, but it wasn't that easy to get rid of the EFI label... in fact, labelling alone doesn't seem to be enough. Finally, this article solved the problem for me; I had to create partition 2 a second time and then it worked.
Cheers,

A napp-it disk initialize + disk replace should work;
otherwise remove the faulted disk and re-mirror (menu Disks > Mirror rpool).
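The rough CLI equivalent, in case the menu still refuses (all disk names are placeholders except the new disk from the earlier error message):

Code:
# drop the faulted half of the rpool mirror
zpool detach rpool cXtFAULTEDd0s0

# attach the freshly labelled new disk to the surviving rpool disk
zpool attach rpool cXtGOODd0s0 c13t500000E017FA6D42d0s0

# make the new mirror half bootable (OI/illumos with GRUB)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c13t500000E017FA6D42d0s0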
 
Reboot after running the wget installer?



Yeah, same thing unfortunately. If I watch the boot on the actual machine, it says something about the same service running twice.
I do have a pre-0.6 option listed in the boot menu, so I'll try that.
 
Yeah, same thing unfortunately. If I watch the boot on the actual machine, it says something about the same service running twice.
I do have a pre-0.6 option listed in the boot menu, so I'll try that.

rerun the wget installer, reboot
 
I'm in the process of building an All-in-One solution and just want to see if I understand the process correctly.

I need 3 drives:
- drive 1: install ESXi on it
- drive 2: create datastore1 and a VM store, in which I install OI
- drive 3: create datastore2 and a VM store, and mirror it in OI

Is that correct?

What is worse/better if I use 2 drives with cheap RAID1 hardware RAID and install ESXi and OI on them?

When ESXi and OI are running, do I create a ZFS pool and share it back to ESXi via NFS/iSCSI as a datastore for the other virtual machines?
NFS or iSCSI?

Matej
 
Do you have the command to run? Obviously it's wget from the command prompt, but since I can't get into the GUI I can't see the full name.

Run it as root (or after su), from the home directory /root:

Code:
wget -O  - www.napp-it.org/nappit | perl
 
I'm in the process of building an All-in-One solution and just want to see if I understand the process correctly.

I need 3 drives:
- drive 1: install ESXi on it
- drive 2: create datastore1 and a VM store, in which I install OI
- drive 3: create datastore2 and a VM store, and mirror it in OI

Is that correct?

If you want a separate ESXi disk and a mirrored OI - yes.

What is worse/better if I use 2 drives with cheap RAID1 hardware RAID and install ESXi and OI on them?

Cheap hardware RAID helps in case of a complete disk failure.
With semi-dead disks or bad data on one disk you are lost, because it cannot detect the faulted data and repair it from the other mirror - ZFS can.

When ESXi and OI are running, do I create a ZFS pool and share it back to ESXi via NFS/iSCSI as a datastore for the other virtual machines?
NFS or iSCSI?

Matej

Use NFS; iSCSI cannot auto-reconnect after a reboot with the needed delay.
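A minimal sketch of the NFS route (the filesystem name and network are only examples; the share options follow the illumos share_nfs syntax):

Code:
# create a filesystem for VM storage and share it over NFS
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore

# optionally restrict access to the ESXi network and allow root access
zfs set sharenfs=rw=@192.168.1.0/24,root=@192.168.1.0/24 tank/vmstore

Then add it in vSphere as an NFS datastore pointing at the OI VM's IP and /tank/vmstore.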
 
If you want a separate ESXi disk and a mirrored OI - yes.
Oooh, OK. So I guess I can use one disk for ESXi and datastore1 and another for datastore2.


Cheap hardware RAID helps in case of a complete disk failure.
With semi-dead disks or bad data on one disk you are lost, because it cannot detect the faulted data and repair it from the other mirror - ZFS can.

But isn't ZFS supposed to sit on bare-metal hardware instead of on virtual drives? I guess it's still better to have it on a virtual drive and do scrubs than to have no filesystem checking on either hard drive...

As far as OI and OmniOS (stable) go, what would you choose nowadays? I can switch later, but I need something that is considered stable.

Do you know if you can use 2 SATA ports of the Supermicro X9SCM-F in ESXi and pass the rest through to the guest? Are the SATA ports all on a single controller?
If I have to buy a cheap PCIe controller for ESXi, what would you recommend now? On your page you recommend the SIL 3512 chipset, but that is PCI only and the above motherboard doesn't have PCI slots.

Thanks, Matej
 
Oooh, OK. So I guess I can use one disk for ESXi and datastore1 and another for datastore2.

But isn't ZFS supposed to sit on bare-metal hardware instead of on virtual drives? I guess it's still better to have it on a virtual drive and do scrubs than to have no filesystem checking on either hard drive...

As far as OI and OmniOS (stable) go, what would you choose nowadays? I can switch later, but I need something that is considered stable.

Do you know if you can use 2 SATA ports of the Supermicro X9SCM-F in ESXi and pass the rest through to the guest? Are the SATA ports all on a single controller?
If I have to buy a cheap PCIe controller for ESXi, what would you recommend now? On your page you recommend the SIL 3512 chipset, but that is PCI only and the above motherboard doesn't have PCI slots.

Thanks, Matej

Yes, you can use two boot disks: one for ESXi and datastore1, and the second for datastore2 and the primary Solaris boot disk - ZFS-mirrored.

Solaris on virtual disks is trouble-free. You just should not put your data pools on virtual disks.

OI vs OmniOS: to be honest, all of my machines that need a reinstall are (currently) getting OmniOS.

All SATA ports are on the same PCI device, so you can only pass them through as a whole.
I would use SATA to boot ESXi and an IBM M1015 (LSI 9211, IT firmware) for ZFS.
 
I'm curious: copying lots of small files over a Windows share is rather slow.

Are there ways to improve the speed? Would more random IOPS be useful there? Or some other tweaks?
 
Thanks for the reply. I will have to update soon.

I am trying to replace a 1 TB disk in one of my vdevs with a 3 TB one, but it seems napp-it hangs forever after I pull the 1 TB drive and insert the 3 TB drive. After this, I can no longer get any response from "format" on the command line either (EDIT: five minutes later it finally responded).

This is using M1015s on my VMware ESXi all-in-one, OI 151a1. Any ideas why this is happening? It's driving me crazy! I have to do a VM reboot, as even a shutdown gets stuck.

I suppose if I wait an hour napp-it will recover. Maybe this is related to the package installer issue? I will resilver this drive once I do a "zpool replace" manually, then update to OI 151a7.

Thanks again.
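For reference, a sketch of the manual replace (pool and disk names are only examples):

Code:
# tell ZFS which new disk takes over from the pulled 1 TB drive
zpool replace tank c4t2d0 c4t6d0

# watch the resilver
zpool status -v tank

# once every disk in a vdev has been swapped for a larger one, let the vdev grow
zpool set autoexpand=on tank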

Napp-it 0.9a5 is the default release. When bugs are reported, they are now fixed in the same release with a newer build date.

The newest features and Pro updates are in 0.9a6, also with updates under the same release number
but with newer build dates.

About your problem:
napp-it always reads all disks, partitions, pools and filesystems.
With a lot of them, or on slow machines, this can take a while.

To improve responsiveness, napp-it caches some information so the next access is faster.

About OI 151a1:
there are some bugs with the package installer in OI 151 < a3.
I would suggest updating to OI 151a7 and rerunning the napp-it wget installer.
 
I received my two 4 TB drives, but the disk page in napp-it doesn't load anymore :(

So I started playing with OmniOS, and after a fresh install of everything I'm having the same issue.

Things hang at parted -lm.

napp-it says: errors: no known data errorsexe (get-disk.pl): parted -lm

I ran it in the console, and it printed my 2 TB drives but hung there... and printed: Error: /dev/dsk/c2t0dp0: unrecognised disk label

Also, when installing OmniOS it could see the two 4 TB drives (I could have installed on them).

Anyone? I'm using a Supermicro LSI 2008-based card.

Thanks
 
Parted hangs when you insert a disk without a valid partition table.
Try the napp-it menu Disks > Initialize to prepare the disks (rollover menu).
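If the menu does not help, a sketch of labelling the blank disks by hand (the device name is a placeholder):

Code:
# interactive way: select a new 4 TB disk, run "fdisk" to create a partition, then "label"
format -e

# or write a GPT label directly with parted
parted -s /dev/dsk/c2t0d0p0 mklabel gpt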
 
Thanks for being so helpful, Gea.

Q: Would importing my existing pool on a different Solaris OS mess up the existing SMB share/CIFS permissions? I don't mind if they are not recognized; I just don't want them wiped when I go back to OI after my tests.
 
Run it as root (or after su), from the home directory /root:

Code:
wget -O  - www.napp-it.org/nappit | perl



Oh, thanks for that, _Gea. For some reason I thought I might need to run something different from the default install method :D
 
I'm having an odd UI issue at the moment where if I go to Disks or ZFS File Systems I get a DIV that takes up almost all of my screen.

Firefox is fine.

Running 0.9a5. I can include a screenshot if this is a new issue. I'll also double-check my add-ins, but I have not added any, and napp-it had been running just fine until a few of the 0.9 updates were applied.

Beta 0.9 updates did not exhibit this behavior. Rather odd, but I am sure it's something simple.
 