I would like to test a few things on Solaris 11.1 (currently running the latest OI).
Do you guys see any issue with passing my controller through to Solaris for some time and creating a new pool with unused disks? (All of this without importing the existing pool.)
I just don't want Solaris 11 to mess with my existing pool.
I think I had set it up originally using Unix permissions, because I've never been able to get ACLs to work no matter how I tried.
No problem at all.
You can even import your pool and go back to OI/OmniOS, as long as you do not upgrade your pool beyond v28.
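A minimal sketch of how you can check this before testing (pool name "tank" is a placeholder); the pool stays importable by OI/OmniOS as long as you never run `zpool upgrade` on it from Solaris 11:

```shell
# Check the on-disk pool version - v28 is the last version
# that OI/OmniOS and Solaris 11 both understand.
zpool get version tank

# To be extra safe while testing, import read-only so
# Solaris 11 cannot write anything to the pool:
zpool import -o readonly=on tank
```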
I tried to install OmniOS_Text_r151004.iso as a VM but keep getting this error. When I reboot the VM, I get BAD PBR SIG, which I can't get past. If it's relevant, I can't even select the Slovenian (38) keyboard layout, because it's simply not available. And when I select option 1 to install the OS, I get a message that the log file can't be created due to a read-only disk.
vSphere 5.1, patched to the latest, is installed on a 40 GB Intel SSD connected to an SCU port on a SM X9SRL-F. The onboard (chipset) SATA and three IBM M1015 controllers are passed through. The same setup worked fine with OI + napp-it. Any thoughts?
Could not proceed due to an error. Please try again later or ask your sysadmin.
Maybe a reboot after a power-off will help.
cannot label 'c13t500000E017FA6D42d0': EFI labeled devices are not supported on root pools.
I am having a big problem with napp-it...I installed the latest version using the wget piped to perl to upgrade, and now when I try to list the pools in the menu I get a "Processing, please wait..." page and it just sits there...forever.
The version reported is "v. 0.9a5 nightly Jan.22.2013". I don't know why it chose the nightly version!
This is with OI 151a1.
Guys,
Just had to replace a faulty disk from my rpool mirror. I can see the new disk when I hit "replace" in Napp-It, but when I click to do it, I get:
Also, the new disk is not shown as "AVAIL", and I don't know how to get it there... Any help for a noob, please? Thanks!
I went to check the health of my disks today and this pops up:
I haven't seen a performance hit (I can hit 109 MB/s when transferring from the server to a 4 TB internal drive), so I don't think the drive died.
Any idea? I searched for the error code on Google and found nothing.
Would you guys put 4 TB drives in a mirror? Some people suggest going only with double parity with such huge drives.
I wonder how arrays will evolve when we have 10 TB+ drives. I don't expect the R/W speed to increase much, so those would likely require extreme care...
You have three options:
- do not care about it (like most people with local TB disks)
- do a mirror, with the danger of a complete loss on errors during a rebuild
If your data is really important:
- create secure pools with RAID-Z2/Z3/3-way mirror vdevs plus two additional systems,
at least one at a different physical location, and sync them as often as needed.
Keep enough snaps to survive an "oh, someone or something copied wrong/damaged files over good ones last week/month/year".
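The snapshot part of this advice can be sketched in a few commands (pool and filesystem names are placeholders; napp-it's auto-snap jobs automate the same thing on a schedule):

```shell
# Take a dated snapshot of the filesystem you care about:
zfs snapshot tank/data@daily-$(date +%Y%m%d)

# List what you can roll back to:
zfs list -t snapshot -r tank/data

# Restore the last known-good state after a bad overwrite
# (discards all changes made after that snapshot):
zfs rollback tank/data@daily-20130115
```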
Can't locate UUID/Tiny.pm in @INC (@INC contains: /var/web-gui/data/napp-it/CGI /usr/perl5/5.8.4/lib/i86pc-solaris-64int /usr/perl5/5.8.4/lib /usr/perl5/site_perl/5.8.4/i86pc-solaris-64int /usr/perl5/site_perl/5.8.4 /usr/perl5/site_perl /usr/perl5/vendor_perl/5.8.4/i86pc-solaris-64int /usr/perl5/vendor_perl/5.8.4 /usr/perl5/vendor_perl .) at admin.pl line 719.
BEGIN failed--compilation aborted at admin.pl line 719.
You cannot use EFI-labelled disks for an rpool.
You must first reformat to something else, like FAT, or try the napp-it menu disk-initialize prior to replacing.
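A hedged sketch of one way to get rid of the EFI label so a disk can join an rpool mirror (the device name below is taken from the error above; the exact dialog of format(1M) varies, and relabelling destroys any data on that disk):

```shell
# Start format in expert mode so the SMI/EFI label choice is offered:
format -e c13t500000E017FA6D42d0

# Inside format:
#   label  -> choose the SMI label instead of EFI
#   fdisk  -> create a single Solaris2 partition covering the disk
# Afterwards the disk can be attached to the rpool mirror.
```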
I just tried updating to 0.9, got the following error, and now cannot log in.
Any ideas?
Thanks, Gea. I tried formatting the disk, but it wasn't that easy to get rid of the EFI label... in fact, relabelling alone seems not to be enough. Finally, this article solved the problem for me: I had to create partition 2 a second time, and then it worked.
Cheers,
reboot after running the wget installer?
Yeah, same thing, unfortunately. If I look at the boot on the actual machine, it says something about the same service running twice.
I do have a pre-0.6 option listed in the boot menu, so I'll try that.
rerun the wget installer, reboot
Do you have the command to run? Obviously it's wget from the command prompt, but since I can't get into the GUI I can't see the full name.
wget -O - www.napp-it.org/nappit | perl
I'm in the process of building an all-in-one solution and just want to see if I understand the process correctly.
I need 3 drives:
- on drive 1 I install ESXi
- on drive 2 I create datastore1 and a VM store in which I install OI
- on drive 3 I create datastore2 and a VM store and mirror it in OI
Is that correct?
Is it better or worse if I use 2 drives with a cheap RAID1 hardware controller and install ESXi and OI on them?
When ESXi and OI are running, do I create a ZFS pool and share it back to ESXi via NFS/iSCSI as a datastore for the other virtual machines?
NFS or iSCSI?
Matej
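The share-back step asked about above usually comes down to a couple of commands on the OI side (pool/filesystem names and the subnet are placeholder assumptions):

```shell
# Create a filesystem on the ZFS pool for VM storage:
zfs create tank/esxi

# Export it over NFS, granting read/write and root access
# to the ESXi host's subnet:
zfs set sharenfs=rw=@192.168.1.0/24,root=@192.168.1.0/24 tank/esxi

# Then add tank/esxi as an NFS datastore in the vSphere client.
```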
Oooo ok. So I guess I can use 1 disk for ESX and datastore1 and another for datastore2.
If you like a separate ESXi disk and a mirrored OI - yes.
A cheap hardware RAID helps in case of a complete disk failure.
With semi-dead disks or bad data on one disk you are lost, because such controllers cannot detect the faulted data and repair it from the other mirror - ZFS can.
Oooo ok. So I guess I can use 1 disk for ESX and datastore1 and another for datastore2.
But isn't ZFS supposed to be located on bare-metal HW instead of on/in virtual drives? I guess it's still better to have it on a virtual drive and do scrubs than not having the FS check both hard drives...
As far as OI and OmniOS (stable) go, what would you choose nowadays? I can switch later, but I need something that is considered stable.
Do you know if you can use 2 SATA ports on a Supermicro X9SCM-F in ESXi and pass the rest through to the guest? Are the SATA ports on a single controller?
If I have to buy a cheap PCIe controller for ESXi, what would you recommend now? On your page you recommend the SIL 3512 chipset, but that is PCI only and the above motherboard doesn't support PCI.
Thanks, Matej
Napp-it 0.9a5 is the default release. When bugs are reported, they are now fixed in the same release with a newer build date.
The newest features and Pro updates are in 0.9a6, also with updates on the same release number but with newer dates.
About your problem:
Napp-it always reads all disks, partitions, pools and filesystems.
With a lot of them, or on slow machines, this can take a while.
To improve responsiveness, napp-it buffers some info so the next access is faster.
About OI 151a1:
there are some bugs with the package installer in OI 151 < a3.
I would suggest updating to OI 151 a7 and rerunning the napp-it wget installer.
Run it as root (or after su) from the home directory /root:
Code:wget -O - www.napp-it.org/nappit | perl