OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Wow, this guide and OP were a lot to take in for someone who has never dabbled in virtualization beyond running VMware or KVM/QEMU on a host OS for a few applications.

However, I've recently acquired a Dell PowerEdge T310 cheaply, and plan to use it as a replacement for my current homebuilt home server running FreeBSD with ZFS. However, I want encryption. I had planned to use FreeBSD with GELI and ZFS on top, but then again I want to move to Linux, as I'm just better at it, and for a home server it has a better selection of available tools and applications. So I considered virtualizing it with FreeBSD as the host and CentOS as a guest, letting FreeBSD control the disk part. But then I got confused as to how barebones KVM is, whether it requires a dom0 at all, how good FreeBSD is for this purpose, etc.

So now I'm considering using napp-it to simplify my life. I'm not quite convinced by the encryption part, though. I see napp-it supports encryption on OpenIndiana by building a new ZFS pool on encrypted file-based block devices which themselves exist on another ZFS pool, but frankly, I find this a convoluted and unwanted setup compared to running ZFS on top of GELI under FreeBSD. Disk->ZFS->file-based abstraction layer with encryption->ZFS->files just seems the wrong way to go compared to the alternative Disk->GELI->ZFS->files. Is it possible to run the lofiadm device or similar directly on the disk instead, like in FreeBSD?
 
Thanks all for your answers, all things considered I'm gonna go with the 3TB WD REDs. The lower power usage means a lot.

/Jim

I need guidance on drives to purchase, can anyone please assist?

Criteria for drives:
1) Energy efficient
2) Performance is not an issue (1 Gbit network only)
3) May not run too hot; the room with the server in it is approx. 28 degrees Celsius
4) CAN'T be WD Greens (have 16 already and I'm running back and forth to the dealer with RMAs)
5) 3TB capacity

Options within my budget and availability:

1) Seagate ST3000DM001
2) Western Digital RED WD30EFRX
3) Toshiba DT01ACA300
4) Seagate Constellation CS ST3000NC000 3 TB
5) Seagate Constellation CS ST3000NC002 3 TB

What's the difference between 4 & 5?

4 & 5 are a bit pricier, but I'm putting them here for comparison; optimal would be 1, 2 or 3.

Any input much appreciated
/Jim
 
Has anyone successfully connected to a password protected CIFS share under OI or OmniOS with an android device?

I can connect to a protected Windows fileshare (my workstation), and it also works under Solaris 11.1; however, the password is never accepted on OI/OmniOS.

Thanks
 
So I gave this a shot on OmniOS.

It spammed the console endlessly with sudo logs; most of them looked like it was running chown and chmod 666 on something and then running it as root.

If I had more time I'd fix it and send you a diff. Sorry :p If you fix those security issues in the meantime let me know. Uninstalled for now.
 
Does anyone run Solaris 11 Express with built-in ZFS encryption?
I'd very much like to know how it performs compared to
openssl speed aes -multi [number of cores]
so I could get some idea of what performance or bandwidth constraint I'm going to hit.
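For anyone wanting to run that same baseline, a minimal sketch (the core count of 4 is an example; the -evp variant exercises the hardware-accelerated AES-NI path where available, which is closer to what a filesystem crypto layer could reach):

```shell
# Raw AES throughput across 4 cores using the generic software implementation
openssl speed -multi 4 aes

# Same test via the EVP interface, which uses AES-NI when the CPU has it
openssl speed -multi 4 -evp aes-256-cbc
```

Comparing the two numbers also tells you roughly how much AES-NI would buy you on that box.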
 
I'm pretty sure Toshiba 3TB drives are rebranded Hitachi Ultrastar 3TB drives; they look identical besides the label, though the firmware is probably different, like a WD Red vs. a WD Green.

Yes, absolutely! In fact, napp-it reports 8 out of 10 as Hitachi drives. The other 2 are listed as Toshiba (I think one of them was made in December 2012, the others September 2012).
 
Is napp-it compatible with NexentaStor Community Edition? I would like to start using SnapRaid, but I don't want to give up FC for my ESXi box.
 
Is napp-it compatible with NexentaStor Community Edition? I would like to start using SnapRaid, but I don't want to give up FC for my ESXi box.

No, napp-it is only compatible with NexentaCore and Illumian (the base of NexentaStor),
but you may compile SnapRaid yourself.
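Compiling SnapRaid is a standard autotools build; a rough sketch, assuming you have already downloaded a source tarball and have a C compiler and (g)make installed:

```shell
# Unpack the SnapRaid source tarball (filename pattern is an example)
tar xzf snapraid-*.tar.gz
cd snapraid-*

# Standard autotools build; on Solaris-likes you may need gmake instead of make
./configure
make
make check      # optional self-test
make install    # installs to /usr/local by default
```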
 
Is it possible to run the lofiadm device or similar directly on the disk instead, like in FreeBSD?


No, on Solaris you have the options:
- Encryption builtin to ZFS: Oracle Solaris only
- lofiadm, based on file-devices

Main problem with lofiadm: you must go through ZFS twice (performance).
Main advantage of lofiadm: you can back up such encrypted pools while keeping ZFS security.
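To illustrate the file-backed layering described above, a rough sketch (pool name, file path and size are examples; because the container file lives on an existing pool, every read/write passes through ZFS twice):

```shell
# Create a container file on an existing (plain) pool
mkfile 100g /tank/crypto/vol0

# Attach it as an encrypted block device; lofiadm prompts for a passphrase
# and prints the device name, e.g. /dev/lofi/1
lofiadm -c aes-256-cbc -a /tank/crypto/vol0

# Build the encrypted pool on top of the lofi device
zpool create secure /dev/lofi/1

# After a reboot: re-attach the file (same passphrase) and import the pool
lofiadm -c aes-256-cbc -a /tank/crypto/vol0
zpool import secure
```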
 
Ok, having an annoying issue; I should figure out how jobs work in the new 0.9 version so I could give you a better report.

I'm getting alert emails on the following. It doesn't happen on 0.9a5, but does on 0.9a6:

state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on software that does not support feature
flags.
scan: scrub canceled on Sat Feb 16 22:38:20 2013

errors: No known data errors
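If you do want to move off the legacy format that the status message complains about, the upgrade itself is short (the pool name "tank" is an example); note that it is one-way, as the warning says:

```shell
# Show the current on-disk version of all pools
zpool upgrade

# List the versions/features this software release supports
zpool upgrade -v

# Upgrade one pool (irreversible: older software can no longer import it)
zpool upgrade tank
```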
 
Alerts are generated by /var/web-gui/data/napp-it/zfsos/_lib/scripts/job-email.pl (runs every few minutes)
on job, capacity, or zpool status problems (disk errors).

Did you get DISK alerts?
 
I have an All-in-one, where I have OmniOS as storage and an AD domain controller as a guest. When I join a domain in OmniOS, NFS doesn't get mounted in ESX. That means that none of the other VMs load...

I tried booting another Linux distribution and mounting the NFS volume where ESX is located, but it doesn't work there either. I forgot the error, but it said something about the NFS server not being reachable.

Should AD be up before OmniOS?
If yes, then there is no chance AD could be located on the same ESX server...

Matej
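When that happens, checking from another client can narrow it down; a quick diagnostic sketch (host and share names are examples):

```shell
# From a Linux rescue system: does the OmniOS box export anything at all?
showmount -e omnios.example.lan

# On OmniOS itself: is the NFS server service still online after the domain join?
svcs -xv svc:/network/nfs/server:default

# Try a manual mount from Linux with an explicit NFSv3 request
mount -t nfs -o vers=3 omnios.example.lan:/tank/vmstore /mnt
```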
 
No, I'm getting 'napp-it status and logs from stor2'.
These were happening every minute after I upgraded a 0.8 system to 0.9a6 (using the wget upgrade method).

The upgrade went strangely: when I upgraded other systems to 0.9a3 or a5, I lost all my jobs, but on this system all my jobs stayed intact.

It looks like the once-a-week status report email was being sent per minute, on the alert timings.

Maybe a quick fix would be for me to purge all the jobs and recreate them.
 
No, I'm getting 'napp-it status and logs from stor2'.
These were happening every minute after I upgraded a 0.8 system to 0.9a6 (using the wget upgrade method).

The upgrade went strangely: when I upgraded other systems to 0.9a3 or a5, I lost all my jobs, but on this system all my jobs stayed intact.

It looks like the once-a-week status report email was being sent per minute, on the alert timings.

Maybe a quick fix would be for me to purge all the jobs and recreate them.

Job actions in napp-it <= 0.8 are part of the job itself, while 0.9 uses common scripts with parameters in job files only.
This may result in problems with old jobs. In such cases you should recreate jobs after the update.
 
I recreated the jobs, and all seems good.

I just thought it was a new feature that it upgraded my old jobs, since the other ones I upgraded had removed the jobs. :)
 
I think I spoke too soon on this one. Today I needed to migrate a VM, and no matter what I tried, I was unable to do this. I know it may have nothing to do with the removed status, but that will always be there in the back of my mind when I have any problems. To that end, I am in the process of building my AIO and will try moving the VM inside that after I have imported the pool and set up the LUNs again. I will post back on my success or failure.


Thanks,
basically the problem is based on an incompatibility of parted with the PERC 6i controller, resulting in errors and bug reports when reading disk and partition info from the PERC.

I see no other option than ignoring the messages (since parted is not essential) or replacing the PERC with something
more compatible like an IBM M1015 (needs reflashing to 9211/IT mode).

I do not know if there is another firmware option for the PERC 6i, but I would replace the controller.
You will also get better performance and support for disks > 2TB.
 
During the last week, I've been replacing 3 x WD 2TB Greens due to reported errors in Napp-it.

Yesterday I finished a Resilver with the disk "c6t9d0", brand new drive. Today I started another Resilver, replacing yet another faulty WD Green.

Just an hour into the Resilver, I'm again getting errors, this time on a WD Green (expected), BUT ALSO on the disk I replaced yesterday.

Should I worry about these errors, or is it normal during a Resilver?

See attachment

Thanks for any help
/Jim

rebuild.PNG


EDIT:
Output of Zpool:
zpool.PNG
 
No, napp-it is only compatible with NexentaCore and Illumian (Base of NexentaStor)
but you may compile SnapRaid yourself.
If I use something Illumos-based like OmniOS, could I get FC to work?
 
Did you have it running for some time yet? My two Cruzers would always vanish after some days of runtime.

About a week now, so not a very long time, no.

I used a random noname 4GB drive in my old ESXi machine, and that simply vanished after a few days as well. Plugged it into my PC, and the drive seems to be broken. Luckily, ESXi is pretty robust, and it ran without the drive for 3-4 months before I reinstalled it to new hardware.
 
About a week now, so not a very long time, no.

I used a random noname 4GB drive in my old ESXi machine, and that simply vanished after a few days as well. Plugged it into my PC, and the drive seems to be broken. Luckily, ESXi is pretty robust, and it ran without the drive for 3-4 months before I reinstalled it to new hardware.

Yes, ESXi is robust since it gets loaded into RAM. The USB key is basically just used to save the config.

My Cruzers run fine on other computers. They would still reliably vanish from ESXi randomly, though.

So I save myself the hassle (and a little power consumption) by running ESXi via PXE from a host that's running critical services anyway.
 
Yes, ESXi is robust since it gets loaded into RAM. The USB key is basically just used to save the config.

My Cruzers run fine on other computers. They would still reliably vanish from ESXi randomly, though.

So I save myself the hassle (and a little power consumption) by running ESXi via PXE from a host that's running critical services anyway.

A bit off topic, but did you ever try to hot-plug a drive (USB or otherwise) and get ESXi to use it without rebooting? Sort of write its running configuration to a new disk... I have no idea if it's possible, theoretically or in the real world.
 
During the last week, I've been replacing 3 x WD 2TB Greens due to reported errors in Napp-it.

Yesterday I finished a Resilver with the disk "c6t9d0", brand new drive. Today I started another Resilver, replacing yet another faulty WD Green.

Just an hour into the Resilver, I'm again getting errors, this time on a WD Green (expected), BUT ALSO on the disk I replaced yesterday.

Should I worry about these errors, or is it normal during a Resilver?

See attachment

Thanks for any help
/Jim

rebuild.PNG

These errors are reported by iostat; they are not real errors, more like messages.
They can, but do not necessarily, indicate future problems.

You can ignore them as long as there is no real ZFS error and they do not suddenly grow dramatically.
In such a case you should check your setup and your disks with a manufacturer's tool for bad sectors or other problems.
 
These errors are reported by iostat; they are not real errors, more like messages.
They can, but do not necessarily, indicate future problems.

You can ignore them as long as there is no real ZFS error and they do not suddenly grow dramatically.
In such a case you should check your setup and your disks with a manufacturer's tool for bad sectors or other problems.

Many thanks for your assistance, _Gea.

Best regards
Jimmy
 
I have an All-in-one, where I have OmniOS as storage and an AD domain controller as a guest. When I join a domain in OmniOS, NFS doesn't get mounted in ESX. That means that none of the other VMs load...

I tried booting another Linux distribution and mounting the NFS volume where ESX is located, but it doesn't work there either. I forgot the error, but it said something about the NFS server not being reachable.

Should AD be up before OmniOS?
If yes, then there is no chance AD could be located on the same ESX server...

Matej

Does anyone have any info on this topic?
 
Does anyone have any info on this topic?

If you need to start OmniOS as a domain member, the domain must be up and running.
So your AD server cannot be on NFS storage delivered by the same OmniOS SAN VM.

To overcome this chicken-and-egg problem, you need either a second box, or you
need to place your AD server on a local ESXi datastore that can start first:
the same local storage you need for your virtual SAN VM.
 
Bummer :)

I guess I can't set AD users on shares if the SAN VM is not joined to the domain, right?

Matej
 
I took some old hardware and set up OI + napp-it as an iSCSI target for my ESXi 5.1 environment. I'm using it as the backup store for vSphere Data Protection and it's working a treat! I'd like to add two more features to the solution.

1) I've got a second NIC I'd like to utilize for iSCSI. Do I need to set up link aggregation, or just set up a COMSTAR iSCSI target portal group with both adapters?

2) VDP has my data nicely de-duped down to 2TiB. I'd like to work out a way to replicate this backup data offsite, ideally to Amazon, Google or Azure storage pools. Is this already doable with replication or sync?
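For 1), COMSTAR can do the portal-group variant without link aggregation; ESXi then uses its own multipathing across both portals. A sketch with itadm (IP addresses and the target IQN are examples):

```shell
# Put both NIC addresses into one target portal group
itadm create-tpg tpg1 192.168.10.5:3260 192.168.20.5:3260

# Bind the existing iSCSI target to that portal group
itadm modify-target -t tpg1 iqn.2010-09.org.openindiana:02:backup-target

# Verify the portals the target now listens on
itadm list-target -v
```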
 
So I haven't seen much talk about FreeNAS - is this not recommended? I understand that it was not up to the latest ZFS version early on, but that actually has changed and it's up to date.
 
Anyone know if it's possible to create an 'other job' that runs as a non-root user?

I have a script that runs server rsync jobs, and I don't want it running as root for ownership reasons.

Thanks
Paul
 
I have an APC Smart-UPS 2200 LCD with a management card (AP9630). How can I make the bare-metal OmniOS box shut down when the battery is low?
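One option, assuming you compile apcupsd from source for OmniOS (it is not packaged as far as I know), is to let it poll the AP9630 over SNMP and trigger the shutdown locally. A sketch of the relevant /etc/apcupsd/apcupsd.conf lines (the NMC address and community string are examples):

```shell
# /etc/apcupsd/apcupsd.conf (relevant directives only)
UPSCABLE ether
UPSTYPE snmp
DEVICE 192.168.1.50:161:APC:private   # NMC address : port : vendor : SNMP community
BATTERYLEVEL 10                       # shut down below 10 % charge...
MINUTES 5                             # ...or below 5 minutes estimated runtime
```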
 
"napp-it to go"

"napp-it to go" is the idea of a ZFS server/media server ready to use on USB sticks.
You can create it for your hardware, like an HP MicroServer or a line of SuperMicro boards.

Advantages:
- Plug in the stick, boot, and manage the server via the napp-it Web-UI (napp-it runs fine from a fast USB 3 stick on OmniOS)
- You do not need Unix knowledge; just look at the console after boot for the currently used IP, or enter http://servername:81 (DHCP)
- Clone the stick from time to time to have a second boot option in case of problems (boot and import your pool)
- For private/internal use, you can give away such bootable sticks for free

For the needed free tools, see
http://napp-it.org/manuals/to-go.html
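The clone step above can be done with plain dd from OmniOS itself or a second machine; a sketch (the device names are examples, so double-check them with format before writing anything):

```shell
# Identify both sticks first, e.g. via:  format </dev/null
# Then copy the whole source stick onto an equal-sized or larger spare:
dd if=/dev/rdsk/c4t0d0p0 of=/dev/rdsk/c5t0d0p0 bs=1048576
```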
 
I have another question regarding Solaris 11.1.

The free/personal version doesn't include updates during the year; however, I think that every year updates are made available.

I wonder if it's possible to do these updates without reinstalling from a new image (and re-importing the pool). Anyone know?
 
I don't think "upgrades are made available" to unlicensed users "every year". But Oracle does support in-place upgrades to new releases for unlicensed users.

I successfully updated from Solaris 11 Express to 11.0 to 11.1 using instructions provided by Oracle for the Solaris package manager (pkg).
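The in-place route relies on pkg creating a new boot environment, so the old install stays bootable as a fallback; a sketch (the BE name is an example):

```shell
# Refresh catalogs and preview what an update would do
pkg refresh
pkg update -nv

# Perform the update into a fresh boot environment
pkg update --be-name solaris-11_1 --accept

# List boot environments; the previous one remains available at boot
beadm list
```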
 