OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

I have had installations with 8 GB (NexentaCore). With the first system updates and system snapshots
I ran into problems. 8 GB is definitely not enough. I would use 16 GB as a minimum.

The disk bays are a valid argument. For my own home test machine I used a regular SSD and double-sided tape.
Cheaper, faster and more "shock proof" than SATA DOMs.
 
Like I said, if all the illumos distros had an option to install to a compressed root pool you could easily use 8 GB and be fine with just lzjb compression.

ZFSguru for FreeBSD makes this fairly easy during the install, but I didn't have the patience to figure it out with OpenIndiana.
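For reference, turning on lzjb on an existing root pool can be sketched like this. This is only a rough example: the dataset name rpool/ROOT/openindiana is an assumption (check yours first), and only data written after the change gets compressed.

```shell
# Sketch only - dataset names vary per install; list yours with:
zfs list -r rpool

# Enable lzjb on the root filesystem; only newly written blocks are compressed.
zfs set compression=lzjb rpool/ROOT/openindiana

# Verify the property and watch the achieved ratio grow over time:
zfs get compression,compressratio rpool/ROOT/openindiana
```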
 
SATA DOMs may be a good option.

But I have not yet seen any that is comparable in any way to a
common mainstream SATA SSD, e.g. an Intel 320 40 GB - not in price,
performance or expected reliability.

Just to make sure I got you right: you're saying to use my 80 GB SSD as local storage and install ESXi on it, with a datastore to hold the VM for OpenIndiana.

Is that correct ?

Thanks :)
 
I am having a problem with OpenIndiana, ZFS, and napp-it. In napp-it I have four users added under the ACL settings for the ZFS folder: root, cj, friends, and htpc. Friends and htpc have the same permissions. When I go onto a Windows machine it will not let me log in as htpc. I type in the user and password I set up and it says "Access is denied." I can log into the other three accounts fine. When I log in as another user and check the folder permissions, htpc does not show up properly. It shows up as "Account Unknown (_id number here_)". I am at a loss on how to fix this. I have gone as far as deleting the user and setting its permissions up again from scratch.
 
No optical drive available, but I'll try this when I get home :

http://bradrobinson.blogs.exetel.co...s-x86-64-bit-installer-for-2TB-root-disk.html

Will let you know how it goes.

Just wanted everyone to know this worked like a charm. Wish it had been in the OI Wiki. :(

Now a question. As mentioned earlier, I've got five 2 TB SATA3 disks which I want to use as a RAID-Z1 volume, and a 120 GB SATA3 SSD, which I envisioned using as the boot drive. Should I just format the whole SSD as Solaris2, or is there any advantage to splitting it and using part of it as an extended volume? Which is best practice (or neither of the above)? And if this is simply going to be used in a home environment, is it still a good idea to add a second SSD as a cache drive? I'm out of SATA3 ports, so one of the SSDs would have to use a SATA2 interface.

Thanks!
 
Sorry for the duplicate question (asked back in January - never got an exact answer), but I'm stuck at 8.2 of the all-in-one PDF.

What exactly needs to be done here, and why? An explanation would help me better understand and learn :)

I've actually been able to bypass 8.2 by leaving my router on DHCP, but after a restart my SAN is no longer viewable.

As of now, I'm using an Asus RT-N66U set to DHCP and an HP 1810G-24 in dumb mode. My ESXi box is statically set to 192.168.0.30
 
Just to make sure I got you right: you're saying to use my 80 GB SSD as local storage and install ESXi on it, with a datastore to hold the VM for OpenIndiana.

Is that correct ?

Thanks :)

I would use a smaller and cheaper 40 GB disk (SSD or laptop drive) as local datastore
and the 80 GB one as SSD cache.

But of course, you can install ESXi onto your 80 GB disk and use it as local datastore.
You can also separate things and install ESXi onto a USB stick and use the SSD only as datastore.
 
Sorry for the duplicate question (asked back in January - never got an exact answer), but I'm stuck at 8.2 of the all-in-one PDF.

What exactly needs to be done here, and why? An explanation would help me better understand and learn :)

I've actually been able to bypass 8.2 by leaving my router on DHCP, but after a restart my SAN is no longer viewable.

As of now, I'm using an Asus RT-N66U set to DHCP and an HP 1810G-24 in dumb mode. My ESXi box is statically set to 192.168.0.30

Chapter 8.2 of the napp-it all-in-one manual refers to the settings needed when you switch
from DHCP to manual IP settings and want to set the values by editing the config files.

If you move from DHCP to a manual IP, you have to set (persistently):
- IP address
- netmask
- gateway
- DNS server

The DNS server is set by editing /etc/resolv.conf

With current napp-it it is more comfortable to use menu
System - Network and click on the IP to set IP, netmask and gateway, and
menu System - Network - DNS to enter nameserver(s), for example:
nameserver 8.8.8.8

(8.8.8.8 is an open DNS server from Google)
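For anyone who prefers to do it by hand instead of through the napp-it menu, the persistent settings above can be sketched on OpenIndiana roughly like this. The interface name e1000g0 and the addresses are assumptions taken from this thread; adapt them to your hardware and network.

```shell
# Sketch, assuming interface e1000g0 and the 192.168.0.x net used in this thread.
svcadm disable svc:/network/physical:nwam      # turn off automatic (DHCP) config
svcadm enable  svc:/network/physical:default   # use manual, persistent config

ipadm create-if e1000g0
ipadm create-addr -T static -a 192.168.0.31/24 e1000g0/v4   # IP address + netmask
route -p add default 192.168.0.1                            # persistent gateway
echo "nameserver 8.8.8.8" >> /etc/resolv.conf               # DNS server
cp /etc/nsswitch.dns /etc/nsswitch.conf                     # resolve names via DNS
```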
 
Like I said, if all the illumos distros had an option to install to a compressed root pool you could easily use 8 GB and be fine with just lzjb compression.

ZFSguru for FreeBSD makes this fairly easy during the install, but I didn't have the patience to figure it out with OpenIndiana.

OpenSolaris-based systems use a ZFS boot disk with system snapshots. They are also not
as compact as others. But why care? There is no reason to put any effort into building
a system that can work from an 8 GB disk without problems when a 16 GB USB stick is
5 Euros more than an 8 GB stick, or you can buy a suggested 30-40 GB SATA SSD for about 40 Euros.

Do not go below the suggested minimums of a particular system and you are OK.
It does not help that other systems have other limits or advantages.
 
Now a question. As mentioned earlier, I've got five 2 TB SATA3 disks which I want to use as a RAID-Z1 volume, and a 120 GB SATA3 SSD, which I envisioned using as the boot drive. Should I just format the whole SSD as Solaris2, or is there any advantage to splitting it and using part of it as an extended volume? Which is best practice (or neither of the above)? And if this is simply going to be used in a home environment, is it still a good idea to add a second SSD as a cache drive? I'm out of SATA3 ports, so one of the SSDs would have to use a SATA2 interface.

It depends on how you use your NAS and what you use it for, but there's not usually much need for either read or write cache drives in a "typical" home environment (though "typical" can mean different things to different people :) )

You could use a slice of your SSD as L2ARC, but it will complicate your install a bit, and I'm not sure how much benefit you'd get.
If you are using your NAS for media storage and playback, for instance, then the data tends to be read once (when you watch the movie) and won't really be needed again until the next time you watch the same movie - an L2ARC won't really help much here.
The data is also read at a relatively pedestrian pace - even an HD movie is unlikely to need more than a fraction of your zpool's native I/O capability.

As for write cache SSDs, again it depends on the usage - for a home media NAS they probably won't offer much performance improvement. The writes tend to be "one at a time" large sequential files, uploaded from a network client, so the bottleneck would usually be the network itself (write cache SSDs should really be added in pairs too, again adding to the cost).

On the other hand, if you are using the NAS for all sorts of everyday (and perhaps not so everyday) things, and are supporting multiple clients simultaneously, then things may well be quite different.

There isn't really a best practice for this, BTW, as there are so many different usage variations - what's good for one setup may not be so good for another...
I suppose the crux of the matter is exactly what you mean by "this is simply going to be used in a home environment".
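If you do decide to experiment with cache devices later, the commands themselves are at least simple, and both cache and log devices can be removed again without harming the pool. A hedged sketch - the pool name "tank" and the cXtYdZ device names are placeholders:

```shell
# Sketch - "tank" and the cXtYdZ names are placeholders for your pool and disks.
zpool add tank cache c4t1d0                 # add an SSD as L2ARC (read cache)
zpool add tank log mirror c4t2d0 c4t3d0     # add a mirrored log device (sync writes)
zpool status tank                           # both show up in their own sections
zpool remove tank c4t1d0                    # cache/log devices can be removed again
```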
 
One last thing.. for my all-in-one setup I am going to use this case:

http://www.supermicro.com/products/chassis/4U/743/SC743T-665.cfm

with 8 hot-swap bays and activity/disk-failure LEDs.

Which not-too-expensive JBOD controller that supports 8 disks should I get?

The case has a CSE-SATA-743 SAS/SATA hard drive backplane.

I was hoping to get the disk error LEDs to work, but I'm not sure if that's doable with the "software RAID" of ZFS (I know it's not RAID...).

I was looking at this controller: http://www.supermicro.com/products/accessories/addon/AOC-USAS2-L8i.cfm?TYP=E (based on a LSISAS 2008 controller)
 
Yes, but I'm wondering if it will work with openindiana or another zfs os.

The Supermicro UIO MegaRAID AOC-USAS2-L8e is perfect.
It's comparable to an LSI 9211 with IT firmware, one of the best available.

Only problem: it's UIO (mounted on the wrong side), used for special
SuperMicro cases. You need to modify the slot mounting
(not a problem at all).
 
My laughs are becoming more and more hysterical... I just came home from work, the NAS responded, and I thought, well, that's good. I wanted to shut down the box to test each single drive, as coolrunnings suggested. I checked the disks first and oh-oh... hard errors (the one with T:1 is the spare disk). I tried to boot, but no, it wouldn't let me - I looked at this for 15 minutes until I pulled the plug. I noticed lots of stuff like that on the console. It comes back up, and this time everything looks normal to me. I also guess a restart of the system resets the error counters?

:(

Well, I shut it down again now, and will test all the drives. I'll do performance tests later. Unfortunately, I just found out that the test controller I brought home from work is an HP P400 and does not support JBOD... It's just not my day... sigh.

Greets,
Cap'
 
My laughs are becoming more and more hysterical... I just came home from work, the NAS responded, and I thought, well, that's good. I wanted to shut down the box to test each single drive, as coolrunnings suggested. I checked the disks first and oh-oh... hard errors (the one with T:1 is the spare disk). I tried to boot, but no, it wouldn't let me - I looked at this for 15 minutes until I pulled the plug. I noticed lots of stuff like that on the console. It comes back up, and this time everything looks normal to me. I also guess a restart of the system resets the error counters?

:(

Well, I shut it down again now, and will test all the drives. I'll do performance tests later. Unfortunately, I just found out that the test controller I brought home from work is an HP P400 and does not support JBOD... It's just not my day... sigh.

Greets,
Cap'

I would try replacing the SATA cable first. I have noticed similar problems that I was able to fix with a new cable.
 
I couldn't even shut it down again, it stayed on the Solaris screen for more than 20 minutes.

Well, I guess I can order some cables then... Thanks for your suggestion, cj145!
 
Just ordered my controller; now I'm confused about which IPASS-to-SATA cable I should get. Supermicro has a forward IPASS-to-SATA cable (CBL-0097L-02) and also a crossover one.

I need the one to connect a controller with IPASS to a backplane with SATA.
 
Just ordered my controller; now I'm confused about which IPASS-to-SATA cable I should get. Supermicro has a forward IPASS-to-SATA cable (CBL-0097L-02) and also a crossover one.

I need the one to connect a controller with IPASS to a backplane with SATA.

A multi-lane internal forward breakout cable.
It connects the controller's SFF-8087 multi-lane connector to the hard drives' or backplane's discrete SATA connectors.

A reverse cable connects onboard SATA ports to a backplane with an SFF-8087 multi-lane connector.
 
Thank you so much. I'll donate some money for your plugin when I start using it.

All my gear has been ordered.
 
I couldn't even shut it down again, it stayed on the Solaris screen for more than 20 minutes.

Well, I guess I can order some cables then... Thanks for your suggestion, cj145!


Hmmm, looks like a poor contact. Before ordering new cables you could try using a spray contact cleaner.
 
I have my WD Raptor 74 GB as my boot drive right now but it's noisy as fck... can anyone recommend a good boot drive?

Maybe a WD 500 GB Blue drive?

Or perhaps I should crack open this Intel 320 160 GB SSD sitting here (that I intended to sell) lol
 
About poor contacts... Today I took out the controller and put it in another PCIe slot (4x only, though). I got more errors that way (mostly transfer errors, but also some hard ones, though no soft ones), but I was able to copy data to it at about 35 MB/s (says Windows Explorer) and read data at about 60-65 MB/s. That's about double the speeds I had before, even though that PCIe slot is only 4x, not 8x like the one before. So I guess I'm narrowing down the problem... Could I have caused some ESD while connecting it? Dunno... The bigger problem is: how do I prove to the vendor that the port is not working correctly?

Strange thing is, I still cannot shut down my box; it always gets stuck halfway. Any idea in which log I could try to find some hints as to why this happens all the time?

Thanks,
Cap'
 
I must be missing something obvious, but I always get a "send error to [email protected]" when testing the email. This is a Cox cable email account, and I get this error even when sending to myself. I have made sure all my IPs are in the hosts file. Any ideas?

Thanks.
 
And not sure if this is related, but when I try to create an appliance group (using the eval key you sent me), I am told the other server isn't running napp-it. Both are running 0.8c.
Do I have to apply the eval license to both machines? From what I've read, the key only needs to be on the receiver side...

Thanks
 
OK, I was able to create the appliance group from the other machine (machine A). On machine A, under appliance group, everything shows OK and the napp-it status for both shows "ok Version: 0.8c nightly Mar.21.2012".
On machine B however, at the top under group overview it says "sh[3]: nc: not found [No such file or directory] sh[3]: nc: not found [No such file or directory]", and the status for both shows "remote call: timeout".
:(
 
Which distribution is the most recommended one right now?

I am going for the all-in-one solution with ESXi 5; most important for me are performance and being easy to maintain/configure/recover if something goes wrong. I am planning on using napp-it, and will be doing NFS/iSCSI and Windows shares, with 8 2 TB drives.

I've looked at the OpenIndiana website - is the server or the desktop version recommended? Does the desktop version use a lot of RAM/CPU for its GUI? (It would be nice to have unless it takes too many resources.)

Thanks!
 
OK, I was able to create the appliance group from the other machine (machine A). On machine A, under appliance group, everything shows OK and the napp-it status for both shows "ok Version: 0.8c nightly Mar.21.2012".
On machine B however, at the top under group overview it says "sh[3]: nc: not found [No such file or directory] sh[3]: nc: not found [No such file or directory]", and the status for both shows "remote call: timeout".
:(

Install netcat (nc) manually and reboot:
open a console and enter su, then
pkg install netcat

see http://www.napp-it.org/downloads/openindiana.html
 
Which distribution is the most recommended one right now?

I am going for the all-in-one solution with ESXi 5; most important for me are performance and being easy to maintain/configure/recover if something goes wrong. I am planning on using napp-it, and will be doing NFS/iSCSI and Windows shares, with 8 2 TB drives.

I've looked at the OpenIndiana website - is the server or the desktop version recommended? Does the desktop version use a lot of RAM/CPU for its GUI? (It would be nice to have unless it takes too many resources.)

Thanks!

I would use the live version
- due to Time Slider (select a folder and go back in time)
- due to problems with VMware Tools on a server install

With the current OI dev release, you must install tools like nc, smartmontools or mc manually;
see http://www.napp-it.org/downloads/openindiana.html and reboot after each install
 
Install netcat (nc) manually and reboot:
open a console and enter su, then
pkg install netcat

I saw those directions, but oddly enough I didn't have the same problem with machine A (also OI). The only difference is that I upgraded OI on machine A before installing napp-it. Does that fix the problems listed?

Also, how do I troubleshoot the email problem?
Thanks
 
I saw those directions, but oddly enough I didn't have the same problem with machine A (also OI). The only difference is that I upgraded OI on machine A before installing napp-it. Does that fix the problems listed?

Also, how do I troubleshoot the email problem?
Thanks

The problem is the pkg installer that is used.
If a tool like netcat works, it is OK; otherwise you must install the tools one by one.

About email:
if you have set your mail server and your address in the napp-it settings,
you can try menu Jobs - Email - SMTP test.

If it fails, try another mail account, mailing list or forwarder.
Most problems are due to encrypted-only mail accounts like Google's.
Option in such a case: install the needed TLS modules manually and use menu Jobs - TLS.

Other option: there is a firewall blocking port 25
 
Guess I was hoping there were some other log files somewhere for the email errors. I think it is connecting to the SMTP server, because if I change that setting it gives me the error "can't contact the email server". There is no TLS on my email server. BTW, the same settings work just fine from my test Nexenta server. This is really stumping me.

Also, as you can see below, there are no firewall problems...

root@NAS-oi-2:~# telnet smtp.cox.net 25
Trying 68.6.19.8...
Connected to smtp.cox.net.
Escape character is '^]'.
220 fed1rmimpo110.cox.net bizsmtp ESMTP server ready
 
Guess I was hoping there were some other log files somewhere for the email errors. I think it is connecting to the SMTP server, because if I change that setting it gives me the error "can't contact the email server". There is no TLS on my email server. BTW, the same settings work just fine from my test Nexenta server. This is really stumping me.

Also, as you can see below, there are no firewall problems...

root@NAS-oi-2:~# telnet smtp.cox.net 25
Trying 68.6.19.8...
Connected to smtp.cox.net.
Escape character is '^]'.
220 fed1rmimpo110.cox.net bizsmtp ESMTP server ready

Send uses the module Net::SMTP:
http://quark.humbug.org.au/publications/perl/perlsmtpintro.html

You may look at this and at the job script in
"/root/web-gui/data/napp-it/zfsos/15_jobs and data services/03_email/09_smtp-test/action.pl"

But I would first try another account/forwarder.
Recheck the entries in the napp-it settings, e.g. for spaces etc.
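Since Net::SMTP just speaks plain SMTP on port 25, you can also replay the dialog by hand on top of a telnet session like the one shown earlier and watch which step the server rejects. A sketch only - the server name comes from this thread and the addresses are placeholders:

```shell
# Sketch: walk the SMTP dialog manually to see which step the server rejects.
telnet smtp.west.cox.net 25
# Then type the following, checking the numeric reply after each line
# (2xx/3xx = accepted, 5xx = refused):
#   HELO mynas.local
#   MAIL FROM:<you@cox.net>
#   RCPT TO:<you@cox.net>    <- a 5xx here means the recipient/relay is refused
#   DATA
#   Subject: napp-it smtp test
#
#   test body
#   .
#   QUIT
```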
 
I was wondering, are there disadvantages to using Solaris 11 Express (missing/broken features etc.)? Encryption would be a benefit here.

Thanks!
 
I was wondering, are there disadvantages to using Solaris 11 Express (missing/broken features etc.)? Encryption would be a benefit here.

Thanks!

Solaris 11 Express is no longer available.
(Express = beta/preview of Solaris 11)

Use the Solaris 11 final release when you need Solaris.
(The licence is the same, only free for non-commercial and demo use.)
 
OK, I thought Express was the "free one"; I guess it's the same now.

I'll give Solaris 11 a shot with napp-it :)

My hardware is on its way.

Btw, would you consider making the napp-it extras less expensive, or not a yearly fee? (Possibly for home use only, i.e. a one-time $150?)
 
Doing some more testing with the email, I created the email jobs, then clicked "email" under "Job status", and here is the output (some fields changed/omitted for privacy):

id=1333385195
server=smtp.west.cox.net
to=test\@cox.net
[email protected]
user=omitted
pw=omitted

text1=send to
[email protected]
text2=by
opt2=smtp.west.cox.net
ps=
rps=
month=every
day=every
hour=every
min=every

As you can see, there is a backslash after the email user, before the @ sign, in the "to" field. Is this normal?
 