OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Solaris 11 is free to use but, as far as I know, does not get updates unless you have a support contract with Oracle. That's how I understood it.
For my home NAS, I'm perfectly happy with Solaris 11 11/11 (not Express).

Are you aware of the (acknowledged) bug in S11 where, under some (not unusual) conditions, it frees up everything (or almost everything) in the ARC and then refuses to use the ARC?

http://mail.opensolaris.org/pipermail/zfs-discuss/2012-January/050664.html

And you will not see a fix for this unless you shell out a thousand bucks or whatever Oracle's ridiculous price is (or maybe in a year or two they drop another free release...).
 
Just a suggestion:
I see ZFS on Linux activity and its user base growing :D.
That is good for both ZFS (especially) and Linux, since there is no real ZFS equivalent on Linux today.
Oh, Btrfs is... well... in the hands of Oracle, and I doubt they will develop it extensively.

Managing ZFS on Linux would be a good candidate for expanding your napp-it someday.

I expect ZFS will become mature enough to be brought into the kernel distributions.

It's always good to have options:
But I moved to Solaris a few years ago to replace our Windows and Mac filers.
Most important were ZFS and the Sun CIFS server, the best SMB server imho
and the only one with real ACL and Windows SID support (unlike Samba).
And the other Solaris goodies like Comstar, Crossbow, SMF etc. were additional attractions,
together with the all-from-one-hand aspect.

Now we do not have a Solaris or Illumos problem, but a problem with the OpenIndiana
distribution and its universal approach of being a full-featured desktop and server OS based
on Illumos.

ZFS development in Illumos is not affected, and Illumos is the place where the ZFS music plays,
even for FreeBSD and Linux. I do hope that OpenIndiana will continue. Otherwise I will move to
OmniOS. It is quite similar to a minimalistic OI, but focused on server needs and with a commercial
support option, as the most OI-compatible free Illumos-based distribution.
 
It's always good to have options:
But I moved to Solaris a few years ago to replace our Windows and Mac filers. [...]

Did you try Samba v3?
ACLs and the rest can be worked around under Linux :)
I honestly do not like ACLs, which add yet another layer :D

On Unix we can use groups and other schemes;
I use groups a lot to manage user IDs.

We do have a problem with "open solaris": there is no commercial support for, or contribution to, it :|

The main key is that some company needs to contribute patches, kernels and updates to "open solaris".
No open-source project survives without company/commercial support and contributions; this is the main thing "open solaris" lacks compared with Linux.

I see many goodies being transplanted to Linux, for example ZFS, DTrace and others :D.

The other main gap is newer hardware support on "open solaris". Under Linux I can use current/newer hardware without issues, since many vendors contribute to Linux, updating source code and maintaining fixes/updates.
 
Yo,

After solving most of my file-permission problems I still have some trouble: I wanted to rename a file but again got "permission denied". Usually when that happens I go to ACL on folder, select the folder I'm having trouble with, and reset the ACL.
Now I have come across a file where no users are assigned at all, and when I try to reset the ACL I get errors (see screenshot):

[attachment: screenshot036x.png]
 
After solving most of my file-permission problems I still have some trouble: I wanted to rename a file but again got "permission denied". [...]

This is one of the still-unsolved problems with ACLs and Unix permissions when using NFS3 and CIFS.

In such a case, you can only reset the Unix permissions of the parent folder recursively to 777
and then reset the ACL to something like everyone@=modify with inheritance=on, to avoid this access problem when trying to set ACLs.
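As a shell sketch (the folder path is only an example; modify_set is the Solaris shorthand for the modify permission set):

Code:
# 1. reset Unix permissions recursively on the parent folder
chmod -R 777 /tank/share/problemfolder
# 2. replace the ACL with a single inheritable everyone@=modify entry
/usr/bin/chmod -R A=everyone@:modify_set:file_inherit/dir_inherit:allow /tank/share/problemfolder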
 
Out of curiosity, would FreeBSD, which tracks OI closely, be a bad choice for ZFS if OI goes down the drain?
ZFS has worked fine since the 9-series in my experience, and judging by their mailing lists it seems robust too.
//Danne
 
This is one of the still-unsolved problems with ACLs and Unix permissions when using NFS3 and CIFS. [...]

Thanks for the info, but nothing I do changes anything!
 
I have a little problem with Napp-IT and would like to humbly ask the experts' advice.

When installing either OpenIndiana (latest release) or Solaris 11, followed by Napp-IT, no matter what I do, Napp-IT reports "no unused disks are available" when I try to create a zpool.

The OS itself is installed on a 250GB drive, and there are two other unused 500GB drives in the system. They both show up in GParted, and I can create a zpool from the command line if I use the addresses that GParted displays for the two 500GB disks. Even then, however, Napp-IT is unable to create folders on this pool, even though the pool shows up as healthy in the Napp-IT interface.
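Roughly what I did from the command line (the disk names here are placeholders, not the exact ones from my system):

Code:
# list the disks Solaris itself enumerates (ctd names, unlike GParted's device paths)
format < /dev/null
# create the pool from the two spare 500GB disks
zpool create tank mirror c2t0d0 c2t1d0
zpool status tank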

Again, this behavior repeats in both OpenIndiana and Solaris 11. In NexentaStor 3.1.3, there are no problems creating pools or folders using Nexenta's native GUI.

The system uses an ICH7/NM10 southbridge. I am at a loss, and humbly ask for advice.
 
This is one of the still-unsolved problems with ACLs and Unix permissions when using NFS3 and CIFS. [...]

It could work (why not...),
but...
setting all rights on everything (777) is a bad habit :p, as I have learned in my life :D
 
Out of curiosity, would FreeBSD, which tracks OI closely, be a bad choice for ZFS if OI goes down the drain? [...]

I guess not.

Generally speaking, OI is just another complete distro.

ZFS has already been transplanted to FreeBSD, and they will keep developing it from the source code that was open before Oracle took Solaris closed again.

ZFS will survive in the open-source world. There is only one real issue, the open-source licensing issue, and that can be worked around, since ported code keeps its own open-source license.

One example:
ZFS (originally from OpenSolaris) on Linux (still under heavy testing and minor development) ships as a separately licensed (CDDL) module :D,
and SSH (originally from OpenBSD) on Linux keeps its BSD license.
 
Heum heum

OpenIndiana project leader resigns, SchilliX returns to OpenSolaris code base
=> http://distrowatch.com/weekly.php?issue=20120903#news

Ever since Oracle's decision to discontinue the OpenSolaris distribution the company inherited from Sun Microsystems, the developers and contributors to the popular UNIX platform have been searching for a way to continue the project. Many of the coders have found themselves working under the Illumos umbrella, with OpenIndiana emerging as the logical continuation of the OpenSolaris distribution. But not all is well with the project and last week's resignation of OpenIndiana lead developer Alasdair Lumsden highlights some of the conflicts between open-source ideals and commercial interests. The H Online reports in "OpenIndiana project leader steps down": "In Lumsden's opinion, Joyent, Nexenta and Delphix spent too much effort pushing their own distributions of Illumos and only contributed to the core of Illumos, which Lumsden says led to 'the increasing irrelevance of Illumos'. Lumsden also points to the fact that many of the unique features of OpenIndiana are now becoming available on Linux as well. Technologies such as ZFSOnLinux, Btrfs and dtrace are not as mature as their Illumos counterparts yet, but, Lumsden believes, they lead to the OS "becoming less and less important" in comparison with Linux. Taking his leave from all duties on the project, Lumsden says that he will nonetheless continue to provide hosting for OpenIndiana's infrastructure."

In the old days Sun Microsystems' Solaris was the most popular UNIX operating system, but with the growing popularity of Linux at the time, the company was forced to open some of the Solaris code in order to attract more developers and contributors. Thus OpenSolaris was born. The opening of the source code also led to an explosion of OpenSolaris-based distributions, both community and commercial, with various levels of (in)compatibilities between them. To illustrate the complexity, Jörg Schilling, the developer of cdrecord and SchilliX, has emailed DistroWatch to explain the history and relationships between some of the products that evolved from Solaris: "SchilliX is not based on OpenIndiana as OpenIndiana has incompatible IPS packaging. SchilliX uses the native Solaris packaging system. SchilliX is rather based on OpenSolaris, but note that OpenSolaris is not a distribution. The name of the distro you call 'OpenSolaris' is Indiana. And OpenIndiana is based on Indiana and not on Solaris as Solaris is the non-free precursor of OpenSolaris. To be more precise, OpenSolaris is the 'open' Solaris base. The development code name for Solaris 10.1 (renamed to Solaris 11 at the end of 2005) is 'Nevada' and the source base (that is similar to what FreeBSD covers with their code base) is called 'ON' (Operating and Networking). Before Oracle launched their recent closed-source fork from OpenSolaris, the OpenSolaris base was called 'ONNV' by Sun. ONNV is an abbreviation for 'Operating and Networking Nevada'. SchilliX-ON is a continuation project for that code base and SchilliX is based on SchilliX-ON."
 
Has anyone managed to access their SMB shares on OpenIndiana from an Android device when guest access has been disabled? When I try, I keep getting wrong-password errors no matter which Android application tries to access the share, while Windows and Linux clients have no problems.
 
OI 151a6 is out (release date: September 4th, 2012)
=> http://wiki.openindiana.org/oi/oi_151a_prestable6+Release+Notes

Notable in this release is the apache/apr/apr-util bump. The complete list of changes is below...

  • Bump illumos to 13793:10c3656ccf76
  • SFW fixes and bumps
    • automake CVE-2012-3386
    • libxslt CVE-2011-3970
    • Bump apache to 2.2.22
    • Bump apr to 1.4.6
    • Bump apr-util to 1.4.1
    • Bump BIND to 9.6-ESV-R7-P2
    • Bump libsndfile to 1.0.25
      • #2145 libsndfile needs newer version
    • Bump memcached to 1.4.14
    • Bump SoX to 14.4.0
    • Bump Wireshark to 1.4.15
    • Remove lcms python-24 dep
    • Remove mc python-24 dep
    • Remove mercurial python-24 dep
    • Fix readline libtermcap patch application
  • oi-build bumps and fixes
    • Bump KVM driver
    • Bump NVIDIA driver to 295.71
    • Bump qemu-kvm
    • Fix permissions in illumos-gcc
    • Fix illumos-gcc assembler
    • Add diagnostic/mtr
    • Obsolete illumos/gcc
  • DI and DC fixes
    • #3020 Remove cpp and cc links from gcc-3
 
If you set ACLs afterwards, the Unix permissions are set (reduced) accordingly.

It does.

Changing everything to rwxrwxrwx (777), where everyone can edit and delete, is not a good habit :)

Imagine...
a situation where you set ACLs, and later on you change the structure of your shared folders (many directories owned by many users).

By using groups, you can make such changes pretty quickly :) (see the sketch at the end of this post)

One example from ZFS on Linux: they implement NFS sharing in ZFS (as on OpenSolaris or Solaris), but they use Samba for Windows sharing :D.
Samba is very flexible for me.
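A sketch of the group approach (Linux syntax, all names invented; Solaris syntax differs slightly):

Code:
groupadd media                       # one group per shared area
usermod -a -G media alice            # add users to the group
chgrp -R media /tank/share/media     # hand the tree to the group
chmod -R 2775 /tank/share/media      # setgid bit keeps new files group-owned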
 
It does. Changing everything to rwxrwxrwx (777), where everyone can edit and delete, is not a good habit :) [...]

All good and well, but this still doesn't fix my issue... I have no clue how to reset the ACL on a folder with no permissions... nothing I tried works!

Ty
 
I'm testing OI and Napp-it on ESXi 5 as a home ZFS NAS / all-in-one, and I'm running into an issue.

On a fresh boot, I get write speeds of around 110 MB/sec and read speeds of around 55 MB/sec, which is fine. However, after leaving the server on for around a day, my write speeds over SMB drop to around 60 MB/sec. Sending the same file over FTP saturates the gigabit line.

I did some troubleshooting and found that issuing "svcadm restart svc:/network/smb/server:default" brings my write speeds back up to 110 MB/sec.

Does anyone know why SMB writes slow to almost half speed after the server has been on for a while (current uptime is 18 hours)?

I will try to reproduce this over the next day. Running OI 151a5 and Napp-it 0.8k, stock install. All that has been done is to create a zpool of six 3TB disks as a RAID-Z2 array. The OI VM has four 2GHz cores and 8GB of RAM with one e1000 NIC. The disks are spread across three IBM M1015 HBAs in IT mode, passed through to the VM.
 
All good and well, but this still doesn't fix my issue... I have no clue how to reset the ACL on a folder with no permissions... [...]

There is no such thing as an ACL with no permissions.
You may set a single ACL entry like root=full or everyone@=read.

ACL entries are
user:allow
or user:deny,
with permissions like add, open, write, create etc., with or without inheritance.
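On Solaris, ls -V lists the entries, and chmod A= replaces them with a single trivial one (the file name is only an example):

Code:
# show the ACL entries of a file
/usr/bin/ls -V file.txt
# replace them with a single everyone@ read entry
/usr/bin/chmod A=everyone@:read_set:allow file.txt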
 
I was able to reproduce my problem. Overnight, my SMB write speeds dropped from 110 MB/sec to 70 MB/sec (but only over SMB; FTP still saturated the gigabit line).

After SSHing to the VM and restarting the SMB service (the only thing I did), write speeds immediately shot back up to 110 MB/sec.

Any ideas?
 
Running OI 151a5 and Napp-it 0.8k, stock install. [...]

Sorry, I can't help you with that... besides, I recall that 2 cores is the recommended
value for OI/Solaris in an ESXi VM.

But seeing that you passed through three M1015 cards, may I ask you to share your
hardware specs? Motherboard make & model, especially.

TIA!

Hominidae
 
Sorry, I can't help you with that... besides, I recall that 2 cores is the recommended value for OI/Solaris in an ESXi VM. [...]

I had 2 cores, but they were pegged during writes, so I went to 4 (which are also pegged during writes; 8 GHz to write seems high, but I'm not worried about it).

Specs:
Chassis: Heavily modded AIC RSC-4ED2 4U 24-bay (modded to accept an ATX PSU and silent fans) + rails
Mobo: Supermicro X9DRI-F-O
CPUs: 2x Intel Xeon E5-2620 (6 cores each=12x 2GHz cores with HT for 24 logical cores)
RAM: 32GB Kingston DDR3 ECC FBDIMMs
SSD: Intel 520 120GB
HDD: 6x WD RED 3TB
HBAs: 3x IBM M1015 2SAS/8SATA
PSU: Seasonic Platinum 860W

Rack: Tripp Lite 12U enclosure
Switch: HP Procurve 2810-24G (fully managed Layer 2)
WAP: Ubiquiti UniFi AP PRO (802.11abgn)
UPS: APC 1500VA SMART-UPS + network card + rails

All network wiring and punchdowns are CAT6.
 
I had 2 cores, but they were pegged during writes, so I went to 4. [...]

Thanks for sharing!
I am looking into the X9SRL-F with a single E5-2620 as a future upgrade.
My current setup is in no way comparable to yours; however, I never managed to
use all the headroom on my SOLex-11 VM, even with encryption enabled.
I have VMware Tools installed and use the vmxnet3 drivers/NICs... maybe that's a route to investigate, too.
 
How often should a scrub task be run? This is for a file server which also hosts VMs.
 
I scrub my RAID-Z of 3x 2TB monthly, which is hopefully sufficient.
I would not want to hardcore-stress my disks weekly...
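For reference, scheduling that is a one-liner (the pool name is hypothetical; napp-it's job menu can do the same):

Code:
# root crontab: scrub pool 'tank' at 02:00 on the 1st of every month
0 2 1 * * /usr/sbin/zpool scrub tank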
 
Our NAS at work is a 9-disk RAID-Z2, 14TB, and it scrubs weekly. Scrubs take up to 14 hours now, as it's nearly full. :) It would probably be fine bi-weekly or monthly, but I too am paranoid, as it's our backup of critical data. And again, on Sunday it's not doing anything else, so I let it scrub. I will soon need to upgrade to more and larger drives and will probably go RAID-Z3 in that case. Going RAID-Z3 would make me more comfortable moving to monthly scrubs.
 
I am trying to find ways to increase the performance of my ESXi all-in-one setup.

My pool is composed of 3x 2TB mirrors, with local throughput of around 300 MB/s.

I can get around 100-110 MB/s over CIFS to an SSD in my workstation. Sometimes it drops to 30 for reasons I don't know, but pausing the transfer in Win8 and resuming brings the 110 MB/s back.

Anyway, that's pretty good.

However, sharing within the ESXi host seems slower for no reason. ZFS provides storage via NFS for the VMs, and those can read from their VM disks at around 60 MB/s. I would hope to go over 1 Gbps here, since everything stays inside the same ESXi host.

Everything seems tweaked: the VMs are on sync-disabled NFS and use vmxnet3 NICs.

I wonder if jumbo frames, or IPv6 for the datastore and the whole network, could help?

Also, right now my NFS datastores are on the same VM network as my VMs; maybe adding another NIC to my OI VM and carrying only NFS datastore traffic over that NIC would help?

Any other tips?
 
Sometimes vmxnet3 can be dodgy and give flaky results (I've seen very bursty performance). Try switching to e1000 on the Solaris (whichever version) guest?
 
Dunno if that is going to help; I had e1000 NICs on my Nexenta box and saw a large number of dropped packets in ESXTOP. After switching to OI/Napp-IT with vmxnet3, I didn't see dropped packets in ESXi anymore.
 
The issue wasn't dropped packets, but burstiness and lag. Not saying e1000 will fix it, just a thought.
 
Just a quick update:

I created a new VMkernel port on a vSwitch called "NFS Datastore", added another NIC to my OI VM, and connected that NIC to the newly created VM network. That network is in a different subnet.

I unmounted my ESXi datastore and re-mounted it via the new IP.
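In case anyone wants to repeat this from the ESXi shell, the remount looks roughly like this (datastore label, IP and share path are made up):

Code:
esxcfg-nas -d nfs-datastore                              # drop the old mount
esxcfg-nas -a -o 10.10.10.5 -s /tank/nfs nfs-datastore   # re-add via the storage-network IP
esxcfg-nas -l                                            # verify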

Running the same test in a similar way (all other VMs shut down), from inside the VM CrystalDiskMark showed a pretty big improvement (figures in MB/s):

before:
Seq: R: 110 W: 291
512k-rand: R: 96 W:200
4k: R: 9.9 W: 4.6

now:
Seq: R: 227 W: 275
512k-rand: R: 111 W:276
4k: R: 29 W: 11
 
Okay, I have an all-in-one with two pools: one for mass storage and the other for VMs. The VM pool is made up of 4x 1TB WD Black drives as two sets of mirrors. I get great read performance (275-375 MB/s) but horrible write performance in ESXi VMs. Over SMB it's fine, but VM write traffic slows to a maximum of around 7 MB/s with an average of 3 MB/s, which makes things pretty painful. I have been reading up on it, and everyone says it's likely a sync issue. I tested turning sync off, and now performance is 159-326 MB/s. Is the only safe way around this a ZIL device? I use this server for my accounting server VM (mission critical), but it's backed up regularly. Just how great is the risk?
 
My personal opinion: I would disable sync for the datastore folder and not worry about it. For an all-in-one, if the host crashes or loses power, everything is off the air at the same time, including the guests, so out-of-order disk writes are not such a concern. Also, I snapshot my ESXi folder every night (at midnight, when things are relatively quiet), so if a failure does happen and a guest does somehow get corrupted, worst case I lose 24 hours of work. I'm willing to live with that, since a separate ZIL device needs to be a VERY fast SSD to match the speed of no-sync writes.
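A minimal sketch of both steps (the filesystem name is made up):

Code:
# async NFS writes for the datastore filesystem only
zfs set sync=disabled tank/esxi-datastore
# nightly cold snap, e.g. run from cron at midnight
zfs snapshot tank/esxi-datastore@$(date +%F)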
 
Do you need to shut down the VMs when you take a snapshot of a VM? I currently use the built-in vSphere snapshot feature.
 
I don't generally bother. Remember this is a light-use environment, so by midnight not much is going on, and restoring a running snapshot is exactly equivalent to a power off/on of a real PC. With modern filesystems that's okay, unless very heavy disk I/O is going on. If you are anal, you could script a sequence where you take (via the RCLI) VMware snapshots (hot snaps) of all VMs, do a ZFS snapshot (cold snap) of that datastore, then delete the hot snaps. If the guests have VMware Tools installed, you'd make sure to specify the 'quiesce guest filesystem' option.
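A rough sketch of that sequence from the ESXi shell (the VM ID, dataset name and filer hostname are all placeholders):

Code:
vim-cmd vmsvc/getallvms                                   # look up VM IDs
vim-cmd vmsvc/snapshot.create 42 presnap "" 0 1           # hot snap, quiesced, no memory
ssh root@filer zfs snapshot tank/esxi-datastore@nightly   # cold snap on the NAS
vim-cmd vmsvc/snapshot.removeall 42                       # drop the hot snap again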
 
Hi _Gea ...

I'm currently evaluating a new SE11 / Napp-it box here at work, and I've just run into a bug trying to join it to an AD domain.

Every time I tried to join it, I received the "failed to find any domain controllers" error message.

I spent lots of time trying to troubleshoot the problem online, and - long story short - changing the lmauth_level setting doesn't work through the Napp-it UI on Solaris Express 11, and that's what was causing my problem.

[From a local terminal prompt on the server]

Code:
root@solaris-test:~# sharectl set -p lmauth_level=2 smb
lmauth_level: not defined

In SE 11, the lmauth_level value no longer exists; it has actually been replaced by two different values: "client_lmauth_level" & "server_lmauth_level".

Once I'd updated both of those values, I was able to join the domain without any problems.
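That is, using the same sharectl syntax as above, once per property:

Code:
root@solaris-test:~# sharectl set -p client_lmauth_level=2 smb
root@solaris-test:~# sharectl set -p server_lmauth_level=2 smb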

Just a heads up! ;)

--David
 