OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Why does napp-it not register the hostname with AD when you join?

I'm not able to connect to my NAS via the hostname, only IP.

Also, any easy way to understand how to add SMB share permissions? I can see the share as a guest, but I'd like to remove guest access and only allow the administrator (which I have set up as a proxy to root) and a specific group like "staff".
 
The DNS should be updated when you join; otherwise check the DNS entry or add it manually.
A different thing is when you want the server to appear under "Workgroup".

On OmniOS 151016 and newer this is disabled by default. You can enable it in menu
Services > SMB > Properties, where you must set netbios_enable to true.

Share permissions are something that Windows offers as well, meaning you can not only set permissions on files or folders but additionally on the share itself to restrict access without modifying file and folder ACLs.

If you only want to allow the user administrator, remove all ACL entries besides administrator and a staff=modify (or full) entry. Other users may still see the share but cannot access it. Additionally, you can hide files and folders where a user has no access with the ABE (access based enumeration) share option.
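The same steps can be sketched on the CLI (napp-it drives this through its menus; the pool, share name, user and group below are made up for illustration, and exact syntax may differ slightly between illumos releases):

# enable NetBIOS so the server shows up under "Workgroup" (disabled by default on 151016+)
sharectl set -p netbios_enable=true smb

# share a filesystem via SMB with access based enumeration (ABE) enabled
zfs set sharesmb=name=data,abe=true tank/data

# reset the folder ACL: owner and administrator full, staff modify, nobody else
/usr/bin/chmod A=owner@:full_set:fd:allow,user:administrator:full_set:fd:allow,group:staff:modify_set:fd:allow /tank/data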
 
Hello Gea,

I'm currently testing PCIe SSDs (2x 2.5-inch Intel DC P3600 1.6TB). The OmniOS installer (install to bare metal) seemed to hang while the Intels were connected; solved this by ejecting the PCIe SSDs.

The SSDs are recognized and extremely fast =)

The problem I'm facing now is that hotplugging/hotswapping does not seem to work on the PCIe SSDs. Did you get this working when you did your tests with PCIe NVMe storage?

Thanks in advance
 
I tried the P750 and P3600 (PCI-e adapter).
They worked in a barebone setup but are hanging under ESXi.

There is currently a lot of work on the NVMe driver @Illumos.
A newer driver release is included in the newest OmniOS bloody 151019.

Next stable with newest NVMe driver (and support for Linux zones as a beta) is in OmniOS 151020 -
coming very soon (expected October 2016)
 
Hi Gea!

I've been using napp-it with OmniOS under ESXi for a while now, and it works great, thank you!
Now I have a slight problem with custom jobs. Is there a way to set the locale for jobs?

I have a script that runs an Amazon S3 sync for a set of document folders, where file and folder names are UTF-8 with Norwegian characters. I'm using the aws cli tool.
I have this working from an SSH shell where my locale is set to nb_NO.UTF-8, but it fails when the same script is run as a job. Running locale from the little command shell in the web pages shows that the locale (for jobs?) is indeed C.
The sync job fails on filenames with said Norwegian characters. Is there any way to fix this?

Regards,
Wish
 
There is an update on OmniOS 151019 bloody

From the announcement:
"I hope this is the bloody that will form the r151020 branch. Please note that because of the sheer volume of upstream illumos-gate activity, we froze pulls from -gate early this time, early enough to not include BSD Loader and Python 2.7 support. A few cherrypicks may happen pre-020, and a couple have already happened:

- illumos 4498, update ACPI to version 6.x
- A series of nvme and blkdev fixes culminating in illumos 7382, basic NVMe 1.1 support.

uname -v says omnios-master-2f9273c, corresponding to the illumos-omnios:master commit, and omnios-build is at commit 8d79e11. This is likely to be where we branch r151020."


The important part is the update of the NVMe drivers.
The next stable, 151020, is then to be expected soon.
 
#!/bin/bash
# source the root profile so the job gets the same environment (incl. LANG/LC_ALL) as an interactive shell
source /root/.profile
some_other_cmd

That worked, thank you! :)
Funny thing though, I had to add the locale environment settings to .profile, as they were commented out.
My PuTTY shell obviously got the correct locale from somewhere else.
Haven't had the time to look closer, but it works now!
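For reference, a minimal sketch of the locale lines to uncomment or add in /root/.profile (assuming the nb_NO.UTF-8 locale mentioned above is installed on the system):

# make non-interactive napp-it jobs use a UTF-8 locale
LANG=nb_NO.UTF-8
LC_ALL=nb_NO.UTF-8
export LANG LC_ALL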
 
Hello,

I've noticed that napp-it's agent_stat.pl (prstat) balloons to gigabytes of process memory over time. That agent_stat function calls itself recursively in perpetuity, unlike the other perpetual agents, which simply loop with while (1).
 
First time user and wanted to say thanks.

Poking around through the menus I noticed a little typo:
[attached screenshot: upload_2016-10-31_15-45-52.png]
 
Got everything together for now, but I have a newb question. When I'm looking at the pool versus file system sizing, why are there additional capacity losses beyond the ten percent provisioning? I'd expect to see 16TB and 2.13TB, but am instead seeing 14.4TB and 1.92TB for the respective pools.
[attached screenshot: upload_2016-10-31_18-59-23.png]
 
Drink more coffee, then try your calculator again! Hint: 16 raw - 1.6 refres = 14.4 available.

Please see the attached image. When I look at the raw numbers from the pool screen I see 17.6TB and 2.3TB usable. With 10% FRES provisioning that should be 16TB and 2.13TB available. Why, then, is there an additional 10% penalty beyond that when I created the ZFS filesystem?
 

[Attachment: Napp-It Pools.PNG]
Only a few hints
- 16T in ZFS means 16 TiB, not 16 TB (base 1024 vs. base 1000), so the displayed value is lower than the disks' advertised capacity
- zpool shows the summed capacity of all disks; you must subtract the disks used for redundancy to get the usable capacity
- there is an additional small internal reservation within ZFS so the copy-on-write filesystem does not fail when completely filled
- napp-it adds a 10% reservation per default to keep performance high at a high fill rate
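As a quick back-of-the-envelope check of the base-1000 vs. base-1024 point (the numbers are illustrative, not the exact figures from the screenshot):

# 16 TB as printed on a disk label, expressed in TiB as zpool/zfs report it
echo 'scale=2; 16 * 10^12 / 2^40' | bc    # -> 14.55
# minus napp-it's default 10% filesystem reservation
echo 'scale=2; 14.55 * 0.9' | bc          # -> 13.09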
 
Is anyone having problems with NVMe devices (Intel 750 PCIe 400GB/1.2TB) with the latest updates from OmniOS?
r151018 (258cc99) and r151020 (b5b8c75).
We are getting a lot of (H) errors on those devices.
When I revert back to r151018 (95eaa7e) all is working fine.
I guess it has something to do with "NVMe support for some NVMe 1.1 devices".
I also tried the latest firmware from Intel, but then the whole system slows down (about 30-40% performance degradation).
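For anyone debugging something similar, these are the standard illumos tools behind those error counters (generic commands, not a napp-it-specific procedure):

iostat -En        # per-device soft/hard/transport error counters
fmdump -eV        # detailed FMA error telemetry
fmadm faulty      # faults FMA has already diagnosed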
 
You should write a mail to omnios-discuss about that
http://lists.omniti.com/mailman/listinfo/omnios-discuss
 
info

OmniOS 151020 stable is out
https://omnios.omniti.com/wiki.php/ReleaseNotes/r151020

New:

NVMe 1.1 driver
Linux LX container support (from SmartOS, beta)

USB3 support for Illumos (OmniOS, OI, SmartOS) on the way
https://www.mail-archive.com/[email protected]/msg07158.html

When updating, take care regarding the move of illumos from SunSSH to OpenSSH
https://omnios.omniti.com/wiki.php/Upgrade_to_r151020


napp-it 16.11.dev
From this release on I will remove support for older browsers to simplify the GUI code.
First step: the menu is CSS-only with simple ul lists (relevant if you intend to modify the UI).
http://napp-it.org/downloads/changelog_en.html
 
Solaris being canned, at least 50% of teams to be RIF'd in short term - post regarding Oracle Corp. layoffs

Sun was the most innovative IT firm years ago, with the best engineers and concepts.
It seems that many of their superior ideas survive only because they were open-sourced by Sun, like ZFS, dtrace, zones, the Linux container concept or service management, and were adopted by others. The Solaris OS with included projects like the ZFS-integrated NFS/SMB/iSCSI services, Comstar or Crossbow may also survive only because of the open-source fork illumos, the successor of OpenSolaris, which may now become the "real" Solaris.

Only rumours at the moment, but Oracle seems to be the worst thing that could happen to Sun and the others it bought. More or less the Adobe project: buy it and stop it as a possible competitor??
 
Gea,

Tiny problem: my napp-it "Server overview" shows "zones service : disabled" even though the two services are enabled and running.
 
Hi,
At work we have two ZFS all-in-one setups, one for production, one for disaster recovery at a remote site. The production one is on a Dell R720xd with two Xeon E5-2620 CPUs, 96GB ECC memory, and 8 nearline SAS 2TB disks running under ESXi 5.1. There is an OpenIndiana VM with the disks passed through (on an LSI 2008 controller in IT mode). The other VMs are a mix of Windows 2008 servers and Linux servers. These servers have been flawless for a few years now and we have been totally pleased with ZFS and napp-it.

I would like to build an all-in-one for home use using ESXi 6.5, OmniOS, and the latest napp-it. I would expect to have a few Linux VMs (say four or five -- I run my own mail server, OpenVPN server, firewall, etc.) and a few Windows VMs. I would like to have a home server that uses one or two Xeons and about 128GB of memory. As far as disk use for ZFS, I was thinking of starting with as few as four 4TB SATA drives (in two mirrored vdevs). I would love to hear hardware recommendations for a system like this. I have read older articles about the Supermicro X10SDV-TLN4F, but wonder if there is something better, since that has very little expandability. I would like to keep power consumption and noise down to reasonable levels. Cost? I am open at this point, but was hoping for something under $3k.

Thanks in advance.
 
Probably best to post this as its own thread.

That said - I'd do (for $3k):

Intel D1541 - 581 USD
4x 32GB DDR4 - roughly 1k
SAS3 IT controller like an LSI 31xx - can be had on eBay for 250+
case, power supply, drives etc. - use as many SSDs as you can afford and use the spindles as cold storage
That gets you to about 2k and you would have one of the quietest systems out there - not the fastest, but not bad either.

Another route is a dual 2670 system from Natex.us; they have a bundle for about 500 bucks that comes with two 2670s, 128GB DDR3, the motherboard and HSFs, I think.
 
Thanks for your reply, gigatexal. If this is inappropriate for this thread, feel free to have a mod delete it. I posted it here due to a) using SFX, Omnios, and Napp-it, and b) the wealth of experience of fellow all-in-one devotees.
 
The X10SDV low-power line is available from a two-core up to a 16-core system, with 10G onboard and optionally a 16-port LSI HBA, max 128G RAM.

I would suggest the X10SDV-7TP4F (8 core, 10G, LSI HBA, 64GB RAM). For the VMs I would also use an SSD-only pool, preferably enterprise-class Samsung SM863 due to the powerloss protection and high write iops (single mirror or Raid-Z, e.g. 2 x 960GB or 3 x 480GB), with an additional mirror for general use from 4-8 TB disks (e.g. HGST He).

Some more build examples
http://www.napp-it.org/doc/downloads/napp-it_build_examples.pdf
 
I defer to _Gea; his build-out makes a lot more sense than mine.
 
Thanks to both _Gea and you -- I have a lot of info to get started on. The Supermicro X10SDV-7TP4F is a pretty interesting beast, including the Xeon CPU, 10Gb Ethernet, and 16-port LSI 2116. I hadn't stumbled onto this in my previous research. Looks like it even has a graphics controller, unlike some of the other Supermicro boards.
 
The onboard graphics comes with the IPMI management capability (remote console via browser).
 
Also, using AWS as a backup mechanism with OmniOS, would you mirror your AWS drives or is that overkill?
 
I own the 1541. I've had it since release, along with the 1540, and they are absolutely fantastic. I have 128GB and run ESXi flawlessly. If you don't need VMs and just want a pure storage solution, the recommended board with the built-in controller is a sweet solution. You need to consider whether you need the SFP port. For my needs I don't need SFP, and I value the extra clock speed of the 1541. I use several VMs that are not idle.

Also, you can save some bucks on that controller by getting it directly from Wiredzone. I don't think it will be more than 200. I went with the AOC-S2308L-L8e only because driver support at that time seemed superior, but right now they are probably similar. The 2308 is also priced better.

For an SLOG you can get a near-new 400GB S3710 right now on eBay for about $140, which is hard to beat.

As for the 2670s - they are solid. I have two running on an X9DAE and use it for my desktop. I would not use it for storage because it eats electricity :)
 
What does one get with the napp-it licensed send/receive over the usual ZFS send/receive?

napp-it uses the normal ZFS send -> receive mechanism.
It adds comfort regarding backup management, easy handling, and advanced snap management on the target side, with remote management of an appliance group so a centralized backup machine can serve many source servers. The zfs send datastream is routed over a buffered netcat connection, which makes it very fast, up to wirespeed or the pool limit (mainly limited by pool iops).
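The underlying idea can be sketched with the plain CLI tools (a generic zfs send over a buffered netcat connection, not napp-it's actual job code; hostnames, ports, snapshot and pool names are made up, mbuffer must be installed separately, and netcat flag syntax varies between builds):

# on the receiving box: listen, buffer, and write the stream into the backup pool
nc -l 9090 | mbuffer -s 128k -m 512M | zfs receive -F backup/data

# on the sending box: pipe an incremental stream through a buffer to the receiver
zfs send -i tank/data@snap1 tank/data@snap2 | mbuffer -s 128k -m 512M | nc receiver 9090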
 
Has anyone gotten OmniOS/napp-it to work with an Intel X520-DA2 10Gb card? I'm trying to rebuild my home storage and figured this was as good a time as any to move back to an AIO setup but leave an option for adding additional VMware hosts. I've got 3 Dell R710's, with only one having actually been used for the last 6 months.

My thought was to set up one R710 with ESXi (dual L5630 with 64GB RAM) and run the majority of my VMs on that, including the storage server (currently a separate system). I'm trying to set up the storage VM using the napp-it AIO VM template, with an X520-DA2 and a 4-port external LSI HBA passed to it. The LSI card will provide the connections for the storage disks since they are 3.5" while the R710 only supports 2.5". I have two additional X520-DA1 cards for the other two VMware hosts to direct attach (trying to avoid needing a 10Gb switch with 4 SFP+ ports).

The issue is that as soon as I pass the X520 card through, the VM will start up and run for about 5 minutes.. then just power off. If I just pass the LSI HBA through or don't pass anything through, no issues and the VM will stay up and running. If I only pass the X520 and no HBA through, same result of power off after about 5 minutes.

Just for grins I've passed the X520-DA2 card through to a Windows 10 VM on the same host without any issues, though I need to set up a VM on another host with a DA1 card passed through to fully test it.. but at least it detects the hardware and doesn't crash.

I've been tinkering with this on and off for some time, but with Christmas here and some PTO being spent just hanging around the house, I would love to get this working and move forward with my storage migration. Worst case, I give up and move back to 4Gb fiber (what I'm currently using for my hosts to connect to the dedicated storage box), but since I have the hardware I kinda wanted 10Gb working :p
 
The X520 is a little bit older but supported. I use an X520-DA1 in my test machine in a barebone setup, and I have not heard about problems in AiO. But pass-through can always be a different thing, as this is very hardware sensitive.

What I would do:
- update OmniOS to 151020. The ixgbe driver since 151018 is a new release that also supports the newer X/XL710 chips from Intel.
- update ESXi to the newest 6.5 (attention: HTML5 web client only, the Windows vSphere client is EoL).

If the problem is not fixed with a newer OS base, then
- use the X520 in ESXi directly connected to the vswitch
- use vmxnet3 vnics in the guests incl. OmniOS

This gives you 10G to all guests and high speed in software between ESXi and storage over NFS.
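If you go the vmxnet3 route, a minimal sketch of bringing the vnic up inside OmniOS (standard dladm/ipadm commands; the interface name vmxnet3s0 is what the driver typically creates, so adjust to whatever dladm actually shows):

dladm show-link                           # confirm the vmxnet3 link is visible
ipadm create-if vmxnet3s0                 # create the IP interface
ipadm create-addr -T dhcp vmxnet3s0/v4    # or: -T static -a 192.168.1.10/24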
 
Gea,

Any thoughts on performing a ZFS send directly to a file and sending it up into the cloud via something like rclone? Basically I want to send all my snaps in file format as a backup. I suppose the risk is that if one of the snaps is corrupted you are basically screwed.
 
You can send a ZFS filesystem snap to a file and back up the file. But yes, any data corruption in that file cannot be repaired, and you then cannot restore the filesystem from that file.

If this is critical, build a ZFS pool from a Raid-Z(1-3) on files. This would allow one or more files to become corrupt. If you encrypt these files with lofiadm, you can even use encrypted files to back up sensitive data to insecure places/cloud etc.
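A minimal sketch of the file-backed Raid-Z idea (sizes, paths and the pool name are made up; the lofiadm encryption step is optional and prompts for a passphrase):

# create the backing files (here 3 x 100 GB)
mkfile 100g /backup/f1 /backup/f2 /backup/f3

# optional: attach them as encrypted lofi devices (repeat for f2, f3 -> /dev/lofi/1..3)
lofiadm -a /backup/f1 -c aes-256-cbc

# build a raidz pool on the (lofi) devices; any single backing file can then be lost or corrupted
zpool create filepool raidz /dev/lofi/1 /dev/lofi/2 /dev/lofi/3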
 
So does this mean that if backing up to AWS, you would use at least 2 drives to store your data?
 
What do you mean by building a ZFS pool on a Raid-Z(1-3) of files?

My use case is that I'm trying to figure out a cost-effective way of backing up some pools, and I would like the backups to be incremental with the usual ZFS snaps.

Just trying to figure out the best way to store them, e.g. a full-blown server on EC2 for the important stuff, or just copying the data up to a cloud drive.
 