OpenSolaris derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

A short test with the Solaris 11.3 Text Installer:
Setup from a USB stick: fails
Setup from a SATA DVD: fails
I have tried different BIOS settings regarding USB 2/3.

In my experience, getting Solaris 11.x to work on Supermicro boards requires resetting the BIOS to defaults and then disabling VT-d and SCU in the BIOS.

Additionally, did you try using IPMI to virtually mount the ISO?
 
No, but when even a boot from a SATA DVD fails, this would not help.

I have never had problems with VT-d in the past. Problems are mostly related to USB 2/3 driver issues (keyboard) or disk drivers. But I only tried the "free" version for developers; with a subscription and support, results may be different.
 
Hey _Gea,
Seems the amp installer is returning a 404 error.


Code:
root@fs1:/root# wget -O - www.napp-it.org/amp | perl
--2016-05-29 20:02:09--  http://www.napp-it.org/amp
Resolving www.napp-it.org (www.napp-it.org)... 188.93.13.227
Connecting to www.napp-it.org (www.napp-it.org)|188.93.13.227|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2016-05-29 20:02:10 ERROR 404: Not Found.
 
You must add a release number, which allows you to use the current or an older release:
napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana, Solaris and Linux : Extensions


current:
Code:
wget -O - www.napp-it.org/amp-15.4.2  | perl
 
Hello, I ended up here while looking for a solution to my problem concerning mDNS under OmniOS r151018.
I updated from r151014 to r151018 (I waited for SMB2) and everything works except Zeroconf.
In other words, my other computers cannot reach my server by typing "Server.local".
For example, I can reach it via the IP, http://192.168.1.5:9091 for TransmissionBT, but not via http://Server.local:9091.
Another example (I'm not sure it is related): my WD TV Live and my Windows 10 machine cannot see "SERVER" among the network servers anymore.

Has this happened to anyone? I tried to diagnose the problem but had no success.

Code:
$ dns-sd -V
Currently running daemon (system service) is version 576.30.4
$ dns-sd -F
Looking for recommended browsing domains:
DATE: ---Wed 08 Jun 2016---
0:13:02.500  ...STARTING...
Timestamp     Recommended Browsing domain
0:13:02.500  Added                          local
^C
$ dns-sd -E
Looking for recommended registration domains:
DATE: ---Wed 08 Jun 2016---
0:10:57.885  ...STARTING...
Timestamp     Recommended Registration domain
0:10:57.885  Added                          local
^C
$ svcs |grep multicast
online         Jun_06   svc:/network/dns/multicast:default
 
You must set netbios_enable to true if you want the server to show up in the network environment.
Then you have the same behaviour as OmniOS < 151014 (napp-it menu Services > SMB > Properties).
Another server with Windows Master Browser functionality is required.

Multicast DNS is mostly needed if you want to announce a share, e.g. on a Mac via Bonjour.

Other option:
If you have a local DNS server, add an entry there for the server, or
add an entry in /etc/hosts on your clients.
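
For reference, a minimal console sketch of the same settings (the hostname and IP are taken from the example above; the kernel SMB server properties are normally set via sharectl):
Code:
# enable NetBIOS announcements (same switch as napp-it Services > SMB > Properties)
sharectl set -p netbios_enable=true smb
sharectl get -p netbios_enable smb

# alternative: static name resolution on a client without mDNS
echo "192.168.1.5   Server" >> /etc/hosts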
 
So the situation is this: I used this guide to build the ZFS NAS in 2012. It worked fine and still does.

I moved two weeks ago and finally got the house wired for Cat5. I was going to restart the NAS, but I seem to have locked myself out of OpenIndiana as my admin login details don't work.

Is there a way to reset the root password somehow?

I am a huge Linux noob.
 
What to Do If You Forgot the Root Password - System Administration Guide: Advanced Administration
Or, as your OS is quite old: reinstall the OS (a current OmniOS stable or the newest OI Hipster dev) and import the pool.
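
A minimal sketch of the reinstall route (the data pool name here is hypothetical; the root password itself is set during the fresh OS installation):
Code:
# after installing a fresh OmniOS/OI, list pools that are visible but not yet imported
zpool import

# import the old data pool; -f may be needed because it was not exported first
zpool import -f tank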
 
Hey, maybe someone can help me with some tuning / solve one problem? I have omnios-r151018-95eaa7e and I'm running an FC target with COMSTAR.

I have a Supermicro SC847E26-RJBOD1 with 43 x Seagate ST4000NM0023 (4 TB SAS drives), connected to two LSI SAS6160 switches, which in turn are connected to a Dell R630 (128 GB RAM) with a SAS 9207-8e host bus adapter, firmware P19. The FC HBA is in target mode and the card model is QLE2562.


ZFS:
Code:
  pool: storage1
state: ONLINE
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        storage1                   ONLINE       0     0     0
          raidz2-0                 ONLINE       0     0     0
            c1t5000C50062F7D58Bd0  ONLINE       0     0     0
            c1t5000C50062F7F1E7d0  ONLINE       0     0     0
            c1t5000C50062F7F517d0  ONLINE       0     0     0
            c1t5000C50062F7FEABd0  ONLINE       0     0     0
            c1t5000C50062F817BBd0  ONLINE       0     0     0
            c1t5000C50062F8380Bd0  ONLINE       0     0     0
          raidz2-1                 ONLINE       0     0     0
            c1t5000C50062F87EB7d0  ONLINE       0     0     0
            c1t5000C50062F8876Bd0  ONLINE       0     0     0
            c1t5000C50062F88BA3d0  ONLINE       0     0     0
            c1t5000C50062F88C4Bd0  ONLINE       0     0     0
            c1t5000C50062F88D8Fd0  ONLINE       0     0     0
            c1t5000C50062F89417d0  ONLINE       0     0     0
          raidz2-2                 ONLINE       0     0     0
            c1t5000C50062F89D47d0  ONLINE       0     0     0
            c1t5000C50062F8AEC7d0  ONLINE       0     0     0
            c1t5000C50062F8E29Fd0  ONLINE       0     0     0
            c1t5000C50062F9711Bd0  ONLINE       0     0     0
            c1t5000C50062F97AB3d0  ONLINE       0     0     0
            c1t5000C50062F997F7d0  ONLINE       0     0     0
          raidz2-3                 ONLINE       0     0     0
            c1t5000C50062F9AB47d0  ONLINE       0     0     0
            c1t5000C50062F9B0E3d0  ONLINE       0     0     0
            c1t5000C50062F9B38Bd0  ONLINE       0     0     0
            c1t5000C50062F9B7D3d0  ONLINE       0     0     0
            c1t5000C50062F9C62Bd0  ONLINE       0     0     0
            c1t5000C50062F9CBF3d0  ONLINE       0     0     0
          raidz2-4                 ONLINE       0     0     0
            c1t5000C50062F9CE97d0  ONLINE       0     0     0
            c1t5000C50062F9D853d0  ONLINE       0     0     0
            c1t5000C50062F9E3FFd0  ONLINE       0     0     0
            c1t5000C50062F9E47Fd0  ONLINE       0     0     0
            c1t5000C50062F9E9DBd0  ONLINE       0     0     0
            c1t5000C50062F9ED7Bd0  ONLINE       0     0     0
          raidz2-5                 ONLINE       0     0     0
            c1t5000C50062F9EEA3d0  ONLINE       0     0     0
            c1t5000C50062F9F443d0  ONLINE       0     0     0
            c1t5000C50062F9F72Bd0  ONLINE       0     0     0
            c1t5000C50062FA0B4Bd0  ONLINE       0     0     0
            c1t5000C50062FA2EDFd0  ONLINE       0     0     0
            c1t5000C50062FA423Fd0  ONLINE       0     0     0
          raidz2-6                 ONLINE       0     0     0
            c1t5000C50062FA8EEFd0  ONLINE       0     0     0
            c1t5000C50062FA9043d0  ONLINE       0     0     0
            c1t5000C50062FA906Fd0  ONLINE       0     0     0
            c1t5000C50062FAA0E3d0  ONLINE       0     0     0
            c1t5000C50062FAA54Fd0  ONLINE       0     0     0
            c1t5000C50062FAA677d0  ONLINE       0     0     0
        spares
          c1t5000C50062FAA777d0    AVAIL

errors: No known data errors


I have two zvols shared out through FC (both are 2 TB). On the other end I have two XenServer 7 hosts with one FreeBSD VM where I'm conducting my disk speed tests. I get speeds of ~300-400 MB/s (with zpool iostat) for a while, and then the speed drops to ~66 MB/s. When that happens, I see that read operations have increased from 0-5 to 100-200.

The command I'm using to test write speed:
Code:
openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > randomfile.bin

No other VMs are started/installed; only one VM is generating I/O.

zpool iostat:
Code:
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage1    1.58T   151T     81  1.89K  5.06M  86.8M
storage1    1.58T   151T    116  1.89K  7.25M  67.1M
storage1    1.58T   151T    193      0  12.1M      0
storage1    1.58T   151T    158  1.87K  9.93M  67.1M
storage1    1.58T   151T    152      0  9.55M      0
storage1    1.58T   151T    166  1.70K  10.3M  65.9M
storage1    1.58T   151T    101    165  6.37M   974K
storage1    1.58T   151T    162  1.80K  10.2M  66.4M
storage1    1.58T   151T    269      0  16.8M      0
storage1    1.58T   151T     76  1.80K  4.81M  66.1M
storage1    1.58T   151T    256  1.39K  16.0M  65.1M



zfs get all:
Code:
storage1/xenserver_lun0                type                          volume                                 -
storage1/xenserver_lun0                creation                      Wed Jun 15  9:52 2016                  -
storage1/xenserver_lun0                used                          2.01T                                  -
storage1/xenserver_lun0                available                     94.3T                                  -
storage1/xenserver_lun0                referenced                    493G                                   -
storage1/xenserver_lun0                compressratio                 1.00x                                  -
storage1/xenserver_lun0                reservation                   none                                   default
storage1/xenserver_lun0                volsize                       2T                                     local
storage1/xenserver_lun0                volblocksize                  64K                                    -
storage1/xenserver_lun0                checksum                      on                                     default
storage1/xenserver_lun0                compression                   lz4                                    inherited from storage1
storage1/xenserver_lun0                readonly                      off                                    default
storage1/xenserver_lun0                copies                        1                                      default
storage1/xenserver_lun0                refreservation                2.01T                                  local
storage1/xenserver_lun0                primarycache                  all                                    default
storage1/xenserver_lun0                secondarycache                all                                    default
storage1/xenserver_lun0                usedbysnapshots               0                                      -
storage1/xenserver_lun0                usedbydataset                 493G                                   -
storage1/xenserver_lun0                usedbychildren                0                                      -
storage1/xenserver_lun0                usedbyrefreservation          1.53T                                  -
storage1/xenserver_lun0                logbias                       latency                                default
storage1/xenserver_lun0                dedup                         off                                    default
storage1/xenserver_lun0                mlslabel                      none                                   default
storage1/xenserver_lun0                sync                          standard                               default
storage1/xenserver_lun0                refcompressratio              1.00x                                  -
storage1/xenserver_lun0                written                       493G                                   -
storage1/xenserver_lun0                logicalused                   494G                                   -
storage1/xenserver_lun0                logicalreferenced             494G                                   -
storage1/xenserver_lun0                snapshot_limit                none                                   default
storage1/xenserver_lun0                snapshot_count                none                                   default
storage1/xenserver_lun0                redundant_metadata            all                                    default

storage1/xenserver_lun1                type                          volume                                 -
storage1/xenserver_lun1                creation                      Wed Jun 15 11:34 2016                  -
storage1/xenserver_lun1                used                          2.01T                                  -
storage1/xenserver_lun1                available                     94.3T                                  -
storage1/xenserver_lun1                referenced                    583G                                   -
storage1/xenserver_lun1                compressratio                 1.00x                                  -
storage1/xenserver_lun1                reservation                   2T                                     local
storage1/xenserver_lun1                volsize                       2T                                     local
storage1/xenserver_lun1                volblocksize                  64K                                    -
storage1/xenserver_lun1                checksum                      on                                     default
storage1/xenserver_lun1                compression                   lz4                                    inherited from storage1
storage1/xenserver_lun1                readonly                      off                                    default
storage1/xenserver_lun1                copies                        1                                      default
storage1/xenserver_lun1                refreservation                2.01T                                  local
storage1/xenserver_lun1                primarycache                  all                                    default
storage1/xenserver_lun1                secondarycache                all                                    default
storage1/xenserver_lun1                usedbysnapshots               0                                      -
storage1/xenserver_lun1                usedbydataset                 583G                                   -
storage1/xenserver_lun1                usedbychildren                0                                      -
storage1/xenserver_lun1                usedbyrefreservation          1.44T                                  -
storage1/xenserver_lun1                logbias                       latency                                default
storage1/xenserver_lun1                dedup                         off                                    default
storage1/xenserver_lun1                mlslabel                      none                                   default
storage1/xenserver_lun1                sync                          standard                               default
storage1/xenserver_lun1                refcompressratio              1.00x                                  -
storage1/xenserver_lun1                written                       583G                                   -
storage1/xenserver_lun1                logicalused                   583G                                   -
storage1/xenserver_lun1                logicalreferenced             583G                                   -
storage1/xenserver_lun1                snapshot_limit                none                                   default
storage1/xenserver_lun1                snapshot_count                none                                   default
storage1/xenserver_lun1                redundant_metadata            all                                    default



I have experienced the same 66 MB/s speed with FreeBSD as well, with compression both off and on. I thought that maybe I wouldn't have this problem with OmniOS, but it's the same problem here. Under FreeBSD I also tried UFS on top of a zvol, mounted it locally, and experienced the same problem.

Under OmniOS I used napp-it; below is the zpool history:
Code:
History for 'storage1':
2016-06-15.09:38:28 zpool create -f storage1 raidz2 c1t5000C50062F7D58Bd0 c1t5000C50062F7F1E7d0 c1t5000C50062F7F517d0 c1t5000C50062F7FEABd0 c1t5000C50062F817BBd0 c1t5000C50062F8380Bd0
2016-06-15.09:38:29 zfs set refreservation=1.40T storage1
2016-06-15.09:40:12 zpool add -f storage1 raidz2 c1t5000C50062F87EB7d0 c1t5000C50062F8876Bd0 c1t5000C50062F88BA3d0 c1t5000C50062F88C4Bd0 c1t5000C50062F88D8Fd0 c1t5000C50062F89417d0
2016-06-15.09:40:36 zpool add -f storage1 raidz2 c1t5000C50062F89D47d0 c1t5000C50062F8AEC7d0 c1t5000C50062F8E29Fd0 c1t5000C50062F9711Bd0 c1t5000C50062F97AB3d0 c1t5000C50062F997F7d0
2016-06-15.09:40:54 zpool add -f storage1 raidz2 c1t5000C50062F9AB47d0 c1t5000C50062F9B0E3d0 c1t5000C50062F9B38Bd0 c1t5000C50062F9B7D3d0 c1t5000C50062F9C62Bd0 c1t5000C50062F9CBF3d0
2016-06-15.09:41:12 zpool add -f storage1 raidz2 c1t5000C50062F9CE97d0 c1t5000C50062F9D853d0 c1t5000C50062F9E3FFd0 c1t5000C50062F9E47Fd0 c1t5000C50062F9E9DBd0 c1t5000C50062F9ED7Bd0
2016-06-15.09:41:38 zpool add -f storage1 raidz2 c1t5000C50062F9EEA3d0 c1t5000C50062F9F443d0 c1t5000C50062F9F72Bd0 c1t5000C50062FA0B4Bd0 c1t5000C50062FA2EDFd0 c1t5000C50062FA423Fd0
2016-06-15.09:41:58 zpool add -f storage1 raidz2 c1t5000C50062FA8EEFd0 c1t5000C50062FA9043d0 c1t5000C50062FA906Fd0 c1t5000C50062FAA0E3d0 c1t5000C50062FAA54Fd0 c1t5000C50062FAA677d0
2016-06-15.09:42:19 zpool add -f storage1 spare c1t5000C50062FAA777d0
2016-06-15.09:52:11 zfs create -V 1073741824000 -b 64KB storage1/xenserver_lun0
2016-06-15.11:29:58 zfs set compression=lz4 storage1
2016-06-15.11:35:00 zfs create -V 1073741824000 -b 64KB storage1/xenserver_lun1
2016-06-15.12:13:02 zfs set volsize=2T storage1/xenserver_lun0
2016-06-15.12:13:18 zfs set volsize=2199023255552 storage1/xenserver_lun0
2016-06-15.12:14:52 zfs set volsize=2T storage1/xenserver_lun1
2016-06-15.12:15:06 zfs set volsize=2199023255552 storage1/xenserver_lun1





So maybe someone has suggestions about what I'm doing wrong? Right now there is no tuning under OmniOS.
 
Just to rule out options:
set compression and sync to disabled on your volumes.
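
For example, from the console on the two zvols above (the same properties can also be set in the napp-it ZFS filesystems menu):
Code:
zfs set compression=off storage1/xenserver_lun0
zfs set sync=disabled   storage1/xenserver_lun0
zfs set compression=off storage1/xenserver_lun1
zfs set sync=disabled   storage1/xenserver_lun1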
 
Nope, that did not help.

Code:
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage1    1.58T   151T     50    561  3.15M  22.0M
storage1    1.58T   151T    267      0  16.7M      0
storage1    1.58T   151T    177  1.41K  11.1M  65.0M
storage1    1.58T   151T    101    476  6.31M  1.89M
storage1    1.58T   151T    244  1.62K  15.3M  65.7M
storage1    1.58T   151T    100    267  6.31M  1.25M
storage1    1.58T   151T    335  1.71K  20.8M  66.2M
storage1    1.58T   151T    148    166  9.30M   970K
storage1    1.58T   151T    234  1.80K  14.7M  66.2M
storage1    1.58T   151T    203     75  12.7M   776K
storage1    1.58T   151T     66  1.86K  4.12M  66.2M
storage1    1.58T   151T    160     76  10.1M   845K
storage1    1.58T   151T    179  1.89K  11.2M  66.7M
storage1    1.58T   151T    186      0  11.7M      0
storage1    1.58T   151T    132  1.82K  8.20M  66.5M
storage1    1.58T   151T    242      0  15.2M      0
storage1    1.58T   151T    107  1.87K  6.74M  66.9M
storage1    1.58T   151T    289      0  18.1M      0


zfs settings:
Code:
storage1/xenserver_lun0  compression           off                    local
storage1/xenserver_lun0  sync                  disabled               local
storage1/xenserver_lun1  compression           off                    local
storage1/xenserver_lun1  sync                  disabled               local
 
Can you try a benchmark tool like filebench (fivestreamread / fivestreamwrite / seqwrite / seqread)
or iozone with a 1g file? (Napp-it menu Pool > Benchmarks)
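
If you prefer the console, a rough iozone run would look something like the sketch below (the options are my assumption; the napp-it benchmark menu uses its own presets):
Code:
# sequential write (-i 0) and read (-i 1) on a 1 GB test file inside the pool
cd /storage1
iozone -s 1g -r 128k -i 0 -i 1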
 
fivestreamread:
Code:
23510: 125.905: Run took 120 seconds...
23510: 125.906: Per-Operation Breakdown
seqread5             605337ops     5044ops/s 5039.5mb/s      0.2ms/op      196us/op-cpu [0ms - 2ms]
seqread4             580567ops     4838ops/s 4833.3mb/s      0.2ms/op      205us/op-cpu [0ms - 1ms]
seqread3             607988ops     5066ops/s 5061.5mb/s      0.2ms/op      195us/op-cpu [0ms - 2ms]
seqread2             579370ops     4828ops/s 4823.3mb/s      0.2ms/op      205us/op-cpu [0ms - 0ms]
seqread1             611596ops     5097ops/s 5091.6mb/s      0.2ms/op      194us/op-cpu [0ms - 1ms]
23510: 125.906:

IO Summary:
2984858 ops, 24873.424 ops/s, (24873/0 r/w), 24849.1mb/s,    210us cpu/op,   0.2ms latency
23510: 125.906: Shutting down processes

fivestreamwrite:
Code:
1837: 126.009: Run took 120 seconds...
1837: 126.010: Per-Operation Breakdown
seqwrite5            57026ops      475ops/s 475.2mb/s      2.1ms/op      285us/op-cpu [0ms - 27ms]
seqwrite4            82483ops      687ops/s 687.3mb/s      1.4ms/op      190us/op-cpu [0ms - 28ms]
seqwrite3            56688ops      472ops/s 472.4mb/s      2.1ms/op      288us/op-cpu [0ms - 28ms]
seqwrite2            82920ops      691ops/s 691.0mb/s      1.4ms/op      188us/op-cpu [0ms - 28ms]
seqwrite1            57612ops      480ops/s 480.1mb/s      2.1ms/op      283us/op-cpu [0ms - 28ms]
1837: 126.010:

IO Summary:
336729 ops, 2806.039 ops/s, (0/2806 r/w), 2806.0mb/s,    731us cpu/op,   1.8ms latency
1837: 126.010: Shutting down processes

seqwrite:
Code:
3923: 2.145: Run took 1 seconds...
3923: 2.145: Per-Operation Breakdown
finish               1ops        1ops/s   0.0mb/s      0.0ms/op        1us/op-cpu [0ms - 0ms]
write-file           1025ops     1025ops/s 1023.9mb/s      0.4ms/op      374us/op-cpu [0ms - 0ms]
3923: 2.145:

IO Summary:
1025 ops, 1024.936 ops/s, (0/1025 r/w), 1023.9mb/s,    885us cpu/op,   0.4ms latency
3923: 2.145: Shutting down processes

seqread:
Code:
 7283: 121.556: Run took 120 seconds...
7283: 121.558: Per-Operation Breakdown
seqread-file         839530ops     6996ops/s 6989.2mb/s      0.1ms/op      141us/op-cpu [0ms - 0ms]
7283: 121.558:

IO Summary:
839530 ops, 6995.997 ops/s, (6996/0 r/w), 6989.2mb/s,    180us cpu/op,   0.1ms latency
7283: 121.558: Shutting down processes
 
Sequential r/w performance of a raidz scales with the number of data disks, while iops scale with the number of vdevs (important for small random r/w actions).
You have 4 data disks in 7 vdevs = 28 data disks. If you count around 150 MB/s that a disk can deliver continuously, you are at a 4000-4500 MB/s theoretical maximum.

If you only check the write values, as the read values are distorted by the ARC cache, you see
1000 MB/s with a single stream write and 2800 MB/s with five streams (around 500 MB/s per stream).

If you count around 150 iops per disk, you are at around 1000 iops as the pool capability.
Seems ok to me.
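
Spelled out, the back-of-the-envelope math:
Code:
# 7 raidz2 vdevs x (6 disks - 2 parity) = 28 data disks
echo $(( 28 * 150 ))   # ~4200 MB/s theoretical sequential maximum
echo $(( 7  * 150 ))   # ~1050 iops (roughly one disk's worth of iops per vdev)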
 
Thanks for the help :)
Those numbers do look OK. The problem is when using volumes: with, say, a volsize of 1 TB, once about 600 GB has been written, it gets slow.

I'll try a thin-provisioned LU now and see if that makes any difference.
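
For reference, a file-backed thin-provisioned LU can be created from the console roughly like this (the path and size are just an example matching the next post; napp-it's Comstar menus do the equivalent):
Code:
# create a sparse, file-backed 1 TB logical unit
touch /storage1/xenserver_lun2
sbdadm create-lu -s 1099511627776 /storage1/xenserver_lun2

# verify, then map it to the FC target with a view as usual
stmfadm list-lu -v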
 
Made a 1 TB thin-provisioned LU. In XenServer I made a 600 GB disk for the VM and wrote it full with "openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > randomfile.bin".

Code:
root@storagepod1:/storage1# ls -lah /storage1/xenserver_lun2
-rw-r--r--+ 1 root root 602G Jun 16 14:05 /storage1/xenserver_lun2

Deleted that randomfile.bin and started "openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > randomfile.bin" again, and now speeds are ~66 MB/s.

Code:
root@storagepod1:/storage1# zpool iostat storage1 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage1    2.43T   150T     65    686  4.12M  29.0M
storage1    2.43T   150T    224      0  28.0M      0
storage1    2.43T   150T    357  1.26K  44.7M  66.4M
storage1    2.43T   150T    312  1.12K  39.1M  50.7M
storage1    2.43T   150T    361    916  45.2M  51.1M
storage1    2.43T   150T    199  1.02K  25.0M  3.43M
storage1    2.43T   150T    177    160  22.1M   354K
storage1    2.43T   150T    248  1.26K  31.0M  66.4M
storage1    2.43T   150T    365  1.04K  45.3M  65.4M
storage1    2.43T   150T    123    792  15.4M  2.15M


Now running "openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > randomfile.bin" directly in OmniOS to see what speeds I finally get.
 
Intel X550 and X710 drivers now available for OmniOS

OmniTI published drivers for the new 10G cards with the Intel X550 chipset (2.5/5/10G) and drivers for the Intel X710 NIC (single or dual 40G QSFP+, or up to 8 x 10G).

The drivers are in OmniOS 151019 bloody after a pkg update, or downloadable:
Listbox • Email Marketing and Discussion Groups

Integration into 151014 and 151018 is coming.
Good news for storage servers (barebone or AiO) with Supermicro X10 SoC boards like my X10SDV-2C-7TP4F.
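
A minimal sketch of pulling them in on a bloody install (assuming the pkg publisher already points at the 151019/bloody repository):
Code:
pkg update        # pulls the new NIC driver packages
init 6            # reboot
dladm show-phys   # the X550/X710 ports should now show up as links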
 
Hey Gea, still loving Napp-it! Just got "Pushover" working and it's awesome.

Small suggestion: I'm on the latest Napp-it but it's still linking to some old documentation. If you click on "Mirror rpool" it gives a link to a Solaris 11.1 article with some outdated instructions. It tells you to install the bootloader, but Oracle has told me this is not correct for Solaris 11.2 and 11.3. There is a newer Oracle doc that you might want to use instead:

Old: Part 8 - Mirroring the ZFS Root Pool | Oracle Community
New: How to Configure a Mirrored Root Pool (SPARC or x86/VTOC) - Managing ZFS File Systems in Oracle® Solaris 11.2
 
Awesome!

Question about alerts. I'm using Pushover and confirmed the test message worked.

But when I added a mirror disk to my rpool, the rpool was of course "degraded" until the resilver completed. But Pushover didn't send a message. Will Pushover send a message when my pools are degraded, or only when Napp-it detects a bad disk through SMART?

I have Disk error, Low capacity, and Active Job error all checked in the Napp-it GUI. (If it matters, it's Solaris 11.3.)
 
Hi,

I have just installed napp-it on OmniOS. I have assigned a static IP following the install guide on the napp-it website. I am only able to ping the server via the IP address and not the host name. It's not even discoverable on my Windows machine. Does anyone have any idea why this may be the case?
 
I found my answer above. Thanks _Gea
 
It depends.
If you just reinstall the OS and import the pool, all data with ZFS properties like shares and permissions is kept.
You lose napp-it settings like jobs and groups, local users, or SMB settings like ID mappings, SMB passwords or SMB groups.

You can redo these settings manually, or you can run a napp-it backup job that saves these settings to your datapool.
You can then restore these settings manually or, with the napp-it Pro/ACL extension, via menu User > Restore settings.
 
:hello:

I'm doing my monthly scrub, and I'm seeing this:

98.7T scanned out of 187T at 4.47K/s, (scan is slow, no estimated time)

Usually when I see this I reboot the server. However, I'm doing other things at the moment, several copy jobs to and from the server, and the speeds are fine (limited by my gigabit network), so I'm wondering what's going on with the scrub. Any ideas?
 
A scrub must traverse all metadata and read all data, so its performance is iops-limited.
If a scrub is slow when it's the only load, you mostly have a bad/weak disk.

With other loads where you do not see a performance problem, a slow scrub is quite OK as it runs with a lower priority.
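
A quick way to check for a weak disk is to watch per-disk service times and error counters while the scrub runs, e.g.:
Code:
# one disk with a much higher asvc_t / %b than its neighbours is suspect
iostat -xn 5

# growing soft/hard/transport error counters also point to a weak disk
iostat -en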
 
appliance maps

I have added a new feature to napp-it: maps (16.04f2, 16.08dev).

You can create up to 9 maps to visualize the location of your disks.
You first define a map (3.5" or 2.5"), the number of rows and columns, NVMe slots, and slots to hide.

In a second step, you assign a disk to a slot, for example a disk to the first slot of a row.
After this you can auto-calculate the next three slots.

With napp-it free, you can use an eval key to build and print out the map (use a screenshot)
and place it on top of the server.

[screenshot: map.png]
 
Bug info for all PoeF encrypted pool users

There is a bug that can lead to a non-importable encrypted pool.
Please update napp-it.

Thanks to Chris, who discovered the bug.
 
Awesome!

Question about alerts. I'm using Pushover and confirmed the test message worked.

But when I added a mirror disk to my rpool, the rpool was of course "degraded" until the resilver completed. But Pushover didn't send a message. Will Pushover send a message when my pools are degraded, or only when Napp-it detects a bad disk through SMART?

I have Disk error, Low capacity, and Active Job error all checked in the Napp-it GUI. (If it matters, it's Solaris 11.3.)

Hey Gea, sorry to double post, do you happen to have an answer to this?
 
Just a heads up: I tested in VirtualBox with Solaris 11.3 and no alert was triggered.

I confirmed that Pushover is working with a test message.
I created a two-disk mirror pool, shut down the system, and removed one virtual disk. After rebooting, zpool status shows "DEGRADED" and C2t1d0 as UNAVAIL. Pushover did not send any message.
 
The push alert script /var/web-gui/data/napp-it/zfsos/_lib/scripts/job-push.pl is nearly identical to the email alert script.
- Does the email alert work?
- Have you enabled the auto service?

The script sends only one disk failure message per day.
Delete /var/web-gui/_log/tmp/push-alert.log to re-allow an alert.
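
For example, from the console:
Code:
rm /var/web-gui/_log/tmp/push-alert.log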
 
I don't have an email server to test with, unfortunately.

The auto service is enabled to run every 5 minutes.

/var/web-gui/_log/tmp/ is empty. I ran auto.pl from the command line and still nothing. Auto.log is empty.

Is there anything else I can try?
 
If you want to test manually from the console,
you must run the push script like:
Code:
perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/job-push.pl  run_jobid

Replace jobid with your job id.

If you want to debug the script and output a variable during the run, you can use
&mess($var)

Basic scripting knowledge is required, ideally Perl (or PHP).
You can edit the script e.g. via WinSCP (it's a text file).
 
Noob question warning....:)
I have finally decided to proceed and reinstall my home server as a Napp-it AIO.
On my test server I have version 15d running OmniOS v11 r151014. I want to update this to the latest versions, but I am unsure how to proceed...
Do I first update OmniOS from the command line and then Napp-it from the web GUI?
Or do I simply download the new VM AIO package and import the pool?
If the latter, do I need to export the pool first?
My lab server is set up as a typical Napp-it AIO, having only ESXi and Napp-it on the local datastore and then serving the Napp-it datastore via NFS to the other VMs.
TIA
 