OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

How many people have successfully joined their OI to a win2k8r2 domain? So far it has only failed for me..

Are there settings that must be made on the domain? Such as compatibility mode etc.

Also, assuming there is an OI root user with access to everything: if OI is joined to a domain and that domain is lost, all files should still be accessible through the local root, right?

Thanks :)

- Have you tried to join via napp-it menu Service - SMB - Active Directory, or manually like
https://blogs.oracle.com/timthomas/entry/configuring_the_opensolaris_cifs_server

- You do not need to modify AD for CIFS

- root always has access
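
If the menu join fails, the manual route boils down to roughly this (a sketch only; the domain name and admin account below are placeholders, and DNS/time on the OI box must already match the domain controller):

Code:
svcadm enable -r smb/server
smbadm join -u Administrator mydomain.local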
 
I just set up LACP on an HP 1810G.

I had to svcadm disable svc:/network/physical:nwam and enable physical:default.

result
Your current settings:
online 22:30:49 svc:/network/physical:default
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
aggr0: flags=1000843 mtu 1500 index 2
inet 192.168.1.126 netmask ffffff00 broadcast 192.168.1.255
ether 38:60:77:55:e9:3
lo0: flags=2002000849 mtu 8252 index 1
inet6 ::1/128
aggr0: flags=20002000840 mtu 1500 index 2
inet6 ::/0
ether 38:60:77:55:e9:3

Is this correct? I cannot get internet on my OI (because it's not finding the gateway (router))? I don't know how to point aggr0 at it.

I set a static IP on the OI for aggr0. Performance isn't that good either =(

before, on NAS performance tester 1.4: write/read = 110 MB/s

now it's: write = 109 MB/s, read = 92 MB/s
 
I'm also having a problem with daily alarms. I've tried recreating the alarm job, and updating napp-it and then recreating the job. Still, I keep getting this email every day +15 min:

Alert/ Error on tankki from :

-disk errors: none

------------------------------
zpool list
NAME SIZE ALLOC FREE EXPANDSZ CAP DEDUP HEALTH ALTROOT
rpool 59.5G 20.0G 39.5G - 33% 1.00x ONLINE -
tankki 9.06T 4.49T 4.57T - 49% 1.00x ONLINE -

In an older napp-it version I remember I once had to change some 1 to 0 in some script. Tried looking for it now, but didn't find it. I'm now running napp-it v0.8k.

Any clues how to fix this?
 
I just set up LACP on an HP 1810G.

I had to svcadm disable svc:/network/physical:nwam and enable physical:default.

result


Is this correct? I cannot get internet on my OI (because it's not finding the gateway (router))? I don't know how to point aggr0 at it.

I set a static IP on the OI for aggr0. Performance isn't that good either =(

before, on NAS performance tester 1.4: write/read = 110 MB/s

now it's: write = 109 MB/s, read = 92 MB/s

Basic
http://blog.allanglesit.com/2011/03/solaris-11-network-configuration-basics/

Advanced
http://blog.allanglesit.com/2011/03/solaris-11-network-configuration-advanced/
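
If DHCP keeps losing the gateway, a static setup on OI looks roughly like this (a sketch using the addresses from your own output; adjust to your network):

Code:
ipadm delete-addr aggr0/dhcp
ipadm create-addr -T static -a 192.168.1.126/24 aggr0/v4
route -p add default 192.168.1.1
echo "nameserver 192.168.1.1" > /etc/resolv.conf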

:)

 

Nice site you got there

I tried to do:

Code:
add net default: gateway 192.168.1.1
add persistent net default: gateway 192.168.1.1
but I'm getting:
Code:
persistent: bad value

Is the 192.168.1.0 route causing the problem? On OI I still cannot access any websites.

Code:
root@openindiana:~# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
aggr0/dhcp        dhcp     ok           192.168.1.126/24
lo0/v6            static   ok           ::1/128
root@openindiana:~# netstat -r

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              192.168.1.1          UG        1          0 aggr0
openindiana          openindiana          UH        2        158 lo0
192.168.1.0          192.168.1.126        U         4    2585198 aggr0

Routing Table: IPv6
  Destination/Mask            Gateway                   Flags Ref   Use    If
--------------------------- --------------------------- ----- --- ------- -----
openindiana                 openindiana                 UH      2      12 lo0

Code:
root@openindiana:~# dig www.google.com

; <<>> DiG 9.6-ESV-R7-P1 <<>> www.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39141
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.google.com.                        IN      A

;; ANSWER SECTION:
www.google.com.         43199   IN      CNAME   www.l.google.com.
www.l.google.com.       299     IN      A       74.125.226.49
www.l.google.com.       299     IN      A       74.125.226.50
www.l.google.com.       299     IN      A       74.125.226.48
www.l.google.com.       299     IN      A       74.125.226.52
www.l.google.com.       299     IN      A       74.125.226.51

;; Query time: 171 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; WHEN: Tue Aug 21 03:14:52 2012
;; MSG SIZE  rcvd: 132

thanks

-edit-

after restarting the NAS this morning, it's working =) Maybe the DHCP lease got renewed. Thanks for your help Stanza33
 
Silly question, probably already answered in the thread, but my searching skills are poor.
So why is there no way via napp-it to share the whole root zpool without needing to make datasets [ZFS folders] on it?
And why are other options besides CIFS sharing, like dd bench, not accessible for the root pool?
Or am I generally missing something?
 
Silly question, probably already answered in the thread, but my searching skills are poor.
So why is there no way via napp-it to share the whole root zpool without needing to make datasets [ZFS folders] on it?
And why are other options besides CIFS sharing, like dd bench, not accessible for the root pool?
Or am I generally missing something?

About pools in general:
ZFS was designed with some handling principles in mind:
- a pool is a container for datasets (ZFS folders/partitions);
each dataset can grow up to pool size. Capacity can be limited with quotas and
can be ensured with reservations - independently from other datasets.

-> this is only possible if you do not use the pool itself to store data,
so create at least one dataset and use that.
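
For example (a minimal sketch; pool and dataset names are placeholders):

Code:
zfs create tank1/data
zfs set quota=2T tank1/data           # dataset can never grow beyond 2 TB
zfs set reservation=500G tank1/data   # 500 GB is guaranteed to this dataset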

Second, if you share a parent ZFS via CIFS, you cannot access the datasets below it via CIFS.
Reason: datasets may have different ZFS properties.

About the root pool:
ZFS was designed to have a separate root pool for the OS only, not used for data.
It also has a different snapshot mechanism (boot environments), so sharing is not offered.

You can do a manual dd bench, but why?
Performance does not really matter there (apart from USB sticks, which are always too slow).
 
So why is there no way via napp-it to share the whole root zpool without needing to make datasets [ZFS folders] on it? And why are other options besides CIFS sharing, like dd bench, not accessible for the root pool?

The root 'rpool' is for the OS. Generally it's a "very good idea" to run the OS separately from the data being served, for lots of good reasons. ZFS assumes this from the start.

The better question is: why would you want to share it?

Is this because you've only got a limited number of disk spindles from which to run everything? If that's the situation then you might want to address that first.
 
I'm also having a problem with daily alarms. I've tried recreating the alarm job, and updating napp-it and then recreating the job. Still, I keep getting this email every day +15 min:

Alert/ Error on tankki from :

-disk errors: none

------------------------------
zpool list
NAME SIZE ALLOC FREE EXPANDSZ CAP DEDUP HEALTH ALTROOT
rpool 59.5G 20.0G 39.5G - 33% 1.00x ONLINE -
tankki 9.06T 4.49T 4.57T - 49% 1.00x ONLINE -

In an older napp-it version I remember I once had to change some 1 to 0 in some script. Tried looking for it now, but didn't find it. I'm now running napp-it v0.8k.

Any clues how to fix this?

There was a minor bug in newly created alert jobs.
1. Update to 0.8k2
and recreate the alert job.

2. The script that executes the alert is in /var/web-gui/data/napp-it/_log/jobs/,
named "id".pl (where id is the job id). You can check the actions there.
 
The root 'rpool' is for the OS. Generally it's a "very good idea" to run the OS separately from the data being served, for lots of good reasons. ZFS assumes this from the start.

The better question is: why would you want to share it?

Is this because you've only got a limited number of disk spindles from which to run everything? If that's the situation then you might want to address that first.

My bad!!!!
I didn't mean the root pool where the OS resides, but some other pool.
Here is an example:
zpool create tank1 c5t0d0
zfs set share=name=tank1,path=/tank1,prot=smb tank1
zfs set sharesmb=on tank1
smbadm enable-user dedobot
chown dedobot /tank1

copying files :
tank1.jpg

df output
tank2.jpg

napp-it
tank3.jpg


Now I understand this is optional and a bit twisted, and that is the reason it's not included in napp-it.
Nice day!
 
Here is example :
zpool create tank1 c5t0d0
zfs set share=name=tank1,path=/tank1,prot=smb tank1
zfs set sharesmb=on tank1
smbadm enable-user dedobot
chown dedobot /tank1

But you're still not explaining why you want to create an SMB share of the pool's root instead of a folder.
 
I haven't found a definitive answer for myself on how well the NTFS user rights stick with a pool when it gets moved from one OI machine with napp-it to a 2nd OI VM, so my plan is this:
- install 2 OI machines
- get it into the correct workgroup
- create local users on the OI machine
- create a test ZFS folder and share it
- set NTFS permissions through Windows
- export the pool
- power up the 2nd OI machine
- get it into the same workgroup
- create the same local users with the same passwords
- import the pool

connect and see what happens :) (rough CLI sketch below)

If I missed a step please let me know
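
The export/import part from the CLI would roughly be (pool name is a placeholder; napp-it should offer the same under its Pools menu):

Code:
# on the first OI box
zpool export testpool

# on the second OI box, after moving the disks
zpool import             # list importable pools
zpool import testpool    # add -f if it complains about another host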
 
...I just wanted to create a new ZFS folder and noticed that the encryption option in napp-it always shows as "not available" :confused:

I am running napp-it 0.8k on Solaris 11 Express,
and the ZFS version shows as v31 for all pools.
Existing encrypted folders are running fine....

I also came across another problem.
I created a second, basic pool out of a spare disk.
Creating ZFS folders with the same name as already used in another pool is possible, but
SMB full-guest access cannot be applied to such a ZFS folder.
Is this an SMB limitation? If I choose a system-wide new name, everything works as expected.

Many thanks in advance for help, hints or comments.

regards,
Hominidae
 
Yo,

Could use some help!
I needed an SSD log device for testing in another computer, so I exported the pool and re-imported it without its SSD log device. After that I removed the SSD from the array.
Now ZFS tells me the pool is degraded and I can't access it. I tried the zpool clear command and the pool stopped what I guess was rebuilding, but it still remains in a degraded state.
How can I get access back to the data on the pool without having to reinstall the SSD? My intent is to later put in a bigger SSD than the one I removed.

TY
 
pool: Qmedia1
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://illumos.org/msg/ZFS-8000-2Q
scan: scrub repaired 0 in 11h28m with 0 errors on Tue Aug 7 21:17:55 2012
config:

NAME STATE READ WRITE CKSUM
Qmedia1 DEGRADED 0 0 0
raidz1-0 ONLINE 0 0 0
c2t5000C50037A21444d0 ONLINE 0 0 0
c2t5000C50037FA97BFd0 ONLINE 0 0 0
c2t50024E9002CECC5Ad0 ONLINE 0 0 0
c2t50024E9002CECC6Dd0 ONLINE 0 0 0
c2t50024E900325F15Fd0 ONLINE 0 0 0
c2t50024E9004569E67d0 ONLINE 0 0 0
c2t50024E920498EEB8d0 ONLINE 0 0 0
c2t50024E9204A79F48d0 ONLINE 0 0 0
logs
c4t1d0 UNAVAIL 0 0 0 cannot open

I did, but it still remains degraded and unavailable!

EDIT: Found it, needed a reboot! So will it stay in degraded mode until the next scrub?
 
I did, but it still remains degraded and unavailable!

EDIT: Found it, needed a reboot! So will it stay in degraded mode until the next scrub?

It will stay degraded until you remove or replace the missing log device.
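
A rough sketch, using the device name from your zpool status above (double-check with zpool status afterwards):

Code:
zpool remove Qmedia1 c4t1d0
zpool status Qmedia1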
 
...I just wanted to create a new ZFS folder and noticed that the encryption option in napp-it always shows as "not available" :confused:

I am running napp-it 0.8k on Solaris 11 Express,
and the ZFS version shows as v31 for all pools.
Existing encrypted folders are running fine....

I also came across another problem.
I created a second, basic pool out of a spare disk.
Creating ZFS folders with the same name as already used in another pool is possible, but
SMB full-guest access cannot be applied to such a ZFS folder.
Is this an SMB limitation? If I choose a system-wide new name, everything works as expected.

Many thanks in advance for help, hints or comments.

regards,
Hominidae

I cannot check (S11 is ok, but I do not currently have a working S11 Express).
Optionally, create it manually via the CLI.

I would always keep dataset and share names unique.
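
For the CLI route on Solaris 11 Express, creating an encrypted dataset is roughly (pool/dataset name is a placeholder; you are prompted for a passphrase):

Code:
zfs create -o encryption=on tank/secure
zfs get encryption,keysource tank/secure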
 
I cannot check (S11 is ok, but I do not currently have a working S11 Express).
Optionally, create it manually via the CLI.

I would always keep dataset and share names unique.
Thanks _Gea for your reply.

I kept S11 Express because of working PM.
The next upgrade of that box is due at YE... I'll have to cope with the CLI until then.
Maybe I'll be lucky and OI will introduce its own ZFS encryption feature in the meantime.
 
Thanks _Gea for your reply.

I kept S11 Express because of working PM.
The next upgrade of that box is due at YE... I'll have to cope with the CLI until then.
Maybe I'll be lucky and OI will introduce its own ZFS encryption feature in the meantime.

You may
- edit the "create ZFS folder" script (replace pool 30 with 28) for the encryption option
- use lofi-encrypted pools on Solaris/OI/Illumian;
they are about 20% slower but very flexible (you can back up encrypted pools;
integrated in the napp-it menu Pools)
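
A rough sketch of the lofi variant (file path, size and pool name are placeholders; lofiadm prompts for a passphrase):

Code:
mkfile 50g /tank1/vault.img
lofiadm -c aes-256-cbc -a /tank1/vault.img    # prints the lofi device, e.g. /dev/lofi/1
zpool create vault /dev/lofi/1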
 
Can I replace it with a different ZIL device? The original was a 60GB SSD, but I want to replace it with a 128GB SSD!

TY

You can replace the log with an equal or larger disk (napp-it menu disk replace),

or
remove the log (menu disk remove) and
add any new log (menu add vdev),

or use the corresponding CLI commands without napp-it.
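
The CLI equivalents would be roughly (pool and old log names are from your earlier zpool status; the new SSD's device name is a placeholder):

Code:
# either replace the old log device with the new SSD
zpool replace Qmedia1 c4t1d0 c4t2d0

# or drop it and add the new SSD as a fresh log vdev
zpool remove Qmedia1 c4t1d0
zpool add Qmedia1 log c4t2d0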
 
You may
- edit the "create ZFS folder" script (replace pool 30 with 28) for the encryption option
Hmm... ok, do you mean that this action will get me back the option to create v31 encrypted pools under S11-Ex with napp-it 0.8k?

- use lofi-encrypted pools on Solaris/OI/Illumian;
they are about 20% slower but very flexible (you can back up encrypted pools;
integrated in the napp-it menu Pools)
Yes, I know about that feature, but this is not an option for me, I am afraid.
 
Hmm... ok, do you mean that this action will get me back the option to create v31 encrypted pools under S11-Ex with napp-it 0.8k?
napp-it does not support encrypting whole pools, only encrypted datasets on them.
The above modification shows the encryption dialog regardless of the ZFS version.
 
napp-it does not support encrypting whole pools, only encrypted datasets on them.
The above modification shows the encryption dialog regardless of the ZFS version.

Ah, right, of course - datasets it is, not pools.
Thanks for your help and for sharing that workaround... I will try it.
 
Almost ready to move my pools from Nexenta to OI/napp-it.
I currently have no ZIL device.
My NFS share (which holds my VMs) therefore has sync=disabled set.

Question

What would be better to do once my pools are moved to OI/napp-it (performance wise):
- keep sync=disabled for the imported NFS share
- place an SSD as ZIL and set sync=standard..?
 
Is this a separate box or an all in one? For the latter I wouldn't bother with a ZIL, just disable sync (if you lose power or somesuch, everything will fail at once, so the issue of guest FS corruption due to out of order writes is not such a big deal [to me anyway].) Even for a separate box, you have to assess risk. I have everything on a mongo UPS, and do snapshots of the esxi datastore NFS folder every night, so even if I have a crash/powerfail and a guest gets corrupted, worst case I lose 24 hours work.
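
For reference, disabling sync on the NFS dataset is a one-liner (dataset name is a placeholder):

Code:
zfs set sync=disabled tank/nfs_datastore
zfs get sync tank/nfs_datastore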
 
Is this a separate box or an all in one? For the latter I wouldn't bother with a ZIL, just disable sync (if you lose power or somesuch, everything will fail at once, so the issue of guest FS corruption due to out of order writes is not such a big deal [to me anyway].) Even for a separate box, you have to assess risk. I have everything on a mongo UPS, and do snapshots of the esxi datastore NFS folder every night, so even if I have a crash/powerfail and a guest gets corrupted, worst case I lose 24 hours work.

Thanks danswartz,
Yes, it's an all-in-one box.
I've read that an SSD ZIL can benefit the VMs' performance, and L2ARC is always good to have.
 
A point to consider is that not all kinds of access benefit from the same setup. VMs benefit from mirrors because you can get more operations running at the same time. Using raidz maximizes space, but at the cost of operational speed, especially with VMs on it. There's a lot you can do to affect performance even before getting into a ZIL and L2ARC. So you might want to get all of that sorted out and tested first, before you resort to using a ZIL and/or an L2ARC.
 
A point to consider is that not all kinds of access benefit from the same setup. VMs benefit from mirrors because you can get more operations running at the same time. Using raidz maximizes space, but at the cost of operational speed, especially with VMs on it. There's a lot you can do to affect performance even before getting into a ZIL and L2ARC. So you might want to get all of that sorted out and tested first, before you resort to using a ZIL and/or an L2ARC.

True..!
I'm only working with mirrors because they should be the easiest to maintain and have the most speed benefits.
 
Hey guys,

I seem to have run into an unsolvable problem (from my point of view) and would be glad if someone of you could help me.

I am using Solaris 11 as a storage server in a VM on ESXi 5. At this time I need to use Solaris, because encryption is ESSENTIAL to me, and for the time being Solaris 11 is (sadly) the only platform supporting native ZFS encryption.

I don't know why, but the Solaris 11 VM doesn't recognize that my CPU supports hardware-accelerated AES encryption, so encryption is awfully inefficient (it uses up to 8 GHz of CPU power for 100 MB/s!!).

Another VM with OpenIndiana installed recognizes AES (but can't use it for ZFS encryption -.-)

I didn't try installing Solaris 11 bare metal, because I have about 30 VMs running, most of them production VMs which I can't turn off for a day just to test a bare-metal Solaris install.


System Setup:

2x AMD Opteron 4226 (= 12 x 2.7 GHz) [up to 8 GHz assigned to the Solaris VM]
64 GB RAM [12 GB assigned to the Solaris VM]
Supermicro USAS2 card [VMDirectPath to the Solaris VM]

Output of isainfo -v on Solaris 11:
Code:
64-bit amd64 applications
        amd_lzcnt popcnt amd_sse4a tscp ahf cx16 sse3 sse2 sse fxsr amd_mmx 
        mmx cmov amd_sysc cx8 tsc fpu 
32-bit i386 applications
        amd_lzcnt popcnt amd_sse4a tscp ahf cx16 sse3 sse2 sse fxsr amd_mmx 
        mmx cmov amd_sysc cx8 tsc fpu

Output of isainfo -v on OpenIndiana:
Code:
64-bit amd64 applications:
	avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 amd_lzcnt  popcnt amd_sse4a
	tscp ahf cx16 sse3 sse2 sse fxsr amd_mmx mmx mov amd_sysc cx8 tsc fpu
32-bit i386 applications:
	avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 amd_lzcnt  popcnt amd_sse4a
	tscp ahf cx16 sse3 sse2 sse fxsr amd_mmx mmx mov amd_sysc cx8 tsc fpu

Thank You in advance!
 
If I remember correctly, napp-it has some kind of dataset-level encryption available? I seem to recall your problem is not unique.
 
Is this a separate box or an all in one? For the latter I wouldn't bother with a ZIL, just disable sync (if you lose power or somesuch, everything will fail at once, so the issue of guest FS corruption due to out of order writes is not such a big deal [to me anyway].) Even for a separate box, you have to assess risk. I have everything on a mongo UPS, and do snapshots of the esxi datastore NFS folder every night, so even if I have a crash/powerfail and a guest gets corrupted, worst case I lose 24 hours work.

I forgot to ask: do you use an SSD for L2ARC in your setup..?
 
...the CPU features reported by the guests being so different is, AFAIK, not "normal" for ESXi, because it should not use soft emulation in normal operation mode.
However, I seem to recall that you CAN set CPU features, or rather CPU models... that feature is called EVC and AFAIK only applies to clustered nodes, since it is supposed to help migration between different host types by standardizing the CPU features presented to guests.
 
Has anyone had any success with AFP 3.0 and Time Machine shares?

Just setting up a new server; the share is set up.

I get an error 17: the sparsebundle could not be created, even though I can mount the share and read/write to it with no problems.

Any ideas?

cheers
Paul

Edit - fixed - had to turn nbmand off at the CLI, then reboot for it to take effect - the web interface wouldn't let me update that field.
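
For anyone else hitting this, the CLI change is roughly (dataset name is a placeholder; nbmand only takes effect on remount, hence the reboot):

Code:
zfs set nbmand=off tank/timemachine
zfs get nbmand tank/timemachine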
 
...the CPU features reported by the guests being so different is, AFAIK, not "normal" for ESXi, because it should not use soft emulation in normal operation mode.
However, I seem to recall that you CAN set CPU features, or rather CPU models... that feature is called EVC and AFAIK only applies to clustered nodes, since it is supposed to help migration between different host types by standardizing the CPU features presented to guests.

Thank you! Any idea or guidance on how I could achieve this?

FYI: vmware.log says that hardware virtualization (AMD-V) is being used...
 