OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

I'm in the process of setting up a new file server. When finished it will have 12x 2TB drives.

There is a small "problem" though. I currently have 6 drives configured in RAIDZ1, and the other 6 drives are in my old Windows server.

My plan is to move all data from the Windows machine to the new server, then remove the drives from the old server and add them as another RAIDZ1 vdev. That should be fine, right?

But how will the performance be? Data already written to the first vdev will not benefit from the performance increase gained by adding the second vdev, will it? How do I approach this? Will I have to rewrite all the data?
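For reference, what I have in mind (a sketch with a placeholder pool name and example device names) is roughly:

Code:
# add the six old drives as a second RAIDZ1 vdev (device names are examples)
zpool add tank raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
zpool status tank    # should now list two raidz1 vdevs

From what I have read, ZFS only stripes new writes across both vdevs, so the existing data stays on the first vdev until it is rewritten (e.g. copied into a new dataset or moved with zfs send/receive).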
 
Thanks _Gea for fixing the broken links : ).

Is there any option to make napp-it use SSL without having to set up Apache etc. as a reverse proxy?
 
_Gea,
You are a proponent of OmniOS for virtualization. Why do you not prefer SmartOS instead? It is made for virtualization and uses KVM.
http://lwn.net/Articles/459754/

Hello brutalizer,
I am a proponent of OmniOS for storage. For virtualization I use ESXi, as my main VMs are mostly Windows or OSX based. Also, features like VM move, storage move and pass-through (for storage use) work painlessly, and driver support for all guest OSs is best with ESXi.

KVM on OmniOS, or optionally SmartOS, may be an option for me in the future.
For general storage use, SmartOS is not suited. It is a specialized solution for clouds and virtualization and offers only limited access to the global zone (which is needed for storage use).
 
Is there any option to make napp-it use SSL without having to set up Apache etc. as a reverse proxy?

napp-it uses mini_httpd, which supports https.
You only need to supply a /var/web-gui/_my/mini_httpd.pem

http://acme.com/software/mini_httpd/

But you should not use napp-it in insecure networks.
Even https is insecure with private certificates due to man-in-the-middle attacks.
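If it helps, such a PEM (key and certificate in one file) can be generated with OpenSSL roughly like this (a sketch; adjust validity and the subject to taste):

Code:
# self-signed key + cert written into the single PEM file that mini_httpd expects
openssl req -new -x509 -days 3650 -nodes \
  -out /var/web-gui/_my/mini_httpd.pem \
  -keyout /var/web-gui/_my/mini_httpd.pem

Then restart the napp-it web service so mini_httpd picks up the certificate.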
 
Yeah, it's just the local network; I just like to have everything connecting through https :). Cheers, will do that.

Working, thanks :)
 
Hello brutalizer,
I am a proponent of OmniOS for storage. For virtualization I use ESXi, as my main VMs are mostly Windows or OSX based. Also, features like VM move, storage move and pass-through (for storage use) work painlessly, and driver support for all guest OSs is best with ESXi.

KVM on OmniOS, or optionally SmartOS, may be an option for me in the future.
For general storage use, SmartOS is not suited. It is a specialized solution for clouds and virtualization and offers only limited access to the global zone (which is needed for storage use).
Ok, that makes sense. OmniOS for storage, SmartOS for virtualization.

A remark: in your first post you write:
"i will keep this initial THREAD up to date, please re-read from time to time !!"

I suggest you change it to:
"i will keep this initial POST up to date, please re-read from time to time !!"
 
napp-it use minihttpd that supports https.
You only need to supply a /var/web-gui/_my/mini_httpd.pem

http://acme.com/software/mini_httpd/

But you should not use napp-it in unsecure networks.
Even https is unsecure with private certificates due to man in the middle attacks.

You can increase the level of security if you create your own CA cert and install it on your own devices; then, as long as the browser reports the certificate as "all good", you know you do not have a MITM attack, since no one else would be able to sign the cert with your CA. Of course, if you are accessing from a machine without the CA, you can check the cert path and, if you have memorized your CA fingerprint, determine whether you are good or not ;).
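For anyone who wants to try that, a minimal private-CA flow with OpenSSL could look roughly like this (a sketch; file names and CN values are just examples):

Code:
# one-time: create your own CA
openssl genrsa -out home-ca.key 4096
openssl req -x509 -new -key home-ca.key -days 3650 -subj "/CN=Home CA" -out home-ca.crt

# per server: key + CSR, signed by your CA
openssl genrsa -out nas.key 2048
openssl req -new -key nas.key -subj "/CN=nas.local" -out nas.csr
openssl x509 -req -in nas.csr -CA home-ca.crt -CAkey home-ca.key -CAcreateserial -days 825 -out nas.crt

# mini_httpd wants key and cert in one PEM
cat nas.key nas.crt > /var/web-gui/_my/mini_httpd.pem

home-ca.crt is then imported into the trust store of your own devices; the CA key never has to leave your admin machine.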
 
I have some problems with permissions and I was hoping that you guys could point me in the right direction; it would be a great help. I have been consulting the Solaris Administrator Reference and what feels like a lot of websites, but I can't quite get it right.

The OS is the newest build of OmniOS. File sharing is supposed to be SMB (primarily when my girlfriend accesses the NAS) and NFS (when the NAS is accessed by me and our HTPC).

It is a completely new build with 1 pool and 9 ZFS filesystems:

- 1st filesystem is my personal one (only root and I must be able to read and modify)
- 2nd is my girlfriend's personal one (only root and she must be able to read and modify)
- 3rd is shared between us (everyone must be able to read and modify)
- 4th is a VMware datastore (everyone must be able to read and modify)
- 5-9th are storage (only root and I must be able to modify, but everyone can read)

My plan:

I was thinking that I would want to use trivial ACL permissions.

I would make root the owner of all ZFS filesystems and root the group of all ZFS filesystems except the 1st and 2nd. Then, I would create a group for myself and one for my girlfriend, which I would make the group of ZFS filesystems 1 and 2, respectively. Having done that, I would add my own user (created via napp-it) and my girlfriend's user (also created via napp-it) to these groups, respectively.

So far, I have accomplished the above by using:

chown -R root:root <ZFS filesystem>

or

chown -R root:<name of group> <ZFS filesystem>

Having done that, I wanted to recursively assign trivial ACLs to the 9 ZFS filesystems. I read that the old-style (POSIX?) getfacl and setfacl commands do not apply to ZFS, meaning that I should use the new NFSv4-style syntax (http://docs.oracle.com/cd/E23823_01/html/819-5461/gbacb.html#scrolltoc). My intention was to end up with something like:

- 1st filesystem:

owner@ = full
group@ (my group) = modify
everyone@ = none

- 2nd is my girlfriend's personal one (only root and she must be able to read and modify)

owner@ = full
group@ (girlfriends group) = modify
everyone@ = none

- 3rd is shared between us (everyone must be able to read and modify)

owner@ = full
group@ = modify
everyone@ = modify

- 4th is a VMware datastore (everyone must be able to read and modify)

owner@ = full
group@ = modify
everyone@ = modify

- 5-9th are storage (only root and I must be able to modify, but everyone can read)

owner@ = full
group@ = modify
everyone@ = read

Problem #1: Setting NFSv4 style ACLs
When I try setting ACL permissions using the NFSv4 style, it always fails with the message:

chmod: invalid mode

In the following example, "SANPOOL" is the name of my pool and "Apps" is a ZFS filesystem, in which I store the installers of all software used in our home.

Code:
root@NAS-SAN:/SANPOOL# chmod A+owner@:rwxpdDaARWcCos:fd:allow Apps
chmod: invalid mode: 'A+owner@:rwxpdDaARWcCos:fd:allow'
Try 'chmod --help' for more information.

Problem #2
I have better luck using numeric chmod commands, but I don't know if that is a good idea. chmod -R 764 on ZFS filesystems 5-9 works fine, and so does chmod -R 777 on ZFS filesystems 3 and 4.

However, when I use chmod -R 760 on ZFS filesystems 1 and 2, napp-it reports that everyone@ has read permissions. How come? And why can I give everyone@ no permissions through napp-it, but not through chmod?

Problem #3: NFS
When I mount the 8 ZFS filesystems to which I should have access (excluding my girlfriend's), I can only properly access 3 and 4.

Is this because the NFS protocol restricts access from the server side, treats all connected users as everyone@, and thus cannot properly share ZFS filesystems on which the ACLs only allow owner@ and group@ modify or above?

Problem #4: SMB
Connecting using SMB with my own or my girlfriend's credentials, I can only properly access 3 (4 is not exposed via SMB).

I think I have to take this one problem at a time. I spent my entire last weekend on this, and now this one as well, so I hope someone is able to point me in the right direction.
 
Some hints:
- To modify ACLs, you must use /usr/bin/chmod (only this one supports ACLs)
- The Solaris CIFS server depends on ACLs only and on Windows SIDs
- CIFS users are the same as Unix users, but the groups are different (SMB groups)
- NFSv3 uses Unix permissions, but it is not based on user logins; it uses the UID of the creator (on the client computer) or nobody
- SMB and NFSv3 are basically incompatible. You may set the ZFS property aclmode to discard to ignore chmod to Unix permissions.
- ACLs have advanced inheritance settings; if you do a chmod to Unix permissions, they are lost

If you need NFSv3 and SMB for the same ZFS filesystem, set aclmode=discard and everyone@=modify.
For simplicity, use everyone@ and user ACLs, or set everyone@ for common access and user root for full access.
For trivial ACLs, you can use the napp-it ACL extension, where trivial ACL settings are free of charge.
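To illustrate the first hint with the pool/filesystem from the question: the "Try 'chmod --help'" error suggests a GNU chmod earlier in the PATH, which does not understand ACLs. With the native binary, trivial ACLs could be set roughly like this (a sketch; the compact permission sets may need tuning for your case):

Code:
# use the full path so the illumos chmod is used, not a GNU one from an added package
/usr/bin/chmod -R A=owner@:full_set:fd:allow,group@:modify_set:fd:allow,everyone@:read_set:fd:allow /SANPOOL/Apps

# keep later chmods to plain Unix modes from destroying the ACLs
zfs set aclmode=discard SANPOOL/Apps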
 
Gea, huncut, and anyone,

Have you guys tested the Intel DC S3700 100GB version as a ZIL device? Is the performance OK compared to the 200GB? I would like to get one and would like to know your results before making the purchase. Thanks!
 
Some hints:
- To modify ACLs, you must use /usr/bin/chmod (only this one supports ACLs)
- The Solaris CIFS server depends on ACLs only and on Windows SIDs
- CIFS users are the same as Unix users, but the groups are different (SMB groups)
- NFSv3 uses Unix permissions, but it is not based on user logins; it uses the UID of the creator (on the client computer) or nobody
- SMB and NFSv3 are basically incompatible. You may set the ZFS property aclmode to discard to ignore chmod to Unix permissions.
- ACLs have advanced inheritance settings; if you do a chmod to Unix permissions, they are lost

If you need NFSv3 and SMB for the same ZFS filesystem, set aclmode=discard and everyone@=modify.
For simplicity, use everyone@ and user ACLs, or set everyone@ for common access and user root for full access.
For trivial ACLs, you can use the napp-it ACL extension, where trivial ACL settings are free of charge.

Thanks Gea_. Your answer pointed me in the right direction.
 
Gea, huncut, and anyone,

Have you guys tested the Intel DC S3700 100GB version as a ZIL device? Is the performance OK compared to the 200GB? I would like to get one and would like to know your results before making the purchase. Thanks!

I'd like to know about this too, I've had my eye on it as well.
 
Gea, huncut, and anyone,

Have you guys tested the Intel DC S3700 100GB version as a ZIL device? Is the performance OK compared to the 200GB? I would like to get one and would like to know your results before making the purchase. Thanks!

I have not compared these two, so I cannot talk about the difference.
I would expect them both to be very good ZIL devices.

I have done some tests to compare different SSDs and also a ZeusRAM. Maybe this helps to give an idea about the differences. Most important is that the write performance of sync writes without a ZIL can go down to 10-20% of the async values. A slow SSD is not very helpful and can give worse values than without a dedicated ZIL; only the best ones, like very fast SLC SSDs, the Intel S3700s and especially a ZeusRAM, improve performance up to a level similar to the async values.

http://napp-it.org/doc/manuals/benchmarks.pdf
 
Thanks Gea, I've already gone through your awesome benchmarks pdf from your older posts. I've just ordered the Intel S3700 100GB and will test it. I'll post my result once I get a chance.
 
Thanks Gea, I've already gone through your awesome benchmarks pdf from your older posts. I've just ordered the Intel S3700 100GB and will test it. I'll post my result once I get a chance.

Thanks.
It would be great if you could do a similar test on 1 GbE (and 10 GbE if available), like the bench with the 800 GB one.
 
OK, I finally created my RAIDZ2 pool, then created a ZFS folder called data. On my Windows 7 box, I can do \\servername\data and it will ask for a user/password to connect.

I tried user root and the password; that does not work. I guess root cannot connect via the network?

How do I create a new user? I tried ACL extension > ACL on folders. When I go to create a local user, there is no box to specify the new user name, just properties I can change. Appreciate any feedback.

New to all of this. Been using Windows Server OS most of my life.

Thank you!
 
OK, I finally created my RAIDZ2 pool, then created a ZFS folder called data. On my Windows 7 box, I can do \\servername\data and it will ask for a user/password to connect.

I tried user root and the password; that does not work. I guess root cannot connect via the network?

How do I create a new user? I tried ACL extension > ACL on folders. When I go to create a local user, there is no box to specify the new user name, just properties I can change. Appreciate any feedback.

New to all of this. Been using Windows Server OS most of my life.

Thank you!

Use the napp-it menu User, then "++ add local user"
to create new users.

If root is not listed as a user, then do a passwd root at the CLI.
After this you can connect to SMB shares as root with all permissions.
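A side note on why the passwd step matters, as far as I understand it: the Solaris/illumos CIFS server keeps its own SMB password hashes, which are only generated when a password is set while the SMB PAM module is active. napp-it normally configures this during setup; if SMB logins still fail, check /etc/pam.conf:

Code:
# /etc/pam.conf - this line makes passwd also generate the SMB hash
other   password required       pam_smb_passwd.so.1     nowarn

After adding the line, re-run passwd for each user so the SMB hash is (re)created.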
 
Doh! That was easy. Can't believe I missed the Users directory.

Thank you, working great now!

Any good test to see that the RAIDZ2 configuration is working properly, such as a stress test? I ran badblocks on each drive before and they all passed, but would like to do one final test with everything as a pool now.
 
How is your RAIDZ2 pool configured? You can always throw some data on the pool then do a scrub of the pool. That'll tell you if the drives have issues.

I think there's some kind of IO test built into Napp-It, but I've not tried it. Check out YouTube as I think I saw a video about it there.
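A quick sketch of the scrub approach (the pool name is just an example):

Code:
zpool scrub tank         # re-reads every block and verifies it against its checksum
zpool status -v tank     # shows scrub progress plus any read/write/checksum errors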
 
How is your RAIDZ2 pool configured? You can always throw some data on the pool then do a scrub of the pool. That'll tell you if the drives have issues.

I think there's some kind of IO test built into Napp-It, but I've not tried it. Check out YouTube as I think I saw a video about it there.

2x IBM M1015
10x Toshiba 3TB

RAIDZ2
 
Any good test to see that the RAIDZ2 configuration is working properly, such as a stress test? I ran badblocks on each drive before and they all passed, but would like to do one final test with everything as a pool now.

You can run some benchmarks (Menu Pools > Benchmarks) to stress the pool.
 
Q: Will an SSD VMware datastore show benefits over NFS?

I am considering buying two or three SSDs for my VMs, which are currently running on 6x 2TB in 3 mirrors.

Thanks.
 
I have a question about data security and safety with regard to NFS and iSCSI in the default config of OmniOS (or any Solaris-based OS), specifically with regard to sync writes.

Environment:
ESXi and ZFS
NFS vs iSCSI (COMSTAR)
Sync = Standard
No ZIL device

With NFS:
1) sync=standard > data is safe if I lose power; slow performance
2) sync=disabled > data is NOT safe if I lose power; fast performance

With iSCSI:
1) sync=standard > sync writes are all async? Is data safe if I lose power? Fast performance!
2) sync=disabled > same as NFS with sync=disabled.
3) sync=always > sync writes are honored; same as NFS with sync=standard; slow performance

Please clarify the iSCSI behavior with sync=standard in terms of data security.

If iSCSI sync=standard is not safe, then this is a bad default implementation out of the box. Unaware users will lose data and suffer corruption.

Thank you!
 
Q: Will an SSD VMware datastore show benefits over NFS?

I am considering buying two or three SSDs for my VMs, which are currently running on 6x 2TB in 3 mirrors.

Thanks.

SSD in a local datastore vs an NFS share on an SSD pool?

Local datastore: no cache, but lower latency.
NFS: large L2ARC cache, more features like fast access for clones or backups,
and all the other ZFS features like snaps or checksums.
 
I have a question about data security and safety with regard to NFS and iSCSI in the default config of OmniOS (or any Solaris-based OS), specifically with regard to sync writes.

Environment:
ESXi and ZFS
NFS vs iSCSI (COMSTAR)
Sync = Standard
No ZIL device

With NFS:
1) sync=standard > data is safe if I lose power; slow performance
2) sync=disabled > data is NOT safe if I lose power; fast performance

With iSCSI:
1) sync=standard > sync writes are all async? Is data safe if I lose power? Fast performance!
2) sync=disabled > same as NFS with sync=disabled.
3) sync=always > sync writes are honored; same as NFS with sync=standard; slow performance

Please clarify the iSCSI behavior with sync=standard in terms of data security.

If iSCSI sync=standard is not safe, then this is a bad default implementation out of the box. Unaware users will lose data and suffer corruption.

Thank you!

A ZFS filesystem has a sync property:
default/standard: the client decides (e.g. file access via SMB is without sync, while NFS from ESXi requests sync writes)
always: sync is always used
disabled: sync is never used

Regarding iSCSI, you have a similar setting with the writeback cache:
enabled: fast but not safe
disabled: safe, but slow without a fast ZIL device.

"Not safe" means: your ZFS filesystem is always consistent due to copy on write.
But if you have a VM on it with its own disk cache, it can happen that your VM or iSCSI target is corrupted after a power loss.
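For reference, both knobs can be set from the CLI roughly like this (a sketch; the filesystem name and LU GUID are placeholders):

Code:
# per-filesystem (or per-zvol) sync behaviour
zfs set sync=always tank/vmstore

# COMSTAR: disable the writeback cache on a logical unit (wcd = write cache disabled)
stmfadm modify-lu -p wcd=true 600144f0xxxxxxxxxxxxxxxxxxxxxxxx
stmfadm list-lu -v      # check the "Writeback Cache Disable" flag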
 
A ZFS filesystem has a sync property:
default/standard: the client decides (e.g. file access via SMB is without sync, while NFS from ESXi requests sync writes)
always: sync is always used
disabled: sync is never used

Regarding iSCSI, you have a similar setting with the writeback cache:
enabled: fast but not safe
disabled: safe, but slow without a fast ZIL device.

"Not safe" means: your ZFS filesystem is always consistent due to copy on write.
But if you have a VM on it with its own disk cache, it can happen that your VM or iSCSI target is corrupted after a power loss.

I read your reply several times and am still not clear.

So, if I use iSCSI with the default ZFS configuration of sync=standard, am I safe if I lose power?

From the sound of it, it is NOT safe?
 
I read your reply several times and am still not clear.

So, if I use iSCSI with the default ZFS configuration of sync=standard, am I safe if I lose power?

From the sound of it, it is NOT safe?

From a pure safety aspect, sync=always is safe, but your performance may go down
to 10% or 20% of the non-sync value, so you do not want sync when it is not absolutely needed.

VMware decided that ESXi should use NFS with sync, while for a pure SMB server sync is not requested.
This makes sense: in the first case a VM may be corrupted, in the second case a single file, if you have a power loss while saving a file.

So use sync=standard and let the client decide.
 
From a pure safety aspect, sync=always is safe, but your performance may go down
to 10% or 20% of the non-sync value, so you do not want sync when it is not absolutely needed.

VMware decided that ESXi should use NFS with sync, while for a pure SMB server sync is not requested.
This makes sense: in the first case a VM may be corrupted, in the second case a single file, if you have a power loss while saving a file.

So use sync=standard and let the client decide.

Thanks GEA, I know what NFS does.

I am only interested in iSCSI on ZFS.

What I want to know is, if I use iSCSI with ZFS (with sync=standard) and ESXi, am I safe? My client is a Windows 2008 R2 VM using Veeam backup/replication.
 
Thanks GEA, I know what NFS does.

I am only interested in iSCSI on ZFS.

What I want to know is, if I use iSCSI with ZFS (with sync=standard) and ESXi, am I safe? My client is a Windows 2008 R2 VM using Veeam backup/replication.

If you want to be perfectly safe, set sync to always and the writeback cache to disabled on your logical units, including file-based LUs.
Do not do any performance benchmarks unless you have a really fast ZIL device...
 
If you want to be perfectly safe, set sync to always and the writeback cache to disabled on your logical units, including file-based LUs.
Do not do any performance benchmarks unless you have a really fast ZIL device...

My storage for the VMs is a ZFS box with an M1015, so there is no writeback cache.

So, it looks like using iSCSI with ZFS (with sync=standard) is not safe then?
 
Just got the Intel DC S3700 100GB as a ZIL device! Preliminary testing is really awesome. With NFS sync=standard, I get almost the same performance (close to 97%) as with sync=disabled!

Do you under-provision the S3700 to 3GB for the ZIL for endurance? The ZIL only needs about 3GB on a 1Gbps network...
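For clarity, by under-provisioning I mean something along these lines (a sketch with placeholder device/pool names): create a small slice on the SSD (with format or parted) and add only that slice as the log device:

Code:
# after creating a small slice 0 (e.g. 8 GB) on the SSD with format:
zpool add tank log c4t1d0s0
zpool status tank     # the slice should now appear under "logs"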
 
My storage for the VMs is a ZFS box with an M1015, so there is no writeback cache.

So, it looks like using iSCSI with ZFS (with sync=standard) is not safe then?

The writeback cache is not a physical device, it's an LU setting,
just like sync=disabled.

If you use napp-it: it's in menu Comstar >> Logical units.
 
Just got the Intel DC S3700 100GB as a ZIL device! Preliminary testing is really awesome. With NFS sync=standard, I get almost the same performance (close to 97%) as with sync=disabled!

Do you under-provision the S3700 to 3GB for the ZIL for endurance? The ZIL only needs about 3GB on a 1Gbps network...

Benchmarks do not request sync writes.
Compare sync=always vs sync=disabled.

If you use CrystalDiskMark, create a 50 GB LU (volume based) and do the test via iSCSI (a 100MB test file is OK) from
Windows, as this is the fastest way to connect to ZFS.
If you post a screenshot of the CrystalDiskMark values, I would add it to my benchmark overview.
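A rough sketch of creating such a volume-based LU on OmniOS/COMSTAR (names are placeholders; the iSCSI target and target group are assumed to exist already):

Code:
zfs create -V 50g tank/bench                    # 50 GB zvol as backing store
stmfadm create-lu /dev/zvol/rdsk/tank/bench     # prints the GUID of the new LU
stmfadm add-view <GUID-from-create-lu>          # export it (here: to all hosts/targets)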


I have not done extensive tests of under-provisioning vs using whole disks with low usage.
I would not expect significant differences, especially with the Intel S3700.
 
Result of the Intel DC S3700 100GB used as a ZIL device, under-provisioned to an 8GB partition.

Environment: ESXi 5.1, FreeNAS 9.1, 1Gbps Network, 50GB iSCSI target from FreeNAS mounted on Windows 2008 R2 VM.

4x RAIDZ vdevs - each vdev has 3x 10K rpm SAS disks

Sync=Disabled
http://imageshack.us/photo/my-images/15/20fk.jpg

Sync=Always
http://imageshack.us/photo/my-images/845/w9l1.jpg

Sync=Standard
http://imageshack.us/photo/my-images/6/v2ff.jpg

Summary with No ZIL
http://imageshack.us/photo/my-images/853/fuli.jpg

Note:
- The S3700 helped a lot compared to having no ZIL
- The S3700 is a few percent slower (with sync=always) than with sync=disabled
- The S3700 100GB is comparable to huncut's 800GB testing from _Gea's benchmarks.pdf (http://napp-it.org/doc/manuals/benchmarks.pdf) in terms of the 4K number, but my 4K QD32 is way lower. What's wrong? A queue depth of 32 is not realistic anyway, so should I worry about this?
- In a real test (Veeam replication to the ZFS datastore) with the S3700 as ZIL device, the replication completes only a couple of minutes slower than with sync=disabled. Without a ZIL device, replication completed several hours slower.

Conclusion:
The S3700 is a very good ZIL device to help with sync write performance and data security in an ESXi, NFS and iSCSI environment.

Now, onto more real production testing.

Thanks GEA, huncut, and others for the pointers!
 
Info:

With the free ESXi 5.5, the 32 GB RAM limit is no longer in effect - previously the biggest problem for higher-end free All-In-One configs with napp-it.
I am now testing a ready-to-use ZFS storage VM for ESXi 5.1/5.5.

If you want to try it as well:
http://www.napp-it.org/downloads

You only need to download the zipped VM, unzip it and upload it to a local ESXi datastore (this can take some time).
It is ready to use, with napp-it, OmniOS stable, the AFP and Mediatomb add-ons and VMware tools installed, and with an e1000 and a vmxnet3 vnic.
 
Info:

With the free ESXi 5.5, the 32 GB RAM limit is no longer in effect - previously the biggest problem for higher-end free All-In-One configs with napp-it.
I am now testing a ready-to-use ZFS storage VM for ESXi 5.1/5.5.

If you want to try it as well:
http://www.napp-it.org/downloads

You only need to download the zipped VM, unzip it and upload it to a local ESXi datastore (this can take some time).
It is ready to use, with napp-it, OmniOS stable, the AFP and Mediatomb add-ons and VMware tools installed, and with an e1000 and a vmxnet3 vnic.

Wow, great work _Gea... I really admire all the effort you've put into this to streamline the experience for people new to ZFS.

Check your PayPal ;)
 