OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Okok.

So, what would be the best setup for a powerful ZFS file server with an SMB share for Windows clients?
  • Xubuntu 14.04 + ZoL?
  • OmniOS + napp-it, connected via InfiniBand to a Windows 2008r2 / 2012r2 server + StarWind Virtual SAN?

A while ago we talked about a shared server for collaborative video editing, with multiple clients working on the same media at the same time... what protocol do you use to make this possible?

Cheers.

St3F
 
If you need multiuser access to the same data, you need a file-sharing protocol like NFS or SMB. Block-based access via iSCSI is sometimes faster, but without cluster software it is only usable from one client at a time.

If you only have Windows clients, prefer SMB (even though NFS is often faster).
The kernel-based and multithreaded Solaris SMB server is often faster than SAMBA, even when comparing Solaris SMB1 vs SAMBA SMB 2.1+.
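
For reference, a minimal sketch of what sharing a filesystem with the kernel-based SMB server looks like on OmniOS/Solaris (pool/filesystem name and workgroup are placeholders; adding local SMB users or joining a domain is left out):

Code:
svcadm enable -r smb/server
smbadm join -w WORKGROUP
zfs set sharesmb=name=media tank/media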

If your clients use some of the newer SMB features, SMB2 can be faster than SMB1.
SMB 2.1 plus the kernel-based multithreaded SMB server is available in NexentaStor and the new Solaris 11.3, which is the current state-of-the-art ZFS server: it offers ZFS encryption, LZ4 compression, kernel-based multithreaded SMB 2.1 and faster resilvering of failed disks than any other ZFS.

Use OmniOS (currently SMB1) if you want a completely free solution.
If you need block access like with StarWind, you can share your ZFS storage directly via COMSTAR and iSCSI (OmniOS or Solaris): http://docs.oracle.com/cd/E23824_01/html/821-1459/fmvcd.html
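
To give an idea of the COMSTAR side, a rough sketch (volume size and names are placeholders, the LU GUID is printed by create-lu; see the Oracle docs above for the full procedure):

Code:
zfs create -V 100G tank/lu0
svcadm enable stmf
stmfadm create-lu /dev/zvol/rdsk/tank/lu0
stmfadm add-view <LU-GUID>
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target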

btw
There must be something wrong as your read performance in OmniOS is much slower than expected.
 
Could someone provide some commands along with output so that I can get a general idea of what to expect for write performance using a SLOG with OmniOS, when using ESXi over NFS with both sync and non-sync writes? It would be greatly appreciated.
 
Could someone provide some commands along with output so that I can get a general idea of what to expect for write performance using a SLOG with OmniOS, when using ESXi over NFS with both sync and non-sync writes? It would be greatly appreciated.

Create a Windows VM on such a datastore and run any performance tool like CrystalDiskMark against the local C: drive (which lives on your NFS datastore), once with sync=disabled and once with sync=always.
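
Switching between the two modes for such a test is just a ZFS property on the NFS-shared filesystem (the filesystem name is an example); sync=standard restores the default afterwards:

Code:
zfs set sync=disabled tank/nfs_datastore
zfs set sync=always tank/nfs_datastore
zfs set sync=standard tank/nfs_datastore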
 
If you need multiuser access to the same data, you need a file-sharing protocol like NFS or SMB. Block-based access via iSCSI is sometimes faster, but without cluster software it is only usable from one client at a time.
1. If I want to test iSCSI + cluster software, what is your recommendation?
... GlusterFS?
... BTRFS?

If you only have Windows clients, prefer SMB (even though NFS is often faster).
2. Only the Windows Enterprise edition can enable NFS, not Pro
... and professional video editing software such as Avid Media Composer works on the Pro edition only
... so if you have any recommendation for testing NFS with the Pro edition, I will benchmark it!

Use OmniOS (currently SMB1) if you want a completely free solution.
If you need block access like with StarWind, you can share your ZFS storage directly via COMSTAR and iSCSI (OmniOS or Solaris): http://docs.oracle.com/cd/E23824_01/html/821-1459/fmvcd.html
3. I'm setting this up: OmniOS + napp-it, connected via InfiniBand to a Windows 2008r2 / 2012r2 server or Windows 8.1 Pro + StarWind Virtual SAN.
The volume mounted on Windows will then be shared via Samba > 3.0.
 
1.
I have never used any, as I do not need them (no reason to complicate things).
Keep it simple: use SMB or pure iSCSI/IB (a ZFS volume as a local disk).

2.
I use NFS mainly for my Macs, as SMB1 is slow on Macs (about 50% as fast as with Windows clients).
For Windows clients, use SMB. Compare OmniOS/SMB1 vs Solaris 11.3/SMB 2.1 vs Windows 2012 with a ZFS filesystem over iSCSI/IB/Comstar, shared via SMB3 from Windows 2012.

3.
The first approach with ZFS should be: skip Windows 2012 for filesharing and use ZFS directly over SMB, with ZFS snaps exposed as Windows "Previous Versions". If you need a special SMB3 feature, use Windows 2012 for sharing a ZFS iSCSI/IB target. In that case you can only snap the whole ZFS filesystem/target, which makes an undo from snaps more complicated, as you need to clone/import the whole filesystem.
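
As a rough sketch of that undo path (names are placeholders): the zvol behind the target gets snapped, and to reach an older state you clone the snapshot and present the clone as a new LU:

Code:
zfs snapshot tank/lu0@before
zfs clone tank/lu0@before tank/lu0_restore
stmfadm create-lu /dev/zvol/rdsk/tank/lu0_restore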


btw
SAMBA is an SMB server for Linux/Unix, not a protocol like SMB 1, 2, 3. On Solaris you use the kernel-based Solaris SMB server (or SAMBA). On Windows you use the Microsoft SMB server with SMB3.

10 GbE + OmniOS/ Solaris SMB to your Windows clients may be another option if you really need performance
 
No Slog, Slog, No Sync



I've started with a single mirror for now. Thoughts? My SLOG is an S3700 - I had considered using NVMe but was told it was a waste of $$$.
 
As expected:

- Buffered writes without sync are fast
- Secure sync writes with an S3700 ZIL are faster than a 1 Gb/s network
- Secure sync writes without a fast ZIL (using the on-pool ZIL) are slow

You may increase the file size to reduce cache effects.
 
For Windows clients, use SMB. Compare OmniOS/SMB1 vs Solaris 11.3/SMB 2.1 vs Windows 2012 with a ZFS filesystem over iSCSI/IB/Comstar, shared via SMB3 from Windows 2012.
I will do it
+ OmniOS on Hyper-V with its ZFS filesystem over iSCSI, shared via SMB3 from 2012r2 or 8.1 Pro

The first approach with ZFS should be: skip Windows 2012 for filesharing and use ZFS directly over SMB, with ZFS snaps exposed as Windows "Previous Versions". If you need a special SMB3 feature, use Windows 2012 for sharing a ZFS iSCSI/IB target. In that case you can only snap the whole ZFS filesystem/target, which makes an undo from snaps more complicated, as you need to clone/import the whole filesystem.
No need for snaps with my kind of use (a collaborative shared video file server)

10 GbE + OmniOS/ Solaris SMB to your Windows clients may be another option if you really need performance
already done:
-> 3x RAID-Z of 3x 2 TB WD SE disks
= sequential R/W performance, benchmarked with Blackmagic Speed Test, is the same as with
-> 7x 2-way mirrors of 2 TB WD SE
... with server: 10 GbE Intel + OmniOS on an X9SCM-iiF with Xeon E3-1220v2 + 32 GB ECC
... and client: 10 GbE Intel, Windows 7 Pro or 8.1 Pro
... direct attach with 5 m OM3, or through a 2x SFP+ 10 GbE switch

That's why I'm looking for a reliable, high-performance solution to share files between several clients.

Just for fun: a Synology DS713+ with the latest DSM 5.2 and SMB 3 works very well with 2x MacBook Pro on OS X 10.10.4 and the ProRes 422 codec.
 
We use FreeBSD + ZFS + the native NFS services where we strictly need NFS and don't need HA.

You can use StarWind to cluster Windows servers and expose failover NFS mount points.

Keep in mind Windows' built-in NFS is not the best one :mad:

I talked to StarWind people at the latest VMworld and they promised to come up with a third-party NFS stack.

Okok.

So, what would be the best setup for a powerful ZFS file server with an SMB share for Windows clients?
  • Xubuntu 14.04 + ZoL?
  • OmniOS + napp-it, connected via InfiniBand to a Windows 2008r2 / 2012r2 server + StarWind Virtual SAN?

A while ago we talked about a shared server for collaborative video editing, with multiple clients working on the same media at the same time... what protocol do you use to make this possible?

Cheers.

St3F
 
We use FreeBSD + ZFS + the native NFS services where we strictly need NFS and don't need HA.

You can use StarWind to cluster Windows servers and expose failover NFS mount points.

Keep in mind Windows' built-in NFS is not the best one :mad:

I talked to StarWind people at the latest VMworld and they promised to come up with a third-party NFS stack.

NB: NFS on Microsoft Windows is only available in the Enterprise edition!!!
-> https://support.microsoft.com/en-us/kb/2769923
-> http://www.howtogeek.com/195447/you-cant-use-them-8-features-only-available-in-windows-8-enterprise/


... we work with the Pro version.


So => SMB share, not NFS !!
 
As expected:

- Buffered writes without sync are fast
- Secure sync writes with an S3700 ZIL are faster than a 1 Gb/s network
- Secure sync writes without a fast ZIL (using the on-pool ZIL) are slow

You may increase the file size to reduce cache effects.

Should I be expecting "more" with respect to writes, i.e. would an NVMe SLOG help, or is this about as good as it gets? Why are the 4k writes in CrystalDiskMark a lot lower than the other metrics?
 
NVMe is basically a faster PCI-e based connection.
More relevant is the quality of the flash and the flash controller.

This is similar to USB3, which is not fast with a slow stick.
It can be faster depending on the hardware used.

Currently:
If you really need fast secure writes, the best is an HGST ZeusRAM, followed by an Intel S37x0.
The bigger models are faster. You can enhance performance a little when you limit the usable space with an HPA (host protected area) on a new SSD.

About 4k and queue depth:
You get the best performance with large files or a higher queue depth. If you randomly write small files with queue depth 1 (no write optimization), performance is very low even when using SSDs. This is the value you should look at for a ZIL or, for example, for some database applications.

http://www.userbenchmark.com/Faq/What-is-queue-depth/41
More or less, the 4k (QD1) value shows the real power of a disk with small random reads/writes.
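
If fio is available on the server, the 4k/QD1 case can be reproduced locally with something like this (a sketch; path and size are examples, and --sync=1 forces sync writes so the ZIL is involved):

Code:
fio --name=qd1-randwrite --filename=/tank/testfile --size=1g --bs=4k --rw=randwrite --iodepth=1 --numjobs=1 --sync=1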
 
Has anyone tried running Solaris 11 and napp-it on the 45 Drives "Storinator"? The XL60 enclosure can hold 60 drives, all direct-wired, plus 7x PCI-E cards. So you can have 4x LSI HBAs and 2x 10 GbE NICs with room to spare. I'm thinking of using one as opposed to our previous systems, which were JBODs connected to servers with HBAs.

Anyone try these yet? The mobo is a Supermicro X9SRL-F btw. Haven't seen too many reports of them with Solaris 11.
 
Thanks!

As a home user, what extensions do most people go with? Also, are there any good articles on setting up CIFS or best practices? I'm currently set up via the ACL extension, which works great, but I'm not sure if I'm doing things ideally.
 
Thanks!

As a home user, what extensions do most people go with? Also, are there any good articles on setting up CIFS or best practices? I'm currently set up via the ACL extension, which works great, but I'm not sure if I'm doing things ideally.

OmniOS or Solaris with the kernel CIFS server is a fileserver that adds Windows-like NFSv4 ACLs and security IDs (SIDs) to the Unix filesystem ZFS, so it acts similar to NTFS. If you, for example, backup and restore a filesystem to another Active Directory server, all AD permissions stay intact. It also includes file-based and share-based ACLs and out-of-the-box support for ZFS snaps as Windows "Previous Versions".

So basically you set things up the same way as on a real Windows server. If you do not need user-based access, enable anonymous guest access. Otherwise add some users or join an AD domain, enable a share and set file-based user ACLs from Windows or via the ACL extension, and optionally add share ACLs.

There is only one real difference between Windows and Solaris. Windows processes all deny rules first, then the allow rules, where a deny overrides an allow in all cases. Solaris works more like a firewall: it respects the order of rules, and the first matching rule does the job. An allow in first place gives access even when a deny rule with a higher rule number follows. Ordered ACLs cannot be set from Windows; this must be done on Solaris, either via the CLI or the ACL extension.
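
As an example of such an ordered ACL on Solaris/OmniOS from the CLI (user and path are placeholders), /usr/bin/chmod can insert an entry at a given position and ls -v shows the resulting order:

Code:
/usr/bin/chmod A=everyone@:read_data/read_attributes/read_xattr:allow /tank/share
/usr/bin/chmod A0+user:paul:read_data:deny /tank/share
/usr/bin/ls -dv /tank/share

The deny entry inserted at position A0 is matched first, so paul is denied read access even though the everyone@ allow follows it.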

Free: all that is needed

Regarding Pro extensions:
Most home users use the free version; extensions@home are available at reduced prices.
Pro adds support and GUI performance, and saves time / adds comfort.

ACL: set ACL permissions on Solaris with ACL order
Monitoring: Real time monitoring, Disk slot detection with WWN disks etc
Replication: Highspeed replication over net
Complete: all of the above

see http://napp-it.org/extensions/quotation_en.html

all pro: GUI acceleration with background tasks,
access to support/bugfix releases and dev editions
 
Thanks Gea. What are the GUI acceleration and background tasks used for? I'm currently trialing but am not sure if these options are temporarily enabled.
 
Thanks Gea. What are the GUI acceleration and background tasks used for? I'm currently trialing but am not sure if these options are temporarily enabled.

Example:
If you list snaps, filesystems or disks with the free edition, the info is collected via CLI commands and CGI on every menu reload, which means that a menu reload lasts seconds up to a few minutes with many items. GUI acceleration means that this info is collected by agents in the background, so the menu is displayed immediately.
 
Having a bit of an issue with NFS.

I have two subnets, one for 1G Ethernet, the other for 10G.

When I mount via NFS on the 10G network I have no issues. When I try to mount via the 1G network I get "access denied" from the NFS server. Any ideas what could cause this?
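
For reference, one thing that would explain it (an assumption, depending on how the share was created): an NFS share restricted to the 10G subnet via the sharenfs property. That can be checked and widened to both subnets like this (filesystem name and subnets are placeholders):

Code:
zfs get sharenfs tank/vmstore
zfs set sharenfs=rw=@10.0.10.0/24:@192.168.1.0/24 tank/vmstore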
 
ZFSGuru 10.03
  • Asus x79 Extreme 11
  • Xeon E5 2687w
  • 32 GB ECC (4x 8 GB)
  • 2x M1015 flashed to LSI IT P19
  • 14x 2 TB WD SE (7200rpm)
  • Intel 10GbE XFP+
... in a mirrored configuration (7x mirrors of 2x 2 TB)

Client is a Windows 8.1 Pro with the same Intel 10GbE XFP+

Both are connected to a Mikrotik CRS226-24G-2S+RM switch with SFP+ 10G SR Finisar modules.

Test with SMB share and the Blackmagic Speedtest software
Write : 450 MB/s
Read : 135 MB/s
With the same configuration, the ZFS pool created on ZFSGuru was then imported on Xubuntu + ZoL + napp-it.

Same test with a ZFS filesystem shared with Samba and mounted to a drive letter:
Write : 1080 MB/s
Read : 640 MB/s
 
Hmm, Samba being single-threaded, that seems like a huge jump. Caching maybe?
 
Hmm, Samba being single-threaded, that seems like a huge jump. Caching maybe?
Same configuration on ZFSguru and Xubuntu: the pool was not destroyed, just imported.
Same hardware: no SSD cache, same amount of RAM.

Happy, but I will run more accurate tests with Win 2012r2 or 8.1 Pro SMB-sharing an OmniOS iSCSI volume through InfiniBand
... on bare metal
+
... on a virtual machine in a Hyper-V container.

Cheers.

St3f
 
so going from ZFSGuru to OmniOS you got those speeds? I may have misunderstood.
 
Still with Samba / CIFS on Xubuntu 14.04 + ZoL
[Benchmark screenshots: Xubuntu + ZoL, 7 mirrored vdevs of 2 TB WD SE (throughput test and CrystalDiskMark 64-bit)]


Gea: on the napp-it website it is written: "ACL extension (with additional user settings, not working on Linux)"

On Xubuntu 14.04 + the latest ZoL, I can use the ACL extension, and then manage the rights to a Samba shared folder from Windows:
Code:
zfs set acltype=posixacl tank/media

If I wanted to set the acltype back to stock configuration (default), I would do the following (Thanks to DeHackEd from #zfsonlinux freenode channel for letting me know about this):
Code:
zfs inherit acltype tank/media

Then, as you can see, a new column appears in napp-it:

[Screenshot: ACL column in napp-it]


There is another fun tool: lsidrivemap
-> https://github.com/louwrentius/lsidrivemap
... displays each drive in an ASCII table that reflects the physical layout of the chassis.

The data is based on the output of the LSI 'megacli' tool for my IBM M1015 controllers.

Code:
root@nano:~# lsidrivemap disk

| sdr | sds | sdt | sdq |
| sdu | sdv | sdx | sdw |
| sdi | sdl | sdp | sdm |
| sdj | sdk | sdn | sdo |
| sdb | sdc | sde | sdf |
| sda | sdd | sdh | sdg |

The Python script can easily be tailored for your own server,
... and it is possible to see, in the same way, the temperature of each hard drive:

Code:
root@nano:~# lsidrivemap temp

| 37 | 40 | 40 | 37 |
| 36 | 36 | 37 | 36 |
| 35 | 37 | 36 | 36 |
| 35 | 37 | 36 | 35 |
| 35 | 36 | 37 | 36 |
| 34 | 35 | 36 | 35 |

Do you think these tweaks could be implemented in napp-it?

Cheers.

St3F
 
St3F

Did you follow a specific guide to installing ZoL on Xubuntu? I tried getting ZoL to work on Ubuntu not long ago, without much success...
 
Gea: on the napp-it website it is written: "ACL extension (with additional user settings, not working on Linux)"

On Xubuntu 14.04 + the latest ZoL, I can use the ACL extension, and then manage the rights to a Samba shared folder from Windows:
Code:
zfs set acltype=posixacl tank/media

If I wanted to set the acltype back to stock configuration (default), I would do the following (Thanks to DeHackEd from #zfsonlinux freenode channel for letting me know about this):
Code:
zfs inherit acltype tank/media

Then, as you can see, a new column appears in napp-it:

Solaris and napp-it use NFSv4 ACLs. Regarding functionality and permission inheritance they are far more Windows-like than the POSIX ACLs that you use with SAMBA. Besides the different functionality, different (non-ZFS-related) OS tools are used to manage them.
An ACL extension for Linux would therefore require a whole new module. I would write one together with a SAMBA integration into napp-it, but this is currently not planned.

There is another fun tool: lsidrivemap
-> https://github.com/louwrentius/lsidrivemap
... displays each drive in an ASCII table that reflects the physical layout of the chassis.

The data is based on the output of the LSI 'megacli' tool for my IBM M1015 controllers.
Do you think these tweaks could be implemented in napp-it?

On Solaris I have integrated sas2ircu from LSI to detect and assign the disk bays of a backplane to a disk. An integration into the Linux version would be possible but is currently not planned due to lack of time.

btw.
You should not use sda, sdb etc. to detect/assign disks, as they can move around after a reboot. On Solaris I always use WWN, as this method is based on a unique disk id from the disk manufacturer. You can move such a disk between slots and servers and the id stays the same. This can be used on Linux as an option. On Solaris it is mandatory with newer HBA controllers.

Only problem: the WWN-based device name consists of the controller id + the unique disk id.
On a reinstall the controller id may change, so you need to manually re-assign the graphical disk table, as this cannot be detected automatically.
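
On Linux the equivalent stable names live under /dev/disk/by-id; a sketch of listing them and re-importing an existing pool (the pool name is a placeholder) so that it uses the WWN-based names instead of sda/sdb:

Code:
ls -l /dev/disk/by-id/ | grep wwn
zpool export tank
zpool import -d /dev/disk/by-id tank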
 
Following the link : https://github.com/louwrentius/lsidrivemap
... you can see : ;)
Code:
root@nano:~# lsidrivemap wwn

| 5000cca23dc53843 | 5000cca23dc52fea | 5000cca23dc31656 | 5000cca23dc01655 |
| 5000cca23dc459ee | 5000cca22bf0f4c3 | 5000cca22bef486a | 5000cca23dc51764 |
| 5000cca23dc186cf | 5000cca23dc02062 | 5000cca23dda5a33 | 5000cca23dd398fa |
| 5000cca23dd56dfb | 5000cca23dd3a8cd | 5000cca23dd9b7df | 5000cca23dda6ae9 |
| 5000cca23dd04ded | 5000cca23dd54779 | 5000cca23dd59e65 | 5000cca23dd59b65 |
| 5000cca23dd45619 | 5000cca23dd57131 | 5000cca23dd329ba | 5000cca23dd4f9d6 |
The wwn name of a drive is found in /dev/disk/by-id/
 
St3F

Did you follow a specific guide to installing ZoL on Xubuntu? I tried getting ZoL to work on Ubuntu not long ago, without much success...

Install PPA support in the chroot environment like this:

Code:
# locale-gen en_US.UTF-8
# apt-get update
# apt-get install ubuntu-minimal software-properties-common
Even if you prefer a non-English system language, always ensure that en_US.UTF-8 is available. The ubuntu-minimal package is required to use ZoL as packaged in the PPA.

Install ZFS in the chroot environment for the new system:

Code:
# apt-add-repository --yes ppa:zfs-native/stable
# apt-add-repository --yes ppa:zfs-native/grub
# apt-get update
# apt-get install --no-install-recommends linux-image-generic linux-headers-generic
# apt-get install ubuntu-zfs
# apt-get install grub2-common grub-pc
# apt-get install zfs-initramfs
# apt-get dist-upgrade

More: http://docs.gz.ro/debian-linux-zfs.html

Just be very careful if you update / upgrade Ubuntu: it breaks ZFS!!
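
A small precaution that may help (assuming the DKMS packaging of the PPA): after a kernel update, check that the ZFS/SPL modules were rebuilt before rebooting, and hold the kernel packages if you prefer to upgrade them manually:

Code:
dkms status
apt-mark hold linux-image-generic linux-headers-generic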
 
Following the link : https://github.com/louwrentius/lsidrivemap
... you can see : ;)
Code:
root@nano:~# lsidrivemap wwn

| 5000cca23dc53843 | 5000cca23dc52fea | 5000cca23dc31656 | 5000cca23dc01655 |
| 5000cca23dc459ee | 5000cca22bf0f4c3 | 5000cca22bef486a | 5000cca23dc51764 |
| 5000cca23dc186cf | 5000cca23dc02062 | 5000cca23dda5a33 | 5000cca23dd398fa |
| 5000cca23dd56dfb | 5000cca23dd3a8cd | 5000cca23dd9b7df | 5000cca23dda6ae9 |
| 5000cca23dd04ded | 5000cca23dd54779 | 5000cca23dd59e65 | 5000cca23dd59b65 |
| 5000cca23dd45619 | 5000cca23dd57131 | 5000cca23dd329ba | 5000cca23dd4f9d6 |
The wwn name of a drive is found in /dev/disk/by-id/

You can add your own menu items in napp-it (private, update-safe menus),
see the napp-it menu "My menus".

The needed action in the menu script:
Code:
print &exe("lsidrivemap wwn");
 
Any thoughts on what could be preventing me from mirroring my rpool?

When trying to mirror my boot disk I am seeing this error:

zpool attach -f rpool c2t0d0s0 c2t1d0s0
cannot open '/dev/dsk/c2t1d0s0': I/O error

These are both disks from ESXi.

I believe this is related to the physical geometry of the disk. The disk I have added in ESXi is the same size, but the physical geometry is completely different. Any insight on how the napp-it one-step setup created its disks for ESXi, or how one might be able to create a disk that has the same geometry?

/root# fdisk -G c2t1d0p0
* Physical geometry for device /dev/rdsk/c2t1d0p0
* PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
5015 5015 0 0 224 56 512
fdisk -G c2t0d0p0
* Physical geometry for device /dev/rdsk/c2t0d0p0
* PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
3916 3916 0 0 255 63 512
 
It seems like the issue is taking place when the disk is being initialized. I tried cloning the disk via ESXi and attaching again, and got a message that the disk was already in a pool. The fdisk info did show the correct sectors and there was no alignment error.

When I re-initialized the disk I got different physical dimensions again, so it seems that the issue is rooted there somehow.

Edit:

There are comments here http://constantin.glez.de/blog/2011/03/how-set-zfs-root-pool-mirror-oracle-solaris-11-express which describe the process to manually specify the drive geometry and update the partition table. I was able to manually add the drive after following this.
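
For reference, the sequence from that article boils down to labeling the new disk, copying the slice table from the existing boot disk, attaching, and reinstalling the boot loader (device names match the post above; this is only a sketch for a GRUB-based system, double-check against the linked article):

Code:
fdisk -B /dev/rdsk/c2t1d0p0
prtvtoc /dev/rdsk/c2t0d0s0 | fmthard -s - /dev/rdsk/c2t1d0s0
zpool attach -f rpool c2t0d0s0 c2t1d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0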
 