OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Why do you need to move off Nexenta?

Support for Solaris 11 is 12% of hardware cost per year, so that's a big chunk of change to get support for hardware you already have. Maybe it'd make sense to purchase a smaller number of newer systems directly from Oracle?

The upshot is that Oracle only provides support for the machines they want to support, and Supermicro isn't jumping through their hoops. See the HCL definitions for more details.

Thanks for the info. I cannot change the OS (Solaris is literally the only choice), nor can I change the number of servers. I'm running their HCL validation tool to see what happens; otherwise it's 25 new Dell R620s.
 
Every time I access the 'Disks -> Smart Info' page in napp-it, Solaris (11.1) generates the message below for every hard drive attached to the system and increases the soft error count by one.

May 26 17:00:50 solaris scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g50014ee2073a1452 (sd6):
May 26 17:00:50 solaris Error for Command: <undecoded cmd 0xa1> Error Level: Recovered
May 26 17:00:50 solaris scsi: [ID 107833 kern.notice] Requested Block: 0 Error Block: 0
May 26 17:00:50 solaris scsi: [ID 107833 kern.notice] Vendor: ATA Serial Number: WD-WCAZ
May 26 17:00:50 solaris scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
May 26 17:00:50 solaris scsi: [ID 107833 kern.notice] ASC: 0x0 (<vendor unique code 0x0>), ASCQ: 0x1d, FRU: 0x0

As described here http://sourceforge.net/apps/trac/smartmontools/ticket/261, this happens because napp-it executes smartctl with the -a option, which implies -H, and -H is what triggers the problem.

The command that napp-it runs:

/usr/sbin/smartctl -a -d sat,12 -T permissive /dev/rdsk/c11t7d0s0

can be replaced with the command below, which generates exactly the same output but does not trigger the soft error:

/usr/sbin/smartctl -i -c -A -l error -l selftest -l selective -d sat,12 -T permissive /dev/rdsk/c11t7d0s0

_Gea, maybe this is something you might consider changing since, at least per the smartmontools ticket, it also happens on OpenIndiana.

Thanks
 
hello nezach

thanks for the info. I tried that, but the command then misses the overall health status,
the most important information. Unfortunately -H is also exactly the point where the soft error is produced.
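
One option (just a sketch, not what napp-it does today) would be to use the long command above for the regular 'Smart Info' page and move the health poll into a separate, on-demand call, so the soft error is only produced when the overall status is explicitly requested:

# full report without the health poll (no soft error)
/usr/sbin/smartctl -i -c -A -l error -l selftest -l selective -d sat,12 -T permissive /dev/rdsk/c11t7d0s0

# overall health status only, requested on demand (this call still raises the soft-error counter)
/usr/sbin/smartctl -H -d sat,12 -T permissive /dev/rdsk/c11t7d0s0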
 
I have a question about Napp-it. Not sure if it's a bug or if I have just missed something and this seems to be a pretty active place to ask.

On replication jobs the Job % is always listed as n.a., so I never know how far along it is. Other than this, everything works perfectly. I am a licensed user and I have seen the % work in the past between other boxes, but on this one it's always just n.a.

I am running version 0.9b3 on OpenIndiana 151a7
 
%-values are only shown during the initial sync, where napp-it compares the size of the source and target ZFS filesystems.
This is not possible with incremental snap syncs.
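
If you want a rough idea of progress during the initial sync, you can compare the sizes yourself on both boxes (a manual sketch; tank/data and backup/data are placeholder names for your own source and target filesystems):

# on the source box
zfs list -o name,used,referenced tank/data

# on the target box
zfs list -o name,used,referenced backup/data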
 
_Gea, sorry for such a stupid basic question:

For 4 users trying to stream uncompressed 1080p video (sustained), what is the best config? This is going to stream to OS X computers. NFS? SMB?

Also, are 4 RAID-Z1 vdevs a good layout? This system is NOT A BACKUP. It is needed 99% for speed.

Also, should we use a ZIL or L2ARC? Or just HDDs?

(People other than Gea can answer too! :) )
 
Yo,

I have permission problems that keep recurring after a fresh installation of OI and napp-it. I'm running ESXi 5.1 and reinstalled OI after not being able to log on to the main screen of OI in the ESXi console. Right after the install everything works well, but when I try to log on after a month or more I get permission errors and all I see is the blue wallpaper of OI...
Please help, as I want to install LMS!

 
I recently decided to use napp-it with OpenIndiana.

Does it require that I flash my LSI SAS 2008 card to IT mode, or can I leave it as it is?

Does OpenIndiana require additional drivers to support this card?
 
I vaguely remember that if you don't flash, OI cannot find a driver for it. After the flash it should work.

With that said, I *also* remember it working on a stock M1015 card - and it loaded a different driver! However, I started getting weird driver errors on every second reboot (with VMware ESXi and the M1015s on passthrough). Flashing to IT mode made the errors go away - and loaded a different driver for them too.
 
Sounds like something borked the permissions. Maybe an ACL issue. I've never been able to grok this stuff, so I've moved away from OI and company...
 
Sounds like something borked the permissions. Maybe an ACL issue. I've never been able to grok this stuff, so I've moved away from OI and company...

Solved it - I used the wrong username!

The only help I still need is with installing Logitech Media Server on OI...
 
_Gea, sorry for such a stupid basic question:

For 4 users trying to stream uncompressed 1080p video (sustained), what is the best config? This is going to stream to OS X computers. NFS? SMB?

Also, are 4 RAID-Z1 vdevs a good layout? This system is NOT A BACKUP. It is needed 99% for speed.

Also, should we use a ZIL or L2ARC? Or just HDDs?

(People other than Gea can answer too! :) )

AFP > NFS > SMB (a shame for Apple to ship such a poor SMB implementation;
I suppose it's more a matter of politics than bad engineering - only half as fast as under Windows)

For a multi-user video/editing server, use netatalk and as much RAM as possible.
No need for a ZIL or L2ARC; Raid-Z is ok, best use a Raid-50/60 setup (multiple Raid-Z1/Z2 vdevs)

Multiple mirrors offer better I/O, but I suppose you are better off with multiple Raid-Z vdevs
- RAM for read cache may be the critical point (e.g. socket 2011 boards, up to 768 GB RAM).

Think about 10GbE, at least to the switch
- quite cheap now with the Intel X540-T1 and the new 8-port Netgear 10GbE switch
- look at the new SuperMicro boards with X540 onboard, really cheap for what is included

If you use a Mac Pro, think about the Intel 10GbE as well (Thunderbolt -> 10GbE converters for iMacs/MacBooks are quite expensive)

If you use newer 4k disks, prefer 4, 8 or 16 data disks per vdev for maximum capacity
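
As a rough illustration of such a Raid-50 layout (a sketch with made-up disk names; substitute your own controller/target IDs), four Raid-Z1 vdevs with 4 data disks + 1 parity disk each would be created like this:

zpool create tank \
  raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
  raidz1 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
  raidz1 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
  raidz1 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0

ZFS stripes writes across all four vdevs, which is where the Raid-50-like throughput comes from.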
 
Yo,

I have permission problems that keep recurring after a fresh installation of OI and napp-it. I'm running ESXi 5.1 and reinstalled OI after not being able to log on to the main screen of OI in the ESXi console. Right after the install everything works well, but when I try to log on after a month or more I get permission errors and all I see is the blue wallpaper of OI...
Please help, as I want to install LMS!

I have seen a similar behaviour with ESXi too, but I have no other solution than
putty (remote console) or a reboot.
 
AFP > NFS > SMB (a shame for Apple to ship such a poor SMB implementation;
I suppose it's more a matter of politics than bad engineering - only half as fast as under Windows)

For a multi-user video/editing server, use netatalk and as much RAM as possible.
No need for a ZIL or L2ARC; Raid-Z is ok, best use a Raid-50/60 setup (multiple Raid-Z1/Z2 vdevs)

Multiple mirrors offer better I/O, but I suppose you are better off with multiple Raid-Z vdevs
- RAM for read cache may be the critical point (e.g. socket 2011 boards, up to 768 GB RAM).

Think about 10GbE, at least to the switch
- quite cheap now with the Intel X540-T1 and the new 8-port Netgear 10GbE switch
- look at the new SuperMicro boards with X540 onboard, really cheap for what is included

If you use a Mac Pro, think about the Intel 10GbE as well (Thunderbolt -> 10GbE converters for iMacs/MacBooks are quite expensive)

If you use newer 4k disks, prefer 4, 8 or 16 data disks per vdev for maximum capacity

Thanks sir!
 
I am having permission problems with my napp-it NAS. I am running OmniOS (SunOS ALINEA 5.11 omnios-33fdde4 i86pc i386 i86pc) and need a variety of systems to access shared folders with fairly strict permissions. Setting world read or write access won't work in my environment.

I have many issues, but maybe if I figure out how to resolve one of these problems I'll start figuring out my other permission issues. First, how do I pass root through to the file share?

ZFS File share on OmniOS NAS:

drwx------+ 14 briancl briancl 27 Apr 28 21:45 briancl

From my Ubuntu desktop, I can access this file share as the same user (UID and GID match on both systems). However, I need root on the Ubuntu desktop to be able to access this file share as well. Root gets permission denied:

root@bristol:/mnt/giant# id
uid=0(root) gid=0(root) groups=0(root)
root@bristol:/mnt/giant# ls -la briancl/
ls: cannot open directory briancl/: Permission denied


briancl@bristol:/mnt/giant$ id
uid=1000(briancl) gid=1000(briancl) groups=1000(briancl),27(sudo)
briancl@bristol:/mnt/giant$ ls -ld briancl/
drwx------ 14 briancl briancl 27 Apr 28 21:45 briancl/
briancl@bristol:/mnt/giant$ ls -la briancl/
total 182463
drwx------ 14 briancl briancl 27 Apr 28 21:45 .
drwxr-xr-x 3 root root 0 May 29 15:34 ..



Now, obviously I can access the actual files on the OmniOS NAS as both root and briancl:

# id
uid=0(root) gid=0(root)
# ls -al briancl/
total 364964
drwx------+ 14 briancl briancl 27 Apr 28 21:45 .
drwxrwxrwx+ 10 root root 11 May 29 16:11 ..

How do I grant my root user on the Ubuntu desktop access to briancl's NFS share on the OmniOS NAS?
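
For what it's worth, I suspect the NFS server is simply mapping the client's root (uid 0) to nobody ("root squashing"), which would explain the permission denied. If that is the cause, I believe the fix is the root= option in the share settings - a sketch only, assuming the dataset is called tank/briancl (the option syntax is from share_nfs; the addresses below are placeholders for my network):

zfs set sharenfs='rw,root=bristol' tank/briancl

# or with a network-style access list instead of a hostname
zfs set sharenfs='rw=@192.168.1.0/24,root=@192.168.1.10' tank/briancl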
 
Is there a web interface for remote data access, like Windows Home Server has?

Or is it back to trusty old FTP? Does OpenIndiana have a WebDAV server?
 
Is there a web interface for remote data access, like Windows Home Server has?

Or is it back to trusty old FTP? Does OpenIndiana have a WebDAV server?

You can add a webserver like Apache or a web package like XAMPP
http://www.apachefriends.org/en/xampp-solaris.html

I have not used it myself, but I have heard of people trying ownCloud.
The last thing I heard was a problem with a missing zip module for Apache.
There are binaries for this at http://pkgsrc.smartos.org/packages/illumos/2012Q3/All/

Has anyone successfully installed ownCloud on OI or OmniOS?
 
Hello,

I have two questions about the ZFS filesystem and I would be very grateful if you could help me with answers.

1) Do you have any experience with ZFS fragmentation? We had one storage box with five mirrors in the pool (10 disks + SSD ZIL) and after around one year this pool became very slow. A restart didn't help. In iostat there were really high service times for all of the disks (svc_t of 50 - 500 ms, for example). There was only around 20% free space, but even after I deleted data to bring the pool down to 50% used, nothing changed. Service times were still high under load and the pool stayed slow.

The problem was solved only by destroying the pool and creating a new one (on the same disks). So I would like to know whether this behaviour was caused by fragmentation, or do you have some other explanation?

2) What about thin provisioning of ZVOLs? The Oracle administration guide says "it's not recommended", but they don't say why. Is there any performance penalty or greater fragmentation with refreservation disabled? Or is it not recommended only because you can easily lose track of free disk space?

Thank you!
 
Hello,

I have two questions about the ZFS filesystem and I would be very grateful if you could help me with answers.

1) Do you have any experience with ZFS fragmentation? We had one storage box with five mirrors in the pool (10 disks + SSD ZIL) and after around one year this pool became very slow. A restart didn't help. In iostat there were really high service times for all of the disks (svc_t of 50 - 500 ms, for example). There was only around 20% free space, but even after I deleted data to bring the pool down to 50% used, nothing changed. Service times were still high under load and the pool stayed slow.

The problem was solved only by destroying the pool and creating a new one (on the same disks). So I would like to know whether this behaviour was caused by fragmentation, or do you have some other explanation?

2) What about thin provisioning of ZVOLs? The Oracle administration guide says "it's not recommended", but they don't say why. Is there any performance penalty or greater fragmentation with refreservation disabled? Or is it not recommended only because you can easily lose track of free disk space?

Thank you!

1. Yes, there is fragmentation with reduced throughput on copy-on-write filesystems if a pool is quite full.

more
http://blog.delphix.com/uday/2013/02/19/78/


2. I suppose the danger is over-allocation without keeping an eye on it
http://en.wikipedia.org/wiki/Thin_Provisioning
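
For illustration (a sketch with made-up names and sizes): a thin-provisioned zvol is simply one created with -s, which drops the refreservation, so the pool can hand out more logical space than it physically has:

# thin (sparse) zvol - no refreservation, pool can be over-committed
zfs create -s -V 500G tank/vol1

# thick zvol - refreservation guarantees the space up front
zfs create -V 500G tank/vol2

zfs get volsize,refreservation tank/vol1 tank/vol2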
 
So the only solution for storage pools under heavy load is to defragment from time to time (copy the data to another pool)? And do you think that large svc_t numbers in iostat are a good indicator of a fragmented pool?

Thanks for 2), that is what I thought.
 
For heavy loads, I think the best solution is
- not to fill a pool over 50%, or
- to use SSD-only pools, where fragmentation is not a problem.

For reads, a lot of RAM can help; for sync writes you need a very good ZIL, best a ZeusRAM.
A slow SSD can be worse than no SSD ZIL, see http://napp-it.org/doc/manuals/benchmarks_5_2013.pdf
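
For reference, a dedicated ZIL is just a log vdev added to the pool (a sketch with made-up device names; a mirrored log is the safer variant):

zpool add tank log c4t0d0
# or mirrored:
zpool add tank log mirror c4t0d0 c4t1d0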
 
Interesting benchmark document. Thanks for sharing.

We are using an Intel 311 SLC for the ZIL (20GB). I'm wondering whether some larger MLC drive could be better, for example a 240GB Intel 330.

From our tests we know that a full SSD can be (because of its internal wear-leveling algorithms) as slow as a classic 7200 rpm SATA disk, or even slower. So maybe there would be some benefit in using a large SSD (for example 300GB) but only slicing off a smaller part of it (for example 100 GB), for the L2ARC too.
 
A 311 is not a good ZIL device because it lacks a supercap (possible data loss on power failure) and
it is slow in terms of write performance and latency.

I would create a SCSI target and run a benchmark with sync=always and sync=disabled
to see how good or bad the results are.

Sharing a large Intel SSD between ZIL and L2ARC is not recommended.
Using a large Intel SSD for the ZIL only is ok.
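
The sync behaviour is just a ZFS property, so the test is easy to run both ways (a sketch, assuming the iSCSI backing zvol is called tank/iscsivol):

zfs set sync=always tank/iscsivol      # every write goes through the ZIL
# run the benchmark, then:
zfs set sync=disabled tank/iscsivol    # sync writes are acknowledged from RAM
# run the benchmark again, then restore the default:
zfs set sync=standard tank/iscsivol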
 
Yes, you are right, the Intel 311 seems to be slow as a ZIL. I just ran the benchmark as you suggested.

Sequential write with sync=disabled -> 90 MB/s (over 1Gb/s iSCSI)
Sequential write with sync=always -> 41 MB/s (over 1Gb/s iSCSI)

OK, I will try another SSD. Thanks.

Sharing a large Intel SSD between ZIL and L2ARC is not recommended because of possible data loss on power failure, but not because of a performance penalty, right?
 
No, it is because of performance.
A ZIL must be fast or it's worthless.
 
I made some more SSD tests for the ZIL, following your benchmark. Maybe they will be interesting for others:

[Benchmark screenshots: SLC Intel 311 20GB, sync=disabled / sync=always]

[Benchmark screenshots: MLC Intel 330 60GB, sync=disabled / sync=always]

[Benchmark screenshots: MLC-HET Intel S3700 800GB, sync=disabled / sync=always]


Unfortunately I don't have 10Gb Ethernet, so all tests were run over 1 Gb iSCSI only. Over 10 Gb iSCSI these tests would be more interesting.
 
I am astonished by the minimal performance loss of the Intel S3700.
They are expensive, but much cheaper than a ZeusRAM.
 
I am astonished by the minimal performance loss of the Intel S3700.
They are expensive, but much cheaper than a ZeusRAM.

I think we are seeing the results of exactly what Intel designed this drive for, and that is consistent I/O latency. The high queue-depth writes are incredible compared to the desktop drives!

When I first read the AnandTech review of the S3700 drives (when they introduced their new benchmarks showing SSD I/O consistency as an important factor of performance), I remember seeing the results of the S3700 drives and immediately thinking that they would make excellent "budget" ZIL drives.
http://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review
 