OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

I am not sure what is causing the slowness when copying from the shares to the desktop; 30 MB/s seems slow to me, especially from a striped 2 x SSD to my desktop SSD. :confused:

some options:

copy method
http://forums.servethehome.com/showthread.php?533-SMB-reads-are-slow-on-OpenIndiana-NAPP-IT

nic problem (Windows side)
try other PC
connect directly via a crossover cable

VM problem
try a barebone install
compare to OI live + VMware Tools, optionally the VMXNET3 driver
try more RAM for VM
 
Hi _Gea
just like to say the new napp-it looks really nice, thanks for the big work you do
 
I've got a problem which is frustrating the hell out of me.
Just installed OI and napp-it on ESXi.

Created my pool (raidz over 5 disks). Created my shares and shared them via SMB/NFS with nbmand turned on. Guest access is on, nothing complicated in this setup.

I can access the SMB shares on my laptop - but not on another computer. I can connect via Windows 7 NFS on my laptop to the pool - but can't see any of the shares. However, I share a ZFS folder back to ESXi for the VMs, and that works fine.

I'm really at my wit's end here and need some help - I feel I'm missing something really basic, as this is NOT a complicated setup at all...

Anyone?
 
some options:

copy method
http://forums.servethehome.com/showthread.php?533-SMB-reads-are-slow-on-OpenIndiana-NAPP-IT

nic problem (Windows side)
try other PC
connect directly via a crossover cable

VM problem
try a barebone install
compare to OI live + VMware Tools, optionally the VMXNET3 driver
try more RAM for VM

Thanks Gea, it was the copy method causing the issue as mentioned in the link, I disabled Teracopy, all is well. :D

May I know what's the command / property used by napp-it to allow full access for the CIFS share on Windows without the password prompt? I have already created the share and set permissions to 777 on the folder. Trying to do this from the command line without going through napp-it.
 
I can access the SMB shares on my laptop - but not on another computer

sounds like a Windows problem?

try:
delete all id-mappings, if you have any (see the sketch below)
set the ACL of the shared folder to defaults with everyone@ = modify (napp-it menu ZFS folder)
reboot clients and server
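For the id-mapping cleanup, this is roughly what it looks like from the CLI (a minimal sketch; review the output of the list command before removing anything):

Code:
# list the existing name-based mapping rules
idmap list
# remove all name-based mapping rules (use with care)
idmap remove -a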
 
Thanks Gea, it was the copy method causing the issue as mentioned in the link, I disabled Teracopy, all is well. :D

May I know what's the command / property used by napp-it to allow full access for the CIFS share on Windows without the password prompt? I have already created the share and set permissions to 777 on the folder. Trying to do this from the command line without going through napp-it.

Solaris SMB behaves like Windows: it cares about ACLs, not just Unix permissions.
You must set the ACL for everyone@ to modify or full access.
Easiest way to working defaults: napp-it menu ZFS folder - ACL

or you can use the ACL extension.
Allow trivial ACL and allow user ACL are free

or Google or read the Oracle docs about ACL settings to do it from the CLI

or look at zfs-lib.pl or acllib.pl for the napp-it actions;
the sources are open and documented
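As a CLI example (a minimal sketch, assuming the shared folder is /tank/share - adjust the path, and use modify_set instead of full_set if you only want modify rights):

Code:
# grant everyone@ full access, inherited by new files and directories
/usr/bin/chmod A=everyone@:full_set:file_inherit/dir_inherit:allow /tank/share
# verify the resulting ACL
/usr/bin/ls -V /tank/share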
 
some options:

copy method
http://forums.servethehome.com/showthread.php?533-SMB-reads-are-slow-on-OpenIndiana-NAPP-IT

nic problem (Windows side)
try other PC
connect directly via a crossover cable

VM problem
try a barebone install
compare to OI live + VMware Tools, optionally the VMXNET3 driver
try more RAM for VM

Thanks a lot
Had this problem for a while now with bad read speeds!
Uninstalled Teracopy and that solved the problem!

thanks a lot

Now I only need a tutorial on how to install SABnzbd and Squeezebox Server on OI!
 
Hoping to get some help with a build...

Just finished a MicroServer build and was planning on running an OS with ZFS on top of ESXi 5. The issue is that I am using 5 x 3TB disks and the MicroServer does not support hardware pass-through.

I am considering going bare metal, but I would prefer to have a 2k8 server VM running on this machine as well. (VMs will be on an SSD)

Anyone have experience on how badly ZFS performance is impacted if you give it VMDKs vs hardware pass-through? (These are true 4k block size disks)

The other issue with giving VMDKs to the ZFS VM is that the maximum size VMDK is 2TB, so my thought was to carve out a 2TB and a 1TB VMDK on each disk. I would then put all the 2 TB logical disks into a RAID-Z and all the 1 TB disks into another RAID-Z.

Any thoughts?
 
Looking for a quick confirmation that I am thinking about this appropriately..


Lian Li Q25B connected via onboard controller:

1x Corsair Force GT 60GB - OS (ESXi 5 hopefully)

A passed-through M1015:

6x 3.5" 4TB in a RAIDZ2 - 16TB of usable storage
1x Corsair Force GT 60GB - 8GB ZIL, 52GB L2ARC

to a VM with Solaris 11 Express + napp-it. My question really is down to what to do with the Force GTs - should they be mirrored? Any issues putting the ZIL/L2ARC on one disk?

Workload will be a low-impact VM datastore, NFS for media streaming. (a domain controller, perhaps a dedicated media server, PHP dev server)
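If you do end up putting both roles on the single Force GT, attaching them would look roughly like this (a sketch only, with hypothetical slice names for the 8GB and 52GB partitions):

Code:
# 8GB slice as a separate log device (ZIL), the remaining slice as L2ARC cache
zpool add tank log c5t1d0s0
zpool add tank cache c5t1d0s1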
 
Hoping to get some help with a build...

Just finished a MicroServer build and was planning on running an OS with ZFS on top of ESXi 5. The issue is that I am using 5 x 3TB disks and the MicroServer does not support hardware pass-through.

I am considering going bare metal, but I would prefer to have a 2k8 server VM running on this machine as well. (VMs will be on an SSD)

Anyone have experience on how badly ZFS performance is impacted if you give it VMDKs vs hardware pass-through? (These are true 4k block size disks)

The other issue with giving VMDKs to the ZFS VM is that the maximum size VMDK is 2TB, so my thought was to carve out a 2TB and a 1TB VMDK on each disk. I would then put all the 2 TB logical disks into a RAID-Z and all the 1 TB disks into another RAID-Z.

Any thoughts?

Yea, don't do that :)

ZFS needs access to the bare hard drives to perform as it should... For more info, search this thread...

Matej
 
I was previously running version 0.500s of Napp-it on OpenIndiana.

I went into the menu to upgrade to the latest version (0.8c), which began upgrading and then crashed... Now when I log in I get the following message:

Code:
cat: cannot open /var/web-gui/data/napp-it/zfsos/_lib/lang/en/menus.txt: No such
 file or directory Content-type: text/html
Software error:

Can't locate /var/web-gui/data/napp-it/zfsos/_lib/interface.pl in @INC (@INC contains: 
/var/web-gui/data/napp-it/CGI /usr/perl5/site_perl/5.10.0/i86pc-solaris-64int /usr/perl5/site_perl/5.10.0 
/usr/perl5/vendor_perl/5.10.0/i86pc-solaris-64int 
/usr/perl5/vendor_perl/5.10.0 /usr/perl5/vendor_perl /usr/perl5/5.10.0/lib/i86pc-solaris-64int
 /usr/perl5/5.10.0/lib . /var/web-gui/data/napp-it/zfsos.new/_lib /var/web-gui/data/napp-
it/_my/zfsos/_lib) at admin.pl line 181.

For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.

[Mon Apr 9 10:18:00 2012] admin.pl: Can't locate /var/web-gui/data/napp-it/zfsos/
_lib/interface.pl in @INC (@INC contains: /var/web-gui/data/napp-it/CGI 
/usr/perl5/site_perl/5.10.0/i86pc-solaris-64int /usr/perl5/site_perl/5.10.0 
/usr/perl5/vendor_perl/5.10.0/i86pc-solaris-64int /usr/perl5/vendor_perl/5.10.0 
/usr/perl5/vendor_perl /usr/perl5/5.10.0/lib/i86pc-solaris-64int /usr/perl5/5.10.0/lib . 
/var/web-gui/data/napp-it/zfsos.new/_lib /var/web-gui/data/napp-it/_my/zfsos/_lib) at 
admin.pl line 181.

Anyone have any ideas how to fix this, without losing my napp-it configuration?
 
I think you can just reinstall napp-it via wget...
Make a backup of the configuration directory first...

Matej
 
I'm getting this error every 15 minutes. When playing movies or music the connection to the OI server drops (SMB share called 'storage'), so I am guessing it may be related. Otherwise connectivity is fine most of the time, without any authentication issues.

Apr 9 09:46:50 openindiana smbsrv: [ID 138215 kern.notice] NOTICE: smbd[NT Authority\Anonymous]: storage access denied: IPC only
Apr 9 09:46:50 openindiana last message repeated 7 times
 
I was previously running version 0.500s of Napp-it on OpenIndiana.

I went into the menu to upgrade to the latest version (0.8c), which began upgrading and then crashed... Now when I log in I get the following message:
..

Anyone have any ideas how to fix this, without losing my napp-it configuration?

update via wget from 0.5 or read
http://www.napp-it.org/downloads/changelog_en.html

After an update to 0.8 from 0.5/0.6, or a downgrade to 0.7 and re-update to 0.8, you get an error about a missing interface.pl.
Fix: copy the content of the folder /var/web-gui/data/napp-it/_versions/0.8b/web-gui/data/napp-it/zfsos.new/ to
/var/web-gui/data/napp-it/zfsos/ (or reinstall via wget)
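Something along these lines should do the copy (a sketch only; the folder under _versions depends on which 0.8 release was downloaded, so list it first):

Code:
# see which 0.8 version folder is present
ls /var/web-gui/data/napp-it/_versions/
# copy its zfsos.new content over the broken zfsos folder (example for 0.8b)
cp -r /var/web-gui/data/napp-it/_versions/0.8b/web-gui/data/napp-it/zfsos.new/* \
      /var/web-gui/data/napp-it/zfsos/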
 
Does NAPP-IT work with NexentaStor CE? I know NexentaStor CE already has a GUI, but I just want to compare them side by side. Thanks.
 
update via wget from 0.5 or read
http://www.napp-it.org/downloads/changelog_en.html

After an update to 0.8 from 0.5/0.6, or a downgrade to 0.7 and re-update to 0.8, you get an error about a missing interface.pl.
Fix: copy the content of the folder /var/web-gui/data/napp-it/_versions/0.8b/web-gui/data/napp-it/zfsos.new/ to
/var/web-gui/data/napp-it/zfsos/ (or reinstall via wget)

I think you can just reinstall napp-it via wget...
Make a backup of the configuration directory first...

Matej

Thanks guys, that worked! Got me a bit scared I would lose all my config. I did the wget install again and it's all up and running now!
 
Does NAPP-IT work with NexentaStor CE? I know NexentaStor CE already has a GUI, but I just want to compare them side by side. Thanks.

No, not possible due to restrictions of NexentaStor regarding root access and
different default mountpoints of pools (NexentaStor mounts all pools under /volumes).

Only NexentaCore was supported (this was the free version without a GUI),
but NexentaCore is no longer available (end of life).

The successor is Illumian (already available), which is the base of the next NexentaStor 4.
 
Thanks guys, that worked! Got me a bit scared I would lose all my config. I did the wget install again and it's all up and running now!

It's quite hard to lose data on ZFS, since all share data is stored in the filesystem and not in napp-it config files...

Matej
 
Should I be worried about this? My soft errors have been growing slowly over time.

[attached screenshot: drive soft error counters]
 
I found your setup really interesting; I know the i7-2600 is a great chip (doing FT, VT-d etc).
I'd like to know if you were able to make everything work: FT, VT-d.
I want to buy the same setup as yours (2 like this).
Can you post some pictures of the vSphere client (summary screen of the host)?
And by the way, if everything is working, you would be the only one I have found whose setup is low cost and can run FT and VT-d, also with 32GB of RAM.

Thanks ;)
 
Guys,
I will add one more disk to my setup once I get my mainboard back from RMA and recreate the whole pool, so I will have 7 x 1TB Samsung drives with a sector size of 512 B plus one new disk. The new disks (I'll take a 2TB, it doesn't make sense to buy smaller sizes anymore) almost all come with 4 kB sectors. I understand there are some problems, mainly with Windows XP or older OSes. How about compatibility with Solaris 11 and ZFS? Are there known issues, or any show-stoppers telling me I shouldn't buy such drives? I googled and read some articles, but it was hard to find information related to Sol11 and ZFS...
Thanks,
Cap'
 
Newbie in here and looking for some advice on the best way to proceed.

Currently have 1 x Norco 4220 with 12 x 1.5 TB drives installed on a 3Ware 9500-12, set up in a RAID 5 (11 + 1 hot-spare drives) with NasLite as the appliance SW. Also have a TST ESR316 chassis with 8 x 2 TB drives and 8 x 1 TB drives, all attached to a pair of 9550-8 controllers, again running NasLite (this time the drives are configured in R0).

I have on order an IBM M1015 HBA and an Intel RES2SV240 expander.

Am going to rebuild the Norco chassis with both these new cards + a suitable MB/CPU combination and would appreciate any advice on the best way to go with this build.

The server will be used solely for media storage to stream out to my various watching positions.

Current idea is to go OI / napp-it with the ZFS file system, but I don't have any real Linux experience, so ideas welcome.

TIA

Doug
 
Newbie in here and looking for some advice on the best way to proceed.

Currently have 1 x Norco 4220 with 12 x 1.5 TB drives installed on a 3Ware 9500-12, set up in a RAID 5 (11 + 1 hot-spare drives) with NasLite as the appliance SW. Also have a TST ESR316 chassis with 8 x 2 TB drives and 8 x 1 TB drives, all attached to a pair of 9550-8 controllers, again running NasLite (this time the drives are configured in R0).

I have on order an IBM M1015 HBA and an Intel RES2SV240 expander.

Am going to rebuild the Norco chassis with both these new cards + a suitable MB/CPU combination and would appreciate any advice on the best way to go with this build.

The server will be used solely for media storage to stream out to my various watching positions.

Current idea is to go OI / napp-it with the ZFS file system, but I don't have any real Linux experience, so ideas welcome.

TIA

Doug

For the price of a single expander, you can buy 2-3 IBM M1015s (I suggest reflashing them to IT mode).
They are cheaper and give you better performance and a trouble-free system.

There are reports about problems with expanders (http://hardforum.com/showthread.php?t=1548145) and SATA disks. I have also had problems after months of trouble-free use - the real cause was unclear, aside from a disk problem that was hard to track down behind a blocking expander.

I would currently avoid SAS expanders with SATA disks when not needed.

For the mainboard I would look at SuperMicro boards with Intel server chipsets and Intel NICs,
like the Sandy Bridge X9 series. They have enough fast PCIe slots, IPMI and ECC RAM.
Example: http://www.supermicro.nl/products/motherboard/Xeon/C202_C204/X9SCL_-F.cfm
 
For the price of a single expander, you can buy 2-3 IBM M1015s (I suggest reflashing them to IT mode).
They are cheaper and give you better performance and a trouble-free system.

There are reports about problems with expanders (http://hardforum.com/showthread.php?t=1548145) and SATA disks. I have also had problems after months of trouble-free use - the real cause was unclear, aside from a disk problem that was hard to track down behind a blocking expander.

I would currently avoid SAS expanders with SATA disks when not needed.

For the mainboard I would look at SuperMicro boards with Intel server chipsets and Intel NICs,
like the Sandy Bridge X9 series. They have enough fast PCIe slots, IPMI and ECC RAM.
Example: http://www.supermicro.nl/products/motherboard/Xeon/C202_C204/X9SCL_-F.cfm

Gea

Thanks for the reply, and yes, that is a possible way to go, but it adds quite a lot of expense for me (I already have various MBs, CPUs etc., but nothing with a few PCIe slots at sufficient speed). What I would really like to know, of course, is the best way to utilise the drives that I have. Is it possible to have drives in the same scheme that are across more than one controller?

I had already decided to flash to IT mode. The expander has had some decent reports, as it uses the LSI chipset rather than the one on the HP expanders, and I have found it for $145 including a full set of cables, so it comes in at the same cost as a pair of M1015s for me here in the UK.

Again, thanks for any advice given.

Doug
 
What I would really like to know, of course, is the best way to utilise the drives that I have. Is it possible to have drives in the same scheme that are across more than one controller?

A ZFS pool is built from vdevs made up of any disks on any controller.
So it is not a problem to use several controllers and onboard SATA at once.
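As an illustration (a sketch with made-up device names; disks behind the M1015 and on the onboard SATA ports can sit in the same vdev):

Code:
# six disks, three on one controller and three on another, in a single raidz2 vdev
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c3t0d0 c3t1d0 c3t2d0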
 
A ZFS pool is built from vdevs made up of any disks on any controller.
So it is not a problem to use several controllers and onboard SATA at once.

Excellent, that was what I was hoping to hear, so the next question is: what is the best way to utilise my 8 x 2TB + 12 x 1.5TB and 8 x 1TB drives? (The 8 x 1TB are not that important at the moment and may never be used, as the TST 16-bay chassis will probably get re-used as an extension chassis with 16 larger drives as and when funds allow.)

My thinking was to have a vdev of the 8 x 2TB and another with the 12 x 1.5TB, but I don't really know enough about this stuff to decide.

Regards

Doug
 
Excellent, that was what I was hoping to hear, so the next question is: what is the best way to utilise my 8 x 2TB + 12 x 1.5TB and 8 x 1TB drives? (The 8 x 1TB are not that important at the moment and may never be used, as the TST 16-bay chassis will probably get re-used as an extension chassis with 16 larger drives as and when funds allow.)

My thinking was to have a vdev of the 8 x 2TB and another with the 12 x 1.5TB, but I don't really know enough about this stuff to decide.

Regards

Doug

ZFS is very flexible. You can do a lot of useful and useless things.
If you build a pool from two vdevs with different disks, this is ok.

You must only know: you cannot shrink or extend a vdev.
All settings are forever, even if they are not optimal. You must destroy a pool to fix them.

You should therefore care about efficiency (vdevs should have a similar number of disks),
performance (they should perform similarly) and redundancy (they should have similar redundancy).

If you build two vdevs, each Raid-Z2, you are quite well off.
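For the drives above, that could look roughly like this (a sketch only, with made-up device names: the 8 x 2TB as one raidz2 vdev and the 12 x 1.5TB as a second):

Code:
# one pool built from two raidz2 vdevs; ZFS stripes writes across both
zpool create tank \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
  raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0 c3t10d0 c3t11d0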
 
Yo,

Would it be possible to install SABnzbd and Squeezebox Server with a setup link in the napp-it menu?
And how would one install those on OI? I would like to completely eliminate my QNAP NAS usage and only use my OI server with napp-it!
Sorry for the noob question!

Ty
 
Question: Why does drive R/W throughput seem to scale with the number of drives in a single vdev, when IOPS doesn't?

My question is not why IOPS doesn't increase, but rather why the throughput increases with the same number of IOPS.
 
Question: Why does drive R/W throughput seem to scale with the number of drives in a single vdev, when IOPS doesn't?

My question is not why IOPS doesn't increase, but rather why the throughput increases with the same number of IOPS.

If you read or write a single datastream from a non-fragmented disk, your transfer speed is only limited by the interface, the rotation speed, the data density on the disk and the ability of your CPU/RAM/NIC/source/target to handle the data.

If you have a RAID where you can read or write to all disks simultaneously, the effective speed can be up to the sum of the sequential performance of all disks.

If your disks are fragmented, or if you have multiple datastreams or lots of small files, I/O is the limiting factor, not sequential performance, because you have to position all heads on all disks prior to a read or write. (With 10 disks in a RAID, the total positioning time of all heads is about the same as for a single disk; this is the reason why I/O is similar to one disk.)
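A rough back-of-the-envelope illustration (the figures are typical assumptions for a 7200 rpm disk, not measurements from this thread):

Code:
# one disk: ~120 MB/s sequential, ~100 random IOPS
# a 10-disk stripe: sequential scales, random IOPS per vdev stays near one disk
DISKS=10; SEQ_MBPS=120; IOPS=100
echo "sequential: ~$((DISKS * SEQ_MBPS)) MB/s"
echo "random IOPS (single vdev): ~$IOPS"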
 
Have you tried different lmauth settings
use 3 with S11 and OI 151a1, otherwise 2 is suggested

OK. I tried all that and every other possible variation. Here's the output in napp-it

Code:
Try to join domain :

step 1: Timesync with ad-server 192.168.1.221: 10 Apr 18:55:17 ntpdate[7245]: no server suitable for synchronization found
step 2: set dns entry in /etc/resolv.conf: 
step 3: set /etc/krb5/krb5.conf: 
step 4: try to join via: smbadm join -u Administrator foobar.net - please wait ....

Joining technicalmultimedia.net ... this may take a minute ...
failed to join technicalmultimedia.net: UNSUCCESSFUL
Please refer to the system log for more information.

And in /var/adm/messages:
Code:
Apr 10 18:52:52 nas5 smbd[3017]: [ID 504979 daemon.notice] ldap_add: Operations error
Apr 10 18:52:52 nas5 smbd[3017]: [ID 702911 daemon.notice] Failed to create the workstation trust account.
Apr 10 18:52:52 nas5 smbd[3017]: [ID 871254 daemon.error] smbd: failed joining foobar.net (UNSUCCESSFUL)

Is there a way to get more verbose logs?

Somehow my other, main NAS (exact same OS and hardware OI_151a) is joined and working.
 
Joining technicalmultimedia.net ... this may take a minute ...
failed to join technicalmultimedia.net: UNSUCCESSFUL
Please refer to the system log for more information.

UNSUCCESSFUL happens mostly when:
- there is already a computer account for this host in the domain -> delete it and retry
- the SMB service is not working -> SMB-share a folder, restart the service or reboot (see the sketch below)
- wrong user/PW
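For the service-restart option, something like this should work (a sketch; the service FMRI is the standard one on OI, and the join command is the one from your log):

Code:
# restart the kernel SMB server, then retry the domain join
svcadm restart svc:/network/smb/server:default
smbadm join -u Administrator foobar.net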
 
Is anyone here using NexentaStor (free)?

How does it compare with OI and Solaris 11?

Its 18TB limit bothers me a little, but its web management seems very nice!

Does anyone know if it suffers from the same VMXNET3 NIC issue as Solaris 11/OI with ESX?

Thanks!
 
Is anyone here using NexentaStor (free)?

How does it compare with OI and Solaris 11?

Its 18TB limit bothers me a little, but its web management seems very nice!

Does anyone know if it suffers from the same VMXNET3 NIC issue as Solaris 11/OI with ESX?

Thanks!

Which VMXNET3 issue? I'm using VMXNET3 on OI without any issues at all!

And on a side note, OI w/ napp-it can't be beaten. I found it to be a fantastic combo, with minimal maintenance.
 
@_GEA:

home -> System -> Power Mgmt -> edit powerconf:

Seems to be missing the "Submit" button on 0.8c (OI)

BR Jim
 
Which VMXNET3 issue? I'm using VMXNET3 on OI without any issues at all!

And on a side note, OI w/ napp-it can't be beaten. I found it to be a fantastic combo, with minimal maintenance.

People have had issues with hangs, performance spikes, etc...
 
Are there any reasons, apart from the TB limit and missing encryption support, against using NexentaStor Community Edition?
Are there any special considerations regarding ESXi?
 