OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

This setup is awesome; I didn't expect to be able to browse all the snapshots and access them instantly from Windows 7 over SMB using Previous Versions. :D

On a side note, I haven't built the raidz2 yet, but what kind of read/write speeds are typical for a single-disk pool?

currently getting:

write 10.24 GB via dd, please wait...
time dd if=/dev/zero of=/test/dd.tst bs=1024000 count=10000

10000+0 records in
10000+0 records out

real 2:21.8
user 0.0
sys 6.1

10.24 GB in 141.8s = 72.21 MB/s Write

read 10.24 GB via dd, please wait...
time dd if=/test/dd.tst of=/dev/null bs=1024000

10000+0 records in
10000+0 records out

real 1:08.5
user 0.0
sys 3.5

10.24 GB in 68.5s = 149.49 MB/s Read

The read speed looks good, but this disk (ST33000651AS) benchmarks at about 120 MB/s average write speed in Windows, formatted NTFS... I expect to have no problem saturating gigabit once the raidz2 is set up, but I'm curious what single-disk write speeds are typical compared to Windows/NTFS.
 
The read speed looks good, but this disk (ST33000651AS) benchmarks at about 120 MB/s average write speed in Windows, formatted NTFS... I expect to have no problem saturating gigabit once the raidz2 is set up, but I'm curious what single-disk write speeds are typical compared to Windows/NTFS.

The bad with single disks:
ZFS is slower than NTFS/ext3 and others due to copy-on-write, checksums and
more complex data structures.

The good:
It is more secure because of those same things.

The best:
It scales well with the number of disks, and best with the number of vdevs.
With enough disks/vdevs you can reach almost any numbers.

Sun developed ZFS for datacenters, not desktops.
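
To illustrate the vdev point with a rough sketch (pool and disk names are made up, adapt them to your own devices): a single raidz2 vdev behaves roughly like one disk for random I/O, while a pool built from several mirror vdevs stripes across all of them:

Code:
# one raidz2 vdev: good capacity, roughly one disk's worth of random IOPS
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# three mirror vdevs: writes are striped over all three vdevs
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0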
 
Hi Gea,

Did you have a look at the issue with importing previously created pools using napp-it?

Regards

David
 
I'm running into an issue where napp-it is spamming my console.

Code:
# /etc/init.d/napp-it start
# perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/agent-init.pl
/var/web-gui/tools/httpd/napp-it-mhttpd: started as root without requesting chroot(), warning only


============================================================

-----
 job requested: /var/web-gui/data/napp-it/_log/tmp/read_disk.request

----
agent-request -> request: /var/web-gui/data/napp-it/_log/tmp/read_disk.request

#########
 other monitoring requested:
agent request 19.11.2011, 20:25 23 s: get-disk


============================================================

-----
 job requested: /var/web-gui/data/napp-it/_log/tmp/read_zfs.request

----
agent-request -> request: /var/web-gui/data/napp-it/_log/tmp/read_zfs.request
agent request 19.11.2011, 20:25 24 s: get-zfs

#########
 other monitoring requested:

#########
 other monitoring requested:

#########
 other monitoring requested:

#########
 other monitoring requested:

#########
 other monitoring requested:

Even inside screen, or after using su to change users, it still spams the console. Is there a way to disable this, or is this an issue with the latest 0.6i nightly? I just updated from 0.560 to the latest nightly.
 
Gea, or anyone using AFP: have you seen a console message saying that two instances of afpovertcp are running? This happens on first boot. Is this an issue?

Paul
 
Even inside screen, or after using su to change users, it still spams the console. Is there a way to disable this, or is this an issue with the latest 0.6i nightly? I just updated from 0.560 to the latest nightly.

These are status messages from napp-it's background agents, used for debugging.
In one of the next versions they will show only errors.
 
@Gea:

You remember my problems described here -> http://hardforum.com/showpost.php?p=1038029854&postcount=2027

I have now set up my router (DD-WRT) to act as a DNS server, and that seems to work just fine, since I can now access my ESXi host by hostname.

However, my WDTVLive media streamer still shows the napp-it box as "WORKGROUP" and not its actual name (NAS01). A restart of the network services from the napp-it interface solves the problem. I've tried everything, but without any luck. They all reside in the same workgroup, and I've re-joined napp-it to the workgroup successfully, but still no go on the WDTVLive box (unless I restart the network services from the napp-it interface).

Any pointers as to where I can investigate further?
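
For reference, I assume the napp-it "restart network services" button does something equivalent to restarting the SMB server service via SMF; the manual version would be roughly:

Code:
# restart the kernel SMB/CIFS server and check it comes back online
svcadm restart svc:/network/smb/server
svcs svc:/network/smb/server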

Thank you
Best regards
Jim
 
I'm following the all-in-one guide (http://www.napp-it.org/doc/downloads/all-in-one.pdf) and ran into a problem on step 7.9, installation of OpenIndiana 151a.

My server has the following parts:
  • Supermicro X9SCM-F-O
  • Xeon E3-1230
  • 16 GB ECC unbuffered RAM
  • Supermicro AOC-USAS2-L8i (SAS2008 based HBA)
  • Chenbro CK23601 (SAS Expander)

I have the VM configured as specified in the guide, the HBA passed through to the VM, and the ISO loaded in the virtual CD-ROM drive. The live DVD boots up and gets to the desktop, but when I attempt to begin the installation the install window closes within a second. Launching it via the command line (/usr/bin/pfexec /usr/bin/gui-install) shows that the installer is encountering a segmentation fault.

If I remove the HBA (via passthrough) from the VM it will run the installer just fine. Any thoughts on what to try?
 
Try doing the install without the HBA present, shut down the VM, add the HBA, and see if it boots?
 
I just finished 24 hours of memtest without any errors, so that rules out the RAM.

I flashed it with the latest release that Supermicro support had, IT firmware P10. I know the card is LSI-based, but is it OK to grab P11 from LSI and use that instead of the latest Supermicro firmware? I wasn't sure if there was any customization in the board/firmware that would cause problems with stock LSI firmware.
 
I just finished 24 hours of memtest without any errors, so that rules out the RAM.

I disagree that 24 hours rules out the RAM. At work I had a system that failed memtest after around 72 hours, and it was repeatable. I replaced the RAM and had no problems at all.
 
These are status messages from napp-it's background agents, used for debugging.
In one of the next versions they will show only errors.
Background agents writing to stdout is sloppy coding. Warnings, info and error messages should all go to a log file instead of stdout. You can't monitor stdout the way you can a log file, so unless someone is sitting at the console all the time, it is very easy to miss issues.
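
As a stopgap until then, one could presumably start the agent with its output redirected to a file instead of the console (just a sketch; the log file name here is my own choice, not an official napp-it location):

Code:
# start the background agent with stdout/stderr going to a log file
perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/agent-init.pl \
  >> /var/web-gui/data/napp-it/_log/agent-console.log 2>&1 &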
 
P10 IT firmware should work fine; that's what most people are running as of today.
Most controllers based on LSI chipsets from other channels are re-branded OEM versions.
I've seen many people report successfully cross-flashing these with stock LSI firmware.
Like here: http://lime-technology.com/forum/index.php?topic=12767.0
I personally cross-flashed some IBM M1015 cards to IT mode (turning them into 9211-8i, or strictly speaking 9210-8i, cards) without any problems... running a napp-it all-in-one with P10 IT firmware now.
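
For a card that is already on LSI IT firmware, the P10 -> P11 update itself is basically just this from a DOS boot stick (from memory, so treat it as a sketch; the file names are whatever ships in the P11 package you download from LSI):

Code:
rem list the controllers and note the number used with -c below
sas2flsh -listall
rem flash the IT firmware and (optionally) the boot BIOS
sas2flsh -o -f 2118it.bin -b mptsas2.rom -c 0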
 
...what firmware is the controller running?
Make sure it is on IT-FW...P11 is the current version.
I cross-flashed to LSI's P11 firmware but still had the same segmentation fault issue with the OI installer.

I disagree that 24 hours rules out the RAM. At work I had a system that failed memtest after around 72 hours, and it was repeatable. I replaced the RAM and had no problems at all.
Good to know. I'll let it run longer to fully test when I have more time.

Try doing the install without the HBA present, shut down the VM, add the HBA, and see if it boots?
Your suggestion worked. I removed the AOC-USAS2-L8i, ran the OI-151a installer, powered down, added the AOC-USAS2-L8i, powered on, and the card/drives were detected and working.

Thanks for all the help everyone! Now to do some reading and figure out how I want to build my zpool/vdevs. I have 4x500 GB and 10x2 TB drives.
 
How big is a Solaris 11 Express install with napp-it and any other needed goodies? Basically I'm wondering if an 80 GB SSD is enough for ESXi and the Solaris VM, or if I need to get a 120 GB or larger drive?

Thanks!
 
How big is a Solaris 11 Express install with napp-it and any other needed goodies? Basically I'm wondering if an 80 GB SSD is enough for ESXi and the Solaris VM, or if I need to get a 120 GB or larger drive?

Thanks!
My ESXi 5 + OpenIndiana (full install) + napp-it is taking up 13 GB of my 60 GB SSD. I would think you have more than enough with 80 GB.
 
How big is a Solaris 11 Express install with napp-it and any other needed goodies? Basically I'm wondering if an 80 GB SSD is enough for ESXi and the Solaris VM, or if I need to get a 120 GB or larger drive?

Thanks!

I would use a disk of at least 12 GB.
 
About GUIDs on newer SAS2 controllers

If you own an LSI SAS2 controller that displays unique disk GUIDs like c3t600039300001EA56d0
instead of the former controller/slot IDs like c1t12d0, then you have an advantage and a problem.

Advantages of GUIDs: you always keep the same disk number, even after changing slot, controller or server.
see http://www.google.de/url?sa=t&rct=j...sg=AFQjCNEWGQm2pJdybLqOR5MkEVd_NUcBxw&cad=rja

Problems of GUIDs:
you must write down these numbers plus the slot they sit in, to identify the correct disk in case of problems.

With the current nightly of napp-it I am working on this problem, to provide a GUID -> physical slot mapping
- at least with LSI SAS2 controllers like the LSI 9211-8i and compatible models, and LSI/Intel SAS2 expanders.
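
Until that is in the GUI, one way to build such a GUID-to-slot list by hand on LSI SAS2 HBAs is LSI's sas2ircu utility, if it is installed (output details vary between versions):

Code:
# list the LSI SAS2 controllers in the system
sas2ircu LIST
# show enclosure number, slot number, serial number and GUID of every attached disk
sas2ircu 0 DISPLAY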
 
I have a server with two mirrored drives that make up the rpool. The problem is that they hang off a Dell PERC 5/i controller. I recently bought a Dell SAS 6/iR controller and would like to move the drives to the new controller.

Initially I thought I'd take one of the drives offline (disconnect it from the controller), plug it into the new controller, and then do a zpool replace. But that did not work. It keeps complaining:

invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c11t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).

I get the same error with or without the -f flag, and the same error when trying to 'attach' the device instead.

Any tips on how to cleanly move the drives to the new controller?
 
Just posting some additional steps...

I've tried 'resetting' the disk by going into fdisk and deleting the partition altogether, then created a new partition using fdisk -B /dev/rdsk/c11t0d0s0.

Tried
prtvtoc /dev/rdsk/c7t0d0s0 | fmthard -s - /dev/rdsk/c11t0d0s0

This gave me the following message:

fmthard: Partition 2 specifies the full disk and is not equal
full size of disk. The full disk capacity is 156248190 sectors.
fmthard: New volume table of contents now in place.

I do see that after moving the drive over to the new controller, its size is larger than the drive I'm trying to replace. But I thought it was OK to attach mirror devices that are bigger in size.

If all else fails, I can reinstall Solaris on the new drives, but that would mean several hours of reconfiguration and setup that I was hoping to save by moving the drives over.
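
For the record, here is what I plan to try next, pieced together from the docs (only my reading, not verified, and the name of the pulled disk is a placeholder): detach the pulled disk from rpool under its old name, wipe the stale ZFS labels on the moved disk (ZFS keeps label copies at both the start and the end of the device), copy the slice layout over, attach it as the new mirror half, and reinstall the boot blocks.

Code:
# placeholder name: use whatever 'zpool status rpool' still shows for the pulled disk
zpool detach rpool c1t1d0s0

# wipe the stale labels at both ends of the moved disk
# (destroys everything on c11t0d0 -- triple-check the device name;
#  156248190 is the disk's sector count from the fmthard message above)
dd if=/dev/zero of=/dev/rdsk/c11t0d0p0 bs=512 count=20480
dd if=/dev/zero of=/dev/rdsk/c11t0d0p0 bs=512 count=20480 seek=156227710

# copy the slice layout from the surviving rpool disk and attach the new half
prtvtoc /dev/rdsk/c7t0d0s0 | fmthard -s - /dev/rdsk/c11t0d0s0
zpool attach rpool c7t0d0s0 c11t0d0s0

# make the new mirror half bootable
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c11t0d0s0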
 
Welcome back, Gea. napp-it on Solaris 11 Express is working great. Vielen Dank! Your GUI is actually helping me learn the command line by removing the barrier to entry of setting everything up initially, and then letting me find out which command-line options I need to maintain it. If I had to set it all up from the command line to start with, I would never have bothered.

@Ruroni: re: snapshots. You do realize that if you're using a Windows client to connect to an SMB share on your ZFS pool, you can right-click a folder that has been snapshotted and use the "Restore previous versions" function? Works great, just like the Volume Shadow Copy Service on Windows Server.
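
The only prerequisite is that snapshots of the shared filesystem actually exist, either from napp-it's auto-snap jobs or taken by hand; for example (pool and filesystem names are made up):

Code:
# take a snapshot of the shared filesystem; it should then appear
# in the Windows "Previous Versions" tab for that share
zfs snapshot tank/media@2011-11-23
# list the snapshots to confirm
zfs list -t snapshot -r tank/media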
 
Gea,
I now have a NAS and a small SAN up and running with OI 151a + napp-it 0.6i (updated from 0.6d, but it was still working yesterday). Things are running well, but I came in this morning to find I can't access the web interface of the SAN. It is running, the pools are healthy and the VMs are running; I just get this error about 2 seconds after seeing the "initialize napp-it..." message:
(lib hash2file ) Datei /var/web-gui/data/napp-it/_log/tmp/zfs.cfg konnte nicht geschrieben werden.)
Fragen Sie bei Bedarf Ihren Systembetreuer Content-type: text/html
Software error:

(lib hash2file ) Datei /var/web-gui/data/napp-it/_log/tmp/zfs.cfg konnte nicht geschrieben werden.) at admin-lib.pl line 422.

For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.
[Wed Nov 23 09:06:42 2011] admin.pl: (lib hash2file ) Datei /var/web-gui/data/napp-it/_log/tmp/zfs.cfg konnte nicht geschrieben werden.) at admin-lib.pl line 422.

I have attempted twice to restart napp-it from the terminal as root with /etc/init.d/napp-it start, but no change. Any ideas?
 
Gea,
I now have a NAS and a small SAN up and running with OI 151a + napp-it 0.6i (updated from 0.6d, but it was still working yesterday). Things are running well, but I came in this morning to find I can't access the web interface of the SAN. It is running, the pools are healthy and the VMs are running; I just get this error about 2 seconds after seeing the "initialize napp-it..." message:


I have attempted twice to restart napp-it from the terminal as root with /etc/init.d/napp-it start, but no change. Any ideas?

This is a write error due to wrong permissions.
If you have this file, delete it or set its permissions to 666 or 777:

sudo rm /var/web-gui/data/napp-it/_log/tmp/zfs.cfg

and then update to 0.6k:

http://napp-it.org/downloads/changelog_en.html
 
Yo,

I need some help installing napp-it! I followed the tutorial and installed Solaris 11, but at the command line I get an error while installing napp-it.
When I enter wget -O - www.napp-it.org/nappit | perl it doesn't load!
[screenshot of the error attached]
 
First, you obviously need internet access. Assuming you have that:

You need root permission before you install napp-it. At the command line type: su
It will prompt for the root password, after which your command-line prompt should change to:
admin@solaris:~#

Then the install should work!
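
In other words, the whole install is just (same command as in your post, run as root):

Code:
su
wget -O - www.napp-it.org/nappit | perl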
 
Leave a space after -O -

Nothing personal, but maybe some people just ought to use Windows or something.
 
...has anyone tried running OI or SE on a KVM hypervisor, like Proxmox VE?
As it looks, PCI passthrough (aka VMDirectPath on ESXi) is now possible with KVM after all (see http://forum.proxmox.com/threads/6952-PCI-PCIe-passthrough?p=43180#post43180).
So a windoze-free system build with a napp-it all-in-one should be possible... happy days ;-)

P.S.: unfortunately I don't have a vt-d capable system to spare at the moment... I will try it as soon as I can, but I'm afraid I have to wait until disk prices drop again.
 
Yo,

Another noob question... so bear with me...
I created a pool (MEDIA) with 2 vdevs and an SSD write cache.
I enabled Samba, and the server shows up on my Win7 PC, but that's it!
Do I need to add a Samba folder to the pool to be able to move data to it?

gr33tz

EDIT1: created share under pool called Qmedia but it still doesn't show under windows7...
EDIT2: solved it myself..thanks for all the help
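
For anyone else who lands here: the CLI version of what it roughly comes down to looks like this (from memory, filesystem and share names are mine):

Code:
# create a filesystem for the share and publish it over the kernel SMB server
zfs create MEDIA/Qmedia
zfs set sharesmb=name=Qmedia MEDIA/Qmedia

# make sure the SMB service is running and joined to the right workgroup
svcadm enable -r svc:/network/smb/server
smbadm join -w WORKGROUP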
 
so here's a stupid question.

I'm thinking of setting up a Solaris Express + napp-it build, and I know this must sound very dumb, but for the life of me I can't find where to download Solaris 11 Express.

I go to the Oracle website and all I can find are the downloads for regular Solaris. So I downloaded one of the Solaris installation images and installed it on a machine, but I'm thinking it would probably be better to go with the Express version, and my stupidity prevents me from finding where to download it.
 
so here's a stupid question.

I'm thinking of setting up a Solaris Express + napp-it build, and I know this must sound very dumb, but for the life of me I can't find where to download Solaris 11 Express.

I go to the Oracle website and all I can find are the downloads for regular Solaris. So I downloaded one of the Solaris installation images and installed it on a machine, but I'm thinking it would probably be better to go with the Express version, and my stupidity prevents me from finding where to download it.

Oracle Solaris 11 Express was the preview of Oracle Solaris 11.
Now that Solaris 11 final is out, it has replaced Express under the same license conditions.
 
1. try a "su -" to get a fully populated shell with root.
2. it is "wget -O - www.napp-it.org/nappit | perl", not "wget -O -www.napp-it.org/nappit | perl" -> there is a space missing in what you typed.
try -> "wget<space>-O<space>-<space>www.napp-it.org | perl"

wget<space>-O<space>-<space>www.napp-it.org<space>|<space>perl
 
wget<space>-O<space>-<space>www.napp-it.org/nappit<space>|<space>perl

Well, it worked with the previous one too!

One question remains:

Under NFS and FTP it is marked TODO... does that mean it doesn't work yet? Under settings it's inactive but enabled! I managed to set up an SMB share, but NFS and FTP remain inactive.
Or am I doing something wrong?

Gr33tz
 
Well, it worked with the previous one too!

One question remains:

Under NFS and FTP it is marked TODO... does that mean it doesn't work yet? Under settings it's inactive but enabled! I managed to set up an SMB share, but NFS and FTP remain inactive.
Or am I doing something wrong?

Gr33tz

WWW and FTP management via the web GUI is still a todo;
you can do it via the CLI.

NFS and SMB show as active only if you have active shares
(assuming you have enabled the service).
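
For example, sharing a filesystem over NFS from the CLI is just a property (the filesystem name below is only an example):

Code:
# enable the NFS server and share a filesystem
svcadm enable -r svc:/network/nfs/server
zfs set sharenfs=on MEDIA/Qmedia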
 
First off, thank you Gea. I have been following this thread and a few others here as I slowly grow my home media server. It has gone from WHS to Win7 and is now moving to OI. My current main server runs Win7 with integrated Intel RAID, a RocketRAID 2340 and an Areca 1680 with an HP expander... about 30 TB total. So I picked up an Intel S5000PSL board with two quad-core Xeon E5345s (2.33 GHz) and 16 GB RAM, plus (2) MV8 controllers on eBay, to start playing with napp-it. I got it installed, mirrored the boot drive, and have had a Z1 running for about a month without issue. (I even started yanking out drives, letting it resilver and running scrubs to get comfortable with the process.) It's a tank... it just sits there churning since I put it in the rack. So now my new mission: find a way to migrate everything without losing 2000 DVDs and 250 TV series ;)

Questions...
I know it's not ideal... but will the Areca 1680 and expander work out OK? I know I won't be using the RAID functions, just passing the disks straight through to OI.

I figure the HighPoint is a no-go... but I assume it would still be better than the MV8s in the PCI-X slots, if it works...
Any input would be greatly appreciated... I know it's not an ideal scenario, but it's what the budget allows at the moment... help me make it happen ;)

A little more about the available hardware... enclosures I have:
32+2 bay Lian Li 343 cube -- current Win7 server
15 bay Supermicro -- current expander enclosure
16+2 bay Rackables -- current napp-it server

I figured I would keep the napp-it server where it is and move the expander to the Lian Li case.

Again open to suggestions here...

Thanks
 