OpenZFS NAS (BSD, Illumos, Linux, OSX, Solaris, Windows + Storage Spaces) with napp-it web-gui

I suppose you have missed a "-" in wget:
wget -O - www.napp-it.org/nappit | perl

More info is in my updated napp-it.pdf.

(screenshot: nappitpdf.png)


Please also reread the first thread: http://hardforum.com/showthread.php?t=1573272
I update it quite often.

Gea
 
If you are using a static IP address, you have to disable the network manager and use the default network service.
http://forums.oracle.com/forums/thread.jspa?threadID=2139833 is what I followed.
Basically:
svcadm disable network/physical:nwam
svcadm enable network/physical:default

Or you can try making the changes to /etc/default/dhcpagent per that page.

And from there set up your IP information manually, using directions from http://wiki.sun-rays.org/index.php/SRS_5.1_on_Solaris_11_Express (not the packages, just the legacy network part)

Basically, even if you have a static IP address set, in some cases Solaris 11 still does stupid things with DHCP and overwrites some files (like resolv.conf).
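Putting the steps above together, a minimal sketch of the "legacy network" setup on Solaris 11 Express might look like this; the interface name (e1000g0) and all addresses below are placeholders, substitute your own values:

```shell
# switch from NWAM to the classic network service (as described above)
svcadm disable network/physical:nwam
svcadm enable network/physical:default

# static address for the interface (netmask can go in /etc/netmasks)
echo "192.168.1.10" > /etc/hostname.e1000g0

# default gateway and DNS server
echo "192.168.1.1" > /etc/defaultrouter
echo "nameserver 192.168.1.1" > /etc/resolv.conf

# let name lookups use DNS as well as local files
# (Solaris ships a ready-made template for this)
cp /etc/nsswitch.dns /etc/nsswitch.conf

# apply without a reboot
svcadm restart network/physical:default
```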

Thanks, this worked. I had to redo my network config from scratch, but now it's holding the settings.
 
I suppose you have missed a "-" in wget:
wget -O - www.napp-it.org/nappit | perl

Hi Gea,

On that screenshot, I just missed typing the "-". The error message stayed the same. What I was actually missing was the space between the "-" and "www.napp-it.org". I installed it, and it is working just fine now.

Thank you for providing this tool. I have about 3 days of OpenSolaris behind me and this makes my life a lot easier. Without napp-it, I would have needed to go the Ubuntu ZFS-Fuse route.

Now I'm just waiting for my 8 GB of RAM so I can run iTunes in VirtualBox, and my home server will be complete. :)
 
Just to inform you:

there is a new napp-it nightly (0.415),
installable via the nappit04 installer, plus an updated miniHowTo

- new: different CSS for Nexenta, OpenIndiana and Solaris Express
- updated info in the main menus
- some minor bugfixes


miniHowTo: http://www.napp-it.org/doc/downloads/napp-it.pdf

Gea
 
Can you provide us with upgrade instructions please, for those of us who have older versions of napp-it?
 

Hello dropbear,

a first-time install and an update to the current version are both done by:
wget -O - www.napp-it.org/nappit | perl



If you want to try the very newest 0.415c nightly release:
wget -O - www.napp-it.org/nappit04 | perl


It is also possible to use the nappit04 installer and to go
back later by using the normal nappit installer.


In general:
- the napp-it installer creates a system snapshot, so you can go back
- the napp-it installer keeps your settings and jobs (delete them manually if needed)
- the napp-it installer keeps old menus (you can select them with napp-it-setup)
- the napp-it installer keeps your private menus

see http://napp-it.org/downloads/index_en.html


Gea
 
Two thumbs up from me. I'm running in a Norco 4220, Intel Core 2 @ 4GHz w/ 6GB RAM, with 10x 2TB drives and 10x 750GB drives, with encryption and compression turned on. I get a solid 100MB/sec reading over gigabit and about 45-50MB/sec writing with CIFS (Intel gigabit NICs on both ends). And it hasn't crashed yet!
 
I just jumped in, very new to unix/linux. I tried doing the install on ESXi, but I don't think my MB supports passthrough of my BR10i controller.

I've been running my 8TB media server inside of Win7, just using SyncBack to keep daily backups of the folders to equal-sized internal/USB drives.

I've installed OpenIndiana on my WD Raptor and am adding 2x Samsung, 2x Seagate and 1x WD Green (all 2TB) drives to the server. I also have a 60GB SSD I might like to use for a cache drive. Does OI support that, and what would be the best way to set this up?

Also, since I have 2x internal & 2x USB 1.5TB drives and 3x internal & 3x USB 1TB drives: I haven't seen anyone write about using USB drives in a pool. Any opinions?

Also, is there a way to mount a disk with data on it to copy over to a pool?
 

I would build one pool from a single raid-z1 or z2 raidset (similar to raid-5 or raid-6, but without the write-hole problem) with the five 2TB drives (capacity optimized),
or one pool built of two mirror raidsets plus a hotspare (performance optimized),

and set up a weekly scrub (an online file check with data refresh to avoid silent data corruption, one of the key advantages of ZFS).
Add the SSD to this pool as a read cache for better I/O.
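The capacity-optimized option above can be sketched from the command line (napp-it's menus do the same thing under the hood); the pool name `tank` and the disk names are placeholders, check yours with `format`:

```shell
# one raid-z1 vdev over the five 2TB drives (placeholder disk names)
zpool create tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

# add the 60GB SSD as a read cache (L2ARC)
zpool add tank cache c3t0d0

# the weekly scrub can be a napp-it job or a root cron entry, e.g.:
#   0 3 * * 0  /usr/sbin/zpool scrub tank
zpool scrub tank
zpool status tank
```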

You can create data pools on USB disks, but I would not create raids on them (the pool would be
USB-slow, and I would not trust such a raid due to the external cabling in a lot of cases). I would build a data pool from a single drive on each USB disk.

You could then use them as backup drives, like you have done with Windows.
Do not forget to export the USB pools before unplugging and to import them after plugging in.

Sync files like you have done from Windows via SyncBack,
or like me with the free Win7 tool robocopy (to keep ACLs) on your shares,
or use rsync,
or use ZFS replication,
or just copy your files via the Nautilus file browser, or remotely via Midnight Commander.
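Hedged examples of two of the sync options listed above; the host name `nas`, the pool/filesystem names and the snapshot name are placeholders:

```shell
# rsync a local folder to the NAS (run from the client; assumes rsync
# is available on both ends)
rsync -av /media/ nas:/tank/media/

# ZFS replication: snapshot the filesystem, then send the stream
# to a second (e.g. backup) pool
zfs snapshot tank/media@backup1
zfs send tank/media@backup1 | zfs receive backup/media
```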

Mounting NTFS disks:
it may be possible by adding extra software, but I suggest always using the simplest way:
connect the NTFS drive to a Windows or Mac machine and copy the files to an SMB-shared folder on your pool.

Gea
 
_Gea, thank you! I'll be testing this out today.

If I went with S11, would I be able to mount disks to copy data over? It looks like I'm going to set up a pool, copy all the data off the 2TB drives (that I moved over from the USB drives earlier), then create the 2TB disk pool and copy it back. Over the network might still be faster than USB.
 
Yes, you have to copy all files to your backup disks,
then create a pool and copy the files back.

You may install the ntfs-3g driver to mount the disks directly
(I have not used it myself),
but copying the files over the network is the usual way.


Gea
 
You've been a great help. It's all up and running. Just getting stuff moved around a bit.

Any advice on setting it up to be headless? I plan on putting it downstairs.
 
Does anyone have experience with the ZFS Linux port (http://kqstor.com/)? Is it "production ready"?

Nope. I wouldn't risk important data with it for at least another year. A storage server should generally only be used for storage, so even if you only know Linux and not the *Solaris varieties, I would 100% recommend NexentaCore instead, as you basically get a Linux environment with proper ZFS.
 
Thanks, I know about the *Solaris and *BSD alternatives, but I want to use Linux. I'll keep an eye on the progress. Might go for Amahi anyway.
 
This thread has been a great resource, and I've settled on using one of the Solaris derivatives and _Gea's napp-it.

The only problem I'm having with my test system is that I can't get it to resume from the S3 state. The system is an E2180 Pentium dual-core on a Gigabyte GA-73VM-S2. S3 is enabled in the BIOS and in power.conf, and if I sleep it with sys-suspend or uadmin 3 20 it does sleep, but it won't wake from sleep. This happens with both OpenIndiana and Solaris 11 Express.

The system does power up but never fully comes back; it's completely unresponsive. I went as far as setting up a serial console, and if I use uadmin 3 22 to test-sleep it, the system goes to sleep and resumes fine. But as soon as I sleep it "for real" with an actual power-down, it doesn't wake up.

Is there anything else I can try to get this working?
 
Hello LBJ

you may check whether you already have the newest BIOS.
If so, you will have to shut down completely instead.

Gea
 
About napp-it and netatalk

I'm working on a problem with napp-it + netatalk (2.1.5 and 2.2 beta) and Solaris*,
similar to http://permalink.gmane.org/gmane.network.netatalk.user/20763

Problem:
I can connect as a Solaris user via AFP from OSX 10.5 and 10.6.
I can create files and folders, delete empty folders and rename
files. If I try to delete a file, I get permission denied.

If I use File Information on OSX, I see the user as owner;
if I reopen File Information, the owner is unknown.

I have tried to compile netatalk with and without PAM support and tried
the volume options acls and upriv with perm 0777, without success.

I am not very experienced with netatalk problems,
so maybe someone has an idea.


Howto:
Install Nexenta, OpenIndiana or SE11
Install napp-it via wget -O - www.napp-it.org/nappit04 | perl
Install AFP via wget -O - www.napp-it.org/afp | perl

(The installer is downloaded to your $HOME folder,
if you want to check the installer settings.)

Share a folder for AFP. For settings, see menu Services > AFP.


Gea
 

Hi _Gea

I'm on the latest BIOS; it's actually from 2009, and this board has been discontinued. The whole server was assembled from spare parts, though it looks like I'll have to spend some money if I want it fully functional.
 
New feature in napp-it: setup of basic ACL settings, from 0.415f on
(screenshot: acl.png)



@LBJ
with Solaris you are not mainstream. The main focus is enterprise storage use, not desktop use with energy-saving options.


Gea
 
Oh, I understand that; I was just hoping for a little better energy savings, as Oracle has been mentioning that feature in the Solaris 11 Express docs. I think I may just replace the hardware with something more efficient if it has to run 24/7.
 

The ZFS property nbmand=off fixed the AFP problem!

About nbmand=off:
setting it to off only seems to be a problem when locking between CIFS and NFS (AFP?) is needed.

see: http://www.mail-archive.com/cifs-discuss@opensolaris.org/msg02534.html
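For reference, the fix is a per-filesystem property; `tank/afpshare` below is a placeholder for your AFP-shared ZFS filesystem:

```shell
# turn off mandatory cross-protocol locking on the AFP-shared filesystem
zfs set nbmand=off tank/afpshare

# verify the setting
zfs get nbmand tank/afpshare
```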


thanks to apnar for the info


Gea
 
A bugfix in the current napp-it 0.415f for AFP file sharing is now available
(files could not be deleted via AFP): it sets nbmand=off for ZFS filesystems.

To update or do a first-time install of this newest nightly, log in as root (or user + su) on NexentaCore, OpenIndiana or Solaris Express 11 and enter:
wget -O - www.napp-it.org/nappit04 | perl

(screenshot: afp.png)


Gea
 
Hey Gea, is there any possibility to have encrypted filesystems, like GELI on BSD or ZFS encryption on SE11?
 
ZFS integrated encryption is a feature of ZFS pool version 30 and up.
Currently only SE11 has included it, with pool v. 31 and build 151a.

In the last free bits from OpenSolaris build 148, encryption was nearly ready,
so it's only a question of time until you have encryption in the open-source
versions like NexentaCore or OpenIndiana.

Currently you have these options:
FreeBSD + GELI + ZFSguru web-GUI

or

Create a Comstar iSCSI target, mount it from Windows as a local drive
and encrypt it there. There is no NAS performance problem with this
approach, even on a low-performance NAS, because the encryption work
is done by your usually more powerful Windows machine.

or

Use Solaris 11 Express + integrated encryption + the napp-it web-GUI (which already supports encryption)

(screenshot: enc.png)
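A minimal sketch of the Solaris 11 Express option from the command line (the napp-it GUI wraps the same steps); `tank/secure` is a placeholder, and note that encryption must be chosen when the filesystem is created:

```shell
# the pool must be ZFS version 30+ for encryption support
zpool upgrade -v

# create an encrypted filesystem; by default this prompts for a passphrase
zfs create -o encryption=on tank/secure

# confirm
zfs get encryption tank/secure
```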



Gea
 
Heya Gea, loving napp-it so far. Any chance of a better configuration of the mail system?
Like a better configuration of the SMTP server (authentication)? Or is that already in there somewhere and I'm looking in the wrong place? I'd love to use my Gmail :)

Thanks !
 
Email notification/delivery with authentication is planned for a future release.

Currently I use the Perl-included module Net::SMTP
http://quark.humbug.org.au/publications/perl/perlsmtpintro.html

which does not support authentication / I could not get it working.
If someone has tested another working module (it must be included, or work after
simply copying it over, on Nexenta, OI and SE), I may include it earlier.

Currently you need:
an open relay (hard to find), or you can try a mail server where you have a local mailbox,
a local forwarder or a local mailing list. In these cases, delivery mostly works.

Gea
 
I plan on doing this with a Dell PowerEdge R510; here are the specs:

Dual Xeon 5620's
8GB (4x2gb) Unbuffered Ram
12-bay hot-swap chassis + a 2.5" cage inside that holds 2 drives
Perc H200 HBA card (LSI 2008)
Onboard dual Broadcom Gig NICs
750W Power supply
No hard drives; I'll order them from somewhere like Newegg when I order the server

~$1,800

(image: R510_12bay.jpg)


My plan:
Flash the HBA card with the LSI IT firmware.
Install ESXi on 2x 2.5" drives (raid 1). (I was planning on cheap laptop drives; would it be advisable to use 2.5" SSDs or 2.5" SAS drives instead?)
Follow the instructions for passing through the HBA card and setting up Solaris (haven't chosen a version yet)
Create a zpool with 2 or 3TB drives; I haven't decided which yet. I don't have enough money to fill up all the drive bays, so I'm leaning towards getting a few 3TBs to get me started and expanding as prices go down.

Does anyone have recommendations on either the hardware or software configuration? Any ideas on the best utilization of the disks? If I set up a 4-drive raidz-2, should I over time expand it to 11 drives + hotspare, 11 drives + cache drive, or even just a 12-drive raidz-2? I assume the LSI 2008 card uses an expander; would it be better to use two of these cards? If I used two cards, what is the best way to set up the vdevs?

Usage:
I'm going to be streaming Blu-rays/DVDs/music/etc., running a few VMs (one Linux webserver and a few Windows boxes), backing up desktops, and running an FTP server.

Let me know what you guys think. Thanks.
 
Are there any available versions of the OSes for the PPC architecture? (not SPARC)

Even older versions would work, thanks! :)
 

*Solaris* is SPARC and Intel only.
PPC went EOL years ago; why would you do that?
ZFS is high-end, power- and RAM-hungry, and not the right
thing for reusing old hardware.
 

I would say an All-In-One server (ESXi + embedded NAS) should be doable with this config. The Dell contains the quite old 5500 chipset, but this was the first Intel server chipset that supports VT-d, which is needed for pass-through. Broadcom NICs are sometimes reported to have problems with Solaris; be sure to add an Intel NIC if possible (if a slot is available).

Booting ESXi from an internal SATA 2.5" drive is OK. I do not believe that you can build a raid-1 other than with a 2x 2.5" -> 3.5" hardware raid-1 enclosure. An alternative is to use a good SATA drive with a better MTBF (WD Raptor or a 50GB SSD). Using the remaining space of this boot drive as a local datastore and installing a ZFS OS on it is also OK.

About RAM:
use at least 16 GB: 6 GB+ for your ZFS OS, the rest for virtualisation.
Always use 4 GB modules for better expandability.
RAM is more important than CPU power.

About your pools:
If you want to set up a pool to hold VMs for virtualization, I would never build a raid-zx of large low-power drives. It's better to build raid-1 or raid-10 with smaller and faster drives (7200+ rpm or SSD) and to add a separate pool for VM backups and other media data. I would also use a lot of 2TB drives (they are cheap); currently I would avoid >2TB drives (too new). Keep in mind: you can only expand pools by adding vdevs (best of the same config).

SAS:
The LSI 2008 is an 8-port SAS2/SATA controller without an expander. Best is to avoid expanders altogether; use a second LSI 1068 or 2008 controller instead.

I would start with one raid-1 + hotspare or a raid-z1 + hotspare and expand the pool with identical vdevs. I would not use a raid-z2 of 4 drives: although it can handle a two-drive failure, it is slower than a raid-10 with the same capacity. raid-z2 or z3 is fine if you want the best capacity from your drives; my backup servers, for example, are 12-15 drive raid-z3 + hotspare.
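The suggested start-small-then-grow mirror layout might look like this; the pool name `vmpool` and the disk names are placeholders:

```shell
# start: one mirror vdev plus a hotspare
zpool create vmpool mirror c2t0d0 c2t1d0 spare c2t2d0

# later: expand the pool by adding an identical mirror vdev
zpool add vmpool mirror c2t3d0 c2t4d0

zpool status vmpool
```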

About the OS version:
currently I would prefer OpenIndiana or Solaris Express for ESXi, because Nexenta currently lacks the vmxnet3 net driver in the VMware tools, which is needed for high-speed interconnects.


Another question:
are you using the Dell because of the support?
A Supermicro case + a Supermicro mainboard from the X8 series with a 3420 or 5520 chipset is more modern and known to be the best for any Solaris, and they are in the same price range.

Gea
 
_Gea,

Thanks for your reply and thanks for all your work on napp-it.

If Solaris has issues with Broadcoms, wouldn't that not matter, since they would be virtualized and Solaris would see them as e1000s?

The internal cage supports 2 2.5" drives and is wired to the mainboard that has raid1 capabilities.

As for ram, the motherboard has 18 slots, so even at 4 x 2gb there is still a lot of room to expand.

As for the second HBA card, how should I set that up? In a simple example lets say I have a raid1, would it be best to have 1 drive on one HBA and the mirror on the second HBA or have them both on one HBA?

I chose Dell because I work with hundreds of R-series servers at work. I am familiar with them, I like the support, and I have a lot of contacts at Dell who can help me out. I like their refined look and how well and logically they seem to be put together. I realize I am probably paying a premium for that, but that doesn't bother me.
 

About Dell:
Dell is OK in every way.
Maybe I would prefer the newer Intel 5520 chipset.

About Broadcom:
if you virtualize, it does not matter indeed;
use the VMware vmxnet3 network driver.

About RAM:
start with more RAM.
Even with 18 slots, I would prefer 4 GB modules.

About onboard SATA:
this is mostly software raid.
I would be surprised if ESXi supports it.

About the HBA:
from a performance view it does not matter,
but if you use mirrors with the second disk always on the second HBA,
the pool may survive an HBA failure.

Gea
 
_Gea,

Hmm, I didn't consider that ESXi would have a problem with the onboard raid1. The system has internal USB ports; would it make more sense to boot ESXi from, say, a 64GB USB stick instead of the drives? If so, would you recommend also installing the OpenSolaris VM on that stick? Obviously there would be no redundancy, but I could make backups of the stick periodically.

I guess another option would be, if I had the two HBA cards, to use 2 of the 16 ports to run the 2.5" drives and skip the onboard controller.

I'll have to double check the chipset, this is from the R510 tech guide from Dell:
Introduction of the new Intel Xeon processor 5600 series includes a stepping revision of the Intel 5520 and 5500 chipset, which is required to enable the full 5600 series feature set. Dell servers shipped with the new chipset revision have the symbol II in the System Revision Field visible through OpenManage™ Server Administrator (OMSA) and the iDRAC GUI. They are physically marked with a 12 x 6mm rectangular label containing the symbol II. The memory interface is optimized for 800/1066/1333 MHz DDR3 SDRAM memory with ECC when running with Intel Xeon processor 5600 series.
 

About the chipset:
http://www.dell.com/downloads/global/products/pedge/en/poweredge-r510-specs-en.pdf
claims a 5500 chipset and only 8 DIMM slots.

Do you have a newer model?


About booting from USB:
I would say it's an absolute no-go.
Usual USB sticks are not good for OS use and 10x slower than disks;
even the newest SSD-like SLC USB3 sticks are much slower.
Use a good SATA drive instead (I would prefer a 50GB SSD).

About using two disks on your HBA for booting:
not possible. You can only pass through a complete controller, not single disks.

Gea
 