OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

@Gea:

You remember my problems described here -> http://hardforum.com/showpost.php?p=...postcount=2027

I have now set up my router (DD-WRT) to act as a DNS server, and that seems to work just fine, since I can now access my ESXi host by hostname.

However, my WDTVLive media streamer still shows the napp-it box as "WORKGROUP" and not its actual name (NAS01). A restart of the network services from the napp-it interface solves the problem. I've tried everything now, but without any luck. They all reside in the same workgroup, and I've re-joined napp-it to the workgroup successfully, but still no go on the WDTVLive box (unless I restart network services from the napp-it interface).
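In case it matters, what I do from the napp-it interface is, as far as I can tell, roughly equivalent to restarting the SMB service by hand (service name as on OpenIndiana; that it is what the napp-it button actually does is my assumption):

svcadm restart network/smb/server     # restart the kernel CIFS/SMB service
svcs network/smb/server               # check that it came back online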

Any pointers as to where I can investigate further?

Thank you
Best regards
Jim
 
Anyone know of any decent guides/tutorials on setting up email alerts from napp-it to Gmail?

I was going through the "install TLS" process, then realized I needed to get Net::SSLeay first, and that failed with errors because something else was missing. Anyway, it seemed like an endless chain of dependencies, and I'm sure I can't be the only one trying to figure this out.

BTW, I'm on Solaris 11. Thanks.
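For reference, the rough chain I was attempting from the CPAN shell looks like this; the exact module list is my guess, and Net::SSLeay also needs a compiler plus the OpenSSL headers on the box before it will build:

perl -MCPAN -e shell
cpan> install Net::SSLeay
cpan> install IO::Socket::SSL
cpan> install Net::SMTP::TLS
cpan> exit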
 
Never mind, I figured out guest access. I randomly read something that said it wasn't possible, but apparently that was old info.

New thing:

Is it possible to share something twice? If I have, say, /mypool/movies, can I have a share 'movies' that requires a username/password, and a share 'movies-ro' that allows guest access but is read-only?
 
I had a static L2 link aggregation setup working for a while with a Dell 2708 switch. (I dropped it to go 10GbE though.)
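If anyone wants to try it, a static aggregation on OI/Solaris 11 can be set up roughly like this (interface names are just examples, and the switch side has to be configured for static aggregation as well):

dladm create-aggr -l e1000g0 -l e1000g1 aggr0    # bundle two links into aggr0, no LACP
dladm show-aggr                                  # verify the aggregation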
 
Yo,

I've been fiddling around with Solaris 11, but are there any advantages to using OpenIndiana instead?
I tried OpenIndiana first but had missing drivers; with Solaris I didn't have the missing drivers on startup!
I should mention that I will only use this for private use, not commercial!
I also have a problem: when I use the Solaris desktop I can't change any settings, as it asks me for a password and always says it's the wrong password! I only set one! I just want to change the NFS and FTP settings, as this can't be done at the moment from napp-it!

gr33tz
 
Well, for one thing, AFAIK with Solaris itself you will not get bug fixes or anything without a support contract, unlike OpenIndiana.
 
Yo,

I've been fiddling around with Solaris 11, but are there any advantages to using OpenIndiana instead?
I tried OpenIndiana first but had missing drivers; with Solaris I didn't have the missing drivers on startup!
I should mention that I will only use this for private use, not commercial!
I also have a problem: when I use the Solaris desktop I can't change any settings, as it asks me for a password and always says it's the wrong password! I only set one! I just want to change the NFS and FTP settings, as this can't be done at the moment from napp-it!

gr33tz

During setup, you must enter a root password and a password for a user.
If you want to change settings, you must enter the root password, not the user password.
 
Anyone know of a copy manager program for Windows 7 that does not slow down to nothing when copying to and from a ZFS server? In my case it happens to be OI 151. I have tried TeraCopy and Supercopier, and both cause reads from the server to drop to 15 MB/s at their peak. Writes are alright at around 80 MB/s.

The only real function I would like is a copy queue, so I don't do 5 simultaneous copies and have it take forever. If there were also a way to skip files and set overwrite rules, that would be great as well.
 
Gah, they have WD 2TB drives on sale.

What do people think about mixing 5x 2TB WD drives with 5x 2TB Samsung F4 drives for my 10-drive raidz2 setup?

Bad idea, or should I wait? It seems like the Samsung F4 drives will be the last to come down in price...

Or should I consider selling off these Samsung F4s and going with a 10-drive WD setup? I feel like the Samsung 2TBs are the best though :(
 
Anyone know of a copy manager program for Windows 7 that does not slow down to nothing when copying to and from a ZFS server? In my case it happens to be OI 151. I have tried TeraCopy and Supercopier, and both cause reads from the server to drop to 15 MB/s at their peak. Writes are alright at around 80 MB/s.

I'm using Total Commander 7.56a. Don't know if it's fast enough for your application.

The only real function I would like is a copy queue, so I don't do 5 simultaneous copies and have it take forever. If there were also a way to skip files and set overwrite rules, that would be great as well.

There are some rules like overwriting smaller files or automatic renaming.
 
Is there a better way to uniquely identify disks in a pool than by c0d0 (logical drive 0 on controller 0)? When I reboot my system and switch the SATA cable order, the pool fails to mount, and napp-it sees three available drives with new, different names, but does not detect that they are part of the currently unavailable pool, which is missing three drives. It just lists the disks it thinks are missing from the pool, identifying them by their old names.
 
example:

root@ZFS:~# zpool status
  pool: pool1
 state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        pool1                      UNAVAIL      0     0     0  insufficient replicas
          raidz2-0                 UNAVAIL      0     0     0  insufficient replicas
            c3d0                   ONLINE       0     0     0
            c1d1                   UNAVAIL      0     0     0  cannot open
            c2d0                   UNAVAIL      0     0     0  cannot open
            c3d0                   ONLINE       0     0     0
            c0t5000C500361888E0d0  UNAVAIL      0     0     0  cannot open
            c0t5000C500358285E2d0  ONLINE       0     0     0


new names of disks in pool:
c3d0 c2d1 c0t5000C500358285E2d0 c0t5000C500361367B9d0 c0t5000C50035C5778Dd0 c0t5000CCA228C07899d0

Is there a way to import the pool using all of the disk names that are a part of it? zpool import doesn't list pool1.

Edit:
Running zpool export pool1 and then zpool import pool1 fixed it.


also:

mv /dev/dsk /dev/dsk-old      # move the stale disk device links aside
mv /dev/rdsk /dev/rdsk-old
mkdir /dev/dsk
mkdir /dev/rdsk
devfsadm -c disk              # rebuild the /dev/dsk and /dev/rdsk links for the disks actually present

resolved some of the other issues I was having.
 
Is there a better way to uniquely identify disks in a pool than by c0d0 (logical drive 0 on controller 0)? When I reboot my system and switch the SATA cable order, the pool fails to mount, and napp-it sees three available drives with new, different names, but does not detect that they are part of the currently unavailable pool, which is missing three drives. It just lists the disks it thinks are missing from the pool, identifying them by their old names.


From my experience:

Pool members are identified by the c0d0... numbers.
If they have four or six digits, they refer to the physical port on your IDE/SATA/SAS controller.

If you change the port, the number changes, and you usually need to reboot and import the pool to get it running again
(exporting prior to changing ports is a good idea, but should not be needed).

If you have long GUID numbers like c0t5000C500358285E2d0, then you usually have a SAS2
controller. These long numbers are mostly unique disk identifiers (WWNs) assigned by the disk manufacturer.
If you move such a disk to another SAS2 port, it keeps the number, so you do not have this problem.

But older disks do not have these numbers, so they are generated, and the number can change if you change the port.
In any case, you need to write down the number to identify which disk is in which slot.
If you can identify the slot of such a disk, this is the preferred way to identify pool members.

(In current napp-it I am working on a GUID -> slot assignment.)
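A quick way to map the current device names to physical disks from the CLI (standard Solaris/OI tools):

format < /dev/null     # lists every disk with its c#t#d#/WWN device name, then exits
iostat -En             # shows vendor, product and serial number per device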
 
With the recent upgrade, is anyone else noticing permission errors?

I am accessing everything via SMB (can't get it to connect over NFS for some reason).

I enabled guest access and it won't let me delete or rename anything. However, it reads everything correctly.

All files are 777. I don't know what to do. I'd like to configure each folder for NFS with username/password access, but as soon as I put a password on it I can't get it to be stable.
 
With the recent upgrade, is anyone else noticing permission errors?

I am accessing everything via SMB (can't get it to connect over NFS for some reason).

I enabled guest access and it won't let me delete or rename anything. However, it reads everything correctly.

All files are 777. I don't know what to do. I'd like to configure each folder for NFS with username/password access, but as soon as I put a password on it I can't get it to be stable.

1.
If you create a file/folder without guest=on, only the creator and root have full access.
If you enable guest=on afterwards, you must set an ACL like everyone@=full/modify
on already created files/folders to allow access.

You can set ACLs via napp-it (ACL extension), from Windows (as root; not all Windows versions) or from the CLI.

2.
Unix permissions on the shared folder are irrelevant for SMB.
SMB is ACL only.

3.
NFS v3 respects Unix permissions only,
and permissions cannot be set per user, only per host.
Set 777 on the folder, or everyone@=full, and NFS will work.

4.
NFS v4 is ACL aware.
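For the CLI route, a minimal sketch (the dataset path is just an example; note you must use the Solaris /usr/bin/chmod, not GNU chmod, for ACL operations):

/usr/bin/chmod -R A=everyone@:full_set:file_inherit/dir_inherit:allow /tank/myshare
/usr/bin/ls -V /tank/myshare     # verify the resulting ACL

full_set grants everything; modify_set is a less drastic alternative.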
 
If I want encryption and intend to run the SAN/NAS on an all-in-one, would you recommend I go for Solaris 11 Express or Solaris 11?

I noticed Solaris 11 doesn't support VMware tools (yet?), so maybe I should run Solaris 11 Express until Solaris 11 is more stable, and then import the pool from Solaris 11 Express?
 
I thought napp-it worked under SE11?

It probably works, which is why I asked the question in the first place, but I think I read somewhere that official support for SE11 will be phased out, and the main focus will be S11.

I think I'll prolly go the route of using SE11 until S11 is more stable, and then just export/import the pool when I switch over.

EDIT: by stable, I mean supported ofc
 
I think I am confused now. napp-it is a one-person product; I didn't think there was anything official (or otherwise) about it. ???
 
It probably works, which is why I asked the question in the first place, but I think I read somewhere that official support for SE11 will be phased out, and the main focus will be S11.

I think I'll prolly go the route of using SE11 until S11 is more stable, and then just export/import the pool when I switch over.

EDIT: by stable, I mean supported ofc

There is no real choice between Solaris 11 and Solaris 11 Express.
Express was like a preview of Solaris 11. Solaris 11 is out now,
so Express is dead (maybe until Oracle gives us a Solaris Express 12).
 
I see... Off the top of your head, any idea what I lose from not having VMware tools installed?

Nothing really needed at first.
You are not able to shut down or restart the VM from ESXi, and there is no vmxnet network driver.

Main problem: Solaris 11 is very, very new, so there is no experience with or help for problems yet.
 
Okay, that all makes sense. My next question: unless you have a paid support contract, you will not get updates/fixes for S11, so what is the motivation for switching unless there is something it is buying you (I mean at this point in time)? S11 still seems kinda bleeding edge, but that is just my opinion...
 
Okay, that all makes sense. My next question: unless you have a paid support contract, you will not get updates/fixes for S11, so what is the motivation for switching unless there is something it is buying you (I mean at this point in time)? S11 still seems kinda bleeding edge, but that is just my opinion...

I'm not switching, I'm setting up a completely new system, and since OpenIndiana does not support ZFS crypto I have to go the S11 route.

EDIT: Wait, what? I don't get updates and fixes for S11 without a paid support contract? I didn't know that. So the next update I'll get is when Solaris 12 Express is released? :S
 
Argh. Are there any known issues using an HP SAS expander on an LSI2008 controller under Solaris?

I keep having a problem where not all the drives will be detected.

I've got 8 drives directly connected to one LSI2008, and another LSI2008 connected to an expander with 12 drives connected to it. 20 drives total in a Norco 4020.

Upon starting up the system, not all 20 drives are seen. One or two drives will not show up on the expander. If I look at the front of the case, the drive(s) that are not detected will have their activity LEDs on solid. I can let it sit like that and nothing ever happens. Eventually I manage to get them all connected: I usually unplug the drives that won't show up, wait a minute or two, then put them back in. Sometimes it screws up and the activity LED gets stuck on again, but eventually the OS will detect the drive and everything works.

Not sure what is going to happen if I turn the system on one day and only 8 of the 12 are detected or something, and my pool is then broken?

I've only got 3 PCIe slots, so although I could ditch the expander and get a third LSI2008, eventually I want to attach more drives via an external enclosure. Without expanders, with 3 PCIe slots I'd be limited to 24 drives using LSI2008 controllers.

I feel like this should somehow be related to Solaris, as before I installed Solaris I was running the exact same drives/chassis/expander/controller under Linux and never had an issue.
 
I am now running 10GbE between my desktop PC and my OI ZFS box. Installation was fairly easy on Windows 7 and OpenIndiana... It cost me 165 for two 10GbE cards, a CX4 cable and shipping :D
 
Gea,

I am currently testing the replication on the two OI appliances I have here. I have a slight issue with the deletion of snapshots during the replication process. If I create a job for a single filesystem within a pool and replicate to an empty pool on the destination machine, I end up with:
PoolA/Filesystem1 (from source machine) --> PoolB/Filesystem1 (at destination machine)

This is good. I replicate every few minutes and it works well. Snaps are created on the source machine for the replication, but it only keeps 2 or 3 and deletes the oldest as it creates new ones. Perfect, and about how I'd expect it to act.

Here is where I run into a problem. I want to replicate an entire pool containing 5 separate filesystems over to a separate pool on the destination machine, where it initially creates the same filesystems. So I create one job that is PoolA --> PoolB, with the option "Name of new ZFS (empty=same)" left BLANK, and I end up with:
PoolA --> PoolB/PoolA/Filesystem(1-5)

Everything replicates, and each filesystem goes one at a time just fine. The catch is that for 5 filesystems, 6 snapshots are being created on the source machine. Each time the replication job runs, it will create snaps like:
PoolA@
PoolA/Filesystem1@
PoolA/Filesystem2@
PoolA/Filesystem3@
PoolA/Filesystem4@
PoolA/Filesystem5@

The problem is that the job only auto-deletes the pool@ snapshot. All the separate Pool/Filesystem@ snaps stay and keep accumulating. For a replication job running every few minutes continuously, this will quickly fill the system with snaps.

If I create a separate job for each of the 5 filesystems it of course won't do this, but I'd much prefer the single pool-to-pool replication, since it can automatically pull any filesystem within the source pool and recreate it at the destination, and it sends them sequentially. I'd imagine 5 snaps sending at once might cause too much congestion over my single gigabit connection between them.

Did I do something wrong here? Is this expected behavior for a single pool to pool replication job? Do I need to just create 5 separate replication jobs?
 
Yo,

Small question... disk spin-down times, are they set in power management? I added 900 secs for 15 min and pushed submit. Will this work?
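I assume the setting ends up in /etc/power.conf as a device-thresholds entry, something like the sketch below (the device path is just an example, and this is only my guess at what napp-it writes):

# in /etc/power.conf: spin the disk down after 900 seconds of idle time
device-thresholds   /dev/dsk/c2t1d0   900s

# then reload the power management configuration:
pmconfig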

gr33tz
 
Gea,

I am currently testing the replication on the two OI appliances I have here. I have a slight issue with the deletion of snapshots during the replication process. If I create a job for a single filesystem within a pool and replicate to an empty pool on the destination machine, I end up with:
PoolA/Filesystem1 (from source machine) --> PoolB/Filesystem1 (at destination machine)

This is good. I replicate every few minutes and it works well. Snaps are created on the source machine for the replication, but it only keeps 2 or 3 and deletes the oldest as it creates new ones. Perfect, and about how I'd expect it to act.

Here is where I run into a problem. I want to replicate an entire pool containing 5 separate filesystems over to a separate pool on the destination machine, where it initially creates the same filesystems. So I create one job that is PoolA --> PoolB, with the option "Name of new ZFS (empty=same)" left BLANK, and I end up with:
PoolA --> PoolB/PoolA/Filesystem(1-5)

Everything replicates, and each filesystem goes one at a time just fine. The catch is that for 5 filesystems, 6 snapshots are being created on the source machine. Each time the replication job runs, it will create snaps like:
PoolA@
PoolA/Filesystem1@
PoolA/Filesystem2@
PoolA/Filesystem3@
PoolA/Filesystem4@
PoolA/Filesystem5@

The problem is that the job only auto-deletes the pool@ snapshot. All the separate Pool/Filesystem@ snaps stay and keep accumulating. For a replication job running every few minutes continuously, this will quickly fill the system with snaps.

If I create a separate job for each of the 5 filesystems it of course won't do this, but I'd much prefer the single pool-to-pool replication, since it can automatically pull any filesystem within the source pool and recreate it at the destination, and it sends them sequentially. I'd imagine 5 snaps sending at once might cause too much congestion over my single gigabit connection between them.

Did I do something wrong here? Is this expected behavior for a single pool to pool replication job? Do I need to just create 5 separate replication jobs?

I have that on my todo list.
Currently you should create a job for each filesystem.
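If you want to replicate the whole pool by hand in the meantime, here is a rough sketch of recursive replication with zfs send -R (host, pool and snapshot names are only placeholders, and this is independent of the napp-it job mechanism):

zfs snapshot -r PoolA@repl-1
zfs send -R PoolA@repl-1 | ssh desthost zfs receive -Fdu PoolB
# next run: incremental from the previous snapshot, then drop the old one
zfs snapshot -r PoolA@repl-2
zfs send -R -I PoolA@repl-1 PoolA@repl-2 | ssh desthost zfs receive -Fdu PoolB
zfs destroy -r PoolA@repl-1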
 
Argh. Are there any known issues using an HP SAS expander on an LSI2008 controller under Solaris?

I keep having a problem where not all the drives will be detected.

Not sure what LSI card you are using, but I had a similar issue with a 9211-8i and an HP SAS expander. My solution was to update the firmware on the LSI card to the latest version. I think it had something to do with multipath... Also note that I only have SATA drives connected in my system.
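If you have LSI's sas2flash utility on the box, it will show which firmware the card is currently running (just a pointer, assuming the Solaris build of the tool is installed):

sas2flash -listall     # lists each SAS2 controller with its firmware and BIOS versions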
 
I'm using napp-it to automate snapshots and it works great. I have a spare 1TB USB hard drive, and I want to hook it up to my NAS and have it zfs send/receive a ZFS folder to that drive as a backup.

I figured I would just set up a cron job that would zfs send/receive the most recent snapshot. However, I can't seem to find a way to identify the most recent snapshot. What's the best way to do this? I figured it would be really simple.

Also, should I do this from root's crontab?

Thanks!

Edit: maybe this script would be the easiest: http://137.254.16.27/constantin/entry/useful_zfs_snapshot_replicator_script - it supports automatic incremental transfers, which is good.
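For finding the newest snapshot from a cron job, a minimal sketch (the dataset and backup pool names are only examples):

# newest snapshot of tank/data, sorted by creation time
LATEST=$(zfs list -H -t snapshot -o name -s creation -r tank/data | grep '^tank/data@' | tail -1)
# full send to the USB backup pool; for repeated runs an incremental send (zfs send -i) is better,
# which is what the linked script automates
zfs send "$LATEST" | zfs receive -F usbbackup/data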
 
_Gea

I'm having an issue (I think) with several of my drives reporting hard errors in the napp-it console (link). I'm currently running an all-in-one with ESXi and OpenIndiana (v151) for the SAN. OI has 10 GB RAM assigned to it and napp-it version 0.6i is installed.

The system is housed in a Norco 4220 case, which had been functioning fine for my old Windows Home Server (I recently swapped system cases to migrate away from my old box). I just replaced the power supply with a SeaSonic 750W to handle the additional drives (the ESXi box was running off an older Corsair 520HX).

The only drives exhibiting the issue are my WD20EADS and Hitachi 2TB drives. I thought it could be a controller issue, but they exhibit the same problems whether they are all on a separate LSI 3081E-R card, or split between the onboard 1068e and M1015 controllers that the other drives are on. I've also tried changing drive bays on the Norco just to make sure it isn't an issue with the backplanes.

I'm having a hard time believing that all these drives could actually be going bad. These drives were pulled from a retired DroboPro at work, if that makes any difference. Do I need to format the drives in some way before creating a pool from them?
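For what it's worth, my understanding is that napp-it's error counters come from the kernel's per-device error statistics, so the raw numbers can be checked with:

iostat -En     # per-device soft/hard/transport error counters, plus vendor and serial number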
 