OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

You can try mounting the remote server's drive via NFS. On Solaris/Unix-based systems you can enter
mount -F nfs (IP ADDRESS):/share2 /Any_folder in a terminal.

This will mount the file system /share2 located on the host (IP ADDRESS) at the mount point /Any_folder.
I find that NFS transfers are faster than SMB transfers on Solaris-type systems. You can also tune the NFS parameters if you want to really get crazy.
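If you do want to experiment with tuning, a rough sketch of an NFS mount with explicit options (the option values here are just placeholders I picked for illustration, not measured recommendations) looks like:

mount -F nfs -o vers=3,rsize=131072,wsize=131072 (IP ADDRESS):/share2 /Any_folder

vers pins the NFS version and rsize/wsize set the transfer sizes; benchmark with your own workload before keeping any of it.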
 
Well, I use this server at home for movie and Blu-ray backup and have two media players connected to it via NFS. Sometimes I need to move a huge amount of data from one server to another, so it isn't the speed with simultaneous users that interests me, but rather the speed of one user moving a whole bunch of data, mostly batches of files from 8-45 TB!
I'm really trying to tackle this problem as it's the only major obstruction my server has!
My read speed should more or less be raised to match my write speed!

ty

If you're trying to copy from one server to another, you could try pushing the data instead of reading it from the server. As wingfat mentioned, you can mount using NFS. You can also test using SMB from OI as the initiator, with your Windows 7 machine or another server as the file-share end. This is really easy if you have the graphical OI installed, as there is a GUI menu under 'Places'->'Network' at the top. If you are sending the data from this end, it will be the write performance you need to test. It will be interesting to see whether it now writes at 40MB/s and reads at 100MB/s this way around, or what happens.
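If you prefer the CLI to the 'Places' menu, a rough sketch of mounting a Windows share from the OI side (user, host, share and mount point are all made-up names here):

mkdir /mnt/winshare
mount -F smbfs //youruser@windowsbox/backup /mnt/winshare

You get prompted for the password, and then a plain cp or rsync from the pool into /mnt/winshare tests the push/write direction.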
 
I'm having really bad scrub performance. No errors are shown in either zpool status or iostat -exmn.

This is one of the outputs from 'iostat -exmn 5':

extended device statistics ---- errors ---
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b s/w h/w trn tot device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 0 0 0 c4t0d0
0.8 0.0 218.8 0.0 0.0 4.0 0.0 4999.9 0 100 0 0 0 0 c4t1d0
0.8 0.0 9.1 0.0 0.0 0.0 0.0 19.3 0 1 0 0 0 0 c4t2d0
0.8 0.0 9.1 0.0 0.0 0.0 0.0 17.3 0 1 0 0 0 0 c4t3d0
0.8 0.0 9.1 0.0 0.0 0.0 0.0 20.1 0 1 0 0 0 0 c4t4d0
0.8 0.0 9.1 0.0 0.0 0.0 0.0 41.6 0 1 0 0 0 0 c4t5d0
0.8 0.0 9.1 0.0 0.0 0.0 0.0 41.3 0 1 0 0 0 0 c4t6d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 0 0 0 c5t0d0
1.2 0.0 2.2 0.0 0.0 0.0 0.0 14.0 0 1 0 0 0 0 c5t1d0
1.8 0.0 2.2 0.0 0.0 0.0 0.0 0.1 0 0 0 0 0 0 c5t2d0
1.2 0.0 2.2 0.0 0.0 0.0 0.0 31.5 0 1 0 0 0 0 c5t3d0
1.2 0.0 2.2 0.0 0.0 0.0 0.0 29.9 0 1 0 0 0 0 c5t4d0
1.2 0.0 2.2 0.0 0.0 0.0 0.0 32.0 0 1 0 0 0 0 c5t5d0
1.2 0.0 2.2 0.0 0.0 0.0 0.0 34.3 0 1 0 0 0 0 c5t6d0

So it looks like c4t1d0 is slowing it down, badly? But there are no errors on this drive?
 
I just had that (the 100% %b) issue with a WD Green drive: no errors, no SMART warnings, etc. - but under heavy load it would "lock up" with massively long asvc_t delays.

I tried setting vdev_max_pending to 1 (from 10) but it made no difference. I RMA'd the drive and that fixed it. I haven't seen this with any of my Hitachi drives, only with WD Greens.
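For reference, on the Solaris/Illumos ZFS of that era the per-device queue depth was the zfs_vdev_max_pending kernel tunable; a sketch of the two usual ways to set it (the value 1 is only an example):

echo zfs_vdev_max_pending/W0t1 | mdb -kw (live change, lost at reboot)
set zfs:zfs_vdev_max_pending = 1 (line in /etc/system, applied at next reboot)

As noted above, no amount of queue tuning helps a genuinely sick drive; replacing it did.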
 
Guys, I need to add another 4-HDD RAIDZ2 vdev to the pool and have some questions:

I have filled all the space in the current pool (3 x 4 x 2 TB drives in raidz2). Can I just add the next 4 HDDs? I read somewhere that you need at least 20% free space before adding more drives so the pool stays balanced - is this right?

Should I just move some data elsewhere before adding the drives?

Also, the 4 drives that I am adding have some data on them and are formatted as NTFS. Does this matter? Will ZFS just reformat the drives, or do I need to wipe them first before adding?

Thanks for your help.
 
Dunno about the 20%, but it's a good idea never to let free space get much below, say, 20%, as the efficiency goes down. I'd move off a bunch, add the new vdev, and copy back. I don't *think* you need to reformat, but if you do, the zpool command will complain, so no harm done.
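For reference, adding the new vdev is a one-liner once the disks are ready; a sketch with a made-up pool name and device IDs (use raidz/raidz2 to match whatever your existing vdevs actually are):

zpool add tank raidz2 c6t0d0 c6t1d0 c6t2d0 c6t3d0

Double-check the line before hitting enter: the disks in it get overwritten, and a data vdev cannot be removed from the pool again.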
 
I just had that (the 100% %b) issue with a WD Green drive: no errors, no SMART warnings, etc. - but under heavy load it would "lock up" with massively long asvc_t delays.

I tried setting vdev_max_pending to 1 (from 10) but it made no difference. I RMA'd the drive and that fixed it. I haven't seen this with any of my Hitachi drives, only with WD Greens.

Great, that makes 4 failures out of 15 drives within 6 months! Mine aren't even Green drives either.....
 
I have filled all the space in the current pool (3 x 4 x 2 TB drives in raidz2). Can I just add the next 4 HDDs? I read somewhere that you need at least 20% free space before adding more drives so the pool stays balanced - is this right?

So am I reading this right? You have:

3 vdevs,
each vdev being 4 drives in a raid-z2 configuration?

So you are getting 12 TB usable with 24 TB raw?

Why such small vdevs? If you need the performance, it seems like 6 mirrors would be much more performant. If you need the space/want to optimize for multiple disk failure, then a larger raid-z2 or raid-z3 array would be much more efficient.
 
Great, that makes 4 failures out of 15 drives within 6 months! Mine aren't even Green drives either.....

An easy check is to just offline the offending drive (assuming you aren't in a degraded state) and monitor iostat to see if everything looks good. You could try a different controller/cable if you have one available, but I would put my $$$ on the drive.
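A sketch of that check, using a made-up pool name and the suspect disk from the iostat output above:

zpool offline tank c4t1d0 (only if the vdev can tolerate it, i.e. not already degraded)
iostat -exmn 5 (watch whether the numbers recover without that disk)
zpool online tank c4t1d0 (put it back afterwards, or zpool replace it)

zpool status will show the disk as OFFLINE in the meantime.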
 
If I modify (increase) the size of a Comstar LUN with Napp-It will it have any ill effects on the data contained in it?
 
Given that the other side sees it as a 'bag of bytes', it's hard to imagine anything good happening...
 
So am I reading this right? You have:

3 vdevs,
each vdev being 4 drives in a raid-z2 configuration?

So you are getting 12 TB usable with 24 TB raw?

Why such small vdevs? If you need the performance, it seems like 6 mirrors would be much more performant. If you need the space/want to optimize for multiple disk failure, then a larger raid-z2 or raid-z3 array would be much more efficient.

Yes, that is correct, I have 3 vdevs, each with 4 drives. I thought I should be getting 18 TB usable with 24 TB raw?

If I recall, one disk is lost to parity per vdev; I must have got it wrong then.

A larger raidz2 would need more drives per vdev, so if and when I need to add drives I would have to add in that multiple.
 
Yes, that is correct, I have 3 vdevs, each with 4 drives. I thought I should be getting 18 TB usable with 24 TB raw?
If I recall, one disk is lost to parity per vdev; I must have got it wrong then.
A larger raidz2 would need more drives per vdev, so if and when I need to add drives I would have to add in that multiple.

Okay, so each of your vdevs is a raid-z1 then (not a raid-z2). That makes more sense.
 
Okay, so each of your vdevs is a raid-z1 then (not a raid-z2). That makes more sense.

Stupid me, that's right, raidz1 not raidz2 :eek:, so any one drive can fail per vdev.

I have now moved 182 GB elsewhere; is it safe enough now to add the other 4 drives?
 
Info:

Illumian 1.0, the successor of NexentaCore 3, is downloadable at
http://www.illumian.org

PS:
I have modified the napp-it wget installer.
It is untested, but napp-it is basically running.
 
Thank you Gea, I was just searching for this. There is no upgrade path from NexentaCore 3, I take it?

Will a pool import work if a fresh install is made?

You must do a fresh install.
Pool import should work.
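For reference, the usual sequence is just (pool name made up):

zpool export tank (on the old install, before wiping the OS disk, if it still boots)
zpool import (on the fresh install: lists pools available for import)
zpool import tank (add -f if the pool was never cleanly exported)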
 
Any advantages over OpenIndiana?

Both share the same kernel and apps, but Illumian uses Debian-like re-packaging (apt-get...)
while OpenIndiana installs the same apps via pkg ...

Illumian uses the NexentaCore installer, which allows you to mirror the
system pool during the first install.

The most interesting aspect is the question of who delivers bugfixes first.
 
Hey guys...been running an OI box for about 4 months now and the performance started to go downhill when I started using iSCSI.

Found a bunch of hardware errors in the log pertaining to one drive, so I swapped the drive... same errors... so I swapped the cable and the errors went away.

So now I have been doing a scrub for the last few days and it is SLOW! I mean ~3MB/s slow.

It's serving as storage for ESXi, and whenever I try to power a machine on or off, or do anything, the whole ZFS array locks up.

I just did an iostat:


extended device statistics ---- errors ---
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b s/w h/w trn tot device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 0 0 0 c4d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0 5 0 5 c3t1d0
86.6 75.4 733.6 121.7 0.0 0.5 0.0 3.3 0 38 0 0 0 0 c2t50014EE6AC1E5960d0
95.4 72.2 778.0 120.6 0.0 0.6 0.0 3.8 0 37 0 0 0 0 c2t50014EE6AC24F6E3d0
80.8 72.0 735.9 120.6 0.0 0.5 0.0 3.6 0 36 0 0 0 0 c2t50014EE6AC1AE78Cd0
103.2 98.4 843.3 186.0 0.0 1.2 0.0 5.8 0 51 0 0 0 0 c2t50014EE6017B68F6d0
94.4 71.0 744.2 121.4 0.0 0.6 0.0 3.4 0 38 0 0 0 0 c2t50014EE656D16DBFd0
103.0 72.2 776.2 122.4 0.0 0.6 0.0 3.7 0 40 0 0 0 0 c2t50014EE6AC26B7A6d0
97.2 99.2 832.1 186.6 0.0 1.1 0.0 5.5 0 49 0 0 0 0 c2t50014EE656C8C6D6d0
101.2 99.0 838.3 185.9 0.0 1.1 0.0 5.6 0 50 0 0 0 0 c2t50014EE6017C28ECd0
116.4 101.8 841.0 183.7 0.0 0.9 0.0 4.3 0 46 0 0 0 0 c2t50014EE601709B1Dd0
79.8 74.2 749.4 120.8 0.0 0.6 0.0 3.7 0 37 0 0 0 0 c2t50014EE6AC1DF00Ed0
75.0 72.6 746.3 121.0 0.0 0.6 0.0 4.2 0 37 0 0 0 0 c2t50014EE6AC21CD0Cd0
101.6 99.4 825.2 185.4 0.0 1.1 0.0 5.4 0 48 0 0 0 0 c2t50014EE6017C28E7d0
100.0 73.2 781.1 120.2 0.0 0.7 0.0 3.8 0 40 0 0 0 0 c2t50014EE6AC26AB29d0
105.8 103.2 854.1 184.2 0.0 1.3 0.0 6.2 0 56 0 0 0 0 c2t50014EE656CFDE12d0
118.8 98.6 849.7 185.3 0.0 1.0 0.0 4.7 0 49 0 0 0 0 c2t50014EE002DDFEECd0
124.4 95.8 847.8 185.1 0.0 1.3 0.0 6.0 0 52 0 0 0 0 c2t50014EE656CC65C0d0


Any ideas?
 
Is Illumian supported by your napp-it web UI, Gea?

It's been out a few hours...
I have just modified the wget installer to have it basically (though not fully tested) running.

But I will support it with the current napp-it 0.7 line.
 
Good work, really appreciated. I will wait for your napp-it to be ready before installing. Many thanks for your hard work, Gea.
 
It's been out a few hours...
I have just modified the wget installer to have it basically (though not fully tested) running.

But I will support it with the current napp-it 0.7 line.

This is extremely exciting! I'm installing Illumian now and can't wait to start using it with napp-it.
 
Gea, Latent, and everyone else who has chipped in info: I have some great news. :D Version 1.0 of my server is now about to go into beta. I do not have a sysadmin background, and this is the first time I have set up a server. It has been an interesting adventure, since my experience with *nix/Solaris had been primarily as a user, with very little need to understand what is going on in the background.

As I write, I have Time Machine running from our CustoMac (Lion) onto the OI server. I downloaded netatalk 2.2 using the instructions you had specified and it worked great.

My server is running in a Lian Li PC-A71 chassis, which can support 10 disks natively and more with the 5.25" bay adapters. Right now it has a 5-disk pool (3+1 RAIDZ1 plus 1 spare) of Seagate 5900rpm 2TB drives; it boots from a 16GB SLC MTRON SSD, with a Pentium G620 (dual-core Sandy Bridge) and 16GB of DDR3 RAM running at 1066@CL7. I might add another pool with Samsung 1TB 7200rpm disks to use as a database server.

I have also enabled compression on all the folders. I am also thinking of enabling de-duplication. With around 5.5 TB of total available space, 16 GB should be enough RAM. With a dual-core Sandy Bridge core at 2.6 GHz, I doubt that CPU cycles are going to be an issue. It is a home server, so typical usage is backups and media storage.

I have created accounts for myself and my wife on the OI server. I am using her password on the OI server to authenticate when accessing the server from the CustoMac. I have changed the default permissions for the AFP share to allow everyone full access (o=full, g=full, e=full). Is there any workaround for that? I guess security is not an issue since the OI server will still authenticate the login (and since it is a home server not accessible from outside).

On the Windows side, all our PCs are in a workgroup and the SMB server is also in the workgroup. I used "add -d "wingroup:power Users@BUILTIN" unixgroup:staff" and did the root/root login thing to make it work. All that stuff about adding extra ACL mappings/orders etc. is something I have not really understood too well. When I try to alter permissions from the Windows machine (W7P64) I do not see my Unix account as a valid user name, and I am not sure how I can change anything.

I have also enabled guest access for some of the SMB file shares. Eventually I would like to provide some more nuanced security, like read-only for guests. I need to read up more to figure it out.



CIFS/SMB is the default way Windows PCs share files among themselves and with a Windows server.
...
AFP (Apple Filing Protocol) is the default way Macs share files among themselves and with an Apple server. It's usually a little bit faster than SMB, has better Finder integration and is the only protocol offering proper Time Machine support.

That's the reason I use SMB only, although more than 60% of the machines I support at work are Macs.

Thanks for the tutorial. I intend to use the AFP server for Time Machine only. Everything else will go on the SMB shares.

I am surprised that your design-oriented school actually has Windows computers. For some reason I had the impression that Windows is not acceptable to the creative types, at least in the US.


About napp-it settings:
Most settings are ZFS properties. They are part of a pool; if you import a pool, these settings are used.
Other settings that are part of napp-it, like keys, jobs or logs, can be saved with the current napp-it in the menu
extension - register - backup napp-it. (Copy the complete napp-it folder to a data pool; to restore, copy it back and optionally set permissions to 777.)

If you use napp-it as a webserver (www, MySQL, PHP, FTP) via XAMPP, you can save all XAMPP settings with the menu service - XAMPP - Backup cfg.

Thanks for the pointers. I will read up more and figure things out.

(1) One question I had was to do with backups and snapshots. From what I understand, snapshots allow you to roll back changes to files, so accidental deletions etc. can be undone. Since we will primarily be using the main pool for backup and media storage, it is not that much of a requirement.

What I would like to do is take periodic backups onto an external disk. I have a 3 TB external USB 3.0 disk which I would like to use. From what I understand, OI does not support USB 3.0, but that is not a big deal. I would appreciate it if you could give me some pointers on how to set it up. The wikis have some references to ZFS snapshot streams, but I am not sure how they would work, or whether they would support incremental backups on the external drive.

(2) Once things settle down I will experiment with adding a ZIL and L2ARC device. I have a whole bunch of these Mtron SLC 16GB drives. While they have excellent read performance (these are 3-4 years old), their 4K random write performance is poor. I could also use an Intel 40GB X25-V which just finished RAID-0 duties on my workstation (with zero write wear so far).

(3) Is there any backup software you recommend for Windows-based systems? I would like to have folder/drive-based control of what gets backed up when.
 
Yes, that is correct, I have 3 vdevs, each with 4 drives. I thought I should be getting 18 TB usable with 24 TB raw?

If I recall, one disk is lost to parity per vdev; I must have got it wrong then.

A larger raidz2 would need more drives per vdev, so if and when I need to add drives I would have to add in that multiple.

New vdevs do not have to match the size, number of disks, or even type of the rest of the vdevs in the pool.

You could just as easily add a raidz2, raidz3 or mirror vdev of any number of drives of any size. One of the beauties of ZFS. It's not desirable from a performance aspect, but if your application isn't bandwidth-limited by the filesystem but by a 1 Gbit NIC, for example, then it does not matter.
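For example (pool and disk names made up), adding a mirror vdev to a pool whose other vdevs are raidz:

zpool add tank mirror c7t0d0 c7t1d0

zpool will complain about the mismatched replication level and ask for -f before doing it, which is just its way of flagging the uneven performance/redundancy mix; the add itself works fine.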
 
I am surprised that your design-oriented school actually has Windows computers. For some reason I had the impression that Windows is not acceptable to the creative types, at least in the US.

I love Macs because of their ease of use, but we also use CAD/CAM software like SolidWorks or Rhino on Windows, and we use Windows for 3D modelling with Cinema. They are much cheaper and more economical. A render farm with Mac Pros may be good for anything but your budget.

(1) One question I had was to do with backups and snapshots. From what I understand, snapshots allow you to roll back changes to files, so accidental deletions etc. can be undone. Since we will primarily be using the main pool for backup and media storage, it is not that much of a requirement.

What I would like to do is take periodic backups onto an external disk. I have a 3 TB external USB 3.0 disk which I would like to use. From what I understand, OI does not support USB 3.0, but that is not a big deal. I would appreciate it if you could give me some pointers on how to set it up. The wikis have some references to ZFS snapshot streams, but I am not sure how they would work, or whether they would support incremental backups on the external drive.


Snaps help a lot for availability, or to go back to a former state of a file or folder.

About external backups:
You may attach your removable disk via USB (as a ZFS pool) to your Solaris box and
- replicate your datasets via ZFS send (see the sketch below)
- sync your files via rsync
- sync your files remotely from your PC or Mac with a sync tool

Or you may connect your external disk to a Mac or PC and
- sync files with a sync tool
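A rough sketch of the ZFS send route, with made-up pool/filesystem names (usbbackup being a one-disk pool created on the external drive):

zpool create usbbackup c9t0d0
zfs snapshot media/data@backup1
zfs send media/data@backup1 | zfs receive usbbackup/data (first run: full copy)
zfs snapshot media/data@backup2
zfs send -i @backup1 media/data@backup2 | zfs receive -F usbbackup/data (later runs: incremental)

The -i form only transfers the blocks changed between the two snapshots, which is what makes repeated backups to a slow USB disk bearable.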

(2) Once things settle down I will experiment with adding a ZIL and L2ARC device. I have a whole bunch of these Mtron SLC 16GB drives. While they have excellent read performance (these are 3-4 years old), their 4K random write performance is poor. I could also use an Intel 40GB X25-V which just finished RAID-0 duties on my workstation (with zero write wear so far).

For a fileserver, use your SSDs as read caches or as a fast shared disk pool for current work.
A dedicated write-log device is used for sync writes only (e.g. NFS storage for ESXi or database use). For a pure fileserver it is useless.
Avoid deduplication - it is only useful in very special use cases, even with 16 GB RAM.
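If you do try it later, the sketch is (device names made up):

zpool add tank cache c8t0d0 (L2ARC read cache; harmless to experiment with, removable via zpool remove)
zpool add tank log c8t1d0 (dedicated ZIL/slog; only helps sync writes)

zpool iostat -v then lists the cache and log devices separately, so you can see whether they are actually being used.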

(3) Is there any backup software you recommend for Windows-based systems? I would like to have folder/drive-based control of what gets backed up when.

I use robocopy (Robust File Copy) from Microsoft. It's free (included in Win7), ultra fast, ultra stable and easy, and it cares about ACLs. (It can keep terabytes in sync.)
Write a batch file like robocopy c:\folder1 \\server\folder1 /mir to keep two folders in sync (schedule the script as a job).

Combine this with snaps on \\server (Solaris) and you are fine.
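A slightly fuller sketch of such a batch file (paths and options are just an example; check robocopy /? for what you actually want):

robocopy C:\folder1 \\server\folder1 /MIR /COPY:DAT /R:1 /W:1 /LOG:C:\backup.log

/MIR mirrors including deletions, /R and /W stop it retrying a locked file forever, and the log is handy when a scheduled task runs it unattended; add /SEC if you also want the NTFS ACLs copied across.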
 
Hi all, long-time lurker in this thread, finally finished reading all 130 pages yesterday! I've been running NexentaCore as a home media and VM storage server for about a year and a half with napp-it, with great success and all the performance I need. My OS disk decided to die last week, so I rebuilt it today using OpenIndiana. The migration was super easy, with the only minor snags being around re-adding my SMB user accounts.

I'm having an intermittent issue on reboot though. I have a pool called media, and under it are two zfs folders - webroot and temp. Half the time on reboot webroot and temp will be mounted before the media pool itself, so when it tries to mount media, I get an error that the destination is not empty and it seems to put the box into some kind of maintenance mode where I can't ssh in or get at napp-it.

I have to log in at the console, unmount the folders, remount them in the correct order, and reboot to get functioning again. It always works on this first reboot, but the next time the same issue happens. How can I get ZFS to mount the filesystems in the correct order? I didn't have this issue under NexentaCore, and I'm not sure what I did differently this time other than importing the pools, whereas they were fresh builds when I started with NexentaCore.

Any help is appreciated, thanks! -Dan

Edit: Looking at the history for the pool, I see that the mount point is manually being set at some point:

2012-02-04.20:04:23 zpool import -f 6451925873541036207 media
2012-02-04.20:04:31 zfs set mountpoint=/media media

I'm unclear at exactly what point this is being set, whether it is on reboot or when I'm doing an import. It's definitely not me typing that command. I'll have to test more tomorrow.
 
I'm having an intermittent issue on reboot though. I have a pool called media, and under it are two zfs folders - webroot and temp. Half the time on reboot webroot and temp will be mounted before the media pool itself, so when it tries to mount media, I get an error that the destination is not empty and it seems to put the box into some kind of maintenance mode where I can't ssh in or get at napp-it.


2. Looking at the history for the pool, I see that the mount point is manually being set at some point:

2012-02-04.20:04:23 zpool import -f 6451925873541036207 media
2012-02-04.20:04:31 zfs set mountpoint=/media media

A pool is always mounted first, then the filesystems. But the mountpoints must not already exist at that moment. It seems that a service (webserver?) has already created a real data folder with that name, so ZFS cannot mount over it.

Disable all extra services to see which one is the problem. That service needs to be started with a delay (or use /tmp for temp files,
if temp is the problem). But be aware that /tmp is built from RAM.

About 2.:
The manual mountpoint setting is done during import. Napp-it sets this (default) mountpoint to avoid problems with
pools that were originally mounted under a different mountpoint (e.g. from NexentaStor, which mounts pools under /volumes).
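A quick way to check this from the CLI, using the pool name from the post above:

zfs get -r mountpoint media (what ZFS thinks the mountpoints should be)
ls -a /media (is anything already sitting there before the pool mounts?)
zfs mount -a (retry mounting everything once the offending files/folders are gone)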
 
A pool is always mounted first, then the filesystems. But the mountpoints must not already exist at that moment. It seems that a service (webserver?) has already created a real data folder with that name, so ZFS cannot mount over it.

Disable all extra services to see which one is the problem. That service needs to be started with a delay (or use /tmp for temp files,
if temp is the problem). But be aware that /tmp is built from RAM.

About 2.:
The manual mountpoint setting is done during import. Napp-it sets this (default) mountpoint to avoid problems with
pools that were originally mounted under a different mountpoint (e.g. from NexentaStor, which mounts pools under /volumes).

Thank you, Gea. That makes sense about the mountpoint being set by Napp-it to handle filesystems from different mount points.

I don't know what service could be creating the directories. This is a fresh default install of OpenIndiana with nothing else enabled or installed besides Napp-it. The webroot folder is used by a VM over NFS that is not powered on. The temp directory is just a place where I put files before I have time to sort them where they belong, so no applications access it directly.

When the pool fails to mount, I just noticed there is also a .hal-mtab-lock file that is created in /media. Could this be part of the cause? /media, /media/temp and /media/webroot are all shared via NFS, and /media and /media/temp are shared via SMB.
 
When the pool fails to mount, I just noticed there is also a .hal-mtab-lock file that is created in /media. Could this be part of the cause? /media, /media/temp and /media/webroot are all shared via NFS, and /media and /media/temp are shared via SMB.

I seem to have resolved the issue, and it was because of the .hal-mtab-lock file and the fact that I chose media and /media as my pool. HAL uses /media for its auto-mounting of devices, so it was creating /media on boot with .hal-mtab-lock in it. I don't need auto-mounting of any devices, so I've disabled HAL:

root@thomasomalley:/# svcadm disable system/hal
root@thomasomalley:/# svcs -a | grep hal
disabled 12:04:52 svc:/system/hal:default
root@thomasomalley:/# rm -rf media/
root@thomasomalley:/# reboot
root@thomasomalley:/# zpool import media

The pool has come back successfully after three reboots, so I think I'm good. Hope this helps someone else. Next time I won't use media as my pool name :)
 
(1) One question I had was to do with backups and snapshots. From what I understand, snapshots allow you to roll back changes to files, so accidental deletions etc. can be undone. Since we will primarily be using the main pool for backup and media storage, it is not that much of a requirement.

What I would like to do is take periodic backups onto an external disk. I have a 3 TB external USB 3.0 disk which I would like to use. From what I understand, OI does not support USB 3.0, but that is not a big deal. I would appreciate it if you could give me some pointers on how to set it up. The wikis have some references to ZFS snapshot streams, but I am not sure how they would work, or whether they would support incremental backups on the external drive.

As Gea mentioned, you can plug in a drive using USB 2.0 and make it a one-disk pool to rsync/zfs send to, etc. However, if you were to back up a 3 TB dataset at the ~25 MB/s you will get with USB 2.0, it will take about 33 hours! You may get better speeds if you have a second machine on a gigabit network, which may get closer to 100 MB/s if you're lucky. Another option, once you have a CPU that supports VT-d and you install VMware, is to pass through the onboard USB 3.0 controller (or add one in if there is none) to a Linux or Windows 7 VM. Then set up rsync or similar on that box to pull the data over VMware's internal high-speed network to the local USB 3.0 device.

Another option is to get an eSATA drive cage and find an eSATA controller supported by OI. You may need some scripts to import the external pool when it's hot-plugged, though.
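A sketch of the pull from such a VM (hostnames and paths are made up, and it assumes rsync plus SSH access on the OI box):

rsync -aHv --delete root@oi-server:/media/ /mnt/usb3disk/media/

-a keeps permissions and timestamps, -H preserves hard links, and --delete makes the copy a true mirror; leave it off if you want deleted files to survive on the backup.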
 
Can anyone guess why my shares went down? Got a call from the boss today (Sunday) saying he couldn't get onto his network drives. The first thing I checked was the stupid Active Directory server, but that wasn't it.
Everything in the napp-it GUI looked OK, except the ZFS Folder tab wouldn't load. It also froze my shell when I typed cd /tank/w<TAB>.

Rebooted from napp-it, and everything seems fine.

I had about a million mails in root's mailbox about:
Use of uninitialized value in subroutine entry at /usr/perl5/5.10.0/lib/i86pc-solaris-64int/DynaLoader.pm line 226.

Which I thought upgrading would have stopped, but anyways, I couldn't see any other mails.

oi_151a
napp-it 0.6r

What else can I check next time? My Solaris-fu is very weak.
 
I also noticed a weird problem. It has happened twice so far:

Out of nowhere, I was unable to connect to the server over the network. I did a restart, and when it came back, 2 iSCSI logical units were deleted (only the LUs, the data was intact). I then had to import the LUs (sbdadm import-lu) and add the iSCSI views, and then everything started to work again. Has anyone else had such problems and knows why?

By the way, is it safe to upgrade to OI 151 with an LSI 1068E SAS HBA?

Matej
 
Can anyone guess why my shares went down? Got a call from the boss today (Sunday) saying he couldn't get onto his network drives. The first thing I checked was the stupid Active Directory server, but that wasn't it.
Everything in the napp-it GUI looked OK, except the ZFS Folder tab wouldn't load. It also froze my shell when I typed cd /tank/w<TAB>.

Rebooted from napp-it, and everything seems fine.

I had about a million mails in root's mailbox about:
Use of uninitialized value in subroutine entry at /usr/perl5/5.10.0/lib/i86pc-solaris-64int/DynaLoader.pm line 226.

Which I thought upgrading would have stopped, but anyways, I couldn't see any other mails.

oi_151a
napp-it 0.6r

What else can I check next time? My Solaris-fu is very weak.

You can ignore/delete the Perl warnings. They are only a problem when the disk is full.
A blocking pool is a serious problem. It's mostly due to hardware problems (a blocking disk,
cabling or controller problems).

What you can do:
check the system log (napp-it menu system-log)
check the activity LEDs on the disks (constantly lit in case of problems)

Check at the CLI:
zpool list (should nearly always work)
zpool status (may block in case of disk/controller problems)

format or iostat -Enr (shows a list of disks and/or controllers; may block in case of disk or controller problems)

check the illumos.org bug list for known bugs
 
I also noticed a weird problem. It has happened twice so far:

Out of nowhere, I was unable to connect to the server over the network. I did a restart, and when it came back, 2 iSCSI logical units were deleted (only the LUs, the data was intact). I then had to import the LUs (sbdadm import-lu) and add the iSCSI views, and then everything started to work again. Has anyone else had such problems and knows why?

By the way, is it safe to upgrade to OI 151 with an LSI 1068E SAS HBA?

Matej

What version are you using (148?)? If so, an update is recommended
(151a or the 151a prestable). LSI 1068 is supported on every member of the Solaris family.
 
What version are you using (148?)? If so, an update is recommended
(151a or the 151a prestable). LSI 1068 is supported on every member of the Solaris family.

I thought I saw there were some problems with the mega_sas drivers when 151 came out. Something with I/O hangs... Anyway, I checked the bug tracker and it seems the problem is solved.

By the way, how do I make a snapshot of the root filesystem? I mean, not just a snapshot, but the whole boot environment, like what happens when I install napp-it (so that on boot, I get the pre_napp version and the current live one).

Thanks, Matej
 
I thought I saw there were some problems with the mega_sas drivers when 151 came out. Something with I/O hangs... Anyway, I checked the bug tracker and it seems the problem is solved.

By the way, how do I make a snapshot of the root filesystem? I mean, not just a snapshot, but the whole boot environment, like what happens when I install napp-it (so that on boot, I get the pre_napp version and the current live one).

Thanks, Matej

beadm
http://docs.oracle.com/cd/E23824_01/html/E24456/betools-6.html
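A minimal sketch of the beadm workflow (the BE name is just an example):

beadm create pre_update (clone of the current boot environment)
beadm list (shows all BEs and which one is active)
beadm activate pre_update (boot into it at the next reboot if an update goes wrong)

This is the same mechanism behind the pre_napp BE that the napp-it installer creates.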
 
Guys,

After having so much trouble with the network config of my Oracle Solaris 11 box, I decided to reinstall my NAS from scratch (as I had some irregularities, like the network going down every few days).

So I did what _Gea proposed and exported my diskpool, then reinstalled everything and, in the end, re-imported my diskpool again. This went very smoothly; it's really a joy with ZFS and napp-it!

When I checked back on my ZFS folders, I saw that my VMware NFS export had been correctly recreated. At the same time, I also reinstalled VMware on my second box (I don't have the all-in-one solution) and hence upgraded it to ESXi 5. I could attach the existing VMware NFS export without a problem.
With ESXi 5.0 they have introduced VMFS 5 as well, so I wanted to create another VMware NFS export that I could use with VMFS 5. When I create the second NFS export, napp-it tells me that it was created successfully. However, when I check through CIFS, I only see the "old" (original) VMware store. The new one is not visible, even though under Solaris I can see it under /diskpool/vm5. I tried to see it through CIFS using the root account. When I try to attach it using my VMware host, it gives me a "the host has actively refused mounting operation" error. I assume I have some kind of permission problem here, but I cross-checked with the existing VMware export, and it looks exactly the same to me.

Any idea what could be wrong here?

Thanks,
Cap'
 