OpenSolaris derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

OK, more problems, and I searched for this error but found nothing recent. Every page that has anything to do with disks gives me "Failed to initialise libfdisk."

Half the drives are connected directly to the Supermicro motherboard and the other half to an LSI 9240. OmniOS saw all the drives fine during installation. Again, any ideas? Thanks

The LSI 9240 is an LSI 2008 controller with RAID-5 firmware - not what you want for ZFS.
Ideal for ZFS would be a raidless HBA with IT firmware (e.g. an LSI 9211 with IT firmware).

Maybe you can flash the LSI 9211-IT firmware, just like many people do with the IBM M1015 (a 9240 variant).
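For reference, the usual crossflash outline for an IBM M1015 / LSI 9240 class card, as described in the common community guides - the tool and firmware file names below (sbrempty.bin, 2118it.bin, mptsas2.rom) are the ones those guides use and should be treated as assumptions, so double-check against a current guide before flashing:
Code:
rem run from a DOS boot stick; flashing is at your own risk
megarec -writesbr 0 sbrempty.bin            rem wipe the SBR so the LSI tools accept the card
megarec -cleanflash 0                       rem erase the existing 9240 firmware, then reboot
sas2flsh -o -f 2118it.bin -b mptsas2.rom    rem flash the 9211-8i IT firmware (the BIOS image is optional)
sas2flsh -o -sasadd 500605bxxxxxxxxx        rem restore the SAS address printed on the card's sticker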
 
Is there a way to give AD users SSH access? I'm looking around and don't see it.

Built-in AD support is for the CIFS server only.
I would not say it's impossible, but I have never tried it and never heard of anyone who has done this.
 
The LSI 9240 is an LSI 2008 controller with RAID-5 firmware - not what you want for ZFS.
Ideal for ZFS would be a raidless HBA with IT firmware (e.g. an LSI 9211 with IT firmware).

Maybe you can flash the LSI 9211-IT firmware, just like many people do with the IBM M1015 (a 9240 variant).

The 9240 is an HBA, nicknamed "skinny".
RAID-5 on the 9240 is a no-no!!

If you are on Linux or Windows, you do NOT need to flash to 9211: the firmware presents all unconfigured drives as JBOD.
But...
when you are talking about OpenSolaris variants, flashing to 9211 IT is a must.


I am using an IBM M1015 with the 9240 LSI firmware on CentOS (Linux); everything runs smoothly and performance is as I expected.
In my testing, 9211 IR is a different story, as it gives less performance with all drives as JBOD; 9211 IT is top notch.

I would like to break the "myth" that it must be flashed to 9211 IT.
It depends on what OS you are using :p
 
 
I see two questions.

One is whether, and how well, your OS of choice supports a particular hardware/firmware combination (RAID firmware requires a different controller driver). In the case of Solaris, the LSI 9240 with its lousy RAID-5 capability is not used in professional storage boxes and is therefore not, or not well, supported, while the HBA variants with the same SAS chip, like the 9211, are among the best supported.

The other question is what type of driver and firmware is best for ZFS.
That is a firmware and HBA disk driver that is as lean as possible, with only one task: pass the disks through to the OS. Even with a well-supported 9211 with IR (RAID) firmware, where I have not seen performance or stability problems myself, I prefer IT firmware, or I buy a controller from LSI that ships with IT firmware by default, like the 9207. (A reflash may still be needed if it comes with P20 and you are using high-performance SSDs, due to reported stability problems.)

The more complex and less common a firmware or driver is, the more probable it is that problems remain. The discussions about the preferred LSI firmware releases and the current problem reports about the P20 line of LSI firmwares show how important the firmware/driver is. Even if you do not actually use the RAID capability of a firmware or driver, it is still there and processing the data. The more code, the more potential problems.

If the IBM M1015 were available with all three firmware options (RAID-5/RAID-1/raidless) at the same price, everyone would buy the raidless option for ZFS. This discussion came up because IBM cards with a RAID-5 firmware option that is useless in professional environments are available for a fraction of the price of a generic LSI controller, where you can order the firmware option of your choice at a similar price and reflash if another one is needed (which you can do with the IBM as well, since it is a rebranded LSI).
 
A quick question. Is it ok to copy/delete files from a pool while resilvering a new disk?
 

There is no usage restriction during a scrub or resilver, aside from a slight performance degradation.
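If you want to keep an eye on it while you work, a minimal check (the pool name "tank" is only an example):
Code:
zpool status -v tank     # shows resilver progress, estimated completion time and any errors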
 
Thanks everyone for the help. I didn't want to use that card but had no choice; I have always used the flashed IBMs in my previous servers. It turned out to be a couple of SSDs that had been in a hardware RAID before that were throwing the error. I booted with GParted and started getting the same sort of lib errors, but was able to clear out the partitions using GParted, and everything is fine now. Thanks again.
 
Has anybody out there got experience with PCIe extension cables?
I have to use one because of my case, a U-NAS NSC-800, to get the IBM M1015/LSI 9211-8i IT working.
OmniOS r101015u reports the following error, which doesn't look good.
I've already tried four different cables, but the result is always the same.
Without the PCIe extension cable everything is fine.
OmniOS seems to be very picky when the communication over the bus isn't perfect.
With the same hardware and Ubuntu installed I couldn't see a similar message.
I'm not an expert, but Ubuntu's /var/log/syslog didn't mention anything.
Any ideas?

Code:
--------------- ------------------------------------  -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------  -------------- ---------
Sep 20 14:06:43 abb38e47-cd60-6638-bd03-bbd5cf1bbd51  PCIEX-8000-KP  Major     

Host        : sefs
Platform    : A1SAi	Chassis_id  : 123456789
Product_sn  : 

Fault class : fault.io.pciex.device-interr-corr max 50%
              fault.io.pciex.bus-linkerr-corr 25%
Affects     : dev:////pci@0,0/pci8086,1f12/pci1000,3020@0
              dev:////pci@0,0/pci8086,1f12@3
                  faulted and taken out of service
FRU         : "MB" (hc://:product-id=A1SAi:server-id=sefs:chassis-id=123456789/motherboard=0)
                  faulty

Description : Too many recovered bus errors have been detected, which indicates
              a problem with the specified bus or with the specified
              transmitting device. This may degrade into an unrecoverable
              fault.
              Refer to http://illumos.org/msg/PCIEX-8000-KP for more
              information.

Response    : One or more device instances may be disabled

Impact      : Loss of services provided by the device instances associated with
              this fault

Action      : If a plug-in card is involved check for badly-seated cards or
              bent pins. Otherwise schedule a repair procedure to replace the
              affected device.  Use fmadm faulty to identify the device or
              contact Sun for support.

I've found a solution for unreliable PCIe extension cables.
Buy more expensive cables :cool:: "3M™ Twin Axial PCI Express X8 Extender Assemblies"
Around 75 EUR at Digi-Key or others.

At first I wasn't sure whether it was worth the money and whether I should give it a try.
But it is! No errors like the ones mentioned above in the past four months!
Forget all the other no-name cables; I've tested many.
I'm using this cable in my U-NAS NSC-800 case with an IBM M1015/LSI 9211-8i IT controller connected to a Supermicro A1SRi-2758F motherboard.
The build isn't cheap, but it is very compact for an 8-bay NAS, and that is what I was looking for. I'm really happy with it and would recommend it.
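To verify that a new cable really cured it, the illumos fault manager can be watched for a while; a minimal check:
Code:
fmadm faulty    # should stay empty once the marginal link is gone
fmdump -e       # error log; new ereport.io.pciex.* entries would point to remaining bus problems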
 
I'm using a long spin-down threshold of 4200 s for my drives.
The drive spin-down works fine as long as I don't log in to the web GUI after a restart.
If I do log in to the web GUI, the disks won't spin down after the given threshold,
even if I log off from the web GUI.
SSH sessions are no problem.

With shorter thresholds the spin-down works (e.g. below 1800 s)!

This problem wasn't there in older napp-it versions.
As far as I remember, it came with 0.9f1 last summer.
Now it seems that there is a task touching the drives on a regular basis (every >1800 s) once you have logged in to the web GUI after a restart.

My system:
OmniOS r151012, napp-it 0.9f3
IBM M1015/LSI 9211-8i IT controller connected to a Supermicro A1SRi-2758F
 
Within napp-it, you have the following (optional) activities:
- alert jobs (check the pool state)
- GUI accelerator (requests stats in the background)
- realtime monitoring

You can disable all three either in menu Jobs or in your top-level menu (upper right).
You can also stop napp-it via /etc/init.d/napp-it stop (start).

Other options:
- SMART checks
- the Solaris fault management service fmd
- other logging tasks
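As a console sketch of the stop/start option and a quick look at the other services mentioned (stock OmniOS/illumos commands):
Code:
/etc/init.d/napp-it stop     # stop the napp-it web-UI and its background agents
/etc/init.d/napp-it start    # start it again
svcs fmd                     # state of the Solaris fault management service
fmadm config                 # list the fmd modules that may poll devices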
 
What I would do
- buy a cheap SSD (30GB) and use it to boot ESXi and as a local datastore for OmniOS/napp-it
- pass-through the LSI 2308 with the Samsung SSDs and the Seagates

- create two pools (SSD mirror and Z1 from the Seagates)
- share a filesystem on the SSD pool via NFS, use it as ESXi datastore for your VMs

- share a filesystem on the Z1 pool via SMB for general use and for VM backups
- create a zvol on the Z1 pool (a filesystem shared as a blockdevice).
Use this as a mass storage device via iSCSI

(another option may be: use an OmniOS SMB share if Sage can store files on a share.
This gives you a file-based snapshot capability; iSCSI only gives a disk-based snapshot option.)
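A rough command-level sketch of that layout (pool, filesystem and volume names are only examples; napp-it does the same via its menus):
Code:
# pools: SSD mirror for VMs, RAID-Z1 from the Seagates (device names are placeholders)
zpool create ssd mirror c1t0d0 c1t1d0
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

# NFS datastore for ESXi on the SSD pool
zfs create ssd/vm
zfs set sharenfs=on ssd/vm

# SMB share for general use and VM backups on the Z1 pool
zfs create tank/data
zfs set sharesmb=on tank/data

# zvol on the Z1 pool, exported via iSCSI (COMSTAR)
zfs create -V 500G tank/mass
svcadm enable -r svc:/system/stmf:default
svcadm enable -r svc:/network/iscsi/target:default
sbdadm create-lu /dev/zvol/rdsk/tank/mass
stmfadm add-view <GUID reported by sbdadm>
itadm create-target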

Gea, it's taken a while for me to get everything ironed out - thanks for the help.

On the above I have most things working.

VMs are on the SSD mirror and working great.

I'm at the point where I need to set up NFS on the Z1/Seagate pool. What is the reason to carve out a zvol with iSCSI? Is it just to segregate data? Couldn't I just use NFS for the whole thing, with VM backups, general use and the SageTV media files (huge) on it?
 
Hi all,
It's been a while since I was here, as my NAS was working great for the last two years. As it was slowly running full, I recently decided to reinstall it completely from scratch with larger disks. So I rebuilt the box with OmniOS as a standalone NAS and almost everything is working great (once more, great job _Gea, thanks a lot for your efforts!).
Since I also bought a Mac mini in the meantime, I decided to create an AFP share to put all my music on it. I can access it from the Mac with no problems; however, when I try to import music into iTunes and let iTunes copy the music files to the NAS AFP folder, I always get the error message "the destination cannot be read or written to". However, I can access the folder in Finder with no problems and move data there manually. Note that I have mapped the NAS share using afp://NAS1/mp3. AFP is way faster than SMB, so that is something I really would like to take advantage of. I have tried setting "Sync" to disabled, to no avail. My OS is the OS X release before Yosemite (I don't remember the name off the top of my head).
Any ideas or experiences on how to make this work? Any hint is very much appreciated.
Thanks and kind regards,
Cap'
 
Check the ZFS properties:
- aclmode and aclinherit: pass-through
- nbmand: off

Check the ACLs (best: start with the default, everyone@=modify with inheritance for files and folders):
- root level of your filesystem: modify for everyone@ (minimum: folder only, inheritance can be off)
- restrictions can be done on folders
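On the console this corresponds to something like the following (the filesystem name is just an example):
Code:
zfs get aclmode,aclinherit,nbmand tank/mp3     # inspect the current values
zfs set aclmode=passthrough tank/mp3
zfs set aclinherit=passthrough tank/mp3
zfs set nbmand=off tank/mp3                    # nbmand changes only take effect after a remount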

A last word:
AFP is faster than SMB1 (this is a Mac-only statement; on Windows/Linux SMB1 is much faster),
but AFP is end of life, as Apple is moving to SMB and is optimizing for SMB2/3.

SMB2/3 is currently not in OmniOS/CIFS but is in SAMBA or NexentaStor CIFS.
But there is work on this, see http://surge.omniti.com/2014/illumos-day

Other option: NFS (also very fast, mount via nfs://ip/zfs)
 
Thanks _Gea for your quick response.
aclmode and aclinherit as well as nbmand have been set as required. I also had the ACLs set to everyone@=modify as you suggested. The odd thing is that I was able to write some files there, and then it suddenly threw that error. I know that Apple is not pursuing AFP any further, but at the moment it's still much better than the SMB integration in OS X 10.9.5. At my workplace, we have to use ExtremeZ-IP because the SMB support is so poor on new Macs. I am aware though that AFP will not last, hence I really liked your idea with NFS; I hadn't thought of that.
So I removed the ZFS folder I had with AFP and created a new one with NFS support. I already had one for my VMware pool, which works perfectly fine with ESXi 5.5. However, when I try to connect to the NFS share with my Mac, I always get an instant "Access denied"... although meanwhile I am testing with everyone@=full set permissions. Is there something I need to enable on the Mac so it supports NFS? I know this is a Mac question; I thought you might know it though...
Thanks and regards,
Cap'
 
Usually it should work like this:
- set nbmand to on (in contrast to AFP)
- aclinherit=pass-through, aclmode can be discard
- set the ACL to default (everyone@=modify or full)
- set NFS to on

On your Mac:
Finder menu Go - Connect to Server
nfs://ip/pool/filesystem
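As a console sketch (filesystem name, IP and mountpoint are examples; the Terminal mount is an alternative to the Finder dialog, and resvport is an option the Mac NFS client often needs when a server insists on privileged ports):
Code:
# on OmniOS
zfs set nbmand=on tank/mp3
zfs set sharenfs=on tank/mp3

# on the Mac, from Terminal instead of Finder > Go > Connect to Server
sudo mkdir -p /Volumes/mp3
sudo mount -t nfs -o resvport 192.168.1.10:/tank/mp3 /Volumes/mp3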
 
OK, that worked so far. Now there are two odd things, I'm afraid:
1. it created 4 files when I moved the iTunes DB there: ._.DS_Store, .DS_Store, ._.iTunes...plist and .iTunes...plist. Is that normal?
2. every now and then, I get an "mp3 got disconnected" message. Settings are as you proposed. Would it help to disable sync?
 
The ._ files are a relic from the old OS 9 days, when Apple was proud that it did not need file extensions to detect a file type. It achieved this with a data fork and a resource fork for each file, where the resource fork contained the type and creator of the file.

Today, when you write a file on a Mac to a non-HFS filesystem, it creates the ._ file containing the creator information. On a Mac this file is usually hidden.

Sync is used on writes, not reads.
But for a pure filer, sync can be disabled for much better performance. You mainly need it for secure transactions or if you store VM files with a foreign (non copy-on-write) filesystem.

Will disabling it help here - maybe (not).
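If you want to try it, it is a single property and easily reverted (the filesystem name is an example):
Code:
zfs set sync=disabled tank/mp3
zfs get sync tank/mp3          # revert later with: zfs set sync=standard tank/mp3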
 
Thanks _Gea, that's very interesting. Unfortunately, disabling sync didn't help, so I guess I'll let the idea go now and go for SMB instead. It seems I'll have to go this route one day anyway, so I might as well do it now.
I understand SMB support is better in Yosemite, but I am afraid my hardware will not do well with that, so I'll stay and see if I can get it running with SMB on my current 10.9.5.
As usual, thanks a lot for your support; something new learnt once more!
Kind regards,
Cap'
 
Gea, what's the easiest way to downgrade 0.9f4 to 0.9d2? Can it be done via a wget switch,
such as:
wget -O - www.napp-it.org/nappit9d2 | perl

Or do I need to do a full reinstall?

EDIT: never mind, 0.9d2 showed up in the upgrade options now.

Any plan to allow trivial ACLs in the future, or is that always going to be a paid feature going forward?
 
If you update (menu About - Update) you can go back to any of the last 5 installed releases.

Resetting ACLs to everyone@=modify is a free option.
Beside that, you can set any ACL from Windows (as user root).
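The same reset can also be done on the console with the illumos NFSv4 ACL syntax of chmod, e.g. (the path is an example):
Code:
/usr/bin/chmod -R A=everyone@:modify_set:file_inherit/dir_inherit:allow /tank/data
/usr/bin/ls -dV /tank/data     # -V shows the resulting NFSv4 ACL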
 
Hello,

I am using ZFS for a home media server: 10x Toshiba 3 TB in raidz2 with ECC RAM. How often should I scrub the pool? I read somewhere to do it once a week for consumer drives and once a month for enterprise drives.

I was thinking of meeting in the middle, to put less wear and tear on my drives, and scrubbing every 2 weeks. Does that sound logical? I have had it set to once a week for the last 2 years without issue but wonder if I should change it.
 
There is no rule for everyone - it always depends on your concerns about silent data corruption.
 
Sounds reasonable. How long do your scrubs take?
I would be more concerned about a quality reliable power supply and a separate backup.
 
Beside those concerns, scrub time is the limiting factor,
as scrubbing reduces performance.

I have low usage at weekends (students..),
so I do scrubs on the first Saturday of a month (I mainly use HGST desktop disks), as they finish by Monday.
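napp-it can schedule this as an auto-scrub job in its Jobs menu; with plain cron it would look roughly like this (pool name is an example, schedule simplified to the 1st of each month):
Code:
# root crontab entry: scrub "tank" at 02:00 on the 1st of every month
0 2 1 * * /usr/sbin/zpool scrub tank
# check progress and results afterwards
/usr/sbin/zpool status tank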
 
As of last week, I have noticed extreme lag and videos skipping while streaming. This has never happened before. The pool successfully ran a scrub on 3/4/15, although it did take a few hours longer than normal: it usually takes 8-9 hours, the last scrub took 13 hours.

Screenshot:



Now I see that one drive is throwing "errors", I assume? Does it simply mean the drive is failing? If so, what exact steps do I need to take to replace it? I have a spare of the same drive and model lying around. This is a raidz2 config. Also, what exactly do the S H T values stand for? I googled around and could not find anything.

The zpool status command shows no known data errors. I did open up the case a few days ago to blow out dust. Could this be caused by a loose cable?

Thanks!
 
Solaris 11.1, napp-it 0.9e2l, 2x IBM M1015 in IT mode, 16x ST4000DM000 (Seagate 4 TB), 2 independent pools (RAID-Z2) configured: each controller -> 8 HDDs in raidz2.

Everything went fine for 2 years until yesterday, when one HDD was accidentally pulled out. I plugged it back in and ran "zpool online poolname c0t5000C50050ABD0A0d0". After that everything was fine - the pool was online, all drives available.
I decided to scrub the pool and 30 minutes later I found the system in this state:
Code:
pool: bufo1
 state: SUSPENDED
status: One or more devices are unavailable in response to IO failures.
        The pool is suspended.
action: Make sure the affected devices are connected, then run 'zpool clear' or
        'fmadm repaired'.
   see: http://support.oracle.com/msg/ZFS-8000-HC
  scan: resilvered 256K in 0h0m with 0 errors on Fri Mar 13 14:39:53 2015
config:

        NAME                       STATE     READ WRITE CKSUM
        bufo1                      UNAVAIL      0     0     0
          raidz2-0                 UNAVAIL      0     0     0
            c0t5000C50050AB4F1Ad0  UNAVAIL      0     0     0
            c0t5000C50050AB6EF0d0  UNAVAIL      0     0     0
            c0t5000C50050AB8251d0  UNAVAIL      0     0     0
            c0t5000C50050ABCCE5d0  UNAVAIL      0     0     0
            c0t5000C50050ABD0A0d0  UNAVAIL      0     0     0
            c0t5000C50050ABDB63d0  UNAVAIL      0     0     0
            c0t5000C50050ABEC87d0  UNAVAIL      0     0     0
            c0t5000C50050AC64B3d0  UNAVAIL      0     0     0

device details:

        c0t5000C50050AB4F1Ad0    UNAVAIL          experienced I/O failures
        status: ZFS detected errors on this device.
                The pool experienced I/O failures.

        c0t5000C50050AB6EF0d0    UNAVAIL          experienced I/O failures
        status: ZFS detected errors on this device.
                The pool experienced I/O failures.

        c0t5000C50050AB8251d0    UNAVAIL          experienced I/O failures
        status: ZFS detected errors on this device.
                The pool experienced I/O failures.

        c0t5000C50050ABCCE5d0    UNAVAIL          experienced I/O failures
        status: FMA has faulted this device.
        action: Run 'fmadm faulty' for more information. Clear the errors
                using 'fmadm repaired'.

        c0t5000C50050ABD0A0d0    UNAVAIL          experienced I/O failures
        status: ZFS detected errors on this device.
                The pool experienced I/O failures.
           see: http://support.oracle.com/msg/ZFS-8000-QJ for recovery

        c0t5000C50050ABDB63d0    UNAVAIL          experienced I/O failures
        status: ZFS detected errors on this device.
                The pool experienced I/O failures.

        c0t5000C50050ABEC87d0    UNAVAIL          experienced I/O failures
        status: FMA has faulted this device.
        action: Run 'fmadm faulty' for more information. Clear the errors
                using 'fmadm repaired'.

        c0t5000C50050AC64B3d0    UNAVAIL          experienced I/O failures
        status: ZFS detected errors on this device.
                The pool experienced I/O failures.


"Format" command shows the drives ,napp-it too:
Code:
c0t5000C50050AB4F1Ad0 	 (!parted) 	 ok 	 bufo1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 Z300H5CD 
 c0t5000C50050AB6EF0d0 	 (!parted) 	 ok 	 bufo1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 Z300H58F 
 c0t5000C50050AB8251d0 	 (!parted) 	 ok 	 bufo1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 Z300HJ44 
 c0t5000C50050ABCCE5d0 	 (!parted) 	 ok 	 bufo1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 Z300HGTD 
 c0t5000C50050ABDB63d0 	 (!parted) 	 ok 	 bufo1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 Z300HGVN 
 c0t5000C50050ABEC87d0 	 (!parted) 	 ok 	 bufo1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 Z300HHM1 
 c0t5000C50050AC64B3d0 	 (!parted) 	 ok 	 bufo1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 Z300HHEC 
 c0t5000C5006086ABAFd0 	 (!parted) 	 ok 	 edit1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 W3008XPW 
 c0t5000C50060870B51d0 	 (!parted) 	 ok 	 edit1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 W3008W2H 
 c0t5000C5006087381Bd0 	 (!parted) 	 ok 	 edit1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 W3008TN4 
 c0t5000C50060874E05d0 	 (!parted) 	 ok 	 edit1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 W3008CB8 
 c0t5000C50060874F0Dd0 	 (!parted) 	 ok 	 edit1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 W3008CAJ 
 c0t5000C5006091D647d0 	 (!parted) 	 ok 	 edit1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 W3008M43 
 c0t5000C5006092AA09d0 	 (!parted) 	 ok 	 edit1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 W300A889 
 c0t5000C5006092E6F8d0 	 (!parted) 	 ok 	 edit1 	   	 - 	 - 	   	 4 TB 	 - 	 - 	 - 	   	   	 ATA 	 ST4000DM000-1F21 	 W3009A95

but strangely ZFS can't access them. I did a few system restarts without any change.
fmadm faulty + fmadm repaired did not help, same for zpool clear.
I'm in big trouble, any ideas?

P.S.
Right now I discovered that immediately after a system restart the pool is online in a degraded state with only 1 HDD unavailable; 5 seconds later all of its HDDs and the pool itself become unavailable!!!
 
Also, what exactly do the S H T values stand for? I googled around and could not find anything.

The zpool status command shows no known data errors. I did open up the case a few days ago to blow out dust. Could this be caused by a loose cable?

iostat collects all driver warnings since the last bootup as hard, soft and transfer errors.
Most important are the hard errors, as they can indicate bad sectors, resulting at least in a speed degradation. These warnings sit below real ZFS errors; I have seen disks with several thousand errors on a working (but slow) pool.

When the iostat error rate grows, I usually remove the disk and do a low-level test with a tool from the disk manufacturer. Mostly, after some repair actions, I can re-use the disk. I then mark the disk so I can trash it if the problem occurs again.

A cable can be the reason too, as well as a bad PSU, but mostly you have bad sectors.
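The same counters can be read directly on the console; the S/H/T columns in napp-it correspond to the soft/hard/transport errors that iostat reports per device:
Code:
iostat -En      # per-device error summary since boot: Soft Errors, Hard Errors, Transport Errors
iostat -en 5    # same counters as columns (s/w, h/w, trn) plus I/O statistics, refreshed every 5 seconds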
 
Right now I discovered that immediately after a system restart the pool is online in a degraded state with only 1 HDD unavailable; 5 seconds later all of its HDDs and the pool itself become unavailable!!!

Remove that disk; check zpool status or the logs to find it (or remove all disks and insert them one by one until the problem occurs). I suppose the disk is semi-dead and is blocking the controller.

Unlike hardware RAID, ZFS will not kill the raid with such a procedure.
The pool only stays offline until enough disks come back.
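Once the suspect disk is out, a sequence like this usually brings the pool back (pool name taken from the output above):
Code:
zpool clear bufo1        # retry the suspended I/O and clear the error counters
zpool status -v bufo1    # should now show DEGRADED (one disk missing) instead of SUSPENDED
fmadm faulty             # remaining FMA faults can be acknowledged with 'fmadm repaired <fmri>'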
 
The situation is under control!! :) _Gea is Nostradamus!
One of the HDDs caused the issue - probably semi-dead or something like that. I found it by inserting the HDDs one by one, restarting the system in between.
Thanks!
 
Hi everyone,

Setup = Solaris Express 11 + napp-it 0.9f4

I'm a little bit confused with my Solaris/napp-it config. I'm having trouble configuring the NFS server to serve NFS version 4.1. I've run:

# sharectl set -p server_versmin=4 nfs
# sharectl set -p server_versmax=4 nfs

but on the client side, in this case ESXi 6.0, I get this error returned by ESXi:

WARNING: NFS41: NFS41ExidNFSProcess:2022: Server doesn't support the NFS 4.1 protocol

Do you have any idea that could help me? I was thinking that 4.1 is included in "versmax=4", but maybe there is no 4.1 support?

Thanks a lot for any contribution.
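One thing worth checking is what the server actually applies after the change; a minimal check (the service FMRI is the stock one):
Code:
sharectl get nfs                                   # lists all NFS properties, including server_versmin/server_versmax
svcadm restart svc:/network/nfs/server:default     # make sure the server has picked up the new settings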
 
Thank you for this answer ;-)

Does anyone know how to get NFS 4.1 with napp-it?
Another version of Solaris? Another distro?

Thanks
 
Gea,

Can you think of any reason why net-snmp on the OmniOS releases wouldn't show ZFS pool storage info? I only ask because my older OpenIndiana servers show it just fine. I have even replaced the /etc/net-snmp and /var/net-snmp folders on an OmniOS server with the OI versions, but still no luck. I'm not sure if it's an OS issue on my end or a config issue on the SNMP monitoring end (Observium). I know this isn't really your area, but I figured I would ask. Thanks for your time!

 
Found the issue:
"17. Known Bugs

A. hrDeviceTable (HOST-RESOURCES-MIB)

This section of code is only aware of disk controllers 0 through 7.
Hence, anything on controller c8 and above will be invisible.

B. hrPartitionTable (HOST-RESOURCES-MIB)

At present, hrPartitionSize data only works for regular ufs partitions, e.g. /dev/dsk/c0t0d0s0, that are mounted. They are displayed in partition order rather than the order they are mounted. Partitions mounted as mirrors, metastate database replicas, swap or members of a RAID display size 0.

As a workaround, put entries for the disks you are interested in in snmpd.conf and examine them using UCD-SNMP-MIB.

-- Bruce Shaw"

http://www.net-snmp.org/docs/README.solaris.html
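Following that workaround, the dskTable of UCD-SNMP-MIB can be fed by listing the ZFS mountpoints explicitly in snmpd.conf; a minimal sketch (the paths and thresholds are assumptions, the config file location depends on the net-snmp install):
Code:
# snmpd.conf -- expose filesystem usage via UCD-SNMP-MIB dskTable instead of HOST-RESOURCES-MIB
disk /            10%
disk /tank        10%
disk /tank/backup 10%

# query from the monitoring host afterwards (community and hostname are placeholders)
snmpwalk -v2c -c public <omnios-host> UCD-SNMP-MIB::dskTable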
 