But how can I see the temperatures from here?
Hmm, my M1015 flashed to LSI firmware reports temps. Is yours still on the RAID firmware, by chance?
Try running directly from the command line to see if it reports:
/usr/sbin/smartctl -a -d scsi /dev/rdsk/c4t5000CCA22EC0BD39d0
For example, my 9211 reports in the GUI just fine; my BR10i doesn't report anything in the GUI but still works fine from the CLI using the command above.
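If SMART gets through, the SCSI output should include temperature lines along these lines (values here are just illustrative; the exact wording varies by drive):

Current Drive Temperature:     34 C
Drive Trip Temperature:        85 C

If smartctl errors out instead, the card is probably still hiding the disks behind its RAID layer.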
# Auto-Shutdown   Idle(min)   Start/Finish(hh:mm)   Behavior
autoshutdown      30          0:00   0:00           shutdown
OpenSolaris PowerTOP version 1.2

C-states (idle power)    Avg      Residency    P-states (frequencies)
C0 (cpu running)                  (2.5%)       1200 Mhz          98.1%
C1                       0.0ms    (0.0%)       1333 Mhz           0.0%
C2                       1.8ms    (3.8%)       1467 Mhz           0.0%
C3                       2.0ms    (93.7%)      1600 Mhz           0.0%
                                               1733 Mhz           0.0%
                                               1867 Mhz           0.0%
                                               1917 Mhz(turbo)    1.9%

Wakeups-from-idle per second: 3854.6    interval: 5.0s
no ACPI power usage estimate available

Top causes for wakeups:
 46.7% (1799.9)   sched : <xcalls> unix`dtrace_xcall_func
 12.4% ( 477.2)   sched : <xcalls> unix`speedstep_pstate_transition
  3.1% ( 119.8)   <kernel> : genunix`cv_wakeup
  3.0% ( 115.0)   perl : <xcalls> unix`speedstep_pstate_transition
  2.6% ( 100.2)   <kernel> : genunix`clock
  1.3% (  50.0)   <kernel> : SDC`sysdc_update
  0.2% (   8.2)   <kernel> : cpudrv`cpudrv_monitor_disp
  0.2% (   7.6)   <kernel> : ehci`ehci_handle_root_hub_status_change
  0.1% (   4.0)   <kernel> : genunix`schedpaging
  0.1% (   2.8)   syslogd : <xcalls> unix`speedstep_pstate_transition
  0.1% (   2.8)   sleep : <xcalls> unix`speedstep_pstate_transition
  0.1% (   2.8)   nscd : <xcalls> unix`speedstep_pstate_transition
  0.1% (   2.8)   bash : <xcalls> unix`speedstep_pstate_transition
  0.1% (   2.8)   ntpd : <xcalls> unix`speedstep_pstate_transition
  0.1% (   2.0)   <kernel> : e1000g`e1000g_local_timer
  0.0% (   1.6)   smbd : <xcalls> unix`speedstep_pstate_transition
  0.0% (   1.4)   hald-addon-acpi : <xcalls> unix`speedstep_pstate_transition
  0.0% (   1.4)   devfsadm : <xcalls> unix`speedstep_pstate_transition
  0.0% (   1.4)   <interrupt> : e1000g#0
  0.0% (   1.4)   fsflush : <xcalls> unix`speedstep_pstate_transition
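Most of those wakeups are SpeedStep P-state transition cross-calls, which you can tame via /etc/power.conf (a sketch; the threshold value is just an example):

cpupm enable
cpu-threshold 10s

Apply it with pmconfig and re-run powertop. Setting cpupm disable instead removes the speedstep_pstate_transition entries entirely, at the cost of the CPU staying at its full clock.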
I've decided to toss a 1366 quad-core Xeon into the X58 board and use 4x 4GB ECC DDR3, probably going with those Ultrastars (starting with two in a RAID-Z1, then later dumping and recreating the pool as RAID-Z2 or Z3 once I obtain more disks). It's a very modest setup here at the home office; I think it should be sufficient.
Reading the documentation (http://www.napp-it.org/doc/downloads/napp-it.pdf), it says NFS v4 is also supported and allows user-based permissions and ACLs (not used with ESXi). Does that mean an ESXi host ignores/is incompatible with user-based permissions/ACLs, or that napp-it NFS4 and ESXi are incompatible entirely and I should use NFS3 instead? I am using ESXi 5.5.0.
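For what it's worth, ESXi 5.5 only speaks NFS v3 (NFS 4.1 support arrived in 6.0), so the v4 user/ACL features simply don't come into play there, which is presumably what the "(not used with ESXi)" note means. A typical share for an ESXi datastore on OmniOS looks something like this (pool/dataset and subnet are placeholders; ESXi needs root access to the export):

zfs set sharenfs=rw=@192.168.1.0/24,root=@192.168.1.0/24 tank/vmstore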
Thanks for the advice
napp-it free doesn't allow ACL permissions??
Not at all, the motherboard has to have the traces and BIOS support for ECC. Typical Intel consumer boards don't have them, only workstation/server ones.
Been using napp-it + OI for some time now on ESXi, alongside Unraid and a Win7 VM.
I have one RAID-Z1 vdev consisting of 8x 2TB drives. The vdev is getting full, so I've bought 4 new WD Red 3TB drives and will swap 4 more 3TB drives over from Unraid, for 8x 3TB in total.
I would like to replace one disk at a time and resilver, as I need the swapped-out drives back in Unraid for data. After all drives have been resilvered, I would like to expand the pool from 16TB to 24TB.
How do I proceed, and what do I need to be careful of?
Drives have arrived, and I've enabled autoexpand. Do I just power down the server and swap the drive?
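The disk-at-a-time swap from the CLI looks roughly like this (pool and c#t#d# names below are placeholders for your own):

zpool set autoexpand=on tank
zpool offline tank c1t1d0             # old 2TB disk
# power down, swap in the 3TB disk, power back up
zpool replace tank c1t1d0 c1t9d0      # old name -> new disk's name
zpool status tank                     # watch the resilver

Repeat per disk, but only pull the next 2TB drive after the previous resilver has completed: with RAID-Z1 you have zero redundancy while a disk is out. When the last replace finishes and autoexpand is on, the pool grows to 24TB by itself.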
Technically it is OmniOS and Napp-IT.
Thanks for the explanations!
Is there a way to replace the compiled smartmontools 6.2 with the older version (I don't know which version it was at the time) that worked for me?
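If you can pin down the release that worked, building an older smartmontools from source is straightforward (5.43 below is just a placeholder version; grab the matching tarball from the project's SourceForge archive):

tar xzf smartmontools-5.43.tar.gz && cd smartmontools-5.43
./configure --prefix=/usr
make && make install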
About shares: can I have, at the same time,
- guest access that can only read, not write, modify, or delete
- full access for root?
Because I'm not asked for a password, I just can't see the dataset from Windows if guest access is disabled.
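That split is exactly what NFSv4 ACLs handle. A sketch assuming the illumos kernel SMB server (pool/dataset names are placeholders): keep guest access enabled on the share, then give everyone@ read-only and root everything:

zfs set sharesmb=name=media,guestok=true tank/media
/usr/bin/chmod -R A=everyone@:read_set:fd:allow,user:root:full_set:fd:allow /tank/media

read_set and full_set are the built-in permission aliases, and the fd flags make new files and folders inherit the entries. This CLI route works regardless of whether you have napp-it's ACL extension.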
About vdevs: I wanted to make a pool of two vdevs, each a 12-HDD RAID-Z2. Do I just make one vdev in the "create pool" menu and then extend the pool with a second one, and the two Z2 vdevs will be striped together automatically?
About L2ARC: I noticed the devices need to be added one by one, but if I add two L2ARC SSDs to my two-vdev pool, are they tied one L2ARC per vdev, or pooled into some sort of big JBOD cache for the whole pool?
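On the CLI, both answers fall out of the same commands (pool and disk names are placeholders). Top-level vdevs added to a pool are always striped, and cache devices belong to the pool as a whole, never to a particular vdev:

zpool create tank raidz2 c1t0d0 c1t1d0 ... c1t11d0   # first 12-disk Z2 vdev
zpool add tank raidz2 c2t0d0 c2t1d0 ... c2t11d0      # second Z2 vdev, striped with the first
zpool add tank cache c3t0d0 c3t1d0                   # both SSDs cache the whole pool

L2ARC devices aren't mirrored or concatenated in any vdev sense either; each just holds its own slice of evicted ARC data, and losing one is harmless.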
This indicates that this control doesn't understand IPv6.