OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Too bad you didn't go with OpenIndiana; power management works there, and I don't think Solaris 11 is better in any way. It only has additional ZFS encryption support; everything else should be the same.

I tried that first, but during installation it gave me missing-driver issues!
Solaris didn't!
 
I would not update but reinstall:
- install ESXi 5 and activate pass-through to SAS controller, reboot
- install OpenIndiana 151 live with pass-through adapter, set to autostart first
(use ESXi filebrowser to copy boot-iso to local datastore, use it as virtual DVD file)
- install vmware-tools
- install napp-it via wget, set root pw, reboot
- import datapool

- import the NFS-shared data folder in ESXi
- start the ESXi filebrowser and import VMs (right-click on the .vmx file);
confirm "moved" when asked whether the VM was copied or moved

Up and running in 30 minutes to an hour!
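For reference, the "install napp-it via wget" step above is the online installer one-liner from napp-it.org (run as root):

Code:
# downloads the napp-it setup script and runs it; the web GUI then listens on port 81
wget -O - www.napp-it.org/nappit | perl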

Thanks, seems like a plan. Can I swap my controller card first (Adaptec 6-series 6405) and then do the reinstall on the same RAID 10? ESXi 4.1 did boot, but it didn't find my datastores and therefore my VMs.
 
Has anyone here installed their OS onto a 4K-aligned drive? Searching around on the internet reveals people working with zpools, but nothing to do with OS installs. I guess I need to manually align it (somehow).

Not sure if this was related to my post, but I came across this today while searching for how to create vdevs with ashift=12. This time I need to write it down for whenever I upgrade again in a year.

Note: do not apply this to the syspool, yet. The patches for grub to understand ashift=12 are just now being putback into illumos, so the grub that is delivered in NexentaStor 3.x cannot boot a system with ashift=12. -- richard

Link: http://www.nexentastor.org/boards/1/topics/2892
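To check what ashift a pool actually got, zdb can print it per top-level vdev; a quick check, assuming your pool is named "tank":

Code:
# ashift: 9 = 512-byte sectors, ashift: 12 = 4K sectors
zdb -C tank | grep ashift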
 
Having an issue with this setup: if I write a file via SMB, I can't read/access it via NFS.

I am assuming this has something to do with the default SMB settings in napp-it and the ACL maps...?

What is the easiest, most direct way to fix this?
 
Having an issue with this setup: if I write a file via SMB, I can't read/access it via NFS.

I am assuming this has something to do with the default SMB settings in napp-it and the ACL maps...?

What is the easiest, most direct way to fix this?

Problem:
the CIFS server is ACL-only;
NFSv3 is Unix-permission-only. Permission is granted either based on everyone@ or via root=@host in the share settings.

The easiest way: set the ACL of the SMB-shared folder to everyone@ = full, or modify it with inheritance = on;
or NFS-share it with root permissions for your host, like root=@<your network>/24 (OpenIndiana; not working on Solaris 11).

For files that were already created, you may need to reset permissions.
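Done by hand, the first option looks roughly like this (pool/folder name "tank/data" is a placeholder):

Code:
# ACL: everyone@ = full set, inherited to new files and folders
/usr/bin/chmod -R A=everyone@:full_set:fd:allow /tank/data
# plain NFS share, access granted based on everyone
zfs set sharenfs=on tank/data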
 
Sorry to repost this, but...

OK. Dumb question: why is my pool so much bigger than the filesystem?


Code:
root@nas5:~# zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  9.06T  4.56T  4.51T    50%  1.00x  ONLINE  -

root@nas5:~# zfs list -r tank
NAME             USED  AVAIL  REFER  MOUNTPOINT
tank            2.70T  2.58T   334K  /tank
tank/rootsnaps  11.8G  2.58T  11.8G  /tank/rootsnaps
tank/snapshots   356G  2.58T   356G  /tank/snapshots
tank/work       2.34T  2.58T  2.34T  /tank/work


I got this alert message from napp-it, and I'm not really sure why:

Code:
Alert/ Error on nas5 from  :

-disk errors: none

------------------------------
zpool list
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool1  29.8G  4.10G  25.6G    13%  1.00x  ONLINE  -
tank    9.06T  4.56T  4.51T    50%  1.00x  ONLINE  -

Pool capacity from zfs list
NAME	USED	AVAIL	MOUNTPOINT	%
rpool1	6.22G	23.1G	/rpool1	79%
rpool1@0102	0	-	-	%!
tank	3.23T	2.58T	/tank	44%
 
1. Dumb question: why is my pool so much bigger than the filesystem?

- zpool list reports the sum of all pool disks,
e.g. if you have 3 x 1 TB disks: 3 TB (regardless of the RAID level)

- zfs list reports the usable space for the RAID level,
e.g. if you have a RAID-Z1 of 3 x 1 TB disks:
3 x 1 TB = 3 TB - 1 TB redundancy = 2 TB usable

2. about the alert:
if free ZFS capacity is below 15%, you get cap-alerts;
or it's a quite old alert script with a bug that resends multiple alerts: delete and recreate the job
 
Ah, thanks. That's a much more concise way to put it than what I've been reading.

It's a RAID-Z2 of 5 x 2 TB disks though, so shouldn't that be closer to 6 TB available (instead of the 2.58) for the filesystem?
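Actually, trying to work it through with my own numbers (roughly; ZFS also keeps some space for metadata):

Code:
5 x 2 TB raidz2 = 2 parity disks + 3 data disks
raw size (zpool list):  9.06 TiB
usable   (zfs list):    ~9.06 x 3/5 = ~5.4 TiB
USED + AVAIL:           2.70 + 2.58 = 5.28 TiB  -> AVAIL is what's left, not the total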
 
Some months ago, I created a pool with 1 vdev (5-disk raidz). I recently purchased 5 more of the same drives as I was running out of space. For the first vdev, I honestly don't remember what I did (patched zpool, or exported the pool from ZFSguru) to get ashift=12 for my Samsung F4s. I went ahead and added a new vdev, thinking that since the pool was already configured with that sector size, all vdevs would be as well. I was wrong, and found out that there is no way to remove a vdev from a pool, even though I hadn't added any new data to it.

Luckily, I had enough spare drives lying around to transfer all of my movies (it would have taken 2 weeks or more to put them back on...whew!). I backed everything up and recreated the pool, this time with the patched zpool for OI. Once I figured out how to use it, I transferred my data back onto the pool with 1 vdev (I couldn't create 2 vdevs, as I was using one of the hard drives from the second vdev to back up the data originally on the server). I then, with the same patched zpool, did zpool add -f poolname raidz <disks>

and it worked!
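Spelled out with placeholder device IDs, that add step was along the lines of:

Code:
# append a second 5-disk raidz vdev to the existing pool
# (-f forces past warnings such as the sector-size mismatch)
zpool add -f tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0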

I was curious, though: since I created it manually, I did not get this thing offered in the GUI called overflow protection.

about overflow protection

It's always bad if you fill your pool up to 100% with data.
Overflow protection sets a 10% reservation on the pool dataset itself.

The available space for your ZFS folders is then 90%.
That means if you fill your folders up to the max, 10% of the pool always remains free.
You can check/modify that at any time in menu ZFS folder under reservation.

Gea

1) I was wondering if this could be added later

I was also searching for this. Now that I have 2 vdevs, with the first vdev already having some data on it, I was wondering how this dynamic striping across vdevs works. I think I may have found the answer:

Dynamic Striping in a Storage Pool
ZFS dynamically stripes data across all top-level virtual devices. The decision about where to place data is done at write time, so no fixed-width stripes are created at allocation time.

When new virtual devices are added to a pool, ZFS gradually allocates data to the new device in order to maintain performance and disk space allocation policies.

Source

I also found this answer:

ZFS will favour writing to newer/emptier vdevs, and (over time) will balance the writes across all the vdevs. There's no way, that I know of yet, to force it to rebalance the data across all the vdevs to spread the load onto new vdevs.

Source

So I will check over time to see how well it is balancing across the vdevs with this command:

zpool iostat -v poolname

I was just concerned that since the first vdev is almost full, it would continue to write to it and max it out at 100% before moving to the new vdev, but it seems that's not the case. I guess that is why Gea reserves 10% of the pool.

Two more questions:

2) Was there an easier way to type in the hard drive names when doing zpool create or zpool add? Typing c1tXXXXXXXXXXXXXX got a little tedious.
3) If I were to upgrade my drives to 3 TB hard drives, I won't see the extra space until all hard drives in the vdev are replaced with the same capacity?

Thanks for your time.
 
I just did my second test with OI & napp-it on an ESXi box.

I have two other bare-metal installs that are working fine.

napp-it won't start; I can't access the web UI.

I did the same steps as before, except now it's in ESXi.

I tried two different VMs.

I can ping the OI install.

Not sure where to start...

I tried this:
/etc/init.d/napp-it start

openindiana2:~# /etc/init.d/napp-it start
/var/web-gui/tools/httpd/napp-it-mhttpd: started as root without requesting chroot(), warning only
perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/agent-init.pl
 
I just did my second test with OI & napp-it on an ESXi box.

I have two other bare-metal installs that are working fine.

napp-it won't start; I can't access the web UI.

I did the same steps as before, except now it's in ESXi.

I tried two different VMs.

I can ping the OI install.

Not sure where to start...

I tried this:
/etc/init.d/napp-it start

openindiana2:~# /etc/init.d/napp-it start
/var/web-gui/tools/httpd/napp-it-mhttpd: started as root without requesting chroot(), warning only
perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/agent-init.pl

The startup message is OK.
Check whether the napp-it webserver is running: ps axw | grep mhttpd
Check the network config: ifconfig -a
Check the ESXi virtual switch, whether the virtual NIC is connected to your physical NIC.
 
1) I was wondering if this (10% reservation) could be added later
2) Was there an easier way to type in the hard drive names when doing zpool create or zpool add? Typing c1tXXXXXXXXXXXXXX got a little tedious.
3) If I were to upgrade my drives to 3 TB hard drives, I won't see the extra space until all hard drives in the vdev are replaced with the same capacity?


1. Yes, add a reservation to the pool dataset:
in napp-it, go to menu ZFS folder, click on RES/RFRES and add a reservation in the (pool) line.
(overflow protection = a 10% space reservation on the pool dataset; all other datasets can then eat at most 90% of total space)

2. No.

3. Yes; if the autoexpand property is set to on, you will see the extra space once all disks in a vdev are replaced.
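In shell terms, answers 1 and 3 would look something like this (pool name "tank" and sizes are placeholders; the RES/RFRES menu corresponds to the reservation/refreservation properties):

Code:
# overflow protection by hand: hold ~10% of a 10 TB pool back from all
# ZFS folders via a refreservation on the (nearly empty) pool dataset
zfs set refreservation=1T tank
# grow the pool automatically once every disk in a vdev has been replaced
zpool set autoexpand=on tank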
 
One question regarding OpenIndiana / Solaris: when I reboot or install my system and then SSH into it, it takes around 3-5 seconds to connect the first time (for it to ask for the password); after that it is instant until reboot, and the first time after a reboot there is always that 3-5 second delay. Is this normal?

Cheers
 
It seems I might be the only person with this issue, but...

When using the newest version of napp-it, 0.6p, smartinfo cannot be read. It looks like it can't read the smart_type field...

id cap pool vdev state error smart_model smart_type smart_health smart_sn smart_temp
c7t0d0 2.00 TB ninkasi raidz ONLINE Error: S:0 H:30 T:0 WDC WD20EARS-00MVWB0 - UNKNOWN! WDWCAZA4870234 -

When I activate 0.6k it works just fine:
id cap pool vdev state error product sn smart_type smart_health smart_sn smart_temp
c7t0d0 2.00 TB ninkasi raidz ONLINE Error: S:0 H:60 T:0 WDC WD20EARS-00M WD-WCAZA4870234 disk OK WD-WCAZA4870234 32 C

Any ideas what is wrong?
 
It seems I might be the only person with this issue, but...

When using the newest version of napp-it, 0.6p, smartinfo cannot be read. It looks like it can't read the smart_type field...

id cap pool vdev state error smart_model smart_type smart_health smart_sn smart_temp
c7t0d0 2.00 TB ninkasi raidz ONLINE Error: S:0 H:30 T:0 WDC WD20EARS-00MVWB0 - UNKNOWN! WDWCAZA4870234 -

When I activate 0.6k it works just fine:
id cap pool vdev state error product sn smart_type smart_health smart_sn smart_temp
c7t0d0 2.00 TB ninkasi raidz ONLINE Error: S:0 H:60 T:0 WDC WD20EARS-00M WD-WCAZA4870234 disk OK WD-WCAZA4870234 32 C

Any ideas what is wrong?

The behaviour of smartmontools differs between disks and controllers.
The current setting uses the smartctl -a parameter to get nearly all properties, while the former version tried to detect single properties (see changes: http://napp-it.org/downloads/changelog_en.html).

You may look at the script /var/web-gui/data/napp-it/zfsos/_lib/get-disk-smart.pl, where detection is done. You may also create a private menu with an action, in the simplest case something like
print &exe("smartctl with desired parameters"); to get different outputs. (napp-it is a tool like a CMS for a web GUI; you can add your own add-ons.) Extend it.

It would also be helpful if you added info about your OS and controller.
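Such a private-menu action can, in the simplest case, be a single line; the smartctl parameters here are only an example:

Code:
print &exe("/usr/sbin/smartctl -a -d sat /dev/rdsk/c7t0d0");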
 
The behaviour of smartmontools differs between disks and controllers.
The current setting uses the smartctl -a parameter to get nearly all properties, while the former version tried to detect single properties (see changes: http://napp-it.org/downloads/changelog_en.html).

You may look at the script /var/web-gui/data/napp-it/zfsos/_lib/get-disk-smart.pl, where detection is done. You may also create a private menu with an action, in the simplest case something like
print &exe("smartctl with desired parameters"); to get different outputs. (napp-it is a tool like a CMS for a web GUI; you can add your own add-ons.) Extend it.

It would also be helpful if you added info about your OS and controller.

I'm using Solaris 11 Express. I'm not sure which controller; I have an HP MicroServer N36L with a modified BIOS so I can use all SATA ports, including eSATA, at full speed. They are all onboard on the AMD chipset.

Looking at get-disk-smart.pl, I believe this is the command it is running:
Code:
root@ninkasi:~# /usr/sbin/smartctl -a -d sat -T permissive /dev/rdsk/c7t0d0
smartctl 5.42 2011-10-20 r3458 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Green (Adv. Format)
Device Model:     WDC WD20EARS-00MVWB0
Serial Number:    WD-WCAZA4870234
LU WWN Device Id: 5 0014ee 25aec047e
Firmware Version: 51.0AB51
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Tue Jan 10 16:35:23 2012 MST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

Error SMART Values Read failed: scsi error aborted command
Smartctl: SMART Read Values failed.

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: UNKNOWN!
SMART Status, Attributes and Thresholds cannot be read.

Read SMART Log Directory failed.

Error SMART Error Log Read failed: scsi error aborted command
Smartctl: SMART Error Log Read Failed
Error SMART Error Self-Test Log Read failed: scsi error aborted command
Smartctl: SMART Self Test Log Read Failed
Device does not support Selective Self Tests/Logging

I think it skips over this command because it isn't seeing "Identity Failed"
Code:
root@ninkasi:~# /usr/sbin/smartctl -a -d scsi -T permissive /dev/rdsk/c7t0d0
smartctl 5.42 2011-10-20 r3458 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

User Capacity:        2,000,398,934,016 bytes [2.00 TB]
Logical block size:   512 bytes
Serial number:             WD-WCAZA4870234
Device type:          disk
Local Time is:        Tue Jan 10 16:37:15 2012 MST
Device supports SMART and is Enabled
Temperature Warning Disabled or Not Supported
SMART Health Status: OK

Current Drive Temperature:     31 C
Manufactured in week 00 of year 0000
Specified cycle count over device lifetime:  100
Accumulated start-stop cycles:  0

Error Counter logging not supported
No self-tests have been logged
 
Shouldn't I at least be able to access the UI locally on the OI box?

My ESXi vnetwork config is all stock.


@openindiana2:~$ ps axw | grep mhttpd
2167 ? S 0:00 /var/web-gui/tools/httpd/napp-it-mhttpd -c **.pl -u napp-it -d /var/web-gui/www
4319 pts/2 S 0:00 grep mhttpd
@openindiana2:~$ ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
inet 192.168.1.75 netmask ffffff00 broadcast 192.168.1.255
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
e1000g0: flags=20002004841<UP,RUNNING,MULTICAST,DHCP,IPv6> mtu 1500 index 4
inet6 fe80::20c:29ff:fe17:ac0a/10
e1000g1: flags=20002004841<UP,RUNNING,MULTICAST,DHCP,IPv6> mtu 1500 index 6
inet6 fe80::20c:29ff:fe17:ac14/10
@openindiana2:~$
 
I think it skips over this command because it isn't seeing "Identity Failed"

In get-disk-smart.pl, lines 82, 93 and 101, I may change

if ($r=~/Identity Failed/is) { to
if (!($r=~/=== START/s)) {

to solve this problem.
Could you try it to confirm?
 
In get-disk-smart.pl, lines 82, 93 and 101, I may change

if ($r=~/Identity Failed/is) { to
if (!($r=~/=== START/s)) {

to solve this problem.
Could you try it to confirm?


That didn't work...still shows UNKNOWN! for me.
Code:
#      if ($r=~/Identity Failed/is) {
      if (!($r=~/=== START/s)) {
           $r=`sudo /usr/sbin/smartctl -a -d scsi -T permissive /dev/rdsk/$disk`;
           # print "--------------- scsi? -> $r\n";

           $type="scsi";
           $k="$id\_smart_dtype";
           $disk{$k}="$type";
      }


      ##
#      if ($r=~/Identity Failed/is) {
      if (!($r=~/=== START/s)) {
           $r=`sudo /usr/sbin/smartctl -a -d ata -T permissive /dev/rdsk/$disk`;
           # print "--------------- ata? -> $r\n";
           $type="ata";
           $k="$id\_smart_dtype";
           $disk{$k}="$type";
      }
      ##
#      if ($r=~/Identity Failed/is) {
      if (!($r=~/=== START/s)) {
           $k="$id\_smart_dtype";
           $disk{$k}="";
           next;         # no success
      }
 
That didn't work...still shows UNKNOWN! for me.

Please replace line 83:
$r=`sudo /usr/sbin/smartctl -a -d sat -T permissive /dev/rdsk/$disk`; with
$r=`sudo /usr/sbin/smartctl -a -d sat,12 -T permissive /dev/rdsk/$disk`;
 
Please replace line 83:
$r=`sudo /usr/sbin/smartctl -a -d sat -T permissive /dev/rdsk/$disk`; with
$r=`sudo /usr/sbin/smartctl -a -d sat,12 -T permissive /dev/rdsk/$disk`;

I *think* you mean line 74, and that seemed to fix it (I still have the changes from before; should I revert them?). smart_type is still blank, but I don't think that matters?

id cap pool vdev state error smart_model smart_type smart_health smart_sn smart_temp
c7t0d0 2.00 TB ninkasi raidz ONLINE Error: S:1 H:750 T:0 WDC WD20EARS-00MVWB0 - PASSED WDWCAZA4870234 32 °C

Also, when I click one of the smart_sn for a detailed view, it fails:

Code:
Smart details of disk c7t0d0 WDC WD20EARS-00MVWB0
smartctl 5.42 2011-10-20 r3458 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Green (Adv. Format)
Device Model:     WDC WD20EARS-00MVWB0
Serial Number:    WD-WCAZA4870234
LU WWN Device Id: 5 0014ee 25aec047e
Firmware Version: 51.0AB51
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Wed Jan 11 10:19:55 2012 MST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

Error SMART Values Read failed: scsi error aborted command
Smartctl: SMART Read Values failed.

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: UNKNOWN!
SMART Status, Attributes and Thresholds cannot be read.

Read SMART Log Directory failed.

Error SMART Error Log Read failed: scsi error aborted command
Smartctl: SMART Error Log Read Failed
Error SMART Error Self-Test Log Read failed: scsi error aborted command
Smartctl: SMART Self Test Log Read Failed
Device does not support Selective Self Tests/Logging
 
@Gea

Just wondering if napp-it will support power management with Solaris 11?
Do I need to change to OpenIndiana, or will power management with disk spindown be supported in a future release?

thanks
 
@Gea

Just wondering if napp-it will support power management with Solaris 11?
Do I need to change to OpenIndiana, or will power management with disk spindown be supported in a future release?

thanks

That depends on Oracle, or on someone publishing a working howto.
Power management in Solaris 11 seems to be somehow unfinished.
 
1. Yes, add a reservation to the pool dataset:
in napp-it, go to menu ZFS folder, click on RES/RFRES and add a reservation in the (pool) line.
(overflow protection = a 10% space reservation on the pool dataset; all other datasets can then eat at most 90% of total space)

2. No.

3. Yes; if the autoexpand property is set to on, you will see the extra space once all disks in a vdev are replaced.

Thanks. Is there any reason why I should leave it at its default = off?
 
Thanks. Is there any reason why I should leave it at its default = off?

You should not fill filesystems above 90%, or they may become very slow.
Without a reservation, you should not;
with a reservation, you cannot.
 
You should not fill filesystems above 90%, or they may become very slow.
Without a reservation, you should not;
with a reservation, you cannot.


Oops, my question was in regard to autoexpand. Is there any reason why I should leave it off (the default)?

Also, with the 10% reservation property, napp-it wants me to give the size in GB/MB rather than a percentage. Does that mean when I add more 2 TB hard drives, or actually upgrade a vdev to a bigger set of hard drives, I would need to somehow remove the old reservation and give it a new size?

TIA
 
Oops, my question was in regard to autoexpand. Is there any reason why I should leave it off (the default)?

Also, with the 10% reservation property, napp-it wants me to give the size in GB/MB rather than a percentage. Does that mean when I add more 2 TB hard drives, or actually upgrade a vdev to a bigger set of hard drives, I would need to somehow remove the old reservation and give it a new size?

TIA

Leave it off if you do not want the pool to autoexpand when inserting larger disks.
The reservation is set as an absolute size, not relative, but you can modify it at any time.
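After the pool grows, you would simply set a new absolute value, e.g. (placeholder sizes, assuming the reservation sits on the pool dataset as above):

Code:
zfs get refreservation tank        # check the current value
zfs set refreservation=1.2T tank   # re-apply ~10% of the new pool size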
 
OK. M1015 flashed to 9240, but still not seeing spindown.

I've disabled fmd as suggested here: http://www.nexenta.org/boards/1/topics/1414

I have autopm enabled. Here's my power.conf:
Code:
#
# Copyright 1996-2002 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
#pragma ident   "@(#)power.conf 2.1     02/03/04 SMI"
#
# Power Management Configuration File
#

device-dependency-property removable-media /dev/fb
autopm                  enable
autoS3                  default
cpu-threshold           1s
# Auto-Shutdown         Idle(min)       Start/Finish(hh:mm)     Behavior
autoshutdown            30              9:00 9:00               noshutdown
cpupm  enable
device-thresholds         /dev/dsk/c3t0d0     120s 
device-thresholds         /dev/dsk/c2t0d0     120s 
device-thresholds         /dev/dsk/c5t9d1     120s 
device-thresholds         /dev/dsk/c4t9d1     120s 
device-thresholds         /dev/dsk/c5t10d1     120s 
device-thresholds         /dev/dsk/c4t10d1     120s 
device-thresholds         /dev/dsk/c5t11d1     120s 
device-thresholds         /dev/dsk/c4t11d1     120s

I looked at powertop in napp-it, and it seems something is keeping the system awake. Any help here?
Code:
powertop: battery kstat not found (-1)
OpenIndiana PowerTOP version 1.2   (C) 2009 Intel Corporation

Collecting data for 5.00 second(s) 
C-states (idle power)	Avg	Residency
C0 (cpu running)		(0.0%)
C1			2.7ms	(100.0%)
P-states (frequencies)
3092 Mhz	100.0%
Wakeups-from-idle per second: 795.8	interval: 5.0s
Top causes for wakeups:
51.0% (406.0)               sched : unix`dtrace_xcall_func                                 
15.7% (124.6)             :genunix`cv_wakeup                                               
12.6% (100.0)             :genunix`clock                                                   
 6.3% ( 50.0)             :SDC`sysdc_update                                                
 1.4% ( 11.2)          :mpt#0                                                           
 1.3% ( 10.0)             :ata`ghd_timeout                                                 
 0.5% (  4.0)             :genunix`schedpaging                                             
 0.5% (  3.6)          :e1000g#0                                                        
 0.3% (  2.6)          :vmxnet3s#0                                                      
 0.3% (  2.0)             :imr_sas`io_timeout_checker                                      
 0.1% (  1.0)             :TS`ts_update                                                    
 0.1% (  1.0)             :e1000g`e1000g_local_timer                                       
 0.0% (  0.2)             :mpt`mpt_watch                                                   
 0.0% (  0.2)             :genunix`delay_wakeup                                            
 0.0% (  0.2)             :kcf`rnd_handler                                                 
no ACPI power usage estimate available

Wahhh, why are there so many things keeping my system awake??? And why is my CPU at 3.1 GHz consistently? Shouldn't it ramp down? EIST is enabled in the BIOS. I'm running ESXi 5.0 though, but it should support my i5-2400, right?
 
_Gea:

I have five 2 TB Seagate Barracuda Coolspins. I was thinking of doing a single vdev with RAIDZ1 + 1 hot spare. That would have given me 6 TB of storage.

However, for the short term I really do not need that much storage.

Would you recommend that I build 2 vdevs with a 2-disk mirror each, instead of a single (4-disk RAIDZ1) + 1 hot spare? I presume this will give me much better read/write performance. This will primarily be used for backing up individual desktops and storing home photos and videos.

Is it possible to designate a hot spare shared between two mirrored vdevs (assuming it will be consumed by whichever vdev fails)?
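From the zpool create syntax, a pool-wide spare would be declared like this (placeholder device names):

Code:
# two mirror vdevs plus one hot spare shared by the whole pool
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 spare c0t4d0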

I have an external 3 TB USB 3.0 drive which I plan to connect to take snapshot backups of the entire system.

Also, does OI support consumer-grade Blu-ray burning drives?

Also, how difficult would it be to go from a non-virtualized to a virtualized server after I have set everything up?
 
Hiya,

I have been using Solaris 11 Express for a while now, but now that the full Solaris 11 is out, I thought about upgrading.

However, the one thing native Solaris doesn't have is nice management tools, like resource monitoring etc.

So I've been thinking about using another build of ZFS, but I'm not sure which one to go with. Nexenta Community looks really good regarding the web GUI and monitoring, but it's limited to 18 TB raw excluding spares, logs, cache drives etc., and I'm at 24 TB raw excluding spares, logs, cache drives etc. Can you get a buyable community edition that expands the storage allowed?

ZFSguru also looks good, but it's still only beta and I don't want to be upgrading every 5 minutes.

Other than those 2, what else is there that has a fairly recent ZFS build? Due to the RAM limitations in my machine (maxed at 8 GB), I think dedupe is a no-no, so whatever build I go with doesn't need this (which isn't a problem, as my stuff isn't really duplicated anyway).
 
But isn't that quite an old build of ZFS?

Just saw it's now at b151a (same as S11E), but it says it's a development release?
 
How old is old? The important thing is the ZFS and pool versions. OI supports zfs version 5 and pool version 28. All they are missing (AFAIK) is encryption.
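You can list what a given build supports with:

Code:
zpool upgrade -v   # supported pool versions
zfs upgrade -v     # supported zfs (filesystem) versions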
 
Is it possible to change the web root for napp-it?

For example, I'm using an Apache reverse proxy and I'm trying to map:

http://myserver.com/zfs/
to
http://internal-fileserver:81/

...but I can't get it to work because napp-it seems to use absolute URLs, so when I access /zfs/cgi-bin/napp-it/admin.pl, it removes the /zfs/ root and tries to access /cgi-bin/, which doesn't exist on my reverse proxy...
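My proxy config is essentially this (a sketch; hostnames as above):

Code:
# ProxyPassReverse rewrites redirect headers, but not the absolute
# paths inside napp-it's HTML, which is where it breaks
ProxyPass        /zfs/ http://internal-fileserver:81/
ProxyPassReverse /zfs/ http://internal-fileserver:81/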
 
Yo,

As Solaris 11 uses a different way of doing power management and my drives aren't spinning down, I decided to install OpenIndiana 151.
The only problem is that I have a missing-driver problem for my Supermicro X9SCM-F's Intel Gigabit NIC, the 82579LM.
Any ideas to get it working?

thanks
 