OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Hi,

I have 8 disks in my raidz2 OpenIndiana/napp-it machine.
I created the pool a while ago, starting with a mix of 1 TB and 1.5 TB drives. Now there are only 2 TB drives in the system, i.e. 16 TB in total, with an ashift of 12.

If I click on Pools/Volumes in napp-it, it reports:
size 14.5TB, alloc 9.19T, Fres 100G, free 5.31T. The volume exported via CIFS to a Windows machine reports as a 10 TB drive, and df -h gives a size of 11 TB.

I think something went wrong with the auto-expansion.
Exporting and importing the pool did not help, nor did running the following on each of the eight disks: zpool offline pool disk followed by zpool online -e pool disk.

Does anyone have ideas on how to correct this?

Many thanks

Below you will find the output of zpool status -v zfs:

pool: zfs
state: ONLINE
scan: resilvered 36K in 0h0m with 0 errors on Tue Dec 18 09:57:40 2012
config:

NAME STATE READ WRITE CKSUM
zfs ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
c3t50014EE25C418F1Ad0 ONLINE 0 0 0
c3t50014EE2B0618343d0 ONLINE 0 0 0
c3t50014EE25C47C37Ad0 ONLINE 0 0 0
c3t50014EE25C480C88d0 ONLINE 0 0 0
c3t50014EE204251903d0 ONLINE 0 0 0
c3t50014EE2B19D9A4Ed0 ONLINE 0 0 0
c3t50014EE2B19DB827d0 ONLINE 0 0 0
c3t50014EE25C418DE1d0 ONLINE 0 0 0

errors: No known data errors

Pool details: zdb -C zfs

MOS Configuration:
version: 5000
name: 'zfs'
state: 0
txg: 1727969
pool_guid: 11005383116779722236
hostid: 304245
hostname: 'ZFS-Indiaan'
vdev_children: 1
vdev_tree:
type: 'root'
id: 0
guid: 11005383116779722236
children[0]:
type: 'raidz'
id: 0
guid: 13483852061631947782
nparity: 2
metaslab_array: 30
metaslab_shift: 36
ashift: 12
asize: 16003083796480
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 14865855767456254650
path: '/dev/dsk/c3t50014EE25C418F1Ad0s0'
devid: 'id1,sd@n50014ee25c418f1a/a'
phys_path: '/scsi_vhci/disk@g50014ee25c418f1a:a'
whole_disk: 1
DTL: 199
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 5318508995038236104
path: '/dev/dsk/c3t50014EE2B0618343d0s0'
devid: 'id1,sd@n50014ee2b0618343/a'
phys_path: '/scsi_vhci/disk@g50014ee2b0618343:a'
whole_disk: 1
DTL: 112
create_txg: 4
children[2]:
type: 'disk'
id: 2
guid: 9354125030098489434
path: '/dev/dsk/c3t50014EE25C47C37Ad0s0'
devid: 'id1,sd@n50014ee25c47c37a/a'
phys_path: '/scsi_vhci/disk@g50014ee25c47c37a:a'
whole_disk: 1
DTL: 109
create_txg: 4
children[3]:
type: 'disk'
id: 3
guid: 17186569887794530896
path: '/dev/dsk/c3t50014EE25C480C88d0s0'
devid: 'id1,sd@n50014ee25c480c88/a'
phys_path: '/scsi_vhci/disk@g50014ee25c480c88:a'
whole_disk: 1
DTL: 108
create_txg: 4
children[4]:
type: 'disk'
id: 4
guid: 17551212392206544114
path: '/dev/dsk/c3t50014EE204251903d0s0'
devid: 'id1,sd@n50014ee204251903/a'
phys_path: '/scsi_vhci/disk@g50014ee204251903:a'
whole_disk: 1
DTL: 118
create_txg: 4
children[5]:
type: 'disk'
id: 5
guid: 17895904653115511844
path: '/dev/dsk/c3t50014EE2B19D9A4Ed0s0'
devid: 'id1,sd@n50014ee2b19d9a4e/a'
phys_path: '/scsi_vhci/disk@g50014ee2b19d9a4e:a'
whole_disk: 1
DTL: 106
create_txg: 4
children[6]:
type: 'disk'
id: 6
guid: 17979516671017777390
path: '/dev/dsk/c3t50014EE2B19DB827d0s0'
devid: 'id1,sd@n50014ee2b19db827/a'
phys_path: '/scsi_vhci/disk@g50014ee2b19db827:a'
whole_disk: 1
DTL: 105
create_txg: 4
children[7]:
type: 'disk'
id: 7
guid: 10969959642944177546
path: '/dev/dsk/c3t50014EE25C418DE1d0s0'
devid: 'id1,sd@n50014ee25c418de1/a'
phys_path: '/scsi_vhci/disk@g50014ee25c418de1:a'
whole_disk: 1
DTL: 111
create_txg: 4
features_for_read:
 
And here is the pool history:

zpool history

History for 'zfs':


2012-06-08.23:30:46 zpool create -f zfs raidz2 c2t50014EE0019F6CA8d0 c2t50014EE257E8C231d0 c2t50014EE25C47C37Ad0 c2t50014EE25C480C88d0 c2t50014EE2AD3E3A55d0 c2t50014EE2B19D9A4Ed0 c2t50014EE2B19DB827d0 c2t50024E900347A71Cd0


2012-06-08.23:30:46 zfs set reservation=0.51T zfs


2012-06-08.23:30:51 zfs set refreservation=0.51T zfs


2012-06-08.23:34:28 zfs create -o utf8only=on -o normalization=formD -o snapdir=hidden -o casesensitivity=insensitive -o nbmand=off -o sharesmb=off -o atime=off -o compression=off zfs/zfs_z2


2012-06-08.23:34:28 zfs set sharesmb=name=zfs_z2,guestok=true zfs/zfs_z2


2012-06-08.23:34:28 zfs set aclinherit=passthrough zfs/zfs_z2


2012-06-08.23:34:33 zfs set aclmode=passthrough zfs/zfs_z2


2012-06-09.17:34:53 zpool scrub zfs


2012-06-09.20:55:35 zpool replace zfs c2t50014EE257E8C231d0 c2t50014EE2B0618343d0


2012-06-10.00:30:06 zpool replace zfs c2t50014EE2AD3E3A55d0 c2t50014EE204251903d0


2012-06-10.04:15:18 zpool set autoexpand=on zfs


2012-06-10.04:27:32 zpool set autoexpand=off zfs


2012-06-10.04:28:25 zpool set autoexpand=on zfs


2012-06-10.04:29:01 zpool set autoexpand=off zfs


2012-06-10.04:30:52 zpool set autoexpand=on zfs


2012-06-10.13:50:34 zfs set compression=lzjb zfs


2012-06-10.13:50:49 zfs set compression=lzjb zfs/zfs_z2


2012-06-10.13:51:01 zfs set compression=on zfs/zfs_z2


2012-06-10.13:51:58 zfs set compression=lzjb zfs/zfs_z2


2012-06-10.13:52:49 zfs set compression=on zfs/zfs_z2


2012-06-10.14:12:22 zfs set compression=gzip zfs/zfs_z2


2012-06-10.14:13:35 zfs set compression=gzip-9 zfs/zfs_z2


2012-06-10.14:14:12 zfs set compression=on zfs/zfs_z2


2012-06-10.14:14:46 zfs set compression=gzip-9 zfs/zfs_z2


2012-06-10.14:15:09 zfs set compression=on zfs/zfs_z2


2012-06-10.14:17:12 zfs set compression=gzip zfs/zfs_z2


2012-06-10.14:19:21 zfs set compression=on zfs/zfs_z2


2012-06-13.12:12:31 zfs set compression=off zfs


2012-06-13.12:36:48 zfs set compression=on zfs


2012-06-13.19:27:49 zpool replace zfs c2t50014EE0019F6CA8d0 c2t50014EE25C418F1Ad0


2012-06-14.01:56:07 zfs set sharenfs=on zfs/zfs_z2


2012-06-14.02:10:19 zfs set sharenfs=off zfs/zfs_z2


2012-06-14.10:39:07 zpool replace zfs c2t50024E900347A71Cd0 c2t50014EE25C418DE1d0


2012-06-15.21:03:40 zfs set reservation=none zfs


2012-06-15.21:04:39 zfs set refreservation=none zfs


2012-06-15.21:05:24 zfs set reservation=522G zfs


2012-06-15.21:05:57 zfs set refreservation=522G zfs


2012-06-15.21:06:22 zfs set reservation=none zfs


2012-06-15.21:06:45 zfs set refreservation=none zfs


2012-06-15.21:24:43 zfs set refreservation=512G zfs


2012-06-15.21:25:14 zfs set refreservation=100G zfs


2012-06-15.22:03:17 zpool scrub zfs


2012-06-15.23:00:36 zfs set sharenfs=on zfs/zfs_z2


2012-06-16.01:00:46 zpool set autoexpand=off zfs


2012-06-16.01:00:56 zpool set autoexpand=on zfs


2012-06-16.01:03:41 zfs set aclinherit=passthrough zfs/zfs_z2


2012-06-16.03:44:37 zfs set aclinherit=passthrough zfs/zfs_z2


2012-06-16.03:46:55 zfs set aclinherit=passthrough zfs/zfs_z2


2012-06-16.03:47:24 zfs set aclinherit=passthrough zfs/zfs_z2


2012-06-16.17:39:29 zfs set sharenfs=rw zfs/zfs_z2


2012-06-17.03:00:19 zpool scrub zfs


2012-06-17.12:42:46 zfs set aclinherit=passthrough zfs/zfs_z2


2012-06-17.12:56:51 zpool import -f 11005383116779722236


2012-06-17.12:56:52 zfs set nms:dedup-dirty=off zfs


2012-06-17.12:56:53 zfs set nms:dedup-dirty=off zfs/zfs_z2


2012-06-17.12:56:55 zfs set mountpoint=/volumes/zfs zfs


2012-06-17.12:57:00 zpool set failmode=continue zfs


2012-06-17.12:57:04 zfs inherit -r mountpoint zfs/zfs_z2


2012-06-17.13:11:30 zfs create -o reservation=100M -o mountpoint=none zfs/.nza-reserve


2012-06-17.13:42:18 zfs set sharenfs=anon=nfs,sec=none,rw=* zfs/zfs_z2


2012-06-17.13:42:23 zfs set sharenfs=anon=nfs,sec=none,rw=* zfs/zfs_z2


2012-06-17.13:44:02 zfs set sharenfs=on zfs/zfs_z2


2012-06-17.13:44:07 zfs set sharenfs=on zfs/zfs_z2


2012-06-17.13:49:07 zfs inherit -r sharenfs zfs/zfs_z2


2012-06-17.13:49:07 zfs set sharenfs=on zfs/zfs_z2


2012-06-17.13:49:07 zfs inherit -r sharenfs zfs/zfs_z2


2012-06-17.13:49:12 zfs set sharenfs=on zfs/zfs_z2


2012-06-17.15:10:40 zpool import -f 11005383116779722236 zfs


2012-06-17.15:10:43 zfs set mountpoint=/zfs zfs


2012-06-17.15:12:25 zpool set failmode=wait zfs


2012-06-17.15:50:19 zfs destroy -r -f zfs/.nza-reserve


2012-06-17.15:51:19 zfs set aclinherit=passthrough zfs/zfs_z2


2012-06-18.01:31:57 zfs set aclinherit=passthrough zfs/zfs_z2


2012-06-18.01:51:12 zfs set aclinherit=passthrough zfs/zfs_z2


2012-06-19.18:20:56 zfs set sharesmb=off zfs/zfs_z2


2012-06-19.18:21:25 zfs set sharesmb=name=zfs_z2 zfs/zfs_z2


2012-06-19.18:42:16 zfs set aclinherit=passthrough zfs/zfs_z2


2012-06-21.23:40:22 zfs set aclinherit=passthrough zfs/zfs_z2


2012-06-21.23:41:16 zfs set sharesmb=off zfs/zfs_z2


2012-06-21.23:41:27 zfs set sharesmb=name=zfs_z2,guestok=true zfs/zfs_z2


2012-06-22.14:13:22 zfs set aclinherit=passthrough zfs/zfs_z2


2012-06-24.03:00:13 zpool scrub zfs


2012-07-01.12:52:57 zpool scrub zfs


2012-07-08.03:00:05 zpool scrub zfs


2012-07-12.23:06:24 zpool upgrade zfs


2012-07-22.03:00:03 zpool scrub zfs


2012-07-29.03:00:03 zpool scrub zfs


2012-08-05.03:00:03 zpool scrub zfs


2012-08-19.03:00:03 zpool scrub zfs


2012-08-26.03:00:04 zpool scrub zfs


2012-09-02.03:00:04 zpool scrub zfs


2012-09-06.16:25:56 zpool upgrade zfs


2012-09-06.16:37:54 zpool scrub zfs


2012-09-16.23:00:10 zpool scrub zfs


2012-09-23.23:00:12 zpool scrub zfs


2012-09-30.23:00:10 zpool scrub zfs


2012-10-07.23:00:09 zpool scrub zfs


2012-10-14.06:37:45 zpool set autoreplace=on zfs


2012-10-14.06:38:15 zpool clear zfs


2012-10-14.23:00:10 zpool scrub zfs


2012-10-21.23:00:10 zpool scrub zfs


2012-11-04.23:00:10 zpool scrub zfs


2012-11-11.23:00:11 zpool scrub zfs


2012-11-18.23:00:11 zpool scrub zfs


2012-11-25.23:00:10 zpool scrub zfs


2012-12-16.23:00:10 zpool scrub zfs


2012-12-18.09:39:18 zpool autoexpand=on zfs


2012-12-18.09:43:33 zpool export -f zfs


2012-12-18.09:44:26 zpool import -f 11005383116779722236 zfs


2012-12-18.09:44:33 zfs set mountpoint=/zfs zfs


2012-12-18.09:52:56 zpool offline zfs c3t50014EE25C418F1Ad0


2012-12-18.09:53:18 zpool online -e zfs c3t50014EE25C418F1Ad0


2012-12-18.09:53:38 zpool offline zfs c3t50014EE2B0618343d0


2012-12-18.09:53:51 zpool online -e zfs c3t50014EE2B0618343d0


2012-12-18.09:54:09 zpool offline zfs c3t50014EE25C47C37Ad0


2012-12-18.09:54:39 zpool online -e zfs c3t50014EE25C47C37Ad0


2012-12-18.09:55:10 zpool offline zfs c3t50014EE25C480C88d0


2012-12-18.09:55:19 zpool online -e zfs c3t50014EE25C480C88d0


2012-12-18.09:55:38 zpool offline zfs c3t50014EE204251903d0


2012-12-18.09:55:50 zpool online -e zfs c3t50014EE204251903d0


2012-12-18.09:56:11 zpool offline zfs c3t50014EE2B19D9A4Ed0


2012-12-18.09:56:22 zpool online -e zfs c3t50014EE2B19D9A4Ed0


2012-12-18.09:56:41 zpool offline zfs c3t50014EE2B19DB827d0


2012-12-18.09:56:52 zpool online -e zfs c3t50014EE2B19DB827d0


2012-12-18.09:57:08 zpool offline zfs c3t50014EE25C418DE1d0


2012-12-18.09:57:40 zpool online -e zfs c3t50014EE25C418DE1d0


2012-12-18.10:17:07 zpool export -f zfs


2012-12-18.10:17:58 zpool import -f 11005383116779722236 zfs


2012-12-18.10:18:05 zfs set mountpoint=/zfs zfs


2012-12-18.10:24:20 zpool set autoexpand=on zfs


2012-12-18.10:24:36 zpool set autoexpand=off zfs


2012-12-18.10:24:38 zpool set autoexpand=on zfs
 
Hi,

I have 8 disks in my raidz2 OpenIndiana/napp-it machine.
I created the pool a while ago, starting with a mix of 1 TB and 1.5 TB drives. Now there are only 2 TB drives in the system, i.e. 16 TB in total, with an ashift of 12.

If I click on Pools/Volumes in napp-it, it reports:
size 14.5TB, alloc 9.19T, Fres 100G, free 5.31T. The volume exported via CIFS to a Windows machine reports as a 10 TB drive, and df -h gives a size of 11 TB.

I think something went wrong with the auto-expansion.
Exporting and importing the pool did not help, nor did running the following on each of the eight disks: zpool offline pool disk followed by zpool online -e pool disk.

Does anyone have ideas on how to correct this?

Many thanks

The size looks fine to me. 8x 2 TB gives you 16 TB of raw storage, which I think lines up with the 14.5 TB; you don't get a full 2 TB per drive because of the usual trick the hard drive makers use to inflate their sizes (drives are sized in powers of 1000, but 1K = 1024, etc.).

You then have raidz2, which uses 2x 2 TB for redundancy information, so you can lose two drives and keep going. That leaves just over 10 TB of usable space for data, which is what you are seeing.

When reading the pool sizes, the alloc amount of 9.19 TB, I think, includes the two sets of redundancy data from raidz2, so the real amount of user data in use is about 6/8 x 9.19 TB = 6.89 TB. The same goes for the free 5.31 TB: only about 6/8 x 5.31 TB is free for user data. The pool view is reporting your raw capacity, not the end-usable space.
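For comparison, you can see both views from the shell as well; a rough sketch (the pool and dataset names are the ones from this thread):

Code:
# raw pool view: SIZE/ALLOC/FREE include raidz2 parity space
zpool list zfs

# filesystem view: USED/AVAIL are net of parity and reservations
zfs list -o space zfs zfs/zfs_z2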

Michael
 
Yo,

Do I need to go into maintenance mode in ESXi to be able to update napp-it?
My OI can't connect to the internet! I can reach all my pools though, so it's not a network problem!

gr33tz
 
Yo,

Do I need to go into maintenance mode in ESXi to be able to update napp-it?
My OI can't connect to the internet! I can reach all my pools though, so it's not a network problem!

gr33tz

Maintenance mode will not help you. It is only useful for some maintenance tasks on ESXi itself and does nothing for the VMs running on the server.

Just because the server is virtual does not change most of the normal troubleshooting steps you would take with a physical machine. Check things like the DNS settings, as these are a common cause of internet problems when the machine is otherwise reachable on the network. Try pinging from the OI server to local addresses, to your default gateway, and to a known-good internet IP address, for example the address of www.google.com or similar (only some web servers respond to ping, so test it from a desktop/laptop first). Also check the routes and the default gateway settings.

Also note that some secure setups for SAN VMs put the SAN on a separate network so that normal local machines can't access it directly. Such a separate network would need a more advanced setup and extra routers to give it internet access, but I doubt this is the case for you.
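For reference, the checks above would look roughly like this from the OI shell (a sketch; the gateway address is only an example):

Code:
ping 192.168.1.1          # your default gateway (example address)
ping 8.8.8.8              # a known public IP, tests routing without DNS
nslookup www.google.com   # tests name resolution
netstat -rn               # shows the routing table and default route
cat /etc/resolv.conf      # shows the configured DNS servers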

Michael
 
Hi,

I have 8 disks in my raidz2 OpenIndiana/napp-it machine.
I created the pool a while ago, starting with a mix of 1 TB and 1.5 TB drives. Now there are only 2 TB drives in the system, i.e. 16 TB in total, with an ashift of 12.

If I click on Pools/Volumes in napp-it, it reports:
size 14.5TB, alloc 9.19T, Fres 100G, free 5.31T. The volume exported via CIFS to a Windows machine reports as a 10 TB drive, and df -h gives a size of 11 TB.

Calculate:
RAW capacity: 16 TB
- redundancy: -4 TB
- reservation: -100 G (to avoid speed degradation)
-----------------------------------------------------------------------
about 11.9 TB usable capacity, which is less than 11 TiB (real bytes) usable

PS:
The newest napp-it 0.9 displays additional GiB/TiB values for disks, pools and datasets.
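As a rough check of the TB-to-TiB step above (a small sketch using bc, which ships with OI):

Code:
echo 'scale=2; 11.9 * 1000^4 / 1024^4' | bc
# prints about 10.82, i.e. roughly 11.9 TB end up below 11 TiB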
 
Looks like napp-it 0.9 is rounding up disk sizes; my 1.5 TB disks are now all shown as 2.0 TB :)

c2t1d0 (!parted) 2 TB via dd ok ONLINE aggr0 raidz S:0 H:0 T:0 ATA WDC WD1501FASS-0
c2t7d0 (!parted) 2 TB via dd ok ONLINE aggr0 raidz S:0 H:0 T:0 ATA WDC WD2002FAEX-0
c3t0d0 (!parted) 2 TB via dd ok ONLINE aggr0 raidz S:0 H:0 T:0 ATA WDC WD15EARS-00S
 
Looks like napp-it 0.9 is rounding up disk sizes; my 1.5 TB disks are now all shown as 2.0 TB :)

c2t1d0 (!parted) 2 TB via dd ok ONLINE aggr0 raidz S:0 H:0 T:0 ATA WDC WD1501FASS-0
c2t7d0 (!parted) 2 TB via dd ok ONLINE aggr0 raidz S:0 H:0 T:0 ATA WDC WD2002FAEX-0
c3t0d0 (!parted) 2 TB via dd ok ONLINE aggr0 raidz S:0 H:0 T:0 ATA WDC WD15EARS-00S

fixed.
 
Maintenance mode will not help you. It is only useful for some maintenance tasks on ESXi itself and does nothing for the VMs running on the server.

Just because the server is virtual does not change most of the normal troubleshooting steps you would take with a physical machine. Check things like the DNS settings, as these are a common cause of internet problems when the machine is otherwise reachable on the network. Try pinging from the OI server to local addresses, to your default gateway, and to a known-good internet IP address, for example the address of www.google.com or similar (only some web servers respond to ping, so test it from a desktop/laptop first). Also check the routes and the default gateway settings.

Also note that some secure setups for SAN VMs put the SAN on a separate network so that normal local machines can't access it directly. Such a separate network would need a more advanced setup and extra routers to give it internet access, but I doubt this is the case for you.

Michael

Thanks,

Found it: for some reason the network in OI was set to manual instead of automatic!
There's no need for manual config, as the router reserves a fixed IP for OI!

Ch33rs
 
Arrgh!

One of my disks in a pool is failing, but the configured spare does not kick in. :eek:
When I try to replace the failed disk manually, I receive:

Code:
cannot replace c0t5000CCA369C9AAACd0 with c0t50014EE206E0EA31d0: devices have different sector alignment

This is SolEx-11 with napp-it 0.8l3.
The pool was created from 3 two-way mirrors as vdevs, with ashift=9.
It is made of 6 disks, 2 TB each: 2x Hitachi, 4x WD EARS.
The spare is a WD EARX; it could be added to the pool as a spare without any problem.

What can I do?
I'm re-running my final backups now... do I need to take that pool apart?

TIA,
Hominidae
 
Arrgh!

One of my disks in a pool is failing, but the configured spare does not kick in. :eek:
When I try to replace the failed disk manually, I receive:

Code:
cannot replace c0t5000CCA369C9AAACd0 with c0t50014EE206E0EA31d0: devices have different sector alignment

This is SolEx-11 with napp-it 0.8l3.
The pool was created from 3 two-way mirrors as vdevs, with ashift=9.
It is made of 6 disks, 2 TB each: 2x Hitachi, 4x WD EARS.
The spare is a WD EARX; it could be added to the pool as a spare without any problem.

What can I do?
I'm re-running my final backups now... do I need to take that pool apart?

TIA,
Hominidae

The problem:
- Your pool is ashift=9 with 512 B sectors (it was created with the wrong ashift because the WD disks lie about their true 4K technology). Newer disks report their true 4K physical sectors.
- Your hot spare is a 4K disk.

So what can you do?
- Buy a real 512 B disk (e.g. a Hitachi) if you can get one, or
- recreate your pool with ashift=12.

In that case even a 512 B disk can replace a 4K disk.
 
Thanks, _Gea, for your fast reply!

This confirms my initial thoughts.
...but why is it possible to add a disk as a spare that obviously would not fit the pool in the first place? :(
 
Thanks, _Gea, for your fast reply!

This confirms my initial thoughts.
...but why is it possible to add a disk as a spare that obviously would not fit the pool in the first place? :(

You can even add a 1 MB hot spare to a pool of TB disks.
There is no check; you must take care yourself that a hot spare makes sense.
 
Hmmm... OK, so how do I find out about the physical sector size that the disks report?
I expect there are many more drives out there that "lie" about their internals.

Edit:
I just created a test pool with my WD EARX as a basic vdev... ashift is set to 12 automagically.
So running a small test for each model (and firmware) is a DIY workaround to collect this info; see the sketch below.
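In case it helps others, the test looks roughly like this (a sketch; the device is the spare from my earlier post and the pool is a throwaway):

Code:
zpool create -f testpool c0t50014EE206E0EA31d0
zdb -C testpool | grep ashift    # 9 = 512 B sectors, 12 = 4K sectors
zpool destroy testpool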
 
_Gea,

Can you help out? I'm having a strange issue. My setup is as follows:

OI running the latest napp-it, sharing a pool via NFS only. Permissions are set as [email protected]/24 on datatstore/media using 777 (eventually I would like to run SMB and AFP as well).

I can mount the NFS share just fine in OS X, both wired and wirelessly. Both machines can read/write to the storage just fine, with some pretty awesome speeds :) My issue comes when trying to connect from a WD TV Live streaming media player.

I can see and access the shares via the media player; however, when I attempt to click on the content, the unit just hangs. I've googled around and found others with similar issues relating to the protocol used (UDP). See here: http://forum.wdlxtv.com/viewtopic.php?f=&t=7243 and http://ubuntuforums.org/showthread.php?t=1883338

What they describe is exactly what is happening to me...

Does this sound like an issue with napp-it serving NFS via UDP? Is there a way to change to TCP?
 
_Gea,

Can you help out? I'm having a strange issue. My setup is as follows:

OI running the latest napp-it, sharing a pool via NFS only. Permissions are set as [email protected]/24 on datatstore/media using 777 (eventually I would like to run SMB and AFP as well).

I can mount the NFS share just fine in OS X, both wired and wirelessly. Both machines can read/write to the storage just fine, with some pretty awesome speeds :) My issue comes when trying to connect from a WD TV Live streaming media player.

I can see and access the shares via the media player; however, when I attempt to click on the content, the unit just hangs. I've googled around and found others with similar issues relating to the protocol used (UDP). See here: http://forum.wdlxtv.com/viewtopic.php?f=&t=7243 and http://ubuntuforums.org/showthread.php?t=1883338

What they describe is exactly what is happening to me...

Does this sound like an issue with napp-it serving NFS via UDP? Is there a way to change to TCP?

As far as I know, Solaris is TCP-only on NFSv4, and on NFSv3 it is TCP unless you mount with the udp option.

http://docs.oracle.com/cd/E19963-01/html/821-1454/rfsintro-101.html

Maybe it's a permission problem with newly created files.
Try setting an ACL of everyone@=modify recursively on the share, with inheritance=on for files and folders.

If you intend to use SMB as well, you must do all settings with ACLs and not with Unix permissions.
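From the CLI that would look roughly like this (a sketch using the Solaris chmod ACL syntax; /pool/media is a placeholder for the mountpoint of your share):

Code:
# give everyone@ the modify permission set, inherited by new files (f)
# and directories (d), applied recursively to everything that exists now
/usr/bin/chmod -R A=everyone@:modify_set:fd:allow /pool/media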
 
Looks good - at least for newly created files:

- everyone has the modify permission
- inheritance for newly created files is on for files and folders
- permissions are inherited (aclinherit property)

To be sure, you may reset the ACL recursively to this setting for already created files.
 
Use the napp-it "reset ACL" option below the list of ACLs and select modify; this option is free in this extension.

Check files, folders, recursively.

Still no luck, =/

Do you think it's a problem with the way the WDTV maps the UID/GID? I find this really strange, as many people use small NAS appliances (QNAP, Synology, etc.) as NFS shares just fine. I surely can't be the only person using OI/napp-it with a WDTV.
 
Has anyone used Backup Exec against NFS shares backed by ZFS? Right now we are using the VMware connector to back up our VMs, but I would prefer to just back up the NFS store we are creating instead. I am trying to figure out whether you can back up just a snapshot, but I'm having trouble getting good info on the RALUS agent. I'm hoping we can drop the VMware connector next year and save a little on licensing, although we could also just continue with our current backup plan and run the existing agent. Since the backups of our PHD Virtual VMs take two days, I am looking for anything that can cut down on the open-snapshot times.

Life would be even easier if we could just use the Dell TL-2000 tape library natively in OI, but there is very little information on the web about that. It could be that I am being too narrow by looking at just OpenIndiana, but I do not know enough about the key differences between the OS builds to determine whether a broader Solaris search would also yield what I need.

I'm open to alternate suggestions.
 
Hi guys, how do you clean up the different startup options (boot environments)? Because I installed and reinstalled napp-it multiple times, I probably have 14+ of them...
Thanks!
 
Hi guys, how do you clean up the different startup options (boot environments)? Because I installed and reinstalled napp-it multiple times, I probably have 14+ of them...
Thanks!

List the system snapshots (boot environments): beadm list
Destroy one: beadm destroy -F bename
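For example (the BE name is only a placeholder; keep the one flagged as active):

Code:
beadm list                    # the active BE is flagged with N/R
beadm destroy -F old-be-name  # remove an unused boot environment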
 
Hello _Gea,

I tried to upgrade my napp-it to 0.9. I installed it with wget -O - www.napp-it.org/napp-it09 | perl, but after a restart, accessing http://ip:81 shows "404 file not found". Any suggestions?

This is a problem with your browser cache due to the different location of the new admin.pl.
Reload the page http://ip:81 or enter http://ip:81/cgi-bin/admin.pl directly.

See also the changelog:
http://napp-it.org/downloads/changelog_en.html

PS:
I have added a fix to the installer to avoid this problem.
 
Can you provide more info on how you resolved the cabling? I am stuck with an R720xd with an H310. Dell has acknowledged that it has a JBOD performance problem that they will not be able to correct in firmware, so I am looking to do what you did and use an LSI 9211-8i card in IT mode. You can see my post in their forums with more details (http://en.community.dell.com/support-forums/servers/f/906/t/19480834.aspx) on the issues I ran into.

I have not looked inside yet, but were you able to connect the add-on drives in the back directly to an LSI card? Dell made it sound like it was proprietary. Would a 9211-4i do for those?

thanks.
Sorry, I haven't been checking back on this and missed your question when you first posted it... The PERC H310 uses a proprietary split cable that branches a single connector at the controller end into two right-angled SFF-8087 cables that attach to the SAS backplane. We replaced that cable with two LSI straight-through SFF-8087 cables, attaching an LSI 9211-8i to the Dell backplane's SFF-8087 female connectors instead of using the built-in Dell PERC H310. All the Dell drives (we have eight 2 TB nearline SAS drives) show up under the 9211-8i BIOS and are usable. Unfortunately, however, the SFF-8087 cables stick up on the backplane end and would have to be folded pretty dramatically to fit under the cover (we have the R720xd mounted in a rack with the cover off). LSI sells right-angle 8087 cables, which we first purchased, but the right-angle connector is oriented in the wrong direction for the Dell backplane, so we had to replace those with straight-through cables.

So just to make clear: our all-in-one based on a Dell R720xd completely bypasses the included Dell H310 controller. We bought one LSI 9211-4i for two SSDs that are used as boot drives for ESXi 5.1. Because the motherboard SATA connectors are data only (i.e., no power), we had to rig up an external power brick:
Coolerguys 100-240v AC to 12 & 5v DC 4pin Molex 2A Power Adapter

Then we needed an 8087-to-SAS forward breakout cable with Molex power connectors like this:
LSI 07-00021-01 / Molex 74562-7500 Internal MiniSAS SFF-8087 to (4) SFF-8482 29pin SAS Drive cable with 4-pin Power

And finally we used a one-to-two molex power splitter which attached to the molex power connectors on the breakout cable:
Rosewill Model RCW-300 8" Power Splitter Multi-Color Cable

Phew! It all works beautifully. Remember: we are using the 9211-4i in IR mode to set up hardware RAID mirroring of the boot drives, and the 9211-8i flashed to IT mode for the SAS backplane, which is handed to the OpenIndiana VM via PCI passthrough. The Dell PERC H310 is completely unused. Everything is working very well.

Hope this helps.
 
I'm currently running Solaris Express 11 (SunOS 5.11, snv_151a) and napp-it 0.8, but I see that Solaris is no longer supported by napp-it.

Should I upgrade to OmniOS? ("stable" sounds better than OpenIndiana developer releases)

If so, is there any documentation explaining what I would need to do?

Thanks!

(EDIT: I'm running under ESXi 5.0 in case that matters...)
 
I'm currently running Solaris Express 11 (SunOS 5.11, snv_151a) and napp-it 0.8, but I see that Solaris is no longer supported by napp-it.

Should I upgrade to OmniOS? ("stable" sounds better than OpenIndiana developer releases)

If so, is there any documentation explaining what I would need to do?

Thanks!

(EDIT: I'm running under ESXi 5.0 in case that matters...)

That is not correct about Solaris.
Solaris Express and Solaris 11.0 are supported up to napp-it 0.8;
napp-it 0.9 only supports the current Solaris 11.1.

What you can say:
my main platforms are OmniOS and OpenIndiana;
all tests are done there first because I use them myself.

Solaris remains a supported platform in the form of the current Solaris 11.1.

If you would like to move to OmniOS, you need to recreate your pools
unless you have v28 pools now; then it is just an import.
But if you would like to run it under ESXi: I have not yet seen working VMware tools on OmniOS.
My All-In-Ones are all on OI live.
 
Thanks Gea!

You said that you use OI Live, but it can also be installed, right?

I'm looking at OpenIndiana Build 151A5 Server... is this the correct version?

Also, my pools are showing v31, so can I just import them into OI?
 
Thanks Gea!

You said that you use OI Live, but it can also be installed, right?

I'm looking at OpenIndiana Build 151A5 Server... is this the correct version?

Also, my pools are showing v31, so can I just import them into OI?

OI live can be installed on disk as well; it is the version with the GUI.
When I have the choice, I prefer the live edition over the text edition for usability,
and the ESXi VMware tools run without tweaks on the live edition.

The newest OI is 151a7 (live/GUI edition or text edition).
You cannot import pools newer than pool version 28, because higher versions are Oracle closed source.
You need to copy the data.
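You can check what you have before moving anything; roughly (the pool name is a placeholder):

Code:
zpool get version yourpool   # shows the on-disk pool version (e.g. 28 or 31)
zpool upgrade -v             # lists the versions the installed ZFS supports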
 
I'm running into an issue with my M1015 cards in my new all-in-one using vSphere 5.1. I'm not sure if this is part of the passthrough issues I have read about or not, but what is happening is that I configure passthrough to my OI VM exactly as the step-by-step instructions list; I can see the RAID card with the MegaRAID tools, but I can see none of the drives connected to it.

I know the card is working because I did badblocks testing with a pmagic bootable CD and it saw all the drives. All the drives are configured as JBOD as well. The only warnings I am getting are from the device driver UI, but the devices are listed as the correct device type.

The only thing I am seeing that doesn't seem standard is that VMware lists the devices as "LSI skinny" (and there are not many search hits for "LSI skinny").

What's the best way to troubleshoot this? My hardware is too new to run vSphere 4.x, but I am tempted to try vSphere 5.0. I can also run OI on bare metal to see if that makes any difference.

I'm also going to try to update the firmware on all of the cards. One card is listed as 2.120.254-1520 and the other as 2.120.214-1447; I might as well have all cards on the same firmware revision.

It just seems odd that this setup isn't working as intended, and I am sure there's a simple fix for it; I'm just not entirely sure what it is.
 