OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

There is only one NIC.

Napp-it 0.8 used the "old style config" with ifconfig and file-based settings.
0.9 uses ipadm, similar to http://wiki.openindiana.org/oi/Static+IP, which works on a default config.

You may try the following at the CLI:
svcadm disable svc:/network/physical:nwam
svcadm enable svc:/network/physical:default

# check for interface name
dladm show-link

# delete old IP interface (e.g. interface e1000g0)
ipadm delete-if e1000g0

# create interface
ipadm create-if e1000g0

# add IP address
ipadm create-addr -T static -a 192.168.0.1/24 e1000g0/v4

# now set gateway and DNS, e.g.
route -p add default 192.168.0.254
echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
cp /etc/nsswitch.dns /etc/nsswitch.conf

(these are the steps done with 0.9)
 
Would that be similar for OmniOS? I was debating converting to OmniOS. I tried it a few days ago but couldn't figure out how to give it an IP.
 
We have been very satisfied users of multiple all-in-ones based on Gea's model (and using napp-it, of course). We outgrew the limitations of the free version of vSphere, needing to use more memory, so we upgraded to the VMware Essentials Kit, which removes the memory limitation and allows up to 3 separate servers. We have two servers. The one for disaster recovery has 64GB of memory on an HP ProLiant DL380 G6 with a single quad-core Xeon E5530. Our production server is a Dell R720xd with 2 Xeon E5-2620 6-core CPUs and 96GB of memory. For software we are running ESXi 5.1 with OpenIndiana oi_151a7 as the virtual SAN on both servers.

Recently I came across SmartOS -- Joyent's open source variant of illumos with KVM virtualization added. See Why SmartOS?

Apparently quite a few former Solaris developers have migrated to Joyent. Here are some reasons I can think of to try it out:
1. SmartOS itself is designed to run solely in memory (i.e., does not need or use a boot drive like ESXi).
2. Eliminates the need for VMWare.
3. No restriction on memory, disk space, or number of CPUs.
4. Virtual machines are in Solaris zones.
5. DTrace can be used to troubleshoot or analyze performance of both the SAN and the VMs.

And here are some possible pitfalls:
1. Without VMware, you lose all of the VMware features that don't exist in SmartOS.
2. Although SmartOS is open source (and based on the illumos core), you are beholden to a commercial company, Joyent.

Anyone here tried it out for an all-in-one?

--peter
 
Hi, I've got a Samba question. I'm still using NexentaCore 3.1, but this may apply to other Solaris OSes. Some Mac clients can connect to the Samba server but can only access the root folder, no subfolders. I found out that a possible solution is to set 'unix extensions = no' in the Samba config (http://forums.macrumors.com/showthread.php?t=1269389, http://hints.macworld.com/article.php?story=20100405023255445).

The problem is that I can't figure out how to do this with sharectl; there doesn't seem to be a 'unix extensions' property.
 
The KVM implementation in SmartOS lacks support for VT-d, which makes building an all-in-one with it a somewhat inside-out affair, doesn't it?
I simply don't like the idea of your SAN being your hypervisor.

If you want to overcome the vendor lock-in and license cost, I'd suggest trying Proxmox VE (http://www.proxmox.com/products/proxmox-ve).
The latest release, 2.2, also supports VT-d and offers some nice management features.
 
Hi, I've got a Samba question. I'm still using NexentaCore 3.1, but this may apply to other Solaris OSes. Some Mac clients can connect to the Samba server but can only access the root folder, no subfolders. I found out that a possible solution is to set 'unix extensions = no' in the Samba config (http://forums.macrumors.com/showthread.php?t=1269389, http://hints.macworld.com/article.php?story=20100405023255445).

The problem is that I can't figure out how to do this with sharectl; there doesn't seem to be a 'unix extensions' property.

You are looking in completely the wrong place.
With Nexenta* and every other Solaris-based system, you mostly do not use Samba
but the Solaris built-in kernel-based CIFS server.

I suppose you mean the following:
your "root" folder is your pool or another parent ZFS filesystem,
and your "subfolder" below is not a simple folder but another ZFS filesystem mounted below it.

In this case, if you share the "root" folder, you see the other filesystems mounted below,
but you cannot traverse into them via CIFS/SMB.

Reason:
With the Solaris CIFS server, SMB sharing is a per-filesystem property with different possible settings.
There is currently no mechanism to inherit or change the ZFS properties that would be needed to traverse nested filesystems.

Solution:
- use a separate share for each filesystem, all at the same share level (see the sketch below)
- use regular folders, not nested filesystems
- use Samba instead (which brings a lot of other problems)
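For the first option, a minimal sketch of sharing each filesystem individually via the kernel CIFS server (pool and filesystem names here are made up; napp-it does the same from its SMB share menu):

Code:
# share the parent filesystem and the nested filesystem as two separate shares
zfs set sharesmb=name=data tank/data
zfs set sharesmb=name=photos tank/data/photos
# clients then connect to \\server\data and \\server\photos side by side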
 
Right, I'm sorry for mixing things up, but it really is only one pool :). Accessing the pool works perfectly with Windows clients and one Mac client, but on three other Mac clients I get the strange behaviour described in the macrumors thread.

I think I'll just drop SMB for those clients and try FTP or something. Thanks for the response though, Gea!
 
The KVM implementation in SmartOS lacks support for VT-d, which makes building an all-in-one with it a somewhat inside-out affair, doesn't it?
I simply don't like the idea of your SAN being your hypervisor.

If you want to overcome the vendor lock-in and license cost, I'd suggest trying Proxmox VE (http://www.proxmox.com/products/proxmox-ve).
The latest release, 2.2, also supports VT-d and offers some nice management features.

I'm afraid you have incorrect information -- SmartOS only has VT-d limitations when run as a *nested* virtual host under VMware or Hyper-V. When run on the bare metal, no limitations. Please go and read the documentation more carefully before you jump to conclusions. Perhaps you are confusing this with the fact that SmartOS currently only works on Intel VT-x architectures (i.e., no AMD).

System Requirements

The more memory you can dedicate to SmartOS the better due to it running as a live image:

A minimum of 1GB of RAM
64-bit x86 CPU only
To take advantage of KVM features, SmartOS requires an Intel CPU with VT-x extensions in the following microarchitectures:

Nehalem
Westmere
Sandy Bridge
SmartOS will run in a virtual machine. However, due to a lack of nested virtualization, some features of KVM will not function.
 
Right, I'm sorry for mixing things up, but it really is only one pool :). Accessing the pool works perfectly with Windows clients and one Mac client, but on three other Mac clients I get the strange behaviour described in the macrumors thread.

I think I'll just drop SMB for those clients and try FTP or something. Thanks for the response though, Gea!

If you do not have problems with OS X 10.6 and lower but have problems with 10.7+,
you may need an update to [NexentaStor 4/Illumian, OmniOS, OI or Solaris].

10.7 introduced a new Apple SMB stack with various problems.
 
I'm afraid you have incorrect information -- SmartOS only has VT-d limitations when run as a *nested* virtual host under VMware or Hyper-V. When run on the bare metal, no limitations. Please go and read the documentation more carefully before you jump to conclusions. Perhaps you are confusing this with the fact that SmartOS currently only works on Intel VT-x architectures (i.e., no AMD).

Well thank you, but I don't think so.
On an Intel-based system, VT-x comes along with VT-d, not the other way around.
Your quote does not prove that VT-d is supported with SmartOS.
Can you point me in the right direction where this is confirmed, as you state?

The VT-d feature (or IOMMU for AMD) is what you need for pass-through of your HBA to a VM, which is the architecture of the all-in-one discussed here.
SmartOS is based on the illumos kernel, and VT-d support is not implemented there.
You can repeatedly read about that on the SmartOS IRC channels... VT-d is not implemented with KVM for SmartOS.

from: http://echelog.com/logs/browse/smartos/1341525600
[...]
[23:28:30] <Saskaloon> Is there pci-passthrough, with KVM virtualization?
[23:29:05] <konobi> nope
[23:31:14] <Saskaloon> I thought the Linux implementation of KVM supported it; but, I may be confusing things with Xen.
[23:32:02] <rmustacc> Yes, Linux KVM supports it. We did not implement it.
[23:32:19] <Saskaloon> So, it could be possible with further development.
[23:32:26] <rmustacc> Yes.
[23:32:35] <rmustacc> Nothing we did preculdes adding that functionality.
[23:32:45] <rmustacc> We are currently unlikely to be the ones to do it however.
[...]
from: http://echelog.com/logs/browse/illumos/1330124400
[...]
[07:00:20] <scanf> to clarify, the kvm on smartos doesnt support any type of passthru? not even USB?
[07:00:43] <rmustacc> There is no VT-D support.
[...]
 
Well thank you, but I don't think so.
On an Intel-based system, VT-x comes along with VT-d, not the other way around.
Your quote does not prove that VT-d is supported with SmartOS.
Can you point me in the right direction where this is confirmed, as you state?

The VT-d feature (or IOMMU for AMD) is what you need for pass-through of your HBA to a VM, which is the architecture of the all-in-one discussed here.
SmartOS is based on the illumos kernel, and VT-d support is not implemented there.
You can repeatedly read about that on the SmartOS IRC channels... VT-d is not implemented with KVM for SmartOS.

hominindae -- I think you are confused about PCI pass-through. Yes, we need that on a typical all-in-one where we run a Solaris variant as a virtual host under ESXi, to give the Solaris OS the most efficient access to the disk hardware for SAN/ZFS use. However, because SmartOS is intended to run on bare metal (i.e., not as a virtual host), it does not need PCI pass-through of the RAID controller and gets "native" access for ZFS. It does, however, need virtual hardware assistance (VT-x) to get the most performance out of the virtual machines that will be running under SmartOS.

As far as pointing you in the right direction, here are some links to articles/blogs that can give you a better idea of what SmartOS and KVM are all about:
http://dtrace.org/blogs/bmc/2011/08/15/kvm-on-illumos/
http://opusmagnus.wordpress.com/2012/02/14/discovering-smartos/
Porting KVM to SmartOS - Bryan Cantrill, Joyent, KVM Forum 2011

When I brought up SmartOS in this thread it was simply because it sounded cool to me and a possible alternative to ESXi for an all-in-one. I did not intend to start any kind of flame war. Plus I was hoping some of the napp-it users here had given it a try and could share their experiences.
 
Is there any documentation on setting up FTP? I can't figure out how to create FTP accounts that point to different directories. Sorry if it has been discussed. I don't have an FTP tab under the ZFS filesystem next to SMB / AFP etc.
 
When I brought up SmartOS in this thread it was simply because it sounded cool to me and a possible alternative to ESXi for an all-in-one.
[...]
I did not intend to start any kind of flame war. Plus I was hoping some of the napp-it users here had given it a try and could share their experiences.

Yes, I understand. Maybe I have to apologise because English is not my native tongue.
I have been looking for an alternative to ESXi for a long time now,
and of course I looked into SmartOS as soon as KVM support was announced.
I decided to stay away from it, so I cannot offer the real-life experience you are looking for.
What made me NOT try out KVM with SmartOS or other illumos-based systems
were two things:

- lack of VT-d support, which is a requirement on my side
- derived from that, running SAN/NAS and hypervisor in the same context IMHO
results in a "dirty" design.
An "all-in-one" is an architecture or design for a virtualized SAN/NAS. What makes
it successful is the clear separation of features, fulfilling individual requirements and offering better flexibility.

IMHO, SmartOS with KVM is only an alternative for a hypervisor in the cloud and not a good alternative for the SAN/NAS part of an all-in-one.

Finally, why the VT-d feature might be important to others looking for an alternative way to build an all-in-one without ESXi:
for example, on the same all-in-one I am also running TV servers (pass-through of DVB-S2 tuners) and telephony services/PABX (pass-through of landline PABX cards).
This is an advantage of the design which would get lost with SmartOS, and the reason I pointed to Proxmox VE.

On a side note, the reason I am still on ESXi is that I am running Solaris 11 with native ZFS encryption, and Solaris 11 does not run as stably on Linux-based KVM kernels (OI does)... but whenever I do an overhaul I give the next Proxmox release a try.
 
May I get some advice on when random IOPS matter for ZFS?

I currently have an all-in-one setup providing these main services:

NFS datastore for ESXi
Fileserver (music, many 30GB movies, etc.)
PVR (ring buffer for live TV, recordings)
Streaming

My current setup for this is 3 vdevs of 2TB disk mirrors, for a total of 6TB usable.

I went with mirrors because they offer the best compromise of space, redundancy and IOPS.

With ZFS raidz you only get the IOPS of a single disk per vdev.

However, how important are IOPS for all the activities above except the datastore?

I am considering redoing my pool to provide more speed to VMs by using SSDs and more storage space by adding 4TB drives.

I could get two 512GB SSDs in a mirror and move my VMs there. I wonder if I could then opt for a more space-friendly layout with fewer random IOPS, such as 3-6 disk raidz vdevs.

Are there cases where a fileserver needs lots of IOPS, or are reads/writes mostly sequential? What about many small files, etc.?

Thanks!
 
Seems like a reasonable plan - playback of movies/music etc is mostly sequential I/O.

However, mixing 4TB drives into a new raidzX pool along with your current 2TB drives might be interesting :).
It's doable, but you may have to be creative, depending on the layout/config you want!
 
Does anyone have an example where good random IOPS would be useful for fileserver purposes?
 
Does anyone have an example where good random IOPS would be useful for fileserver purposes?

Huh?

If you have many people accessing a large central store, then you need IOPS. If the files they're accessing are also very large (100MB+), you also need good throughput.

If you only have a few people accessing extremely large files (1GB+), you mainly need throughput.

If you're building a home SAN/NAS, it really doesn't matter.
 
Evening

Recently I added another storage pool

I copied data from the old pool to the new pool from within the VM.

I'm now having issues with permissions

Result of an attempt at deleting a file: "You require permission from S-1-5-21-763861274-1134914269-3099842216-1101 to make changes to this file"

I haven't had any other problems with file storage or performance for my lab VMs.

Any ideas?
 
Evening

Recently I added another storage pool

I copied data from the old pool to the new pool from within the VM.

I'm now having issues with permissions

Result of an attempt at deleting a file: "You require permission from S-1-5-21-763861274-1134914269-3099842216-1101 to make changes to this file"

I haven't had any other problems with file storage or performance for my lab VMs.

Any ideas?

This is a unique Windows SID. The Solaris CIFS server can store Windows SIDs as an extended ZFS attribute so it behaves exactly like Windows. If you move such files to another system, you must fix it like you would on Windows: reset the ACL recursively (root permission needed) and optionally take ownership. If you are in a Windows domain, this ID is known on all domain-member computers.
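A minimal sketch of such a reset from the Solaris CLI, assuming the affected data lives under /tank/data (path and ACL are examples only; the same can be done from Windows while connected as root, or via the napp-it ACL extension):

Code:
# as root: take ownership, then reset the ACL recursively with inheritance
chown -R root /tank/data
/usr/bin/chmod -R A=owner@:full_set:fd:allow,everyone@:modify_set:fd:allow /tank/data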
 
The hits just keep on coming with this thing.

Things that I've observed: if I run CrystalDiskMark from a VM on the same ESXi host, the results are fine. dd bench is still showing around 200MB/s write / 500MB/s read.
I determined that uTorrent is a POS, but the ZIL plus increasing the disk cache largely remedied the disk-overloading problem. Now that I'm having this other issue, though, it seems to have resurfaced, and that is:

The server just goes unstable and essentially needs to be rebooted. The only thing that I can see that looks troublesome is the following in the system log:
Code:
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.warning] WARNING: /scsi_vhci (scsi_vhci0):
Jan 26 06:33:34 napp-it-box      /scsi_vhci/disk@g5000cca22bc29860 (sd4): Command Timeout on path mpt_sas2/disk@w5000cca22bc29860,0
Jan 26 06:33:34 napp-it-box scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      Disconnected command timeout for Target 12
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.warning] WARNING: /scsi_vhci (scsi_vhci0):
Jan 26 06:33:34 napp-it-box      /scsi_vhci/disk@g5000cca22bc15e82 (sd3): Command Timeout on path mpt_sas3/disk@w5000cca22bc15e82,0
Jan 26 06:33:34 napp-it-box scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      Disconnected command timeout for Target 11
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.warning] WARNING: /scsi_vhci (scsi_vhci0):
Jan 26 06:33:34 napp-it-box      /scsi_vhci/disk@g5000cca22bc1fab5 (sd9): Command Timeout on path mpt_sas5/disk@w5000cca22bc1fab5,0
Jan 26 06:33:34 napp-it-box scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      Disconnected command timeout for Target 15
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:33:34 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:33:34 napp-it-box scsi: [ID 243001 kern.warning] WARNING: /scsi_vhci (scsi_vhci0):
Jan 26 06:33:34 napp-it-box      /scsi_vhci/disk@g5000cca22bc2a3ac (sd6): Command Timeout on path mpt_sas8/disk@w5000cca22bc2a3ac,0
Jan 26 06:34:44 napp-it-box scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:34:44 napp-it-box      Disconnected command timeout for Target 16
Jan 26 06:34:44 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:34:44 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:34:44 napp-it-box scsi: [ID 243001 kern.warning] WARNING: /scsi_vhci (scsi_vhci0):
Jan 26 06:34:44 napp-it-box      /scsi_vhci/disk@g5000cca22bc29860 (sd4): Command Timeout on path mpt_sas2/disk@w5000cca22bc29860,0
Jan 26 06:34:44 napp-it-box scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:34:44 napp-it-box      Disconnected command timeout for Target 12
Jan 26 06:34:44 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:34:44 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:34:44 napp-it-box scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:34:44 napp-it-box      Disconnected command timeout for Target 13
Jan 26 06:34:44 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:34:44 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:34:44 napp-it-box scsi: [ID 243001 kern.warning] WARNING: /scsi_vhci (scsi_vhci0):
Jan 26 06:34:44 napp-it-box      /scsi_vhci/disk@g5000cca22bc201fa (sd5): Command Timeout on path mpt_sas6/disk@w5000cca22bc201fa,0
Jan 26 06:34:44 napp-it-box scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:34:44 napp-it-box      Disconnected command timeout for Target 15
Jan 26 06:34:44 napp-it-box scsi: [ID 243001 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:34:44 napp-it-box      mptsas_check_scsi_io: IOCStatus=0x48 IOCLogInfo=0x31130000
Jan 26 06:34:44 napp-it-box scsi: [ID 243001 kern.warning] WARNING: /scsi_vhci (scsi_vhci0):
Jan 26 06:34:44 napp-it-box      /scsi_vhci/disk@g5000cca22bc2a3ac (sd6): Command Timeout on path mpt_sas8/disk@w5000cca22bc2a3ac,0
Jan 26 06:34:54 napp-it-box scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:34:54 napp-it-box      MPTSAS Firmware Fault, code: 265d
Jan 26 06:34:55 napp-it-box scsi: [ID 365881 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):
Jan 26 06:34:55 napp-it-box      mptsas0 Firmware version v9.0.0.0 (?)
Jan 26 06:34:55 napp-it-box scsi: [ID 365881 kern.info] /pci@0,0/pci15ad,7a0@15/pci1000,3040@0 (mpt_sas0):

Code:
root@napp-it-box:~# iostat -En
c8t0d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: VMware   Product: Virtual disk     Revision: 1.0  Serial No:
Size: 13.96GB <13958643712 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 5 Predictive Failure Analysis: 0
c7t0d0           Soft Errors: 0 Hard Errors: 6 Transport Errors: 0
Vendor: NECVMWar Product: VMware IDE CDR10 Revision: 1.00 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 6 No Device: 0 Recoverable: 0
Illegal Request: 1 Predictive Failure Analysis: 0
c0t5000CCA22BC15E82d0 Soft Errors: 0 Hard Errors: 7 Transport Errors: 14
Vendor: ATA      Product: Hitachi HDS72404 Revision: A3B0 Serial No: PK1331PAG30BJS
Size: 4000.79GB <4000787030016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 7 Recoverable: 0
Illegal Request: 6 Predictive Failure Analysis: 0
c0t5000CCA22BC29860d0 Soft Errors: 0 Hard Errors: 11 Transport Errors: 56
Vendor: ATA      Product: Hitachi HDS72404 Revision: A3B0 Serial No: PK1311PAG5PZGJ
Size: 4000.79GB <4000787030016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 11 Recoverable: 0
Illegal Request: 9 Predictive Failure Analysis: 0
c0t5000CCA22BC201FAd0 Soft Errors: 0 Hard Errors: 2 Transport Errors: 2
Vendor: ATA      Product: Hitachi HDS72404 Revision: A3B0 Serial No: PK1331PAG4DXGS
Size: 4000.79GB <4000787030016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 2 Recoverable: 0
Illegal Request: 2 Predictive Failure Analysis: 0
c0t5000CCA22BC2A3ACd0 Soft Errors: 0 Hard Errors: 4 Transport Errors: 5
Vendor: ATA      Product: Hitachi HDS72404 Revision: A3B0 Serial No: PK1311PAG5TZSJ
Size: 4000.79GB <4000787030016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 4 Recoverable: 0
Illegal Request: 4 Predictive Failure Analysis: 0
c0t5000CCA22BC2A3D8d0 Soft Errors: 0 Hard Errors: 3 Transport Errors: 1
Vendor: ATA      Product: Hitachi HDS72404 Revision: A3B0 Serial No: PK1311PAG5U15J
Size: 4000.79GB <4000787030016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 3 Recoverable: 0
Illegal Request: 3 Predictive Failure Analysis: 0
c0t5000CCA22BC1778Dd0 Soft Errors: 0 Hard Errors: 3 Transport Errors: 1
Vendor: ATA      Product: Hitachi HDS72404 Revision: A3B0 Serial No: PK2331PAG371AT
Size: 4000.79GB <4000787030016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 3 Recoverable: 0
Illegal Request: 3 Predictive Failure Analysis: 0
c0t5000CCA22BC1FAB5d0 Soft Errors: 0 Hard Errors: 3 Transport Errors: 2
Vendor: ATA      Product: Hitachi HDS72404 Revision: A3B0 Serial No: PK2331PAG4AZET
Size: 4000.79GB <4000787030016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 3 Recoverable: 0
Illegal Request: 3 Predictive Failure Analysis: 0
c14t5d0          Soft Errors: 0 Hard Errors: 73 Transport Errors: 20
Vendor: ATA      Product: Corsair Force GT Revision: 3    Serial No:
Size: 60.02GB <60022480896 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 37 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c8t1d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: VMware   Product: Virtual disk     Revision: 1.0  Serial No:
Size: 62.28GB <62277025792 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 3 Predictive Failure Analysis: 0

Note, all brand new disks.


I checked all of the connections.. ugh. Just ordered some new SFF-8087 breakout cables and an external drive to migrate all of my stuff off to.
 
Hi guys,

Just tried the GUI update to 0.9, and I'm getting the following issue:

Code:
Software error:

Can't locate UUID/Tiny.pm in @INC (@INC contains: /var/web-gui/data/napp-it/CGI /usr/perl5/site_perl/5.10.0/i86pc-solaris-64int /usr/perl5/site_perl/5.10.0 /usr/perl5/vendor_perl/5.10.0/i86pc-solaris-64int /usr/perl5/vendor_perl/5.10.0 /usr/perl5/vendor_perl /usr/perl5/5.10.0/lib/i86pc-solaris-64int /usr/perl5/5.10.0/lib .) at admin.pl line 713.
BEGIN failed--compilation aborted at admin.pl line 713.
For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.

Any ideas? Do I need to reinstall something?

Thanks.
 
Hi all,
I want to switch from Nexenta 4 to OmniOS since I just can't get my Nexenta disks to spin down...
Are there any major differences between OmniOS Stable and Bloody, or is the base OS/kernel/illumos the same?

thanks

nvm, as always: as soon as it's posted, you find the answer....
http://omnios.omniti.com/wiki.php/StableVsBloody
 
Hi guys,

Just tried the GUI update to 0.9, and I'm getting the following issue:

Code:
Software error:

Can't locate UUID/Tiny.pm in @INC (@INC contains: /var/web-gui/data/napp-it/CGI /usr/perl5/site_perl/5.10.0/i86pc-solaris-64int /usr/perl5/site_perl/5.10.0 /usr/perl5/vendor_perl/5.10.0/i86pc-solaris-64int /usr/perl5/vendor_perl/5.10.0 /usr/perl5/vendor_perl /usr/perl5/5.10.0/lib/i86pc-solaris-64int /usr/perl5/5.10.0/lib .) at admin.pl line 713.
BEGIN failed--compilation aborted at admin.pl line 713.
For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.

Any ideas? Do I need to reinstall something?

Thanks.

I suppose you need to reboot after running the wget installer.
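If a reboot alone does not help, re-running the online installer and rebooting again is worth a try (the wget one-liner below is the commonly documented napp-it installer; run it as root):

Code:
# re-run the napp-it online installer, then reboot
wget -O - www.napp-it.org/nappit | perl
reboot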
 
I'm running napp-it 0.8 without problems. Should I follow the rule of "if it's not broken, don't upgrade", or is it relatively risk-free to upgrade to 0.9?

Has anybody got VAAI working with Comstar and napp-it?
 
This is a unique Windows SID. The Solaris CIFS server can store Windows SIDs as an extended ZFS attribute so it behaves exactly like Windows. If you move such files to another system, you must fix it like you would on Windows: reset the ACL recursively (root permission needed) and optionally take ownership. If you are in a Windows domain, this ID is known on all domain-member computers.

How do I go about doing that?
 
Should same-size vdevs be prioritized over other factors?

I have six 2TB and two 4TB drives (I am planning to expand with 4TB drives from now on).

Should I go with 2x 2TB raidz + 1x 4TB mirror, so each vdev would have 4TB of usable storage, or just mirror everything, so 3x 2TB mirrors + 1x 4TB mirror?

The second option might be faster, but I wonder what will happen when the 2TB vdevs are full.

Thanks
 
Thanks hom - that was on my to-do list. I did, however, find the culprit: the Corsair Force GT I had in for ZIL testing is faulty.. but it made it look like the entire pool had a problem.
 
Using the latest OmniOS stable. I don't want to upgrade my pools yet, but executing
zpool status -x displays the following:
Code:
  pool: tank
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
	still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(5) for details.
  scan: scrub repaired 0 in 3h3m with 0 errors on Thu Jan  3 06:03:52 2013
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    c1t0d0  ONLINE       0     0     0
	    c1t1d0  ONLINE       0     0     0
	    c1t2d0  ONLINE       0     0     0
	    c1t3d0  ONLINE       0     0     0

errors: No known data errors

Does anybody know whether I can tell ZFS to ignore that?
 
Should same-size vdevs be prioritized over other factors?

I have six 2TB and two 4TB drives (I am planning to expand with 4TB drives from now on).

Should I go with 2x 2TB raidz + 1x 4TB mirror, so each vdev would have 4TB of usable storage, or just mirror everything, so 3x 2TB mirrors + 1x 4TB mirror?

The second option might be faster, but I wonder what will happen when the 2TB vdevs are full.

Thanks

There is no restriction; you can expand a pool with any vdev type.
If they are different, the result is an unbalanced pool, which means that not all disks are used on writes -> lower performance. If a vdev is full, the others are used. Due to copy-on-write, the pool re-balances over time.

Only problem: you cannot remove a "suboptimal" vdev / pool layout without destroying the pool.
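A minimal sketch of what such an expansion looks like, assuming an existing raidz pool named tank and two new 4TB disks (the device names are made up); because the new vdev type differs from the existing ones, zpool warns about a mismatched replication level and has to be forced:

Code:
# add a mirror vdev to a pool that already consists of raidz vdevs
zpool add tank mirror c0t5000CCA000000001d0 c0t5000CCA000000002d0
# zpool refuses due to the mismatched replication level; -f overrides the check
zpool add -f tank mirror c0t5000CCA000000001d0 c0t5000CCA000000002d0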
 
I guess I will destroy my pool and go with different vdev types, but the same size.

That should give me
raidz1 (3x 2TB): 4TB
raidz1 (3x 2TB): 4TB
mirror (2x 4TB): 4TB
... and continue to expand with 4TB drives in mirrors.

Hopefully the two raidz1 vdevs won't have too much of an impact in terms of random performance for my pool.

EDIT: It seems that ZFS doesn't recommend mixing raid types in a pool, and it needs to be forced with -f; should I be worried?

Thanks
 
Having a strange problem!

When starting my VM in ESXi I try to log in via the console, but get a message from OI to configure my first-time login; I fill in what is required, but then get a message that OI can't create the folders due to permission problems! Result: the OI desktop doesn't start and I only get the blue screen with the OI logo, nothing else!
Any ideas?

Ty

Anyone?

(screenshots attached: screenshot038q.png, screenshot039o.png, screenshot040fs.png)
 
I guess I will destroy my pool and go with different vdev types, but the same size.

That should give me
raidz1 (3x 2TB): 4TB
raidz1 (3x 2TB): 4TB
mirror (2x 4TB): 4TB
... and continue to expand with 4TB drives in mirrors.

Hopefully the two raidz1 vdevs won't have too much of an impact in terms of random performance for my pool.

EDIT: It seems that ZFS doesn't recommend mixing raid types in a pool, and it needs to be forced with -f; should I be worried?

Thanks

You have 3 options.

Option 1 - Add the mirror: ZFS does allow adding multiple types of vdevs to a single pool, but it is not recommended.

Option 2 - Rebuild the pool: back up the existing data to the new 4TB drives, destroy the pool, rebuild it using mirrors of the 2TB drives, put the data back on, and add the mirror of the 4TB drives (a rough sketch follows below).

Option 3 - Buy a third 4TB drive: add a third 3-drive raidz1 vdev and call it a day.
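For option 2, one possible way to do the shuffle with ZFS replication, assuming hypothetical pool and device names (rsync or napp-it replication would work just as well; this is a sketch, not a prescribed procedure):

Code:
# 1. build a temporary mirror pool from the two 4TB disks and copy everything over
zpool create temppool mirror c3t0d0 c3t1d0
zfs snapshot -r tank@move
zfs send -R tank@move | zfs receive -Fdu temppool

# 2. destroy tank and rebuild it from the six 2TB disks as three mirrors
zpool destroy tank
zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 mirror c2t4d0 c2t5d0

# 3. copy the data back, then re-use the 4TB disks as a fourth mirror vdev
zfs snapshot -r temppool@back
zfs send -R temppool@back | zfs receive -Fdu tank
zpool destroy temppool
zpool add tank mirror c3t0d0 c3t1d0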
 
OK, I guess there is no vdev setup that would let me use the same type of vdev and have the same vdev capacity.

All I could come up with is:

1) Balanced
4TB: raidz (3x2TB )
4TB: raidz (3x2TB )
4TB: Mirror (2x4TB )

2) Balanced
8TB: raidz2 ( 6x2TB )
8TB: raidz1 ( 3x4TB )

3) Unbalanced
2TB: Mirror( 2x2TB )
2TB: Mirror( 2x2TB )
2TB: Mirror( 2x2TB )
4TB: Mirror( 2x4TB )

4) Unbalanced
4TB: raidz (3x2TB )
4TB: raidz (3x2TB )
8TB: raidz (3x4TB )
 
In order to prevent unexpected performance issues due to mixed disk sizes, different sector sizes and mixed vdev types, I am pretty much set on creating a second pool with only 4TB drives.

I think it will pay off in the long run as I add more disks to it, and it also splits my data into two manageably sized pools in case I ever want to break either one apart.

Hopefully this will help someone in a similar situation.
 
Where on the actual OS are the ZFS folders for napp-it stored? Months ago I set permissions on my own personal folder and can no longer access it to undo that folder permission, so I need to do it from the CLI. Using Solaris 11.1 as well, if that helps.
 
Where on the actual OS are the ZFS folders for napp-it stored? Months ago I set permissions on my own personal folder and can no longer access it to undo that folder permission, so I need to do it from the CLI. Using Solaris 11.1 as well, if that helps.

The real path is /pool/filesystem (the pool name followed by the ZFS filesystem name).
- Do not touch Unix permissions for SMB; use ACLs, or the inheritance settings are lost.
- Set ACLs via the CLI, from Windows as root, or via the napp-it ACL extension (see the sketch below).
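A minimal sketch for the CLI route on Solaris 11.1, with a made-up filesystem and folder (the napp-it ACL extension or a Windows session connected as root does the same more comfortably):

Code:
# find where the filesystem is actually mounted
zfs list -o name,mountpoint
# show the current ACL on the locked folder
ls -Vd /tank/users/personal
# as root, grant yourself access again without touching the Unix permission bits
/usr/bin/chmod A+user:myuser:full_set:fd:allow /tank/users/personal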
 