OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Thanks, this answers my question.

SMB seems to work better on S11.1 compared to OI/Omni, and what's broken in S11.1 doesn't impact me.

Two things broken in OI:

SMB for Android doesn't work.
Editing security permissions with Windows 8 results in a crash of Explorer :(

Now I just need to figure out how to create a filesystem on the pool that is at version 5 so it remains backward compatible with OI/Omni...

Edit: zfs create -o version=5 [NAME] does the trick...
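For anyone else doing this, you can verify the result afterwards; the pool/filesystem names here are just examples:

zfs get version tank/data    # should report 5
zpool get version tank       # should stay at 28 (or lower) for the pool to remain importable on OI/Omni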

Gea, it would be nice to add a free-form field to the create-filesystem dialog to specify options in the GUI.
 
Can someone enlighten me as to why my ZFS pool parameter "ashift", concerning disk sector sizes, reports as "ashift=9", when ALL my drives are Advanced Format drives that should report a 4K sector size and result in "ashift=12"?
 
Because some drives wrongly report that they are 512b. Like most Western Digital Green drives.
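If you want to check what an existing pool was created with, the ashift shows up in zdb output (pool name is just an example):

zdb -C tank | grep ashift    # ashift: 9 = 512B sectors, ashift: 12 = 4K sectors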
 
Because some drives wrongly report that they are 512b. Like most Western Digital Green drives.

:mad::mad:

Damn WD, I can't replace any drives at all, since all new drives (i.e. WD Reds) are reporting 4K sectors :(

EDIT: Can I recreate the zpool with forced 4K sectors if I offload my data?
 
Yes you can. In Disks, Disk Details, choose: edit sd.conf

Read the doc there; a working example (edit to fit your drive):

sd-config-list = "ATA WDC WD4001FAEX-0", "physical-block-size:4096";
then reboot, then re-initialize the disk, then create the pool :)
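If you are not sure of the exact vendor/model string to put into sd-config-list for your own drives, iostat reports it per disk:

iostat -En | grep Vendor    # shows the Vendor/Product string for every disk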
 
I think I have a problem with the reported space; as shown below, my pool is showing 78G remaining.

[screenshot]


ZFS is showing 2% remaining, with 3.49TB being used.

[screenshot]


However, I have only allocated 2.7TB in ESXi, and there is space free on the drives.

[screenshot]



The LUNs are set up as below

[screenshot]



How can I fix this, as clearly something is wrong?
 
Hi Gea,
Thanks for the quick reply, there are no snaps,

[screenshot]


Pool is 3.6tb 10 369gb

There is only 2.7TB allocated via thin-provisioned LUNs, and the LUNs are only using 1.7TB, yet as above it is only showing 78GB available and appears to show 5.24 allocated?
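When the numbers don't add up like this, it can help to see where ZFS thinks the space went; refreservations on the zvols behind the LUs are one possible cause (pool/volume names are examples, lun1 is a hypothetical zvol):

zfs list -o space -r tank                      # USEDSNAP / USEDDS / USEDREFRESERV / USEDCHILD per dataset
zfs get volsize,refreservation,used tank/lun1  # per-volume detail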
 
Has anyone got an SM socket 2011 motherboard with HDDs on the Intel SCU controller and working ZFS? Because I can't get napp-it to find the drives. They are not showing under Initialize, but under Disks -> Controller I get:
Disks on Interface
Interface Type/Online Busy Phys_Id Modell
c4 connected unconfigured unknown scsi-sas n /devices/pci@0,0/pci8086,3c02@1/pci1000,3020@0/iport@v0:scsi
c5 connected unconfigured unknown scsi-sas n /devices/pci@0,0/pci8086,3c06@2,2/pci1000,3020@0/iport@v0:scsi
sata1/0::dsk/c3t0d0 connected configured ok Mod: INTEL SSDSA2BT040G3 FRev: 4PC10362 SN: BTPR23350144040AGN disk
The Intel SSD OS boot disk is on the chipset controller. I have to have the SCU RAID ROM enabled, otherwise the disks aren't even showing up in the motherboard BIOS.
Edit: I'm guessing it's the driver, any idea if one is available yet?
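Not an answer, but from the console you can at least list and try to configure those attachment points with cfgadm; if there is simply no illumos driver for the SCU yet, this won't help (controller name taken from the listing above):

cfgadm -al              # list attachment points and their occupant state
cfgadm -c configure c4  # attempt to configure the unconfigured SCU controller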
 
How do I delete the files/volumes then, as I can't see them listed in LUNs or anywhere else?


I tried an import

[screenshot]
 
How do I delete the files/volumes then, as I can't see them listed in LUNs or anywhere else?

File-based LU:
delete the file, e.g.
rm /VMS/1_Perc6i

I would place the LU files in a filesystem that is SMB-shared, e.g. /VMS/units/1_Perc6i,
so you can manage the virtual disks from Mac/Windows (move/backup/delete).

Volume-based LU:
delete the volume via
menu Disk - Volume - delete
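As a cross-check before deleting anything, the backing file or volume of every LU can also be listed on the CLI:

stmfadm list-lu -v    # the 'Data File' line shows the file or zvol behind each LU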
 
So if I present my volumes as I have been doing, as thin-provisioned LUNs, and I then move all of the data out from within the drive in a VM or with Virtual Centre and then delete the LUN, will the space still remain used on the pool? Is there no way to get back the allocated space?
 
So if I present my volumes as I have been doing, as thin-provisioned LUNs, and I then move all of the data out from within the drive in a VM or with Virtual Centre and then delete the LUN, will the space still remain used on the pool? Is there no way to get back the allocated space?

Comstar - delete LU = de-register/delete the reference to a LU file, not delete the file itself.
You must delete the file separately when it is no longer needed.
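So for a file-based LU the full cleanup is roughly (the GUID is a placeholder, the path is from the example above):

stmfadm delete-lu <GUID>    # de-register the LU from Comstar
rm /VMS/1_Perc6i            # then delete the backing datafile to actually free the space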
 
I see from the above post I can rm -f the file, but what will be the location of the associated file? How can I find any old LUN files?
 
How do I delete/find the file that is associated with the deleted LUN reference?

When you create a LU, you must select a filesystem and a filename for the datafile.
You can see the datafile for a LU in menu Comstar - Logical Units (row Datafile).

If you have deleted the LU and have forgotten where you created the datafile,
you must search all filesystems for unneeded datafiles (root level of the filesystem only).

The same applies when you want to import a LU + datafile: you must know the path.
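If you really don't remember where a datafile lives, listing the root of each filesystem (plus the zvols) narrows it down quickly; /VMS is just the path used earlier in this thread:

ls -lh /VMS            # file-based LU datafiles show up as large plain files here
zfs list -t volume     # volume-based LUs are zvols and show up here instead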
 
Ignore that last question, I have found all the old LUN files in the /VMS dir.



Thanks again for all your help
 
Anyone having issues setting permissions on a CIFS share using Windows 8?

Every time I try to add a user, Explorer force-closes and restarts, closing any open windows.

Any idea?
 
Looking a whole lot healthier and everything is flying again now, as the performance had degraded severely.

[screenshot]
 
I forgot to mention that this was the first thing I tried, right before rebooting.
Interestingly, this did not occur in the test environment.
-Frozen
 
@Gea,

I have a possible feature request. Maybe someone else will need this, too?!

Would it be possible to implement a way to see which feature flags are enabled on a specific pool and which feature flags are available? It would also be nice if you could enable/disable feature flags on a specific pool.

When I do a ZFS upgrade in napp-it, it shows me which feature flags are available and all of them get enabled. A feature-flag view option would be nice.

CU Ghandalf
 
@Gea,

Is there a procedure or proper way to shut down the all-in-one server with ZFS? No files are being accessed or written to the ZFS pool, but I don't want to interrupt anything it might be doing in the background. Thanks!
 
In the ESXi settings there is an autostart/shutdown menu.

Select the ZFS VM to start first, then all the others.


Do it in the area where order matters.
 
@Gea,

I have a possible feature request. Maybe someone else will need this, too?!

Would it be possible to implement a way to see which feature flags are enabled on a specific pool and which feature flags are available? It would also be nice if you could enable/disable feature flags on a specific pool.

When I do a ZFS upgrade in napp-it, it shows me which feature flags are available and all of them get enabled. A feature-flag view option would be nice.

CU Ghandalf

edit: done

napp-it 0.9a8 (25.2.) supports
- enabling features in menu Pools - Features
- LZ4 compress (OmniOS bloody)

more
http://wiki.illumos.org/display/illumos/LZ4+Compression
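For reference, the same can be done from the CLI; pool/filesystem names are examples:

zpool get all tank | grep feature@            # state of every feature flag (disabled/enabled/active)
zpool set feature@lz4_compress=enabled tank   # enable a single feature instead of upgrading all of them
zfs set compression=lz4 tank/data             # then switch a filesystem to LZ4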
 
Is everybody here using the recommended number of drives for RAIDZ? I'm planning on using a 12-drive RAIDZ2, and that's not recommended; some go even as far as saying that performance will be horrible (but I haven't seen numbers). Usage will be a storage server with minimal I/O demands.
 