OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Just a quick word on ESXi 5.1

I just upgraded my ESXi from 5.0u1 to 5.1 without any issue, converted the OI VM to the newest virtual hardware version (9), and re-installed VMware Tools within OI.

All went smoothly and my ZFS is back online :)
 
This may have been discussed in this thread, but what is the best option/practice for an all-in-one server running about 5 VMs?

1. VM datastore on the main drive pool, with ESXi and OI on a separate disk
2. All VMs on SSD and only data on the pool

Will running green drives with VMs 24/7 be better than letting them spin down multiple times per day?

Has anyone had luck connecting to OI from Android and accessing a non-guest (authenticated) area?
 
I've had to move away from the all-in-one approach for now: funds are limited until year-end budget money frees up in December, and my Dell 2950 doesn't do PCI passthrough.

I'm trying to install OI/napp-it standalone now and the install gets stuck on "building cpio file lists" and doesn't continue. I've tried other drives for the install, but it always hangs at this point. Does anyone know of a workaround? There's not a lot coming back from Google on this one beyond a handful of motherboard-specific workarounds.
 
Just a quick word on ESXi 5.1

I just upgraded my ESXi from 5.0u1 to 5.1 without any issue, converted the OI VM to the newest virtual hardware version (9), and re-installed VMware Tools within OI.

All went smoothly and my ZFS is back online :)

Are you using a commercial (i.e., paid-for) version of VMware ESXi 5.1? The reason I ask is that it is my understanding that there is no free version of ESXi 5.1.
 
Upgraded to ESXi 5.1, updated my OI VM to version 9, and updated to the latest version of VMware Tools, and I'm getting better results.

My network read speeds have almost doubled (60 MB/sec to 110 MB/sec) and so far I haven't had to restart the SMB server due to write speeds falling.

I'm hoping over the next couple days to not have to restart SMB but so far I'm liking the improvements.
 
Upgraded to ESXi 5.1, updated my OI VM to version 9, and updated to the latest version of VMware Tools, and I'm getting better results.

My network read speeds have almost doubled (60 MB/sec to 110 MB/sec) and so far I haven't had to restart the SMB server due to write speeds falling.

I'm hoping over the next couple days to not have to restart SMB but so far I'm liking the improvements.

Could you please tell us your hardware?

Thanks.
 
Could you please tell us your hardware?

Thanks.

My hardware is pretty new so hopefully what I'm seeing is due to improvements in 5.1 for newer hardware.

Specs:
Chassis: Heavily modded AIC RSC-4ED2 4U 24-bay (modded to accept an ATX PSU and silent fans) + rails
Mobo: Supermicro X9DRI-F-O
CPUs: 2x Intel Xeon E5-2620 (6 cores each = 12x 2GHz cores, 24 logical cores with HT)
RAM: 32GB Kingston DDR3 ECC FBDIMMs
SSD: Intel 520 120GB
HDD: 6x WD RED 3TB
HBAs: 3x IBM M1015 2SAS/8SATA
PSU: Seasonic Platinum 860W

Rack: Tripplite 12U Enclosure
Switch: HP Procurve 2810-24G (fully managed Layer 2)
WAP: Ubiquiti UniFi AP Pro (802.11a/b/g/n)
UPS: APC Smart-UPS 1500VA + network card + rails

All network wiring and punchdowns are CAT6.
 
Upgraded to ESXi 5.1, updated my OI VM to version 9, and updated to the latest version of VMware Tools, and I'm getting better results.

My network read speeds have almost doubled (60 MB/sec to 110 MB/sec) and so far I haven't had to restart the SMB server due to write speeds falling.

I'm hoping over the next couple days to not have to restart SMB but so far I'm liking the improvements.

Here is my situation:
If anyone is still using PCI/PCI-X cards for passthrough, do NOT update to 5.1 :p
As soon as I start any VM with a PCI/PCI-X device passed through, ESXi 5.1 crashes (pink screen of death).
The same VMs ran normally under ESXi 5.0 update 1.
One of my VMs runs ZFS with a SAT2-MV8 passed through; this works on ESXi 5.0, not on 5.1...
 
Yo,

I didn't know that snapshots can take up a huge amount of space.
I suddenly ran out of space when that seemed impossible, and it took me a while to figure out it was the snapshots! Deleted them and freed up about 33% on both pools!

gr33tz
 
Snapshots don't inherently take up a lot of space. If you take a snap and then change a crapload of data, then sure, the snapshot uses a lot of space, since the system needs to keep both sets of data. If you do this repeatedly, then yes, you can use up a lot of space. Before deleting them, it would have been nice to see the output of 'zfs list -t snapshot'.
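To check how much space snapshots are actually holding before deleting them wholesale, roughly (pool/dataset/snapshot names here are just placeholders):
Code:
# space unique to each snapshot, sorted by size
zfs list -t snapshot -o name,used,referenced -s used

# total space held by all snapshots of one dataset
zfs get usedbysnapshots tank/media

# destroy a snapshot you no longer need
zfs destroy tank/media@2012-08-01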
 
Snapshots don't inherently take up a lot of space. If you take a snap and then change a crapload of data, then sure, the snapshot uses a lot of space, since the system needs to keep both sets of data. If you do this repeatedly, then yes, you can use up a lot of space. Before deleting them, it would have been nice to see the output of 'zfs list -t snapshot'.

Thanks, didn't know!
I've been converting a whole bunch of BD ISO files into movie-only rips for XBMC, deleting the ISOs and replacing them with the movie-only rips, so I didn't understand why I was running out of space when the new files were smaller :p

TY
 
Heh, I just had to delete a nightly snapshot because it was 124GB. I had been migrating VMs back and forth to change the recordsize :)
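Worth noting for anyone else doing this: recordsize only applies to blocks written after the change, which is why the data has to be migrated/rewritten. A rough sketch (dataset name is made up):
Code:
# only newly written blocks use the new recordsize
zfs set recordsize=16K tank/vmstore
zfs get recordsize tank/vmstore
# existing VM files keep their old block size until they are copied or migrated off and back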
 
Quick question hopefully... how can I get the zpool screen to show the correct amount of available space on a pool? For example, I have a recording pool I use for TV that got really full during the Olympics. I've since cleared off almost 700 GB, but the pool still shows only 1% available in napp-it.

It doesn't seem to matter how much data I remove from the pool, the amount available remains at 1%. I even tried copying all the files off to another pool, with no change in the available listing.

The pool and everything appears to be working fine, it's just reporting incorrectly...
 
Quick question hopefully... how can I get the zpool screen to show the correct amount of available space on a pool? For example, I have a recording pool I use for TV that got really full during the Olympics. I've since cleared off almost 700 GB, but the pool still shows only 1% available in napp-it.

It doesn't seem to matter how much data I remove from the pool, the amount available remains at 1%. I even tried copying all the files off to another pool, with no change in the available listing.

The pool and everything appears to be working fine, it's just reporting incorrectly...

My guess is it's the same problem I had; try deleting some snapshots and see what happens!
 
Not currently running any snapshots on the data pools...

zfs list
Code:
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
SageTV                                1.76T  25.6G  1.76T  /SageTV
Storage                               5.02T  6.73T  89.8K  /Storage
Storage/Private                        177G  5.66T   177G  /Storage/Private
Storage/Storage                       3.16T  5.66T  3.16T  /Storage/Storage
Storage/VM_Backup                      626G  5.66T   626G  /Storage/VM_Backup
VM_Store                               278G  1.13T    35K  /VM_Store
VM_Store/VM_Store                      202G  1.07T   202G  /VM_Store/VM_Store
VM_Store/VM_Test                      7.69G  1.07T  7.69G  /VM_Store/VM_Test

zfs list -t snapshot
Code:
NAME                                                       USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/napp-it-0.8h_update_05.04@install              32.8M      -  2.74G  -
rpool/ROOT/napp-it-0.8h_update_05.04@2012-05-04-04:49:22   379M      -  3.17G  -
rpool/ROOT/napp-it-0.8h_update_05.04@2012-05-04-05:29:19  50.3M      -  2.92G  -
rpool/ROOT/napp-it-0.8h_update_05.04@2012-05-04-05:31:15  50.8M      -  2.92G  -
rpool/ROOT/napp-it-0.8h_update_05.04@2012-05-04-05:34:30  33.7M      -  3.18G  -
rpool/ROOT/openindiana-1@2012-05-04-05:37:30               211K      -  3.50G  -

One note: the SageTV pool is exported via iSCSI to a Windows 7 VM that reads from and writes to it. No NFS or CIFS shares. Looking at the VM, it currently shows 940 GB free...
 
Oh, I see what's going on. This is a zvol being shared out via iSCSI? It doesn't matter how much you delete in the guest OS filesystem; ZFS has no way to tell that. The only thing that would fix it would be if the hypervisor had VAAI enabled and your SAN supported it, so that when the guest frees blocks of storage, a SCSI command is sent to the SAN to free the space there. You may be screwed and need to copy off whatever you still have from SageTV and rebuild the zvol from scratch :(
 
Sort of... the iSCSI target isn't presented to VMware, but to the Windows 7 VM directly. I'm using the Microsoft iSCSI initiator to connect and mount it as a drive in that VM (just as if I were connecting to an iSCSI target from a physical system).

Probably a similar issue, I guess, but just wanted to point that out. The VM itself sees the iSCSI LUN as having plenty of space, but it probably isn't passing any of that information back to the storage server...

Doesn't seem to be affecting performance or anything, just the reporting.
 
The basic point still holds, though. Like I said, unless the guest filesystem and initiator support something like VAAI/SCSI UNMAP, there is no way to communicate freed blocks back to the storage array.
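If it helps, this is roughly how the discrepancy shows up on the ZFS side; the guest's free space never appears here until the blocks are actually released back to the pool (the dataset names below are just examples, depending on whether the COMSTAR LU is zvol-backed or file-backed):
Code:
# zvol-backed LU: nominal size vs. what the pool actually holds
zfs get volsize,used,referenced tank/sagetv_lun

# file-backed LU: the backing file shows up as REFER on its filesystem
zfs list -o name,used,referenced,avail tank/sagetv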
 
Hi Gea,

If I am monitoring your changelog and notice an upgrade to AFP 3, do I just rerun the installer to upgrade?
Code:
wget -O - www.napp-it.org/afp  | perl
or is there more to it?

Nothing is wrong at the moment, everything is working great, thank you; I'm just curious to know.

Thank you
Paul
 
Hi Gea,

If I am monitoring your changelog and notice an upgrade to AFP 3, do I just rerun the installer to upgrade?
Code:
wget -O - www.napp-it.org/afp  | perl
or is there more to it?

Nothing is wrong at the moment, everything is working great, thank you; I'm just curious to know.

Thank you
Paul

Yes, rerun the AFP installer and then update napp-it to the newest release.
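i.e., roughly this (assuming the standard napp-it online installer URLs; adjust if yours differ):
Code:
# re-run the AFP/netatalk installer
wget -O - www.napp-it.org/afp | perl

# then update napp-it itself to the newest release
wget -O - www.napp-it.org/nappit | perl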
 
Has anyone been able to connect a Y-cam IP camera to their SAN? It says it supports SMB/CIFS shares. It asks for the server IP/domain name and the share name. I have tried it a million different ways but can't get it to connect.
 
Quick questions for all-in-one experts.
I'm going to build one of these using new hardware for the case, motherboard, CPU etc., which I've already purchased, but I'll be using my existing disks, RAM etc. I have enough storage to shuffle stuff around while I build it, so I can use my disks pretty much however I want. All disks are SATA.

2.5-inch drives: 1x 80GB, 2x 160GB WD Black 7200 RPM, 6x 500GB WD Blue 5400 RPM.

3.5-inch drives: 13x 2TB WD EARS Green, 2x 2TB WD20EADS Green, 2x 2TB Hitachi 7200 RPM and 2x 2TB Samsung Spinpoint 5900 RPM.
Lots of USB sticks from 2GB to 32GB in size.

I want to use ESXi with, currently, 4 VMs plus a VM for whatever version of OI or Solaris I go with for storage of my media collection. The VMs are WHS 2011 for PC backups, Win7 as a usenet/torrent/general client, and Win Server 2012 & Win 8 for learning, with maybe more learning environments down the road. Client backups are of laptop drives, 320GB max.

Question 1: I can use 2x 4GB ECC + 2x 1GB ECC RAM, or 2x 4GB ECC + 2x 4GB non-ECC RAM. Should I use the 10GB of all-ECC RAM or the maximum mixed 16GB of ECC and non-ECC? Does quantity or quality win here? I believe they will work when mixed, but will operate as non-ECC. The motherboard has 4 RAM slots and supports ECC, as does the CPU.

Question 2: How should I best set up the disks/volumes to create storage for the VMs and storage for the media? I'm thinking of the 160GB and/or 500GB drives for the VMs, but I'm not sure how many or in what config. And with the 2TB drives I'd like some advice on vdev setup, since some are 4K-sector drives and some are not...

Question 3: Since we'll be using ESXi, which I believe has no interactive console on the server itself, does this render the IPMI/KVM functionality on the motherboard useless?

Any guidance appreciated...:confused:
 
Hi folks-
I have a number of questions as I am setting up this system:

1) I am configuring ACLs for SMB access from Windows machines, and I'm not clear on when/where idmap is useful. When I try to access a share it asks me for a username and password; is this the username/password on Solaris? If so, when and why would I use idmap?

2) When I created a folder it said I need to use case-insensitive for SMB access; however, I'd like to use the same folder for NFS, where I'd like case sensitivity. Can I set it to sensitive or mixed? What are the consequences?

3) If my server has a power outage, do I need to do a file system check of any sort on either the base OS install or my ZFS pools?

4) Should I set up a scrub check? I saw an option for it under napp-it, but wasn't sure at what frequency this should occur.

Thanks!
 
Hi folks-
I have a number of questions as I am setting up this system:

1) I am configuring ACLs for SMB access from Windows machines, and I'm not clear on when/where idmap is useful. When I try to access a share it asks me for a username and password; is this the username/password on Solaris? If so, when and why would I use idmap?

The problem:
Unix (filesystems, tools like chmod, and services) must use user IDs (UID) and group IDs (GID) for file permissions.
Windows uses Windows security IDs (SID) for the same purpose. SIDs are more sophisticated because they are
globally unique (the server ID is part of a SID, while a UID can be the same for different users on different machines) and the same
mechanism covers user groups, default services and servers.

To handle this, you must map between Windows SIDs and Unix UIDs in some way to allow SMB file access (= Windows-compatible file access).
You can do it like SAMBA with winbind and generate a UID from the SID and then always use only the UID. While this works for Linux/Unix, you lose the extra abilities of SIDs, and this is not the way a generic Windows 2008 server acts.

OR
(in an AD environment), you can extend the AD server to deliver and manage UIDs for Unix services.

The Solaris CIFS server is a new approach in the Unix/Linux world. It was developed to act like Windows, which means it really stores
Windows SIDs as extended ZFS attributes. It does not use or need Unix IDs at all. To stay compatible with the Unix ZFS filesystem and tools
like chmod, it creates a UID/GID and a SID for every local user and stores the mapping in the idmap database.

So for local Solaris users, the mapping is fixed and generated automatically (you do not need to do anything).

The problem is Active Directory (or LDAP),
where you authenticate against an external database. This authentication must supply a local UID/GID for the user (a default AD server does not).
To overcome this, Solaris CIFS generates a session UID on the fly whenever an SMB/CIFS login is requested and stores it temporarily in the idmap database.
This mapping is called ephemeral. The generated mappings are only valid for the current session (the real file UID is nobody).

So for CIFS and AD users, you do not need to do any manual mapping.


When do you need to care about the idmapping service?

- First: only in an LDAP or AD environment, AND
- Whenever you use other services like NFS or netatalk (they only use UIDs; they do not know about SIDs). In such a case, you need
a persistent mapping between SID and UID, either with the AD Unix additions or with manual id-mappings like AD-user: Paul = Unixuser: root (or any other).

(Solaris idmap currently lacks the option for an additional fixed mapping based on a direct SID->UID assignment like winbind.
This would be needed for compatibility between netatalk, CIFS and unmodified AD servers. If you are an Illumos developer, please add this to idmapd....)
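As a rough example of such a manual name-based mapping on the OI/Solaris console (the user names are placeholders):
Code:
# map an AD user to a local Unix user
idmap add 'winuser:Paul@mydomain.local' 'unixuser:paul'

# show the configured mapping rules and the currently cached (ephemeral) mappings
idmap list
idmap dump -n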


2) When I created a folder it said I need to use case-insensitive for SMB access; however, I'd like to use the same folder for NFS, where I'd like case sensitivity. Can I set it to sensitive or mixed? What are the consequences?

Set it to mixed and it works fine for both.
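One caveat: casesensitivity is a create-time-only ZFS property, so it has to be chosen when the filesystem is created, e.g. (dataset name is just an example):
Code:
zfs create -o casesensitivity=mixed tank/share
zfs get casesensitivity tank/share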

3) If my server has a power outage, do I need to do a file system check of any sort on either the base OS install or my ZFS pools?

ZFS is a copy-on-write filesystem. A write either completes correctly or not at all.
On a power outage during a write, that file may be corrupted, but the filesystem is always consistent.
So a consistency check is not needed and not available.

4) Should I set up a scrub check? I saw an option for it under napp-it, but wasn't sure at what frequency this should occur.
Thanks!

A scrub is needed to find, fix and report silent errors.
A monthly scrub on enterprise disks and a weekly scrub on desktop disks is recommended.
(Do it at night or at other low-usage times.)
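napp-it can schedule this as an auto-job; done by hand or via cron it looks roughly like this (pool name is an example):
Code:
# run a scrub now and watch its progress
zpool scrub tank
zpool status tank

# or schedule it from root's crontab, e.g. every Sunday at 02:00
0 2 * * 0 /usr/sbin/zpool scrub tank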
 

I have read that. Those instructions are for a previous camera; I have the new 1080p camera. I assume the issue is Solaris CIFS. It is strange, though, because when I look in the log files I can see errors if I put in a non-existent share name. So when I don't see a log entry, it should mean the camera is connecting. For some reason the camera doesn't want to report that it has successfully connected to the NAS.
 
Thanks Gea!

The problem:
When do you need to care about the idmapping service?

- First: only in an LDAP or AD environment, AND
- Whenever you use other services like NFS or netatalk (they only use UIDs; they do not know about SIDs). In such a case, you need
a persistent mapping between SID and UID, either with the AD Unix additions or with manual id-mappings like AD-user: Paul = Unixuser: root (or any other).

(Solaris idmap currently lacks the option for an additional fixed mapping based on a direct SID->UID assignment like winbind.
This would be needed for compatibility between netatalk, CIFS and unmodified AD servers. If you are an Illumos developer, please add this to idmapd....)

So historically, when I used NFS at school, we had to ensure that our local UID/GID was the same everywhere we accessed data from the NFS server, otherwise permissions wouldn't work. Is this also an issue with Solaris, or does idmap or some other mechanism help with that (for Linux access)?
 
Thanks Gea!



So historically, when I used NFS at school, we had to ensure that our local UID/GID was the same everywhere we accessed data from the NFS server, otherwise permissions wouldn't work. Is this also an issue with Solaris, or does idmap or some other mechanism help with that (for Linux access)?

This is a general problem with UID/GID. They are not globally unique for a given user and can be different on every server.
This can be fixed with LDAP/AD plus Unix support and a central management of UIDs.

But Solaris CIFS + Active Directory + ephemeral mappings is the easiest way to solve all of these problems,
by not using UID/GID at all.
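A quick way to check whether you are hitting the classic NFS UID mismatch is simply to compare the numeric IDs on the client and the server (user name is a placeholder):
Code:
# run on the Linux client and on the Solaris box; the numbers must match
id paul
# e.g. uid=1001(paul) gid=1001(staff) ...  -- if they differ, files written
# over NFSv3 will show up with the wrong owner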
 
Thanks Gea!



So historically, when I used NFS at school, we had to ensure that our local UID/GID was the same everywhere we accessed data from the NFS server, otherwise permissions wouldn't work. Is this also an issue with Solaris, or does idmap or some other mechanism help with that (for Linux access)?

This happens with NFS :D we are in the Unix world :D

There are many ways to work around that situation, as Gea mentioned, or as in your post (matching UIDs/GIDs manually across servers).

Pick whichever is convenient for you :).
Some servers in my "work" lab use LDAP :D; it is hard during the initial setup, but a piece of cake once it's running :)
Just use the tools/utilities provided by the OS or packages, and we are good to go as long as we know the limitations :D
 
Not really related to this thread, but I would like to share:

I just moved my "other" backup server from OI ZFS to zfsonlinux on CentOS 6.3,
knowing zfsonlinux has some caveats that I treat as limitations or "DIY".
Overall it is much more stable than the previous RC that I tried earlier this year.

zfsonlinux has built-in NFS sharing, which has been really stable in my environment. (I had an issue with OI ZFS NFS where copying files would get slow and hang randomly during heavy I/O, possibly hardware compatibility; I had to use CIFS as a workaround.)

I moved to zfsonlinux on CentOS 6.3 with the same LGA771 motherboard and SAT2-MV8 PCI-X cards.
I take back my words that "SAT2-MV8 is not good", since its performance under zfsonlinux on CentOS 6.3 met my expectations :D.

I do need to use Samba 3 for Windows sharing...
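For reference, the built-in NFS sharing on zfsonlinux is just the sharenfs property; SMB I still export through a normal Samba 3 smb.conf (the dataset name and share below are examples):
Code:
# let ZFS manage the NFS export (nfs-utils must be installed and running)
zfs set sharenfs=on tank/backup
zfs get sharenfs tank/backup
exportfs -v        # verify the export is active

# Samba 3: a plain share in /etc/samba/smb.conf pointing at the mountpoint
# [backup]
#    path = /tank/backup
#    read only = no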
 
This is a general problem with UID/GID. They are not globally unique for a given user and can be different on every server.
This can be fixed with LDAP/AD plus Unix support and a central management of UIDs.

But Solaris CIFS + Active Directory + ephemeral mappings is the easiest way to solve all of these problems,
by not using UID/GID at all.

But CIFS/AD don't work with Linux, right? I also have Linux clients that need access. And I don't use AD anyway.
 
But CIFS/AD don't work with Linux, right? I also have Linux clients that need access. And I don't use AD anyway.

Not sure if I understood your question, but from a network point of view, Solaris CIFS acts similar to a real Windows server
regarding NFSv4 ACLs, unique Windows SIDs (even when pools are moved to other machines), snaps via Previous Versions and such things.

If your CIFS server is a domain member (of either a Windows AD or a SAMBA AD domain) you can connect as a domain user from
any platform (Windows, Mac, Linux etc.).

If your CIFS server is not in a domain, you can connect as a local user only (from any client OS).
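For completeness, joining the Solaris/OI CIFS server to a domain (or switching to workgroup mode) looks roughly like this (domain and user names are placeholders):
Code:
# join an AD domain with the in-kernel CIFS server
smbadm join -u Administrator mydomain.local

# or stay in workgroup mode with local users only
smbadm join -w WORKGROUP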

Solaris CIFS vs SAMBA:
SAMBA can act as an AD server, CIFS cannot.
SAMBA's file-serving part is comparable to CIFS; it has some extra features, is mostly not as fast, does not support Previous Versions
and does not use SIDs but UIDs. This is more Unix-like and has better compatibility with other Unix services. It is more an approach to
give Windows users access to a Unix system than to imitate a Windows system, with Windows user-management ideas, on Unix.

Imho Microsoft SIDs are more flexible and feature-rich. If I had the choice, I would prefer SIDs on Unix as well.
 