OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Saying generically "a vdev has the same IOPS as one disk" is not completely correct when the vdev is made of mirrored disks - reads will for the most part be N times as fast (for an N-way mirror...).
If "I have mirrored disks in the vdev" - but a mirror is a vdev.

If I have 10 disks in 5 mirrors, then I have 5 vdevs, each vdev consisting of one mirror.

"One vdev gives the same IOPS as one disk" means that my 5 mirrors gives the same IOPS as 5 disks.

So, my claim is correct.
 
If "I have mirrored disks in the vdev" - but a mirror is a vdev.
If I have 10 disks in 5 mirrors, then I have 5 vdevs, each vdev consisting of one mirror.
"One vdev gives the same IOPS as one disk" means that my 5 mirrors gives the same IOPS as 5 disks.

So, my claim is correct.

Though we are delving a bit into semantics, I think the distinction was that for a mirrored vdev your write IOPS = the IOPS of one disk, but your read IOPS = (disk IOPS * number of disks in the mirror), since ZFS stripes the reads.
 
Though we are delving a bit into semantics, I think the distinction was that for a mirrored vdev your write IOPS = the IOPS of one disk, but your read IOPS = (disk IOPS * number of disks in the mirror), since ZFS stripes the reads.
I did not understand this. For reads, you mean that ZFS distributes the reads to different disks, so that all disks are actively fetching data? But for writes, ZFS acts as one single disk?
 
I did not understand this. For reads, you mean that ZFS distributes the reads to different disks, so that all disks are actively fetching data? But for writes, ZFS acts as one single disk?

When reading mirrored data, ZFS can use the drives independently (e.g. read half of the data from each drive), but it has to write all of the data to every drive.
 
I did not understand this. For reads, you mean that ZFS distributes the reads to different disks, so that all disks are actively fetching data? But for writes, ZFS acts as one single disk?

If you have a mirrored vdev, ZFS will round-robin the reads. This can approximately double bulk read throughput, and it helps for random reads too. When writing, by definition you have to write to all components of a mirror - hence one logical disk - but for reads you don't have that constraint.
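
If you want to see this yourself, watching per-disk activity during a large read should show the reads spread across both halves of each mirror while writes hit every disk. A minimal sketch, assuming the pool is named "tank":

Code:
# show per-vdev and per-disk I/O every 5 seconds while a workload runs
zpool iostat -v tank 5
# under each mirror vdev, read ops should be split roughly evenly across
# its disks, while write ops are duplicated to every disk in the mirror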
 
I assume that this is the place to report possible napp-it bugs. If not, please give me a corrected location. So here goes...

The napp-it/about server overview shows ftp-server : online when either ftp or tftp is enabled.
The Services/FTP page functions correctly (ignores tftp).

napp-it 0.500s or 0.600a or 0.600b
SunOS hmos 5.11 oi_151a i86pc i386 i86pc Solaris
server overview:
uptime : 3:52pm up 2:07, 1 user, load average: 0.06, 0.08, 0.10
afp-server : online netatalk version 2-2-0-p6
apache-server: disabled
iscsi comstar: online
ftp-server : online

$ svcs -a | grep ftp
disabled 13:46:06 svc:/network/ftp:default
online 13:46:07 svc:/network/tftp/udp6:default
 
napp-it 0.600b nightly Oct.14.2011 on Solaris Express 11

I have compiled the TLS binary as described in "help - readme first", twice, but am still getting the following error in TLS->status:

Code:
Status: 500 Content-type: text/html 

Software error:
Can't locate Net/SMTP/TLS.pm in @INC (@INC contains: /var/web-gui/data/napp-it/CGI /usr/perl5/5.8.4/lib/i86pc-solaris-64int /usr/perl5/5.8.4/lib /usr/perl5/site_perl/5.8.4/i86pc-solaris-64int /usr/perl5/site_perl/5.8.4 /usr/perl5/site_perl /usr/perl5/vendor_perl/5.8.4/i86pc-solaris-64int /usr/perl5/vendor_perl/5.8.4 /usr/perl5/vendor_perl . /var/web-gui/data/napp-it/zfsos/_lib /var/web-gui/data/napp-it/_my/zfsos/_lib /var/web-gui/data/napp-it/zfsos/15_jobs and data services /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS) at /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS/01_status/action.pl line 119.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS/01_status/action.pl line 119.

For help, please send mail to this site's webmaster, giving this error message and the time and date of the error. 

Status: 500 Content-type: text/html 

Software error:
[Mon Oct 17 00:28:14 2011] admin.pl: Can't locate Net/SMTP/TLS.pm in @INC (@INC contains: /var/web-gui/data/napp-it/CGI /usr/perl5/5.8.4/lib/i86pc-solaris-64int /usr/perl5/5.8.4/lib /usr/perl5/site_perl/5.8.4/i86pc-solaris-64int /usr/perl5/site_perl/5.8.4 /usr/perl5/site_perl /usr/perl5/vendor_perl/5.8.4/i86pc-solaris-64int /usr/perl5/vendor_perl/5.8.4 /usr/perl5/vendor_perl . /var/web-gui/data/napp-it/zfsos/_lib /var/web-gui/data/napp-it/_my/zfsos/_lib /var/web-gui/data/napp-it/zfsos/15_jobs and data services /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS) at /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS/01_status/action.pl line 119.
[Mon Oct 17 00:28:14 2011] admin.pl: BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS/01_status/action.pl line 119.
Compilation failed in require at admin.pl line 751.

For help, please send mail to this site's webmaster, giving this error message and the time and date of the error. 

[Mon Oct 17 00:28:14 2011] admin.pl: [Mon Oct 17 00:28:14 2011] admin.pl: Can't locate Net/SMTP/TLS.pm in @INC (@INC contains: /var/web-gui/data/napp-it/CGI /usr/perl5/5.8.4/lib/i86pc-solaris-64int /usr/perl5/5.8.4/lib /usr/perl5/site_perl/5.8.4/i86pc-solaris-64int /usr/perl5/site_perl/5.8.4 /usr/perl5/site_perl /usr/perl5/vendor_perl/5.8.4/i86pc-solaris-64int /usr/perl5/vendor_perl/5.8.4 /usr/perl5/vendor_perl . /var/web-gui/data/napp-it/zfsos/_lib /var/web-gui/data/napp-it/_my/zfsos/_lib /var/web-gui/data/napp-it/zfsos/15_jobs and data services /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS) at /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS/01_status/action.pl line 119. [Mon Oct 17 00:28:14 2011] admin.pl: [Mon Oct 17 00:28:14 2011] admin.pl: BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/zfsos/15_jobs and data services/04_TLS/01_status/action.pl line 119. [Mon Oct 17 00:28:14 2011] admin.pl: Compilation failed in require at admin.pl line 751.

P.S. Is it just me, or does the URL to napp-it's web-management front end contain the login user's hashed password? I seem to be able to log in to the management front end just by saving & bookmarking the URL! This works even after logging out - you don't have to enter the password if the saved bookmark is used!
 
If you have a mirrored vdev, ZFS will round-robin the reads. This can approximately double bulk read throughput, and it helps for random reads too. When writing, by definition you have to write to all components of a mirror - hence one logical disk - but for reads you don't have that constraint.
Ok, thanx. I was not sure on this. I know hardware raid spreads the reads among all disks, but was not sure regarding ZFS.
 
I assume that this is the place to report possible napp-it bugs. If not, please give me a corrected location. So here goes...

$ svcs -a | grep ftp
disabled 13:46:06 svc:/network/ftp:default
online 13:46:07 svc:/network/tftp/udp6:default


If it is of common interest, you can post it here; otherwise send a mail to
[email protected]

PS:
this is an easy fix for the next nightly.
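
Presumably the overview just needs to check the exact service FMRI rather than anything matching "ftp"; a sketch of the kind of check that would not pick up tftp:

Code:
# report the state of the ftp service only (prints "online" or "disabled")
svcs -H -o state svc:/network/ftp:default 2>/dev/null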
 
P.S. Is it just me, or does the URL to napp-it's web-management front end contain the login user's hashed password? I seem to be able to log in to the management front end just by saving & bookmarking the URL! This works even after logging out - you don't have to enter the password if the saved bookmark is used!

napp-it uses a time-based group session id that is valid for the current day
(unlimited if there is no password).

Reason:
With this it is possible to move between menus of different appliances
having the same password without the need to log in to each of them during the current day.
The next day you must re-login.

It is planned to optionally use single session ids on the way to a 1.0 release.
If this is an issue now, you should use napp-it on a secure management LAN
(this is always suggested), or you can use private mode, e.g. in Firefox.

About TLS:
I do not have a Google account and this is not a supported feature of napp-it,
but I collect hints from people who get it working - search this thread for them.
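
As one such hint: the "Can't locate Net/SMTP/TLS.pm in @INC" errors above mean the Perl module is simply not installed where napp-it's perl can find it. A sketch of checking and installing it from CPAN, assuming the box has network access and a working build toolchain:

Code:
# does the system perl see the module at all?
perl -MNet::SMTP::TLS -e 'print "ok\n"'
# if not, install it from CPAN
perl -MCPAN -e 'install "Net::SMTP::TLS"'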
 
I had a drive drop out of a pool yesterday while I was at work and I had no idea. I fixed it by rebooting the server when I got home (after trying zpool clear and online) but I would like to set up alerts. How do you guys do notifications for your servers?
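
One simple approach, independent of napp-it, is a cron job that checks pool health and mails only when something is wrong. A minimal sketch - the recipient address and the use of mailx are placeholders for illustration:

Code:
#!/bin/sh
# mail an alert if any pool is not healthy; run from cron, e.g. hourly
STATUS=`zpool status -x`
if [ "$STATUS" != "all pools are healthy" ]; then
  echo "$STATUS" | mailx -s "ZFS pool problem on `hostname`" [email protected]
fi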
 
You may run bonnie++ from the CLI to see error messages.
You may rerun the napp-it installer to check whether bonnie compiles correctly.

But bonnie values are quite similar to dd,
and I would update to OI 151a first.

Tested many things.

Re-tested on a clean new install ESXi 4.1 + OI 148 + Napp-It 0.500s : bonnie++ works in cli, not UI

Installed ESXi 5.0, which seems to be very happy on my dual-socket boards with 48GB of RAM, letting me create 40GB VMs without issues. If I had more than 64GB of RAM, I could confirm the "32GB per socket" they claim for memory. Overall, I like 5.0.

Tested ESXi 5.0 + OI 151a + Napp-It 0.500s : bonnie++ works in cli, not UI

So then I decided to try to copy bonnie++ from my working ESXi 4.1(U) + OI 148 + Napp-It 0.500s to the above machine. And this binary works in both the CLI and the UI.

So there is something broken in the compile, methinks. Below, the ".local" binary is the one built on 151a that is non-functional with the UI. I'll try to get some idea of what broke later.

-r-xr-xr-x 1 root bin 48692 2011-10-19 15:19 bonnie++
-r-xr-xr-x 1 root bin 48688 2011-10-17 20:50 bonnie++.local
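
If anyone wants to dig further, comparing how the two binaries were built and linked might narrow it down. A sketch, assuming both files sit in the current directory as in the listing above:

Code:
# compare library dependencies of the working and non-working binaries
ldd ./bonnie++ ./bonnie++.local
# and confirm both were built for the same architecture
file ./bonnie++ ./bonnie++.local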
 
Is it possible to use the Windows Indexing Service on a OpenSolaris ZFS server?

My server is OpenSolaris (w/ZFS) but my clients are Windows 7.

Windows Server 2008 supports Windows Indexing Service for very fast file searching but I'm wondering if I could setup something like this even though I'm running OpenSolaris/ZFS...
 
I seem to recall this has been asked here before and the answer was: not really.
 
Is it possible to use the Windows Indexing Service on a OpenSolaris ZFS server?

My server is OpenSolaris (w/ZFS) but my clients are Windows 7.

Windows Server 2008 supports Windows Indexing Service for very fast file searching but I'm wondering if I could setup something like this even though I'm running OpenSolaris/ZFS...

What about a Windows 2008 VM running on the ZFS server? Not really ideal, though. It would be nice to have the ZFS files indexed.
 
What about a Windows 2008 VM running on the ZFS server? Not really ideal, though. It would be nice to have the ZFS files indexed.

Performance of the 2008 VM is not the problem, but you need an NTFS filesystem.
You can provide one via iSCSI, but I doubt that the advantage of indexing is greater than
the performance drawback of iSCSI and virtualization.

I would first try a really fast SSD read cache to improve reads of metadata.
There is a good chance of a "fast enough without complexity" experience.
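
Adding such a read cache is a one-liner and it can be removed again later. A sketch, with pool and device names as examples:

Code:
# add an SSD as an L2ARC read cache
zpool add tank cache c4t2d0
# cache devices can be removed at any time
zpool remove tank c4t2d0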
 
I just installed OpenIndiana on my Hyper-V server. Does anyone know how to get it to recognize the hard disks I have set up on the virtual SCSI controller in Hyper-V?

I know that it does not have the drivers for it because I also needed to use the Legacy network adapter to get it connected to the internet.

I've researched and found the Linux Integration Components, but am unsure if they will work. Has anyone dealt with this?
 
I'm in the process of upgrading and consolidating my storage and I would like some advice.
I've read through the forum posts here on recommendations but I thought I would see if anyone can suggest any changes.

I'm planning on using OpenIndiana 151a unless someone can suggest a reason not to.

Hardware list:
1x SuperMicro X8DTH-6F Motherboard
1x Intel Xeon E5620 CPU
1x Supermicro SNK-P0040AP4 Cooler
1x Kingston 1333MHz DDR3 ECC 24GB (3x 8GB)
2x IBM M1015 SAS2 HBA
1x Xcase (Norco?) 4224 4U 24Bay enclosure
2x 20GB Intel 311 SSDs (mirrored boot drive)

Storage pools:
zfs root pool - 2x 20GB Intel 311 SSDs (mirrored)
main pool - 10x Samsung HD204UI 2TB HDD (8+2 RaidZ2)
work pool - 4x 120GB Corsair Force Series GT SSD (Raid10)
main pool L2ARC - 80GB Intel G1 SSD
main pool write cache - 2x 120GB Corsair Force Series GT SSD (mirrored) maybe

I decided on 10-drive RaidZ2 arrays because two 10-drive RaidZ2 vdevs and one Raid10 vdev will fit nicely into a second 24-drive 4U case, and if I ever do need to use expanders (past the space for 4 more HBAs and a 10GigE card on that motherboard), 20 HDDs & 4 SSDs is around the bandwidth available on one HBA card.

I know most people will say it's a waste to mirror the OS drive, but I see it as a small price to pay to not have to build another OS drive at 3am. I just know it'll go wrong at 3am when I'm 200 miles away.

I'm hoping that with caching on the main pool I won't need to, but I wanted to know if it's possible. If vdevs can be single drives, will ZFS let me create a separate pool of striped single-drive vdevs? I do understand the risks of running striped drives with no redundancy, but it'll be used to store video streams recorded by a MythTV box, and losing a week or two of TV shows won't bother me.

Will I need the 2nd CPU or will one be enough for now? I'm not using ESXi or any VMs right now, but it's something I might consider in the future. The CPU has AES support for when/if encryption becomes available.
 
The Intel 311s are probably overkill for the boot drive; you don't really need them. I have a cheap Kingston SSD, but a little 2.5" HDD should be fine.

Do you have them lying around?

edit: definitely mirror the OS drive. After rebuilding mine once (from a snapshot, wasn't too hard) I now mirror it, and it's super easy to replace a dying/dead drive.
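
Mirroring an existing root pool after the fact is straightforward. A rough sketch for OI/Solaris x86, with device names as examples; the second disk needs the same SMI label/slice layout as the first:

Code:
# attach the second disk to the existing root pool device (creates a mirror)
zpool attach rpool c3t0d0s0 c3t1d0s0
# wait for the resilver to finish
zpool status rpool
# make the second disk bootable as well
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0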
 
I've got tons of old drives that I wouldn't trust to last long and some brand new 2TB drives.
 
I'm in the process of upgrading and consolidating my storage and I would like some advice.

Storage pools:
zfs root pool - 2x 20GB Intel 311 SSDs (mirrored)
main pool - 10x Samsung HD204UI 2TB HDD (8+2 RaidZ2)
work pool - 4x 120GB Corsair Force Series GT SSD (Raid10)
main pool L2ARC - 80GB Intel G1 SSD
main pool write cache - 2x 120GB Corsair Force Series GT SSD (mirrored) maybe

I decided on 10-drive RaidZ2 arrays because two 10-drive RaidZ2 vdevs and one Raid10 vdev will fit nicely into a second 24-drive 4U case, and if I ever do need to use expanders (past the space for 4 more HBAs and a 10GigE card on that motherboard), 20 HDDs & 4 SSDs is around the bandwidth available on one HBA card.

I know most people will say it's a waste to mirror the OS drive, but I see it as a small price to pay to not have to build another OS drive at 3am. I just know it'll go wrong at 3am when I'm 200 miles away.

I'm hoping that with caching on the main pool I won't need to, but I wanted to know if it's possible. If vdevs can be single drives, will ZFS let me create a separate pool of striped single-drive vdevs? I do understand the risks of running striped drives with no redundancy, but it'll be used to store video streams recorded by a MythTV box, and losing a week or two of TV shows won't bother me.

Will I need the 2nd CPU or will one be enough for now? I'm not using ESXi or any VMs right now, but it's something I might consider in the future. The CPU has AES support for when/if encryption becomes available.

what you may consider:

If you have not yet bought the disks, avoid 4k models like the Samsung F4;
use 512B Hitachis, for example. If you already have them, just use them.

Use 3TB models: you need fewer of them - cheaper, less power, fewer failures
(possible with your LSI 2008 controllers).

Do not build a pool from one ZFS Z2 vdev of 8 disks with two hot spares;
build a ZFS Z3 vdev with one hot spare instead. Same capacity - think of it as "one spare is already online".

You can add a read cache SSD to a pool at any time.
It helps a lot with small random reads from slow pools.

Intel 311 SSDs for boot are not necessary -
you may use cheaper disks - but they are very good, also as a write cache.

A write cache / log device is only used for synchronous writes, for
example if you have NFS storage for ESXi or for databases.
Without these, you do not need a write cache.

You can build vdevs from single disks, but why would one do that?
Disks are so cheap today; use a mirror of up-to-3TB disks instead.
About the same performance on ZFS as a RAID-0, but with redundancy.

One CPU should be enough. RAM for caching is more important
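
To illustrate the striping/mirroring and log-device points above, here is a rough sketch of the corresponding commands; pool and disk names are just examples:

Code:
# striped pool of single-disk vdevs - no redundancy, one failed disk loses the pool
zpool create scratch c5t0d0 c5t1d0 c5t2d0
# alternatively, a pool of mirrors - similar streaming performance, survives a failure per mirror
zpool create scratch mirror c5t0d0 c5t1d0 mirror c5t2d0 c5t3d0
# a log device only helps synchronous writes (e.g. NFS for ESXi, databases)
zpool add tank log mirror c5t4d0 c5t5d0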
 
what you may consider:

If you have not yet bought the disks, avoid 4k models like the Samsung F4;
use 512B Hitachis, for example. If you already have them, just use them.

Use 3TB models: you need fewer of them - cheaper, less power, fewer failures
(possible with your LSI 2008 controllers).

Unfortunately I already have the drives.

Do not build a pool from one ZFS Z2 vdev of 8 disks with two hot spares;
build a ZFS Z3 vdev with one hot spare instead. Same capacity - think of it as "one spare is already online".

I meant a vdev of 10 disks: 8 for data plus two parity drives, with no hot spares.

Intel 311 SSDs for boot are not necessary -
you may use cheaper disks - but they are very good, also as a write cache.

I know the Intel SSDs are overkill, but so is a 120GB HDD; besides, they use less power and won't be affected by vibrations if I end up using velcro to attach them to the sides of the case (as somebody suggested).

A write cache / log device is only used for synchronous writes, for
example if you have NFS storage for ESXi or for databases.
Without these, you do not need a write cache.

You can build vdevs from single disks, but why would one do that?
Disks are so cheap today; use a mirror of up-to-3TB disks instead.
About the same performance on ZFS as a RAID-0, but with redundancy.

The MythTV data will just be reads/writes of multi-gig media files; it really doesn't need redundancy. I wanted to figure out which is the better option: 3x 2TB striped, or using the main pool with a write cache.
 
I have just set up an all-in-one system running Solaris 11 Express (snv_151a) on ESXi 4.1 and I'm not getting the performance I expected. Specifically the problem is with encrypted folders, where it does not appear that AES-NI acceleration is working.

The configuration is as follows:
Xeon E3-1230
Supermicro X9SCA-F
8GB ECC RAM
4x WD20EARX + 1x WD20EARS in RAID-Z1 (on the PCH)
Crucial M4 64GB L2ARC (also on PCH)
Sil3132 controller (for the solaris disk, a ST980811AS)

AES-NI is enabled in the BIOS and works fine in a Windows 2008 R2 VM using TrueCrypt, giving 2.5 GB/s encryption speed with AES.

However, in Solaris performance is much worse. Here are some examples using dd:

First the unencrypted FS:
write 10.24 GB via dd, please wait...
time dd if=/dev/zero of=/tank/dd.tst bs=1024000 count=10000

10000+0 records in
10000+0 records out

real 1:19.7
user 0.0
sys 2.6

10.24 GB in 79.7s = 128.48 MB/s Write

read 10.24 GB via dd, please wait...
time dd if=/tank/dd.tst of=/dev/null bs=1024000

10000+0 records in
10000+0 records out

real 31.0
user 0.0
sys 2.1

10.24 GB in 31s = 330.32 MB/s Read

And the encrypted (aes-256-ccm):
time dd if=/dev/zero of=/tank/crypt/dd.tst bs=1024000 count=10000
10000+0 records in
10000+0 records out

real 4m18.391s
user 0m0.013s
sys 0m2.868s

(39.7 MB/s write)

time dd if=/tank/crypt/dd.tst of=/dev/null bs=1024000
10000+0 records in
10000+0 records out

real 2m6.504s
user 0m0.019s
sys 0m2.299s

(80.6 MB/s read)

CPU usage is quickly alternating between high and low when writing to the encrypted FS, but constantly high (100%) when reading from it, which makes me suspect that AES-NI is not being utilized.

Is there any way to confirm that AES-NI is not working? More importantly, any ideas on how to accelerate encryption in Solaris?

Update: I did manage to check if AES-NI is working now, using the information provided here: http://blogs.oracle.com/DanX/entry/intel_aes_ni_optimization_on
Running that small program I get: "AES-NI instructions are present.", so it seems like it should work.

Now the remaining question is why the encrypted filesystem is so slow. Any help figuring this out would be greatly appreciated.
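
Besides that test program, a couple of stock Solaris commands can hint at whether the hardware AES support is actually visible to the crypto framework. A sketch; exact output varies by release:

Code:
# does the OS see the AES instruction set extension?
isainfo -v | grep aes
# which AES mechanisms do the kernel crypto framework providers offer?
cryptoadm list -m | grep -i aes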
 
If I have a storage pool made up of multiple RaidZ vdevs and I lose an entire vdev for a few minutes (e.g. due to a power loss), what happens, and does having an SSD write cache help?
 
I am also very interested in any response to tocket's question. Will buy a Xeon 1230 next month to unleash its AES-NI on encrypted ZFS pools...
 
Has anyone seen the error that says

mDNSResponder: Error: getOptRdata - unknown opt 4

This is on openindiana 151 with nappit and afp installed (running in virtualbox)

See this error every couple of minutes.

Paul
 
I have napp-it 0.500s on top of Solaris 11 Express. I had previously tested the email alerts and they worked exactly as I expected. However since then my SMTP server changed. At first I didn't see where I could change it, so I tried deleting the job and recreating it. Now instead of only alerting me when there is an error, I get an email every day saying there are no errors. Did I do something wrong and how can I get it back to just alerting if there is an error?
 
Napp-it has been great for getting my ZFS array up and running. However, I've stumbled into some Unix vs Windows file permission issues.

I'm using samba to share my ZFS folder with Windows PCs. In the napp-it webconsole > ZFS FOLDER, SMB-SHARE-all is set to full_set and Unix permissions are set to 777. When I add a file to the root folder, Unix permissions are indeed 777 and everybody can read/write the file. However, if I add a file to a subfolder, Unix permissions are set to 700, so only the user that created the file can read it or write to it. Does anyone know how to fix this? :)
 
I have just set up an all-in-one system running Solaris 11 Express (snv_151a) on ESXi 4.1 and I'm not getting the performance I expected. Specifically the problem is with encrypted folders, where it does not appear that AES-NI acceleration is working.

The configuration is as follows:
Xeon E3-1230
Supermicro X9SCA-F
8GB ECC RAM
4x WD20EARX + 1x WD20EARS in RAID-Z1 (on the PCH)
Crucial M4 64GB L2ARC (also on PCH)
Sil3132 controller (for the solaris disk, a ST980811AS)

AES-NI is enabled in the BIOS and works fine in a Windows 2008 R2 VM using TrueCrypt, giving 2.5 GB/s encryption speed with AES.

However, in Solaris performance is much worse. Here are some examples using dd:

First the unencrypted FS:


And the encrypted (aes-256-ccm):


CPU usage is quickly alternating between high and low when writing to the encrypted FS, but constantly high (100%) when reading from it, which makes me suspect that AES-NI is not being utilized.

Is there any way to confirm that AES-NI is not working? More importantly, any ideas on how to accelerate encryption in Solaris?

Update: I did manage to check if AES-NI is working now, using the information provided here: http://blogs.oracle.com/DanX/entry/intel_aes_ni_optimization_on
Running that small program I get: "AES-NI instructions are present.", so it seems like it should work.

Now the remaining question is why the encrypted filesystem is so slow. Any help figuring this out would be greatly appreciated.

My first test was with your settings and I was getting 4.6GB/s - forgot to get it outside of RAM :p
Code:
:/tank/stuff/test$ time dd if=/dev/zero of=/tank/stuff/test/dd.tst bs=1024000 count=30000
30000+0 records in
30000+0 records out
30720000000 bytes (31 GB) copied, 183.325 s, 168 MB/s

real    3m3.415s
user    0m0.046s
sys     0m11.328s
:/tank/stuff/test$ time dd if=/tank/stuff/test/dd.tst of=/dev/null bs=1024000
30000+0 records in
30000+0 records out
30720000000 bytes (31 GB) copied, 149.778 s, 205 MB/s

real    2m29.781s
user    0m0.055s
sys     0m12.519s

This is with a Xeon X3470, which does not have AES-NI, on a 6 disk RAIDZ1 with no SSD attached to it.

I am also running on Solaris 11 Express; I wonder if they just haven't optimized it for AES-NI yet. Another snag is that mine is dedicated, with no virtualization.

ETA: On the read, I also found out that it trips the onboard overheat alarm for my CPU. Guess I need to plug some fans back in.
 
I have napp-it 0.500s on top of Solaris 11 Express. I had previously tested the email alerts and they worked exactly as I expected. However since then my SMTP server changed. At first I didn't see where I could change it, so I tried deleting the job and recreating it. Now instead of only alerting me when there is an error, I get an email every day saying there are no errors. Did I do something wrong and how can I get it back to just alerting if there is an error?

Are you sure you set an alert job and not a status job?
 
Napp-it has been great for getting my ZFS array up and running. However, I've stumbled into some Unix vs Windows file permission issues.

I'm using samba to share my ZFS folder with Windows PCs. In the napp-it webconsole > ZFS FOLDER, SMB-SHARE-all is set to full_set and Unix permissions are set to 777. When I add a file to the root folder, Unix permissions are indeed 777 and everybody can read/write the file. However, if I add a file to a subfolder, Unix permissions are set to 700, so only the user that created the file can read it or write to it. Does anyone know how to fix this? :)

You do not use Samba with napp-it; it is the kernel-based Solaris SMB server instead.

About your problem:

1.
You need an ACL permission like everyone@:modify with inheritance=on
on the folder to allow anyone to access new files (see the sketch below).

-> Start with Unix permission 777
and let the server reduce this according to the ACL in use.

2.
The kernel-based SMB server is like Windows: ACL only.
Forget any knowledge about Unix permissions, think ACL only
- use it just like on a real Windows box.
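
A sketch of what point 1 can look like from the CLI; the share path is just an example:

Code:
# grant everyone modify rights, inherited by new files and directories
/usr/bin/chmod A=everyone@:modify_set:file_inherit/dir_inherit:allow /tank/share
# verify the resulting ACL
/usr/bin/ls -Vd /tank/share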
 
Are you sure you set an alert job and not a status job?

Yes, I have a separate job that is set up for my status to run once a week. The alert job shows as this:

email alert send to <MyEmailHere> send by <MySMTPServerHere> every every every every 1317566205 active 23.oct 17:26 - - run now delete

The message I get is as follows:

Subject: *** napp-it ERROR-Alert *** on <MyZFSServerHere>
Body: Alert/ Error on <MyZFSServerHere> from 23.10.2011 10:57

-disk errors: none

and at the bottom it has the results from a zpool list command.
 
Regarding security, I'm not sure how to proceed with binding napp-it to an IP.
The server is as follows:

Code:
esxi5 ┬ vSwitch0 (nic1) ┬ port group/192.168.1.254 (esxi management)
      │                 └ port group/192.168.1.201 (OI vm - napp-it management)
      └ vSwitch1 (nic2) ─ port group/192.168.1.200 (OI vm)

I've disabled NWAM and enabled physical networking. Both adapters in OI have been configured with static IPs (as above).

How do I bind mini-httpd to 192.168.1.201?
I want the napp-it management interface to be available from only one other PC on the network.
Where do I implement the firewall (on OI, ESXi, or the router)? Which ports need to be open - i.e. just port 81?
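
Two pieces that can help here, both sketches with example addresses (192.168.1.10 stands in for the admin PC): mini_httpd itself accepts a -h <address> option to bind to a single IP (it would have to be added wherever napp-it's start script launches mini_httpd), and OI's bundled IP Filter can restrict who may reach port 81:

Code:
# /etc/ipf/ipf.conf - allow only the admin PC to reach the napp-it port
pass in quick proto tcp from 192.168.1.10/32 to 192.168.1.201/32 port = 81 keep state
block in quick proto tcp from any to 192.168.1.201/32 port = 81

# enable and (re)load the rules
svcadm enable network/ipfilter
ipf -Fa -f /etc/ipf/ipf.conf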
 
I am also very interested in any response to tocket's question. Will buy a Xeon 1230 next month to unleash its AES-NI on encrypted ZFS pools...

+1
The only reason I am on SolEx11 with napp-it is encryption and I'd love to shift that load away
from the standard cores to make room for more VMs.
 
+1
The only reason I am on SolEx11 with napp-it is encryption and I'd love to shift that load away
from the standard cores to make room for more VMs.

Not sure what level of protection you're after here, but running encryption in a VM is not secure - from what I understand and have read...

"Actions outside the GUestOS control:
memory swapping by the hypervisor, uses the .vswp file on the VMFS (you can prevent this by setting resource constraints)
snapshot taking a memory image. uses a file on the VMFS (you have to be sure not to check the dump memory checkbox)
Suspending a VM creates the .vmss memory image file on the VMFS (never suspend a VM)
using tools such as vm-support will also drop a VM memory image (only by an Administrator)
several other items, like sending a signal to the VM to drop its memory image (only by an Administrator)"

This might be fine for your use case, but be aware of the gaps and plan accordingly.

IMHO, best to run encryption on dedicated hardware.
 
Not sure what level of protection you're after here, but running encryption in a VM is not secure - from what I understand and have read...

[...]

This might be fine for your use case, but be aware of the gaps and plan accordingly.

IMHO, best to run encryption on dedicated hardware.

I won't deny that you have a point here.
But the OP was regarding the fact that SolEx11 with an encrypted pool would not acknowledge the hardware AES-NI capabilities of the platform, although it should
do that, according to documentation.

Before I invest in a new infrastructure, I'd like to double check that the system will deliver
as planned.
 
Quick question: I recently acquired a C300 for stupid cheap and was wondering how to add it as the ZIL or L2ARC for my ZFS server - if that's even possible since the pool has been running.
 
I would think it would 'just work'. You should have some idea whether you need a ZIL or L2ARC first, no? Either do 'zpool add POOL log DISKNAME' or 'zpool add POOL cache DISKNAME'. Be aware that if the pool is less than version 19, you can't remove a log device once added.
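
Roughly, assuming the pool is called "tank" and the C300 shows up as c3t0d0 (both just examples):

Code:
# check the pool version first - log devices can only be removed on v19 or later
zpool get version tank
# add the SSD as a dedicated log (ZIL) device ...
zpool add tank log c3t0d0
# ... or as an L2ARC read cache (cache devices are removable at any time)
zpool add tank cache c3t0d0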
 