OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Is everybody here using the recommended number of drives for RAIDZ? I'm planning on using a 12-drive RAIDZ2 and that's not recommended; some go as far as saying that performance will be horrible (but I haven't seen numbers). Usage will be a storage server with minimal I/O demands.

I would mostly ignore these "golden numbers" of 4/8/16... data disks per vdev.
A 12-disk Raid-Z2 is usually faster than an 11-disk Raid-Z2
- sometimes it is only as fast as a Raid-Z2 with fewer disks - but who cares.

ZFS is built to cope with unbalanced pools and vdevs.
Every pool expansion leads to the same situation.
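For the original question, a 12-disk Raid-Z2 vdev is created like any other; a minimal sketch with placeholder disk names:

Code:
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
  c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0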
 
Sorry if this isn't the place to ask, but I am in the process of putting together parts for my new file server...

My current server is running Ubuntu w/ ZFS + SMB, which is fine, but I've run out of room in my case, and the MB, CPU and RAM are limited (old S939 platform).

So I have recently purchased a P67X-UD3-B3, but am now wondering if I jumped the gun and will have compatibility issues with a Solaris-derived OS? It is obviously not listed on any HCL.

The only thing I want this box to do is run ZFS, Samba/NFS, and iSCSI. I will be running Intel NICs and compatible controller cards. So I'm wondering if I am safe with my mobo choice?

Thanks!
 
Hi Gea,

Thanks a lot for napp-it! I have been running it for 1½ years now and it's been very useful. I sent you a small donation to show my appreciation ;)

I'm having one issue now with my encrypted ZFS folders. Whenever Solaris is restarted (usually due to the mysterious ca 9:45 am crash) the encrypted shares are locked as they should be. However, after I unlock them I have to manually reactivate their respective CIFS and NFS shares. Before upgrading to napp-it 0.9 I only had to re-enable the CIFS share. Because they are listed as active shares in napp-it I have to first disable the CIFS share (sharesmb off) in napp-it and then turn it on again. Pre-upgrade I only had to enable it.

NFS is even more complicated. It can only be turned on by manually running 'zfs set sharenfs=on'.
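In case it helps to see it spelled out, this is roughly what I end up running by hand after each reboot (dataset name is just an example):

Code:
zfs key -l tank/secure            # unlock the encrypted filesystem (Solaris 11 syntax)
zfs mount tank/secure
zfs set sharesmb=off tank/secure  # the off/on cycle needed since napp-it 0.9
zfs set sharesmb=on tank/secure
zfs set sharenfs=on tank/secure   # NFS has to be re-enabled manually as well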
 
Sorry if this isn't the place to ask, but I am in the process of putting together parts for my new file server...

My current server is running Ubuntu w/ ZFS + SMB, which is fine, but I've run out of room in my case, and the MB, CPU and RAM are limited (old S939 platform).

So I have recently purchased a P67X-UD3-B3, but am now wondering if I jumped the gun and will have compatibility issues with a Solaris-derived OS? It is obviously not listed on any HCL.

The only thing I want this box to do is run ZFS, Samba/NFS, and iSCSI. I will be running Intel NICs and compatible controller cards. So I'm wondering if I am safe with my mobo choice?

Thanks!
It should work fine. I'm not sure about the onboard Realtek NIC, but if you're putting in an Intel NIC that won't matter anyway.

The downside is that you can't use ECC RAM on that MB. You'll most likely be fine without it, but ECC is great for peace of mind.
 
Hi guys:

Major issue regarding OpenIndiana today. It seemed one of the SATA connectors on my disks came loose through vibration yesterday, and on rectifying that one, another came loose. I now have this:

[screenshot: zpool status showing the pool faulted]


Can someone advise me on the best next-steps?

Edit: zpool clear tank gives this:

root@oizfs01:~# zpool clear tank
cannot clear errors for tank: I/O error

and likewise when targeting the actual disk:

root@oizfs01:~# zpool clear tank c4t3d0
cannot clear errors for c4t3d0: I/O error

Edit:

zpool export tank
zpool import tank

It's resilvering the drive that was disconnected last week, it turns out.

wicked sick! love ZFS!
 
It should work fine. I'm not sure about the onboard Realtek NIC, but if you're putting in an Intel NIC that won't matter anyway.

The downside is that you can't use ECC RAM on that MB. You'll most likely be fine without it, but ECC is great for peace of mind.
Yes, you're right, that is the other downside of that "desktop" MB I suppose... so far my current setup without ECC RAM has been solid as a rock, so I'm hoping for the same with the new/improved setup that I'm trying to keep within my budget.

Thanks for your reply!

Cheers
 
I've got a second NIC I'd like to utilize for iSCSI. Do I need to set up link aggregation, or just set up a COMSTAR iSCSI target portal group with both adapters?

I set up link aggregation and seem to approach 200MB/s on some sequential read tests from IOmeter, so I gather that's the better approach. "dladm" will be a useful search string for the next guy looking to set this up. You'll also want to disable NWAM and set your aggr policy to L2,L3, it appears...
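For the next guy, the whole thing boils down to something like this (link names and the address are placeholders, adjust for your hardware):

Code:
svcadm disable svc:/network/physical:nwam      # NWAM and aggregations don't mix
svcadm enable  svc:/network/physical:default
dladm create-aggr -P L2,L3 -l e1000g0 -l e1000g1 aggr0
ipadm create-ip aggr0
ipadm create-addr -T static -a 192.168.1.10/24 aggr0/v4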

But I'm getting wildly different random read results when testing against different volumes in the same pool. Tests are identical and run from the same VM. The pool is 6x 2TB SATA 7.2k disks in RAID-Z2. 3 volume LUs are presented via iSCSI on a single target. The system is a Dell PE 2900 running two quad-core Xeon E5410s with 48GB of RAM. Bonnie results on the pool are at the bottom for reference. The trouble is tests 2 and 4 on Vol 1; those results are consistent. Each volume is hosting a single live VM in addition to the test volume, but activity is very light.

Anyone have any idea what might be going on? I actually have two of these boxes and noticed similar performance in testing on the other box when testing one of 6 volumes set up as ZFS file systems. I blasted those away and published a single LUN instead, but now I realize it may have been volume-specific... somehow.

VOLUME 1
Test name Latency Avg iops Avg MBps cpu load
Max Throughput-100%Read 17.94 3282 102 6%
RealLife-60%Rand-65%Read 145.74 365 2 11% :confused:
Max Throughput-50%Read 10.17 5639 176 8%
Random-8k-70%Read 153.67 333 2 10% :eek:

VOLUME 2
Test name Latency Avg iops Avg MBps cpu load
Max Throughput-100%Read 16.93 3520 110 10%
RealLife-60%Rand-65%Read 4.04 14057 109 48%:cool:
Max Throughput-50%Read 9.69 5891 184 10%:D
Random-8k-70%Read 4.12 13840 108 9%:cool:

VOLUME 3
Test name Latency Avg iops Avg MBps cpu load
Max Throughput-100%Read 16.98 3511 109 8%
RealLife-60%Rand-65%Read 4.12 11207 87 14%
Max Throughput-50%Read 9.62 5987 187 9%
Random-8k-70%Read 4.69 8886 69 12%


Bonnie++ results:
Date(y.m.d) 2013.02.25
File 96G
Seq-Wr-Chr 87 MB/s
%CPU 90
Seq-Write 270 MB/s
%CPU 58
Seq-Rewr 143 MB/s
%CPU 37
Seq-Rd-Chr 76 MB/s
%CPU 97
Seq-Read 474 MB/s
%CPU 49
Rnd Seeks 863.9/s
%CPU 2
Files 16
Seq-Create 21737/s
Rnd-Create 19914/s
 
Hi Gea,

Thanks a lot for napp-it! I have been running it for 1½ years now and it's been very useful. I sent you a small donation to show my appreciation ;)

I'm having one issue now with my encrypted ZFS folders. Whenever Solaris is restarted (usually due to the mysterious ca 9:45 am crash) the encrypted shares are locked as they should be. However, after I unlock them I have to manually reactivate their respective CIFS and NFS shares. Before upgrading to napp-it 0.9 I only had to re-enable the CIFS share. Because they are listed as active shares in napp-it I have to first disable the CIFS share (sharesmb off) in napp-it and then turn it on again. Pre-upgrade I only had to enable it.

NFS is even more complicated. It can only be turned on by manually running 'zfs set sharenfs=on'.

First, thank you for your donation.
Second: Are you using Solaris 11.1?

(Oracle has changed share management from Solaris 11.0 to 11.1 -
napp-it 0.9 supports only Solaris 11.1 )
 
Oops... that may well be the issue. I haven't upgraded to 11.1 yet. I will try that.

Another question: I'm currently running a pool consisting of a RAID-Z of 5x 2TB disks. However, since I managed to fill that one up, I ordered 5x 4TB disks that I want to add to the pool. Can I simply add another RAID-Z vdev to the existing pool despite the disks being different in size?
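From the docs I gather it would be something like this (just a sketch with placeholder disk names), but I want to be sure mixing sizes is OK:

Code:
zpool add tank raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0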
 
So I've upgraded to the latest Solaris 11.1 with napp-it v. 0.9a5 nightly Jan.22.2013 and I'm trying to run a benchmark, but for some reason the bonnie one will not run. I have disabled the monitoring at the top as required, but when I click start the screen just refreshes and nothing happens?

Any ideas?
 
Bonnie++ isn't included in Solaris anymore, so the benchmarks can't run it.

You can download, compile and install it yourself. After you install it you can use it from inside napp-it. See the 2nd comment at the end of this. It's a simple install and I did it with no problem - now bonnie++ works just fine inside of napp-it again.

SUNWfrk says:
March 27, 2011 at 7:49 AM

Hi Anthony, you can grab the source tarballs at http://www.coker.com.au/bonnie++/ (use bonnie++-1.03d.tgz because the latest version with direct I/O didn't work for me) and compile bonnie++. I also had to use gmake instead of make.. good luck!
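For reference, the build boils down to roughly this (assuming wget, gcc and gmake are already installed; the extracted directory name may differ):

Code:
wget http://www.coker.com.au/bonnie++/bonnie++-1.03d.tgz
tar xzf bonnie++-1.03d.tgz
cd bonnie++-1.03d
./configure
gmake                 # gmake instead of make, as noted above
gmake install         # or copy the bonnie++ binary somewhere napp-it can find it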
 
Yes, you're right, that is the other downside of that "desktop" MB I suppose... so far my current setup without ECC RAM has been solid as a rock, so I'm hoping for the same with the new/improved setup that I'm trying to keep within my budget.

Thanks for your reply!

Cheers

If you need a low-budget desktop NAS/file server,
you can pick up an AMD build. I know some people hate AMD..

Pick an AM3 or AM3+ low-power processor and pair it with an AMD motherboard (most Asus AMD motherboards officially support ECC, and some other manufacturers support ECC unofficially in the BIOS).

I have been running an AMD low-budget server (all-in-one: NAS, daemons/services, and a devel sandbox).
I am using ZoL on CentOS 6.3 now; I moved from OI, which had some degrading network performance, and I needed a devel sandbox.

On the other side, an FX 95W processor with a 990FX motherboard works for ESXi; nearly all 990 motherboards on the market support hardware passthrough.

If you need a low-budget build, AMD is still feasible, assuming you do not hate AMD products ehehe.

I am open-minded about Intel or AMD. Two competitors are better than one monopoly :)
 
First, thank you for your donation.
Second: Are you using Solaris 11.1?

(Oracle has changed share management from Solaris 11.0 to 11.1 -
napp-it 0.9 supports only Solaris 11.1 )

Now I've upgraded to 11.1, but unfortunately I still have the same problem.
 
Now I've upgraded to 11.1, but unfortunately I still have the same problem.

I will put it on my todo list
(sadly, Solaris is more and more incompatible from an Illumos/OmniOS/OpenIndiana point of view - my main platforms).
Every new version increases the gap.
 
So I've finally got my new file server built, but I'm having some issues with performance compared to my almost identical setup (HDD-wise) on my old FS running Ubuntu.

I'm missing about 10-15MB/s on reads over the network on the new FS, and I think it's because the physical sector size is not being set to 4K.

I tried editing sd.conf, rebooting, etc., but napp-it will just not set ashift=12 no matter what.

Am I missing something? or is something just broken?

 
I will put it on my todo list
(sadly, Solaris is more and more incompatible from an Illumos/OmniOS/OpenIndiana point of view - my main platforms).
Every new version increases the gap.

I'd love to switch from Solaris, but it has something I want that the alternatives don't: encryption.
 
I'd love to switch from Solaris, but it has something I want that the alternatives don't: encryption.

I'm quite pleased with my multi-threaded and actually AES-NI-aware full-disk encryption on FreeBSD. *shrug*

Does AES-NI actually work now on Solaris?
 
I'm quite pleased with my multi-threaded and actually AES-NI-aware full-disk encryption on FreeBSD. *shrug*

Does AES-NI actually work now on Solaris?

That encryption is not done inside of ZFS, but rather at the disk level, right? So you have to enter the password for each disk? That's a lot of work if you have many disks and secure passwords.

Solaris supposedly does use AES-NI, but the implementation seems to be rather flawed as it doesn't really provide any acceleration.
 
That encryption is not done inside of ZFS, but rather at the disk level, right? So you have to enter the password for each disk? That's a lot of work if you have many disks and secure passwords.

You can use passwordless keyfiles, which you can store in a small passworded container. A little scripting and the effect is the same as that of a single password. Since each disk uses a different key, you could even say it's more secure.

Another note: disk-level encryption means you're encrypting parity data as well, of course. So depending on the number of disks you get some significant overhead compared to just encrypting usable data.
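Roughly what that looks like with GELI, assuming the keyfiles live in an already-unlocked container (device names and paths are just examples):

Code:
# one-time setup per disk: keyfile only, no per-disk passphrase
geli init -P -K /secure/keys/ada1.key ada1
# at boot / after unlocking the key container:
geli attach -p -k /secure/keys/ada1.key ada1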
 
So I've finally got my new file server built, but I'm having some issues with performance compared to my almost identical setup (HDD-wise) on my old FS running Ubuntu.

I'm missing about 10-15MB/s on reads over the network on the new FS, and I think it's because the physical sector size is not being set to 4K.

I tried editing sd.conf, rebooting, etc., but napp-it will just not set ashift=12 no matter what.

Am I missing something? or is something just broken?



Run iostat -En to get Vendor and Product.
The Vendor field must be 8 characters (pad with spaces).
https://www.illumos.org/issues/2665
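An entry then looks roughly like this (the vendor/product strings are placeholders - use exactly what iostat -En reports, with the vendor padded to 8 characters; entries are separated by commas and the list ends with a semicolon):

Code:
# /kernel/drv/sd.conf
sd-config-list = "ATA     WDC WD20EARS-00MVWB0", "physical-block-size:4096";

# reload the sd driver configuration (or reboot)
update_drv -vf sd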
 
Has anyone been able to create symbolic links in netatalk 3.0.2 on Solaris 11.1?
It claims this bug has been fixed in 3.0.2, but it still doesn't work.
I have set:
follow symlinks=yes
in the afp config file.
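Roughly what I have in the volume section of afp.conf (volume name and path are placeholders):

Code:
[myvolume]
path = /tank/data
follow symlinks = yes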



"NEW: Add a new volumes option follow symlinks. The default setting is false, symlinks are not followed on the server. This is the same behaviour as OS X’s AFP server. Setting the option to true causes afpd to follow symlinks on the server. symlinks may point outside of the AFP volume, currently afpd doesn’t do any checks for "wide symlinks"."


I tested it on clients: Mac OS X 10.6 and 10.7.

Symbolic links work fine over NFS

Thanks in advance
 
Run iostat -En to get Vendor and Product.
The Vendor field must be 8 characters (pad with spaces).
https://www.illumos.org/issues/2665

That worked once I figured out that I had a comma instead of a semi-colon at the end of the line :p

Thanks :)

Unfortunately, the read speeds are still horrid with OI vs. my Ubuntu ZFS box:

Ubuntu ZFS FS:
[screenshot: network copy speed from the Ubuntu ZFS box]

OI ZFS FS:
[screenshot: network copy speed from the OI ZFS box]


Interestingly enough, running a dd test on both systems shows the OI system to be almost 50% faster:

Ubuntu:
Code:
root@batcavefs:~# time dd if=/dev/zero of=/tank/dd.tst bs=2048000 count=32752
32752+0 records in
32752+0 records out
67076096000 bytes (67 GB) copied, 550.273 s, 122 MB/s

real    9m10.299s
user    0m0.376s
sys     4m51.378s

OI:
Code:
sascha@batcavefs1:~# time dd if=/dev/zero of=/tank_z1/dd.tst bs=2048000 count=32752
32752+0 records in
32752+0 records out

real    5m44.047s
user    0m0.098s
sys     0m21.641s

So could this be a network issue? I'm using the onboard Realtek on the Ubuntu system, and a dual-port Intel E1000 NIC on the OI system (but with both ports configured).
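I guess I could rule the network in or out with something like iperf, if I can get it installed on both boxes (just a sketch, I haven't run this yet):

Code:
# on the OI box
iperf -s
# on the Ubuntu box / client
iperf -c <OI-box-IP> -t 30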
 
That worked once I figured out that I had a comma instead of a semi-colon at the end of the line :p

Thanks :)

Unfortunately, the read speeds are still horrid with OI vs. my Ubuntu ZFS box:

.....

Interestingly enough, running a dd test on both systems shows the OI system to be almost 50% faster:

Ubuntu:
Code:
root@batcavefs:~# time dd if=/dev/zero of=/tank/dd.tst bs=2048000 count=32752
32752+0 records in
32752+0 records out
67076096000 bytes (67 GB) copied, 550.273 s, 122 MB/s

real    9m10.299s
user    0m0.376s
sys     4m51.378s

OI:
Code:
sascha@batcavefs1:~# time dd if=/dev/zero of=/tank_z1/dd.tst bs=2048000 count=32752
32752+0 records in
32752+0 records out

real    5m44.047s
user    0m0.098s
sys     0m21.641s

So could this be a network issue? I'm using the onboard Realtek on the Ubuntu system, and a dual-port Intel E1000 NIC on the OI system (but with both ports configured).

Ubuntu is on Samba (SMB/CIFS) and OI is on the SMB/CIFS server built into ZFS :)
I don't really have much knowledge of the ZFS SMB/CIFS server, so you should check your configuration.
In Samba you can set "wacky" configuration options; by default, Ubuntu ships a sample configuration that works for general use.

Yeah, Realtek NICs are not good :p...... I had problems with them. Try to use the same NIC for the comparison.

Honestly, I don't pay much attention to dd since it is sequential,
and the ZFS and ZoL implementations are a bit different internally...

Try Bonnie++ or other tools to test.

Also, not all SATA interfaces are created equal :D. When you do some testing, pick the same hardware and configuration so you can rule out hardware factors.


That is my understanding; someone may disagree...

Good luck!!
 
Ubuntu is on Samba (SMB/CIFS) and OI is on the SMB/CIFS server built into ZFS :)
I don't really have much knowledge of the ZFS SMB/CIFS server, so you should check your configuration.
In Samba you can set "wacky" configuration options; by default, Ubuntu ships a sample configuration that works for general use.

Yeah, Realtek NICs are not good :p...... I had problems with them. Try to use the same NIC for the comparison.

Honestly, I don't pay much attention to dd since it is sequential,
and the ZFS and ZoL implementations are a bit different internally...

Try Bonnie++ or other tools to test.

Also, not all SATA interfaces are created equal :D. When you do some testing, pick the same hardware and configuration so you can rule out hardware factors.


That is my understanding; someone may disagree...

Good luck!!

My issue here is that my newer/better/faster system is slower than my old file server running far inferior hardware.

My new specs are:
Gigabyte P67X-UD3-B3
i3-2120
32GB RAM
Intel dual port E1000
currently using the on-board Intel SATA ports
2x 320GB Seagates, mirrored rpool for the system
3x 2TB WD Green for storage (same drives as in the old FS, both running RAIDZ1).

Added Bonnie++ Bench:
Code:
NAME         tank_z1
SIZE         5.44T
Bonnie       start
Date(y.m.d)  2013.02.28
File         64G
Seq-Wr-Chr   139 MB/s   %CPU 99
Seq-Write    183 MB/s   %CPU 15
Seq-Rewr     88 MB/s    %CPU 13
Seq-Rd-Chr   110 MB/s   %CPU 89
Seq-Read     169 MB/s   %CPU 9
Rnd Seeks    645.3/s    %CPU 2
Files        16
Seq-Create   +++++/s
Rnd-Create   +++++/s

Just noticed very high CPU usage... is that normal? Maybe that is part of the issue?
 
Has anyone messed with Link Aggregation on OI?

I'm running mine as an all-in-one on ESXi, but had poor network performance using the VMXNET3 adapter.
I then switched to E1000, but because of the lower bandwidth of the E1000, I'm not able to fully utilize the 2x dual-port NICs I have in ESXi.

Some say that the E1000 can run at speeds of up to 2-3Gbit/s, but I'm getting precisely 1Gbit/s, not more, not less.

Has anyone successfully aggregated 2 or more E1000 NICs on OI?

Thanks
Jim
 
So I did a test and now know what is causing the degradation in throughput.

It's the Intel NIC. I installed FreeNAS 8 on a USB stick and observed the same speed issue; I then switched to the onboard Realtek NIC and boom, I went from 40MB/s read to 78MB/s.

I then thought maybe something was wrong with my Intel E1000 dual-port, but I have a known-good single-port E1000 as well. After swapping that in and re-running the test, again I am back down to 40-45MB/s compared to the Realtek.

Anyone know why the E1000s, which I understand to be among the best/most compatible cards around, take such a huge performance hit? It doesn't make any sense to me.

Another strange thing: writing via the E1000 seems to allow slightly more throughput than reading.
 
So I did a test and now know what is causing the degradation in throughput.

It's the Intel NIC. I installed FreeNAS 8 on a USB stick and observed the same speed issue; I then switched to the onboard Realtek NIC and boom, I went from 40MB/s read to 78MB/s.

I then thought maybe something was wrong with my Intel E1000 dual-port, but I have a known-good single-port E1000 as well. After swapping that in and re-running the test, again I am back down to 40-45MB/s compared to the Realtek.

Anyone know why the E1000s, which I understand to be among the best/most compatible cards around, take such a huge performance hit? It doesn't make any sense to me.

Another strange thing: writing via the E1000 seems to allow slightly more throughput than reading.
What is the exact part number of your card? There are several versions of cards that operate under the more generic e1000 label.
 
I will have to get that... the single port card that also didn't work was the Intel Pro 1000 GT.

The dual port card I will have to check.

But for what it is worth, I enabled the RTL8111 in OI and it's blazing fast. Write speeds are around 70MB/s and read speeds are over 100MB/s...
 
I will have to get that... the single port card that also didn't work was the Intel Pro 1000 GT.

The dual port card I will have to check.

But for what it is worth, I enabled the RTL8111 in OI and it's blazing fast. Write speeds are around 70MB/s and read speeds are over 100MB/s...

Ohhh - all makes sense now.

You are using a PCI-X card? Plugged into the legacy 32-bit PCI slot on the P67X-UD3-B3? No surprise then that you are getting crappy GigE performance. That plain PCI slot can't handle enough bandwidth to do GigE at full speed... they built that card with the extra-length PCI-X connector for a reason!

Get yourself a PCIe-based Intel Pro/1000 card, preferably the "PT" model.
 
Has anyone messed with Link Aggregation on OI?
Has anyone successfully aggregated 2 or more E1000 NICs on OI?
I just grouped two Broadcom NICs on a standalone OI box. I could help you with the OI commands - basically, google "open solaris dladm create-aggr" - but I think you've got more pieces to fit together in the all-in-one setup, right? And did you set a corresponding port group on your switch? And are you testing with two clients running IOMeter or something? What does network traffic look like in ESXi (esxtop) during an IO test?
 
Ohhh - all makes sense now.

You are using a PCI-X card? Plugged into the legacy 32-bit PCI slot on the P67X-UD3-B3? No surprise then that you are getting crappy GigE performance. That plain PCI slot can't handle enough bandwidth to do GigE at full speed... they built that card with the extra-length PCI-X connector for a reason!

Get yourself a PCIe-based Intel Pro/1000 card, preferably the "PT" model.

The dual-port card is indeed an extended PCI card... but I never thought GigE would be able to saturate a PCI slot.

The single card is a regular PCI card though... guess that is why both cards have the same performance.

Thanks for the insight! I'm finally enjoying my new file server under OI + napp-it... everything seems to be working just as it should now.

Cheers!
 
.....

But for what it is worth, I enabled the RTL8111 in OI and it's blazing fast. Write speeds are around 70MB/s and read speeds are over 100MB/s...

Ha... you should use a PCI Express Intel NIC; you will love it in the long run when more than one client is connecting to your NAS.

I do not recommend Realtek for a NAS/server. "Crappy" performance...
Try bonnie++ and compare Intel versus Realtek... and you will never look back at Realtek again.


One note on SMB/CIFS: if you are coming from Samba, you will see some differences with the SMB/CIFS server in ZFS. Samba is more Unix-style.

Enjoy your new NAS...
 
Well, 32-bit PCI is 133MB/s in theory, but on modern motherboards the slot sits behind a bridge chip at the end of the already-busy DMI link, so expect crappy performance. I would only use that slot for a legacy sound card or something like that.
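To put rough numbers on it: 33 MHz x 32 bits gives about 133 MB/s theoretical for the entire shared bus, and real-world conventional PCI typically delivers quite a bit less after arbitration and protocol overhead, while a single GigE port wants roughly 115 MB/s of payload per direction. So one full-speed gigabit stream can already outrun that slot, which fits the 40-45 MB/s reads seen above.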
 
So I had a bad morning.. my DC decided to take a dump and of course since migrating from Xen to ESXi I never bothered to do a backup of that VM...

Had to do a complete reinstall, and while I was at it I changed the hostname of the file server... that, to my surprise, rendered the napp-it plugins (like the one for ACL control) useless.

Looking closer into it, it seems Gea doesn't offer a cost-effective license for home users? No one in their right mind will pay 100 Euro/year for a menu system. It doesn't make any sense to me to charge that much.

Actually, it seems that napp-it might continue to work without a license, but the plugins stop working? How about a structured price scheme?

I only care about iSCSI and ACLs; the rest is straightforward stuff... I don't care about replication, etc.
 
For home users the license is 300 Euro for all extensions and unlimited time:

Special offer: napp-it Pro, complete edition, unlimited for noncommercial home use: 300 Euro (with two keys for your home and backup server).
Buy one year complete and send us a mail with the two server names and the note "noncommercial home use".
 
So I had a bad morning.. my DC decided to take a dump and of course since migrating from Xen to ESXi I never bothered to do a backup of that VM...

Had to do a complete reinstall, and while I was at it I changed the hostname of the file server... that, to my surprise, rendered the napp-it plugins (like the one for ACL control) useless.

Looking closer into it, it seems Gea doesn't offer a cost-effective license for home users? No one in their right mind will pay 100 Euro/year for a menu system. It doesn't make any sense to me to charge that much.

Actually, it seems that napp-it might continue to work without a license, but the plugins stop working? How about a structured price scheme?

I only care about iSCSI and ACLs; the rest is straightforward stuff... I don't care about replication, etc.

No need to do anything.
The napp-it free edition covers Comstar iSCSI and basic settings from the ACL extension -
enough for home use. You may also request a new Pro trial key at napp-it.org.
All essential things and unlimited storage capacity are in the free edition without a key or registration.

If you would like to support napp-it, there are offers for noncommercial home use like the
unlimited home licence (two keys for a NAS and a second backup system).
 
Ah, you're right... I guess you've just blocked the ability to add AD users & groups without a registered extension.

And yes, I've seen the offer for non-commercial home use... the non-expiring key is 300 Euro... and while I enjoy your menu application, I cannot see anyone paying that kind of money to unlock a few things (like being able to set AD ACLs, for instance) for simple home use.

I would be inclined to donate/pay for a key, but not anywhere near that amount; it's simply not justifiable.

I have re-requested a Pro key for now to further evaluate, since after the hostname change the original eval key no longer works. Just waiting on the email.

Thanks for your help so far and to the community!
 