OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

What if I don't want to use mediatomb? Can I easily remove it?

Matej

You can delete it, or simply not activate it.
For the next release I will keep the BE from prior to the add-ons, so you can go back.
 
I'm having CPU spikes on my all-in-one that temporarily kick people off the SMB share.

OmniOS is on a SATA datastore and is sharing out two pools: one for a few VMs and one for SMB for users to work on.

I installed Zabbix on a VM and I think it is actually that VM that is spiking the CPU. It happens about 2 or 3 times a day from what I have gathered.

So while I try to figure out how to tune Zabbix (or switch to another monitor), how can I keep it from affecting other VMs? I don't know enough about ESXi yet. I set the OmniOS CPU reservation to "high" and all other VMs to "low", but I still had the issue.

Can I add another CPU to OmniOS without causing instability? Right now all my VMs are using 1 virtual CPU.
 
Are two SSDs better put to use in an SSD-only pool, or as a ZIL in front of a disk pool, for the purpose of data storage over NFS for an ESX host?

Thanks
 
Just ran the Bonnie benchmark. 10x Toshiba 3TB in RAIDZ2 via IBM M1015. Do the speeds seem to be fine?

http://imageshack.us/photo/my-images/826/wwt7.jpg/

They seem a little low, but not out of the ballpark. You're getting almost 1,000MB/s seq writes and over 1,000MB/s seq reads. Over 10 drives that's 100MB/s per spindle. I wouldn't consider this disappointing.

How are you connecting 10 drives to the M1015? Expander, more than one M1015, or are two of the drives on MB ports?
 
A 10-drive RAIDZ2 has 8 data spindles, so 125MB/s per.

Facepalm...

The parity in RAIDZ/RAIDZ2/RAID5/RAID6/whatever is rotated among the drives. All 10 drives contain live data. Reads are multiplexed evenly across all 10 spindles, and ZFS doesn't even bother to read the parity segments AT ALL except when it needs to: when there is a failed drive, when a detected checksum error needs correcting, or when writing and the parity needs to be recomputed.
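To put rough numbers on it (just an illustration, assuming the default 128K recordsize and 4K sectors): one 128K block on a 10-wide RAIDZ2 is written as 8 data chunks of 16K plus 2 parity chunks of 16K, and which two columns get the parity shifts from block to block, so over many blocks every one of the 10 spindles ends up holding and serving data.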
 
Hi There,
I use napp-it to go straight out of the box on an N54L MicroServer and I am very impressed and pleased with the results. However, I have 2 issues:
The first is a stupid issue: I would like to change the timezone to Hong Kong via the GUI, but it says that the town is invalid...
I am not too proficient with the CLI, but I managed to change the timezone on the command line with export TZ=HongKong or rtc -z HongKong; however, every time I reboot it goes away. Is it possible to know how I can fix it, please?
The second issue is a bit troublesome: with a 5x2TB WD Green raidz, I reach only 45MB/s in read, but I go up to 100MB/s in write... which is surprising. So I am troubleshooting it right now.
Testing all individual disks first and another router second...

Anyway, great work!!
 
Hi There,
I use napp-it to go straight out of the box on an N54L MicroServer and I am very impressed and pleased with the results. However, I have 2 issues:
The first is a stupid issue: I would like to change the timezone to Hong Kong via the GUI, but it says that the town is invalid...
I am not too proficient with the CLI, but I managed to change the timezone on the command line with export TZ=HongKong or rtc -z HongKong; however, every time I reboot it goes away. Is it possible to know how I can fix it, please?
The second issue is a bit troublesome: with a 5x2TB WD Green raidz, I reach only 45MB/s in read, but I go up to 100MB/s in write... which is surprising. So I am troubleshooting it right now.
Testing all individual disks first and another router second...

Anyway, great work!!

Have you selected Asia and HongKong in menu System > Localization?

About the bad read values:
- check that your pool has the correct ashift=12;
  the first generation of WD Greens reports the wrong physical sector size: override it in sd.conf

Other possible problem:
Windows with a Realtek NIC. Compare with another PC, ideally with Intel NICs or the newest Realtek driver.
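If you want to check from the CLI, a minimal sketch (the pool name 'tank' and the vendor/product string are only examples; the string must match what iostat -En reports for your drives, and a changed sd.conf only affects pools created afterwards):

zdb | grep ashift
# shows the ashift of imported pools; 12 means 4k-aligned

# /kernel/drv/sd.conf - force a 4k physical sector size for drives that misreport it
sd-config-list = "ATA     WDC WD20EARS", "physical-block-size:4096";

update_drv -vf sd
# reloads sd.conf (or reboot), then recreate the pool so it is built with ashift=12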
 
Benchmarks do not request sync writes.
Compare sync=always vs sync=disabled.

If you use CrystalDiskMark, create a 50 GB LU (volume based) and do the test via iSCSI (a 100MB testfile is ok) from
Windows, as this is the fastest way to connect to ZFS.
If you post a screenshot of the Crystal values, I would add it to my benchmark overview.
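For the A/B comparison, a minimal sketch ('tank/iscsi' is a placeholder for your own filesystem or volume):

zfs set sync=disabled tank/iscsi   # never wait for the ZIL: fastest, but unsafe on power loss
zfs set sync=always tank/iscsi     # force every write through the ZIL / Slog
zfs set sync=standard tank/iscsi   # back to the default: honour whatever the client requests
zfs get sync tank/iscsi            # verify the current setting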


I have not done extensive tests of underprovisioning vs using whole disks with low usage.
I would not expect significant differences, especially with the Intel S3700.

Test re-done with OmniOS and Napp-IT

Intel DC S3700 100GB as ZIL device partitioned to 3GB

iSCSI, 50GB LU (volume based)

Sync=Disabled
http://imageshack.us/photo/my-images/855/p6sz.jpg/

Sync=Always
http://imageshack.us/photo/my-images/850/wjox.jpg/

Sync=Standard
http://imageshack.us/photo/my-images/12/elk7.jpg/

edit: 5 x vdevs of RAIDZ (with 3 disks each)
 
If I compare to first values at
http://hardforum.com/showpost.php?p=1040226516&postcount=5398

..
I know that Solaris is mostly faster than BSD, but your difference is extreme.
Sequential write values are over 20% better on OmniOS, and the important 4k value
with larger queue depth is 5x better with sync write enabled.

Maybe some tweaking on BSD can lower the difference, or BSD is just not the best with iSCSI as well as with SMB.
Anyway, the 100 GB Intel S3700 seems to be a perfect ZIL and the S3700 line a perfect candidate for SSD-only pools.
 
If I compare to first values at
http://hardforum.com/showpost.php?p=1040226516&postcount=5398

..
I know that Solaris is mostly faster than BSD, but your difference is extreme.
Sequential write values are over 20% better on OmniOS, and the important 4k value
with larger queue depth is 5x better with sync write enabled.

Maybe some tweaking on BSD can lower the difference, or BSD is just not the best with iSCSI as well as with SMB.
Anyway, the 100 GB Intel S3700 seems to be a perfect ZIL and the S3700 line a perfect candidate for SSD-only pools.

Solaris IS faster than BSD, period! :) From my real-world application tests and these benchmarks, OmniOS was ALWAYS faster in every way. I went with FreeNAS because I was having NFS disconnect issues with OmniOS using an HP SmartArray P400, but now I finally have the M1015 in IT mode, so we'll see if I still get the disconnects. Either way, I am going to use iSCSI instead of NFS.

I have no idea why the 4K, 32-queue-depth test was so much faster on OmniOS. I ran several tests just to make sure, too.


Anyway, I'm now back to OmniOS and Napp-IT.

Edit:
Oh, I forgot to add that with the OmniOS and Napp-IT re-test, I have 5 x vdevs of RAIDZ (with 3 disks each), but that shouldn't make a huge difference.
 
_Gea, can you elaborate on the Napp-IT "Monitoring" feature? What exactly does it do? That's the only add-on that I might need with Napp-IT.

I know that it allows me to see those LED lights for "pool, cap, disk, net, cpu, and job". What else does it do?
 
They seem a little low, but not out of the ballpark. You're getting almost 1,000MB/s seq writes and over 1,000MB/s seq reads. Over 10 drives that's 100MB/s per spindle. I wouldn't consider this disappointing.

How are you connecting 10 drives to the M1015? Expander, more than one M1015, or are two of the drives on MB ports?

Sorry, I have the 10 drives across 2x M1015.

Thanks for the feedback. I am going to update the firmware on the M1015 to the newer v17, then update the driver in ESXi to use the newer v17 driver. I am currently using the stock driver. Perhaps that will squeeze out a little more performance.
 
Sorry, I have the 10 drives across 2x M1015.

Thanks for the feedback. I am going to update the firmware on the M1015 to the newer v17, then update the driver in ESXi to use the newer v17 driver. I am currently using the stock driver. Perhaps that will squeeze out a little more performance.

I don't think updating the firmware or ESXi driver will change much - but it can't hurt.

What MB are you using? Are you sure both PCIe slots are really x8? Lots of boards have x8 slots that are x4 electrically.
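If you want to verify the negotiated width, one possible sketch, assuming you can boot the box from a Linux live USB (0x1000 is LSI's PCI vendor ID):

lspci -d 1000: -vv | grep -E 'SAS2008|LnkSta'
# "LnkSta: Speed 5GT/s, Width x8" means the slot really runs at x8; "Width x4" means it is x4 electrically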
 
Facepalm...

The parity in RAIDZ/RAIDZ2/RAID5/RAID6/whatever is rotated among the drives. All 10 drives contain live data. Reads are multiplexed evenly across all 10 spindles, and ZFS doesn't even bother to read the parity segments AT ALL except when it needs to: when there is a failed drive, when a detected checksum error needs correcting, or when writing and the parity needs to be recomputed.

Duh, dunno what I was thinking.
 
I don't think updating the firmware or ESXi driver will change much - but it can't hurt.

What MB are you using? Are you sure both PCIe slots are really x8? Lots of boards have x8 slots that are x4 electrically.

Motherboard is in my sig: Supermicro X9SCM-IIF

I'm pretty sure I am using the proper x8 slots per mobo manual. Will double check this weekend.
 
ZzBloopzZ - I have the same mobo; it has four x8-size slots, and the first two are x8 electrically while the other two are x4 electrically.
 
Question, _Gea: will snapraid continue to be included in the latest builds? Just curious because they released an update, and the latest beta supports quad parity (not that I will be using it).
 
Q: Why do concurrent write and read operations cause massive performance degradation? Is it because it sort of transforms sequential operations into a more random pattern (of reads/writes)?

Would anything help in this type of scenario?
 
Have you selected Asia and HongKong in menu System > Localization?

About the bad read values:
- check that your pool has the correct ashift=12;
  the first generation of WD Greens reports the wrong physical sector size: override it in sd.conf

Other possible problem:
Windows with a Realtek NIC. Compare with another PC, ideally with Intel NICs or the newest Realtek driver.


Thanks, I was misled because there is a HongKong entry in the region list, before choosing the city...
Now it works.

ashift is 12.
iozone on a single drive shows high numbers, more than the network bottleneck, so I believe it's correct.

I want to use iperf or another solution to test the network performance. But when I try to do pkg install iperf, it says it's not part of the available packages. I am digging online. But does napp-it come with any native solution to test the network bandwidth?
My setup is Win8 as an HTPC and napp-it to go as the NAS, both connected to a Linksys WRT610N.

Thanks for your help.
 
I want to use iperf or another solution to test the network performance. But when I try to do pkg install iperf, it says it's not part of the available packages. I am digging online. But does napp-it come with any native solution to test the network bandwidth?

If you want to install applications via pkg install, you must select a repo that contains the app:
http://omnios.omniti.com/wiki.php/Packaging

If you like the newest apps (independent of which Solarish you use; works on Omni/OI/SmartOS/Solaris):
http://www.perkin.org.uk/pages/pkgsrc-binary-packages-for-illumos.html

The latter is my preferred source.

btw.
iperf is included in napp-it:
see /var/web-gui/data/tools
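A minimal sketch of the pkgsrc route, assuming you have run the bootstrap described on the perkin.org.uk page above (it installs under /opt/local):

pkgin -y update               # refresh the package list
pkgin -y install iperf        # install iperf from the pkgsrc repo
/opt/local/bin/iperf -s       # start it in server mode to test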
 
If you want to install applications via pkg install, you must select a repo that contains the app:
http://omnios.omniti.com/wiki.php/Packaging

If you like the newest apps (independent of which Solarish you use; works on Omni/OI/SmartOS/Solaris):
http://www.perkin.org.uk/pages/pkgsrc-binary-packages-for-illumos.html

The latter is my preferred source.

btw.
iperf is included in napp-it:
see /var/web-gui/data/tools

_Gea, thank you very much for your help. I believed iperf was included in napp-it, but when I type the command, it tells me it does not know it. Through the CLI I am navigating to the folder and checking inside, but so far I have not found iperf.
I am sorry for being such a noob... :(
Thanks for your help.

edit: OK, so I visited the relevant-looking directories in search of iperf but could not find it.
I am downloading from the source you indicated in order to install iperf. :) It will take a while.
Thanks
 
Has anyone gotten sickbeard/sabnzbd/couchpotato running on their Nexenta box? I've found instructions here for OpenIndiana: http://blog.damox.net/?p=95

The repos and package utility are different for Nexenta, however. I know for a fact that one person on the Nexenta CE forums has done it; I haven't gotten an email reply from him yet.
 
_Gea, thank you very much for your help. I believed iperf was included in napp-it, but when I type the command, it tells me it does not know it. Through the CLI I am navigating to the folder and checking inside, but so far I have not found iperf.
I am sorry for being such a noob... :(
Thanks for your help.

edit: OK, so I visited the relevant-looking directories in search of iperf but could not find it.
I am downloading from the source you indicated in order to install iperf. :) It will take a while.
Thanks

The path is
/var/web-gui/data/tools/iperf/iperf

You can use WinSCP to browse your system disk
(if you allow root at Services > SSH, you can log in as root with full permissions).

Other option:
use Midnight Commander (mc) at the CLI as a local file browser.
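For reference, a minimal test run (192.168.1.10 is a placeholder for the NAS address; use any iperf 2.x build on the Windows side):

# on the NAS
/var/web-gui/data/tools/iperf/iperf -s

# on the Windows client
iperf -c 192.168.1.10 -t 30 -P 4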
 
The path is
/var/web-gui/data/tools/iperf/iperf

You can use WinSCP to browse your system disk
(if you allow root at Services > SSH, you can log in as root with full permissions).

Other option:
use Midnight Commander (mc) at the CLI as a local file browser.

I am using PuTTY and I allowed root over SSH. I am using the root account and it says
"No such file or directory". Once in /var/web-gui/data/tools, iperf does not appear when I ls the folder. I am using the napp-it to go for microserver image that I downloaded from the napp-it website.

If I am not mistaken, it looks like something is missing.

edit: got iperf installed after solving an error: when typing pkgin -y update, it told me something was missing with libcrypto.so.0.9.8.
I found that it needs to be changed to 1.0.0.
Anyway, now that iperf is installed, I need to test it :)

edit2: got 800 Mbit/s from HTPC to NAS and 700 Mbit/s from NAS to HTPC...
I think I got myself carried away... if I use the share, a copy from NAS to HTPC runs at 70 MB/s, but CrystalDiskMark still tells me reads from the drive are capped at 45 MB/s.
So... case solved.

Last weird thing: I can't copy anything from the NAS to my C: drive... might be a Windows protection.

Anyway, _Gea, thanks for your work, I really appreciate it!
 
Should 6x 2TB drives perform better in seq write/seq read as:

2x raidz vdevs (3 disks per vdev)
or
1x raidz2 vdev (6 disks per vdev)?

The first option should give 2x the IOPS of the second; however, the second is more "safe". What about throughput in both read and write?

Thanks
 
Should 6x 2TB drives perform better in seq write/seq read as:

2x raidz vdevs (3 disks per vdev)
or
1x raidz2 vdev (6 disks per vdev)?

The first option should give 2x the IOPS of the second; however, the second is more "safe". What about throughput in both read and write?

Thanks

When streaming sequential data, and if you ignore I/O and other limitations,
throughput can scale with the number of data disks that are used simultaneously.

In both cases you have 4 data disks, so sequential values can be similar.
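For reference, the two layouts as they would be created (a sketch; c1t0d0..c1t5d0 are placeholder device names):

# option 1: two 3-disk raidz vdevs -> 4 data disks, roughly 2x the IOPS of a single vdev
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 raidz c1t3d0 c1t4d0 c1t5d0

# option 2: one 6-disk raidz2 vdev -> 4 data disks, any two drives may fail
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0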
 
When streaming sequential data, and if you ignore I/O and other limitations,
throughput can scale with the number of data disks that are used simultaneously.

In both cases you have 4 data disks, so sequential values can be similar.

With 2 3-way mirrors you have 6 data disks for reads.
 
I just wanted to jump in here and give some real world production feedback on Napp-It and an AIO setup.

Our server has been running for 242 days now and to date we have had exactly zero errors located on our scrubs. I am running Toshiba DT01ACA200 drives (24 currently) under a mirrored ZFS configuration.

My build was a pair of E5-2620s, 192GB of RAM, and a Supermicro X9DR3 motherboard. We are using a 36 bay Supermicro case and 3 M1015's.

Right now this is hosting about 40 VMs and will host more as we move data off of our slowly dying MD3000i. Performance has been the biggest surprise: our consumer 2TB drives are running significantly faster than the RAID10 15k drives in the MD3000i. One of our big database jobs actually ran 300% faster on the Napp-IT NFS array.

Intel SSDs are being used for cache and I kept the configuration simple. I've tweaked nothing outside of the UI except for the SSD partitions, which were an issue with an older GUI (now fixed).

We've been EXTREMELY happy with this setup and we did it for a fraction of the cost of what a Nexenta vendor would offer.

Our backups are going to PHD Virtual backup devices which then go off to tape. Our only issue so far was a DOA drive from Newegg which was replaced in 2 days by Toshiba. Those have been great drives despite the Newegg reviews and I am starting to think that Newegg may just be crappy at handling their drives. Without a single bad block I consider these to be pretty rock solid.

My only issue now is just a general fear of upgrading Napp-It. Since this is running over half of our production environment one slip up or issue with an update will have me printing fresh copies of the resume.

I'm very happy with what _Gea has put together and this has been a major cost savings for my company and has removed the existing Dell vendor lock-in with drives.

I just wanted to give a production review of this since there are a lot of lab/hobby builds with Napp-It on Hardforum. I'm actually looking at building a bare-metal Napp-It server at the end of the year for a second-level backup storage location, since tape is becoming too slow for us to keep using in our environment. The cost of upgrading to dual LTO-5 drives would get us a nice server build, and backup/recovery would be much faster.
 
My only issue now is just a general fear of upgrading Napp-It. Since this is running over half of our production environment one slip up or issue with an update will have me printing fresh copies of the resume.

Create a boot environment (beadm) and upgrade (I'm not sure, but I think napp-it makes a BE on upgrade anyway). In case something goes wrong, just revert back to the old working BE (beadm activate 'old-working-be'), restart, and you are back online with the old config...

On the other hand, napp-it is only a frontend; it does nothing to the system or ZFS, so upgrading napp-it could only break the GUI, not the config and the server itself...
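A minimal sketch of that workflow ('pre-upgrade' is just a name you pick):

beadm create pre-upgrade      # snapshot the current boot environment first
# ... run the napp-it / OS update ...
beadm list                    # if something broke, fall back:
beadm activate pre-upgrade
init 6                        # reboot into the old BE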

Matej
 
I just wanted to jump in here and give some real world production feedback on Napp-It and an AIO setup.

Our server has been running for 242 days now and to date we have had exactly zero errors located on our scrubs. I am running Toshiba DT01ACA200 drives (24 currently) under a mirrored ZFS configuration.

My build was a pair of E5-2620s, 192GB of RAM, and a Supermicro X9DR3 motherboard. We are using a 36 bay Supermicro case and 3 M1015's.

Right now this is hosting about 40 VMs and will host more as we move data off of our slowly dying MD3000i. Performance has been the biggest surprise: our consumer 2TB drives are running significantly faster than the RAID10 15k drives in the MD3000i. One of our big database jobs actually ran 300% faster on the Napp-IT NFS array.

Intel SSDs are being used for cache and I kept the configuration simple. I've tweaked nothing outside of the UI except for the SSD partitions, which were an issue with an older GUI (now fixed).

We've been EXTREMELY happy with this setup and we did it for a fraction of the cost of what a Nexenta vendor would offer.

Our backups are going to PHD Virtual backup devices which then go off to tape. Our only issue so far was a DOA drive from Newegg which was replaced in 2 days by Toshiba. Those have been great drives despite the Newegg reviews and I am starting to think that Newegg may just be crappy at handling their drives. Without a single bad block I consider these to be pretty rock solid.

My only issue now is just a general fear of upgrading Napp-It. Since this is running over half of our production environment one slip up or issue with an update will have me printing fresh copies of the resume.

I'm very happy with what _Gea has put together and this has been a major cost savings for my company and has removed the existing Dell vendor lock-in with drives.

I just wanted to give a production review of this since there are a lot of lab/hobby builds with Napp-It on Hardforum. I'm actually looking at building a bare-metal Napp-It server at the end of the year for a second-level backup storage location, since tape is becoming too slow for us to keep using in our environment. The cost of upgrading to dual LTO-5 drives would get us a nice server build, and backup/recovery would be much faster.

Awesome feedback, especially from the production point of view! Thanks!

Thank you also for the hard drive feedback. I was going to get 3.5" 7200rpm SAS enterprise drives, but they are way too expensive. From this and other feedback, I think SATA drives will do, and I will try the entry-level enterprise drives instead (e.g. the Seagate ES series).

I'm in the process of putting together a production system for disaster recovery that will run at a co-location.
 
I want to add my own config.

I use about ten All-In-Ones (university use) with 16-64 GB RAM (free and paid). On each of them we run only a few VMs,
mostly Windows Server (AD 2012, mail server, terminal server, web server), OS X Server for iCal, or OmniOS-based
webservers/webmailers.

I prefer many smaller systems over a few large systems, to get better load balancing and not to have too many
VMs down when there are problems. I also use SSD-only pools for VMs, with two backups to disk pools (plus one external backup at a different location).

All servers are connected via a CX4 10 GbE VLAN-based network for fast backup/move/clone/vMotion + Storage vMotion.

I would always prefer such a multiple-cheap-systems config over one or two large systems that are not allowed to fail.
It is easier to update and test one small system out of many than to update the one system that is not allowed to fail,
even with ZFS, where you can use BE system snaps to undo an update, or All-In-Ones, where you can add a second
updated local SAN system and only need to switch over the HBA thanks to pass-through.

I also use external hardware RAID-1 enclosures for 2 x 2.5" disks in order to clone system disks and to have the option
to boot different systems on the fly. You can also use my preconfigured ESXi 5.5 appliance image to update the whole system.

On a real hardware crash, we unplug the disks, put them into the next All-In-One (we always have enough free bays),
import the pool and the VMs, and we are back in service after about 30 minutes. This avoids relying on backups that are not as up to date,
and it avoids real HA setups with their complexity.
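For reference, the manual takeover on the receiving All-In-One is basically (a sketch; 'tank' is a placeholder pool name):

zpool import                  # list the pools found on the newly attached disks
zpool import -f tank          # -f because the pool was last in use on another host
# then re-register the VMs from the NFS datastore in ESXi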
 