Wow, great work _Gea... I really admire all the effort you've put into this to streamline the experience for people new to ZFS.
check your Paypal!
What if I don't want to use mediatomb? Can I easily remove it?
Matej
Just ran the Bonnie benchmark. 10x Toshiba 3TB in RAIDZ2 via IBM M1015. Do the speeds seem to be fine?
http://imageshack.us/photo/my-images/826/wwt7.jpg/
A 10-drive RAIDZ2 has 8 data spindles, so that works out to roughly 1,000 MB/s / 8 = 125 MB/s per spindle.
Hi There,
I use napp-it ToGo straight out of the box on an N54L MicroServer and I am very impressed and pleased with the results; however, I have 2 issues:
The first is a stupid issue, I would like to change the timezone to Hong Kong via the GUI, but it says that the town is invalid...
I am not too proficient with the CLI, but I managed to change the timezone from the command line with export TZ=HongKong or rtc -z HongKong; however, every time I reboot it is gone again. Could you tell me how to make it permanent, please?
The second issue is a bit more troublesome: with a 5x 2TB WD Green raidz I reach only 45MB/s on reads, but I get up to 100MB/s on writes... which is surprising. So I am troubleshooting it right now.
Testing all the individual disks first and another router second...
Anyway, great work!!
Benchmarks do not request sync writes.
Compare sync=always vs sync=disabled
If you use CrystalDiskMark, create a 50 GB LU (volume based) and do the test via iSCSI (a 100MB test file is ok) from
Windows, as this is the fastest way to connect to ZFS.
If you post a screenshot of the CrystalDiskMark values, I would add it to my benchmark overview.
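For reference, a rough CLI sketch of that test setup (the pool name "tank" and the zvol name are only placeholders; napp-it's Comstar menus do the same from the GUI):

# create a 50 GB volume and publish it as an iSCSI LU via COMSTAR
zfs create -V 50G tank/cdm-test
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target
sbdadm create-lu /dev/zvol/rdsk/tank/cdm-test
stmfadm add-view <GUID from sbdadm list-lu>
# toggle sync behaviour between CrystalDiskMark runs
zfs set sync=always tank/cdm-test
zfs set sync=disabled tank/cdm-test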
I have not done extensive tests of underprovisioning vs using whole disks with low usage.
I would not expect significant differences especially with the Intel 3700.
If I compare to the first values at
http://hardforum.com/showpost.php?p=1040226516&postcount=5398
..
I know that Solaris is mostly faster than BSD but your difference is extreme.
Sequential write values are over 20% better on OmniOS and the important 4k value
with larger queue depth is 5 x better with sync write enabled.
Maybe some tweaking on BSD can lower the difference, or BSD is simply not at its best with iSCSI, just as with SMB.
Anyway, the 100 GB Intel 3700 seems to be a perfect ZIL and the 3700 line a perfect candidate for SSD only pools.
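If anyone wants to try the dedicated ZIL route, adding an SSD as a log device is a one-liner (the pool and device names below are only examples):

# add the Intel SSD as a dedicated log (slog) device
zpool add tank log c4t1d0
# it can be removed again at any time
zpool remove tank c4t1d0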
They seem a little low, but not out of the ballpark. You're getting almost 1,000MB/s seq writes and over 1,000MB/s seq reads. Over 10 drives that's 100MB/s per spindle. I wouldn't consider this disappointing.
How are you connecting 10 drives to the M1015? Expander, more than one M1015, or are two of the drives on MB ports?
BSD's iSCSI blows. Wish they would port SCST or COMSTAR.
Sorry, I have the 10 drives across 2x M1015.
Thanks for the feedback. I am going to update the firmware on the M1015 to the newer v17. Then update the driver in ESXi to use the newer 17 driver. I am currently using stock driver. Perhaps that will squeeze out a little more performance.
GEA, can you elaborate on the Napp-IT feature of "Monitoring"? What exactly does it do? That's the only add-on that I might need with Napp-IT.
I know that it allows me to see those LED lights for "pool, cap, disk, net, cpu, and job". What else does it do?
Facepalm...
The parity in RaidZ/RaidZ2/Raid5/Raid6/whatever is rotated among the drives. All 10 drives contain live data. Reads are multiplexed evenly across all 10 spindles, and ZFS doesn't even bother to read the parity segments AT ALL except when it needs to: when there is a failed drive, when a checksum error is detected that needs correcting, or when writing and the parity needs to be reconstructed.
I don't think updating the firmware or ESXi driver will change much - but it can't hurt.
What MB are you using? Are you sure both PCIe slots are really x8? Lots of boards have x8 slots that are x4 electrically.
Have you selected Asia and HongKong in menu system-localization?
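In case the GUI keeps rejecting it: on OmniOS/illumos the timezone is read at boot from /etc/default/init, and the name it expects for Hong Kong is Asia/Hong_Kong rather than HongKong. A minimal sketch of the manual fix (edit as root):

# /etc/default/init - set the system timezone permanently
TZ=Asia/Hong_Kong
# save the file and reboot (or log out/in) so the new TZ takes effect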
About the bad read values:
- check if you have a pool with correct ashift=12
- the first gen of WD Greens reports a wrong physical sector size: override it in sd.conf
Other possible problem:
Windows with a Realtek NIC. Compare with another PC, best with an Intel NIC or the newest Realtek driver.
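A quick sketch of both checks (the pool name and the disk model string are only examples; check your exact model string with iostat -En):

# check the ashift of the existing pool (12 = 4k-aligned, 9 = 512B)
zdb -C tank | grep ashift
# override the reported physical sector size in /etc/driver/drv/sd.conf,
# e.g. for a first-gen WD Green (vendor/product string must match your disks):
sd-config-list = "ATA     WDC WD20EARS", "physical-block-size:4096";
# reload the sd driver config (or reboot), then recreate the pool so it is built with ashift=12
update_drv -vf sd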
I want to use iperf or another solution to test the network performance. But when I try pkg install iperf, it says it's not part of the available packages. I am digging online. But does napp-it come with any native solution to test network bandwidth?
If you want to install applications via pkg install, you must select a repo with the app:
http://omnios.omniti.com/wiki.php/Packaging
If you like the newest apps (independent of the Solarish flavour used; works on Omni/OI/SmartOS/Solaris):
http://www.perkin.org.uk/pages/pkgsrc-binary-packages-for-illumos.html
The last one is my preferred source.
btw.
iperf is included in napp-it:
see /var/web-gui/data/tools
_Gea, thank you very much for your help. I believed iperf was included in napp-it, but when I type the command, it tells me it does not know it. Through the CLI, I am navigating to the folder and checking inside it, but so far I have not found iperf.
I am sorry for the noob I am...
Thanks for your help
edit: Ok, so I visited the relevant-looking directories in search of iperf but could not find it.
I am downloading the source you indicated to install iperf. It will take a while.
Thanks
The path is
/var/web-gui/data/tools/iperf/iperf
You can use WinSCP to browse your system disk
(if you allow root at Services - SSH, you can log in as root with full permissions)
Other option:
use Midnight Commander (mc) at the CLI as a local file browser
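A quick usage sketch (the IP address is only an example): since iperf is not in the PATH, call the bundled binary directly in server mode on the storage box, then point any iperf client at it:

# on the OmniOS box - start the bundled iperf in server mode
/var/web-gui/data/tools/iperf/iperf -s
# on another machine with iperf installed - 30 second test with 4 parallel streams
iperf -c 192.168.1.10 -t 30 -P 4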
Should 6x 2TB drives perform better in seq-write/seq-read as:
2x vdevs of raidz (3-disk vdevs)
or
1x vdev of raidz2? (6-disk vdev)
The first option should give 2x the IOPS of the second; however, the second is more "safe". What about throughput in both read and write?
Thanks
When streaming sequential data, and if you ignore I/O and other limitations,
throughput can scale with the number of data disks that are used simultaneously.
In both cases you have 4 data disks, so sequential values can be similar.
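For illustration, the two layouts look like this at creation time (disk names are only examples):

# layout 1: two 3-disk raidz1 vdevs - 4 data disks total, roughly 2x the IOPS
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 raidz c1t3d0 c1t4d0 c1t5d0
# layout 2: one 6-disk raidz2 vdev - also 4 data disks, but any two disks may fail
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0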
I just wanted to jump in here and give some real world production feedback on Napp-It and an AIO setup.
Our server has been running for 242 days now and to date we have had exactly zero errors located on our scrubs. I am running Toshiba DT01ACA200 drives (24 currently) under a mirrored ZFS configuration.
My build was a pair of E5-2620s, 192GB of RAM, and a Supermicro X9DR3 motherboard. We are using a 36 bay Supermicro case and 3 M1015's.
Right now this is hosting about 40 VMs and will host more as we move data off of our slowly dying MD3000i. Performance has been the biggest surprise: our consumer 2TB drives are running significantly faster than the RAID10 15k drives in the MD3000i. One of our big database jobs actually ran 300% faster on the Napp-IT NFS array.
Intel SSDs are being used for cache and I kept the configuration simple. I've tweaked nothing outside of the UI except for the SSD partitions which was an issue with an older GUI (now fixed).
We've been EXTREMELY happy with this setup and we did it for a fraction of the cost of what a Nexenta vendor would offer.
Our backups are going to PHD Virtual backup devices which then go off to tape. Our only issue so far was a DOA drive from Newegg which was replaced in 2 days by Toshiba. Those have been great drives despite the Newegg reviews and I am starting to think that Newegg may just be crappy at handling their drives. Without a single bad block I consider these to be pretty rock solid.
My only issue now is just a general fear of upgrading Napp-It. Since this is running over half of our production environment one slip up or issue with an update will have me printing fresh copies of the resume.
I'm very happy with what _Gea has put together and this has been a major cost savings for my company and has removed the existing Dell vendor lock-in with drives.
I just wanted to give a production review of this since there are a lot of lab/hobby builds with Napp-It on Hardforum. I'm actually looking at building a bare metal Napp-It server at the end of the year for a second-level backup storage location, since tape is becoming too slow for us to continue using in our environment. The cost of upgrading to dual LTO-5 drives would pay for a nice server build, and backup/recovery would be much faster.