Thanks for the advice. Sorry, how do you perform a scrub?
Menu Jobs >> scrub >> create autoscrub
Then run it either on a schedule (timetable) or manually via "run now",
or via the CLI:
zpool scrub pool
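From the CLI you can also watch a running scrub. A minimal sketch, assuming a pool literally named "pool" as in the command above (substitute your own pool name):

```shell
# Start a scrub on the pool
zpool scrub pool

# Check progress, estimated time remaining, and any errors found so far
zpool status -v pool

# A running scrub can be stopped if needed
zpool scrub -s pool
```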
Thanks for the advice. Sorry, how do you perform a scrub?
The next 0.9e contains 5 new day-triggers:
- mon-fri
- mon-sat
- sat-sun
- first-sat
- first-sun
wget -O - www.napp-it.org/afp | perl
-bash-4.2$ uname -a
SunOS ALINEA 5.11 omnios-6de5e81 i86pc i386 i86pc
running version : 0.9d2 nightly Nov.03.2013
Jan 7 12:52:08 ALINEA fmd: [ID 377184 daemon.error] SUNW-MSG-ID: SMF-8000-YX, TYPE: defect, VER: 1, SEVERITY: major
Jan 7 12:52:08 ALINEA EVENT-TIME: Tue Jan 7 12:52:08 CST 2014
Jan 7 12:52:08 ALINEA PLATFORM: XS23-TY3, CSN: 4011ML1.4, HOSTNAME: ALINEA
Jan 7 12:52:08 ALINEA SOURCE: software-diagnosis, REV: 0.1
Jan 7 12:52:08 ALINEA EVENT-ID: 27d19be0-d2ea-ea86-c85b-aeb458a3d9eb
Jan 7 12:52:08 ALINEA DESC: A service failed - a method is failing in a retryable manner but too often.
Jan 7 12:52:08 ALINEA Refer to http://illumos.org/msg/SMF-8000-YX for more information.
Jan 7 12:52:08 ALINEA AUTO-RESPONSE: The service has been placed into the maintenance state.
Jan 7 12:52:08 ALINEA IMPACT: svc:/network/netatalk:default is unavailable.
Jan 7 12:52:08 ALINEA REC-ACTION: Run 'svcs -xv svc:/network/netatalk:default' to determine the generic reason why the service failed, the location of any logfiles, and a list of other services impacted.
-bash-4.2$ svcs -xv svc:/network/netatalk:default
svc:/network/netatalk:default (Netatalk AFP Server)
State: maintenance since January 7, 2014 12:52:06 PM CST
Reason: Start method failed repeatedly, last died on Killed (9).
See: http://illumos.org/msg/SMF-8000-KS
See: /var/svc/log/network-netatalk:default.log
Impact: This service is not running.
ld.so.1: netatalk: fatal: libsasl2.so.2: open failed: No such file or directory
[ Jan 7 12:52:06 Method "start" failed due to signal KILL. ]
-bash-4.2$ /usr/local/sbin/afpd -V
ld.so.1: afpd: fatal: libsasl2.so.2: open failed: No such file or directory
Killed
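The `libsasl2.so.2: open failed` message means the runtime linker cannot locate the SASL library that netatalk was built against. One way to confirm which libraries are missing, and whether any copy of libsasl2 is installed at all (the afpd path is the one from this thread; the search directories are just common guesses):

```shell
# List afpd's shared-library dependencies; missing ones are flagged
ldd /usr/local/sbin/afpd

# Look for any installed copy of libsasl2 in the usual locations
find /usr/lib /usr/local/lib /opt -name 'libsasl2*' 2>/dev/null
```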
-bash-4.2$ /usr/sbin/beadm list
BE Active Mountpoint Space Policy Created
napp-it-0.9a9 - - 127K static 2013-04-15 21:52
netatalk-3.0.2 - - 586K static 2013-04-15 21:55
netatalk-3.0.2-backup-1 - - 58.0K static 2013-04-25 21:48
netatalk-3.0.3 - - 3.71M static 2013-04-25 21:48
netatalk-3.0.3-backup-1 - - 70.0K static 2013-06-11 17:37
netatalk-3.0.4 NR / 5.10G static 2014-01-07 12:50
omnios - - 3.83M static 2013-04-15 21:24
omnios-backup-1 - - 65.0K static 2013-04-15 21:48
omnios-r151006 - - 831K static 2014-01-07 12:32
omnios-r151006-backup-1 - - 62.0K static 2014-01-07 12:50
omniosvar - - 31.0K static 2013-04-15 21:24
pre_napp-it-0.9a9 - - 41.0K static 2013-04-15 21:48
pre_netatalk-3.0.2_1366080807 - - 127K static 2013-04-15 21:53
pre_netatalk-3.0.3_1366944500 - - 51.0K static 2013-04-25 21:48
pre_netatalk-3.0.4_1370990221 - - 58.0K static 2013-06-11 17:37
pre_netatalk-3.0.4_1389120602 - - 45.0K static 2014-01-07 12:50
I think this should suffice. Thanks very much!
edit
I have tried to set up AFP with a new install on OmniOS r151008 and it failed the same way.
My other config (OmniOS r151006 with AFP installed and then updated to r151008) works.
I believe you, because the older AFP was working on r151008 for me prior to upgrading it using wget. Can I roll that back somehow?
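Could it be as simple as activating one of the earlier boot environments from my `beadm list` output above? Something like:

```shell
# Activate the BE created before the last netatalk upgrade, then reboot into it
beadm activate pre_netatalk-3.0.4_1389120602
init 6
```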
So apparently installing AFP on top of OmniOS 1008 is not the correct order of operations? Are you suggesting I should reinstall OmniOS 1006, then install AFP, then upgrade to 1008? How do I install 1006 now that it's not the most current stable release?
I've never reinstalled OmniOS. How do I retain my settings and carry over my zpools? How do I back up and restore the napp-it settings? I'm obviously very cautious about this because I do not want to lose any data.
# /usr/local/sbin/afpd -V
ld.so.1: afpd: fatal: libsasl2.so.2: open failed: No such file or directory
Killed
# beadm list
BE Active Mountpoint Space Policy Created
napp-it-0.9d2 - - 315K static 2014-01-08 15:18
netatalk-3.0.4 NR / 2.55G static 2014-01-08 15:29
omnios - - 5.60M static 2014-01-08 14:51
omnios-backup-1 - - 76.0K static 2014-01-08 15:15
omnios-backup-2 - - 115K static 2014-01-08 15:17
omniosvar - - 31.0K static 2014-01-08 14:51
pre_napp-it-0.9d2 - - 41.0K static 2014-01-08 15:15
pre_netatalk-3.1_1389216429 - - 1.00K static 2014-01-08 15:27
Menu Jobs >> scrub >> create autoscrub
Then run it either on a schedule (timetable) or manually via "run now",
or via the CLI:
zpool scrub pool
So I installed OmniOS from scratch using the 151006 USB stick installer. I left things at 151006 (all I did was configure an IP and user). Then I installed napp-it, rebooted, then afp, rebooted. AFP is still not starting and throwing the same error:
Code:
# /usr/local/sbin/afpd -V
ld.so.1: afpd: fatal: libsasl2.so.2: open failed: No such file or directory
Killed
The BE's that have been created thus far:
I had a question about iSCSI/COMSTAR best practices.
Because of limitations in the free backup tool I use to write shit to tape, I need to convert all my CIFS shares into raw iSCSI storage space mapped to a windows file server.
What is the best way to convert a raw, empty pool into usable iSCSI space? Currently I set up a volume that consumes ~90% of the capacity of the entire pool, and map the volume as a COMSTAR logical unit.
My other option is making a single thin-provisioned LU, and then doing a full format in Windows to balloon out the file and prevent a later space-related 'oops'.
I'm leaning towards thin provisioning, just because it seems less janky than having to create a volume under each pool (which has to be a multiple of the pool cluster/stripe size) and then making an LU out of it.
Any thoughts or links to any kind of iSCSI best practices for this would be super appreciated, I desperately need to get all my data spooled off to tape.
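For reference, a sketch of what I'm doing now versus the thin-provisioned alternative (pool and volume names are made up):

```shell
# Current approach: a fully reserved zvol sized to ~90% of the pool
zfs create -V 9T tank/iscsivol

# Alternative: the same zvol thin-provisioned (-s for sparse)
# zfs create -s -V 9T tank/iscsivol

# Register the zvol as a COMSTAR logical unit and expose it
stmfadm create-lu /dev/zvol/rdsk/tank/iscsivol
stmfadm add-view 600144F0...   # LU GUID as printed by create-lu
```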
Next problem: now my sharing is no longer working. I guess those settings don't carry over after a rebuild.
I have no idea how I had this working before, but I need napp-it to host my SMB shares, NFS exports, and AFP shares all on the same filesystems.
I currently have a local user added through napp-it working so that my windows desktop can read and write to folders on the SMB share, but I can't seem to delete the files I create. Permission denied.
Next, I have a Linux desktop that mounts the same share via NFS, and I can only modify the files as root. I need to be able to view and modify the files as the same user (same Unix and Windows usernames).
I also need to be able to manage this using my Mac AFP share of the same filesystem. Read, write, modify the files.
This always seems so hard for me. I've looked at the guides again, and I can't find clear instructions on how to set up sharing in this type of environment.
Seems like you forgot to reboot after installing AFP.
The AFP installer creates a BE and activates it. You are asked to reboot; if you do not, your changes are not in the next default BE.
AFP, NFS and SMB are highly incompatible:
- NFS (v3) connects without authentication, based on client IP, on a good-will basis; this behaves differently on every OS
- AFP authenticates and uses Unix UID/GID and Unix permissions, but respects ACLs
- SMB authenticates and uses Windows SIDs and ACLs only
So the only minimal common denominator is setting permissions to allow everyone; I would at least set NFS to allow everyone. If you need authentication, go SMB all the way and use AFP/NFS only when absolutely needed (even Apple moved to SMB as the default protocol in 10.9 Mavericks).
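A minimal CLI sketch of the "allow everyone" approach, with an example filesystem name:

```shell
# Share the filesystem over SMB and NFS
zfs set sharesmb=on tank/data
zfs set sharenfs=on tank/data

# Reset the ACL so everyone@ gets full access, inherited to new files (f)
# and directories (d); use the ACL-aware /usr/bin/chmod, not GNU chmod
/usr/bin/chmod -R A=everyone@:full_set:fd:allow /tank/data
```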
Comstar iSCSI is very fast, and you can use it with a Windows server for special Windows features that need NTFS. I would just create a volume (90% is too much; keep some space for snaps).
Main problem: if you need files from a snap, you must clone or roll back the whole filesystem to get access. (You may use Windows shadow copies, but they are not at the same level as ZFS snaps.) This is unlike a pure Solaris SMB server, where you have file-based access to snaps via Windows "previous versions". There must be a very important reason to lose this feature, besides giving up the stability and simplicity of a Solaris SMB server. In my own config I was happy for every Windows server that I was able to replace with a Solaris server; my last Windows servers are several AD servers and a Windows webserver where I need Windows AD credentials.
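To illustrate the clone route for a zvol behind an iSCSI LU (all names are examples):

```shell
# Clone the snapshot into a new, writable zvol
zfs clone tank/iscsivol@monday tank/restore

# Register the clone as a second LU and attach it from Windows
stmfadm create-lu /dev/zvol/rdsk/tank/restore

# Afterwards, remove the LU (by its GUID) and destroy the clone
stmfadm delete-lu 600144F0...   # GUID as printed by create-lu
zfs destroy tank/restore
```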
My tape backup program is more or less Windows-only, and needs to use VSS to take the backups. They don't have a Linux client worth a shit, much less a Solaris one. So in order to spool all my stuff to tape, I need to use iSCSI volumes mapped directly to the file server I spin up.
Unless you know of a backup solution I can use directly on OmniOS that gives me the ability to spin data directly to a SuperLoader 3 tape deck.
The next 0.9e contains 5 new day-triggers:
- mon-fri
- mon-sat
- sat-sun
- first-sat
- first-sun
The following two aren't working on my napp-it 0.9e2. I haven't checked the others yet.
- sat-sun
- mon-sat
Has somebody tried the new day triggers with success?
So I'm running into an issue with my new SAN host setup. I'm running a Supermicro X9SRi motherboard, Xeon E5 proc, and 32 GB of ECC UDIMMs. It's all in a Supermicro 24 bay case with the EC26 SAS expander.
I built the server out with OmniOS, the latest stable build, and ran a pkg update on it to bring it to current.
Currently there is an Intel 520-DA 10GbE card in it, as well as my LSI 9211-8i card. I imported two of my old pools: an 8-disk raidz2 made of 1.5 TB WD Green drives (I KNOW they're awful), and a 10-disk raidz2 made of 3TB Reds.
I ran a scrub on both pools since I haven't done one in a few months, and the system is crashing out and core dumping. The system logs are below. I think my 2.5-year-old LSI card is finally starting to crap out on me, but it could be something else. The RAM is ECC and passes memtest just fine. Based on what I understand of the logs, the system tries to talk to the card, the card doesn't respond correctly, and the system faults because it suddenly lost the hardware.
http://pastebin.com/YWCyyz03
Anyone have an idea?
So I'm running into an issue with my new SAN host setup. I'm running a Supermicro X9SRi motherboard, Xeon E5 proc, and 32 GB of ECC UDIMMs. It's all in a Supermicro 24 bay case with the EC26 SAS expander.
I built the server out with OmniOS, the latest stable build, and ran a pkg update on it to bring it to current.
Currently there is an Intel 520-DA 10GbE card in it, as well as my LSI 9211-8i card. I imported two of my old pools: an 8-disk raidz2 made of 1.5 TB WD Green drives (I KNOW they're awful), and a 10-disk raidz2 made of 3TB Reds.
I ran a scrub on both pools since I haven't done one in a few months, and the system is crashing out and core dumping. The system logs are below. I think my 2.5-year-old LSI card is finally starting to crap out on me, but it could be something else. The RAM is ECC and passes memtest just fine. Based on what I understand of the logs, the system tries to talk to the card, the card doesn't respond correctly, and the system faults because it suddenly lost the hardware.
http://pastebin.com/YWCyyz03
Anyone have an idea?
I would attach the disks directly to the 9211 plus the onboard SATA ports, without the expander, and retry the scrub to check whether the expander is the problem.
Gea, do you have recommendations for using a ZFS share as the wwwroot for IIS or a simple Ubuntu LAMP server? An NFS share shows up as a network drive in Windows, which I don't think IIS lets you use as a wwwroot directory. What about a Samba mount in Linux? Can you set your document root to a Samba share in Linux? Unfortunately we cannot use LAMP directly on our Omni server due to various compatibility issues.
I would not expect problems with IIS and wwwroot on an SMB network share, nor with Linux and SMB or NFS mounts.
ps
What's your compatibility problem with LAMP on OmniOS?
The dev doesn't really know Solaris, and I think Drupal has a bunch of dependencies that it wouldn't be able to satisfy.
also, UNRELATED BUG:
I just came across a weird bug in 0.9d2 nightly running on OI today. Any appliance whose operator password ends with an exclamation point fails to add in the Extensions -> Add Appliance menu. If I change the password to not have an exclamation point, the appliance adds fine. Not sure if you knew about it; I know it's a slightly older version of napp-it. Just figured I'd give you the heads up.
You may get all that is needed from the SmartOS repo, which works with OmniOS/OI as well - see http://pkgsrc.joyent.com/packages/SmartOS/2013Q3/x86_64/www/ - though I understand the position of preferring an already-tested config. (For install instructions, see http://www.napp-it.org/downloads/binaries_en.html)
About the password problem
You should follow the restrictions from the napp-it settings menu (allowed: [a-zA-Z0-9,.-;:_#]), as you have the additional layers of Perl, HTML and a webserver, where special characters may have special meanings and must be processed separately when needed.
One last question, I promise! Does napp-it do any sort of "padding" when initializing disks and creating pools, so that a drive from a different vendor with slightly smaller/larger sectors will still work?
Hi guys, I am building a new ZFS server for production use.
I have the following specs but I am not exactly sure if I have enough RAM and how I should split the L2ARC and the ZIL.
2 x six core Intel CPU
36 x 4TB WD4003FZEX drives. I expect to install 9 at a time (4 mirrors and 1 spare), with a 3-4 month waiting time in between, so I don't get all 36 disks from the same batch... I don't want them all to fail one after another.
256 GB RAM, should I upgrade to 384?
4 x 640GB IO accelerators for ZIL and L2ARC. They show up as 2x300GB disks each when installed, which means I have 8 x 300GB showing. I thought I would use 2 x 2-way mirrors for the ZIL and the last 4 striped for L2ARC; I am not sure what's best here.
It will host virtual machines, so I think dedup can be disabled since it's unlikely to dedup much at all; I will enable default compression.
Does it make any difference if I run Solaris 11.1 with ZFS version 34 instead of OpenIndiana with ZFS version 28?
Connectivity is InfiniBand 2x40Gbit + 2x10Gbit Ethernet, in case I need to publish LUNs across longer distances...
Input on the optimal config in terms of cache/dedup/compression is welcome. I expect to end up with 32 drives, which will be 16 mirrors and 4 spares; all together 64TB of rather fast storage.
br,
Paniolo
Only some hints:
- With 4 TB disks, I would think of 3-way mirrors due to the long rebuild time; this increases read performance as well.
- Do you need that much space? VMs are mostly quite small and performance-hungry.
Maybe a smaller high-performance pool built from good SSDs plus a larger pool built from multiple Raid-Z2 vdevs is an option.
- With that much RAM for the ARC, you may skip the slower SSD-based L2ARC and instead add as much RAM as possible.
- You can do L2ARC tests to see if an L2ARC helps.
- I do not know what you mean by a 640 GB IO accelerator, but I suppose it is a PCI-e card with 2 x 320 GB SSDs.
Are they really good for a ZIL regarding latency and small writes? They are mostly optimized for larger reads/writes, which makes them good for L2ARC.
- Never mix the ZIL with any other usage on the same SSD.
Usually the best of all is an 8GB DRAM-based ZeusRAM, followed by SSDs like Intel's S3700.
If you have a 2 x 40Gb network, you have a theoretical max of about 80 GB of network traffic in 10 s (you usually size the ZIL to 10 s of traffic). I do not suppose you can really do more than about 20 GB, so 2-3 x ZeusRAM or one 200 GB Intel S3700 is what I would usually suggest. Maybe your accelerator is very good and a better option.
Avoid dedup in any case. Disable compression or use LZ4 (OmniOS).
I would prefer OmniOS over OI, as it is stable and better maintained.
If you use Solaris 11.1, you have encryption as the main advantage (not useful for high performance) and you need a license from Oracle (you really do need one, as iSCSI seems broken and the fix is not free).
How do you back up your storage?
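As a rough CLI sketch of the mirrored-ZIL plus striped-L2ARC layout discussed above (pool and device names are placeholders):

```shell
# Add a mirrored pair of SSDs as a dedicated ZIL (slog)
zpool add tank log mirror c2t0d0 c2t1d0

# Add SSDs as L2ARC cache devices; cache devices are always striped
zpool add tank cache c2t2d0 c2t3d0
```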
The IO accelerator is just a PCI-e card with 2x300GB on it. When I test write speed and IOPS with a simple CrystalDiskMark, I get around 5000 IOPS on 4K writes, and a bit more with a queue depth of 32, but not much more. Write speed is about 300-350 MB/sec on 512K sequential writes. Latency is good. This is under low to medium load, measured from a VM located on one of the 300GB partitions in vSphere 5.5. If I mirror 2x300GB and another 2x300GB as the ZIL, will that be too slow? Any idea how much better a ZeusRAM would perform?
What do you mean by "followed by SSDs"? A fast ZeusRAM ZIL plus an SSD ZIL?
I will try OmniOS; would you prefer it over Solaris 11.1? I don't mind the license/service contract for Solaris.
I have a backup server which will use Veeam to back up the virtual machines to a fiber-based SAN, over an effectively 16Gbit fiber connection, so speed should be OK.
I can also skip dedup and compression; hopefully 64TB should allow ~250 or so virtual machines.
br,
Paniolo