Strange, this is what we had to do instead:
zpool replace rpool c1t5000C500234C8147d0s0 c5t50000394182AA306d0
which worked.
We then ran (needed, I assume, since this disk needs to be bootable?):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t50000394182AA306d0s0
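Putting the two steps together, the whole sequence looks roughly like this. The device names are the ones from this post; the commands are echoed rather than executed so the sequence can be reviewed before running it for real as root:

```shell
#!/bin/sh
# Sketch of the rpool disk replacement described above.
# Substitute your own cXtY names before running anything.
OLD=c1t5000C500234C8147d0s0   # failed disk (slice 0 of the old rpool device)
NEW=c5t50000394182AA306d0     # replacement disk

# Swap the failed device out of rpool; ZFS resilvers onto the new disk.
echo "zpool replace rpool $OLD $NEW"

# rpool is a boot pool, so the replacement also needs boot blocks
# written to slice 0 once the resilver finishes.
echo "installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/${NEW}s0"
```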
Let me know if my...
OK, perfect. Next problem (ha! sorry): my Google-fu is failing me.
In the web UI, I try to replace the disk (which I removed and replaced in the same slot with a new disk):
Here is the rpool:
pool: rpool
state: DEGRADED
status: One or more devices has been removed by the administrator...
I was able to locate the disk, thanks!
Another question, though. I got a failure alert for this disk last night, but just got the same alert tonight. Does it repeat every 24 hours? I don't see anything else faulted, so I assume that's it.
Figured it out. Since the disk had just started to die, it was still in the pool, faulted but trying to run. This hung up the format command. I let it keep running and eventually the disk was removed and the UI was responsive again.
I assume we just need to pull the disk, wait a bit for things...
Hey Gea,
Just got an alert that one of my rpool disks died. Laaaaaame. However, when I log in, the web UI is unresponsive. If I recall, you previously said it depends on certain commands completing, one of them being format.
When I login to the console and execute format, it just says...
Yup, bad disk, you will need to replace it (you can do it inside of Napp-IT, and it will resilver).
As for the email part, I'm curious to hear from Gea about this as well. (I don't recall if it is part of the licensed/paid add-ons, but I would like this as well.)
So I know which disk is bad, and I am planning to pull it tonight and replace it.
Is there anything I need to do before pulling it? (it is not attached to any pool)
Is there anything I should do before replacing it? (clear anything, etc)
In the past, we have added drives a bunch at a time. My...
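To answer my own questions, a rough sketch of the checks around pulling and replacing the disk. Commands are echoed for review rather than executed; the device name is a placeholder, and the fmadm syntax is an assumption that varies by illumos release:

```shell
#!/bin/sh
# Pre-replacement checklist sketch for a failed disk that is not part
# of any pool. All commands echoed so they can be reviewed first.
DISK=c5t50000394182AA306d0   # placeholder: the failed disk

# 1. Confirm the FMA fault and note which device it points at.
echo "fmadm faulty"

# 2. Double-check that no pool references the disk before pulling it.
echo "zpool status"

# 3. After physically swapping the disk, tell FMA the fault is handled
#    so repeating alert emails stop (assumption: exact subcommand and
#    FMRI argument differ between releases -- check fmadm(1M) first).
echo "fmadm repaired $DISK"
```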
Looks like I spoke too soon. I let the browser spin and spin, wrote the above, came back 5 minutes later and it came up. Now when I hit Pools or Disks it comes up reasonably fast.
Not sure if this has anything to do with it, but I'm fairly certain one of the newly added disks is bad (*shakes fist!!*)...
Running OmniOS + Napp-IT 17.01
It's been working great so far, but I just added 10 new disks (SAS disks connected via an LSI 2007 HBA).
The OS seems fine and the ZFS pools are still working, but in Napp-IT, if I click on Disks or Pools, the browser just spins.
minilog
--- _lib _lib/illumos/get-disk.pl &get_mydisk_format...
Yeah, your calcs make sense for your use case. For me it's different: the OmniOS/napp-it box is my backup "disaster recovery" system (should the primary HA cluster fail), so the only traffic is coming from the other ZFS systems via zfs send/receive.
But even on the primary, I'm still using raidz1 for...
Did you get the ESX pass through and all that stuff working?
I just did nearly the same setup (except I'm using Seagate nearline SAS drives).
My setup:
OmniOS (instead of OI)
Once I installed ESXi 5.1, all I had to do was:
Create local storage on one controller for the OmniOS VM OS...
Trying to answer my own question. So I tried:
zfs mount bakpool/infra
To which i get: cannot mount 'bakpool/infra': 'canmount' property is set to 'off'
You can't see it in the image, but here is the canmount property from the same page:
bakpool/infra canmount off local
can i just...
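Following up on that dangling question: the replication target presumably gets canmount=off on purpose so the backup copy doesn't mount over the live tree, but if you want to browse it you can flip the property. A sketch, echoed for review rather than run, since changing properties on an auto-sync target may confuse the next replication run (an assumption worth checking first):

```shell
#!/bin/sh
# Sketch: make the replicated filesystem mountable again.
FS=bakpool/infra

echo "zfs set canmount=on $FS"
echo "zfs mount $FS"
# Or park it somewhere out of the way first (hypothetical mountpoint):
echo "zfs set mountpoint=/backup/infra $FS"
```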
Good morning,
So I have a napp-it "all in one" running OmniOS. Its primary use is to back up my Nexenta boxes. I am first trying "auto-sync", which automates zfs send/receive.
I created a pool on napp-it called "bakpool".
I am backing up the folder "infra" from the Nexenta box.
When the job...
So i started out with mirrors originally, however, this pool is mainly for reads (application servers, etc) and will not be very write heavy. The majority of reads will hit the Arc and L2arc, so i decided to switch to raidz1 to maximize the usage of disks. (FS is replicated to second array)...
Thanks, it's been a long road (nothing to do with this project, just other stuff coming up).
I have HA all set up and running, and am slowly moving over dev/test VMs to see how perf is, how well failover works, etc. I'm pleased so far but will post a more in-depth update soon.
Hrmmm, apparently Nexenta will let auto-sync replicate to a non-Nexenta box, but limits it to work over SSH only. I'm fine with this, so I'll give this a try too in addition to zrep.
OK, copied zrep to /usr/bin.
Cleared zrep, dumped testpool/fs and started over. Now it works:
root@bns-bsan1:/testpool/fs# touch text.txt
root@bns-bsan1:/testpool/fs# ls
text.txt
root@bns-bsan1:/testpool/fs# zrep refresh bakpool/fs
DEBUG: refresh step 1: Going to 127.0.0.1 to snapshot...
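For anyone following along: once `zrep init` has run, the ongoing replication cycle is just a couple of commands. A sketch, echoed for review; in this loopback test both filesystems live on the same box:

```shell
#!/bin/sh
# Sketch of the ongoing zrep cycle after the init above.
SRC=bakpool/fs   # the zrep master filesystem from this test

# Push the latest changes from the master side to the destination
# (snapshot, incremental send, rotate old snapshots):
echo "zrep sync $SRC"

# Show zrep-managed filesystems and their replication status:
echo "zrep list -v"
```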
Hrmmm, trying this locally first within the napp-it server.
So I created a pool called testpool, then I created a pool called bakpool.
Next, I created an FS called bakpool/fs.
Ran: ./zrep init bakpool/fs 127.0.0.1 testpool/fs
It created testpool/fs for me and I see it in the napp-it GUI.
i...
I know several people on here use zrep to back up their ZFS storage servers, so I'm hoping someone can chime in here.
I am going to be using zrep to one-way sync my production ZFS servers to a napp-it "all in one" server for disaster recovery purposes.
Seems like zrep is the easiest way to do...
Gea, thanks! I decided to just re-install with the stable ISO; I didn't realize the OVA was the "bloody" version. After redoing it, it works like a charm.
Perl API version v5.16.0 of IO::Tty does not match v5.14.0 at /usr/perl5/5.14.2/lib/i86pc-solaris-thread-multi-64int/DynaLoader.pm line 213.
Compilation failed in require at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
BEGIN failed--compilation aborted at...
Gea, many thanks for putting together and maintaining napp-it. I am trying for the first time to back up my Nexenta boxes. As suggested, I used the latest OmniOS OVA:
http://omnios.omniti.com/media/OmniOS-bloody-first-boot.ova
Then I followed the all-in-one how-to (including the OmniOS...
Things have gone well; this project got shelved for several months due to other things in the pipeline getting bumped up.
Recently got the HA cluster up. I initially tried to do it myself, to no avail. Followed the HA cluster installation/setup guide and tried both the command line and GUI...
My $.02 (I've been in your position before): eat the cost and go pro with a reputable recovery service. Most will not charge you unless they can successfully extract data.
I bought the exact same chassis about a year ago and was told by SMC support that an interposer will not fit (not enough clearance for the latch to close on the tray), but I've seen some hacks out on the net to make it work (unless the chassis has changed). YMMV.
My response probably didn't take the existing hardware you already have into account very well. I'm not knowledgeable about the MD3000 stuff, but I don't see any issues as long as you have backups and a contingency plan in place (if you replace the other controller, etc.), but I wouldn't drop TOO...
I like the idea of going with ZFS in your case; my worry, though, would be downtime in the event that something goes wrong.
So my first question would be: how critical is uptime? If it's not (meaning storage being offline an hour is tolerable), I'd say go for it. If it is crucial, I'd go with a commercial ZFS...
What's your comfort level with *nix? That may be a good starting question. Although you mention having ext3/4 volumes, so I would assume you're comfy with it.
For ZFS, I would go with the OI + napp-it setup; you'll pretty much always see that thread on the first or second page on here.
You...
Depends on what kind of throughput you plan to put through it. You don't have to (but you may want to); you can just cascade more JBODs off the same HBA using the ports on the expander.
@Dan, I won't hijack this thread, but I'm interested to learn more about zrep. I am considering going with OI on my backup SAN instead of paying for another Nexenta license like the ones I use on the primaries (if the second box is pure disaster readiness). Googled zrep and found the website, will search a...
you can use RDM (raw device mapping)
http://www.tumfatig.net/20120226/raw-device-mapping-of-local-sata-disks-on-esxi/
(hopefully [H] doesn't butcher this link)
It's still early and I haven't googled this, but:
- Isn't the point of a ZIL/slog that the client write does not have to wait for the write to hit stable media before finishing the process? Isn't that what makes NFS so slow unless you add a ZIL? (Or change the sync setting.)
- Wouldnt cheaper...
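To make the knob in that first question concrete: with NFS, sync writes block until the ZIL (on-pool, or on a separate slog device) has them on stable media, and the dataset's `sync` property controls that behavior. A sketch, echoed for review, since disabling sync trades safety for speed; tank/nfs is a placeholder dataset name:

```shell
#!/bin/sh
# Sketch of inspecting/changing the sync behavior behind the ZIL
# question above.
FS=tank/nfs   # placeholder dataset

echo "zfs get sync $FS"   # standard | always | disabled
# 'disabled' makes sync-heavy NFS fast without a slog, but writes the
# client believes are committed can be lost on power failure -- which
# is exactly the window a slog is meant to close cheaply.
echo "zfs set sync=disabled $FS"
```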
I was hoping for more of a consensus, but I see what you're saying. It's annoying because it's the Nexenta dude I talked to in the first place that told me about the Talos C's! lol. I guess we'll see if anyone else chimes in and then I'll go from there. Should have figured it would be too good to...
I am no OCZ fanboy (I own none), but I have a need for SSDs for ZFS L2ARC. I need SAS because I use HA and need the dual interface (plus all the other drives are SAS and it's on an expander).
L2ARC is not as critical for me (that I know of at least); boxes will have plenty of RAM, will have a...
Although I'm with the posters above regarding commercial solutions (easier support), I wouldn't rule out ZFS. You could also go with a commercial ZFS provider, like Area Data Systems: http://areasys.com/. But I would definitely quote out the EMCs and the other big guys too.