> I had the same problem with it going back to 512-byte sectors. I wouldn't write to the array until this is fixed; my array is now corrupt because I tried the non-destructive benchmark.

The 'going back to 512-byte sectors' is completely normal; nobody ever got any different behavior!
[root@zfsguru /]# zdb -e Storage_1 | grep ashift
zdb: can't open Storage_1: File exists
ashift=12
[root@zfsguru /]#
[root@zfsguru /]# ls /dev/gpt/
Raid5_1 Raid5_2 Raid5_3 Raid5_4 Raid5_5 Raid5_6 start
[root@zfsguru /]#
The only caveat after using the 4K-override is that the HDDs are referenced in the pool by partition instead of by GPT label. Not a real issue; just wondering what's causing it.
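For reference, the `ashift=12` value that zdb reported above encodes the pool's sector size as a power of two. A minimal sketch of the relationship:

```python
# ashift is the base-2 logarithm of the sector size ZFS uses for a vdev:
# ashift=9 -> 512-byte sectors, ashift=12 -> 4 KiB sectors.
def sector_size(ashift: int) -> int:
    """Sector size in bytes for a given ashift value."""
    return 2 ** ashift

print(sector_size(9))   # 512  (traditional disks)
print(sector_size(12))  # 4096 (4K "Advanced Format" disks)
```

So ashift=12 confirms the pool was created with the 4K override active.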
http://lists.freebsd.org/pipermail/freebsd-scsi/2010-September/004520.html
> - Sometimes you'll run into a device that fails part of the probe on boot,
> and you'll end up running into the run_interrupt_driven_config_hooks
> timeout. You see some aborts during probe, and then the 5 minute probe
> timeout kicks in and panics the kernel. For instance:
>
> (probe4:mps0:0:20:0): SCSI command timeout on device handle 0x0012 SMID 81
> mps0: mpssas_abort_complete: abort request on handle 0x12 SMID 81 complete
> run_interrupt_driven_hooks: still waiting after 60 seconds for xpt_config
> Secondly, is there a password protect mechanism for the web-GUI in the works?

Yes, planned for the next release. Also IP-based authentication and optional firewall protection; triple protection against unauthorized access.
> Finally... I noticed that the status page says:
> Memory size 8 GiB, of which 12 GiB kernel memory (307.2 GiB max.)

Yes, your physical memory is 8 GiB; because you used memory tuning during installation, kernel memory is set to 1.5 times your RAM size, thus exceeding your RAM.
> Update number?

The 0.1.8 release should be about a month away. It will be worth the wait, though! Extensions that add functionality are the biggest new feature; this is needed to give everyone the functionality they want without overburdening the main system with things you don't need.
I'll keep you guys updated on my progress.
Oh yes! More than 4000 PENDING sectors; these are VERY dangerous, and just one of them can cause most Hardware RAID / Onboard RAID / Windows-based software RAID setups to kick the disk out of the array, with a lot of headaches as a result!
Pending sectors are bad sectors in ACTIVE data; data that the HDD *SHOULD* be able to read but CANNOT, because the bit errors exceed the internal 40-byte ECC correction capabilities.
Since your disks are EADS, they have normal 512-byte sectors. Due to their high data density the 40-byte ECC of those 512-byte sectors simply isn't enough to prevent occurrences like yours. Using 4KiB sector drives with 100-byte ECC would be able to correct more damage and becomes necessary with ever increasing data densities.
In other words, 2TB 512-byte sector disks could have serious amnesia. Your two disks appear to be suffering from just that.
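To put those ECC figures in perspective, here is a back-of-the-envelope comparison using the approximate 40-byte and 100-byte numbers mentioned above:

```python
# Back-of-the-envelope, using the approximate figures from the text:
# a 512-byte sector carries ~40 bytes of ECC, a 4K sector ~100 bytes.
def ecc_ratio(data_bytes: int, ecc_bytes: int) -> float:
    """ECC bytes available per byte of user data."""
    return ecc_bytes / data_bytes

legacy = ecc_ratio(512, 40)      # ~0.078
advanced = ecc_ratio(4096, 100)  # ~0.024

# The 4K format spends proportionally *less* space on ECC, yet its single
# 100-byte code spans the whole sector, so it can correct a larger error
# burst than any one of the eight separate 40-byte codes covering the
# same 4 KiB of data on a 512-byte-sector disk.
```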
First power down and install the new hard drives, leaving the existing ones untouched; it shouldn't harm anything if you change cables or such, since ZFS 'smells' the disks to see who they really are.
Now power up with your two new disks attached and format those disks (not your existing ones!) with GPT. You may need to set the reserved space to 0, depending on whether the new disks are smaller than the existing ones.
Once you have your disks formatted, you can execute this on the root command line:
zpool replace Raid1z-8TB gpt/1 gpt/FIRSTNEWDISK
and for the second disk:
zpool replace Raid1z-8TB gpt/2 gpt/SECONDNEWDISK
Of course, replace the name in capital letters with your chosen GPT name for those new disks. You MUST use different label names for your new disks!
You may also want to consider using RAID-Z2 (RAID6) with two parity drives; though that would require you to copy all data to some temporary location and then create a new RAID-Z2 pool from your 7 (?) disks.
What does it mean when it is done repairing? Do I still need to buy 2 new drives and get them replaced?

[ssh@zfsguru /]$ zpool status -v
pool: Raid1z-8TB
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scan: scrub in progress since Tue Jan 25 20:34:12 2011
4.37T scanned out of 5.12T at 8.58M/s, 25h15m to go
858K repaired, 85.46% done
config:
NAME STATE READ WRITE CKSUM
Raid1z-8TB ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
gpt/1 ONLINE 0 0 0 (repairing)
gpt/2 ONLINE 0 0 0
gpt/3 ONLINE 0 0 0
gpt/4 ONLINE 0 0 0
gpt/5 ONLINE 0 0 0
errors: No known data errors
pool: zfs_root
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scan: none requested
config:
NAME STATE READ WRITE CKSUM
zfs_root ONLINE 0 0 0
gpt/0 ONLINE 0 0 0
errors: No known data errors
[ssh@zfsguru /]$
Hey justin2net,
It appears you've encountered one of the known bugs in the 'unfinished' mps driver:
http://lists.freebsd.org/pipermail/freebsd-scsi/2010-September/004520.html
Can you check whether this always occurs? Also, does it work if you connect/passthrough only the new SAS2008 controller, thus removing the LSI 1068E?
If you still can't get it working could you try booting the system directly with ZFSguru livecd, circumventing ESXi? Just to rule out any issue there.
I could file a PR and let the FreeBSD people look at it. The LSI people are working together with them on this driver, so I would expect this issue to be solved at some point, though I can't give you any ETA on that.
Please help diagnose this issue so I can report it. At least one other person has reported success with ZFSguru and the SAS2008 controller, but he had no other controllers and did not use ESXi.
I would really like the SAS2008 to work well with ZFSguru; I think it's the best controller chip for ZFS available right now.
For now, I recommend you use other means to protect against unauthorized access. Anyone with access to the ZFSguru interface basically has full root access to your system! Note that by default it does block non-local IP addresses (those not starting with 10.x.x.x or 192.168.x.x), but that's the only protection right now, so beware.
The trick here is to limit the ARC but give KMEM more headroom, even more than your actual RAM. This makes sure that memory fragmentation does not cause you to hit memory panics with you having enough memory free.
You can see how much ZFS is using at the moment in the 'top' output (log in via SSH or run it locally on the shell). Wired memory is kernel memory, Inactive is UFS caching (should be low since you're running a ZFS-only system!), Active means running programs, and Free is totally unused/wasted memory. Ideally you want Wired to take almost all of your RAM, with little "free" memory.
The maximum kmem is determined by whether you run a 32-bit system (1 GiB max kmem) or 64-bit. I'm not sure how the value is calculated, but it is how big kernel memory could theoretically be scaled. Higher settings would prevent booting, so the 1 GiB kmem limit on 32-bit systems is quite a limitation.
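As an illustration only: the tuning described above maps onto the standard FreeBSD loader tunables. The values below are hypothetical for an 8 GiB machine (matching the 12 GiB kmem figure from the status page); they are not taken from this thread and should be adjusted to your own RAM:

```
# /boot/loader.conf -- hypothetical values for an 8 GiB machine
vm.kmem_size="12G"       # kernel address space: 1.5x RAM, headroom vs. fragmentation
vm.kmem_size_max="12G"
vfs.zfs.arc_max="6G"     # cap the ARC safely below physical RAM
```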
I would RMA the two disks which showed more than 1,000 pending sectors, for sure. You should definitely get them refunded/replaced under warranty.
It appears ZFS fixed all the damage, though. Now check the SMART information to see whether all the active bad sectors (pending) are gone and new passive bad sectors (reallocated) are displayed instead. You can see this on the ZFSguru Disks->SMART page.
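As a sketch of what to compare on that SMART page (the attribute IDs are the standard ATA SMART ones; the helper function and raw values are hypothetical):

```python
# Standard ATA SMART attribute IDs relevant to this situation:
#   5   Reallocated_Sector_Ct  -- passive bad sectors, already remapped
#   197 Current_Pending_Sector -- active bad sectors the drive cannot read
#   198 Offline_Uncorrectable  -- sectors that failed offline surface scan

def scrub_healed(before: dict, after: dict) -> bool:
    """True if all pending sectors were resolved after ZFS rewrote the data.

    A simultaneous rise in attribute 5 is expected and harmless here:
    the drive remaps an unreadable sector to a spare when it is rewritten.
    """
    return before.get(197, 0) > 0 and after.get(197, 0) == 0

# Hypothetical raw values read before and after the scrub:
print(scrub_healed({197: 4096, 5: 0}, {197: 0, 5: 4096}))  # True
```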
[ssh@zfsguru /]$ su
[root@zfsguru /]# zpool status -v
pool: Raid1z-8TB
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
scan: scrub in progress since Thu Feb 3 23:56:09 2011
784G scanned out of 5.12T at 9.74M/s, 130h2m to go
64.5K repaired, 14.97% done
config:
NAME STATE READ WRITE CKSUM
Raid1z-8TB DEGRADED 69 0 0
raidz1-0 DEGRADED 69 0 1
gpt/1 REMOVED 0 0 0
gpt/2 ONLINE 69 0 0 (repairing)
gpt/3 ONLINE 0 0 0 (repairing)
gpt/4 ONLINE 0 0 0 (repairing)
gpt/5 ONLINE 0 0 0 (repairing)
errors: Permanent errors have been detected in the following files:
Raid1z-8TB:<0x2ca08>
Raid1z-8TB:<0x2ca0a>
Raid1z-8TB:<0x2ca0b>
Raid1z-8TB:<0x2ca0c>
Raid1z-8TB:<0x2ca0f>
Raid1z-8TB:<0x2ca12>
Raid1z-8TB:<0x2ca14>
Raid1z-8TB:<0x2ca15>
Raid1z-8TB:<0x2ca18>
Raid1z-8TB:<0x2ca1b>
Raid1z-8TB:<0x2ca1d>
Raid1z-8TB:<0x2ca1f>
Raid1z-8TB:<0x2ca40>
Raid1z-8TB:<0x2ca4d>
Raid1z-8TB:<0x2ca50>
Raid1z-8TB:<0x2ca52>
Raid1z-8TB:<0x2ca53>
Raid1z-8TB:<0x2ca57>
Raid1z-8TB:<0x2ca61>
Raid1z-8TB:<0x2ca6d>
Raid1z-8TB:<0x2ca75>
Raid1z-8TB:<0x2ca77>
Raid1z-8TB:<0x2ca79>
Raid1z-8TB:<0x2c97d>
Raid1z-8TB:<0x2c97e>
Raid1z-8TB:<0x2c97f>
Raid1z-8TB:<0x2c980>
Raid1z-8TB:<0x2c983>
Raid1z-8TB:<0x2c98b>
Raid1z-8TB:<0x2c98c>
Raid1z-8TB:<0x2ca9d>
Raid1z-8TB:<0x2c99f>
Raid1z-8TB:<0x2c9a3>
Raid1z-8TB:<0x2c9aa>
Raid1z-8TB:<0x2c9ac>
Raid1z-8TB:<0x2c9b3>
Raid1z-8TB:<0x2c9b4>
Raid1z-8TB:<0x2c9bd>
Raid1z-8TB:<0x2c9c2>
Raid1z-8TB:<0x2cac3>
Raid1z-8TB:<0x2c9c8>
Raid1z-8TB:<0x2c9d2>
Raid1z-8TB:<0x2c9d3>
Raid1z-8TB:<0x2c9d6>
Raid1z-8TB:<0x2c9d8>
Raid1z-8TB:<0x2c9db>
Raid1z-8TB:<0x2c9dc>
Raid1z-8TB:<0x2c9de>
Raid1z-8TB:<0x2c9e0>
Raid1z-8TB:<0x2c9e1>
Raid1z-8TB:<0x2c9e2>
Raid1z-8TB:<0x2c9e5>
Raid1z-8TB:<0x2c9e7>
Raid1z-8TB:<0x2c9ee>
Raid1z-8TB:<0x2c9ef>
Raid1z-8TB:<0x2c9f3>
Raid1z-8TB:<0x2c9f4>
Raid1z-8TB:<0x2c9f7>
Raid1z-8TB:<0x2c9fc>
Raid1z-8TB:<0x2c9fe>
Raid1z-8TB:<0x2c9ff>
pool: zfs_root
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scan: none requested
config:
NAME STATE READ WRITE CKSUM
zfs_root ONLINE 0 0 0
gpt/0 ONLINE 0 0 0
errors: No known data errors
The disk dropout is not good; if a disk is dropped, ZFS can't fix its damage either, and you have no redundancy left to correct errors on the remaining disks. You need to have the original device back in the pool and then run the scrub.
Before running the scrub, you might clear all current errors, by issuing:
zpool clear <poolname>
If you perform a scrub on a DEGRADED pool, ZFS cannot fix the errors it comes across; it can only repair damage when a redundant source exists. The unfixed damage may also cause disks to drop on RAID controllers, because ZFS does not rewrite the bad sectors. Your files may not have copies=2 (which stores every file twice to protect against bad sectors/BER), but metadata always does; so the scrub shouldn't kill ZFS metadata, though it may harm your data.

I rebooted and the drive came back.
When do I put in the 2 new disks? I only have 1 extra SATA port? Do I just do them 1 at a time? Right now it is resilvering... lol
> Is it possible to run a VM on top of ZFSguru? I would like to virtualize Windows Home Server, mostly for the client backups, and have it run on top of ZFSguru...

Actually, the VirtualBox VM solution would probably be the first available extension for the new 0.1.8 release. I can't guarantee it will be part of the release, but if not I still expect it to be ready in February. This would allow VirtualBox with the phpVirtualBox frontend to be installed, letting you manage VirtualBox and its guest OSes through a Flash application, so you could interact with the guest OS through the web-interface. Sleek, huh?
Other extensions will follow, but this can only be done after I implement an extension framework. So it may still take until late February as a reasonable estimate. But that means more extensions can be made, either by me or by you guys, and shared and improved, greatly extending the functionality of ZFSguru systems. This is quite exciting for me, and I'm sure you guys will be excited too once this is working properly.
Umm, you're on WiFi; what did you expect?
On the contrary:
A transfer of similar stuff to a Win7 single-disk 160 GB box; notice how the strong dips are not there.