ZFS pool errors under Napp-it and OI

I've been running Napp-it on OpenIndiana for over a year using the all-in-one solution under ESXi. Up until a few weeks ago, everything was working perfectly. Then a weekly scrub reported a few cksum errors on a couple of drives in the pool. The next week, it showed even more errors. Now I am at the point where some newly added files in the pool are inaccessible. :(

I'm looking for some guidance on what can be done to troubleshoot this issue.

Original hardware list (Jan 2012):
Case: Norco 4220
Motherboard: SuperMicro MBD-X9SCL+-F
CPU: Xeon E3-1220
Memory: Started with 2 x 4GB ECC Unbuffered (Kingston KVR1333D3E9SK2/8G RT)
Controller 1: IBM BR10i
Controller 2: IBM M1015 (both controllers are flashed to IT mode and passed through to the OpenIndiana VM)

I originally had 2 pools (in addition to the rpool) but in Nov 2012, I migrated all the data to 2 new pools created with 4k vdevs. Over time, many drives have been added and removed from the system.

In Feb 2013 I added 2 x 8GB ECC Unbuffered (Kingston KVR1333D3E9SK2/16G)

Here's the current status of the pools as reported by the Napp-it gui: http://pastebin.com/0uPdasnh

One interesting thing to note is that even the rpool is showing errors. This pool is on a vmdk file on a disk which is directly attached to the motherboard.

Here's the status of the drives as reported by Napp-it: http://pastebin.com/wfzb2cm4

All the really important data is backed up, but it would be a huge pita if I had to rebuild the whole thing.

Anyone have any advice on where I should begin troubleshooting? I can provide more details on the setup if required.

Thanks in advance for any help!
 
Sounds like you've got some bad drives, my friend. Those cheap green drives are no good for 24/7 data warehousing. You should run WD Red or RE drives if you are going to stay with Western Digital.

Run a full SMART test on each drive.
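Something like this, for example (a rough sketch; the device name is from my FreeBSD box and is only an example, on OI the paths look different):
Code:
smartctl -t long /dev/ada1     # start the long self-test (runs inside the drive)
# wait for it to finish, then pull the results:
smartctl -a /dev/ada1          # check the self-test log plus reallocated/pending sector counts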

This is what you wanna see....
Code:
$ zpool status
  pool: Pool-00
 state: ONLINE
  scan: scrub in progress since Wed Apr  3 00:43:42 2013
        525G scanned out of 4.03T at 419M/s, 2h26m to go
        0 repaired, 12.72% done
config:

	NAME          STATE     READ WRITE CKSUM
	Pool-00       ONLINE       0     0     0
	  raidz2-0    ONLINE       0     0     0
	    ada1.nop  ONLINE       0     0     0
	    ada2.nop  ONLINE       0     0     0
	    ada3.nop  ONLINE       0     0     0
	    ada4      ONLINE       0     0     0
	    da0.nop   ONLINE       0     0     0
	    da1.nop   ONLINE       0     0     0
	    da2.nop   ONLINE       0     0     0
	    da3.nop   ONLINE       0     0     0
	cache
	  ada0        ONLINE       0     0     0

errors: No known data errors

and these are the drives I am running (edited to remove my serial numbers):

Code:
ada0  114474MB   ADATA SSD S510 120GB (fw 3.3.2)             Always on  ZFS storage pool device  ONLINE
ada1  1907730MB  WDC WD20EFRX-68AX9N0 (fw 80.00A80)          Always on  ZFS storage pool device  ONLINE
ada2  1907730MB  WDC WD20EFRX-68AX9N0 (fw 80.00A80)          Always on  ZFS storage pool device  ONLINE
ada3  1907730MB  WDC WD20EFRX-68AX9N0 (fw 80.00A80)          Always on  ZFS storage pool device  ONLINE
ada4  1907730MB  WDC WD20EFRX-68AX9N0 (fw 80.00A80)          Always on  ZFS storage pool device  ONLINE
da0   1907730MB  WDC WD20EFRX-68AX9N0 (fw 0A80)              Always on  ZFS storage pool device  ONLINE
da1   1907730MB  WDC WD20EFRX-68AX9N0 (fw 0A80)              Always on  ZFS storage pool device  ONLINE
da2   1907730MB  WDC WD20EFRX-68AX9N0 (fw 0A80)              Always on  ZFS storage pool device  ONLINE
da3   1907730MB  WDC WD20EFRX-68AX9N0 (fw 0A80)              Always on  ZFS storage pool device  ONLINE
da4   7386MB     Patriot Memory PMAP (s/n 0708298FA81C7464)  Always on  UFS                      ONLINE
 
Green drives are excellent for use with ZFS, but I'm not at all convinced the hard drives are at fault.

Could you please try another solution (preferably my own, ZFSguru) and boot it in a live environment. Import your pool and see how BSD ZFS identifies it. I would need the output of 'zpool status' from the command line or an SSH shell. After that, run a scrub on the pool and retrieve the zpool status output again.
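Roughly like this from the live environment (just a sketch; the pool name below is a placeholder, use whatever 'zpool import' lists, and -f may be needed because the pool was last in use on another system):
Code:
zpool import                  # list the pools the live system can see
zpool import -f tank          # import by the name shown above
zpool status -v tank
zpool scrub tank              # then, once the scrub has finished:
zpool status -v tank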

Have you run MemTest86+ to rule out bad memory? ECC can help in normal cases, but if the RAM is truly faulty you will still run into issues. Such RAM corruption ought to show up in MemTest86+.
 
If that many disks fail together, it's not a disk problem but a hardware problem.
Since the problem is not limited to a single dedicated controller, it is
probably a RAM problem, followed by a power problem, then a cabling/mainboard/CPU problem.

As you added RAM recently, I would remove that RAM pair first.
If the problem remains, check the other pair.

Then it is a matter of replacing parts to find the cause (start with power).
To find the problem, you may also check the following (current napp-it includes some of them); see the example below the list:
- system log (menu System - Log)
- system faults (menu System - Faults)
- disk details (e.g. controller, disk ID, mpt numbers, sd numbers; menu Disk - Details - prtconfig)
- SMART details and checks (menu Disk - Smartinfos).
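For example, from a shell on the OI VM the same information is available with the standard illumos tools (just a sketch; smartctl requires smartmontools, and the device path and -d option depend on your controller):
Code:
fmadm faulty                    # faults the OS has already diagnosed
fmdump -eV | less               # raw error telemetry (checksum/IO ereports per device)
iostat -En                      # per-disk soft/hard/transport error counters
smartctl -a /dev/rdsk/c9t2d0    # SMART data; device path is only an example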

Booting a live DVD (ZFSguru, OI live, or napp-it To Go, a ready-to-use napp-it installation on a USB stick) may help rule out
the OS/ESXi as the cause, but I would not expect that to be the problem. A memtest may help to find bad RAM
(though ECC should have reported the RAM itself being bad, so it may be, for example, a socket problem).
 
Thanks for the tips everyone. I've had Memtest86+ v4.20 running for nearly ten hours and it has just completed 2 passes of all the tests. Although the first pass completed with no errors, the second pass came back with 600+ errors. So I guess it's bad RAM?
[screenshot of Memtest86+ results: dKQ122G.png]


Also, Memtest shows ECC = off. Changing the ECC settings via Memtest's configuration didn't seem to make a difference. Is this normal?

Is there any way from this Memtest report to tell which stick is bad? Or just run it against each stick individually?

Also, does this mean that ECC is not doing its job? What's the point of purchasing ECC memory if your data still ends up corrupted?

Thanks again for the help!
 
Which slots do you have the RAM in? A lot of the time you have to install identical RAM in pairs in the same channels.

Also, some server motherboards are really finicky about RAM brand; check your manual to see if the RAM you bought is manufacturer certified.

Also, a lot of server motherboards do not tolerate mixing RAM at all. It's one of those things where you just have to drop all the cash at once on a full set of matching sticks.
 
I didn't see a setting in the BIOS to specifically enable ECC, but the BIOS seems to indicate that ECC is enabled. Not sure why Memtest shows it as off:
http://imgur.com/CjIlzUE

Both pairs of DIMMs are in their proper slots. The 4GB sticks were in slots 1A and 1B, and the 8GB sticks in 2A and 2B. This is how they should be installed according to the manual.

The memory is not on Supermicro's list of tested memory, but it is on Kingston's list of compatible memory for this motherboard.

I ran memtest on each of the two newer DIMMs individually, and it is obvious that one DIMM is bad. Each test was run with a single DIMM in the same slot:
[Bad DIMM screenshot]
[Good DIMM screenshot]

Now to get it replaced...

Thanks again for the help everyone. I'm sure I will have some more questions once the system is back together and I need to sort out the damaged files.
 
I didn't see a setting in the BIOS to specifically enable ECC, but the BIOS seems to indicate that ECC is enabled. Not sure why Memtest shows it as off:
http://imgur.com/CjIlzUE
Memtest seems to have a problem showing the correct ECC status. I verified mine using this:
http://hardforum.com/showthread.php?t=1693051

Please continue to post your experience; I'd like to know how it turns out.

Edit: you may also check this:
http://hardforum.com/showpost.php?p=1038749808&postcount=15
 
I ran memtest on each of the two newer DIMMs individually, and it is obvious that one DIMM is bad.
I love ZFS's mighty heavy protection; it warns you immediately if there is a problem anywhere in the chain: PSU, RAM, disk controller, cables, disks, etc. I wonder if any other filesystem would have detected your faulty RAM?

I feel safe with ZFS; it will tell me at once if there are any problems whatsoever. Here is a faulty PSU detected by ZFS:
https://blogs.oracle.com/elowe/entry/zfs_saves_the_day_ta

Another success story for ZFS.
 
Thanks for those links EnderW.

Brutalizer, yes, I am happy that ZFS made me aware of a hardware issue rather than blindly committing bad data to disk.

I ran the code from an Ubuntu 12.04 LiveCD to check ECC, and it appears to be enabled (see output below). I guess I'm still confused as to why the memory is causing corrupt data in my ZFS pool. Is it so badly damaged that even ECC can't detect or correct the errors? Is this a typical failure mode for ECC memory?

For now, I'm going to RMA the bad memory and go back to having only 8GB for a while -- I guess I'll have to cut back on the number of running VMs.

Code:
ubuntu@ubuntu:~$ sudo gcc ecc.c -o ecc
ecc.c: In function ‘main’:
ecc.c:37:21: warning: format ‘%lx’ expects argument of type ‘long unsigned int’, but argument 2 has type ‘int’ [-Wformat]
ecc.c:43:21: warning: format ‘%lx’ expects argument of type ‘long unsigned int’, but argument 2 has type ‘int’ [-Wformat]
 
ubuntu@ubuntu:~$ sudo ./ecc
5004-5007h: 20 10 66 3
5008-500Bh: 20 10 66 3
ubuntu@ubuntu:~$

Code:
ubuntu@ubuntu:~$ sudo dmidecode -t memory
# dmidecode 2.11
SMBIOS 2.7 present.
 
Handle 0x0027, DMI type 16, 23 bytes
Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: Single-bit ECC
        Maximum Capacity: 32 GB
        Error Information Handle: No Error
        Number Of Devices: 4
 
Handle 0x002A, DMI type 17, 34 bytes
Memory Device
        Array Handle: 0x0027
        Error Information Handle: No Error
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 8192 MB
        Form Factor: DIMM
        Set: None
        Locator: DIMM_1A
        Bank Locator: BANK0
        Type: DDR3
        Type Detail: Synchronous
        Speed: 1333 MHz
        Manufacturer: Kingston        
        Serial Number: 2F000000  
        Asset Tag: A1_AssetTagNum0
        Part Number: 9965525-037.A00LF
        Rank: 2
        Configured Clock Speed: Unknown
 
Handle 0x002D, DMI type 17, 34 bytes
Memory Device
        Array Handle: 0x0027
        Error Information Handle: No Error
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 4096 MB
        Form Factor: DIMM
        Set: None
        Locator: DIMM_2A
        Bank Locator: BANK1
        Type: DDR3
        Type Detail: Synchronous
        Speed: 1333 MHz
        Manufacturer: Kingston        
        Serial Number: 44000000  
        Asset Tag: A1_AssetTagNum1
        Part Number: 9965525-018.A00LF
        Rank: 2
        Configured Clock Speed: 32768 MHz
 
Handle 0x0030, DMI type 17, 34 bytes
Memory Device
        Array Handle: 0x0027
        Error Information Handle: No Error
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 8192 MB
        Form Factor: DIMM
        Set: None
        Locator: DIMM_1B
        Bank Locator: BANK2
        Type: DDR3
        Type Detail: Synchronous
        Speed: 1333 MHz
        Manufacturer: Kingston        
        Serial Number: 2F000000  
        Asset Tag: A1_AssetTagNum2
        Part Number: 9965525-037.A00LF
        Rank: 2
        Configured Clock Speed: 6 MHz
 
Handle 0x0033, DMI type 17, 34 bytes
Memory Device
        Array Handle: 0x0027
        Error Information Handle: No Error
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 4096 MB
        Form Factor: DIMM
        Set: None
        Locator: DIMM_2B
        Bank Locator: BANK3
        Type: DDR3
        Type Detail: Synchronous
        Speed: 1333 MHz
        Manufacturer: Kingston        
        Serial Number: 44000000  
        Asset Tag: A1_AssetTagNum3
        Part Number: 9965525-018.A00LF
        Rank: 2
        Configured Clock Speed: Unknown
 
IMHO even ECC memory could corrupt data. I would MD5 all your files and compare them to your (oldest) backup. Some types of files have internal checksums you can check as well (FLAC, RAR, etc.). If your data was corrupt when you wrote it to ZFS, it will of course be corrupt on disk as well, and silent data corruption is not fun...
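For example, something like this (a rough sketch using GNU tools, assuming the backup and the pool are both mounted and share the same directory layout; the paths are placeholders):
Code:
# build a checksum manifest from the (known good) backup copy
cd /path/to/backup && find . -type f -exec md5sum {} + > /tmp/backup.md5

# verify the live pool against it; only mismatches and missing files are printed
cd /path/to/pool && md5sum --quiet -c /tmp/backup.md5

# formats with internal checksums can also be tested directly, e.g.
flac -t somefile.flac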
 
I ran the code from an Ubuntu 12.04 LiveCD to check ECC, and it appears to be enabled (see output below). I guess I'm still confused as to why the memory is causing corrupt data in my ZFS pool. Is it so badly damaged that even ECC can't detect or correct the errors? Is this a typical failure mode for ECC memory?
I am by no means an expert on this subject, but I believe ECC memory is susceptible to failure like any other memory. ECC is merely a feature and can only work properly when the module itself is functioning correctly.
 
If the memory was the problem, you may have gotten corrupted data after all. If the corruption occurs before ZFS can calculate the checksum, there is not much ZFS can do about it.
 
This is similar to the situation with a RAID controller and disks: the controller can detect a faulted disk but cannot detect other hardware, driver, or PCI-slot problems.
ZFS detects most of these errors anyway because it calculates checksums on the real data.
 
I had a stick of Kingston ECC RAM that went totally to crap. I snapped it in half, threw it out, and replaced it with Crucial (which I should have bought to begin with...).
 
IMHO even ECC memory could corrupt data. I would MD5 all your files and compare them to your (oldest) backup. Some types of files have internal checksums you can check as well (FLAC, RAR, etc.). If your data was corrupt when you wrote it to ZFS, it will of course be corrupt on disk as well, and silent data corruption is not fun...
How could ECC corrupt data? Do you have more information on this? Links?
 
IMHO even ECC memory could corrupt data. I would MD5 all your files and compare them to your (oldest) backup. Some types of files have internal checksums you can check as well (FLAC, RAR, etc.). If your data was corrupt when you wrote it to ZFS, it will of course be corrupt on disk as well, and silent data corruption is not fun...

If by ECC memory you mean "bad ECC memory" then I agree. As with all memory, I would run tests on newly bought sticks, like a 24-hour run of memtest86. I would even do this after updating the BIOS or changing BIOS parameters related to memory.

ECC is there first and foremost to correct, or at least detect, errors caused by external influences on the memory: cosmic rays, electromagnetic interference, etc.

Now, if a stick goes bad, ECC can still help: if the defect is small enough it can correct it; if a little bigger, detect it.

If all goes to hell then it's no help.

As for ZFS and corrupt data: in the context of a file server I would always make an initial check with checksums if possible, the idea being that afterwards you never need to do it again. If you generate the files over the network (databases, work documents), then the need to keep many backups arises.
 
Memtest86+ v5rc1 ran on the bad DIMM for about 1h20m and then locked up. But it reported 32,000 errors during that time.

I'm quite confident that the memory is the cause of the problem.
 
I had a stick of Kingston ECC RAM that went totally to crap. I snapped it in half, threw it out, and replaced it with Crucial (which I should have bought to begin with...).
Why destroy it when you could have had it replaced under the lifetime warranty and then sold it? Kingston is quality memory and a popular choice for servers.


How could ECC corrupt data? Do you have more information on this? Links?
I believe he's just saying that bad memory, ECC or not, can corrupt data. It is my understanding that ECC memory, when functioning correctly, can correct errors caused by other sources, but if a stick is bad, ECC is not going to help. Checksums (via ZFS or other means) should still show you that something is amiss, as in the OP's case.


Interesting. But is it possible for one stick to give false failures while the other 3 sticks don't give any failures? Regardless, before I return the stick that appears bad, I will run Memtest86+ v5rc1 against it for a few hours and see what it reports.
Anything is possible :p
It definitely sounds like you have a bad stick; I just thought I would post that link, as it was new info for me and might be helpful for future testing.
 
I got the stick over a year ago and had no warranty/purchase info. I guess it's all subjective: I have always gone with Crucial and had zero problems, and I'm not the only person I know who's had issues with Kingston. Maybe it was because the 8GB sticks were new and they had QA issues? Dunno...
 
On the other hand, out of the 4 sticks of Crucial I ever bought, I had to do 6 RMAs until I gave up.
 
LOL. I guess it is personal experience. All I can say is 'fool me once, shame on you...' You probably feel the same way.
 
So I have my server back together and the pool online, and I would like to resolve any remaining data corruption. A scrub of the entire pool has just completed, and zpool status still reports many errors. Here is the output:
Code:
sudo zpool status -v pool_4k
  pool: pool_4k
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 128K in 9h16m with 23 errors on Tue Apr  9 16:40:33 2013
config:

        NAME                        STATE     READ WRITE CKSUM
        pool_4k                     DEGRADED     0     0    32
          mirror-0                  DEGRADED     0     0     0
            c10t50014EE0036CF68Ed0  ONLINE       0     0     0
            c10t50014EE0036D0018d0  DEGRADED     0     0     0  too many errors
          mirror-2                  DEGRADED     0     0    28
            c9t2d0                  DEGRADED     0     0    29  too many errors
            c9t3d0                  DEGRADED     0     0    28  too many errors
          mirror-3                  DEGRADED     0     0     0
            c9t4d0                  ONLINE       0     0     0
            c9t5d0                  DEGRADED     0     0     0  too many errors
          mirror-4                  DEGRADED     0     0    10
            c10t50014EE207A0D2F9d0  DEGRADED     0     0    10  too many errors
            c10t50014EE0AE214005d0  DEGRADED     0     0    10  too many errors
          mirror-5                  DEGRADED     0     0    26
            c9t13d0                 DEGRADED     0     0    26  too many errors
            c9t14d0                 DEGRADED     0     0    26  too many errors
        logs
          c3t2d0                    ONLINE       0     0     0
        cache
          c10t5001517BB29D9C18d0    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        /pool_4k/user/lightroom/data/2013/03/20130331_091431.nef
        /pool_4k/user/lightroom/data/2013/03/20130331_091906.nef
        pool_4k/user:<0x4a123>
        /pool_4k/user/lightroom/data/2013/03/20130330_162202.nef
        pool_4k/user:<0x4a033>
        pool_4k/user:<0x4a03e>
        /pool_4k/user/lightroom/data/2013/03/20130331_100942.nef
        /pool_4k/user/lightroom/data/2013/03/20130330_170617-2.nef
        /pool_4k/user/lightroom/data/2013/03/20130330_170942.nef
        pool_4k/user:<0x4a06c>
        /pool_4k/user/lightroom/data/2013/03/20130331_101210-2.nef
        pool_4k/user:<0x4a17c>
        /pool_4k/user/lightroom/data/2013/03/20130331_103309.nef
        /pool_4k/user/lightroom/data/2013/03/20130331_103315.nef
        /pool_4k/user/lightroom/data/2013/03/20130331_090643.nef
        pool_4k/user:<0x4a095>
        /pool_4k/user/lightroom/data/2013/03/20130331_105016.nef
        pool_4k/user:<0x4a0a3>
        /pool_4k/user/lightroom/data/2013/03/20130331_123306.nef
        /pool_4k/user/lightroom/data/2013/03/20130331_123312.nef
        /pool_4k/user/lightroom/data/2013/03/20130331_123408.nef
        pool_4k/user:<0x4a0d0>
        /pool_4k/user/lightroom/data/2013/03/20130331_090941.nef
        /pool_4k/user/lightroom/data/2013/03/20130331_160243.nef
        /pool_4k/user/lightroom/data/2013/03/20130331_160254-2.nef
        /pool_4k/user/lightroom/data/2013/03/20130331_160352.nef
        /pool_4k/vmware/plexbuntu/plexbuntu-flat.vmdk
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130331_091906.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130330_162227.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130330_165648-2.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130330_170617-2.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130330_174025.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130331_101210-2.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130331_090435.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130331_103315.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130331_090643.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130330_174250.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130330_174507.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130331_123312.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130331_123408.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130331_085959.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130331_160243.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130331_160352.nef
        pool_4k/user@daily-1353013159_2013.04.01.23.00.05:/lightroom/data/2013/03/20130331_090022.nef
        pool_4k/media:<0x390702>
        pool_4k/media@daily-1353013772_2013.04.02.06.00.03:/games/steam/steamapps/common/Torchlight II/PAKS/DATA.PAK
        pool_4k/media@daily-1353013772_2013.04.02.06.00.03:/games/steam/steamapps/common/dota 2 beta/dota/pak01_018.vpk

The files listed are basically a recent camera import, a vmdk, and snapshots from around that time, all of which can be easily recovered. Can I simply delete all the files? And what are the entries that look like pool_4k/user:<0x4a06c>?

Thanks again for all the help, this is a learning experience for me!
 
- back up all data
- delete corrupted files and folders

- rerun a scrub
- if there are new errors, the problem persists (see the example below)

http://docs.oracle.com/cd/E19082-01/817-2271/gbctx/index.html
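Roughly, using the names from your zpool status output (a sketch only):
Code:
# delete the damaged files (restore them from backup afterwards)
rm /pool_4k/user/lightroom/data/2013/03/20130331_091431.nef
# ...and so on for the other listed files

# snapshots referencing bad blocks cannot be repaired, only destroyed
zfs destroy pool_4k/user@daily-1353013159_2013.04.01.23.00.05
zfs destroy pool_4k/media@daily-1353013772_2013.04.02.06.00.03

# then rerun a scrub and check the result
zpool scrub pool_4k
zpool status -v pool_4k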

Thanks Gea,

All the important data in this pool is backed up on an external system. It appears that only newly written data has any problems.

Each time I delete a file, zpool status -v changes from showing the filename to showing (what I think is) its object ID.

Before executing the rm command, there is a line such as
  • /pool_4k/user/lightroom/data/2013/03/20130330_170617-2.nef
After running sudo rm /pool_4k/user/lightroom/data/2013/03/20130330_170617-2.nef there is now the following instead:
  • pool_4k/user:<0x4a061>

Also, I deleted the snapshots which had bad data associated with them. Now there are entries similar to the following:
  • <0x20a1>:<0x4a095>

Will all these object ID entries disappear after the next scrub? If not, how can they be cleaned up?
 
Will all these object ID entries disappear after the next scrub? If not, how can they be cleaned up?

These are references to a file without the file being there (bad metadata).

Oracle writes:
If an object in the meta-object set (MOS) is corrupted, then a special tag of <metadata>, followed by the object number, is displayed.

If the corruption is within a directory or a file's metadata, the only choice is to move the file elsewhere. You can safely move any file or directory to a less convenient location, allowing the original object to be restored in place.


So you can either move/restore the file, or, as another option, delete the folder with the problem.
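In my experience the <0x...> entries remain until the pool's error log rotates; once the affected files and snapshots are gone, a clear plus one or two further scrubs normally empties the list (a sketch, not guaranteed for every case):
Code:
zpool clear pool_4k
zpool scrub pool_4k
zpool status -v pool_4k    # if <0x...> entries are still listed, run one more scrub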
 