Deleted Folder in Root of ZFS Share-Need to Restore

JWeavis

[H]ard|Gawd
Joined
Aug 11, 2000
Messages
1,722
It appears that a folder got deleted on my system. Of course I had the mapping done using the admin account, so there was no prompt and I didn't notice it until today. It contained all of my actual SD/DVD rips. I haven't scrubbed it but also have no snaps. Any chance of getting the folder back?

Well, I might have some old snaps:
Code:
Datapool Snapshots

NAME	USED	AVAIL	REFER	MOUNTPOINT
rpool/ROOT/oi_151a@install	25.1M	-	3.38G	-
rpool/ROOT/oi_151a@2011-02-05-20:39:53	87.0M	-	3.57G	-
rpool/ROOT/oi_151a@2011-03-23-14:45:32	179M	-	3.90G	-
rpool/ROOT/oi_151a@2011-11-26-19:39:36	145M	-	4.55G       -
 
Did you have a storage pool or was everything on your root pool?

Please provide a 'zpool status' and 'zfs list -t all'.
 
Folder that was deleted was jweavis/jw/videos.

Code:
zpool status
  pool: jweavis
 state: ONLINE
  scan: scrub repaired 0 in 11h29m with 0 errors on Mon Nov 21 10:29:52 2011
config:

	NAME         STATE     READ WRITE CKSUM
	jweavis      ONLINE       0     0     0
	  raidz1-0   ONLINE       0     0     0
	    c1t1d0   ONLINE       0     0     0
	    c1t2d0   ONLINE       0     0     0
	    c1t10d0  ONLINE       0     0     0
	    c1t4d0   ONLINE       0     0     0
	    c2d1     ONLINE       0     0     0
	cache
	  c1t8d0     ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  c2d0s0    ONLINE       0     0     0

errors: No known data errors
Code:
zfs list -t all
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
jweavis                                     6.32T   824G  54.3K  /jweavis
jweavis/jw                                  6.32T   824G  6.32T  /jweavis/jw
jweavis/otherz                              51.1K   824G  51.1K  /jweavis/otherz
rpool                                       10.7G  23.1G  46.5K  /rpool
rpool/ROOT                                  6.76G  23.1G    31K  legacy
rpool/ROOT/oi_151a                          6.75G  23.1G  3.98G  /
rpool/ROOT/oi_151a@install                  25.1M      -  3.38G  -
rpool/ROOT/oi_151a@2011-02-05-20:39:53      87.0M      -  3.57G  -
rpool/ROOT/oi_151a@2011-03-23-14:45:32       179M      -  3.90G  -
rpool/ROOT/oi_151a@2011-11-26-19:39:36       145M      -  4.55G  -
rpool/ROOT/openindiana                      17.2M  23.1G  4.55G  /
rpool/ROOT/pre_napp-it-0.415d_update_02.05   100K  23.1G  3.57G  /
rpool/ROOT/pre_napp-it-0.415k_update_03.23    93K  23.1G  3.90G  /
rpool/dump                                  1.87G  23.1G  1.87G  -
rpool/export                                20.0M  23.1G    32K  /export
rpool/export/home                           19.9M  23.1G    32K  /export/home
rpool/export/home/jweavis                   19.9M  23.1G  19.9M  /export/home/jweavis
rpool/swap                                  1.99G  24.9G   126M  -
 
Ok, so what can I do to make it so:
1) I cannot delete the folders in jw but can delete all child objects in those folders?
2) Have a "recycle bin" for something like this?

I have a scrub scheduled, but it doesn't appear to have run for a while.
Code:
scrub	autoscrub	scrub pool	jweavis	 	 	every	sun	23	0	1297117093	active	20.nov 23:00	-	-	run now	delete

I was in the process of getting ready to copy this data off the system to USB drives, but have been unable to get them to mount. I'm about ready to just mount them over the network and back the data up that way.

Thanks for the help!
 
1. Some kind of Solaris ACL thing, maybe? No idea, sorry.

2. the easiest way to do this with ZFS is to periodically make snapshots of the entire tree. that way, if you spazz and delete something by mistake, you can recover files and such from the automatically mounted snapshots.
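For example, with pool/filesystem names like the ones in this thread (the snapshot name is made up):

```shell
# Take a recursive snapshot of everything in the pool (name is hypothetical)
zfs snapshot -r jweavis@before-cleanup

# Deleted files remain visible under the hidden .zfs directory
# of the filesystem they lived in:
ls /jweavis/jw/.zfs/snapshot/before-cleanup/videos

# Copy the deleted folder back out of the read-only snapshot
cp -r /jweavis/jw/.zfs/snapshot/before-cleanup/videos /jweavis/jw/
```

Snapshots only protect what was on disk when they were taken, so this needs to run on a schedule (cron, Time Slider, or a napp-it job) to be useful.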
 
2. the easiest way to do this with ZFS is to periodically make snapshots of the entire tree. that way, if you spazz and delete something by mistake, you can recover files and such from the automatically mounted snapshots.
This.

I would also recommend using different filesystems (eg zfs create pool/MYIMPORTANTRIPS :D) for any data that has obvious delineation. And then set up automatic snapshots for that file system.

Of course, that also means this would be a separate CIFS Share (or NFS Mount).

Sucks bro, live and learn.
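Roughly like this, assuming the pool from this thread (the filesystem name is made up):

```shell
# Give the rips their own filesystem, with its own properties
zfs create jweavis/rips

# It can then get its own snapshot schedule, independent of the rest
zfs snapshot jweavis/rips@2011-11-27
```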
 
+ You can create an area to save your important stuff that is rarely modified, and then set it readonly. Then just set it writable when you add new stuff, and make it readonly again after.
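Something like this (filesystem name is hypothetical):

```shell
# Lock the archive filesystem against any changes
zfs set readonly=on jweavis/archive

# Unlock it briefly to add new rips, then lock it again
zfs set readonly=off jweavis/archive
# ... copy new files in ...
zfs set readonly=on jweavis/archive
```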
 
Ok, is there any way to see what's in the Pool Snaps in the first post?
Code:
rpool/ROOT/oi_151a@2011-02-05-20:39:53	87.0M	-	3.57G	-
rpool/ROOT/oi_151a@2011-03-23-14:45:32	179M	-	3.90G	-
rpool/ROOT/oi_151a@2011-11-26-19:39:36	145M	-	4.55G	-
I created a new snap just now, and might change the mapping from jw to the actual folders so that they cannot be accidentally deleted, at least until I understand all the ACLs.
 
Ok, is there any way to see what's in the Pool Snaps in the first post?
Code:
rpool/ROOT/oi_151a@2011-02-05-20:39:53	87.0M	-	3.57G	-
rpool/ROOT/oi_151a@2011-03-23-14:45:32	179M	-	3.90G	-
rpool/ROOT/oi_151a@2011-11-26-19:39:36	145M	-	4.55G	-
I created a new snap just now, and might change the mapping from jw to the actual folders so that they cannot be accidentally deleted, at least until I understand all the ACLs.
By the way, you can't change the mapping of your storage zpool's filesystems to the actual folders (maybe by changing mountpoints, but that defeats the purpose of the kind of management I'm suggesting).

My (our) suggestion is to create segregated filesystems with auto-snapshots. This helps with space management when you have a filesystem storing data that changes frequently (maybe disk/image-level backups that are only kept for a given period) and doesn't need many past snapshots, compared to a more steady-state filesystem storing data you want to be able to revert any changes to (documents, media, photos, etc.).

You can view past snapshots over CIFS by right-clicking and browsing the Previous Versions tab (assuming the client is Windows-based). In OpenIndiana you can browse via Nautilus's Time Slider. I can't remember exactly how to do it via the terminal; I think it's something like cd /zpool/filesystem/.zfs (or /zpool/filesystem/subfolderetcetc/.zfs).

What you're listing there is the rpool, which is your root storage pool, i.e. where your OS is installed. Do you really think the data you're missing is in there?
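That said, if you just want to peek inside those rpool snaps from a terminal, snapshots of the root filesystem should be reachable under its hidden .zfs directory (present by default, just hidden):

```shell
# Snapshots of the root filesystem live under /.zfs/snapshot
ls /.zfs/snapshot/

# Browse one of them like a normal (read-only) directory tree
ls "/.zfs/snapshot/2011-03-23-14:45:32/"
```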
 
@jonnyjl - Thanks for the suggestion. I'll work on mapping that out. Until then, I have removed the map to \jw\ from my Windows system and set up maps for the folders I need very frequently.

No, I'm guessing there is no data I need there; I just didn't understand what those were. Still very green when it comes to *nix/ZFS. The OpenIndiana box is the only non-Windows system I have. Didn't go with Home Server since they removed the cool disk features.
 
@jonnyjl - Thanks for the suggestion. I'll work on mapping that out. Until then, I have removed the map to \jw\ from my Windows system and set up maps for the folders I need very frequently.

No, I'm guessing there is no data I need there; I just didn't understand what those were. Still very green when it comes to *nix/ZFS. The OpenIndiana box is the only non-Windows system I have. Didn't go with Home Server since they removed the cool disk features.
Cool, it's a little daunting at first. I think Oracle still has the ZFS administration docs available, so it might be helpful to read those. That's how I learned, and OpenSolaris was my first real *nix-based system too.

Quick and dirty, ZFS.
ZFS "starts" with zpools.
zpools are made up of vdevs (zpool status).
vdevs are made up of your disks (usually the entire disk). This is where you choose your redundancy level (vdevs made up of mirrors, parity (raidz#), or none).

Data is stored in filesystems (or, if you need a block device, zvols) on the zpools (zfs list).
Filesystems have controllable attributes (compression, dedup, CIFS, NFS, quotas, some ACL inheritance stuff).
You snapshot a filesystem.
Snapshots can be managed by Time Slider (time-slider-setup).

For CIFS (how we share in a Windows world), filesystems are shared out. As far as I know, "folders" within a filesystem cannot be separate shares (a departure from the Windows world).

PS If you haven't already, you might want to look into _Gea's project, Napp-it.
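The whole hierarchy in command form (disk and pool names are hypothetical):

```shell
# A pool named tank, built from one raidz vdev of three disks
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0

# A filesystem on that pool, with an attribute set
zfs create tank/media
zfs set compression=on tank/media

# Snapshot the filesystem, then list everything in the pool
zfs snapshot tank/media@first
zfs list -t all -r tank
```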
 
This wouldn't be possible via the kernel-level CIFS service implemented in OI, no?

Isn't that a CIFS client, not a server? Although I have almost no experience with OI.
 
@jonnyjl - Thanks, I'm already using Napp-it, if it wasn't for that, I wouldn't be using any of this at all =)
 
Isn't that a CIFS client, not a server? Although I have almost no experience with OI.
No. It's a kernel-level service.

You can run Samba, but in the past I've heard it had very iffy performance on OpenSolaris. I've never looked into it on OI; the built-in stuff works for my purposes (ACLs/Previous Versions are all I really care about when it comes to CIFS).
 
Interesting. That would be one difference between Linux and OpenSolaris. In Linux, the kernel-level CIFS is a client only (it lets you connect to other systems' CIFS shares but does not allow other systems access to its own shares), with the CIFS server being a user-space application that is completely separate from the kernel.
 
Interesting. That would be one difference between Linux and OpenSolaris. In Linux, the kernel-level CIFS is a client only (it lets you connect to other systems' CIFS shares but does not allow other systems access to its own shares), with the CIFS server being a user-space application that is completely separate from the kernel.

Although you can use Samba on Solaris, most people use the kernel-based CIFS/SMB server developed by Sun for Solaris.
It mostly performs better, has better Windows and ACL integration, and acts more like a real Windows server.

And it is perfectly integrated into ZFS as a simple dataset property. Besides easy handling, you can export/import such a pool
and the SMB and NFS shares stay intact.

Caveat: because sharing is a property of a ZFS folder, which is an independent filesystem, just like a partition on conventional
filesystems with independent properties, you cannot nest shares like you can with Samba or Windows.

With this in mind, you must be careful with your filesystems. If you delete one, it is not the deletion of a folder; it is the destruction
of a filesystem (like destroying a partition). There is no unformat in ZFS that can help then. Even the snaps of that ZFS dataset are deleted,
because they were part of the filesystem. With snaps, you can perfectly restore a former state of files, folders or volumes within a filesystem.
If you destroy the filesystem itself, you need a backup or a replication (a 1:1 copy).

If you destroy a complete pool, you can re-import it as long as the needed disks are still available.


Terminology:
You have disks. From disks you build vdevs (single disks or Raids).
One or more vdevs make up a pool. The pool size can grow nearly without limit by adding more vdevs.

On a pool you can create ZFS folders (which allow very flexible handling of the whole pool capacity).
Such a ZFS folder = dataset = filesystem = pool-partition is much, much more than a simple folder within a filesystem,
even when it looks similar at first view. So never think of a ZFS folder as just a folder.
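A sketch of the sharing-as-a-property idea (pool and share names are hypothetical):

```shell
# Sharing is just a dataset property; no separate server config file
zfs set sharesmb=name=media tank/media

# The share travels with the pool: after an export/import cycle,
# the SMB share comes back automatically
zpool export tank
zpool import tank
```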
 
Thanks a lot for the explanation. This interests me for work; I may have to set up a test server to compare performance and features. That said, I'm not sure when, since the boss keeps me very busy and my primary role is research programmer, not system administrator.
 