Bulk restore files from "deleted files"

I’m running ownCloud 10.0.10 on an Ubuntu 14.04.6 server (yes, it’s old and needs updating, I know).

I have had an “incident” where a lot of files seem to have been deleted, and I need to restore them. There are two things I’d like some pointers / help with.

1/ Is there a way to find out which user deleted a file and which sync client they were using at the time (or whether they were using the web interface)?

2/ Is there a way to bulk restore files? I think I want to select all files deleted on a specific day (27/10/2020) and have them restored to their original locations, unless a file already exists there.

I know I can scroll through the deleted files area and tick each file, but that would take me a very, very long time, so I’m looking for something on the command line, maybe using occ?

Thanks in advance

Have a look in your Apache and ownCloud logs. Depending on your log level in ownCloud, you might find it there. If not, the Apache access log should show every single request to the server, and there you should be able to find the WebDAV DELETE requests, each with an IP address and perhaps a username.
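
For example, a first pass might look like this (the log path and the date format are assumptions based on Apache’s default combined log; adjust both to your setup):

grep "27/Oct/2020" /var/log/apache2/access.log | grep '"DELETE '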

I don’t think there is a way in the web interface. I would check whether there is something in the API (I doubt it, as the ownCloud client doesn’t offer any file-restore functionality) and otherwise just restore the files in the storage backend and run a file scan for the user.
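
The rescan step might look like this, assuming the web server runs as www-data and occ lives in the ownCloud root (adjust both paths to your installation):

sudo -u www-data php /var/www/owncloud/occ files:scan <username>
# or, to rescan all users:
sudo -u www-data php /var/www/owncloud/occ files:scan --all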

Thanks for the reply @eneubauer,

The Apache logs and the backend restore / rescan were on my list, but in the “ways I really don’t want to do this if I can avoid it” pile. Do you know whether version history will be retained if I recover files that way?

I’d better get grepping log files.

I’m pretty sure it won’t be retained, as each recovered file will essentially be a new file in the backend.

It’s really not that hard; I honestly find it kind of fun and therapeutic :wink: But I can also see how that wouldn’t be everyone’s definition of fun :joy:

It will probably look something like this:
grep "<time & date>" /path/to/logfile | grep DELETE
A few further tips:

  1. If your output is flooded with unrelated messages, filter them out with: grep -v "first\|second unrelated message[\|...]"
  2. If you want only specific fields, pipe into cut: cut -f <field numbers separated by commas> -d "<delimiter>"
  3. Once you’re down to the different users, pipe into sort | uniq to get each result just once.
  4. With sort | uniq -c | sort -nk1 you can count unique appearances and sort by that count, though I’m not sure that’s useful for you. (All of these combined are sketched below.)
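
Putting the pieces together, a sketch that counts DELETE requests per user for that day (the log path and field position are assumptions based on Apache’s default combined log, where the authenticated user is the third space-separated field):

grep "27/Oct/2020" /var/log/apache2/access.log | grep '"DELETE ' | cut -f 3 -d " " | sort | uniq -c | sort -nk1
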
I like your style.

Normally, working through logs, grepping, sorting, etc. would be quite relaxing and satisfying, but when there’s a load of data riding on it, the stress levels creep up a bit!

Will see how I get on, thanks for the pointers.

Due to the client problem last week, there is now a new solution for bulk restoring files:

We have hundreds of thousands of files that ended up in the trash, and at its current rate the restore script will take hundreds of days.

The web interface won’t let us access the trash:
This directory is unavailable, please check the logs or contact the administrator

Can anyone help? If I could get to the web interface, maybe there is a “select all”.
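
In case it helps, here is a minimal sketch of restoring straight from the filesystem instead, assuming local storage and the default layout where trashed items sit in data/<user>/files_trashbin/files/ with a .d<unix-timestamp> suffix. The user name and paths are placeholders. Two caveats: the original parent folder is only recorded in the database, so this puts everything back at the top level of the user’s files, and it bypasses trashbin metadata and version history. Test on a copy first.

#!/bin/bash
# ASSUMPTIONS: default data directory layout, local storage, occ in
# /var/www/owncloud, web server user www-data. Adjust all of these.
user="alice"
data="/var/www/owncloud/data"
trash="$data/$user/files_trashbin/files"
dest="$data/$user/files"

find "$trash" -mindepth 1 -maxdepth 1 -name '*.d[0-9]*' | while read -r entry; do
    name="$(basename "$entry")"
    orig="${name%.d*}"    # strip the .d<timestamp> deletion suffix
    # To restrict this to one deletion day, compare the epoch timestamp
    # in the suffix against that day's range before moving.
    if [ -e "$dest/$orig" ]; then
        echo "skipping $orig: already exists in destination" >&2
    else
        mv "$entry" "$dest/$orig"
    fi
done

# Make ownCloud pick up the moved files:
sudo -u www-data php /var/www/owncloud/occ files:scan "$user"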

I’m still looking for help on this.