Shared files broken after moving install to a different filesystem location




I just migrated an ownCloud install from one server to another for a client. On the new server, I wanted to put the ownCloud install in a different filesystem location compared to where it had been on the old server. I did that and then upgraded to the latest 8.2 release and started testing the system.

I then found there were problems accessing shared files. After a lot of digging around, I found this issue, which matched what I was seeing:

I tried to manually edit the IDs in the oc_storages table that contained the absolute path to the storage, but I found that didn't completely fix the problem. Eventually I saw in the above thread that some absolute paths may be stored as MD5 hashes, which makes them quite difficult to fix manually.

So, I have started from scratch and placed the ownCloud install in the same filesystem location where it was on the previous server, even though this is less than ideal. I've not tried upgrading ownCloud again, so it's still on v8.2.3.

OwnCloud is now working once again, but I'd like to eventually rid the database of those pesky absolute paths. According to this pull request, the 'occ maintenance:repair' script should fix those storage IDs:

However, when I run 'occ maintenance:repair', I just get the following output:
- Repair mime types
- Repair legacy storages
- Repair config
- Clear asset cache after upgrade
- Asset pipeline disabled -> nothing to do
- Generate ETags for file where no ETag is present.
- ETags have been fixed for 0 files/folders.
- Clean tags and favorites
- 0 tags for delete files have been removed.
- 0 tag entries for deleted tags have been removed.
- 0 tags with no entries have been removed.
- Drop old database tables
- Drop old background jobs
- Remove getetag entries in properties table
- Removed 0 unneeded "{DAV:}getetag" entries from properties table.
- Repair outdated OCS IDs
- Repair invalid shares

It looks like the repair function isn't actually finding any of the full path records in order to fix them. Or something. There are quite a lot of values in the 'id' column of the 'oc_storages' table similar to this:
...for the various user accounts on the system. But the maintenance:repair script doesn't appear to modify any of them.

Is that 'maintenance:repair' function known to work? Or do I need to upgrade to a newer version of ownCloud for it to work?

Any help or advice would be greatly appreciated.

Many thanks,


@idmacdonald the routine only runs once, but was supposed to have run already when you upgraded to 8.1 in the past. Check the oc_appconfig table for the key "repairlegacystoragesdone" and remove it. Then rerun occ maintenance:repair.
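If you'd rather check and clear the flag directly in SQL than hunt for it by hand, something like the following should do it. This is a sketch: I believe the key is stored under the 'core' appid in 8.x, and it assumes the default 'oc_' table prefix; verify with the SELECT before deleting.

```sql
-- Check whether the one-time legacy-storage repair has been marked as done
SELECT configvalue FROM oc_appconfig
 WHERE appid = 'core' AND configkey = 'repairlegacystoragesdone';

-- Remove the flag so the repair step runs again on the next occ maintenance:repair
DELETE FROM oc_appconfig
 WHERE appid = 'core' AND configkey = 'repairlegacystoragesdone';
```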


Hi @PVince81,

Thanks for your help. I just tried deleting that key and then running 'occ maintenance:repair' once again. The command produced a series of warnings like this:

WARNING: Could not repair legacy storage local::/var/www.virtualdomains/ automatically.

Is there a way to debug why the script was not able to repair them? Or do you have some other suggestion?

Thanks again,


Hmm, so it looks like you have a conflict situation that cannot automatically be repaired.

This means that each of the listed users has both a "local::" format storage and a "home::" format storage. Depending on your update scenario, you need to manually delete one or the other.

In your case you said your system has already been running for a long time, so it is likely that the "home::" storages are the correct ones and the "local::" ones are unused. One way to check is to pick the numeric ids of both storages from the "oc_storages" table and then run: select max(mtime), storage from oc_filecache where storage in (1,3) group by storage (replace 1 and 3 with the respective numeric ids). Whichever has the higher mtime is likely the one you should keep.
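Spelled out as queries, the check looks roughly like this; the ids 1 and 3 are placeholders for whatever numeric_id values the first query returns for the user in question:

```sql
-- Find the numeric ids of the duplicate storage entries
SELECT numeric_id, id FROM oc_storages
 WHERE id LIKE 'local::%' OR id LIKE 'home::%';

-- Compare the most recent mtime in each storage; the one with
-- the higher mtime is likely the storage actually in use
SELECT MAX(mtime), storage FROM oc_filecache
 WHERE storage IN (1, 3)
 GROUP BY storage;
```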

Let me know if that's enough information or if you need further help.


Ref: a ticket where other people have similar issues with some (possibly confusing) explanations:


Hi, thanks for the link to that issue. That has some useful information, though as you say, it's a bit confusing.

I can indeed see that a number of users have both 'local::' and 'home::' entries in oc_storages. If I go to oc_filecache and select files that belong to either one of the storage records for a user, I can see that the user has files in oc_filecache belonging to both storage IDs. And while there may be some duplication, there are quite a few files that are only listed once, linked to either the 'local::' or the 'home::' storage record for the user.

If I delete the 'local::' row from the oc_storages table, what will happen to the oc_filecache entries that were linked to that storage ID? When I tried to move the install to a different filepath on the server, users' access to files that had been shared with them was broken. I'm worried that access to shared files would break once again.

My instinct tells me that if I delete the 'local::' entry in oc_storages for a user, I should also update the oc_filecache entries that pointed to the old 'local::' storage record and update them with the storage ID number of the remaining 'home::' storage entry. Am I correct in this understanding? Or can I really just delete records from oc_storages without having users go crazy because they can no longer access files that have been shared with them?
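To be concrete, something along these lines is what I had in mind, with purely hypothetical numeric ids (1 for the 'local::' row, 2 for the 'home::' row):

```sql
-- Re-point the cache entries from the old local:: storage (1)
-- to the surviving home:: storage (2)...
UPDATE oc_filecache SET storage = 2 WHERE storage = 1;

-- ...and then drop the now-unreferenced local:: row
DELETE FROM oc_storages WHERE numeric_id = 1;
```

Is that roughly the right approach, or is re-pointing oc_filecache unnecessary?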



Hi @PVince81,

Sorry to bother you, but I'm curious whether you can answer my question above. I'd like to eliminate the absolute paths from the database, but not if it means people will lose access to some files that have been shared with them.

Thanks in advance,


First, the question is: have the users in question already lost access to the shares, or do the shares still work correctly?

If they already lost access to the shares at this time, it means that OC is connecting to the wrong storage. By default, OC would use "local::" if the path matches the one from config.php's datadirectory. In that situation, if you delete the "local::" entry, OC will automatically switch to "home::$user" and users should see the shares again.

However, if the users are already seeing the correct shares, then it means that the shares are linked with the "local::" storage, so the "home::" one must be deleted.
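Either way, the cleanup itself is just a delete on oc_storages, for example (3 standing in for the numeric_id of whichever entry turned out to be unused; back up the table first):

```sql
-- Remove the unused storage row; orphaned oc_filecache rows
-- that still reference it can be cleaned up afterwards
DELETE FROM oc_storages WHERE numeric_id = 3;
```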

Here's a SQL query that can help track which storage has shares associated with it:
select s.numeric_id, fc.fileid, fc.path, sh.uid_owner, sh.share_with from oc_storages s, oc_filecache fc, oc_share sh where s.numeric_id=fc.storage and fc.fileid=sh.file_source;