Server refuses to delete some files

I am removing some links from the log because I am only allowed to post 2 links (why?)

Expected behaviour

I delete a file locally and expect it to be deleted on the server

Actual behaviour

The file stays on the server. The client gives me warnings that it cannot update the file; it says “423 LOCKED”.

The web interface allows me to download the file, but attempting to delete it just produces the message “Cannot delete”. The file had normal permissions on my system.

Incidentally, this happens only with files >10 MB

Steps to reproduce

N/A

Server configuration

Operating system: No idea

Web server: infomaniak.com

Database:

PHP version:

ownCloud version:

Storage backend (external storage):

Client configuration

Client version: 2.4.1

Operating system: 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

OS language: UK

Qt version used by client package (Linux only, see also Settings dialog): Using Qt 5.9.5, built against Qt 5.9.4

Client package (From ownCloud or distro) (Linux only): libowncloudsync0/bionic,bionic,now 2.4.1+dfsg-1 amd64 [installed,automatic]
owncloud-client/bionic,bionic,now 2.4.1+dfsg-1 amd64 [installed]
owncloud-client-doc/bionic,bionic,bionic,bionic,now 2.4.1+dfsg-1 all [installed,automatic]
owncloud-client-l10n/bionic,bionic,bionic,bionic,now 2.4.1+dfsg-1 all [installed,automatic]
owncloud-files/unknown,now 10.4.1-1+1.1 all [installed]

Installation path of client: /usr/bin/owncloud

Logs

Please use Gist or a similar code paster for longer logs.

Template for output < 10 lines

  1. Client logfile: Output of owncloud --logwindow or owncloud --logfile log.txt
    (On Windows using cmd.exe, you might need to first cd into the ownCloud directory)

  2. Web server error log:

  3. Server logfile: ownCloud log (data/owncloud.log):

Example with photos/2019/log:

08-17 09:47:23:347 [ info sync.csync.reconciler ]: INSTRUCTION_REMOVE server file: photos/2019/log
08-17 09:47:24:854 [ info sync.engine ]: blacklist entry for "photos/2019/log" has expired!
08-17 09:47:32:679 [ info sync.propagator ]: Starting INSTRUCTION_REMOVE propagation of "photos/2019/log" by OCC::PropagateRemoteDelete(0x55f7fcad96b0)
08-17 09:47:32:679 [ info sync.accessmanager ]: 5 "" "http://focus-e.ch/fichiers/remote.php/dav/files/pkoppenb/photos/2019/log" has X-Request-ID "f43a1d97-7074-478b-b4dd-d3c095fdb33c"
08-17 09:47:32:679 [ info sync.networkjob ]: OCC::DeleteJob created for […] + "/photos/2019/log" "OCC::PropagateRemoteDelete"
08-17 09:47:40:908 [ warning sync.networkjob ]: QNetworkReply::NetworkError(UnknownContentError) "Server replied "423 Locked" to "DELETE […]/pkoppenb/photos/2019/log"" QVariant(int, 423)
08-17 09:47:40:911 [ info sync.networkjob.delete ]: DELETE of QUrl([…] remote.php/dav/files/pkoppenb/photos/2019/log") FINISHED WITH STATUS QNetworkReply::NetworkError(UnknownContentError) "Server replied "423 Locked" to "DELETE […]/remote.php/dav/files/pkoppenb/photos/2019/log""
08-17 09:47:40:911 [ info sync.database ]: Setting blacklist entry for "photos/2019/log" 72 "Server replied "423 Locked" to "DELETE […]/photos/2019/log"" 1597650460 0 1576762960 "20c460a6988a73aed29fa51a3cfc7e46" "" 0
08-17 09:47:40:911 [ warning sync.propagator ]: escalating soft error on "photos/2019/log" to normal error, 423
08-17 09:47:40:911 [ warning sync.propagator ]: Could not complete propagation of "photos/2019/log" by OCC::PropagateRemoteDelete(0x55f7fcad96b0) with status 2 and error: "Server replied "423 Locked" to "DELETE […]/2019/log""

Please check logs for transactional locking:
https://doc.owncloud.com/server/admin_manual/configuration/files/files_locking_transactional.html

This helps in cases where multiple clients are connected: while a file is being downloaded on another device, it can’t be deleted. It should be unlocked again once the operation finishes. The problem might be PHP timeouts killing the process before the file is unlocked again.
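If you have shell access to the server, a quick way to check whether transactional file locking is even active is the occ config commands. A sketch, assuming an install under /var/www/owncloud with www-data as the web-server user (adjust both to your setup):

sudo -u www-data php /var/www/owncloud/occ config:system:get filelocking.enabled
# prints the configured value; filelocking.enabled defaults to true when it is not set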

Does it clean up the locks later? (1 hr or so?)

Yeah, for me it looks as if the situation slowly improved itself.

So… while one computer/client was orchestrating my “move”, the other computer thought the directory was still there and got an error trying to access it. If the system tracked what was going on more closely, it would know the reason for the error and could give the user proper feedback: “Warning: an operation on the directory ‘WDC’ is in progress; it will update later (if at all).”

Thanks for the quick reactions. That makes a lot of sense.
The problem is that the files (there are three) have been in this state for weeks now, so I feel there’s nothing that can unlock them. I now have two computers connected. One is actively downloading. I’ll check whether things improve when it’s done.

I reported this morning that I thought it had stopped complaining. I was wrong. I’m now getting a “wdc could not be synced…” message more than once a minute.

P.S. It seems I ended up with two accounts on two different computers.

Update: one computer is now done syncing. Both still see the 423 locked error. Is there a way to force the server to delete a file?

Same question here: I still get the error. Apparently, when I’m working in the cloud directory I get the message every 30 seconds or so; when I’m not working, something like every hour.

I might have logged into the cloud server and moved/deleted a file there. The server might be using a cache of “what files exist” that still thinks the file should be there. How can I trigger a rebuild?

The problem persists.

The error that pops up tells me to look at the log. I’m not sure which log it means (it doesn’t specify), but I looked at the log in the ownCloud desktop app, and… it says that 21 files have not been synced.

“The file may be locked” is not a solution. Maybe for someone who is intimate with the code, yes, but not for me as a user. I’ve read the documentation about file locking and… it tells me it is all automatic and does not provide me with a way to overrule it or to check on what is going on.

I strongly suspect that locking is NOT the problem. I think the server keeps a database of files, and I logged into the server and moved a directory. (At the time I didn’t know that it would correctly just MOVE the file instead of signalling “Hey, this file got removed. Oh, by the way, here is a new file.” In fact, I still don’t know. At least now I know it will “work”. Not that it will work efficiently.)

So now the database is corrupted, or at least not in sync with the files on disk. So how do I resync that? With a month having passed, I’m pretty sure there is no cronjob that checks the database every day/week/month.

Oops. After writing the above, I somehow forgot to hit “post”.
I have now found a workaround: I moved the whole directory that had been moved on the server to outside the “synced” directory. Then I waited for the sync to happen. Then I created the directory again (empty this time) and again waited for the sync. Next I removed the empty directory. Now it seems to be totally happy.

If you’ve handled any files on the filesystem and not via the ownCloud interface, then you’ll want to run occ files:scan to resync the database. https://doc.owncloud.com/server/admin_manual/configuration/server/occ_command.html#file-operations
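A sketch of how that looks, assuming a default installation under /var/www/owncloud with the web server running as www-data (adjust both to your setup):

cd /var/www/owncloud
sudo -u www-data php occ files:scan --all   # rescan every user's files and bring the database back in sync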


I already did that a few days ago. It did not bring relief. This morning I created the directory that the synchronization process was complaining about. The empty directory is now synced to exist on the server, but not on the other client.

What I find annoying is that I’m now checking on the other client: it shows two green checkmarks for the two synced directories… Good! But no sync is happening.

Then under activity I see some activity, but nothing recent. Quite possibly nothing changed recently except that mkdir I did this morning. Odd that it isn’t being copied…
Then in the other tab, “not synced (15)”, there is one error message: file “*”, server. After resizing the window and the field I can finally read the error message: “Filename is a reserved word, trying again in 4h”.
Turns out I’ve managed to create a file called “*”. (Possibly in 2005, as that’s the timestamp on the file.)
Why that counts as “not synced 15” I cannot understand.
Why that counts as “green checkmark sync OK!” I cannot understand.

I renamed the PDF called ‘*’.

Now back to the problematic directory. I ran occ files:scan --all again and synced both clients a bunch of times while working on the above issue with the ‘*’ file. I created the problematic directory on both systems, added a file, and checked that it was synced. Then I removed the directory on one system, but it remains on both the server and the other client.

Then, when I remove it on the other client, I get 423 LOCKED. Creating the directory again removes the error in the status screen and things seem to work again. So keeping the directory that I wanted removed/cleaned up in the first place seems to prevent me getting distracted by the “failed to sync directory xxx” message every few minutes…

Browsing around for other activity on this forum…

I logged into the server, as root:

mysql
USE owncloud;
DELETE FROM oc_file_locks WHERE 1;
Query OK, 20427 rows affected (0.65 sec)

That seems to have worked!

Funny that deleting 20k entries from the database was necessary…
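For anyone repeating this: a safer variant is to put the server into maintenance mode while you clear the table. A sketch of the full sequence, assuming the default oc_ table prefix, an install under /var/www/owncloud, and www-data as the web-server user:

sudo -u www-data php /var/www/owncloud/occ maintenance:mode --on
mysql -u root -p -e 'DELETE FROM oc_file_locks WHERE 1;' owncloud   # clear all transactional file locks
sudo -u www-data php /var/www/owncloud/occ maintenance:mode --off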

Hey,

I think this shouldn’t be necessary if a recent ownCloud version is used (maybe some older versions had some bugs while clearing the locks?) together with the following suggestions:

From what I have read in the past, this shouldn’t be used regularly / in normal operation. I can’t remember the details anymore, but I think it was something about checksums or similar which might differ.

I’m not familiar with the desktop client’s error code, but maybe it means the file is blacklisted (?)
Maybe the file is rejected or ignored, probably by the server, and after multiple failures the desktop client just blacklists the file and ignores it. That could explain why you get the green checkmark.

As for the problem with the locks: it obviously isn’t normal, and it could be explained by the errors caused by the ‘*’ file. I think stray locks are automatically deleted after 24h, so you shouldn’t need to touch the DB.

occ files:scan is considered an expensive operation, and as such you should avoid it as much as possible. Some usages that could be fine are:

  • Initial FS setup: upload the files directly into ownCloud’s data directory and sync them with the command. This could be fine for the initial setup, but you shouldn’t do it regularly.
  • Fixing FS hiccups, in case a file is present in the data directory but not showing in the web UI.

Apart from possible bugs like the checksum problem (hopefully fixed for 10.6, when it’s released), the command can take a lot of time and consume a lot of resources, mostly to do nothing. The recommendation is to use it only if you need it.
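If you do need it, you can usually limit the cost by scanning only the affected user or subtree instead of everything. A sketch, using the user name from the logs above (the path here is hypothetical):

sudo -u www-data php occ files:scan pkoppenb                        # one user only
sudo -u www-data php occ files:scan --path='pkoppenb/files/photos'  # one subtree only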


If the database becomes corrupted, manually running an “expensive” operation once is not a big problem.
The occ files:scan did not resolve the problem; deleting all lock entries from the database did.
It is possible that installing the cronjob would also have worked, but I did that after deleting all the locks from the database.
It is not a “good omen” when a cronjob is required to clean up “forgotten locks”. It smells as if the software regularly forgets to clean up a lock, and the cronjob has to run to compensate. I don’t like the feeling that maybe someday I will find that an important file was not synced because of a stray lock that had not been cleaned up by the cronjob yet.

Hey @jvillafanez,

it is great to see someone with technical knowledge of ownCloud posting in these forums from time to time. Maybe you could also confirm or correct my assumptions in the text below?

I’m not familiar with ownCloud’s internals, but I think it has the same limitation every PHP-based software has: processes running into timeouts and getting terminated by PHP.

For example, a longer-running process for uploading or moving a file locks the file but gets killed after hitting PHP’s max_execution_time. I don’t think PHP then allows the script to do any further clean-up; it just terminates the process, which leaves the stale lock behind in the database or in Redis.
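A quick way to see which limits are in play (a sketch; note that the CLI values often differ from those of the web server’s PHP, so also check the php.ini your web server actually loads):

php -i | grep -E 'max_execution_time|memory_limit'   # limits for the CLI PHP
# For the web SAPI, check phpinfo() output or the php.ini/pool config used by the web server.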

From what I have read in the past, ownCloud has a clean-up job for such cases among its background jobs. According to the documentation, the default mode for running background jobs is AJAX, which runs only one clean-up operation at a time and only while someone is actively browsing the web GUI. This also means locking information might pile up, which ultimately leads to the situation discussed in this topic. :frowning_face:

That is probably the reason why the cron-based background job is recommended in the documentation linked previously.
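A sketch of that setup, assuming an install under /var/www/owncloud with www-data as the web-server user:

sudo -u www-data php /var/www/owncloud/occ background:cron   # switch background jobs from AJAX to cron mode
sudo crontab -u www-data -e
# then add a line like this, so background jobs (including lock clean-up) run every 15 minutes:
# */15 * * * * php -f /var/www/owncloud/cron.php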

Personally, I have been using ownCloud since version 10, with both the Redis-based file locking and the cron-based background jobs nearly from the beginning, and I have never had such issues with locked files, even with heavy usage of the sync client.

I guess the problem can be summed up as “abnormal termination” of the PHP script execution. Some reasons for this could be:

  • Server (or the PHP runtime) killing the PHP process after hitting “max_execution_time”.
  • Server non-graceful restart.
  • External process killing the PHP script or the web server (host reboot, for example).
  • PHP script hitting memory limits and crashing.

In those cases, the PHP process can be killed without letting it clean up anything. This would lead to stray locks that will need to be removed somehow.

As a general recommendation: if you encounter a Lock Exception and it isn’t a legitimate one (the file could really be in use), it could be caused by a crash that needs to be investigated. There should be logs either in ownCloud or in the web server about that crash.

About memory crashes: note that the web server can be overwhelmed by requests, and each one takes some memory. Even if ownCloud takes care to use as little memory as possible (which, honestly, isn’t necessarily true), the PHP script could still crash because it can’t allocate enough memory.


So how did I end up with 20 thousand stray locks? I have a total of 4609 files on my ownCloud server, and I’m the only user.

For the “clean up locks” part of the cronjob there is an easy solution: before issuing the “locked” error message, run the full “clean up locks” procedure in the context of the user, OR, if you’re afraid that will take too long, check whether this particular lock has expired and ignore it if it has.
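For inspecting the situation by hand, something like this shows whether the locks in the way are in fact already expired (a sketch, assuming the default oc_ table prefix and that the ttl column holds each lock’s expiry as a Unix timestamp):

mysql -u root -p owncloud
SELECT COUNT(*) FROM oc_file_locks WHERE ttl < UNIX_TIMESTAMP();  -- locks that have already expired
DELETE FROM oc_file_locks WHERE ttl < UNIX_TIMESTAMP();           -- remove only the expired ones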

Here we go. It comes again. I had a computer crash and had to reinstall all my files from ownCloud. Magically, this solved my recurrent messages about files that are locked on the server. So maybe the problem is also on the local machine?

But now I again had a file that was not properly deleted because the connection got interrupted during the update. Now it’s locked on the server and I keep getting messages that it cannot be deleted :frowning:

What annoys me most is that ownCloud uses 100% CPU while it tries to connect. And as long as it fails to delete the ominous file on the server, it will keep using 100%.

+1 from me. If the user is on the server, there should be an override which allows them to delete any and all files and folders if they desire. This would make it so much easier for an owner to fix issues, and clearly there are some.
My ownCloud was working perfectly until I uploaded Dropbox files to it.
Now the clients are constantly trying to sync, and the cloud server is overloaded as I can’t clear the stubborn directories and files.
==================================================
LATER
I only had 3 txt files which were locked for some unknown reason. It was easily sorted by using the helpful info HERE

I still suggest that an owner on the host should be able to do the above through the ownCloud settings menu.
Thanks, Al.