"423 Locked” on "MOVE" after upload is finished and also 400 The computed checksum does not match the one received from the client

Hi community:

I’m new to this forum but have been using ownCloud at home for a long time. However, I’m experiencing an issue with a big file (3 GB) and I’m not sure how to proceed with diagnosing it.

My problem is that the big file gets correctly uploaded to the temporary folder of the server, but then the MOVE request to remote.php fails with a 504 Gateway Timeout.

I have followed the advice about uploading big files in the ownCloud manual and everything seems to be right. In fact, the file is uploaded correctly to the server, but then it fails to be moved out of the .upload directory.

Given that I’m not an expert in PHP and its architecture I’m a bit lost on how to continue :frowning:.

Can anyone give any advice? Thanks.

BTW, in case it is necessary, my current setup is: Fedora 29 + Apache + PHP-FPM + MySQL.

Cheers,
Ivan

Of course it’s necessary. Can you provide even more info? oC version, log messages? How do you upload: client or web UI? What browser, what plugins, what client version, what OS?

Oh yes, sorry. I knew I was forgetting something :sweat_smile:.

I’m using ownCloud desktop client on Fedora 30 to upload the file (2.4.0-5.fc30).
ownCloud server version is 10.0.10.

The error is shown in the client and says:

"Server returned “504 Gateway Timeout” on “MOVE https://zeus/owncloud/remote.php/dav/uploads/ivan/1053991620/.file

I have tried to look at the server logs, but there are so many that I’m lost: I got logs for Apache, for PHP-FPM, for ownCloud, … and none of them seem to give useful information.

As I said my main problem is that I’m lost on how to continue…

The current desktop client is 2.5.4. Any reason why you don’t update?

:astonished: The client says there are no newer versions.

Looks like Fedora repos are not updated. I will install the latest version manually.

However, I don’t think it is an issue with the client, because the timeout happens on the server after the file is fully uploaded. But I will try the new version to narrow down the search.

I’ll report how it goes.

Thanks.

So, I have updated the client to the latest version and now I get a different behaviour:

The file is uploaded completely, but the server complains with:

"423 Locked" on "MOVE https://zeus/owncloud…/uploads/ivan/2108610851/.file"

I have performed a SELECT on the oc_file_locks table and it is empty. I have Redis running.
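For reference, this is roughly how I looked for the locks (the database name and user are from my setup, and scanning Redis for lock-like keys is only my guess at where they might actually live, since I have Redis running):

# DB-backed lock table (empty in my case); database name and user are mine
mysql -u owncloud -p owncloud -e "SELECT * FROM oc_file_locks;"

# Since Redis is in use here, the locks may live there instead of the DB;
# the key pattern below is just a guess
redis-cli --scan --pattern '*lock*'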

Do I need to reboot or something?

Did you install the client from https://owncloud.org/download/#owncloud-desktop-client-linux?

I installed it from: https://software.opensuse.org/download/package?project=isv:ownCloud:desktop&package=owncloud-client

The one for Fedora 29.
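For the record, this is roughly what I did to install it (the exact .repo URL should be copied from that download page; this is just the pattern I followed for Fedora):

# Add the ownCloud desktop client repo from OBS and install the client
# (take the real .repo URL from the download page linked above)
dnf config-manager --add-repo https://download.opensuse.org/repositories/isv:ownCloud:desktop/Fedora_29/isv:ownCloud:desktop.repo
dnf install owncloud-client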

OK, I have more info.

Even though the client says the server failed with a 504, I can see this in owncloud.log, which suggests that something is wrong with the file’s checksum.

Any idea how I can compare the client and server checksums?

{
  "reqId": "8399ad19-4d42-44f3-be46-babc5f77bb96",
  "level": 4,
  "time": "2019-05-28T18:51:21+00:00",
  "remoteAddr": "192.168.1.114",
  "user": "ivan",
  "app": "webdav",
  "method": "MOVE",
  "url": "\/owncloud\/remote.php\/dav\/uploads\/ivan\/2004934607\/.file",
  "message": "Exception: HTTP\/1.1 400 The computed checksum does not match the one received from the client.: {"Exception":"Sabre\\DAV\\Exception\\BadRequest","Message":"The computed checksum does not match the one received from the client.","Code":0,"Trace":"#0 \\/var\\/www\\/owncloud\\/apps\\/dav\\/lib\\/Connector\\/Sabre\\/Directory.php(172): OCA\\DAV\\Connector\\Sabre\\File->put(Resource id #623)\n#1 \\/var\\/www\\/owncloud\\/lib\\/composer\\/sabre\\/dav\\/lib\\/DAV\\/Tree.php(314): OCA\\DAV\\Connector\\Sabre\\Directory->createFile('data2.img', Resource id #623)\n#2 \\/var\\/www\\/owncloud\\/lib\\/composer\\/sabre\\/dav\\/lib\\/DAV\\/Tree.php(130): Sabre\\DAV\\Tree->copyNode(Object(OCA\\DAV\\Upload\\FutureFile), Object(OCA\\DAV\\Connector\\Sabre\\Directory), 'data2.img')\n#3 \\/var\\/www\\/owncloud\\/lib\\/composer\\/sabre\\/dav\\/lib\\/DAV\\/Tree.php(161): Sabre\\DAV\\Tree->copy('uploads\\/ivan\\/20...', 'files\\/ivan\\/Docu...')\n#4 \\/var\\/www\\/owncloud\\/lib\\/composer\\/sabre\\/dav\\/lib\\/DAV\\/CorePlugin.php(642): Sabre\\DAV\\Tree->move('uploads\\/ivan\\/20...', 'files\\/ivan\\/Docu...')\n#5 [internal function]: Sabre\\DAV\\CorePlugin->httpMove(Object(Sabre\\HTTP\\Request), Object(Sabre\\HTTP\\Response))\n#6 \\/var\\/www\\/owncloud\\/lib\\/composer\\/sabre\\/event\\/lib\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\n#7 \\/var\\/www\\/owncloud\\/lib\\/composer\\/sabre\\/dav\\/lib\\/DAV\\/Server.php(479): Sabre\\Event\\EventEmitter->emit('method:MOVE', Array)\n#8 \\/var\\/www\\/owncloud\\/lib\\/composer\\/sabre\\/dav\\/lib\\/DAV\\/Server.php(254): Sabre\\DAV\\Server->invokeMethod(Object(Sabre\\HTTP\\Request), Object(Sabre\\HTTP\\Response))\n#9 \\/var\\/www\\/owncloud\\/apps\\/dav\\/lib\\/Server.php(298): Sabre\\DAV\\Server->exec()\n#10 \\/var\\/www\\/owncloud\\/apps\\/dav\\/appinfo\\/v2\\/remote.php(31): OCA\\DAV\\Server->exec()\n#11 \\/var\\/www\\/owncloud\\/remote.php(165): require_once('\\/var\\/www\\/ownclo...')\n#12 {main}","File":"\\/var\\/www\\/owncloud\\/apps\\/dav\\/lib\\/Connector\\/Sabre\\/File.php","Line":210}"
}
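In the meantime, I can at least compute the checksum of the local file myself, to compare against whatever the client or server ends up reporting (path as it is on my machine):

# SHA1 of the local copy of the file that fails to upload
sha1sum /home/ivan/OwnCloud/Documentos/Backups/data2/data2.img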

Interesting.

I found the /media/owncloud/data/ivan/uploads folder, where ownCloud seems to put the chunks of the upload.

Apparently the directories containing the chunks do not hold the full file. If I run "du -s -h *" in the uploads folder I get this:

461M 1484556893

4,9M 43a14b631eef7febaf6279d40ed5b7501559064800461

2,8G 639098003

So, no directory contains the expected 3 GB of the uploaded file.

Would that mean that the client is not managing to upload the whole file, and that is why the server complains?

If so, why doesn’t the client complain about the upload?

And, how can I troubleshoot the upload?

Thanks for the help.

Thanks for the verbose debugging :slight_smile:

I’ll get in touch with our desktop specialists and get back to you, hopefully with a solution.

One last detail…

I have two incomplete uploads now in the uploads folder (remember that the real size of the file is 3 GB, i.e. 3,000,000,000 bytes):

[root@zeus uploads]# du -s  *
2929804	639098003
2929808	935159965

(the size is on the left, the upload ID on the right)

Inside those two folders I have:

[root@zeus uploads]# ls -la 639098003
-rw-r--r--. 1 apache apache  10000000 may 28 12:42 00000000
-rw-r--r--. 1 apache apache  69226075 may 28 12:42 00000001
-rw-r--r--. 1 apache apache 100000000 may 28 12:42 00000002
-rw-r--r--. 1 apache apache 100000000 may 28 12:43 00000003
-rw-r--r--. 1 apache apache 100000000 may 28 12:43 00000004
-rw-r--r--. 1 apache apache 100000000 may 28 12:44 00000005
-rw-r--r--. 1 apache apache 100000000 may 28 12:44 00000006
-rw-r--r--. 1 apache apache 100000000 may 28 12:44 00000007
-rw-r--r--. 1 apache apache 100000000 may 28 12:45 00000008
-rw-r--r--. 1 apache apache 100000000 may 28 12:45 00000009
-rw-r--r--. 1 apache apache 100000000 may 28 12:45 00000010
-rw-r--r--. 1 apache apache 100000000 may 28 12:46 00000011
-rw-r--r--. 1 apache apache 100000000 may 28 12:46 00000012
-rw-r--r--. 1 apache apache 100000000 may 28 12:46 00000013
-rw-r--r--. 1 apache apache 100000000 may 28 12:47 00000014
-rw-r--r--. 1 apache apache 100000000 may 28 12:47 00000015
-rw-r--r--. 1 apache apache 100000000 may 28 12:48 00000016
-rw-r--r--. 1 apache apache 100000000 may 28 12:48 00000017
-rw-r--r--. 1 apache apache 100000000 may 28 12:48 00000018
-rw-r--r--. 1 apache apache 100000000 may 28 12:49 00000019
-rw-r--r--. 1 apache apache 100000000 may 28 12:49 00000020
-rw-r--r--. 1 apache apache 100000000 may 28 12:49 00000021
-rw-r--r--. 1 apache apache 100000000 may 28 12:50 00000022
-rw-r--r--. 1 apache apache 100000000 may 28 12:50 00000023
-rw-r--r--. 1 apache apache 100000000 may 28 12:50 00000024
-rw-r--r--. 1 apache apache 100000000 may 28 12:51 00000025
-rw-r--r--. 1 apache apache 100000000 may 28 12:51 00000026
-rw-r--r--. 1 apache apache 100000000 may 28 12:52 00000027
-rw-r--r--. 1 apache apache 100000000 may 28 12:52 00000028
-rw-r--r--. 1 apache apache 100000000 may 28 12:52 00000029
-rw-r--r--. 1 apache apache 100000000 may 28 12:53 00000030
-rw-r--r--. 1 apache apache  20773925 may 28 12:53 00000031

And:
[root@zeus uploads]# ls -la 935159965
-rw-r--r--. 1 apache apache  10000000 may 29 06:16 00000000
-rw-r--r--. 1 apache apache  71889632 may 29 06:16 00000001
-rw-r--r--. 1 apache apache 100000000 may 29 06:16 00000002
-rw-r--r--. 1 apache apache 100000000 may 29 06:17 00000003
-rw-r--r--. 1 apache apache 100000000 may 29 06:17 00000004
-rw-r--r--. 1 apache apache 100000000 may 29 06:17 00000005
-rw-r--r--. 1 apache apache 100000000 may 29 06:18 00000006
-rw-r--r--. 1 apache apache 100000000 may 29 06:18 00000007
-rw-r--r--. 1 apache apache 100000000 may 29 06:18 00000008
-rw-r--r--. 1 apache apache 100000000 may 29 06:19 00000009
-rw-r--r--. 1 apache apache 100000000 may 29 06:19 00000010
-rw-r--r--. 1 apache apache 100000000 may 29 06:19 00000011
-rw-r--r--. 1 apache apache 100000000 may 29 06:20 00000012
-rw-r--r--. 1 apache apache 100000000 may 29 06:20 00000013
-rw-r--r--. 1 apache apache 100000000 may 29 06:20 00000014
-rw-r--r--. 1 apache apache 100000000 may 29 06:21 00000015
-rw-r--r--. 1 apache apache 100000000 may 29 06:21 00000016
-rw-r--r--. 1 apache apache 100000000 may 29 06:21 00000017
-rw-r--r--. 1 apache apache 100000000 may 29 06:22 00000018
-rw-r--r--. 1 apache apache 100000000 may 29 06:22 00000019
-rw-r--r--. 1 apache apache 100000000 may 29 06:23 00000020
-rw-r--r--. 1 apache apache 100000000 may 29 06:23 00000021
-rw-r--r--. 1 apache apache 100000000 may 29 06:23 00000022
-rw-r--r--. 1 apache apache 100000000 may 29 06:24 00000023
-rw-r--r--. 1 apache apache 100000000 may 29 06:24 00000024
-rw-r--r--. 1 apache apache 100000000 may 29 06:24 00000025
-rw-r--r--. 1 apache apache 100000000 may 29 06:25 00000026
-rw-r--r--. 1 apache apache 100000000 may 29 06:25 00000027
-rw-r--r--. 1 apache apache 100000000 may 29 06:25 00000028
-rw-r--r--. 1 apache apache 100000000 may 29 06:25 00000029
-rw-r--r--. 1 apache apache 100000000 may 29 06:26 00000030
-rw-r--r--. 1 apache apache  18110368 may 29 06:26 00000031

It might be a coincidence, but it looks like a pattern… I would expect 30 chunks of 100 MB each, but there are 32 in both cases (00000000 through 00000031), and:

  • the first one is 10MB in both

  • the second one looks broken

  • the last one looks broken

  • but… the sum of the second plus the last one is always 90 MB, which, added to the first 10 MB chunk, makes 100 MB, so the total size of all chunks adds up to 3 GB :astonished:

So:

  1. the sizes reported by du were just misleading me: du -s without -h prints 1 KiB blocks, and 2929804 KiB is roughly 3 GB (see the quick check after this list)

  2. the uploads seem to be correct (at least in size)
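A quick way to double-check the totals without eyeballing the listings (du -sb prints apparent sizes in bytes on GNU coreutils, and the stat line just sums the chunk files):

# Apparent size, in bytes, of each upload directory (should be ~3,000,000,000)
du -sb 639098003 935159965

# Or sum the individual chunk sizes
stat -c %s 639098003/* | awk '{ total += $1 } END { print total }'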

I will continue investigating, but if the desktop experts can provide any hint it would be very much appreciated.

Cheers,

Ivan

I had the same problem when uploading big files through the web interface… it was a total fail…

How does this contribute to helping this guy?

Can you detail what exactly you changed, activated, or deactivated, and what values you are using now? Just so we are clear.

@dmitry
should give a heads-up to ownCloud so they can provide a clear solution…

@izaera Could you start the client with logging to stdout? Then you should see the checksum and also the matching request ID (reqId).
We need to make sure we are looking at the same upload attempt / sync run:

owncloud --logfile - --logdebug

Also helpful for debugging is to tail -f the Apache error log and owncloud.log.
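Something along these lines, for example (the paths are typical for Fedora’s httpd and your data directory, so adjust as needed):

# Watch both logs while reproducing the failing upload
tail -f /var/log/httpd/error_log /media/owncloud/data/owncloud.log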

OK. Done that.

I dumped the output of the log to the owncloud-client.log file and ran:

fgrep -e sync.checksums -e sync.networkjob.mkcol -e sync.propagator.upload owncloud-client.log

which gave me:

05-29 15:30:42:532 [ info sync.checksums ]:	Computing "SHA1" checksum of "/home/ivan/OwnCloud/Documentos/Backups/data2/data2.img" in a thread
05-29 15:30:53:248 [ info sync.networkjob.mkcol ]:	MKCOL of QUrl("https://zeus/owncloud/remote.php/dav/uploads/ivan/1760773490") FINISHED WITH STATUS "OK"
05-29 15:30:56:745 [ info sync.propagator.upload ]:	Chunked upload of 10000000 bytes took 3489 ms, desired is 60000 ms, expected good chunk size is 171969045 bytes and nudged next chunk size to  90984522 bytes
05-29 15:31:13:621 [ info sync.propagator.upload ]:	Chunked upload of 90984522 bytes took 16780 ms, desired is 60000 ms, expected good chunk size is 325332021 bytes and nudged next chunk size to  100000000 bytes
05-29 15:31:32:832 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 19074 ms, desired is 60000 ms, expected good chunk size is 314564328 bytes and nudged next chunk size to  100000000 bytes
05-29 15:31:52:393 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 19422 ms, desired is 60000 ms, expected good chunk size is 308928019 bytes and nudged next chunk size to  100000000 bytes
05-29 15:32:11:190 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18653 ms, desired is 60000 ms, expected good chunk size is 321664075 bytes and nudged next chunk size to  100000000 bytes
05-29 15:32:29:973 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18645 ms, desired is 60000 ms, expected good chunk size is 321802091 bytes and nudged next chunk size to  100000000 bytes
05-29 15:32:48:510 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18424 ms, desired is 60000 ms, expected good chunk size is 325662179 bytes and nudged next chunk size to  100000000 bytes
05-29 15:33:07:546 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18870 ms, desired is 60000 ms, expected good chunk size is 317965023 bytes and nudged next chunk size to  100000000 bytes
05-29 15:33:25:863 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18179 ms, desired is 60000 ms, expected good chunk size is 330051157 bytes and nudged next chunk size to  100000000 bytes
05-29 15:33:44:669 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18702 ms, desired is 60000 ms, expected good chunk size is 320821302 bytes and nudged next chunk size to  100000000 bytes
05-29 15:34:02:329 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 17423 ms, desired is 60000 ms, expected good chunk size is 344372381 bytes and nudged next chunk size to  100000000 bytes
05-29 15:34:23:927 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 21512 ms, desired is 60000 ms, expected good chunk size is 278914094 bytes and nudged next chunk size to  100000000 bytes
05-29 15:34:41:299 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 17118 ms, desired is 60000 ms, expected good chunk size is 350508236 bytes and nudged next chunk size to  100000000 bytes
05-29 15:34:57:808 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 16289 ms, desired is 60000 ms, expected good chunk size is 368346737 bytes and nudged next chunk size to  100000000 bytes
05-29 15:35:16:378 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18321 ms, desired is 60000 ms, expected good chunk size is 327493040 bytes and nudged next chunk size to  100000000 bytes
05-29 15:35:35:733 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 19232 ms, desired is 60000 ms, expected good chunk size is 311980033 bytes and nudged next chunk size to  100000000 bytes
05-29 15:35:53:909 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18057 ms, desired is 60000 ms, expected good chunk size is 332281109 bytes and nudged next chunk size to  100000000 bytes
05-29 15:36:12:833 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18809 ms, desired is 60000 ms, expected good chunk size is 318996225 bytes and nudged next chunk size to  100000000 bytes
05-29 15:36:31:338 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18424 ms, desired is 60000 ms, expected good chunk size is 325662179 bytes and nudged next chunk size to  100000000 bytes
05-29 15:36:48:168 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 16584 ms, desired is 60000 ms, expected good chunk size is 361794500 bytes and nudged next chunk size to  100000000 bytes
05-29 15:37:06:765 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18534 ms, desired is 60000 ms, expected good chunk size is 323729362 bytes and nudged next chunk size to  100000000 bytes
05-29 15:37:26:225 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 19373 ms, desired is 60000 ms, expected good chunk size is 309709389 bytes and nudged next chunk size to  100000000 bytes
05-29 15:37:44:231 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 17940 ms, desired is 60000 ms, expected good chunk size is 334448160 bytes and nudged next chunk size to  100000000 bytes
05-29 15:38:02:370 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18054 ms, desired is 60000 ms, expected good chunk size is 332336324 bytes and nudged next chunk size to  100000000 bytes
05-29 15:38:21:317 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18829 ms, desired is 60000 ms, expected good chunk size is 318657390 bytes and nudged next chunk size to  100000000 bytes
05-29 15:38:40:055 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18639 ms, desired is 60000 ms, expected good chunk size is 321905681 bytes and nudged next chunk size to  100000000 bytes
05-29 15:38:59:513 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 19354 ms, desired is 60000 ms, expected good chunk size is 310013433 bytes and nudged next chunk size to  100000000 bytes
05-29 15:39:17:567 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 17948 ms, desired is 60000 ms, expected good chunk size is 334299086 bytes and nudged next chunk size to  100000000 bytes
05-29 15:39:38:732 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 21092 ms, desired is 60000 ms, expected good chunk size is 284468044 bytes and nudged next chunk size to  100000000 bytes
05-29 15:39:57:172 [ info sync.propagator.upload ]:	Chunked upload of 100000000 bytes took 18170 ms, desired is 60000 ms, expected good chunk size is 330214639 bytes and nudged next chunk size to  100000000 bytes
05-29 15:40:15:803 [ info sync.propagator.upload ]:	Chunked upload of 99015478 bytes took 18502 ms, desired is 60000 ms, expected good chunk size is 321096566 bytes and nudged next chunk size to  100000000 bytes
05-29 15:40:15:805 [ info sync.propagator.upload ]:	"/owncloud/remote.php/dav/files/ivan/Documentos/Backups/data2/data2.img" "SHA1:02c80d9fcb2c12c607594863b6134da1e3249f1f"
05-29 15:41:15:932 [ debug sync.propagator.upload ]	[ OCC::PropagateUploadFileCommon::commonErrorHandling ]:	"<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\">\n<html><head>\n<title>504 Gateway Timeout</title>\n</head><body>\n<h1>Gateway Timeout</h1>\n<p>The gateway did not receive a timely response\nfrom the upstream server or application.</p>\n</body></html>\n"
05-29 15:42:07:208 [ info sync.checksums ]:	Computing "SHA1" checksum of "/home/ivan/OwnCloud/Documentos/Backups/data2/data2.img" in a thread
05-29 15:42:17:150 [ info sync.propagator.upload ]:	Resuming  "Documentos/Backups/data2/data2.img"  from chunk  31 ; sent = 3000000000
05-29 15:42:17:150 [ info sync.propagator.upload ]:	"/owncloud/remote.php/dav/files/ivan/Documentos/Backups/data2/data2.img" "SHA1:02c80d9fcb2c12c607594863b6134da1e3249f1f"
05-29 15:42:19:518 [ debug sync.propagator.upload ]	[ OCC::PropagateUploadFileCommon::commonErrorHandling ]:	"<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<d:error xmlns:d=\"DAV:\" xmlns:s=\"http://sabredav.org/ns\">\n  <s:exception>OCA\\DAV\\Connector\\Sabre\\Exception\\FileLocked</s:exception>\n  <s:message>\"Documentos/Backups/data2/data2.img\" is locked</s:message>\n</d:error>\n"
05-29 15:42:35:810 [ info sync.checksums ]:	Computing "SHA1" checksum of "/home/ivan/OwnCloud/Documentos/Backups/data2/data2.img" in a thread

This seems to imply that a timeout and then a lock error happened after the upload finished.

Then I have this in owncloud’s server log:

{"reqId":"74865c24-4cce-4cb8-8155-6df17d224d6e","level":4,"time":"2019-05-29T13:30:19+00:00","remoteAddr":"192.168.1.114","user":"ivan","app":"webdav","method":"MOVE","url":"\/owncloud\/remote.php\/dav\/uploads\/ivan\/935159965\/.file","message":"Exception: HTTP\/1.1 400 The computed checksum does not match the one received from the client.: {\"Exception\":\"Sabre\\\\DAV\\\\Exception\\\\BadRequest\",\"Message\":\"The computed checksum does not match the one received from the client.\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/Directory.php(172): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\File->put(Resource id #623)\\n#1 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Tree.php(314): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory->createFile('data2.img', Resource id #623)\\n#2 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Tree.php(130): Sabre\\\\DAV\\\\Tree->copyNode(Object(OCA\\\\DAV\\\\Upload\\\\FutureFile), Object(OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory), 'data2.img')\\n#3 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Tree.php(161): Sabre\\\\DAV\\\\Tree->copy('uploads\\\/ivan\\\/93...', 'files\\\/ivan\\\/Docu...')\\n#4 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(642): Sabre\\\\DAV\\\\Tree->move('uploads\\\/ivan\\\/93...', 'files\\\/ivan\\\/Docu...')\\n#5 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpMove(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#6 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#7 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(479): Sabre\\\\Event\\\\EventEmitter->emit('method:MOVE', Array)\\n#8 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(254): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#9 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Server.php(298): Sabre\\\\DAV\\\\Server->exec()\\n#10 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v2\\\/remote.php(31): OCA\\\\DAV\\\\Server->exec()\\n#11 \\\/var\\\/www\\\/owncloud\\\/remote.php(165): require_once('\\\/var\\\/www\\\/ownclo...')\\n#12 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/File.php\",\"Line\":210}"}
{"reqId":"c9dea2d2-f9d3-4567-a94f-cd0f7af54482","level":4,"time":"2019-05-29T13:42:19+00:00","remoteAddr":"192.168.1.114","user":"ivan","app":"webdav","method":"MOVE","url":"\/owncloud\/remote.php\/dav\/uploads\/ivan\/1760773490\/.file","message":"Exception: HTTP\/1.1 423 \"Documentos\/Backups\/data2\/data2.img\" is locked: {\"Exception\":\"OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Message\":\"\\\"Documentos\\\/Backups\\\/data2\\\/data2.img\\\" is locked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Tree.php(314): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory->createFile('data2.img', Resource id #624)\\n#1 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Tree.php(130): Sabre\\\\DAV\\\\Tree->copyNode(Object(OCA\\\\DAV\\\\Upload\\\\FutureFile), Object(OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory), 'data2.img')\\n#2 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Tree.php(161): Sabre\\\\DAV\\\\Tree->copy('uploads\\\/ivan\\\/17...', 'files\\\/ivan\\\/Docu...')\\n#3 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(642): Sabre\\\\DAV\\\\Tree->move('uploads\\\/ivan\\\/17...', 'files\\\/ivan\\\/Docu...')\\n#4 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpMove(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#5 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#6 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(479): Sabre\\\\Event\\\\EventEmitter->emit('method:MOVE', Array)\\n#7 \\\/var\\\/www\\\/owncloud\\\/lib\\\/composer\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(254): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#8 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Server.php(298): Sabre\\\\DAV\\\\Server->exec()\\n#9 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v2\\\/remote.php(31): OCA\\\\DAV\\\\Server->exec()\\n#10 \\\/var\\\/www\\\/owncloud\\\/remote.php(165): require_once('\\\/var\\\/www\\\/ownclo...')\\n#11 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/Directory.php\",\"Line\":184}"}


This shows the checksum mismatch (note that it is logged just before this upload attempt begins, not after, and refers to the previous upload ID, 935159965) and the lock error.
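To correlate both sides, the full server-side entries can be pulled out by reqId (the log path is where my data directory lives):

# Grab the server log entries for the two failing MOVE requests
grep -e 74865c24-4cce-4cb8-8155-6df17d224d6e \
     -e c9dea2d2-f9d3-4567-a94f-cd0f7af54482 \
     /media/owncloud/data/owncloud.log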

In Apache’s error log I have:

[Wed May 29 15:41:15.877856 2019] [proxy_fcgi:error] [pid 1425:tid 2687474496] (70007)The timeout specified has expired: [client 192.168.1.114:37952] AH01075: Error dispatching request to : (polling)

In the request log:

[29/May/2019:15:40:35 +0200] 192.168.1.114 TLSv1.3 TLS_AES_256_GCM_SHA384 "PROPFIND /owncloud/remote.php/dav/files/ivan/ HTTP/1.1" 398
[29/May/2019:15:41:09 +0200] 192.168.1.114 TLSv1.3 TLS_AES_256_GCM_SHA384 "PROPFIND /owncloud/remote.php/dav/files/ivan/ HTTP/1.1" 398
[29/May/2019:15:40:15 +0200] 192.168.1.114 TLSv1.3 TLS_AES_256_GCM_SHA384 "MOVE /owncloud/remote.php/dav/uploads/ivan/1760773490/.file HTTP/1.1" 247
[29/May/2019:15:41:28 +0200] 192.168.1.114 TLSv1.3 TLS_AES_256_GCM_SHA384 "GET /owncloud/ocs/v2.php/apps/notifications/api/v1/notifications?format=json HTTP/1.1" 108


And in the access log:

192.168.1.114 - - [29/May/2019:15:41:09 +0200] "PROPFIND /owncloud/remote.php/dav/files/ivan/ HTTP/1.1" 207 398
192.168.1.114 - - [29/May/2019:15:40:15 +0200] "MOVE /owncloud/remote.php/dav/uploads/ivan/1760773490/.file HTTP/1.1" 504 247
192.168.1.114 - - [29/May/2019:15:41:28 +0200] "GET /owncloud/ocs/v2.php/apps/notifications/api/v1/notifications?format=json HTTP/1.1" 200 108
192.168.1.114 - - [29/May/2019:15:41:43 +0200] "PROPFIND /owncloud/remote.php/dav/files/ivan/ HTTP/1.1" 207 398


So, looking at this, I suppose that the root cause of the errors is the timeout, but I don’t know what the lock and checksum mismatch mean…

Also, I don’t know where to tweak that timeout. I have set:

  • max_execution_time in php.ini

  • max_input_time in php.ini

  • upload_max_filesize and post_max_size in php.ini

  • upload_max_filesize and post_max_size in .user.ini

to huge values (1 hour for the time limits, 16 GB for the sizes).
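In case it is useful, the effective values can also be checked from the shell (the CLI may load a different php.ini than PHP-FPM, so phpinfo() served through Apache is the authoritative check):

# Quick look at the values the PHP CLI sees; the FPM pool may differ
php -i | grep -E 'max_execution_time|max_input_time|upload_max_filesize|post_max_size'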

Any ideas?

After verifying with phpinfo() that max_execution_time is right, I’ve increased default_socket_timeout from 60 to 3600.

Will report how it goes…

OK. By now I’m pretty sure that a genuine timeout is what is causing this to fail.

But just for the sake of documentation, and in case it helps someone, I’ll explain my last test:

I have tried to upload the file from the web client and everything went perfectly (several PUT requests were issued to send the chunks of the file) until the 3 GB were uploaded. Then the web client issued a MOVE to assemble the chunks into a single file, and the browser received the 504 after about a minute.

Then I went to the server and saw a php-fpm process running at full CPU. I straced that process and it is apparently reading and writing the chunks.
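For reference, this is roughly how I watched it (the PID came from top; the syscall filter is only there to cut the noise):

# Attach to the busy php-fpm worker and watch the file I/O while it assembles the chunks
strace -f -p <php-fpm-worker-pid> -e trace=read,write,openat,rename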

So, my bet is that Apache is closing the connection to PHP-FPM because of the timeout.

I’m going to try raising Apache’s timeouts and try again.
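Something like this is what I have in mind (the file name and values are just my first guess, and PHP-FPM’s request_terminate_timeout may need a look too):

# Drop-in for Fedora's httpd: raise the general and proxy timeouts so the
# long-running MOVE (chunk assembly) isn't cut off after ~60 seconds
cat > /etc/httpd/conf.d/owncloud-timeouts.conf <<'EOF'
Timeout 3600
ProxyTimeout 3600
EOF
systemctl reload httpd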