Large file uploads fail on a fast connection but succeed on a slow one

I’ve just installed the current client 2.5.4 on Windows 10 and the current stable server application on shared hosting. Everything seems to work fine except for one weird problem that I haven’t found anyone else experiencing: when I copy a large file (200-400 MB) to the ownCloud folder, the desktop client begins uploading but always fails after transferring a few tens of megabytes. The error is a simple “Unable to write file”, and there’s nothing in the log file that explains the problem any better than that.

After a while the client resumes the upload (though it usually restarts a few megabytes earlier than where it stopped with the error), and then the error occurs again. Sometimes the whole file eventually gets to the server on its own, but sometimes the client finally gives up.

The weird thing is that if I limit the upload speed, large files transfer smoothly without any interruptions. My connection’s upload speed peaks at around 13 Mbit/s, and at that speed large uploads fail. But if I limit the speed to about 8 Mbit/s (1000 KB/s) or lower, all is fine - no errors. I tried limiting in the ownCloud client and also with NetBalancer - both methods worked.

I have set the upload size to 2G in .htaccess. The PHP timeouts shouldn’t be a problem either, since slow, long uploads work perfectly fine. I’ve also tried decreasing the chunkSize in owncloud.cfg to 1 MB - I suppose that would have helped if the option actually took effect, but looking at the uploads folder on the server, only the first chunk file was 1 MB while all the subsequent chunks were much larger, around 50 or 90 MB. I suspect this is where the problem lies: I noticed that at higher connection speeds the chunks are larger than at low speeds, and the larger chunks are possibly problematic on my shared host for some reason. If all the chunks were the size set in owncloud.cfg, I suppose this problem would not exist. The default chunkSize appears to be 10000000 bytes (10 MB) - that size would work fine, but only the first chunk is that size; the subsequent ones are much larger.
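
For reference, roughly what I set (a sketch - the exact .htaccess directives depend on how PHP runs on this host; I’m assuming Apache with mod_php):

    # .htaccess (assuming Apache running PHP as mod_php)
    php_value upload_max_filesize 2G
    php_value post_max_size 2G

and in owncloud.cfg on the client:

    [General]
    chunkSize=1000000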

Any ideas on how to solve this problem? For now I keep the upload speed at 1000 KB/s to stay safe, but I would prefer not to limit the speed.

That default only applies to the server, and basically only to uploads via the web UI. The desktop client has a default chunk size of 5 MB but adjusts it according to the upload speed, which explains the behavior you are describing.
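
Roughly speaking, the client targets a fixed upload duration per chunk and nudges the chunk size toward whatever would hit that target. A minimal sketch of the idea (illustrative, not the client’s actual code):

    # Sketch of adaptive chunk sizing: aim for a fixed duration per chunk.
    def next_chunk_size(current_size, upload_ms, target_ms=60000):
        # Scale by how far the last chunk was from the target duration...
        expected_good = current_size * target_ms / upload_ms
        # ...then move only halfway there, so the size changes gradually.
        return int((current_size + expected_good) / 2)

    # A 10 MB chunk uploaded in ~5 s nudges the next chunk to ~64 MB:
    print(next_chunk_size(10_000_000, 5104))  # 63777429

On a fast connection the chunk size can therefore grow well beyond the default within a couple of chunks.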

So, in order to move forward, we need to find out why bigger chunks are failing on your server setup. There are a couple of things you can try:

If you hit F12 while the ownCloud client window is open, you can see the client log and save it if you want to. Perhaps it will show some more errors.

You could increase the chunk size for uploads in the WebUI and see whether you’re able to reproduce the error this way.

The last I heard, the chunkSize shouldn’t be adjusted in owncloud.cfg; instead it is set as an environment variable of the ownCloud client - but even that should only be changed for testing purposes.
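On Windows that would look something like this (I’m assuming the variable name here - I believe recent clients read OWNCLOUD_MAX_CHUNK_SIZE in bytes, but please verify that for 2.5.x):

    rem Hypothetical: cap chunks at 10 MB, then start the client with it set
    set OWNCLOUD_MAX_CHUNK_SIZE=10000000
    "C:\Program Files (x86)\ownCloud\owncloud.exe"
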
Perhaps @guruz can shed some more light on this?


This isn’t an issue with the ownCloud desktop sync client. But in the logs of the desktop sync client you’ll find an X-REQUEST-ID with every error, so you can grep owncloud.log on the ownCloud server to see what’s wrong with the setup. Without knowing the details, I’d say your shared hosting provider is too limited for a good ownCloud setup.
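
For example (placeholder ID - the path to owncloud.log depends on where your data directory lives):

    grep 'paste-the-X-Request-ID-here' /path/to/owncloud/data/owncloud.log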

More information on X-REQUEST-ID:
https://doc.owncloud.com/server/admin_manual/configuration/server/request_tracing.html


Yes, I think this explains it - the chunks are not a constant size but vary depending on the upload speed. When I upload large files via the web UI they work perfectly fine, and I can see all the chunks are fixed at 10 MB.

What I found out about my server is that it has a fixed maximum upload size in its nginx config, which appears to be around 64 MB. Possibly, if a chunk gets bigger than that for whatever reason, the upload fails. I don’t think I can change that server limit.
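
For anyone who does control their server: as far as I know this limit normally comes from nginx’s client_max_body_size directive, e.g.:

    # http, server or location block; requests with a larger body are
    # rejected (413 Request Entity Too Large), 0 disables the check
    client_max_body_size 64m;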

But here is a log from a failed upload, from the beginning to the first error:

08-12 11:27:17:356 [ info sync.networkjob.put ]:	PUT of "https://owncloud.example.com/remote.php/dav/uploads/admin/2292627051/00000000" FINISHED WITH STATUS "OK" QVariant(int, 201) QVariant(QString, "Created")
08-12 11:27:17:383 [ info sync.propagator.upload ]:	Chunked upload of 10000000 bytes took 5104 ms, desired is 60000 ms, expected good chunk size is 117554858 bytes and nudged next chunk size to  63777429 bytes
08-12 11:27:17:459 [ info sync.accessmanager ]:	3 "" "https://owncloud.example.com/remote.php/dav/uploads/admin/2292627051/00000001" has X-Request-ID "79254ac7-7158-4508-b345-eb9f9291886c"
08-12 11:27:17:460 [ info sync.networkjob ]:	OCC::PUTFileJob created for "https://owncloud.example.com" + "" "OCC::PropagateUploadFileNG"
08-12 11:27:18:658 [ debug gui.account.state ]	[ OCC::AccountState::checkConnectivity ]:	"admin@owncloud.example.com" The last ETag check succeeded within the last  30  secs. No connection check needed!
08-12 11:27:18:799 [ info sync.accessmanager ]:	6 "PROPFIND" "https://owncloud.example.com/remote.php/dav/files/admin/" has X-Request-ID "769515ba-9a43-4e2d-8104-3b4063716e21"
08-12 11:27:18:799 [ debug sync.cookiejar ]	[ OCC::CookieJar::cookiesForUrl ]:	QUrl("https://owncloud.example.com/remote.php/dav/files/admin/") requests: (QNetworkCookie("oc_sessionPassphrase=nnDDyhzmHnPW8KBmK0Id4SjD68snAzqSjeTNhsfo9EjWsAXGt4ehZiYbX9MQNxo6LGBRkokXLu6gtoVo8ShPZvJk%2F1WbdUhfjqxh1c7m8XqdEp5Q%2BqUrMlkRuMvwiOAT; secure; HttpOnly; domain=owncloud.example.com; path=/"), QNetworkCookie("ocrdki0ek0ba=c8d65708dd058e3831f734c4bc4a755c; secure; HttpOnly; domain=owncloud.example.com; path=/"))
08-12 11:27:18:800 [ info sync.networkjob ]:	OCC::PropfindJob created for "https://owncloud.example.com" + "/" "OCC::QuotaInfo"
08-12 11:27:19:069 [ info sync.networkjob.propfind ]:	PROPFIND of QUrl("https://owncloud.example.com/remote.php/dav/files/admin/") FINISHED WITH STATUS "OK"
08-12 11:27:19:070 [ debug sync.networkjob ]	[ OCC::AbstractNetworkJob::slotFinished ]:	Network job OCC::PropfindJob finished for "/"
08-12 11:27:46:636 [ warning sync.networkjob ]:	QNetworkReply::NetworkError(UnknownNetworkError) "Unable to write" QVariant(Invalid)
08-12 11:27:46:637 [ info sync.networkjob.put ]:	PUT of "https://owncloud.example.com/remote.php/dav/uploads/admin/2292627051/00000001" FINISHED WITH STATUS "UnknownNetworkError Unable to write" QVariant(Invalid) QVariant(Invalid)
08-12 11:27:46:638 [ debug sync.propagator.upload ]	[ OCC::PropagateUploadFileCommon::commonErrorHandling ]:	""
08-12 11:27:46:639 [ debug sync.database.sql ]	[ OCC::SqlQuery::bindValue ]:	SQL bind 1 QVariant(QString, "large/mysql-installer-community-8.0.13.0.msi")
08-12 11:27:46:640 [ debug sync.database.sql ]	[ OCC::SqlQuery::exec ]:	SQL exec "SELECT lastTryEtag, lastTryModtime, retrycount, errorstring, lastTryTime, ignoreDuration, renameTarget, errorCategory, requestId FROM blacklist WHERE path=?1 COLLATE NOCASE"
08-12 11:27:46:641 [ warning sync.propagator ]:	Could not complete propagation of "large/mysql-installer-community-8.0.13.0.msi" by OCC::PropagateUploadFileNG(0x7bed3e8) with status 1 and error: "Unable to write"
08-12 11:27:46:650 [ debug sync.statustracker ]	[ OCC::SyncFileStatusTracker::slotItemCompleted ]:	Item completed "large/mysql-installer-community-8.0.13.0.msi" 1 8
08-12 11:27:46:651 [ debug sync.database.sql ]	[ OCC::SqlQuery::bindValue ]:	SQL bind 1 QVariant(qlonglong, 1999444939382454132)
08-12 11:27:46:652 [ debug sync.database.sql ]	[ OCC::SqlQuery::exec ]:	SQL exec "SELECT path, inode, modtime, type, md5, fileid, remotePerm, filesize,  ignoredChildrenRemote, contentchecksumtype.name || ':' || contentChecksum FROM metadata  LEFT JOIN checksumtype as contentchecksumtype ON metadata.contentChecksumTypeId == contentchecksumtype.id WHERE phash=?1"
08-12 11:27:46:653 [ debug sync.database.sql ]	[ OCC::SqlQuery::bindValue ]:	SQL bind 1 QVariant(qlonglong, 1999444939382454132)
08-12 11:27:46:654 [ debug sync.database.sql ]	[ OCC::SqlQuery::exec ]:	SQL exec "SELECT path, inode, modtime, type, md5, fileid, remotePerm, filesize,  ignoredChildrenRemote, contentchecksumtype.name || ':' || contentChecksum FROM metadata  LEFT JOIN checksumtype as contentchecksumtype ON metadata.contentChecksumTypeId == contentchecksumtype.id WHERE phash=?1"
08-12 11:27:46:656 [ debug sync.localdiscoverytracker ]	[ OCC::LocalDiscoveryTracker::slotItemCompleted ]:	inserted error item "large/mysql-installer-community-8.0.13.0.msi"
08-12 11:27:46:656 [ debug sync.networkjob ]	[ OCC::AbstractNetworkJob::slotFinished ]:	Network job OCC::PUTFileJob finished for ""
08-12 11:27:46:664 [ debug sync.database.sql ]	[ OCC::SqlQuery::exec ]:	SQL exec "SELECT phash, path FROM metadata order by path"
08-12 11:27:46:664 [ debug sync.database.sql ]	[ OCC::SqlQuery::exec ]:	SQL exec "PRAGMA wal_checkpoint(FULL);"
08-12 11:27:46:664 [ debug sync.database ]	[ OCC::SyncJournalDb::walCheckpoint ]:	took 0 msec
08-12 11:27:46:665 [ debug sync.database.sql ]	[ OCC::SqlQuery::exec ]:	SQL exec "SELECT path FROM conflicts"
08-12 11:27:46:665 [ debug sync.database ]	[ OCC::SyncJournalDb::commitInternal ]:	Transaction commit  "All Finished." 
08-12 11:27:46:667 [ info sync.database ]:	Closing DB "D:/ownCloud/._sync_3f1bf5db7a93.db"
08-12 11:27:46:668 [ debug sync.database ]	[ OCC::SyncJournalDb::commitTransaction ]:	No database Transaction to commit
08-12 11:27:46:707 [ info sync.engine ]:	CSync run took  37078 ms
08-12 11:27:46:708 [ debug sync.localdiscoverytracker ]	[ OCC::LocalDiscoveryTracker::slotSyncFinished ]:	sync failed, keeping last sync's local discovery path list
08-12 11:27:46:764 [ debug gui.folderwatcher ]	[ OCC::FolderWatcher::pathIsIgnored ]:	* Ignoring file "D:/ownCloud/._sync_3f1bf5db7a93.db-wal"
08-12 11:27:46:765 [ debug gui.folderwatcher ]	[ OCC::FolderWatcher::pathIsIgnored ]:	* Ignoring file "D:/ownCloud/._sync_3f1bf5db7a93.db"
08-12 11:27:46:765 [ debug gui.folderwatcher ]	[ OCC::FolderWatcher::pathIsIgnored ]:	* Ignoring file "D:/ownCloud/._sync_3f1bf5db7a93.db-shm"
08-12 11:27:46:766 [ debug gui.folderwatcher ]	[ OCC::FolderWatcher::pathIsIgnored ]:	* Ignoring file "D:/ownCloud/._sync_3f1bf5db7a93.db-wal"
08-12 11:27:46:766 [ info gui.folder ]:	Client version 2.5.4 (build 11415)  Qt 5.11.2  SSL  OpenSSL 1.1.1  11 Sep 2018
08-12 11:27:46:766 [ warning gui.folder ]:	SyncEngine finished with ERROR
08-12 11:27:46:778 [ info gui.folder ]:	Folder sync result:  1
08-12 11:27:46:779 [ info gui.folder ]:	the last 1 syncs failed
08-12 11:27:46:789 [ info gui.application ]:	Sync state changed for folder  "https://owncloud.example.com/remote.php/dav/files/admin/" :  "Error"
08-12 11:27:46:794 [ debug gui.folderwatcher ]	[ OCC::FolderWatcher::pathIsIgnored ]:	* Ignoring file "D:/ownCloud/.owncloudsync.log"
08-12 11:27:46:990 [ info gui.folder.manager ]:	<========== Sync finished for folder [D:\ownCloud] of account [admin@owncloud.example.com] with remote [https://owncloud.example.com/remote.php/dav/files/admin/]
08-12 11:27:46:991 [ info gui.folder.manager ]:	Starting the next scheduled sync in 9 seconds
08-12 11:27:49:071 [ info sync.accessmanager ]:	6 "PROPFIND" "https://owncloud.example.com/remote.php/dav/files/admin/" has X-Request-ID "6ad2ed94-feca-4d38-9ccb-51b7ce0c8b26"
08-12 11:27:49:072 [ debug sync.cookiejar ]	[ OCC::CookieJar::cookiesForUrl ]:	QUrl("https://owncloud.example.com/remote.php/dav/files/admin/") requests: (QNetworkCookie("oc_sessionPassphrase=nnDDyhzmHnPW8KBmK0Id4SjD68snAzqSjeTNhsfo9EjWsAXGt4ehZiYbX9MQNxo6LGBRkokXLu6gtoVo8ShPZvJk%2F1WbdUhfjqxh1c7m8XqdEp5Q%2BqUrMlkRuMvwiOAT; secure; HttpOnly; domain=owncloud.example.com; path=/"), QNetworkCookie("ocrdki0ek0ba=c8d65708dd058e3831f734c4bc4a755c; secure; HttpOnly; domain=owncloud.example.com; path=/"))
08-12 11:27:49:074 [ info sync.networkjob ]:	OCC::PropfindJob created for "https://owncloud.example.com" + "/" "OCC::QuotaInfo"
08-12 11:27:49:391 [ info sync.networkjob.propfind ]:	PROPFIND of QUrl("https://owncloud.example.com/remote.php/dav/files/admin/") FINISHED WITH STATUS "OK"
08-12 11:27:49:392 [ debug sync.networkjob ]	[ OCC::AbstractNetworkJob::slotFinished ]:	Network job OCC::PropfindJob finished for "/"
08-12 11:27:50:657 [ debug gui.account.state ]	[ OCC::AccountState::checkConnectivity ]:	"admin@owncloud.example.com" The last ETag check succeeded within the last  30  secs. No connection check needed!

Yes, I know this host is not ideal for a good setup, since it appears to have a 64 MB upload size limit. But if a browser can upload large files via the UI without any issues, then I suppose the desktop client could do it too? If the chunks sent by the client did not exceed the configured size (like 10 MB), all would work fine. Isn’t there a way to limit the chunk size sent from the client?
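
Something like this in owncloud.cfg is what I’m hoping for - assuming a maxChunkSize option exists (I haven’t verified that on 2.5.4):

    [General]
    maxChunkSize=10000000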

This will be the source of the problem. Not sure if there is a way for the client to detect this, or how you should proceed other than what you have already done with reducing the upload speed.

You could try opening a bug report on the GitHub issue tracker, because I kind of feel the client should detect this limit and use it as the maximum chunk size instead of failing to upload.


The web UI uploads fixed-size chunks.


Yes, I think I’ll open a bug report for this. I thought one of the main reasons for chunked uploads was to work around server limits, so I’m surprised the client uses variable chunk sizes, which can get quite big. Using a maximum chunk size would make the client work well with hosts that have upload limits.

Because the limit is server-dependent, I think it should be set in the server config, and all the clients would then read and use it. The client detecting the limit would be the best solution, but I don’t know if that can be done reliably. In my case the connection just drops, and I don’t think the client can tell why that happens.

That’s what I noticed and I’m wondering why the client can’t do the same.

The main reason is to deal with unreliable network connections, usually Wi-Fi. If you’re uploading a big file, you don’t want the network to fail just before the upload finishes, because you’d need to upload the whole file again - imagine uploading a 40 GB file over an hour or more.
Chunked uploading lets you upload one piece of the file at a time, receiving feedback that each piece has arrived correctly. If a piece fails due to a network error, the client can resume the upload from that failed piece. This means that if you’ve uploaded 30 GB out of 40 GB, the client can restart from the 30-31 GB piece instead of from the beginning.
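
A toy sketch of that resume logic (hypothetical names, nothing like the client’s real code):

    # Upload a file in fixed pieces, skipping those the server already confirmed.
    def upload_chunked(file_size, chunk_size, send_chunk, completed):
        total = (file_size + chunk_size - 1) // chunk_size  # number of pieces
        for i in range(total):
            if i in completed:   # already on the server from a previous attempt
                continue
            send_chunk(i)        # may raise on a network error
            completed.add(i)     # the server acknowledged this piece

    # Demo: pieces 0 and 1 survived the last attempt, so only 2-4 are resent.
    done = {0, 1}
    upload_chunked(50, 10, lambda i: print("sending piece", i), done)

Calling it again after a failure with the same completed set resumes from the first missing piece instead of from byte zero.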

Taking this into account, a faster network is usually also more reliable. Worst case, if your network allows uploading at 500 MB/s, you could upload chunks of 1 GB without losing too much time if an upload fails.

I think the “work around server limits” part is more a nice side effect than something intentional.


@jvillafanez, thanks for the detailed explanation - now I understand the idea. As far as I remember, the official system requirements for oC say there should be no upload limits on the server (or at least very high ones, like 2 GB), so I think this would be more of a feature request than a bug report.

Anyway, these are my first experiences with oC and I wasn’t expecting it to work at all on a shared host - but I gave it a try anyway, and it worked out of the box and the installation was pretty easy, so I was positively surprised! :+1: