Expected behaviour
All directories should get synced.
Actual behaviour
One directory, which contains 12,000+ files in that single folder (not spread across subdirectories), does not get synced.
Steps to reproduce
- I execute this command: owncloudcmd -u username -p password --trust --non-interactive /var/www/something/ https://myowncloudserver.domain.com
- Then I get this error:
11-29 09:25:17:163 [ info sync.csync.csync ]: ## Starting remote discovery ##
11-29 09:25:17:163 [ info sync.accessmanager ]: 6 "PROPFIND" "https://cloud.mydomain.com/remote.php/dav/files/filesync/" has X-Request-ID "eecbc783-e450-4204-8510-117ced4e574a"
11-29 09:25:17:164 [ info sync.networkjob ]: OCC::LsColJob created for "https://cloud.mydomain.com" + "" "OCC::DiscoverySingleDirectoryJob"
11-29 09:25:17:239 [ info sync.networkjob.lscol ]: LSCOL of QUrl("https://cloud.mydomain.com/remote.php/dav/files/filesync/") FINISHED WITH STATUS "OK"
11-29 09:25:17:240 [ info sync.csync.updater ]: Checking for rename based on fileid 00227869ocpw2kdzfkgx
11-29 09:25:17:241 [ info sync.csync.updater ]: file: images, instruction: INSTRUCTION_NEW <<=
11-29 09:25:17:241 [ info sync.accessmanager ]: 6 "PROPFIND" "https://cloud.mydomain.com/remote.php/dav/files/filesync/images" has X-Request-ID "23660c82-4d0a-4e05-a319-5325dcab11a9"
11-29 09:25:17:241 [ info sync.networkjob ]: OCC::LsColJob created for "https://cloud.mydomain.com" + "/images" "OCC::DiscoverySingleDirectoryJob"
11-29 09:30:17:321 [ warning sync.networkjob ]: Network job timeout QUrl("https://cloud.mydomain.com/remote.php/dav/files/filesync/images")
11-29 09:30:17:321 [ warning sync.networkjob ]: QNetworkReply::NetworkError(OperationCanceledError) "Connection timed out" QVariant(Invalid)
11-29 09:30:17:321 [ info sync.networkjob.lscol ]: LSCOL of QUrl("https://cloud.mydomain.com/remote.php/dav/files/filesync/images") FINISHED WITH STATUS "OperationCanceledError Connection timed out"
11-29 09:30:17:321 [ warning sync.discovery ]: LSCOL job error "Operation canceled" 0 QNetworkReply::NetworkError(OperationCanceledError)
11-29 09:30:17:321 [ warning sync.csync.updater ]: opendir failed for images - errno 5
11-29 09:30:17:321 [ warning sync.engine ]: ERROR during csync_update : "An error occurred while opening a folder images: Operation canceled"
11-29 09:30:17:326 [ info sync.database ]: Closing DB "/var/www/path/uploads/._sync_e3c4d804bfac.db"
11-29 09:30:17:327 [ info sync.engine ]: CSync run took 302200 ms
11-29 09:30:17:329 [ info sync.database ]: Closing DB "/var/www/path/uploads/._sync_e3c4d804bfac.db"
Server configuration
Operating system:
Distributor ID: Debian
Description: Debian GNU/Linux 9.4 (stretch)
Release: 9.4
Codename: stretch
Web server:
Server version: Apache/2.4.25 (Debian)
Server built: 2018-03-31T08:47:16
Database:
mysql Ver 15.1 Distrib 10.1.26-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
PHP version:
PHP 7.0.27-0+deb9u1 (cli) (built: Jan 5 2018 13:51:52) ( NTS )
Copyright © 1997-2017 The PHP Group
Zend Engine v3.0.0, Copyright © 1998-2017 Zend Technologies
with Zend OPcache v7.0.27-0+deb9u1, Copyright © 1999-2017, by Zend Technologies
ownCloud version:
10.0.8
Client configuration
Client version:
owncloudcmd -v
ownCloud version 2.5.1 (build 10450)
Operating system:
Debian GNU/Linux 9 (stretch)
OS language:
English
Qt version used by client package (Linux only, see also Settings dialog):
Using Qt 5.10.1, built against Qt 5.10.1
Client package (From ownCloud or distro) (Linux only):
owncloud-client
Installation path of client:
/usr/bin/owncloudcmd
Logs
Server log:
{"reqId":"f1881c1c-c343-4d2d-bf36-7252a72a2167","level":3,"time":"2018-11-29T02:13:31+00:00","remoteAddr":"client_ip","user":"my_username","app":"PHP","method":"PROPFIND","url":"\/remote.php\/dav\/files\/filesync\/images","message":"Allowed memory size of 536870912 bytes exhausted (tried to allocate 20480 bytes) at \/var\/www\/owncloud\/lib\/composer\/doctrine\/dbal\/lib\/Doctrine\/DBAL\/Driver\/PDOStatement.php#142"}
We have increased the memory on our server from 2 GB to 12 GB and set memory_limit to -1 (unlimited) in the php.ini file. After rebooting the server and trying again, we still get the same message, even though almost 11 GB of memory is currently unused on the server.
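One thing worth double-checking (an assumption on my part, not something confirmed above): 536870912 bytes is exactly 512 MiB, so the PROPFIND request may still be running under the old limit. On Debian, Apache/mod_php and the CLI read separate php.ini files, so a small check script like the one below, placed somewhere web-accessible and opened in a browser, would show which ini file and memory_limit the web SAPI actually uses (the file name is made up for the example):

```php
<?php
// limit-check.php — hypothetical helper, not part of ownCloud.
// Prints which php.ini the web SAPI loaded and the memory_limit it enforces.
// If this still shows 512M, the -1 setting was made in a different ini file
// (e.g. the CLI one) than the one Apache actually reads.
echo 'Loaded php.ini: ' . php_ini_loaded_file() . PHP_EOL;
echo 'memory_limit:   ' . ini_get('memory_limit') . PHP_EOL;
```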
My best guess: somewhere in the PHP code the application loads the whole directory listing (or the corresponding database result set) into memory at once instead of processing it piece by piece. I might be wrong about this; I don't know exactly how this works in PHP, but in C# you would use IEnumerable for this (instead of a generic List). A rough PHP equivalent is sketched below.
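For illustration only (this is not ownCloud's actual code, and the table/column names are invented): the PHP counterpart of C#'s IEnumerable is a generator. A generator yields one row at a time and keeps memory usage flat, whereas fetchAll() materializes the whole result set in an array, which for a 12,000+ entry directory is roughly the failure mode the server log points at inside Doctrine's PDOStatement wrapper.

```php
<?php
// Illustration only, not ownCloud's code: eager vs. lazy directory listing.

// Eager: builds one big array in memory, roughly like a C# List<T>.
// With 12,000+ rows this is where a memory_limit can be exhausted.
function listFilesEager(PDO $db, string $parent): array {
    $stmt = $db->prepare('SELECT name, size FROM filecache WHERE parent = ?');
    $stmt->execute([$parent]);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);    // everything at once
}

// Lazy: a generator yields one row at a time, roughly like IEnumerable<T>.
// Memory usage stays flat no matter how many files the directory contains.
function listFilesLazy(PDO $db, string $parent): Generator {
    $stmt = $db->prepare('SELECT name, size FROM filecache WHERE parent = ?');
    $stmt->execute([$parent]);
    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        yield $row;                              // one row per iteration
    }
}
```

Whether the ownCloud/Doctrine code path behind this PROPFIND could actually be restructured this way is for the developers to judge; the sketch is only meant to make the List-vs-IEnumerable analogy concrete in PHP terms.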