I managed to find a possible solution to this issue by limiting the maximum amount of memory available to Go. That is done by setting the GOMEMLIMIT environment variable directly in the ocis env file.
Here is my etc.env file:
# Limit memory usage to 999 MB by triggering the Go garbage collector
GOMEMLIMIT=999000000
OCIS_URL=https://my.domain:8443
PROXY_HTTP_ADDR=0.0.0.0:9200
PROXY_TLS=false
PROXY_ENABLE_BASIC_AUTH=true
OCIS_INSECURE=false
OCIS_LOG_LEVEL=warn
OCIS_CONFIG_DIR=/opt/ocis/etc
OCIS_BASE_DATA_PATH=/home/data/Backup/ocis
OCIS_LOG_FILE=/var/log/ocis/ocis.log
As soon as memory usage goes above 999 MB, the Go garbage collector kicks in and frees memory (50% or more in my tests). That prevents ocis from killing my system due to an out-of-memory situation.
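If you want to verify that the collector really runs when the limit is hit, the Go runtime can print a trace line for every collection. This is just an optional check I would add on top of the config above; note the trace goes to standard error, not necessarily to OCIS_LOG_FILE:

# Optional: print one line per garbage-collection cycle to stderr,
# so you can watch the GC reacting to the 999 MB soft limit.
GODEBUG=gctrace=1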
Hm, interesting. Actually we're using KimMachineGun/automemlimit (GitHub: "Automatically set GOMEMLIMIT to match Linux cgroups(7) memory limit"), which is supposed to set GOMEMLIMIT based on the limits configured via cgroups. So since you have already set a limit in the systemd unit file, that should be enough for it to work. Maybe there's a bug in the automemlimit module, or we're not using it correctly (it seems to work correctly when running ocis in a container).
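For reference, this is roughly the kind of cgroup limit automemlimit is meant to pick up, e.g. via a systemd drop-in. This is only a sketch; the unit name ocis.service and the 1G value are placeholders for whatever your setup uses:

# /etc/systemd/system/ocis.service.d/override.conf (created with `systemctl edit ocis.service`)
[Service]
# cgroup memory limit for the service; automemlimit should derive GOMEMLIMIT from it
MemoryMax=1G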
I have tried to reproduce that on 4.0.5 with no luck so far. I uploaded 2k files through the web UI and 90k images through the client, and rcloned these across two instances. I see no significant ramp-up in memory usage.
@abasso can you please try to run rclone with --transfers=1, and please disable basic auth (PROXY_ENABLE_BASIC_AUTH=false) and configure rclone with bearer tokens instead? I would assume that this might bypass the problem. Also try to set STORAGE_USERS_OCIS_MAX_CONCURRENCY=5. Note: this will make the indexing very slow, but it would help us verify a theory we have that this is related to the LDAP server and/or the parallel listing of directories with many files.
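Roughly what that could look like, as a sketch only: the remote name "ocis", the webdav path and the token below are placeholders, and bearer_token is the option from rclone's webdav backend:

# ~/.config/rclone/rclone.conf
[ocis]
type = webdav
url = https://my.domain:8443/remote.php/webdav
vendor = owncloud
bearer_token = <access token obtained from oCIS>

# upload with a single concurrent transfer
rclone copy --transfers=1 /local/folder ocis:folder

# and in the ocis env file:
STORAGE_USERS_OCIS_MAX_CONCURRENCY=5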