Memory Usage of oCIS

I managed to find a possible solution to this issue by limiting the maximum amount of memory available to Go. That is done using the GOMEMLIMIT environment variable directly in the ocis env file.

Here is my etc.env file:

# Limit memory usage to 999 MB by triggering the Go garbage collector
GOMEMLIMIT=999000000

OCIS_URL=https://my.domain:8443
PROXY_HTTP_ADDR=0.0.0.0:9200
PROXY_TLS=false
PROXY_ENABLE_BASIC_AUTH=true

OCIS_INSECURE=false
OCIS_LOG_LEVEL=warn

OCIS_CONFIG_DIR=/opt/ocis/etc
OCIS_BASE_DATA_PATH=/home/data/Backup/ocis
OCIS_LOG_FILE=/var/log/ocis/ocis.log

As soon as the memory usage goes above 999 MB, the Go garbage collector kicks in and frees memory (50% or more in my tests). That prevents ocis from killing my system due to an out-of-memory situation.
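For reference, GOMEMLIMIT is the environment form of Go's soft memory limit; the same limit could be set programmatically via runtime/debug.SetMemoryLimit. This is only a minimal sketch to illustrate what the variable does under the hood, not anything ocis itself does:

package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Set a soft memory limit of 999 MB; the runtime runs the garbage
	// collector more aggressively as the heap approaches this limit.
	// Equivalent to starting the process with GOMEMLIMIT=999000000.
	prev := debug.SetMemoryLimit(999_000_000)
	fmt.Println("previous limit:", prev)
}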


Hm, interesting. Actually we’re using KimMachineGun/automemlimit (GitHub), which is supposed to set GOMEMLIMIT based on the limits configured via cgroups(7). So since you have already set a memory limit in the systemd unit file, that should be enough. Maybe there’s a bug in the automemlimit module or we’re not using it correctly (it seems to work correctly when running ocis in a container).
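For comparison, a hedged example of the kind of systemd limit automemlimit is meant to pick up (the values are illustrative, not taken from this thread):

[Service]
# Hard cap enforced by the kernel via cgroups; automemlimit should derive GOMEMLIMIT from this
MemoryMax=1G
# Optional soft limit that triggers reclaim before the hard cap is reached
MemoryHigh=900M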

I’d be interested in a memory profile. Could you run ocis with PROXY_DEBUG_PPROF="true"? That will enable the Go profiler.
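For example, you could add the setting to the same env file used above (assuming the proxy debug endpoint listens on its default address, 127.0.0.1:9205, as in the curl commands below):

# Enable the Go pprof profiler on the proxy debug endpoint
PROXY_DEBUG_PPROF=true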

Wait until you see a spike in memory usage, then run

curl http://127.0.0.1:9205/debug/pprof/heap > heap.out
curl http://127.0.0.1:9205/debug/pprof/allocs > allocs.out

and then zip and post the files on a pastebin. Or share them via ocis :wink:

You can also view these files in a browser with

go tool pprof -http :30000 heap.out
go tool pprof -http :30000 allocs.out

This is what a fresh ocis 5 heap looks like.

You can also directly post a screenshot of your heap:

go tool pprof -http :30000 http://localhost:9205/debug/pprof/heap

Then navigate to http://localhost:30000/ui/flamegraph2, take a screenshot, and post it. That should at least give us an idea where to look.

I have tried to reproduce that on 4.0.5 with no luck so far. I uploaded 2k files through the web interface, 90k images through the client, and rcloned these across two instances. I see no significant ramp-up in memory usage.

I was able to reproduce it. It seems like an issue with the garbage collector and the search service when running on low-memory devices; for the bug discussion see: Memory Crisis with oCIS 4.0.5 · Issue #8257 · owncloud/ocis · GitHub

@abasso can you please try to run rclone with --transfers=1, and please disable auth_basic and use bearer tokens to configure rclone? I would assume that this might bypass the problem. Also try to set STORAGE_USERS_OCIS_MAX_CONCURRENCY=5. Note: this will make the indexing very slow, but it would help us verify a theory we have that this is related to the LDAP server and/or the parallel listing of directories with many files. A hedged sketch of what that setup could look like follows below.
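Sketch of the suggested setup, assuming an rclone WebDAV remote named "ocis" and rclone's --webdav-bearer-token flag for token auth (the remote name and target path are placeholders, not taken from this thread):

# in the ocis env file: throttle parallel directory listings in the storage-users service
STORAGE_USERS_OCIS_MAX_CONCURRENCY=5

# single transfer at a time, bearer token instead of basic auth
rclone copy ./local-dir ocis:/target-dir --transfers=1 --webdav-bearer-token "<access token>"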