Adding too many files locks the instance and requires a restart

Describe the bug

Adding a directory with over 19,000 files to a user via file system copy results in the container pinning the CPU usage and requires a restart to resolve.

Steps to reproduce

  1. Create new ocis instance using PosixFS driver
  2. Sign in as admin
  3. Create user test1
  4. Log in as user test1
  5. Upload directory with 13 files successfully
  6. File system copy 2 directories with 15 and 47 files successfully
  7. Attempt to add a directory with 19,023 files in total, which caused the host CPU to spike until the container was restarted
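For anyone reproducing step 7, a directory tree of this shape can be generated with a short script before copying it into the mounted storage path. All paths and counts below are placeholders (defaults are kept small; scale them up to approximate the ~19,000-file copy described above):

```shell
#!/bin/sh
# Generate a tree of small files for load testing. TARGET, DIRS and
# FILES_PER_DIR are placeholders; e.g. DIRS=20 FILES_PER_DIR=950
# approximates the 19,023-file copy from the report.
TARGET="${TARGET:-/tmp/ocis-load-test}"
DIRS="${DIRS:-5}"
FILES_PER_DIR="${FILES_PER_DIR:-10}"

i=1
while [ "$i" -le "$DIRS" ]; do
  mkdir -p "$TARGET/dir$i"
  j=1
  while [ "$j" -le "$FILES_PER_DIR" ]; do
    printf 'test %s/%s\n' "$i" "$j" > "$TARGET/dir$i/file$j.txt"
    j=$((j + 1))
  done
  i=$((i + 1))
done

# Print the number of files created.
find "$TARGET" -type f | wc -l
```

The resulting tree can then be copied into the PosixFS storage root (e.g. under /mnt/storage/ocis) to trigger the assimilation.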

Expected behavior

The container should process the files added via upload or file system copy without burning the CPU.

Actual behavior

A number of errors were encountered as part of the initial configuration and testing. I will list each one with log entries.

  1. Error: could not find space for path /var/lib/ocis-storage-users

    This error is the first received, and occurs 86 times:

    ocis  | 2025-04-07T03:39:58.296055180Z {"level":"error","service":"storage-users","pkg":"rgrpc","error":"could not find space for path /var/lib/ocis-storage-users","path":"/var/lib/ocis-storage-users","time":"2025-04-07T03:39:58Z","message":"could not assimilate item"}
    

    Even though the error occurs, it does not seem to affect anything. What does it mean, and what are the implications?

    Additionally, some errors have the message “could not assimilate item” and others have the message “failed to assimilate item”. Is there a difference between these?

  2. Error: error happened in MultiHostReverseProxy

    Shortly after the server was started, the web ui was accessed at https://cloud.example.com and the following errors appeared:

    20250407-ocis-01.log

    Accessing any of these directly results in a correctly rendered asset, e.g. https://cloud.example.com/icons/arrow-drop-left-line.svg, so I am not sure what this error relates to.

  3. Issues with spaces on initial login

    The server was started at 2025-04-07T03:39:57. I logged in as user admin at 03:41:10; the log indicates errors, but the user interface was working:

    20250407-ocis-02.log

    There seems to be no discernible impact from these error messages.

  4. Issues with spaces after creating new user

    User “test1” was created at 2025-04-07T03:41:10, and space “shared” was created at 2025-04-07T03:42:56. User test1 was added to space “shared”. Logging on as user test1 produced similar error messages:

    20250407-ocis-03.log

    I am unable to determine if there is any missing functionality due to these errors.

  5. Entries for user and space management appearing in log

    A folder was uploaded for user test1. A set of files was uploaded into space “shared”. The logs filled with entries for these items:

    20250407-ocis-04.log

    I do not expect these messages to appear in the log with my log level set to error. I do recognise that they are likely from a different logging system than the error messages. It might be a good idea to channel them to a different log output, or to assign them a level of info.

  6. Files added via file system also generate log level entries

    I copied in a set of files to test1 and noted that similar log entries appeared as ocis indexed the new directory:

    20250407-ocis-05.log

    There is a smattering of the similar space-related error messages mixed in with these, but the files are visible.

  7. Adding too many files locks the instance and requires a restart

    I uploaded a directory with 13 files, which were processed with errors but are visible.

    I added, via file system copy, two directories with 15 and 47 files, which were also processed with errors but are visible.

    I then added, via file system copy, a directory whose subdirectories contained a total of 19,023 files. This spiked the CPU usage for the container, filled the logs with mlock entries, and ultimately failed with a nats error. Here’s an extract from the log:

    20250408-ocis-06.log

    Here’s an overview of the CPU usage (compared against the pydio containers):

    ocis cpu overview

    The first upload via the web ui resulted in the following CPU usage:

    ocis cpu first upload

    Adding a space and copying files into a user directory via the file system resulted in this CPU usage:

    ocis space and filesystem add

    Adding a directory with just over 19,000 files caused the following:

    ocis cpu after adding many files

    The server was restarted using the commands docker compose down;sleep 2;docker compose up --detach, and the log files were captured after 2 hours of running:

    20250408-ocis-07.log

    The CPU usage is also a little excessive:

    Image

    I am going to let the container run for a few more hours to see if everything calms down, but this isn’t really usable.
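On the logging noise described in points 5 and 6: as a stopgap, the structured error entries can be separated from the watcher chatter with a simple filter. The sample lines below are abridged placeholders modelled on the logs in this report, and the grep pattern is only an assumption based on that JSON format:

```shell
# Abridged, placeholder sample of the two kinds of lines seen in the logs:
cat > /tmp/ocis-sample.log <<'EOF'
ocis  | {"level":"error","service":"storage-users","message":"could not assimilate item"}
ocis  | example watcher entry (placeholder for the non-JSON indexer lines)
ocis  | {"level":"error","service":"activitylog","message":"could not process event"}
EOF

# Keep only structured error-level entries. In practice this would be fed
# from `docker compose logs --no-color ocis` instead of the sample file.
grep '"level":"error"' /tmp/ocis-sample.log
```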

Setup

I am hosting the ocis service using Docker on an Intel NUC8i5BEH with 32GB RAM and a 500GB SSD, running Debian 12.9, Docker 28.0.0, and Docker Compose 2.33.0. The server is connected to a Thunderbolt 8-bay drive array which has 8 x 8TB drives in a ZFS striped pool.

I wish to use ocis to manage files in an existing folder structure for my users. I have exposed the OCIS web ui via port 9292 and can access the instance via url https://cloud.example.com using traefik and a wildcard certificate. I have configured the ocis container to use the PosixFS driver using the PosixFS storage specifications.

The following volumes are mounted:

Container                      Host                    Type
/etc/ocis                      /srv/ocis/config        ext4
/var/lib/ocis                  /srv/ocis/data          ext4
/var/lib/ocis-thumbnails       /srv/ocis/thumbnails    ext4
/var/lib/ocis-storage-users    /mnt/storage/ocis       zfs

Here is my compose.yaml file:

services:
  ocis:
    container_name: ${CONTAINER1_NAME}
    image: owncloud/ocis:${CONTAINER1_VERSION}
    hostname: ${CONTAINER1_HOSTNAME}
    domainname: ${DOMAINNAME}
    entrypoint: 
      - /bin/sh
    command: ["-c", "ocis init || true; ocis server"]
    ports:
      - ${CONTAINER1_PORT1}:9200  # web ui
    env_file:
      - .env
      - .env.secrets
    volumes:
      - ${DIRECTORY_CONFIG}:/etc/ocis
      - ${DIRECTORY_DATA}:/var/lib/ocis
      - ${DIRECTORY_THUMBNAILS}:/var/lib/ocis-thumbnails
      - ${DIRECTORY_USERS}:/var/lib/ocis-storage-users
      - /etc/timezone:/etc/timezone:ro
    restart: unless-stopped

Here is my .env file:

# Host specifics
CONTAINER1_NAME=ocis
CONTAINER1_VERSION=latest
CONTAINER1_HOSTNAME=ocis
CONTAINER1_PORT1=9292
DOMAINNAME=example.com
# Timezone from https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
TZ=America/Edmonton
# Directory locations
DIRECTORY_CONFIG=/srv/ocis/config
DIRECTORY_DATA=/srv/ocis/data
DIRECTORY_THUMBNAILS=/srv/ocis/thumbnails
DIRECTORY_USERS=/mnt/storage/ocis
# Container specifics
## docker specific
OCIS_DOCKER_TAG=latest
OCIS_LOG_LEVEL=error
OCIS_INSECURE=true
OCIS_URL=https://cloud.example.com
## proxy
PROXY_TLS=false
## thumbnails
THUMBNAILS_FILESYSTEMSTORAGE_ROOT=/var/lib/ocis-thumbnails
## idm
IDM_CREATE_DEMO_USERS=false
## notifications
NOTIFICATIONS_SMTP_HOST=smtp.sendgrid.net
NOTIFICATIONS_SMTP_PORT=465
NOTIFICATIONS_SMTP_SENDER=admin@example.com
NOTIFICATIONS_SMTP_INSECURE=false
NOTIFICATIONS_SMTP_AUTHENTICATION=auto
NOTIFICATIONS_SMTP_ENCRYPTION=starttls
## storage users
STORAGE_USERS_DRIVER=posix
STORAGE_USERS_ID_CACHE_STORE=nats-js-kv
STORAGE_USERS_ID_CACHE_STORE_NODES=127.0.0.1:9233
STORAGE_USERS_POSIX_ROOT=/var/lib/ocis-storage-users
STORAGE_USERS_POSIX_PERSONAL_SPACE_PATH_TEMPLATE=users/{{.User.Username}}
STORAGE_USERS_POSIX_GENERAL_SPACE_PATH_TEMPLATE=projects/{{.SpaceName}}
STORAGE_USERS_POSIX_SCAN_DEBOUNCE_DELAY=4s
STORAGE_USERS_POSIX_USE_SPACE_GROUPS=true
NOTIFICATIONS_SMTP_PASSWORD=[password]

Here is my .env.secrets file:

# Container secrets
IDM_ADMIN_PASSWORD=[password]
NOTIFICATIONS_SMTP_USERNAME=[username]
NOTIFICATIONS_SMTP_PASSWORD=[password]

The service was started using the command docker compose pull;docker compose down;sleep 2;docker compose up --detach. The logs were monitored to make sure any initialisation processes had completed before accessing the web ui and logging in as the admin account.

Additional context

CPU usage spiking when processing added files.

Update 1

After a couple of container restarts things have calmed down:

graph showing cpu usage dropping from 119% to 0.6%

The current logs are very calm too:

$ docker compose logs --timestamps
ocis  | 2025-04-08T20:09:32.538714701Z 2025/04/08 20:09:32 Could not create config: config file already exists, use --force-overwrite to overwrite or --diff to show diff
ocis  | 2025-04-08T20:09:33.295378214Z {"level":"error","error":"error connecting to nats cluster ocis-cluster: error connecting to nats at 127.0.0.1:9233 with tls enabled (false): nats: no servers available for connection","time":"2025-04-08T20:09:33Z","caller":"github.com/cenkalti/backoff@v2.2.1+incompatible/retry.go:24","message":"can't connect to nats (jetstream) server, retrying in 1.683137315s"}
ocis  | 2025-04-08T20:09:35.937356585Z {"level":"error","service":"storage-users","pkg":"rgrpc","error":"could not find space for path /var/lib/ocis-storage","path":"/var/lib/ocis-storage","time":"2025-04-08T20:09:35Z","message":"could not assimilate item"}

Whilst this is good news, the underlying issue persists.

Update 2

Adding approximately 1,900 files by host file copy caused the following spike:

Image

The spike took 15 minutes to resolve. The log is full of .mlock entries. Is there a way to make this smoother and faster?
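One workaround worth trying (untested; the paths, batch size, and delay below are placeholder guesses to tune, not recommended values) is to copy the files into the storage root in small batches with a pause between them, giving the indexer time to drain its queue:

```shell
#!/bin/sh
# Copy SRC into DEST in batches, sleeping between batches so the
# PosixFS watcher can keep up. All values here are placeholders.
SRC="${SRC:-/tmp/ocis-src}"
DEST="${DEST:-/tmp/ocis-dest}"
BATCH="${BATCH:-100}"    # files per batch
DELAY="${DELAY:-10}"     # seconds between batches

# Demo source tree; replace with the real directory to ingest.
mkdir -p "$SRC/sub"
printf 'a\n' > "$SRC/one.txt"
printf 'b\n' > "$SRC/sub/two.txt"

n=0
find "$SRC" -type f | while IFS= read -r f; do
  rel="${f#"$SRC"/}"
  mkdir -p "$DEST/$(dirname "$rel")"
  cp "$f" "$DEST/$rel"
  n=$((n + 1))
  if [ $((n % BATCH)) -eq 0 ]; then
    sleep "$DELAY"
  fi
done
```

Whether this actually avoids the nats slow-consumer errors would need testing; it only spreads the inotify events out over time.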

Update 3

I have been slowly adding files to my OCIS instance over the past week. Here are the stats:

Image

  • Spike 1 - 21025 files
  • Spike 2 - 200 files
  • Spike 3 - 3035 files
  • Spike 4 - 23951 files

Spike 1 took from 08:18 to 13:35 on the same day, processing 21025 files without locking up - but still topping the CPU at 201% and slowing the host down:
Image

Spike 2 took from 06:29 to 06:43 on the same day, processing 200 files without locking up, and hitting a max CPU of 66%:
Image

Spike 3 took from 07:18 to 08:20 on the same day, processing 3035 files with no container crashes, and a max CPU of 201%:
Image

Spike 4 started on 15-Apr-2025 at 11:08 and completed on 17-Apr-2025 at 10:09 after multiple container restarts and a max CPU of 394%:
Image

I have high hopes for the PosixFS driver and want to continue using OCIS. If there are any additional tests or configuration I can try, please let me know.

Update 4

A few weeks ago I added another folder to my OCIS installation and the same thing has been happening since then:

posixfs pinning the cpu

The logs are full of the following:

ocis  | nats: slow consumer, messages dropped on connection [69] for subscription on "main-queue"
ocis  | nats: slow consumer, messages dropped on connection [69] for subscription on "main-queue"
ocis  | nats: slow consumer, messages dropped on connection [69] for subscription on "main-queue"
ocis  | nats: slow consumer, messages dropped on connection [69] for subscription on "main-queue"
ocis  | nats: slow consumer, messages dropped on connection [69] for subscription on "main-queue"
ocis  | nats: slow consumer, messages dropped on connection [69] for subscription on "main-queue"

Current Status

The ocis instance reports the following after restart:

$ docker compose logs --timestamps
ocis  | 2025-07-08T16:02:02.367401454Z 2025/07/08 16:02:02 Could not create config: config file already exists, use --force-overwrite to overwrite or --diff to show diff
ocis  | 2025-07-08T16:02:04.812150094Z {"level":"error","error":"error connecting to nats cluster ocis-cluster: error connecting to nats at 127.0.0.1:9233 with tls enabled (false): nats: no servers available for connection","time":"2025-07-08T16:02:04Z","caller":"github.com/cenkalti/backoff@v2.2.1+incompatible/retry.go:24","message":"can't connect to nats (jetstream) server, retrying in 2.157145348s"}
ocis  | 2025-07-08T16:02:10.119627609Z {"level":"error","error":"error connecting to nats cluster ocis-cluster: error connecting to nats at 127.0.0.1:9233 with tls enabled (false): nats: no servers available for connection","time":"2025-07-08T16:02:10Z","caller":"github.com/cenkalti/backoff@v2.2.1+incompatible/retry.go:24","message":"can't connect to nats (jetstream) server, retrying in 6.405977503s"}
ocis  | 2025-07-08T16:02:21.450110762Z {"level":"error","error":"error connecting to nats cluster ocis-cluster: error connecting to nats at 127.0.0.1:9233 with tls enabled (false): nats: no servers available for connection","time":"2025-07-08T16:02:21Z","caller":"github.com/cenkalti/backoff@v2.2.1+incompatible/retry.go:24","message":"can't connect to nats (jetstream) server, retrying in 10.755983915s"}
ocis  | 2025-07-08T16:02:49.750312652Z {"level":"error","service":"storage-users","pkg":"rgrpc","error":"could not find space for path /var/lib/ocis-storage","path":"/var/lib/ocis-storage","time":"2025-07-08T16:02:49Z","message":"could not assimilate item"}
ocis  | 2025-07-08T16:02:59.089794252Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_NOT_FOUND","event":{"Type":"events.UploadReady","ID":"d40e0c83-dd6e-4faa-84f4-a0e846986d44","TraceParent":"","InitiatorID":"","Event":{"UploadID":"","Filename":"","SpaceOwner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ExecutingUser":null,"ImpersonatingUser":null,"FileRef":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"e18000b5-8ddc-44cd-af80-6b6bcf8142b5","space_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c"}},"Timestamp":{"seconds":1750873312,"nanos":872471842},"Failed":false,"IsVersion":false}},"time":"2025-07-08T16:02:59Z","message":"could not process event"}
ocis  | 2025-07-08T16:03:08.798998721Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_NOT_FOUND","event":{"Type":"events.UploadReady","ID":"1a7a5c2b-0af6-404a-92d5-c34d73e518ad","TraceParent":"","InitiatorID":"","Event":{"UploadID":"","Filename":"","SpaceOwner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ExecutingUser":null,"ImpersonatingUser":null,"FileRef":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"7fd741a7-898e-429d-970d-e179f8228056","space_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c"}},"Timestamp":{"seconds":1750873313,"nanos":265240345},"Failed":false,"IsVersion":false}},"time":"2025-07-08T16:03:08Z","message":"could not process event"}
ocis  | 2025-07-08T16:04:30.483134241Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_NOT_FOUND","event":{"Type":"events.UploadReady","ID":"bbb923b1-1b28-4096-855a-292597a7c5a8","TraceParent":"","InitiatorID":"","Event":{"UploadID":"","Filename":"","SpaceOwner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ExecutingUser":null,"ImpersonatingUser":null,"FileRef":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"211d39cb-625a-4c71-ba4d-f7617b014687","space_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c"}},"Timestamp":{"seconds":1750873316,"nanos":908908053},"Failed":false,"IsVersion":false}},"time":"2025-07-08T16:04:30Z","message":"could not process event"}
ocis  | 2025-07-08T16:05:08.200330998Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_NOT_FOUND","event":{"Type":"events.UploadReady","ID":"378e9246-27d8-4f13-93bd-9eeba70449bf","TraceParent":"","InitiatorID":"","Event":{"UploadID":"","Filename":"","SpaceOwner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ExecutingUser":null,"ImpersonatingUser":null,"FileRef":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"c313d65d-ed48-49f8-b486-6b4eb22cf003","space_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c"}},"Timestamp":{"seconds":1750873320,"nanos":922631755},"Failed":false,"IsVersion":false}},"time":"2025-07-08T16:05:08Z","message":"could not process event"}
ocis  | 2025-07-08T16:06:05.131735962Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_NOT_FOUND","event":{"Type":"events.UploadReady","ID":"bbb923b1-1b28-4096-855a-292597a7c5a8","TraceParent":"","InitiatorID":"","Event":{"UploadID":"","Filename":"","SpaceOwner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ExecutingUser":null,"ImpersonatingUser":null,"FileRef":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"211d39cb-625a-4c71-ba4d-f7617b014687","space_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c"}},"Timestamp":{"seconds":1750873316,"nanos":908908053},"Failed":false,"IsVersion":false}},"time":"2025-07-08T16:06:05Z","message":"could not process event"}
ocis  | 2025-07-08T16:06:49.803593016Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_NOT_FOUND","event":{"Type":"events.UploadReady","ID":"378e9246-27d8-4f13-93bd-9eeba70449bf","TraceParent":"","InitiatorID":"","Event":{"UploadID":"","Filename":"","SpaceOwner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ExecutingUser":null,"ImpersonatingUser":null,"FileRef":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"c313d65d-ed48-49f8-b486-6b4eb22cf003","space_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c"}},"Timestamp":{"seconds":1750873320,"nanos":922631755},"Failed":false,"IsVersion":false}},"time":"2025-07-08T16:06:49Z","message":"could not process event"}
ocis  | 2025-07-08T16:07:23.636174316Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_NOT_FOUND","event":{"Type":"events.UploadReady","ID":"bbb923b1-1b28-4096-855a-292597a7c5a8","TraceParent":"","InitiatorID":"","Event":{"UploadID":"","Filename":"","SpaceOwner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ExecutingUser":null,"ImpersonatingUser":null,"FileRef":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"211d39cb-625a-4c71-ba4d-f7617b014687","space_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c"}},"Timestamp":{"seconds":1750873316,"nanos":908908053},"Failed":false,"IsVersion":false}},"time":"2025-07-08T16:07:23Z","message":"could not process event"}
ocis  | 2025-07-08T16:08:18.583954837Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_NOT_FOUND","event":{"Type":"events.UploadReady","ID":"378e9246-27d8-4f13-93bd-9eeba70449bf","TraceParent":"","InitiatorID":"","Event":{"UploadID":"","Filename":"","SpaceOwner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ExecutingUser":null,"ImpersonatingUser":null,"FileRef":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"c313d65d-ed48-49f8-b486-6b4eb22cf003","space_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c"}},"Timestamp":{"seconds":1750873320,"nanos":922631755},"Failed":false,"IsVersion":false}},"time":"2025-07-08T16:08:18Z","message":"could not process event"}
ocis  | 2025-07-08T16:08:28.872253128Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_NOT_FOUND","event":{"Type":"events.UploadReady","ID":"bbb923b1-1b28-4096-855a-292597a7c5a8","TraceParent":"","InitiatorID":"","Event":{"UploadID":"","Filename":"","SpaceOwner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ExecutingUser":null,"ImpersonatingUser":null,"FileRef":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"211d39cb-625a-4c71-ba4d-f7617b014687","space_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c"}},"Timestamp":{"seconds":1750873316,"nanos":908908053},"Failed":false,"IsVersion":false}},"time":"2025-07-08T16:08:28Z","message":"could not process event"}
[...]
ocis  | 2025-07-09T15:00:12.933506625Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_INTERNAL","event":{"Type":"events.ItemTrashed","ID":"b10699ae-1186-4bff-bcd3-7fa10d16f0b0","TraceParent":"","InitiatorID":"","Event":{"SpaceOwner":null,"Executant":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ID":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c","space_id":"8e57eb4e-b57c-4657-9f49-2dc2da1ec253"},"Ref":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"7a0e0f02-9856-4795-985e-e43335ffeeb0","space_id":"8e57eb4e-b57c-4657-9f49-2dc2da1ec253"},"path":".pydio"},"Owner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"Timestamp":{"seconds":1750967443,"nanos":278015150},"ImpersonatingUser":null}},"time":"2025-07-09T15:00:12Z","message":"could not process event"}
ocis  | 2025-07-09T15:00:12.935301609Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_INTERNAL","event":{"Type":"events.ItemTrashed","ID":"8168d698-0e85-4980-b719-cb15cf00042d","TraceParent":"","InitiatorID":"","Event":{"SpaceOwner":null,"Executant":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ID":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c","space_id":"225a33c5-a033-4218-891d-443db79d4edc"},"Ref":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"a292c786-6cda-4e90-9a22-8ac787526352","space_id":"225a33c5-a033-4218-891d-443db79d4edc"},"path":".pydio"},"Owner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"Timestamp":{"seconds":1750967443,"nanos":500815360},"ImpersonatingUser":null}},"time":"2025-07-09T15:00:12Z","message":"could not process event"}
ocis  | 2025-07-09T15:00:12.937109186Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_INTERNAL","event":{"Type":"events.ItemTrashed","ID":"01c38e97-d889-40e6-a81f-2367fc30f0ef","TraceParent":"","InitiatorID":"","Event":{"SpaceOwner":null,"Executant":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ID":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c","space_id":"d85a5212-33c9-4d51-9a1a-8922d84b0931"},"Ref":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"1d0a2e4c-65a1-4546-8ca5-00d8ab0360c4","space_id":"d85a5212-33c9-4d51-9a1a-8922d84b0931"},"path":".pydio"},"Owner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"Timestamp":{"seconds":1750967443,"nanos":755554876},"ImpersonatingUser":null}},"time":"2025-07-09T15:00:12Z","message":"could not process event"}
ocis  | 2025-07-09T15:00:12.939038595Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_INTERNAL","event":{"Type":"events.ItemTrashed","ID":"bc1f23fa-d6ae-4513-9838-50d1e95c8486","TraceParent":"","InitiatorID":"","Event":{"SpaceOwner":null,"Executant":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ID":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c","space_id":"e7a5c9e6-b9f6-42e4-a6d6-6a1a833d3ad3"},"Ref":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"225a33c5-a033-4218-891d-443db79d4edc","space_id":"e7a5c9e6-b9f6-42e4-a6d6-6a1a833d3ad3"},"path":".pydio"},"Owner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"Timestamp":{"seconds":1750967443,"nanos":782509066},"ImpersonatingUser":null}},"time":"2025-07-09T15:00:12Z","message":"could not process event"}
ocis  | 2025-07-09T15:00:12.942003024Z {"level":"error","service":"activitylog","error":"could not get resource info: unexpected status code while getting resource: CODE_INTERNAL","event":{"Type":"events.ItemTrashed","ID":"2efdf3a3-ca73-422c-b3de-b829fae10477","TraceParent":"","InitiatorID":"","Event":{"SpaceOwner":null,"Executant":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"ID":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"8cf8312d-4a3d-4c67-8cd2-034fd7aa1d3c","space_id":"85ac44cc-6422-48a9-a4ef-c9629bde4fb4"},"Ref":{"resource_id":{"storage_id":"eef68ee0-f334-408b-a42b-7e2157b3ec25","opaque_id":"800ddd8b-c442-4490-bad4-bc44b1844042","space_id":"85ac44cc-6422-48a9-a4ef-c9629bde4fb4"},"path":".pydio"},"Owner":{"idp":"https://cloud.example.com","opaque_id":"2ed6ea9f-64f9-43af-96d1-909f1752c625"},"Timestamp":{"seconds":1750967443,"nanos":893312206},"ImpersonatingUser":null}},"time":"2025-07-09T15:00:12Z","message":"could not process event"}
  1. With the amount of workload on your server, I’d assume those errors occur because some metadata for some files / folders wasn’t written properly or in time. Async uploads are enabled by default, so the browser might assume it can upload files into folders that might not be fully ready.
    I’m not sure how well this is handled in the (oCIS) posix FS.
  2. Likely caused by connections being cut by the browser, especially if the problem comes from loading assets. This is irrelevant.
  3. Likely caused by point 1, with the missing metadata.
  4. Probably same cause
  5. It seems to come from the inotifywatchgo library that the posix FS is using
  6. Probably same cause as 5
  7. I’m not sure how much we can do with such a spike. You’d probably need to scale the services if you expect such a big spike (or put some kind of request limiter in traefik).
    I guess the problem is that the workload is so high that the services become too slow to respond.

In general, I’d suggest using the default oCIS FS because it should behave better. The default FS doesn’t use the inotifywatch library, so there’s less noise and less workload for the server.


Thank you for reviewing and replying to my questions. It looks like the majority of the issues can be ignored as warnings.

I wish to use the Posix FS to preserve the file and directory structure on my system, so finding a way to have OCIS manage the addition of files without blowing up the CPU would be handy.

I’ll be trying the OCIS_EXCLUDE_RUN_SERVICES=activitylog environment setting to see if this makes any difference.
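For reference, the setting would go alongside the other container specifics in the .env file. Whether disabling activitylog has side effects in this setup is an assumption I still need to verify:

```shell
## services excluded from the single-binary runtime
OCIS_EXCLUDE_RUN_SERVICES=activitylog
```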
