35GB+ File Uploads - Upload Speed Thrashing, Is There Anything I Can Do To Reduce Load On the Data Directory Drive?

My server is primarily used for uploading large video files via public links. As such, we need a lot of space, so I’ve set up an 8-drive RAID50 as the data directory.

For the most part it works fine for individual file uploads (a consistent 70–80 MB/s up), but as soon as one file starts processing while others are still being uploaded, the upload speed starts thrashing. I think the read/write speed of the RAID50 may be the bottleneck, so I’ve been trying to set up a secondary RAID0 on a different RAID card as a temp upload directory, as outlined here.
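One way to check whether the array itself is the bottleneck is a quick sequential-write test with `dd`. This is just a rough sketch; the target path is an assumption based on the data directory in the posted config, and should point at whichever array you want to measure:

```shell
# Rough sequential-write benchmark. conv=fdatasync flushes the data to disk
# before dd reports a rate, so the page cache doesn't inflate the number.
# Point TARGET at a file on the array under test,
# e.g. TARGET=/media/user/RAID50/bench.tmp (assumed path from the config).
TARGET=${TARGET:-./dd_bench.tmp}
dd if=/dev/zero of="$TARGET" bs=1M count=256 conv=fdatasync
rm -f "$TARGET"
```

Running it once while the server is idle and again during a multi-file upload should show how much throughput the concurrent chunk assembly is eating.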

Though I may have misunderstood that bit, as it seems to just be a location for temporary files, not a temporary location for uploaded files. Is this correct? And is there any other way to speed up uploads while chunks are being assembled? Or is there a way to delay the assembly of chunks until all files have been uploaded?

Thanks in advance!

Steps to reproduce

  1. Upload several large files
  2. Wait until one or more files are being processed while the others are still uploading

Expected behaviour

File uploads continue at a slower but consistent pace

Actual behaviour

Upload speed thrashes between 0 and 80 MB/s for the duration of the upload

Server configuration

Operating system:

Ubuntu 20.04 LTS

Web server:

Apache2

Database:

MariaDB

PHP version:

7.4

ownCloud version: (see ownCloud admin page)

10.6

Updated from an older ownCloud or fresh install:

Fresh

Where did you install ownCloud from:

Ubuntu 20.04 Quick Installation Guide

**The content of config/config.php:**

```
"config": {
    "instanceid": "ocshas3sduvi",
    "passwordsalt": "REMOVED SENSITIVE VALUE",
    "secret": "REMOVED SENSITIVE VALUE",
    "trusted_domains": [
        "192.168.0.148",
        "192.168.0.148"
    ],
    "tempdirerctory": "/media/user/Uploads/",
    "datadirectory": "/media/user/RAID50/owncloud/data",
    "overwrite.cli.url": "http://192.168.0.148",
    "dbtype": "mysql",
    "version": "10.6.0.5",
    "dbname": "owncloud",
    "dbhost": "localhost",
    "dbtableprefix": "oc_",
    "mysql.utf8mb4": true,
    "dbuser": "REMOVED SENSITIVE VALUE",
    "dbpassword": "REMOVED SENSITIVE VALUE",
    "logtimezone": "UTC",
    "files_external_allow_create_new_local": "true",
    "apps_paths": [
        {
            "path": "/var/www/owncloud/apps",
            "url": "/apps",
            "writable": false
        },
        {
            "path": "/var/www/owncloud/apps-external",
            "url": "/apps-external",
            "writable": true
        }
    ],
    "installed": true,
    "memcache.local": "\\OC\\Memcache\\APCu",
    "memcache.locking": "\\OC\\Memcache\\Redis",
    "filelocking.ttl": 36000,
    "redis": {
        "host": "127.0.0.1",
        "port": "6379"
    },
    "maintenance": false
},
"integritychecker": {
    "passing": true,
    "enabled": true,
    "result": []
},
"core": {
    "backgroundjobs_mode": "cron",
    "enable_external_storage": "yes",
    "first_install_version": "10.6.0.5",
    "installedat": "1616533638.4573",
    "lastcron": "1618433881",
    "lastupdateResult": "{\"version\":\"10.7.0\",\"versionstring\":\"ownCloud 10.7.0\",\"url\":\"https:\\/\\/download.owncloud.org\\/community\\/owncloud-10.7.0.zip\",\"web\":\"https:\\/\\/doc.owncloud.org\\/server\\/10.6\\/admin_manual\\/maintenance\\/upgrade.html\"}",
    "lastupdatedat": "1617997851",
    "public_files": "files_sharing/public.php",
    "public_webdav": "dav/appinfo/v1/publicwebdav.php"
},
```

How is the RAID array connected to your system?

We use a NAS drive as our data store and saw a significant improvement in upload performance by setting `dav.chunk_base_dir` to a local drive on the server. This allows the server to store the chunks on the local drive (fast access) and then only transmit the assembled file (at the end of the upload) over the network to our NAS. Since it’s only a temporary location, you could get away with an SSD for this purpose rather than setting up a RAID0 array of multiple drives (spinning or otherwise).
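For reference, the relevant part of `config/config.php` would look something like the sketch below. The `dav.chunk_base_dir` key and the data directory path come from this thread; the SSD mount point is a hypothetical example:

```php
<?php
// Sketch of the relevant config/config.php entries, assuming a local SSD
// mounted at /mnt/local-ssd (hypothetical path). Chunks are written to the
// fast local disk; only the assembled file is moved to the data directory
// at the end of the upload.
$CONFIG = array (
  'datadirectory'      => '/media/user/RAID50/owncloud/data',
  'dav.chunk_base_dir' => '/mnt/local-ssd/oc-chunks',
);
```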


Hi aclemence,

The RAID50 is connected via an Adaptec 6805 to two InWin 4-drive SAS/SATA bays, and the RAID0 is connected to the server’s (HPE DL380p G8) integrated RAID card, which doesn’t have HBA capability, so RAID is the only option there.

At the moment I’ve ended up pointing the tmp_upload_dir in config.php at our RAID0. I’ve also raised the max upload size to a point that avoids chunking altogether, which sidesteps the thrashing issue since the RAID50 no longer has to read from and write to itself while it’s being written to. Though, this only affects uploads coming in via shared link, not user-based uploads.
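Allowing a 35 GB+ file through in a single unchunked request also means PHP’s own request limits have to be raised. A hedged sketch of the relevant `php.ini` (or `.user.ini`) directives; the exact values here are assumptions and depend on available temp-dir space and how long clients can keep a connection open:

```ini
; Hypothetical php.ini / .user.ini values for large single-request uploads.
; The G shorthand is PHP's standard size suffix; tune the numbers to your
; largest expected file and your clients' connection stability.
upload_max_filesize = 40G
post_max_size = 40G
max_input_time = 3600
max_execution_time = 3600
```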

I thought these anonymous uploads don’t use chunking.

I think you’re right - just did some testing on our old server and that seems to be the case.