I am hoping to get a general understanding of how WebDAV integrates with ownCloud. I’ve read the manuals and the postings here in the forum and still don’t understand the basics.
What I am trying to do: give my ~100 users access to communal files via Windows 10 File Explorer without having to log into our VPN.
ownCloud is running on an Ubuntu 20.04 server with Nginx as the web server, over HTTPS.
The web interface works perfectly, but my users are accustomed to working through a file explorer. I have mapped a drive in Explorer (using WebDAV), but making changes (e.g. adding a new directory or renaming a file) is very challenging - Explorer freezes, and the changes only appear several minutes later.
I do not want to sync files locally and have not had much luck with the ownCloud desktop client’s virtual file system (the syncing never seems to complete).
So the questions I have are:
Is WebDAV my best option?
If I make changes via WebDAV, are they reflected in the ownCloud database (and vice versa)?
Is there a different architecture I should be considering?
Your best option is the VFS in Windows 10. With the 2.7 desktop client this works perfectly for me and should work for your users as well. Please provide details if this doesn’t work for you …
WebDAV in Windows 10 has its challenges … it’s slow, Windows forgets the login, and the combination with MS Office’s own WebDAV handling adds extra fun …
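For what it’s worth, if you do stay on the built-in Windows WebDAV client, here is a sketch of two commonly suggested tweaks. The hostname and username are placeholders for your own values, and the registry path is for the WebClient service, which needs a restart (or reboot) afterwards:

```shell
:: Map the ownCloud WebDAV endpoint as a persistent drive letter:
net use Z: "https://cloud.example.com/remote.php/dav/files/alice/" /user:alice /persistent:yes

:: Raise the WebClient file size limit (default ~50 MB) to its 4 GB maximum:
reg add HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters ^
  /v FileSizeLimitInBytes /t REG_DWORD /d 4294967295 /f
```

This won’t fix the slowness, but it avoids the most common “download failed” surprises with larger files.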
Thank you. I will give it another try. I have about 750K files in a couple of the shared drives and I found that the desktop client never got done syncing before it would start over again. So I had great performance on a couple of directories but no scanning done at all of the majority.
That said, I will uninstall and do a clean reinstall of the desktop client to see if I get better results.
I have added Mountain Duck to assist with the WebDAV mapping, and that is taking care of the file locking issues we were experiencing. The only item that is problematic at the moment is the following:
If a user opens and updates a file, or one of our legacy apps creates a new file (usually a PDF), that file shows perfectly for all users in the file explorer. However, if one of our legacy apps overwrites an existing file and the file has not yet been opened by a user, the date of the file is not updated unless I run an occ files:scan --all.
The problem I have here is that it takes 40 minutes to run a scan for one user and 6 hrs 45 minutes to run it for my 22 existing users (about to be ~100 users). So, two questions: Is there a way for all users to share the same cache? If not, does anyone have thoughts on rescanning my files more often than once every 24 hours? BTW, this is not a question of server power - rather, I have a large file server.
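One way to cut the scan time down (a sketch - “shareduser”, the folder name, and the ownCloud install path are placeholders for your actual layout) is to scan only the path your legacy apps write into instead of every user, and schedule that from cron:

```shell
# Scan only the affected subtree; the --path argument takes the form
# "user_id/files/folder". Run occ as the web server user.
sudo -u www-data php /var/www/owncloud/occ files:scan --path="shareduser/files/Shared"

# Example root crontab entry to rescan that subtree every 15 minutes:
# */15 * * * * sudo -u www-data php /var/www/owncloud/occ files:scan --path="shareduser/files/Shared"
```

With --path, only that subtree is re-checked, which should be far quicker than a full --all scan across 750K files.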
Can you instead try re-downloading with a fresh browser session? I think it is caching the file.
You could also try restarting the web server (Nginx in your setup) between downloads to rule out server-side caching.
Because the behavior you are describing makes no sense as ownCloud works the following way:
Objects in the database are displayed in the web UI, but what is downloaded is the file that is on disk.
If you overwrite a file the original does not exist anymore, therefore it can’t be downloaded any more.
But I guess your users never see the ownCloud interface and only interact with ownCloud through your legacy apps, which only have the local data available, not the ownCloud storage directly…
Yes - how else should ownCloud know? If you put something into the storage backend directly, you have to run occ files:scan. Only then will ownCloud update its oc_filecache table, which in turn triggers a download for the desktop client.
So ultimately you have to upload through ownCloud; you could try automating that with something like cadaver or curl. Alternatively, I would look into mounting WebDAV with a FUSE driver to get around the problem.
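For the curl route, a minimal sketch - the hostname, username, password, and paths are placeholders; the endpoint is ownCloud’s standard WebDAV URL:

```shell
# Upload (PUT) a file through ownCloud's WebDAV endpoint so the server
# records it in oc_filecache itself - no files:scan needed afterwards.
curl -u alice:password -T /srv/legacy/output/report.pdf \
  "https://cloud.example.com/remote.php/dav/files/alice/Shared/report.pdf"

# The FUSE route with davfs2 works similarly: the legacy apps write to a
# local mount point, and davfs2 relays each write over WebDAV.
#   sudo apt install davfs2
#   sudo mount -t davfs "https://cloud.example.com/remote.php/dav/files/alice/" /mnt/owncloud
```

Either way, the write goes through ownCloud rather than straight to disk, so the database stays in sync without a rescan.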
Thanks so much for your response. You are correct, our users only interact with OC through legacy apps, and no files are stored locally (they need access to too many of them). I understand the need for a scan to update the OC database - makes perfect sense. What doesn’t make sense is that a files:scan --all is required, rather than being able to point each user to either their own cache or a common cache location.
I am not familiar with FUSE drivers but will look into them. Thanks again for your suggestions.