Reworking the filesystem based on Sabre nodes

filesystem

#1

Currently the OC filesystem is a virtual filesystem mostly managed by OC itself.
When an app wants to access a user's data, it calls \OC::$server->getUserFolder($userId).
In some cases one can also access a user's non-file data, like the trashbin or encryption keys, by going through the root folder: \OC::$server->getRootFolder($userId) . '/files_encryption'.
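To make the two access paths concrete, here is a rough sketch of how they look today (based on the \OC::$server container; exact signatures and return types may differ between core versions):

```php
<?php
// Sketch of today's two access paths (assumed OCP\Files API shape;
// signatures may vary between core versions).

// 1. File data: the "user folder", i.e. /$userId/files
$userFolder = \OC::$server->getUserFolder($userId);

// 2. Non-file data: go through the root folder and address a sibling
//    directory of "files" directly, e.g. the encryption keys
$rootFolder = \OC::$server->getRootFolder();
$keysFolder = $rootFolder->get('/' . $userId . '/files_encryption');

// Problem: both calls end up triggering initMountPoints($userId),
// which sets up ALL mounts (shares, external storages, ...) even
// though only a single node is actually needed.
```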

From what we can see, there are different kinds of data in a user's home, not only files.
The trouble is that even when accessing non-file data, we need to call initMountPoints($userId) internally, and this mounts ALL possible storages, shares, etc., which aren't always needed.

Wouldn't it be nice if we had a system that only ever mounts what is being queried, on demand?

Come to think of it, this kind of virtual filesystem looks strikingly similar to the way SabreDAV works.
SabreDAV works with URLs and path sections. For example, the new files DAV endpoint uses the format "remote.php/dav/files/$userid/path/to/file". But it can also be used to retrieve other data like comments under "remote.php/dav/comments/$userid/...".

All this works thanks to SabreDAV's plugin system, where new node types can be defined. If ownCloud's filesystem worked in a similar manner, it would become possible (and easy) to lazy-load only the currently requested node, which is what SabreDAV does when calling getNodeForPath($path).
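To illustrate the lazy-loading idea, a user's home could be exposed as a Sabre collection that only mounts the child that is actually requested. This is just a sketch: HomeDirectory and the $mountProvider callable are invented names, not existing ownCloud classes; only the Sabre\DAV interfaces (Collection, getChild, getChildren) are real.

```php
<?php
use Sabre\DAV\Collection;
use Sabre\DAV\Exception\NotFound;

// Hypothetical sketch: a user's home as a Sabre collection.
// Tree::getNodeForPath('files_trashbin/...') would call getChild('files_trashbin')
// on this node, so only that one mount gets set up.
class HomeDirectory extends Collection {
    private $userId;
    private $mountProvider; // callable: string $childName -> \Sabre\DAV\INode|null

    public function __construct($userId, callable $mountProvider) {
        $this->userId = $userId;
        $this->mountProvider = $mountProvider;
    }

    public function getName() {
        return $this->userId;
    }

    // Overriding getChild() is what makes this lazy: the default
    // Collection::getChild() iterates over getChildren(), which would
    // force every mount to be set up.
    public function getChild($name) {
        $node = call_user_func($this->mountProvider, $name);
        if ($node === null) {
            throw new NotFound("No such node: $name");
        }
        return $node;
    }

    public function getChildren() {
        // Listing the whole home would still need to enumerate all
        // providers; the win is that a path lookup into one subtree
        // never touches the others.
        return [];
    }
}
```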

Now the big idea would be to have the ownCloud FS code call into the Sabre nodes, and to move all FS-related implementation inside the Sabre nodes. This would be the reverse of what we have now.

As a nice side effect, we could get rid of the OC Node API and have people use the Sabre APIs directly. That would reduce the number of APIs and some code duplication.

@butonic @DeepDiver1975


#2

Just so we don't forget the problems / challenges with the vfs:

mtime never propagates up the tree: https://github.com/owncloud/core/issues/11797#issuecomment-184352381

and from https://github.com/owncloud/core/pull/25817/files#r76378428
The hasUpdated stuff should be removed, because we either need to periodically scan an external storage or have a notification mechanism in place. Dynamically detecting whether anything changed is only possible if the storage supports some kind of history API (IIRC only Dropbox and Google have a delta API) or notifications (CIFS ... maybe NFS). Both should be executed asynchronously in the background.