Change ownership of ownCloud files

Use case

We are looking to use ownCloud as a centralized backup point. We have about 30 servers that generate archives locally. Each server has its own OC username/password and uploads its archive using curl requests. This lets us back up the files for all of our infrastructure in a single place, and we can then simply back up the hard drive of our OC server (or even run multiple OC servers that sync with each other).
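
For context, an upload against ownCloud's WebDAV endpoint typically looks like the following. This is only a sketch of how such a curl upload usually looks, not the exact command used here; the host name, account, password and paths are placeholders:

    # Hedged sketch: push one archive into the server's own OC account over WebDAV.
    # oc.example.com, server-007, PASSWORD and the file paths are placeholders.
    curl -u server-007:PASSWORD \
         -T /var/backups/database-2017-01-01.sql \
         "https://oc.example.com/remote.php/webdav/database-2017-01-01.sql"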

Problems

  1. Security: if a server is compromised, the hacker will have access to the OC username/password and endpoint. He/she can then simply log in and pull out all the historical backups.
    • Yes, in theory he/she would have access to the hacked server anyway, but some of our servers archive their data off frequently (e.g. user logs for 2015, when we're in 2017, are stored in something like Amazon Glacier in an encrypted format), so the backup account holds history that the compromised server no longer does.
  2. Monitoring: to make sure all the files are backed up as expected, we would need to log in to 30 different OC accounts, which is unrealistic.

Solution

We'd like to run a daily cron job on the OC server that changes the ownership of all files to the admin account. This would solve both problems above.

We have tried to move the files around but have had no luck. What is the cleanest way to do this?

Specs

OC 9.1.4 on Ubuntu

A few remarks:

Do you really think ownCloud is the right software for your use case? There have been quite a few discussions here concluding that ownCloud is not the right tool to be a backup endpoint; see also e.g. [1]. If it's just a storage backend it might not be that critical, but you still might want to rethink your use case to see whether other tools, like plain rsync via SSH, would fit better.

[1] https://owncloud.org/faq/#backup

In 10.0 there is a new occ command which can move selected folders or selected files from one user to another. I'm not sure how exactly you would script it, but that's likely the best path. Another option: use a folder that is shared by the admin with each of the clients and take away delete and update rights. In that case your clients can write, but nothing else.
Of course, ownCloud is not really a backup solution :wink: But the above should help with your use case, and it is about syncing and sharing in some ways ...
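
For reference, the occ command referred to above is presumably files:transfer-ownership. A hedged sketch of how it could be invoked, with the user names and the folder name as placeholders, and with the per-path variant assumed to exist only in the newer release mentioned above:

    # Hedged sketch: hand everything owned by one backup account over to admin.
    # Run from the ownCloud installation directory as the web server user.
    sudo -u www-data php occ files:transfer-ownership server-007 admin

    # Selective variant, limited to a single folder (assumes the archives are
    # uploaded into a folder called "backups"):
    sudo -u www-data php occ files:transfer-ownership --path "backups" server-007 admin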

We saw the following cons of rsync over SSH (or any other file transfer protocol combined with rsync):
- You need rsync installed on each client.
- You need to configure rsync on each client.
- If one client is compromised, the hacker has SSH access to your backup server!
- You need to update each node's firewall to allow connections from the master, and the master's firewall to allow each node.

Our current solution offers the following:
- 1-line setup (CURL -XPOST .... ) in a cron and you're good to go (roughly like the crontab sketch after this list).
- Highly segregated: (if I can manage to move the files automatically) a hacker who compromises node x will only have access to an empty OC account...
- The firewall is set up on the backup server only, as files are pushed from the nodes to the OC endpoint.
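
The crontab sketch referred to above; the schedule, host, account, password and paths are all placeholders rather than the actual setup described in this thread:

    # Hypothetical client-side crontab entry: push last night's dump at 02:30.
    # Note the escaped % signs, which cron would otherwise treat as line breaks.
    30 2 * * * curl -u server-007:PASSWORD -T /var/backups/db-$(date +\%F).sql "https://oc.example.com/remote.php/webdav/db-$(date +\%F).sql"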

The points in ownCloud's FAQ don't apply to this use case: each archive is kept (e.g. database-2017-01-01.sql, database-2017-01-02.sql, etc.), is only used by one admin, and is replicated in the background (e.g. even if ownCloud stops working or corrupts a file, we can pull it back from Amazon Glacier and the like).

When looking at different solutions and trying lots of them, OC actually seemed the best backup solution for an IT infrastructure that's expanding quickly. It is easy to set up (a 1-click install on most cloud and server providers) and offers a working GUI out of the box.

I will try what @hodyroff mentions and see how it integrates. The problem I see is that a hacker with an OC account (e.g. server-007) will have read access to all the files in the shared folder, so it doesn't help much.

I really think there's a 5-line PHP script that we can pull off and run in a cron, even if it's ugly.
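
In place of a PHP script poking ownCloud's internals, here is a hedged, untested sketch of such a cron-able sweep, assuming the occ command from the earlier reply is available, that ownCloud lives in /var/www/owncloud, and that the accounts follow a server-001 .. server-030 naming scheme (all assumptions, not facts from this thread):

    #!/bin/bash
    # Hedged sketch: move every backup account's files into the admin account once a day.
    # Paths, user naming scheme and the www-data user are assumptions.
    cd /var/www/owncloud || exit 1
    for i in $(seq -w 1 30); do
        sudo -u www-data php occ files:transfer-ownership "server-0${i}" admin
    done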

Forgive my necromancy, but I just want to add some counterpoints for anyone considering going down the same path as OP, and maybe educate OP if he hasn’t learned what I’m about to share.

RE: Cons of rsync:

  • Most Linux distros have rsync installed by default.
  • You only need to “configure” rsync per-server if using daemon mode with the backup server pulling from each backup client.
  • If your backup server is pulling from backup clients, you mitigate hackers getting into the backup server from compromised clients. If you have backup clients pushing to the backup server, then configure ssh to heavily restrict the session.
  • You’re assuming rsync daemon mode with the backup server pulling from clients. If you have the backup server pulling via ssh, I’m pretty sure you already have that port open (see the sketch right after this list). If you’re pushing from clients with rsync (in any mode), then you only have to configure the backup server’s firewall.
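
For concreteness, the pull-over-SSH variant described above is a one-liner on the backup server; the host name, key and paths are placeholders:

    # Hedged sketch: the backup server pulls a node's archives over plain SSH (no rsync daemon).
    rsync -az -e "ssh -i /root/.ssh/backup_key" \
        root@node07.example.com:/var/backups/ /srv/backups/node07/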

RE: Current solution:

  • You can 1-line rsync without the daemon.
  • Read up on sshd_config to learn how to segregate the rsync solution if sending files from backup clients; it would be far more powerful than what you’re suggesting. Key directives to study would be Match, ChrootDirectory, and ForceCommand. You may also find the documentation for the authorized_keys file useful. You could probably even restrict the backup clients to pushing the current backup and block them from altering or reading existing backups, though that may require some scripting. A sketch follows this list.
  • Again, you assume you need an rsync daemon running. Since you mention AWS, if these systems are EC2 instances, you could have created a backup-client security group and applied it to each client server (you can have 4 each iirc). In any case, this is a non-issue.
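
A hedged sketch of the restrictions described above. The group name, paths and key are placeholders, the internal-sftp variant means clients upload via SFTP rather than rsync, and any such configuration should be tested before relying on it:

    # Hypothetical /etc/ssh/sshd_config excerpt: confine members of a backupclients
    # group to their own directory and strip everything they do not need.
    # The ChrootDirectory itself must be root-owned; give each client a writable
    # subdirectory inside it for uploads.
    Match Group backupclients
        ChrootDirectory /srv/backups/%u
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no

    # Alternative for rsync pushes: a ~/.ssh/authorized_keys entry on the backup server
    # that pins the client's key to the rrsync helper shipped with rsync, so the key
    # can only run rsync against its own directory and never gets a shell.
    command="rrsync /srv/backups/node07/",no-pty,no-agent-forwarding,no-port-forwarding ssh-ed25519 AAAA... node07-backup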

I had the same question in mind. Considering the use case, is it the right software?