Storage tiers and multilevel backups

I have ownCloud running on an HP DL380 server. I'll have about 16TB of disk storage once it's complete. I wanted near-local performance for the primary sync and virtual file support, so I have some SAS SSDs for the OS, a 2.7TB RAID0 array of 10K RPM SAS drives, and 6TB of eSATA in RAID1. I have a couple more 900GB drives on the way. Performance is great with GigE linking it to the clients. The ownCloud data is on the SAS RAID0 right now; I sync it to the eSATA drives manually for backup.

I was wondering if there is a way to split the data so that certain folders are backed up to the eSATA drives automatically at night. Think of the SAS RAID0 array as a cache that gets backed up later in the evening to the eSATA RAID1 pool depending on use, and the data on the SAS RAID0 would be virtualized.

I have another server as well. I was thinking that the primary server with the RAID0 would do all the work, and I'd bond some Ethernet ports to the second server for backup.

This might be outside the scope of the product, but it would be really cool to have a multi-tier backup and sync solution: files that aren't hit often would be moved to cold storage but still be available when clicked.
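To make the "files that aren't hit often" idea concrete, here is a minimal sketch of how a nightly job could find cold-tier candidates by last-access time. The root path and the 90-day threshold are assumptions for illustration, not anything ownCloud provides:

```python
import time
from pathlib import Path

def cold_candidates(root, days=90):
    """Return files under `root` not accessed in `days` days --
    candidates for moving to a cold storage tier.
    (Path and threshold are placeholder assumptions.)"""
    cutoff = time.time() - days * 86400
    stale = []
    for path in Path(root).rglob("*"):
        # st_atime is the last-access timestamp reported by the filesystem
        if path.is_file() and path.stat().st_atime < cutoff:
            stale.append(path)
    return stale
```

One caveat: on filesystems mounted with `noatime` or `relatime`, access times are not updated on every read, so the selection would be approximate.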

Thanks, great product.

Jerry

I would say this is a filesystem feature; I'm not sure whether Gluster/Ceph or ZFS implement something like this.

This has the advantage of being completely transparent to ownCloud, so nothing has to be changed there. The filesystem would cache recently accessed files on a faster storage tier, while older files would be evicted from the cache.


I work for Oracle, so I'll talk to my engineers about using ZFS. I bought an LTO tape drive, so my plan is to cache on RAID0 during the day using 4 x 900GB SAS 10K drives, then move the data to the 6TB SATA RAID1, and from there back it up to tape, assuming the tape works and I can make it transparent. I'm using rsync right now in a script that halts ownCloud at night, rsyncs, and then starts it back up. I can probably automate it even more.

The tape drive I bought on a whim: I have a customer taking out a ton of tape, about 12PB, in favor of spinning ZFS storage, and they're the ones that got me thinking of going backwards to tape. I can't afford to lose the data, and I've had hard drives go bad sequentially, one after another. It seems like when one goes, they all go.
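The stop/rsync/start sequence described above could be wrapped in a small script. Here's a sketch in Python; the data path, backup path, and the `apache2` service name are assumptions for illustration, and it defaults to a dry run that only prints the commands:

```python
import shlex
import subprocess

def build_backup_commands(src, dst, service="apache2"):
    """Command sequence for a nightly cold sync: stop the web server so
    ownCloud is quiescent, mirror src to dst, then restart it.
    (Service name and paths are placeholders for this sketch.)"""
    return [
        ["systemctl", "stop", service],
        # Trailing slashes mirror directory contents rather than
        # nesting the source directory inside the destination.
        ["rsync", "-a", "--delete", f"{src}/", f"{dst}/"],
        ["systemctl", "start", service],
    ]

def run_backup(src, dst, dry_run=True):
    for cmd in build_backup_commands(src, dst):
        if dry_run:
            print("would run:", shlex.join(cmd))
        else:
            subprocess.run(cmd, check=True)

# Hypothetical paths -- adjust to the actual data dir and eSATA mount.
run_backup("/var/www/owncloud/data", "/mnt/esata/owncloud-backup")
```

Once verified, this could run from cron with `dry_run=False`; `--delete` keeps the backup an exact mirror, so it should only point at a dedicated backup directory.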

Thanks for the input. I’ll keep working on my method using rsync.