Handover Documentation

Version 0.1

Stuff your users don't care to know about.


Status: work-in-progress


attic is chosen for the following key features. For details on how it works, see https://attic-backup.org/

  • compression
  • block-level data deduplication
  • data encryption
  • client-server over network
  • centralized backup server
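Before any of the scheduled tasks below can run, the attic repository must be initialised once. A minimal sketch, assuming passphrase-based encryption and the repository path used in this document:

```shell
# One-time initialisation of the encrypted repository (run as root).
# --encryption=passphrase enables attic's client-side encryption.
attic init --encryption=passphrase /store/backups/projects.attic
```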


Ideally, the attic repositories should be located on a server outside of OpenStack, with a RAIDed (or replicated) disk array. For the purpose of proof of concept, attic is currently installed and configured on the VM chewaai, with the backup tasks scheduled in /etc/cron.d/backup.

The tasks in this cron file are explained below:

30 23 * * * root /usr/local/bin/docker-compose -f /store/projects/alfresco/docker-compose.yml exec -T -u postgres postgresql pg_dump alfresco -f /dbdump/alfresco.sql
31 23 * * * root /usr/local/bin/docker-compose -f /store/projects/alfresco/docker-compose.yml exec -T -u postgres postgresql pg_dumpall --roles-only -f /dbdump/alfresco-roles.sql

Alfresco uses a PostgreSQL database. pg_dump and pg_dumpall are used to produce a complete snapshot of the application database along with the database roles/privileges.

/dbdump/... in the commands is only valid inside the database container. The actual host pathname is /store/projects/dbdump/, as defined in the service's docker-compose.yml.
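The mapping between the container path and the host path is a standard bind mount. A minimal sketch of what the relevant part of the service definition might look like (the actual docker-compose.yml is the authoritative source; the image name here is illustrative):

```yaml
services:
  postgresql:
    image: postgres   # illustrative; see the real docker-compose.yml
    volumes:
      # host path : container path — /dbdump inside the container
      # maps to /store/projects/dbdump/ on the VM
      - /store/projects/dbdump:/dbdump
```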

30 23 * * * root /usr/local/bin/docker-compose -f /store/projects/redmine/docker-compose.yml exec -T -u postgres postgres pg_dump redmine -f /dbdump/redmine.sql
31 23 * * * root /usr/local/bin/docker-compose -f /store/projects/redmine/docker-compose.yml exec -T -u postgres postgres pg_dumpall --roles-only -f /dbdump/redmine-roles.sql

A similar pair of tasks is scheduled to produce snapshots of the Redmine database used by the https://projects.dirisa.ac.za/ service.

50 23 * * * root /bin/attic create /store/backups/projects.attic::projects-$(date +\%Y-\%m-\%d) /store/projects --exclude '**/pgdata'

This task creates an attic snapshot of all directories and files within /store/projects/, with the current date (YYYY-MM-DD) suffixed to the snapshot name, excluding subdirectories named pgdata.

The pgdata subdirectories are excluded because proper database snapshots have already been saved in /store/projects/dbdump/ by the previous tasks.
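Note the escaped percent signs in the crontab entry: % is special in cron (an unescaped % starts a new line of command input), so it must be written as \%. In an ordinary shell the same date expression needs no escaping:

```shell
# Print today's snapshot name, using the same date format as the
# cron task (no \% escaping needed outside of a crontab).
suffix=$(date +%Y-%m-%d)
echo "projects-$suffix"
```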

Each night, a new snapshot is thus written into the same attic repository, /store/backups/projects.attic.

The final task prunes the repository based on this backup policy:

  • Monday - Sunday - daily backup for 7 days
  • Sunday - weekly backup for 52 weeks (1 year)
  • End of month (last day of month) - monthly backup for 12 months
  • Last day of year - yearly backup for 3 years

0 23 * * * root /bin/attic prune --verbose --keep-daily 7 --keep-weekly 52 --keep-monthly 12 --keep-yearly 3 /store/backups/projects.attic
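When adjusting the retention flags, the effect of a prune can be previewed without deleting anything, assuming attic's standard --dry-run option:

```shell
# Preview which snapshots would be removed under the current policy,
# without actually deleting anything (--dry-run).
attic prune --dry-run --verbose \
    --keep-daily 7 --keep-weekly 52 --keep-monthly 12 --keep-yearly 3 \
    /store/backups/projects.attic
```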


If and when a server outside of OpenStack dedicated to backup has been identified, one could simply add an rsync task to the same backup script to copy the attic repository to the target server.
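Such an rsync task could look like the following cron entry (the target host and destination path are placeholders, and the 01:15 schedule is an assumption chosen to run after the nightly snapshot):

```shell
# Hypothetical: mirror the attic repository to an off-OpenStack host
# once the nightly snapshot and prune have completed.
15 1 * * * root /usr/bin/rsync -a --delete /store/backups/projects.attic/ backuphost:/backups/projects.attic/
```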

OpenStack backup

OpenStack backup is an entirely different matter. As of 2019-05-21, the lack of such a built-in facility is an outstanding issue; see https://access.redhat.com/solutions/479663

General considerations and guidelines are available at https://wiki.openstack.org/wiki/OpsGuide/Backup_and_Recovery

To offer data protection to tenants, there is Raksha: https://wiki.openstack.org/wiki/Raksha

Raksha is a scalable data protection service for OpenStack cloud without the burden of handling complex administrative tasks associated with setting up backup products. OpenStack tenants can choose backup policies for their workloads and the Raksha service leverages existing hooks in Nova and Cinder to provide data protection services to tenants. The goal of this service is to provide data protection to OpenStack cloud, while automating data protection tasks including consistent snap of resources, creating space efficient data streams for snapped resources and streaming the backup data to swift end points. Just like any other service in OpenStack, Data Protection as a Service is consumed by tenants; hence, Horizon dashboard will be enhanced to support data protection service.

Without specialized software, the general approach would be:

  • first “freeze” the filesystem
  • take a snapshot
  • unfreeze the filesystem
  • then dump the snapshot to file
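Assuming an LVM-backed filesystem, the steps above can be sketched as follows (device, volume group, and mount names are placeholders; fsfreeze, lvcreate, and dd are standard Linux tools and must be run as root):

```shell
# 1. Freeze the filesystem so in-flight writes are flushed and the
#    snapshot is consistent.
fsfreeze --freeze /mnt/data

# 2. Take an LVM snapshot of the underlying logical volume.
lvcreate --snapshot --size 10G --name data-snap /dev/vg0/data

# 3. Unfreeze the filesystem; normal writes resume immediately.
fsfreeze --unfreeze /mnt/data

# 4. Dump the (now static) snapshot to a file, then discard it.
dd if=/dev/vg0/data-snap of=/backup/data-snap.img bs=4M
lvremove -f /dev/vg0/data-snap
```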

Specialized software to consider:

Last updated on 29 Mar 2019 / Published on 29 Mar 2019