
Original files and derivatives

These are stored in S3, are backed up within S3 by a process managed by AWS, and end up in long-term storage [ask Chuck about details post-Dubnium retirement]. No Ansible cron jobs were ever used in this workflow, so no changes to our existing setup are needed.

Database backups

A script on the production server, /home/ubuntu/bin/postgres-backup.sh, performed the following tasks nightly:

  • pg_dump the production database to /backups/pgsql-backup/.

  • aws s3 sync the contents of that directory to s3://chf-hydra-backup/PGSql.

The above script will need to be discarded.
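
For reference, a minimal sketch of what that script likely looked like (the database name, dump format, and filename are assumptions, not copied from the real script):

#!/bin/bash
# Dump the production database to the local backup directory.
pg_dump scihist_digicoll_production > /backups/pgsql-backup/scihist_digicoll_production.sql
# Mirror the backup directory to the S3 bucket described above.
aws s3 sync /backups/pgsql-backup/ s3://chf-hydra-backup/PGSql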

A second mechanism [ask Chuck for details] copies the backup file from S3 to a local network storage mount (/media/SciHist_Digicoll_Backup). This then gets backed up to tape.

Heroku database backups

We have three backup mechanisms under Heroku:

1. Continuous protection

Every professional-tier Heroku Postgres database comes with a behind-the-scenes Continuous Protection mechanism for disaster recovery. This doesn’t involve any actual backup files that we can see, but it does ensure that, in the event of a disaster, we can roll back the database to a prior state using a command like:

heroku addons:create heroku-postgresql:standard-0 --rollback DATABASE_URL --to '2021-06-02 20:20+00' --app scihist-digicoll-production

Details are at https://devcenter.heroku.com/articles/heroku-postgres-rollback.

2. Nightly physical backups

We also have a regular physical database backup scheduled:

heroku pg:backups:schedules --app scihist-digicoll-production
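
(The schedule itself would originally have been created with a command along these lines; the time and timezone here are assumptions, not our actual settings:)

heroku pg:backups:schedule DATABASE_URL --at '02:00 America/New_York' --app scihist-digicoll-production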

(Physical backups on Heroku Postgres are binaries that include dead tuples, bloat, indexes and all structural characteristics of the currently running database.)

You can check the metadata on the latest physical backups like this:

heroku pg:backups

To download a physical database dump (a006 here is an example backup ID):

heroku pg:backups:download a006
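
For more detail on a single backup, heroku pg:backups:info also works:

heroku pg:backups:info a006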

Note that a physical dump can easily be converted to a “logical” .sql database file:

pg_restore -f logical_database_file.sql physical.dump

Restoring from a physical backup involves uploading it to S3, creating a signed URL for the dump, and then running:

heroku pg:backups:restore '<SIGNED_URL_IN_S3>' DATABASE_URL # note that DATABASE_URL is a literal, not a placeholder
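
The signed URL can be generated beforehand with the AWS CLI, along these lines (the object key and expiry below are assumptions):

aws s3 presign s3://chf-hydra-backup/PGSql/physical.dump --expires-in 3600 # URL valid for one hour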

More details on this process: https://devcenter.heroku.com/articles/heroku-postgres-import-export#import

3. Logical backups to S3

We supplement the above with a rake task, rake scihist:copy_database_to_s3, which runs regularly on a one-off Heroku dyno, via the scheduler. This uploads a logical (plain vanilla SQL) database dump to S3, where it can wait to be harvested and put onto tape. This workflow serves preservation goals more than disaster recovery: logical .sql files offer portability (they’re UTF-8 text) and are useful in a variety of situations, unlike the physical backups.
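
If needed, the same task can be run by hand on a one-off dyno:

heroku run rake scihist:copy_database_to_s3 --app scihist-digicoll-production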

Given the size of the database in late 2020, the rake task takes 13 seconds, and the upload another 13. The entire job (with the overhead of starting up the dyno and tearing it down) takes a bit under a minute.

If our database grows much larger (20 GB or more), we will probably have to get rid of these frequent logical backups.

Historical notes

Prior to moving off our Ansible-managed servers, our backups were performed by cron jobs installed by Ansible. Backups and Recovery contains a summary of our pre-Heroku backup infrastructure.
