
Original files and derivatives

These are stored in S3, backed up within S3 by an AWS-managed process, and then copied to long-term storage by SyncBackPro, Windows software running on Promethium, managed by Chuck and Ponce (see https://www.2brightsparks.com/syncback/sbpro.html ).

Heroku database backups

We have three backup mechanisms under Heroku:

1. Continuous protection

Heroku offers us a behind-the-scenes Continuous Protection mechanism for database disaster recovery. This doesn’t involve any actual backup files that we can see or download. It does ensure that, in the event of a disaster, we can roll back the database to a prior state (right before a bunch of files were mistakenly deleted, for instance) using:

heroku addons:create heroku-postgresql:standard-0 --rollback DATABASE_URL --to '2021-06-02 20:20+00' --app scihist-digicoll-production

Details are at https://devcenter.heroku.com/articles/heroku-postgres-rollback .

For our database, this performs the rollback in under an hour. (The site remains up and usable while the rollback is being prepared and executed.)
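
Per the Heroku docs linked above, once the rollback database is ready it still needs to be promoted so the app actually uses it. A sketch of that step follows (the HEROKU_POSTGRESQL_PINK_URL attachment name is just a placeholder; check heroku pg:info for the real one):

heroku pg:wait --app scihist-digicoll-production
heroku pg:promote HEROKU_POSTGRESQL_PINK_URL --app scihist-digicoll-production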

2. Nightly physical backups

We supplement the above with a regular, nightly physical database backup, scheduled for 2am. The schedule can be listed with:

heroku pg:backups:schedules --app scihist-digicoll-production

(Physical backups are binary files that include dead tuples, bloat, indexes and all structural characteristics of the currently running database.)
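
The schedule itself would have been set with a command along these lines (the time zone here is an assumption; adjust to match the actual 2am schedule):

heroku pg:backups:schedule DATABASE_URL --at '02:00 America/New_York' --app scihist-digicoll-production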

The backups are stored by Heroku and can be listed by running heroku pg:backups.

You can check the metadata on the latest physical backups with heroku pg:backups. Downloading one of them, e.g. heroku pg:backups:download a006, will produce a file like:

$ file physical.dump
physical.dump: PostgreSQL custom database dump - v1.14-0

Note that a physical dump can easily be converted to a garden-variety “logical” .sql database file:

$ pg_restore -f logical_database_file.sql physical.dump

$ file logical_database_file.sql
logical_database_file.sql: UTF-8 Unicode text, with very long lines
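
If needed, that logical file can then be loaded into an existing local Postgres database with psql (the database name below is just a placeholder):

$ psql -d scihist_digicoll_development -f logical_database_file.sql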

Restoring from a nightly physical backup

For physical backups retained by Heroku (we retain up to 25), a restore takes about a minute and works like this:

heroku pg:backups:restore --app scihist-digicoll-production
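
By default this restores the most recent backup. To restore a specific one instead, pass its ID from heroku pg:backups (reusing the example ID from above):

heroku pg:backups:restore a006 DATABASE_URL --app scihist-digicoll-production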

If you downloaded a physical backup and have it stored on your local machine, and want to restore from that specific file, you will first need to upload it to S3, create a signed URL for the dump, and then run:

heroku pg:backups:restore '<SIGNED_URL_IN_S3>' DATABASE_URL # note: DATABASE_URL here is a literal, not a placeholder

More details on this process, including how to create a signed S3 URL: https://devcenter.heroku.com/articles/heroku-postgres-import-export#import
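
For example, a signed URL can be generated with the AWS CLI (the bucket and key below are placeholders):

$ aws s3 presign s3://some-bucket/physical.dump --expires-in 3600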

3. Preservation (logical) backups to S3

We supplement the above with a rake task, rake scihist:copy_database_to_s3, which runs regularly on a one-off Heroku dyno via the scheduler. This uploads a logical (plain SQL) database dump to S3, where it can wait to be harvested and put onto tape.
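
Conceptually, the task boils down to something like the following (the file and bucket names here are placeholders, not the task's actual values):

$ pg_dump --no-owner --no-acl -f scihist_digicoll.sql "$DATABASE_URL"
$ aws s3 cp scihist_digicoll.sql s3://some-backup-bucket/scihist_digicoll.sql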

This workflow serves more for preservation than for disaster recovery: logical .sql files offer portability (they’re plain UTF-8 text) and are useful in a variety of situations, unlike the physical backups.

Given the size of the database in late 2020, the entire job (with the overhead of starting up the dyno and tearing it down) takes a bit under a minute.

If our database grows much larger (20 GB or more), we will probably have to get rid of these frequent logical backups.

Finally, SyncBackPro on Promethium (managed by Chuck and Ponce) copies the S3 file to a local network storage mount (/media/SciHist_Digicoll_Backup), which is then backed up to tape.

Historical notes

Prior to moving off our Ansible-managed servers, our backups were performed by cron jobs installed by Ansible. Backups and Recovery contains a summary of our pre-Heroku backup infrastructure.

A script on the production server, /home/ubuntu/bin/postgres-backup.sh, used to perform the following tasks nightly:

  • pg_dump the production database to /backups/pgsql-backup/.

  • aws s3 sync the contents of that directory to s3://chf-hydra-backup/PGSql.
