
Here is the current backup strategy as a diagram:

Recovery Options

Fedora:

S3:

Currently we use the aws s3 sync tool (akin to rsync for S3) to pull key Fedora data into the chf-hydra-backup bucket. The bucket name is a slight misnomer, as it now holds ArchivesSpace backups as well; the Fedora data is pulled into two locations:

  • FedoraBackup (contains all Fedora binary data)

  • PGSql (contains the Fedora Postgres database dump, fcrepo_backup.sql)

Both sets are needed to do a full restore.
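
For context, the backup side is essentially an aws s3 sync plus a database dump pushed into those two prefixes. The following is a minimal sketch of what those commands look like; the dump path and how the job is scheduled are assumptions, not copied from our playbooks:

    # On the Fedora machine: dump the Postgres database and push it to the PGSql prefix
    pg_dump fcrepo > /tmp/fcrepo_backup.sql
    aws s3 cp /tmp/fcrepo_backup.sql s3://chf-hydra-backup/PGSql/fcrepo_backup.sql
    # Sync the Fedora binary data into the FedoraBackup prefix
    aws s3 sync /opt/fedora-data s3://chf-hydra-backup/FedoraBackup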

Note: although S3's web console displays folders, those "folders" are just key prefixes on individual objects; folders do not exist in S3.

  1. Stop Tomcat
  2. Download the Postgres database fcrepo_backup.sql to the Fedora machine.
  3. Fedora might still have active connections to Postgres; a restart (sudo service postgresql restart) will kill them.
  4. Import the database: psql fcrepo < fcrepo_backup.sql
    1. If the database already exists (for example, when syncing to an existing environment rather than restoring to a fresh one), drop the existing database first and then run the import.
  5. Check that the user trilby has permissions to access and use the newly made fcrepo database.
  6. Delete the existing folder(s) inside /opt/fedora-data (This step is not always required but makes it simpler)
  7. Using screen or tmux, start an aws s3 sync to copy all the data from the FedoraBackup "folder" to /opt/fedora-data
  8. Wait a while for all the data (>800 GB) to copy over.
  9. Run chown -R tomcat7:tomcat7 /opt/fedora-data to give ownership on the new files to the tomcat user so Fedora can access them.
  10. Restart Tomcat
  11. This completes the Fedora restore. Current cost estimates (2/18) are about $0.10 for this restore. A command sketch of steps 2-9 follows this list.
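
Roughly, steps 2-9 translate to the following commands on the Fedora machine; treat this as a sketch, since the exact S3 key for the dump and the local download path are assumptions:

    # With Tomcat stopped:
    aws s3 cp s3://chf-hydra-backup/PGSql/fcrepo_backup.sql /tmp/fcrepo_backup.sql
    sudo service postgresql restart             # kill any lingering Fedora connections
    psql fcrepo < /tmp/fcrepo_backup.sql        # import the database (drop/recreate it first if it already exists)
    sudo rm -rf /opt/fedora-data/*              # clear out the old binary data
    aws s3 sync s3://chf-hydra-backup/FedoraBackup /opt/fedora-data   # run inside screen/tmux; >800 GB
    sudo chown -R tomcat7:tomcat7 /opt/fedora-data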

Users:

  1. Go to S3 and download the postgres backup files.
  2. Stop Apache
  3. Restart the postgres service; this drops the connection that Hydra holds to its database while running, so you can replace it.
  4. In Postgres, delete the automatically generated chf_hydra database
    1. Log in via psql -U postgres
      1. The postgres account password is in ansible-vault (groupvars/all)
    2. Run: DROP DATABASE chf_hydra;
    3. Run: CREATE DATABASE chf_hydra;
  5. Then import the downloaded database
    1. Depending on the dump format, either:
      1. pg_restore -d chf_hydra -U postgres chf_hydra.dump (for a custom-format dump)
      2. psql chf_hydra < chf_hydra_dump.sql (for a plain SQL dump)
  6. Then set permissions
    1. psql -U postgres
    2. GRANT Create,Connect,Temporary ON DATABASE chf_hydra TO chf_pg_hydra;
  7. You may now restart Postgres and Apache2. A condensed command sketch follows this list.
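
Condensed, the above looks something like this on the app server, assuming the dump has already been downloaded from S3 as chf_hydra.dump (that filename and location are assumptions):

    sudo service apache2 stop
    sudo service postgresql restart                       # drop Hydra's open connection
    psql -U postgres -c 'DROP DATABASE chf_hydra;'
    psql -U postgres -c 'CREATE DATABASE chf_hydra;'
    pg_restore -d chf_hydra -U postgres chf_hydra.dump    # or: psql chf_hydra < chf_hydra_dump.sql
    psql -U postgres -d chf_hydra -c 'GRANT CREATE, CONNECT, TEMPORARY ON DATABASE chf_hydra TO chf_pg_hydra;'
    sudo service apache2 start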

Minter:

The minter now lives in Postgres: restoring the chf_hydra database on the app server (as described under Users above) restores the minter as well.


Redis:

Redis keeps an in-memory database holding transaction record data, such as the history of edits on a record. It does not contain the actual object data, only the timeline of changes. Losing it means the history of object edits is lost, but the objects themselves will be fine.

  1. Copy over redis-dump/dump.rdb to App
  2. Change its ownership to the redis user
    1. sudo chown redis:redis dump.rdb
  3. Then you will need to stop the redis server
    1. sudo service redis-server stop
  4. Move the dump to /var/lib/redis/dump.rdb, overwriting the existing dump.rdb there
  5. Restart redis
    1. sudo service redis-server start
  6. On startup, Redis reads the .rdb dump file and loads that data back into its in-memory database. A command sketch follows this list.
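
As a command sketch, assuming the dump was copied to the App server as redis-dump/dump.rdb per step 1:

    sudo service redis-server stop
    sudo cp redis-dump/dump.rdb /var/lib/redis/dump.rdb   # overwrite the existing dump
    sudo chown redis:redis /var/lib/redis/dump.rdb
    sudo service redis-server start                       # Redis reloads the .rdb on startup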


Indexing:

The index is backed up to reduce time to recovery for DR or migrations. If the backup is not accessible, a manual reindex can be done following the instructions in Application administration; that process takes at least one business day, so rebuilding from the backup is preferred.

  1. From the chf-hydra-backup bucket, pull down the solr-backup.tar.gz file under Solr to the Solr server.
  2. Extract the archive
  3. Use the Solr restore commands at Application administration (a download/extract sketch follows this list)
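
A sketch of the download and extraction (the exact key under Solr and the working directory are assumptions):

    mkdir -p /tmp/solr-backup
    aws s3 cp s3://chf-hydra-backup/Solr/solr-backup.tar.gz /tmp/
    tar -xzf /tmp/solr-backup.tar.gz -C /tmp/solr-backup
    # then run the Solr restore commands documented in Application administration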
     


Last steps:

Costs

A quick cost analysis puts restoration at $30-35; this is as of 6/11/2018, with approximately 1 TB of data. Approximately 66% of the cost was inter-region transfer fees (moving data from US-WEST to US-EAST); the rest is standard LIST, GET, and related request fees.
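
As a rough sanity check on that split (assuming the then-standard inter-region transfer rate of about $0.02/GB): 1 TB ≈ 1,000 GB × $0.02/GB ≈ $20, which lines up with transfer fees being roughly two-thirds of a ~$30 bill.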


Kithe recovery

Kithe currently (March 2019) has a small set of data to be handled for recovery.

  • A postgres database which contains user data and item metadata
  • Original binary files
  • Derivative files

The first two are the ones that are strictly needed; the derivatives are backed up only because the cost of doing so is low relative to the time saved on a recovery by having a copy ready.

The postgres database is backed up to S3 (currently unscheduled, fix this)
The binary files are replicated via S3 replication to a second location in US-WEST (rather than US-EAST) in case of outages. When we actually switch over, these will also be backed up to local on-site storage.
The derivative files will also be replicated via S3 replication to a US-WEST location.
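
Once the Postgres backup is scheduled, it could be as simple as a nightly cron job along these lines; the database name, bucket, and schedule below are placeholders rather than the real configuration:

    # Hypothetical nightly job (e.g. 0 3 * * * in cron) on the Kithe database host
    pg_dump -Fc kithe_production > /tmp/kithe_production.dump
    aws s3 cp /tmp/kithe_production.dump s3://chf-kithe-backup/PGSql/kithe_production.dump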

Recovery levels:

When we look at recovery it will be useful to distinguish between full and partial recoveries. The following classifications may prove helpful:

  • Raw data only: In this case we only have access to the raw data; this level only applies in the event of a massive outage that destroys all of AWS.
  • Partial public, no staff recovery: The public has access to limited functionality, but features like derived images may not be fully restored; staff can access public functions but cannot do additional work.
  • Partial public, partial staff recovery:
  • Full public, no staff recovery:
  • Full public, full staff recovery:


