...

  • Migrate production to LibraryHost

    • Preparation:

    • Migration proper (this needs to take place within a short period: ideally, 2 or 3 business days).

      • Preparation

        • Send out a notice to users reminding them of the migration and warning them to make any final metadata changes before the end of the business day (tick)

        • Create an IAM policy and user that gives LibraryHost write access to the backup directory in S3, and to nothing else. (tick)
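
          A hedged sketch of the write-only setup using the AWS CLI; the bucket, prefix, user, policy, and account ID below are placeholders, not our real resource names:

            # Scoped to PutObject on the backup prefix and nothing else.
            aws iam create-user --user-name libraryhost-backup
            aws iam create-policy --policy-name libraryhost-backup-write --policy-document \
              '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"s3:PutObject","Resource":"arn:aws:s3:::example-backup-bucket/libraryhost-backups/*"}]}'
            aws iam attach-user-policy --user-name libraryhost-backup \
              --policy-arn arn:aws:iam::123456789012:policy/libraryhost-backup-write
            aws iam create-access-key --user-name libraryhost-backup

          If LibraryHost’s backup tooling also needs to list what it has uploaded, s3:ListBucket on the bucket itself can be added to the same policy.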

        • Users can make their final edits to the old ArchivesSpace metadata before the migration. (tick)

        • Move the latest backup of the old ArchivesSpace DB out of the regular backup directory into another directory where it won’t get overwritten. (tick)
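
          A sketch assuming the “regular backup directory” is a local directory on the old server; all paths are placeholders (if the backups live in S3 instead, the equivalent is aws s3 mv):

            # Keep the final pre-migration dump somewhere the rotation/overwrite scripts never touch.
            mkdir -p /home/ubuntu/archivesspace_backups_retained
            cp -p /home/ubuntu/archivesspace_backups/latest.sql.gz \
                  /home/ubuntu/archivesspace_backups_retained/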

        • Create a dump of the database that can be sent to LibraryHost (tick)
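
          ArchivesSpace keeps its data in MySQL, so the dump is a conventional mysqldump; the database name and user below are placeholders:

            # --single-transaction keeps the dump consistent without locking editors out mid-dump.
            mysqldump --single-transaction --routines -u aspace -p archivesspace \
              | gzip > archivesspace-$(date +%Y%m%d).sql.gz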

        • Run the mapping script and send updated collection description links to Gabriela and Caroline (tick)

        • Prevent users from logging in to the old site by turning off the ArchivesSpace daemon. (tick)
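
          How exactly depends on how the daemon is managed on the EC2 host; both of the likely shapes are shown below, with the install path as a placeholder:

            # Stock init-style script shipped with ArchivesSpace:
            /opt/archivesspace/archivesspace.sh stop
            # ...or, if the Ansible setup wrapped it in a systemd unit:
            sudo systemctl stop archivesspace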

        • Turn off the EAD export (tick)
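
          The export runs as a scheduled Heroku task (re-enabled later under “Exports and backups”), so turning it off means removing or blanking that job in Heroku Scheduler; the app name below is a placeholder:

            # Scheduler jobs are edited in the dashboard; this just opens it for the export app.
            heroku addons:open scheduler -a export-archivesspace-xml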

      • Migration

        • Eddie sends the database over from the EC2 ArchivesSpace server to Manny. (tick)

        • Create an issue and PR to change the data in the digital collections to the new URLs. (These URLs won’t work until archives.sciencehistory.org is pointing at the new server.) (tick)

        • Wait until our wildcard certificate is installed at the new server and it can accept connections from our domain. (tick)
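
          Because DNS still points at the old server at this stage, the check has to hit the LibraryHost host directly while presenting our domain via SNI; the LibraryHost hostname below is a placeholder:

            # Confirm the certificate the new server serves actually covers archives.sciencehistory.org.
            openssl s_client -connect scihist.libraryhost.com:443 \
              -servername archives.sciencehistory.org </dev/null 2>/dev/null \
              | openssl x509 -noout -subject -enddate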

        • Point archives.sciencehistory.org at the new LibraryHost server (with help from Chuck). (tick)
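
          Once Chuck has made the DNS change, a quick spot check that it has propagated:

            # The answer should now be the LibraryHost server, not our old Elastic IP.
            dig +short archives.sciencehistory.org
            curl -sI https://archives.sciencehistory.org/ | head -n 1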

        • Update the OPAC and the digital collections to point at the new production PUI URLs. (tick)

          💡 Note that external links to our HTML finding aids are rare. There should be no need to provide redirects from the old URLs to the PUI when we discontinue the HTML finding aids.

        • Spin down the EC2 servers; leave them around, inactive, for a month or so. (tick)
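
          “Spin down” here means stop rather than terminate, so the instances can be brought back if something surfaces during the grace period; the instance IDs below are placeholders:

            # Stopped instances keep their EBS volumes; termination is a separate cleanup step.
            aws ec2 stop-instances --instance-ids i-0123456789abcdef0 i-0fedcba9876543210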

      • Exports and backups

        • Update our EAD export to point at the LibraryHost API instead of our ArchivesSpace server. (tick)

          Part of this task is re-adding the scheduled Heroku task, which was disabled during the migration. Add a task to run the following every week:

          bin/proximo bundle exec ruby run.rb && curl https://api.honeybadger.io/v1/check_in/OaIlNl &> /dev/null

        • Ensure backups are going to our S3 backup bucket as planned. (tick)
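
          One way to verify, with the bucket and prefix as placeholders: list the newest objects and confirm the timestamps keep advancing on schedule.

            # The most recent keys should be dated within the last backup cycle.
            aws s3 ls s3://example-backup-bucket/libraryhost-backups/ --recursive \
              | sort | tail -n 5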

    • Cleanup (no deadline)

      • Make sure the EAD export is running weekly instead of daily

      • Go back to a development ($5 / month) Proximo plan. We temporarily moved to the Development plan on May 23rd. See https://github.com/sciencehistory/export_archivesspace_xml/issues/18 for details and instructions.

      • Permanently remove old production ArchivesSpace EC2 server (tick)
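
        Termination is irreversible, so it may be worth snapshotting the data volume first; the volume and instance IDs below are placeholders:

          # Optional final snapshot, then terminate the old production instance.
          aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
            --description "Final ArchivesSpace snapshot before termination"
          aws ec2 terminate-instances --instance-ids i-0123456789abcdef0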

      • Since we will now have secure access to the API, possibly remove Proximo from export_archivesspace_xml altogether and run the export over HTTPS using plain username/password authentication. Also decide whether we still want to back up the database on Dubnium: `/home/ubuntu/archivesspace_scripts/mysql-backup.sh`, which dumped the database to a location where it was harvested by the Dubnium backup scripts, will have stopped running. (tick)
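
        If we do drop Proximo, the export would authenticate directly against the ArchivesSpace backend API, which exchanges a username and password for a session token; the staff API URL and account name below are placeholders, and jq is assumed for parsing the response:

          # Log in, capture the session token, then call the API with it over HTTPS.
          SESSION=$(curl -s -F password="$ASPACE_PASSWORD" \
            "https://scihist.libraryhost.com/staff/api/users/export_user/login" | jq -r .session)
          curl -s -H "X-ArchivesSpace-Session: $SESSION" \
            "https://scihist.libraryhost.com/staff/api/repositories"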

      • Retire the Ansible codebase (tick)

      • Make sure Chuck’s backup pipeline is getting nightly backups (tick)

      • Retire AWS services that existed only to support ArchivesSpace (such as our single remaining Elastic IP address) (tick)
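
        For the Elastic IP specifically (an unattached address also accrues a small hourly charge), with the allocation ID as a placeholder:

          # Confirm nothing still references the address, then release it.
          aws ec2 describe-addresses
          aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0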