Fall 2021
Eddie migrates our local production to 3.0.1. Should take about a day.
Subtask: green light from Patrick, Michelle, Chuck, and Kent
Subtask: create a “throwaway” demo 3.0.1 server
Subtask: confirm that current EAD export code still works against 3.0.1
Research backups from LibraryHost
LibraryHost can provide nightly backups delivered to our S3 bucket.
Update Ansible to build a 3.0.1 ArchivesSpace server.
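Once LibraryHost's nightly backups start landing in our bucket, a quick freshness check is worth scripting. This is a hedged sketch only: the bucket name, prefix, and the `age_in_hours` helper are placeholders, not our real configuration.

```shell
#!/usr/bin/env bash
# Sketch: confirm the newest LibraryHost backup in S3 is recent.
# Bucket and prefix are hypothetical names, not production values.
set -euo pipefail

BUCKET="chf-archivesspace-backups"   # hypothetical bucket name
PREFIX="libraryhost/nightly/"        # hypothetical prefix

# Age, in whole hours, between two epoch timestamps (older, newer).
age_in_hours() { echo $(( ($2 - $1) / 3600 )); }

if command -v aws >/dev/null 2>&1; then
  # Newest object under the prefix, sorted by LastModified.
  aws s3api list-objects-v2 --bucket "$BUCKET" --prefix "$PREFIX" \
    --query 'sort_by(Contents, &LastModified)[-1].[Key,LastModified]' \
    --output text
fi
```

The `list-objects-v2` query sorts server-listed objects client-side, so it works regardless of key naming; comparing the returned `LastModified` against the current time (via `age_in_hours`) tells us whether last night's job ran.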
Winter and spring 2022
Kent gets comfortable using 3.0.1
Eddie
Creates a new project to export the EADs to S3.
Subtask: describe all AWS resources (bucket, user, policy, state infrastructure) in Terraform.
Unpublishes all digital objects (see issue)
With our contact at LibraryHost (Manny), makes sure:
LibraryHost API accepts secure (HTTPS) connections
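The EAD-to-S3 export project above can be sketched against the stock ArchivesSpace API (session login, then the `resource_descriptions` XML endpoint). Everything here is an assumption for illustration: the host, credentials, bucket, and the example repository/resource ids are placeholders.

```shell
#!/usr/bin/env bash
# Sketch of an EAD-to-S3 export using the standard ArchivesSpace API.
# Host, credentials, ids, and bucket are placeholders, not production values.
set -euo pipefail

HOST="https://aspace.example.org"   # hypothetical API endpoint
BUCKET="s3://chf-ead-exports"       # hypothetical bucket

# Path of the EAD download endpoint for a (repository, resource) pair.
ead_path() { echo "/repositories/$1/resource_descriptions/$2.xml"; }

# Run only when credentials are configured in the environment.
if [ -n "${ASPACE_USER:-}" ] && [ -n "${ASPACE_PASSWORD:-}" ]; then
  # Log in: the API returns JSON containing a session token.
  SESSION=$(curl -s -F password="$ASPACE_PASSWORD" \
    "$HOST/users/$ASPACE_USER/login" | jq -r .session)

  # Fetch one EAD and push it to S3.
  curl -s -H "X-ArchivesSpace-Session: $SESSION" \
    "$HOST$(ead_path 2 8)" -o resource_8.xml
  aws s3 cp resource_8.xml "$BUCKET/ead/resource_8.xml"
fi
```

In the real project this loop would run over every published resource; the sketch shows a single fetch to keep the shape of the API interaction visible.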
Two interns
Add more collections to ArchivesSpace (starting in February and ending in May)
Sarah
Migrates selected data from 3.0.1 production to LibraryHost via spreadsheets
Tests and demos the PUI on LibraryHost using said data
Works with Eddie, CHSTM, and UPenn so that EADs are harvested from the new location in S3
Adds branding to the PUI (simply a header image).
Turns on the PUI on the EC2 staging machine and (with Eddie) comes up with a recipe for mapping the old URLs to the new ones. (For instance, the Shoulders papers, at https://shi-staging.libraryhost.com/repositories/2/resources/2 in the LibraryHost staging setup, will actually be hosted at https://archivesspace.sciencehistory.org/repositories/3/resources/8, since that collection has resource id 8.)
Works out a new budget with Chuck and LibraryHost for launch early in the new fiscal year
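The URL-mapping recipe above could start from a hand-maintained CSV of id pairs (`old_repo,old_id,new_repo,new_id`) and emit one redirect line per row. This is a hedged sketch: the CSV format and `map_urls` helper are assumptions; the one data row matches the Shoulders example in the plan.

```shell
#!/usr/bin/env bash
# Sketch of the old-URL -> new-URL mapping recipe. The CSV layout
# (old_repo,old_id,new_repo,new_id) is an assumed working format.
set -euo pipefail

NEW_HOST="https://archivesspace.sciencehistory.org"

# Turn each CSV row into a "from -> to" redirect line.
map_urls() {
  awk -F, -v host="$NEW_HOST" \
    '{ printf "/repositories/%s/resources/%s -> %s/repositories/%s/resources/%s\n", $1, $2, host, $3, $4 }'
}

# Shoulders papers: staging resource 2/2 maps to production 3/8.
printf '2,2,3,8\n' | map_urls
```

The same output could be reshaped into whatever the web server wants (an nginx `map`, Apache `Redirect` lines, or a lookup table for the digital collections app) without changing the CSV.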
Summer 2022
Migrate production to LibraryHost
Preparation:
Purchase a production LibraryHost plan
Ensure the PUI is accessible.
Ensure the new server’s API is accessible via HTTPS and inaccessible via HTTP.
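The HTTPS/HTTP checks in the preparation list can be scripted. A sketch, with the production host name as a placeholder; the live calls are gated behind an environment flag so the helper logic can be exercised on its own.

```shell
#!/usr/bin/env bash
# Sketch of the access check: the new API should answer over HTTPS and
# refuse plain HTTP. The host name is a placeholder, not the real one.
set -euo pipefail

API_HOST="shi.libraryhost.com"   # hypothetical production host

# A response code in the 2xx/3xx range counts as reachable.
reachable() { case "$1" in 2??|3??) echo yes ;; *) echo no ;; esac; }

if [ "${RUN_LIVE_CHECK:-}" = "1" ] && command -v curl >/dev/null 2>&1; then
  https_code=$(curl -s -o /dev/null -w '%{http_code}' "https://$API_HOST/" || true)
  http_code=$(curl -s -o /dev/null -w '%{http_code}' "http://$API_HOST/" || true)
  echo "https reachable: $(reachable "$https_code")"
  echo "http reachable:  $(reachable "$http_code")  (we want: no)"
fi
```

Run with `RUN_LIVE_CHECK=1` once the host name is real; the desired result is HTTPS reachable, plain HTTP not (or redirected to HTTPS, depending on how LibraryHost configures it).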
Migration proper (this needs to take place within a short period: ideally, 2 or 3 business days).
Monday July 11th
Send out a notice to users reminding them of the migration and warning them to make any final metadata changes before the end of the business day
Create an IAM policy and user that gives LibraryHost write access to the backup directory in S3, and to nothing else.
Users can make their final edits to the old ArchivesSpace metadata before the migration.
Move the latest backup of the old ArchivesSpace DB out of the regular backup directory into a separate directory where it won’t get overwritten.
Create a dump of the database that can be sent to LibraryHost
Run the mapping script and send updated collection description links to Gabriela and Caroline
Prevent users from logging in to the old site by turning off the ArchivesSpace daemon.
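Several of the Monday steps (parking the last nightly dump, taking a fresh dump for LibraryHost, and scoping the write-only IAM policy) can be sketched together. Everything below is an assumption for illustration: the bucket, key names, database name, and policy ARN are placeholders, and the live AWS/MySQL calls are gated behind an environment flag.

```shell
#!/usr/bin/env bash
# Sketch of the Monday migration-day steps. Bucket, paths, database name,
# and the policy document are hypothetical, not production values.
set -euo pipefail

BUCKET="s3://chf-archivesspace-backups"   # hypothetical bucket

# Write-only policy limited to the backup prefix (hypothetical ARN).
cat > libraryhost-backup-write.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:PutObject"],
    "Resource": "arn:aws:s3:::chf-archivesspace-backups/libraryhost/*"
  }]
}
EOF

if [ "${RUN_MIGRATION:-}" = "1" ] && command -v aws >/dev/null 2>&1; then
  # 1. Park the latest dump where the nightly job won't overwrite it.
  aws s3 mv "$BUCKET/nightly/latest.sql.gz" "$BUCKET/pre-migration/latest.sql.gz"
  # 3. Create the policy (attach it to a dedicated user afterwards).
  aws iam create-policy --policy-name libraryhost-backup-write \
    --policy-document file://libraryhost-backup-write.json
fi

if [ "${RUN_MIGRATION:-}" = "1" ] && command -v mysqldump >/dev/null 2>&1; then
  # 2. Fresh, consistent dump of the old database to hand to LibraryHost.
  mysqldump --single-transaction archivesspace | gzip > aspace-final.sql.gz
fi
```

Limiting the policy to `s3:PutObject` on the backup prefix means LibraryHost can deposit dumps but can neither read, list, nor delete anything in the bucket.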
Monday July 11th:
Eddie sends the database over from the EC2 ArchivesSpace to Manny.
Create an issue and PR to change the data in the digital collections to the new URLs. (These URLs won’t work until the migration is complete.)
Point archives.sciencehistory.org at the new LibraryHost server (with help from Chuck).
Update the OPAC and the digital collections to point at the new production URLs in the production PUI – this may take a couple of days.
💡 Note that external links to our HTML finding aids are rare. There should be no need to provide redirects from the old URLs to the PUI when we discontinue the HTML finding aids.
Spin down the EC2 servers - leave them around, inactive, for a month or so.
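After the DNS change, a quick check that the name now resolves to LibraryHost is cheap insurance. A sketch; the expected CNAME target is a guess, not a confirmed value, and the live lookup is gated behind an environment flag.

```shell
#!/usr/bin/env bash
# Sketch of a post-cutover DNS check. The expected CNAME target is a
# hypothetical LibraryHost host name, not a confirmed value.
set -euo pipefail

NAME="archives.sciencehistory.org"
EXPECTED="shi.libraryhost.com"   # hypothetical target

# dig appends a trailing dot to fully qualified names; strip it.
fqdn() { echo "${1%.}"; }

if [ "${RUN_DNS_CHECK:-}" = "1" ] && command -v dig >/dev/null 2>&1; then
  target=$(fqdn "$(dig +short CNAME "$NAME")")
  echo "resolves to: $target (expected: $EXPECTED)"
fi
```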
Wednesday July 13th
Update our EAD export to point at the LibraryHost API instead of our ArchivesSpace server.
Ensure backups are going to our S3 backup bucket as planned.
Cleanup (no deadline; should happen by the end of July).
Retire the Ansible codebase
Retire AWS services that existed only to support ArchivesSpace (such as our single remaining elastic IP address)
Change EAD export to run weekly instead of daily
Go back to a development ($5/month) Proximo plan. We temporarily moved to the Development plan on May 23rd; see https://github.com/sciencehistory/export_archivesspace_xml/issues/18 for details and instructions. Since we will now have secure access to the API, possibly remove Proximo altogether and run the export using plain username/password authentication.
Decide whether we still want to back up the database on Dubnium; `/home/ubuntu/archivesspace_scripts/mysql-backup.sh`, which dumped the database to a location where it was harvested by the Dubnium backup scripts, will have stopped running.
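The daily-to-weekly change for the EAD export comes down to one cron field. A sketch with an illustrative script path and schedule; the real crontab entry may differ.

```shell
#!/usr/bin/env bash
# Sketch of the cron change behind "weekly instead of daily". The job path
# and run time are illustrative, not the real crontab entry.
set -euo pipefail

JOB="/opt/export_archivesspace_xml/run.sh"   # hypothetical path

# cron fields: minute hour day-of-month month day-of-week
daily="0 2 * * * $JOB"    # every night at 02:00
weekly="0 2 * * 0 $JOB"   # Sundays at 02:00

echo "replace: $daily"
echo "with:    $weekly"
```

Only the fifth field changes (`*` to `0`, i.e. Sunday); the rest of the entry stays as-is.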