...
Save the final config/local_env.rb files for staging and prod to a secure area on the P drive, for short-term reference.
Stop all the scihist_digicoll EC2 instances. Do not delete them yet: we can still switch back to them in the event of a major problem, and we want them on hand for reference in the short term.
Turn off the update cron jobs on the management server.
Get the Ansible ArchivesSpace build working from somewhere other than the management server, e.g. again from our development machines (PR). Eddie's laptop is fine for right now.
Continue to resolve post-launch problems with Heroku.
Remove Voices in Biotech code from Ansible. (PR)
Remove all EC2 snapshots and unused AMIs (in progress).
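The snapshot and AMI cleanup could be scripted with the AWS CLI. A minimal sketch, assuming the CLI is configured with credentials for our account (the snapshot and AMI IDs below are placeholders, not real resources):

```shell
# List all EBS snapshots owned by this account, oldest first,
# so they can be reviewed before deletion
aws ec2 describe-snapshots --owner-ids self \
  --query 'sort_by(Snapshots, &StartTime)[].{Id:SnapshotId,Started:StartTime,Desc:Description}' \
  --output table

# After review, delete a snapshot by ID (placeholder ID)
aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0

# Deregister an unused AMI (placeholder ID); its backing
# snapshots are not removed automatically and must be deleted separately
aws ec2 deregister-image --image-id ami-0123456789abcdef0
```

Running the describe step first and deleting only after a human review keeps this reversible in spirit, matching the "do not delete yet" caution above.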
Medium term
Once Heroku has been stable for 4 to 6 weeks:
Remove
All the scihist_digicoll boxes (yes, actually terminate the EC2 instances)
Management server (ditto: terminate)
The elastic IP addresses digicoll-staging and digicoll-production
The local_env.rb files
All volumes that are not in use
Unnecessary playbooks and roles from Ansible. (Best done in a sequence of Bitbucket PRs; this is actually quite a project in its own right.)
Keep
ArchivesSpace servers - staging and production
Anything in Ansible required to build the ArchivesSpace servers; the rest of Ansible can go away 👋
Review, in collaboration with Chuck and Vince:
Reserved instances
Security groups
Long term
Once the Ansible codebase is simplified, move the contents to a new, private GitHub repository.
Once Sarah Newhouse is settled, invite her to convene ArchivesSpace stakeholders to discuss the future of the server and its functions.
Attempt issue #1043: explore Terraform (or even a rake task or shell script) instead of Ansible to maintain our S3 configuration.
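If we go the Terraform route, managing a bucket might look roughly like the following. This is a minimal sketch only; the bucket name, region, and versioning setting are hypothetical placeholders, not our actual S3 configuration:

```terraform
# Hypothetical sketch — names and settings are placeholders
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "derivatives" {
  bucket = "example-scihist-derivatives"
}

# Bucket settings live in separate resources in recent AWS provider versions
resource "aws_s3_bucket_versioning" "derivatives" {
  bucket = aws_s3_bucket.derivatives.id
  versioning_configuration {
    status = "Enabled"
  }
}
```

The appeal over Ansible here is that `terraform plan` shows drift between the declared configuration and what actually exists in S3 before anything is changed; a rake task or shell script would have to reimplement that diffing by hand.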
...