ArchivesSpace is “an open source archives information management application for managing and providing web access to archives, manuscripts and digital objects.” In August 2022 we switched from hosting our own ASpace server on EC2 to a third-party-hosted instance at LibraryHost. Our annual plan renews in September. From 9/25/2022-9/26/2022 we are paying for a Light Plan.

PUI: https://archives.sciencehistory.org

SUI: https://archives.sciencehistory.org/admin

API: https://sciencehistory-api.libraryhost.com/   

IP: 50.116.19.60
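The API endpoint above uses the standard ArchivesSpace backend authentication flow: POST a password to /users/:username/login, then pass the returned session token in the X-ArchivesSpace-Session header on later requests. A sketch with placeholder credentials (the admin username and password are not recorded in these notes):

```shell
# Placeholder credentials; replace admin/PASSWORD with a real API account.
TOKEN=$(curl -s -F password='PASSWORD' \
  'https://sciencehistory-api.libraryhost.com/users/admin/login' \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["session"])')

# List repositories using the session token.
curl -s -H "X-ArchivesSpace-Session: $TOKEN" \
  'https://sciencehistory-api.libraryhost.com/repositories'
```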


...

OBSOLETE – Technical details about the server

This section should be removed later in 2022.

ArchivesSpace lives on an AWS EC2 server, ArchivesSpace-prod, at https://50.16.132.240/ (also reachable at https://archives.sciencehistory.org).

The current production version of ASpace is 3.0.1.

Terminal access: ssh -i /path/to/production/pem_file.pem ubuntu@50.16.132.240

The ubuntu user owns all the admin scripts.

The relevant Ansible role is: /roles/archivesspace/ in the ansible-inventory codebase.

SSL was set up following the instructions at http://www.rubydoc.info/github/archivesspace/archivesspace.

The executables are at /opt/archivesspace/

The configuration file is /opt/archivesspace/config/config.rb.
Logs are at /opt/archivesspace/logs/archivesspace.out.

Apache logs are at /var/log/apache2/.

Configuration for the Apache site is at /etc/apache2/sites-available/000-default.conf.

OBSOLETE – Startup

  • To start ArchivesSpace:

    • /opt/archivesspace/archivesspace.sh start (as user ubuntu)

  • There may be a short delay as the server re-indexes data.

OBSOLETE – Restarting the server to fix the Tomcat memory leak

We restart the ArchivesSpace program (not the server) using a cronjob that runs /opt/archivesspace/archivesspace.sh restart every night at 2 am. This prevents a chronic memory leak from eating up all the CPU credits for the machine.
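The crontab entry itself is not reproduced in these notes; assuming standard five-field cron syntax, it would look something like the following sketch (verify against `crontab -l` as the ubuntu user):

```shell
# Hypothetical crontab entry for the ubuntu user; the real entry on the
# old server may differ. Restarts ArchivesSpace nightly at 2 am.
0 2 * * * /opt/archivesspace/archivesspace.sh restart
```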

When the server is restarted, Jetty creates a set of temporary files in /tmp that look like this:

hsperfdata_ubuntu
jetty-0.0.0.0-8089-backend.war-_-any-3200460420275417425
jetty-0.0.0.0-8090-solr.war--any-_1669707332158985985
jetty-0.0.0.0-8091-indexer.war-_aspace-indexer-any-3026688914663148716
jetty-0.0.0.0-8080-frontend.war--any-3028692540497613460
jetty-0.0.0.0-8081-public.war--any-268053434795494538
jetty-0.0.0.0-8082-oai.war--any-_243630232179303838

Only the most recent set are used by Jetty, but the old ones accumulate rapidly if the server is restarted nightly.

A cron job removes obsolete ones nightly.
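That cleanup script is not reproduced in these notes; a minimal sketch of the idea, assuming a one-day retention window and the /tmp naming shown above (the function name and retention window are assumptions):

```shell
#!/bin/bash
# clean_jetty_tmp DIR: remove Jetty work directories under DIR whose
# mtime is more than a day old, keeping the set from the latest restart.
# The function name and one-day retention window are assumptions.
clean_jetty_tmp() {
  find "$1" -maxdepth 1 -type d -name 'jetty-*' -mtime +0 -exec rm -rf {} +
}

# Nightly cron usage would be something like: clean_jetty_tmp /tmp
```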

Backups

A nightly backup is uploaded by LibraryHost to s3://chf-hydra-backup/Aspace/aspace-backup.sql.
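To restore from this backup, one would copy the dump down and load it into MySQL. A sketch, assuming the database is named archivesspace and AWS credentials are configured locally; neither assumption is confirmed by these notes:

```shell
# Restore sketch: the database name (archivesspace) and credential
# handling are assumptions, not confirmed by these notes.
aws s3 cp s3://chf-hydra-backup/Aspace/aspace-backup.sql /tmp/aspace-backup.sql
mysql -u root -p archivesspace < /tmp/aspace-backup.sql
```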

Export

The ArchivesSpace EADs are harvested by:

Institution                                                         | Liaison        | Contact
Center for the History of Science, Technology, and Medicine (CHSTM) | Richard Shrake | shraker13@gmail.com
University of Penn Libraries Special Collections                    | Holly Mengel   | hmengel@pobox.upenn.edu

Both institutions harvest the EADs at http://ead.sciencehistory.org/.

OBSOLETE – Backups

These notes describe backups of the MySQL database used by the ArchivesSpace program.

...

Place the MySQL database dump in /backup

...

mysql-backup.sh

...

Dumps the MySQL database to /backup/aspace-backup.sql.
This script is run from the ubuntu user's crontab: 30 17 * * 1-5 /home/ubuntu/archivesspace_scripts/mysql-backup.sh
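The script body is not reproduced in these notes; a hypothetical reconstruction of its core command (the database name and credential handling are assumptions):

```shell
#!/bin/bash
# Hypothetical reconstruction of mysql-backup.sh. The database name
# (archivesspace) and credential handling (e.g. ~/.my.cnf) are assumptions.
mysqldump --single-transaction archivesspace > /backup/aspace-backup.sql
```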

...

Sync /backup to an s3 bucket

...

s3-backup.sh

...

Runs an aws s3 sync command to place the contents of /backup at s3://chf-hydra-backup/Aspace/ (visible in the AWS console under chf-hydra-backup/Aspace, region us-west-2).

This script is run from the ubuntu user's crontab: 45 17 * * 1-5 /home/ubuntu/archivesspace_scripts/s3-backup.sh
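As with the dump script, the body of s3-backup.sh is not recorded here; a hypothetical reconstruction (any flags beyond the sync itself are assumptions):

```shell
#!/bin/bash
# Hypothetical reconstruction of s3-backup.sh; paths come from these
# notes, but the exact flags used on the old server are assumptions.
aws s3 sync /backup s3://chf-hydra-backup/Aspace/
```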

See Backups and Recovery (Historical notes) for a discussion of how the chf-hydra-backup s3 bucket is then copied to Dubnium and in-house storage.

Documentation

https://archivesspace.atlassian.net/wiki/home contains comprehensive documentation.

...