ArchivesSpace is “an open source archives information management application for managing and providing web access to archives, manuscripts and digital objects”. In August 2022 we switched from hosting our own ASpace server on EC2 to a third-party-hosted instance at LibraryHost.
Background
We store digital descriptions of our archival collections in the following places:
| Location | Type of technology | Number of collections described | Source | Example | Who can see it? |
|---|---|---|---|---|---|
| P drive | Word documents | Roughly 270, dates 1997 – present | This is the initial description we create upon accessioning a collection. | | Institute staff |
| ArchivesSpace public user interface (PUI) | MySQL-backed website | Same as below | | | Public |
| ArchivesSpace admin site | Same as above | Roughly 120 as of 2022 | Entered manually based on the P drive Word files. | | Only logged-in ArchivesSpace users |
| S3 EAD bucket | EAD (XML format) | Roughly 120 as of 2022 | Generated nightly from the ArchivesSpace database | Public | |
| OPAC | | 460; see complete list | Exported manually as PDF from the ArchivesSpace site, then attached to the OPAC record for the collection | https://othmerlib.sciencehistory.org/articles/1065801.15134/1.PDF | Public |
| https://guides.othmerlibrary.sciencehistory.org/friendly.php?s=CHFArchives | LibGuide | Most collections, categorized by subject. | Created and maintained by Ashley Augustyniak | | Technically public, but does not appear to be linked from anywhere. |
Workflow
Finding aids are first written up as Word documents at
Shared/P/Othmer Library/Archives/Collections Inventories/Archival Finding Aids and Box Lists
Kent enters the data into ArchivesSpace; the finding aids are revised in the process.
Once they are in ArchivesSpace:
They are automatically exported by https://github.com/sciencehistory/export_archivesspace_xml to EAD files at http://ead.sciencehistory.org/
Kent also exports them to a PDF, which he then sends to Caroline. These are entered into the OPAC. (see e.g. https://othmerlib.sciencehistory.org/articles/1065801.15134/1.PDF )
Note: the PDF has to be manually updated in the OPAC every time the metadata in ArchivesSpace changes.
The OPAC also points to a PUI URL at https://archives.sciencehistory.org/ .
Certain works in the Digital Collections also point to the PUI. Example: https://digital.sciencehistory.org/works/81jkowj.
Finally, the exported EAD files are also ingested by University of Penn Libraries Special Collections and the Center for the History of Science, Technology, and Medicine (CHSTM).
Penn, in turn, processes these EAD files on a nightly basis and adds them to the Philadelphia Area Archives search portal, a service funded by PACSCL.
Likewise, CHSTM ingests these EADs and makes them searchable at its search portal.
OBSOLETE – Technical details about the server
This section should be removed later in 2022.
ArchivesSpace lives on an AWS EC2 server, ArchivesSpace-prod, at https://50.16.132.240/ (also reachable at https://archives.sciencehistory.org).
The current production version of ASpace is 3.0.1.
Terminal access: ssh -i /path/to/production/pem_file.pem ubuntu@50.16.132.240
The `ubuntu` user owns all the admin scripts.
The relevant Ansible role is `/roles/archivesspace/` in the `ansible-inventory` codebase.
SSL is based on the following: http://www.rubydoc.info/github/archivesspace/archivesspace
The executables are at `/opt/archivesspace/`.
The configuration file is `/opt/archivesspace/config/config.rb`.
Logs are at `logs/archivesspace.out`.
Apache logs are at `/var/log/apache2/`.
Configuration for the Apache site is at `/etc/apache2/sites-available/000-default.conf`.
OBSOLETE – Startup
To start ArchivesSpace (as user `ubuntu`):
/opt/archivesspace/archivesspace.sh start
There may be a short delay as the server re-indexes data.
OBSOLETE – Restarting the server to fix Tomcat memory leak
We restart the ArchivesSpace program (not the server) using a cron job that runs `/opt/archivesspace/archivesspace.sh restart` every night at 2 am. This prevents a chronic memory leak from eating up all the CPU credits for the machine.
When the server is restarted, Jetty creates a set of temporary files in `/tmp` that look like this:
hsperfdata_ubuntu
jetty-0.0.0.0-8089-backend.war-_-any-3200460420275417425
jetty-0.0.0.0-8090-solr.war--any-_1669707332158985985
jetty-0.0.0.0-8091-indexer.war-_aspace-indexer-any-3026688914663148716
jetty-0.0.0.0-8080-frontend.war--any-3028692540497613460
jetty-0.0.0.0-8081-public.war--any-268053434795494538
jetty-0.0.0.0-8082-oai.war--any-_243630232179303838
Only the most recent set are used by Jetty, but the old ones accumulate rapidly if the server is restarted nightly.
A cron job removes obsolete ones nightly.
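Taken together, the two nightly jobs described above might look like the following in the `ubuntu` user's crontab. This is a hypothetical sketch: the 2 am restart time comes from this page, but the cleanup time and the exact `find` command for pruning old Jetty temp files are assumptions.

```shell
# Hypothetical crontab sketch (edit with `crontab -e` as user ubuntu).

# Restart ArchivesSpace nightly at 2 am to work around the memory leak.
0 2 * * * /opt/archivesspace/archivesspace.sh restart

# Assumed cleanup job: remove Jetty temp files in /tmp older than one day.
30 2 * * * find /tmp -maxdepth 1 -name 'jetty-*' -mtime +1 -exec rm -rf {} +
```

The cleanup runs after the restart so that the freshly created set of Jetty files is never the one being pruned.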
Export
The ArchivesSpace EADs are harvested by:
| Institution | Liaison | Contact |
|---|---|---|
| Center for the History of Science, Technology, and Medicine (CHSTM) | Richard Shrake | |
| University of Penn Libraries Special Collections | Holly Mengel | |
Both institutions harvest the EADs by automatically scraping http://ead.sciencehistory.org/.
OBSOLETE – Building the server
The server is not yet fully Ansible-ized.
What is missing from the ansible build:
The build doesn’t copy the scripts in /home/ubuntu over correctly. Passwords for the scripts also need to be provided.
All these directories under /var/www/html/ are also missing: css; ead; font-awesome-4.7.0; fonts; img; js.
The ubuntu user needs to be added to the www-data group
SSH keys are not loaded into /etc/ssl/private/
The archivesspace server is not actually started (sudo systemctl start archivesspace).
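Until the Ansible build covers these steps, they can be finished by hand after a run. A hedged sketch (the directory names and the `systemctl` command come from the list above; `usermod` and `mkdir` invocations are standard Ubuntu tooling, and the scripts, passwords, and SSH keys still have to be copied from an existing server):

```shell
# Manual follow-up after an Ansible build of the ArchivesSpace server (sketch).

# Add the ubuntu user to the www-data group (takes effect on next login).
sudo usermod -aG www-data ubuntu

# Create the missing static-asset directories under /var/www/html/
# (their contents must still be copied over from an existing server).
sudo mkdir -p /var/www/html/{css,ead,font-awesome-4.7.0,fonts,img,js}

# Start the ArchivesSpace service.
sudo systemctl start archivesspace
```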
OBSOLETE – Backups
These consist of making backups of the MySQL database used by the ArchivesSpace program:

| Script | What it does |
|---|---|
| … | Places the MySQL database in … |
| … | Dumps the MySQL database to … |
| … | Syncs … |
| … | Runs an … . This script is run as a cron job by user … |

See Backups and Recovery (Historical notes) for a discussion of how the `chf-hydra-backup` S3 bucket is then copied to Dubnium and in-house storage.
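A minimal sketch of what the dump-and-upload step presumably looks like. The bucket path `chf-hydra-backup/Aspace/aspace-backup.sql` and the database name `archivesspace` are taken from the restore instructions on this page; the database user, credential handling, and exact flags are assumptions.

```shell
# Hypothetical sketch of the nightly MySQL dump-and-upload step.
# The user and password handling here are assumptions.
mysqldump --user=the_user --password="$ASPACE_DB_PASSWORD" \
  archivesspace > /tmp/aspace-backup.sql

# Copy the dump to the backup bucket (path from the restore instructions).
aws s3 cp /tmp/aspace-backup.sql s3://chf-hydra-backup/Aspace/aspace-backup.sql
```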
OBSOLETE – Restoring from backup
You can get a recent backup of the database at https://s3.console.aws.amazon.com/s3/object/chf-hydra-backup/Aspace/aspace-backup.sql
Note that the create_aspace.yml
playbook creates a minimal, basically empty aspace
database with no actual archival data in it.
To restore from such a backup onto a freshly created ArchivesSpace server:

1. Copy your backup database to an arbitrary location on the new server.
2. `ssh` in to the new server.
3. Log into the empty `archivesspace` database:
   `mysql archivesspace --password='the_archivesspace_database_password' --user=the_user`
4. Once at the MySQL command prompt, load the database:
   `mysql> \. /path/to/your/aspace-backup.sql`
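Equivalently, the dump can be loaded non-interactively by redirecting it into the `mysql` client, which avoids the interactive prompt entirely. This sketch reuses the same placeholder user, password, and path as the interactive instructions above:

```shell
# Non-interactive equivalent of the interactive restore above
# (placeholder credentials and path, as in the steps on this page).
mysql archivesspace --user=the_user \
  --password='the_archivesspace_database_password' \
  < /path/to/your/aspace-backup.sql
```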
Documentation
https://archivesspace.atlassian.net/wiki/home contains comprehensive documentation.
If you have a `sciencehistory.org` address, you can get access to it by filling out a form.