ArchivesSpace is “an open source archives information management application for managing and providing web access to archives, manuscripts and digital objects”. In August 2022 we switched from hosting our own ASpace server on EC2 to a third-party-hosted instance at LibraryHost.
Background
We store digital descriptions of our archival collections in the following places:
Location | Type of technology | Number of collections described | Source | Example | Who can see it? |
---|---|---|---|---|---|
P drive | Word documents | Roughly 270, dates 1997 – present | This is the initial description we create upon accessioning a collection. | | Institute staff |
ArchivesSpace public user interface (PUI) | MySQL-backed website | Same as below | | | Public |
ArchivesSpace admin site | Same as above | Roughly 120 as of 2022 | Entered manually based on the P drive Word files. | | Only logged-in ArchivesSpace users |
Public EAD bucket | EAD (XML format) | Roughly 120 as of 2022 | Generated weekly from the ArchivesSpace database | | Public |
https://guides.othmerlibrary.sciencehistory.org/friendly.php?s=CHFArchives | LibGuide | Most collections, categorized by subject | Created and maintained by Ashley Augustyniak | | Technically public, but does not appear to be linked from anywhere |
WorldCat | | | Librarians manually update OCLC master records based on the metadata in ArchivesSpace. This is provided in the form of a MARCXML file by Kent and sent to Caroline. | | Public |
Workflow
Finding aids are first written up as Word documents at
Shared/P/Othmer Library/Archives/Collections Inventories/Archival Finding Aids and Box Lists.
Kent then enters the data into ArchivesSpace; the finding aids are revised in the process.
Once they are in ArchivesSpace:
Our EAD export app on Heroku (see EAD export app) retrieves public EAD files from ArchivesSpace's API and posts them to the Science History Institute EAD bucket (a sketch of this step appears at the end of this section).
Kent also exports them to a PDF, which he then sends to Caroline. These are entered into the OPAC. (see e.g. https://othmerlib.sciencehistory.org/articles/1065801.15134/1.PDF )
Note: the PDF has to be manually updated in the OPAC every time the metadata in ArchivesSpace changes.
The OPAC also points to a PUI URL at https://archives.sciencehistory.org/.
Certain works in the Digital Collections also point to the PUI. Example: https://digital.sciencehistory.org/works/81jkowj.
Finally, the exported EAD files are also ingested by University of Penn Libraries Special Collections and the Center for the History of Science, Technology, and Medicine (CHSTM).
Penn, in turn, processes these EAD files on a nightly basis and adds them to the Philadelphia Area Archives search portal, a service funded by PACSCL.
Likewise, CHSTM ingests these EADs and makes them searchable at its search portal.
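As a rough sketch of what the EAD retrieval-and-upload step involves (this is not the app's actual code; the hostname, login name, repository and resource ids, and destination key are placeholders, and the bucket name is assumed from the ead.sciencehistory.org address):

# Log in to the ArchivesSpace backend API and capture a session token.
SESSION=$(curl -s -F password="$ASPACE_PASSWORD" "https://aspace.example.org/api/users/exporter/login" | jq -r .session)
# Fetch the published EAD serialization of one resource record.
curl -s -H "X-ArchivesSpace-Session: $SESSION" "https://aspace.example.org/api/repositories/2/resource_descriptions/123.xml?include_unpublished=false" -o 123.xml
# Copy the file to the public EAD bucket.
aws s3 cp 123.xml s3://ead.sciencehistory.org/123.xml

The export app repeats this for every published resource; see the EAD export app page for the authoritative details.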
OBSOLETE – Technical details about the server
This section should be removed later in 2022.
ArchivesSpace lives on an AWS EC2 server, ArchivesSpace-prod, at https://50.16.132.240/ (also reachable at https://archives.sciencehistory.org).
The current production version of ASpace is 3.0.1.
Terminal access: ssh -i /path/to/production/pem_file.pem ubuntu@50.16.132.240
The ubuntu user owns all the admin scripts.
The relevant Ansible role is /roles/archivesspace/ in the ansible-inventory codebase.
SSL is based on the following: http://www.rubydoc.info/github/archivesspace/archivesspace
The executables are at /opt/archivesspace/
The configuration file is /opt/archivesspace/config/config.rb
Logs are at: logs/archivesspace.out
Apache logs are at /var/log/apache2/
Configuration for the Apache site is at /etc/apache2/sites-available/000-default.conf.
OBSOLETE – Startup
To start ArchivesSpace, run the following as the ubuntu user:
/opt/archivesspace/archivesspace.sh start
There may be a short delay as the server re-indexes data.
OBSOLETE – Restarting ArchivesSpace to fix a Jetty memory leak
We restart the ArchivesSpace program (not the server) using a cronjob that runs /opt/archivesspace/archivesspace.sh restart
every night at 2 am. This prevents a chronic memory leak from eating up all the CPU credits for the machine.
When the server is restarted, Jetty creates a set of temporary files in /tmp
that look like this:
hsperfdata_ubuntu
jetty-0.0.0.0-8089-backend.war-_-any-3200460420275417425
jetty-0.0.0.0-8090-solr.war--any-_1669707332158985985
jetty-0.0.0.0-8091-indexer.war-_aspace-indexer-any-3026688914663148716
jetty-0.0.0.0-8080-frontend.war--any-3028692540497613460
jetty-0.0.0.0-8081-public.war--any-268053434795494538
jetty-0.0.0.0-8082-oai.war--any-_243630232179303838
Only the most recent set are used by Jetty, but the old ones accumulate rapidly if the server is restarted nightly.
A cron job removes obsolete ones nightly.
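A minimal sketch of the two cron entries (the 2 am restart time is noted above; the cleanup schedule and the exact cleanup command are assumptions, not the real scripts):

# Restart ArchivesSpace nightly at 2 am to work around the memory leak.
0 2 * * * /opt/archivesspace/archivesspace.sh restart
# Remove old Jetty temp directories (illustrative find invocation only).
30 2 * * * find /tmp -maxdepth 1 -name 'jetty-*' -mtime +1 -exec rm -rf {} +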
Export
The ArchivesSpace EADs are harvested by:
Institution | Liaison | Contact |
---|---|---|
Center for the History of Science, Technology, and Medicine (CHSTM) | Richard Shrake | |
University of Penn Libraries Special Collections | Holly Mengel | |
Both institutions harvest the EADs at http://ead.sciencehistory.org/.
OBSOLETE – Backups
Backups consist of dumps of the MySQL database used by ArchivesSpace. Cron-driven scripts dump the database to a local backup location and sync the dump to the chf-hydra-backup S3 bucket.
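A minimal sketch of those two steps (the database name, dump path, and bucket prefix are assumptions; credentials are assumed to come from a MySQL option file):

# Dump the ArchivesSpace MySQL database and compress it.
mysqldump --single-transaction archivesspace | gzip > /backups/archivesspace.sql.gz
# Sync the dump directory to the backup bucket.
aws s3 sync /backups/ s3://chf-hydra-backup/archivesspace/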
See Backups and Recovery (Historical notes) for a discussion of how the chf-hydra-backup S3 bucket is then copied to Dubnium and in-house storage.
Documentation
https://archivesspace.atlassian.net/wiki/home contains comprehensive documentation.
If you have a sciencehistory.org address, you can get access to it by filling out a form.