General outline
- Spin up machine
  - If mounting drives for fedora-data or for the tmp directory for migration, make sure to change the owner to tomcat7 (sudo chown tomcat7:tomcat7 folder)
- Deploy Sufia
- Ensure apache is off
- Activate maintenance mode on old server
- Move over minter
- Fedora Export - see below
- Migrate postgres
- Fedora Import - see below
- Run the (currently nonexistent) verification job
- Migrate dump.rdb
- Reindex solr
Fedora export
On the sufia 6 instance:
- Run audit script
$ RAILS_ENV=production bundle exec sufia_survey -v
- Run json export
$ RAILS_ENV=production bundle exec sufia_export --models GenericFile=Chf::Export::GenericFileConverter,Collection=Chf::Export::CollectionConverter
- Open up the fedora port to the other server so it can grab the binaries
- Change all the 127.0.0.1 URIs to reflect the actual host, e.g.:
$ find tmp/export -type f -name "*.json" -print0 | xargs -0 sed -i "s/127\.0\.0\.1/staging.hydra.chemheritage.org/g"
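If you want to sanity-check the substitution before running it over the real export, a throwaway file works. This is a scratch rehearsal; the path and JSON content below are made up, only the find/sed pipeline matches the real command:

```shell
# Scratch demo of the URI rewrite (file path and JSON content are hypothetical)
mkdir -p /tmp/export_check
echo '{"uri":"http://127.0.0.1:8983/fedora/rest/prod/ab/cd"}' > /tmp/export_check/sample.json
# Same pipeline as the real migration command, pointed at the scratch directory
find /tmp/export_check -type f -name "*.json" -print0 \
  | xargs -0 sed -i "s/127\.0\.0\.1/staging.hydra.chemheritage.org/g"
# Confirm no loopback addresses survive
grep -o "staging.hydra.chemheritage.org" /tmp/export_check/sample.json
```

Note that `sed -i` with no suffix argument is GNU sed syntax; on BSD/macOS sed you would need `sed -i ''`.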
- Move the resulting directory full of exported data from tmp/export to the new server's tmp/import (or wherever desired; this can be provided to the import script)
$ cd tmp; tar -czf json_export_201611141510.tgz export
- Then from your own machine:
$ scp staging:/opt/sufia-project/current/tmp/json_export_201611141510.tgz new_box_ip:/opt/sufia-project/current/tmp/.
Fedora import
On the sufia 7 instance:
- Unpack the exported JSON files
$ cd /opt/sufia-project/current/tmp
$ tar -xzf json_export_201611141510.tgz
$ mv export import
- Configure sufia6_user and sufia6_password in config/application
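The pack/unpack/rename round trip can be rehearsed locally to confirm no files are lost in transit. Everything below is a scratch placeholder (directory names, file names, and the demo tarball name are invented); only the tar and mv steps mirror the real procedure:

```shell
# Rehearse the export -> tarball -> import round trip in a scratch directory
rm -rf /tmp/tar_demo && mkdir -p /tmp/tar_demo/export
echo '{}' > /tmp/tar_demo/export/generic_file_1.json
echo '{}' > /tmp/tar_demo/export/collection_1.json
cd /tmp/tar_demo
tar -czf json_export_demo.tgz export
rm -rf export                  # simulate arriving on the new box with only the tarball
tar -xzf json_export_demo.tgz  # recreates the export/ directory
mv export import               # the import script expects tmp/import
ls /tmp/tar_demo/import | wc -l   # → 2
```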
- run the import
$ RAILS_ENV=production bundle exec sufia_import -d tmp/import --json_mapping Chf::Import::GenericFileTranslator=generic_file_
- You can use the little bash script I wrote to create batches of files if you want. It's at /opt/sufia-project/batch_imports.sh
$ RAILS_ENV=production bundle exec sufia_import -d /opt/sufia-project/import/gf_batch_0 --json_mapping Chf::Import::GenericFileTranslator=generic_file_
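The batch script itself isn't reproduced here, but splitting into numbered gf_batch_N directories might look roughly like this. This is a hypothetical sketch: the batch size, scratch paths, and loop logic are illustrative, and the real /opt/sufia-project/batch_imports.sh may differ.

```shell
# Hypothetical sketch of batching exported generic_file_*.json files into
# gf_batch_N directories; the real batch_imports.sh may work differently.
src=/tmp/import_demo
rm -rf "$src" && mkdir -p "$src"
for i in 1 2 3 4 5; do echo '{}' > "$src/generic_file_$i.json"; done

batch_size=2
i=0; n=0
for f in "$src"/generic_file_*.json; do
  dir="$src/gf_batch_$n"
  mkdir -p "$dir"
  mv "$f" "$dir/"
  i=$((i+1))
  # Start a new batch directory once the current one is full
  if [ $((i % batch_size)) -eq 0 ]; then n=$((n+1)); fi
done
ls -d "$src"/gf_batch_*   # three batch directories: gf_batch_0, gf_batch_1, gf_batch_2
```

Each resulting gf_batch_N directory can then be passed to sufia_import with -d, one batch at a time.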
Postgres export/import
On Staging
- Run the following to generate the export.
$ pg_dump -U postgres chf_hydra -Fp > chf_hydra_dump.sql
On Migration
- From your machine, run:
$ scp -3 -i /path/to/test.pem ubuntu@staging:~/chf_hydra_dump.sql ubuntu@new_box_ip:~
- Run this command to get into postgres (the password for the user is stored elsewhere)
$ psql -U postgres
- Inside postgres, create the database and grant the required permissions
CREATE DATABASE chf_hydra;
GRANT CREATE, CONNECT, TEMPORARY ON DATABASE chf_hydra TO chf_pg_hydra;
- Then enter \q to quit
- Finally, import the data you copied over with scp
$ psql -U postgres chf_hydra < chf_hydra_dump.sql
How to check the statefile
There are three parts to the state: sequence, counters, and seed. You need the correct combination of all three to have the right state. However, if you know you have two valid state files with the same origin, you can do a rough comparison of their equivalence just by checking the sequence. To check the sequence in our 7.2-based application:
$ cd /opt/sufia-project/current
$ bin/rails c production
> sf = ActiveFedora::Noid::Minter::File.new
> state = sf.read
> state[:seq]
To check in our 6.7-based application:
$ cd /opt/sufia-project/current
$ bin/rails c production
> sm = ActiveFedora::Noid::SynchronizedMinter.new
> state = {}
> ::File.open(sm.statefile, ::File::RDWR|::File::CREAT, 0644) do |f|
>   f.flock(::File::LOCK_EX)
>   state = sm.send(:state_for, f)
> end
> state[:seq]
To check the sequence on a file that's not in the default location, pass the template and the filename when you create the object with 'new', e.g.:
> sm = ActiveFedora::Noid::SynchronizedMinter.new(".reeddeeddk", "/var/sufia/minter-state")
Misc.
Postgres
You can get a list of all tables and fields with the query:
SELECT * FROM information_schema.columns WHERE table_schema = 'public';