
General outline

  1. Spin up machine
  2. Create 2 new drives following the instructions in Creating Drives under Build Box Changes
    a. After mounting the drives, change the owner of the fedora-data mount to tomcat7 (sudo chown tomcat7:tomcat7 folder) and of the migration tmp directory to hydep:deploy (sudo chown hydep:deploy folder)
  3. Deploy Sufia
  4. Ensure apache is off
  5. Activate maintenance mode on old server
  6. Move over minter
  7. Fedora Export - see below
  8. Migrate postgres
  9. Fedora Import - see below
  10. Run (currently nonexistent) verification job
  11. Migrate dump.rdb
  12. Reindex solr (see the sketch below)
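
Steps 6, 11, and 12 aren't spelled out elsewhere on this page, so here is a minimal sketch of one way to do them. The redis paths and service name are stock Ubuntu assumptions, and the staging/new_box_ip hostnames follow the convention used further down; adjust to match the actual boxes.

# 6. Move over minter: copy the statefile from the old server (default location per "How to check the statefile" below)
scp staging:/var/sufia/minter-state new_box_ip:/var/sufia/minter-state
# 11. Migrate dump.rdb: stop redis on both boxes first so the dump is consistent
sudo service redis-server stop
scp staging:/var/lib/redis/dump.rdb new_box_ip:/var/lib/redis/dump.rdb
sudo service redis-server start
# 12. Reindex solr from the new box (one common way, via ActiveFedora)
cd /opt/sufia-project/current
RAILS_ENV=production bin/rails runner 'ActiveFedora::Base.reindex_everything'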

Build Box Changes

The current Ansible build scripts have problems with a few of the changes we need. They can be done in this order, or in a different order if needed.

  1. Creating drives
  2. Migration from Postgresql 9.3 to 9.5
  3. Moving to Fedora 4.7

Creating Drives

  1. In the AWS visual interface, go to EC2
  2. Go to Volumes
  3. Select Create Volumes
  4. Make sure the volume is
    1. General Purpose SSD
    2. 150 GB
    3. Availability Zone b
  5. Create 2 of these
  6. Once each one is made, select it and under Actions choose Attach Volume. Type the name or id of the machine and attach the volume.
  7. ssh into the box
  8. Run sudo fdisk -l
    1. You should see /dev/xvdg and /dev/xvdh
    2. If not, check if the volumes are attached
  9. Create the filesystem for each disk
    1. sudo mkfs.xfs /dev/xvdg
    2. sudo mkfs.xfs /dev/xvdh
  10. Mount each disk
    1. sudo mount /dev/xvdg /opt/fedora-data
    2. sudo mount /dev/xvdh /opt/sufia-project/releases/XXXX/tmp
  11. Edit the fstab file to retain these mounts
    1. sudo vi /etc/fstab
      1. /dev/xvdg /opt/fedora-data xfs defaults 0 0

      2. /dev/xvdh /opt/sufia-project/releases/XXXX/tmp xfs defaults 0 0
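
After saving the fstab entries it's worth confirming they actually parse and point at the right places; a quick check (XXXX is the same release placeholder used above):

# mount -a re-reads /etc/fstab; errors here mean a bad line
sudo mount -a
# confirm both volumes are mounted where expected
df -h /opt/fedora-data /opt/sufia-project/releases/XXXX/tmp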

Postgres

  1. Remove the old version of postgres:
    1. sudo apt-get purge postgresql*
  2. Create the file /etc/apt/sources.list.d/pgdg.list
  3. Add the line deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main to that file
  4. Add the repo key with this command
    1. wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
  5. sudo apt-get update
  6. sudo apt-get install postgresql-9.5
  7. Log in as postgres and enter psql
    1. sudo su postgres
    2. psql
  8. In Postgres create the chf_hydra and fcrepo databases
    1. CREATE DATABASE chf_hydra;
    2. CREATE DATABASE fcrepo;
  9. In Postgres create the needed users
    1. CREATE USER chf_pg_hydra WITH PASSWORD 'SEE ANSIBLE GROUPVARS/ALL';
    2. CREATE USER trilby WITH PASSWORD 'porkpie2';
  10. Grant each user access to its database
    1. GRANT Create,Connect,Temporary ON DATABASE chf_hydra TO chf_pg_hydra;
    2. GRANT All Privileges ON DATABASE fcrepo TO trilby;
  11. Set the user password for postgres
    1. \password postgres
    2. Enter password from groupvars/all
  12. sudo nano /etc/postgresql/9.5/main/pg_hba.conf
  13. Change the entries under "Database administrative login by Unix domain socket"
    1. Change the authentication method from peer to md5 for the following (see the example lines after this list)
      1. local all postgres
      2. local all all
      3. host all all
  14. Restart postgres, try to log in with
    1. psql -U postgres
  15. If it lets you use the password, it works!
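
For reference, a minimal sketch of what the edited pg_hba.conf entries might look like. This assumes the stock Ubuntu layout; the exact column spacing and any extra lines in your file may differ.

# /etc/postgresql/9.5/main/pg_hba.conf (excerpt)
# Database administrative login by Unix domain socket
local   all             postgres                                md5
# "local" is for Unix domain socket connections only
local   all             all                                     md5
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5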

Fedora

  • Stop Tomcat
    • sudo service tomcat7 stop
  • Replace the current /etc/default/tomcat7 with

# A backup of the original file with additional options is at /etc/default/tomcat7.bak
TOMCAT7_USER=tomcat7
TOMCAT7_GROUP=tomcat7
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
#Postgres
JAVA_OPTS="-Dfcrepo.home=/opt/fedora-data -Dfcrepo.modeshape.configuration=classpath:/config/jdbc-postgresql/repository.json -Dfcrepo.postgresql.username=trilby -Dfcrepo.postgresql.password=porkpie2 -Dfcrepo.postgresql.host=localhost -Dfcrepo.postgresql.port=5432 -Djava.awt.headless=true -XX:+UseG1GC -XX:+UseCompressedOops -XX:-UseLargePagesIndividualAllocation -XX:MaxPermSize=128M -Xms512m -Xmx4096m -Djava.util.logging.config.file=/etc/tomcat7/logging.properties -server"

  • Start Tomcat
    • sudo service tomcat7 start
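
Once Tomcat is back up, a quick way to confirm Fedora started against Postgres is to hit the REST endpoint and watch the log. The /fedora context path and log location here are assumptions; use whatever your deployment actually exposes.

# expect an HTTP 200 and a listing of the repository root
curl -i http://localhost:8080/fedora/rest
# watch for modeshape/postgres errors during startup
sudo tail -f /var/log/tomcat7/catalina.out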

Fedora export

On the sufia 6 instance:

  • Run audit script
RAILS_ENV=production bundle exec sufia_survey -v
  • Run json export
$ RAILS_ENV=production bundle exec sufia_export --models GenericFile=Chf::Export::GenericFileConverter,Collection=Chf::Export::CollectionConverter
  • Open up the fedora port to the other server so it can grab the binaries (see the sketch at the end of this section)
  • Change all the 127.0.0.1 URIs to reflect the actual host, e.g.
$ find tmp/export -type f -name "*.json" -print0 | xargs -0 sed -i "s/127\.0\.0\.1/staging.hydra.chemheritage.org/g"
  • Move the resulting directory full of exported data from tmp/export to the new server's tmp/import (or wherever desired; this can be provided to the import script)
$ cd tmp; tar -czf json_export_201611141510.tgz export
  • Then from your own machine:
$ scp staging:/opt/sufia-project/current/tmp/json_export_201611141510.tgz new_box_ip:/opt/sufia-project/current/tmp/.
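
For the "open up the fedora port" step above, if both boxes are in EC2 security groups one way to do it is with the AWS CLI. The group id, port, and CIDR below are placeholders rather than values from our environment.

# allow the new box to reach Tomcat/Fedora (assumes Tomcat is listening on 8080)
aws ec2 authorize-security-group-ingress \
  --group-id sg-XXXXXXXX \
  --protocol tcp \
  --port 8080 \
  --cidr NEW.BOX.IP.ADDR/32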

Fedora import

On sufia 7 instance:

  • Mount the /dev/xvdh drive on the tmp directory in sufia (/opt/sufia-project/releases/XXXXX/tmp)
  • Change the owner of the tmp directory
    • sudo chown hydep:deploy /opt/sufia-project/releases/XXXXX/tmp
  • unpack the exported json files
cd /opt/sufia-project/current/tmp/
tar -xzf json_export_201611141510.tgz
mv export import
  • configure sufia6_user and sufia6_password in config/application
  • run the import
$ RAILS_ENV=production bundle exec sufia_import -d tmp/import --json_mapping Chf::Import::GenericFileTranslator=generic_file_
  • You can use the little bash script I wrote to create batches of files if you want. It's at /opt/sufia-project/batch_imports.sh
$ RAILS_ENV=production bundle exec sufia_import -d /opt/sufia-project/import/gf_batch_0 --json_mapping Chf::Import::GenericFileTranslator=generic_file_
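
If you want to run every batch in sequence, a loop like this works; it assumes the batch directories follow the gf_batch_N naming shown above and just repeats the documented import command once per directory.

cd /opt/sufia-project/current
# run the import once per batch directory
for d in /opt/sufia-project/import/gf_batch_*; do
  RAILS_ENV=production bundle exec sufia_import -d "$d" \
    --json_mapping Chf::Import::GenericFileTranslator=generic_file_
done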

Postgres export/Import

On Staging

  • Run the following to generate the export.
    pg_dump -U postgres chf_hydra -Fp > chf_hydra_dump.sql

On Migration

  • From your machine run 
    scp -3 -i /path/to/test.pem ubuntu@staging:~/chf_hydra_dump.sql ubuntu@new_box_ip:~


  • Run this command to get into postgres (password for user is stored elsewhere)
    psql -U postgres
  • Inside Postgres create the database and grant the required permissions
    CREATE DATABASE chf_hydra;
    GRANT Create,Connect,Temporary ON DATABASE chf_hydra TO chf_pg_hydra;
  • Then enter \q to quit
  • Finally import the data you copied over with scp
    psql -U postgres chf_hydra < chf_hydra_dump.sql
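
A quick sanity check after the import; the users table here is just an assumption about the Rails schema, so compare whatever table counts make sense against the old server.

# list the restored tables to confirm the dump loaded
psql -U postgres chf_hydra -c '\dt'
# spot-check a row count against staging
psql -U postgres chf_hydra -c 'SELECT count(*) FROM users;'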

How to check the statefile

There are 3 parts to the state: sequence, counters, and seed. You need the correct combination of all three in order to have the right state. However, if you know you have two valid state files with the same origin you can do a rough comparison of their equivalence just by checking the sequence. To check sequence in our 7.2-based application:

$ cd /opt/sufia-project/current
$ bin/rails c production
> sf = ActiveFedora::Noid::Minter::File.new
> state = sf.read
> state[:seq]


To check in our 6.7-based application:

$ cd /opt/sufia-project/current
$ bin/rails c production
> sm = ActiveFedora::Noid::SynchronizedMinter.new
> state = {}
> ::File.open(sm.statefile, ::File::RDWR|::File::CREAT, 0644) do |f|
>   f.flock(::File::LOCK_EX)
>   state = sm.send(:state_for, f)
> end
> state[:seq]


To check sequence on a file that's not in the default location, pass the template and the filename when you create the object with 'new', e.g:

> sm = ActiveFedora::Noid::SynchronizedMinter.new(".reeddeeddk", "/var/sufia/minter-state")
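
If you just want the number without opening an interactive console, the same check can be driven from the shell with rails runner; this is only a convenience wrapper around the 7.2 snippet above.

cd /opt/sufia-project/current
RAILS_ENV=production bin/rails runner 'puts ActiveFedora::Noid::Minter::File.new.read[:seq]'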


Misc.

Postgres

You can get a list of all tables and fields with the command:

SELECT * FROM information_schema.columns WHERE table_schema = 'public';


Cleanup

To clean up a server for a new migration test, take the following steps.

  1. Stop Tomcat and Solr
  2. Remove all the folders in /opt/fedora-data.
  3. Remove all the files in /var/solr/data/collection1/data/index/ and in /var/sufia/derivatives
  4. Remove all the upload files in the tmp directory of the version of sufia used.
  5. Enter into postgres (psql -U postgres)
  6. Drop the fcrepo database (DROP DATABASE fcrepo;)
  7. Build a new fcrepo database (CREATE DATABASE fcrepo;)
  8. Grant the fcrepo user (currently trilby until we get a better user) all privileges on fcrepo. (GRANT ALL PRIVILEGES ON DATABASE fcrepo TO trilby;)
  9. Restart tomcat and solr
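
The same cleanup as one script, for convenience. The solr service name and the sufia tmp/uploads path are assumptions about this box, so adjust them to match the deployment being tested.

#!/usr/bin/env bash
# reset the box for another migration test
set -euo pipefail

sudo service tomcat7 stop
sudo service solr stop

# wipe fedora data, the solr index, derivatives, and uploaded tmp files
sudo rm -rf /opt/fedora-data/*
sudo rm -rf /var/solr/data/collection1/data/index/*
sudo rm -rf /var/sufia/derivatives/*
sudo rm -rf /opt/sufia-project/current/tmp/uploads/*   # placeholder; match the sufia version in use

# recreate the fcrepo database and re-grant the fedora user
sudo -u postgres psql <<'SQL'
DROP DATABASE IF EXISTS fcrepo;
CREATE DATABASE fcrepo;
GRANT ALL PRIVILEGES ON DATABASE fcrepo TO trilby;
SQL

sudo service solr start
sudo service tomcat7 start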