General outline

  1. Move over minter
  2. Fedora Export - see below
  3. migrate postgres
  4. Fedora Import - see below
  5. run (currently nonexistent) verification job
  6. migrate dump.rdb
  7. Reindex solr

Detailed steps

  1. Spin up machine

    1. Run ansible scripts

  2. Create Drives

    1. In the AWS visual interface, go to EC2

    2. Go to Volumes

    3. Select Create Volumes

    4. Make two volumes with the following features:

      1. General Purpose SSD

      2. 150 GB

      3. Availability Zone b

    5. Once each one is made, select it and under Actions choose Attach Volume. Type the name or ID of the machine and attach the volume. (Steps 1-5 can also be scripted with the AWS CLI; see the sketch after step 11.)

    6. ssh into the box

    7. sudo fdisk -l

      1. You should see /dev/xvdg and /dev/xvdh

      2. If not, check if the volumes are attached

    8. Create the filesystem for each disk

      1. sudo mkfs.xfs /dev/xvdg

      2. sudo mkfs.xfs /dev/xvdh

    9. Mount each disk

      1. sudo mount /dev/xvdg /opt/fedora-data

      2. sudo mount /dev/xvdh /opt/sufia-project/releases/XXXX/tmp

    10. Edit the fstab file to retain these mounts

      1. sudo vi /etc/fstab

      2. /dev/xvdg /opt/fedora-data xfs defaults 0 0

      3. /dev/xvdh /opt/sufia-project/releases/XXXX/tmp xfs defaults 0 0

    11. Change the owner of the two mount locations

      1. sudo chown -R tomcat7:tomcat7 /opt/fedora-data

      2. sudo chown -R hydep:deploy /opt/sufia-project/releases/XXXX/tmp
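
    The volume creation and attachment in steps 1-5 can also be done with the AWS CLI instead of the console. A minimal sketch, assuming availability zone us-east-1b and placeholder volume/instance IDs (substitute the real ones; /dev/sdg and /dev/sdh show up inside the box as /dev/xvdg and /dev/xvdh):

      aws ec2 create-volume --volume-type gp2 --size 150 --availability-zone us-east-1b
      aws ec2 create-volume --volume-type gp2 --size 150 --availability-zone us-east-1b
      # then, using the returned VolumeIds and the machine's instance id:
      aws ec2 attach-volume --volume-id vol-aaaa1111 --instance-id i-0123456789abcdef0 --device /dev/sdg
      aws ec2 attach-volume --volume-id vol-bbbb2222 --instance-id i-0123456789abcdef0 --device /dev/sdh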

  3. Deploy Sufia

  4. Restart Solr

    1. Solr now runs outside of Tomcat, so if this is the first time Sufia has been deployed to this box, Solr needs to be restarted after the deploy.

      1. sudo service solr restart

  5. Ensure apache is off

    1. We don't want anyone doing stuff before we're ready.

  6. Activate maintenance mode on old server

  7. Move over minter statefile (see the sketch below)
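
A sketch of that copy, run from your own machine. It assumes both boxes use Sufia's default minter_statefile location, /tmp/minter-state (check the app's config in case it has been overridden); the "How to check the statefile" section below can be used to verify the result:

  scp staging:/tmp/minter-state .
  scp minter-state new_box_ip:/tmp/minter-state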

Fedora export

  1. Export Fedora data (in the sufia 6 instance)

    1. Run the audit script

      1. RAILS_ENV=production bundle exec sufia_survey -v

    2. Run the json export

      1. $ RAILS_ENV=production bundle exec sufia_export --models GenericFile=Chf::Export::GenericFileConverter,Collection=Chf::Export::CollectionConverter

    3. Open up the fedora port to the other server so it can grab the binaries (a security group sketch is at the end of this section)

    4. Change all the 127.0.0.1 URIs to reflect internal IPs, e.g.

      1. $ find tmp/export -type f -name "*.json" -print0 | xargs -0 sed -i "s/127\.0\.0\.1/[internal_ip_of_prod]/g"

      2. The internal IP of prod is: 
    5. Move the resulting directory full of exported data from tmp/export to the new server's tmp/import (or wherever desired; this can be provided to the import script)

      1. $ cd tmp; tar -czf json_export_201611141510.tgz export

    6. Then from your own machine:

      1. $ scp staging:/opt/sufia-project/current/tmp/json_export_201611141510.tgz new_box_ip:/opt/sufia-project/current/tmp/.
  2. Migrate postgres - see below
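
"Open up the fedora port" in step 3 will usually mean adding a security group rule. A sketch with the AWS CLI, assuming Fedora is served by Tomcat on port 8080 and that sg-xxxxxxxx is the old box's security group (both are assumptions; substitute the real port, group id, and the new box's internal IP):

  aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8080 --cidr [internal_ip_of_new_box]/32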

Fedora import

  1. Import Fedora data (in the sufia 7 instance)

    • sudo chown hydep:deploy /opt/sufia-project/releases/XXXXX/tmp

    1. Unpack the exported json files

      1. cd /opt/sufia-project/current/tmp/

      2. tar -xzf json_export_201611141510.tgz

      3. mv export import

    1. configure sufia6_user and sufia6_password in config/application

    2. run the import

      1. $ RAILS_ENV=production bundle exec sufia_import -d tmp/import --json_mapping Chf::Import::GenericFileTranslator=generic_file_,Sufia::Import::CollectionTranslator=collection_

  • You can use the little bash script I wrote to create batches of files if you want. It's at /opt/sufia-project/batch_imports.sh (a sketch of what it does is below). Then import each batch separately:

    $ RAILS_ENV=production bundle exec sufia_import -d /opt/sufia-project/import/gf_batch_0 --json_mapping Chf::Import::GenericFileTranslator=generic_file_
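
The script itself isn't reproduced here, but here is a minimal sketch of a batching script like it, based on the example command above (the batch size, source path, and gf_batch_ naming are assumptions):

  #!/bin/bash
  # Split the exported generic_file_*.json files into numbered batch
  # directories (gf_batch_0, gf_batch_1, ...) for separate import runs.
  SRC=/opt/sufia-project/current/tmp/import
  DEST=/opt/sufia-project/import
  BATCH_SIZE=500

  i=0
  n=0
  mkdir -p "$DEST/gf_batch_$n"
  for f in "$SRC"/generic_file_*.json; do
    cp "$f" "$DEST/gf_batch_$n/"
    i=$((i + 1))
    if [ "$i" -ge "$BATCH_SIZE" ]; then
      i=0
      n=$((n + 1))
      mkdir -p "$DEST/gf_batch_$n"
    fi
  done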


Postgres export/Import

On Staging

...

  • Then enter \q to quit
  • Finally import the data you copied over with scp
    psql -U postgres chf_hydra < chf_hydra_dump.sql
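
For reference, the corresponding dump on the old box would be something along these lines, assuming the same database name, user, and dump filename as above (the destination path is arbitrary):

  pg_dump -U postgres chf_hydra > chf_hydra_dump.sql
  scp chf_hydra_dump.sql new_box_ip:/tmp/chf_hydra_dump.sql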



  5. run (currently nonexistent) verification job
  6. migrate dump.rdb (see the sketch below)
  7. Reindex solr (see the sketch below)
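
Neither of the last two steps is written up elsewhere on this page. A minimal sketch, assuming Redis keeps its dump at /var/lib/redis/dump.rdb on both boxes with a service named redis, and that a full reindex via ActiveFedora's reindex_everything is acceptable (the paths and service name are assumptions):

  # On the old box: stop redis so dump.rdb is consistent, then ship it over
  sudo service redis stop
  scp /var/lib/redis/dump.rdb new_box_ip:/tmp/dump.rdb

  # On the new box: drop the file in place and restart redis
  sudo service redis stop
  sudo cp /tmp/dump.rdb /var/lib/redis/dump.rdb
  sudo chown redis:redis /var/lib/redis/dump.rdb
  sudo service redis start

  # Full Solr reindex from Fedora (slow on large repositories)
  RAILS_ENV=production bundle exec rails runner "ActiveFedora::Base.reindex_everything"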

How to check the statefile

...