  1. Spin up machine

    1. Run ansible scripts

    2. Make sure groupvars/all has
      1. ec2_instance_type: c4.2xlarge
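    3. For example, the invocation is typically something like the following (the inventory and playbook names are assumptions about the ansible repo; use whatever it actually provides):
      1. $ ansible-playbook -i hosts create_server.yml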

  2. Activate maintenance mode on old server

  3. Export Fedora data (in sufia 6 instance)

    1. Run audit script (takes 4 or 5 mins)

      1. RAILS_ENV=production bundle exec sufia_survey -v

    2. Make sure you have the latest deployment

    3. Make sure tmp/export is empty
    4. Run json export (takes < 10 mins)

      1. $ RAILS_ENV=production bundle exec sufia_export --models GenericFile=Chf::Export::GenericFileConverter,Collection=Chf::Export::CollectionConverter

    5. Open up fedora port to the other server so it can grab the binaries
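      1. One way is with the AWS CLI (the security group id is a placeholder and port 8080 is an assumption about where Fedora/Tomcat listens; the same inbound rule can also be added in the EC2 console):
        1. $ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8080 --cidr [internal_ip_of_new_box]/32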

    6. Change all the 127.0.0.1 URIs to reflect internal IPs, e.g.

      1. $ find tmp/export -type f -name "*.json" -print0 | xargs -0 sed -i "s/127\.0\.0\.1/[internal_ip_of_prod]/g"

      2. The internal IP of prod is: 172.31.48.168
      3. The internal IP of staging is: 172.31.58.101
    7. Move the resulting directory full of exported data from tmp/export to the new server's tmp/import (or wherever desired; this can be provided to the import script)
      1. $ cd tmp; tar -czf json_export_201611141510.tgz export
    8. Then from your own machine:
      1. $ scp -3 -i ~/.ssh/test.pem hydep@staging:/opt/sufia-project/current/tmp/json_export_201612141435.tgz hydep@new_box_ip:~/.
  4. Migrate postgres

    1. Run the following to generate the export.
      1. pg_dump -U postgres chf_hydra -Fp > chf_hydra_dump.sql
    2. Copy the file to the new machine

      1. scp -3 -i ~/.ssh/test.pem ubuntu@production_ip:~/chf_hydra_dump.sql ubuntu@new_box_ip:~
    3. Import the file

      1. psql -U postgres chf_hydra < chf_hydra_dump.sql
  5. Deploy chf-sufia to new server
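    1. The /opt/sufia-project/current and releases/XXXX paths elsewhere on this page suggest a Capistrano layout; assuming so, deploy from your own machine (the stage name is an assumption, use whichever stage points at the new box):
      1. $ bundle exec cap production deploy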

  6. Create Drives

    1. In the AWS visual interface, go to EC2

    2. Go to Volumes

    3. Select Create Volumes

    4. Make two volumes with the following features:

      1. General Purpose SSD

      2. 150 GB

      3. Availability Zone b

    5. Once each one is made, select it and under Actions choose Attach Volume. Type the name or id of the machine and attach the volume.

    6. ssh into the box

    7. sudo fdisk -l

      1. You should see /dev/xvdg and /dev/xvdh

      2. If not, check if the volumes are attached

    8. Create the filesystem for each disk

      1. sudo mkfs.xfs /dev/xvdg

      2. sudo mkfs.xfs /dev/xvdh

    9. Edit the fstab file to retain these mounts

      1. sudo vi /etc/fstab

      2. /dev/xvdg /opt/fedora-data xfs defaults 0 0

      3. /dev/xvdh /opt/sufia-project/releases/XXXX/tmp xfs defaults 0 0

    10. Mount the disks
      1. sudo mount -a
    11. Change the owner of the two mount locations

      1. sudo chown -R tomcat7:tomcat7 /opt/fedora-data

      2. sudo chown -R hydep:deploy /opt/sufia-project/releases/XXXX/tmp

  7. Restart Solr

    1. If this is the first deployment of sufia on this box: Solr now runs outside of Tomcat, so it needs to be restarted after deployment.

      1. sudo service solr restart

  8. Ensure apache is off on new server

    1. We don't want anyone doing stuff before we're ready.
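    2. On Ubuntu this is typically (assuming the stock apache2 service):
      1. sudo service apache2 stop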

  9. Restart Tomcat on new server

    1. sudo service tomcat7 restart
  10. Move over minter statefile

    1. On Production
      1. sudo cp /var/sufia/minter-state ~
      2. sudo chown ubuntu:ubuntu minter-state
    2. Then copy the file
      1. scp -3 -i ~/.ssh/test.pem ubuntu@production_ip:~/minter-state ubuntu@new_box_ip:~
    3. On New Box
      1. sudo mv minter-state /var/sufia
      2. sudo chown hydep:deploy /var/sufia/minter-state
  11. Import Fedora data (in sufia 7 instance)

    1. Start a screen or tmux session

    2. Become hydep

    3. Unpack the exported json files

      1. cd /opt/sufia-project/current/tmp/

      2. cp ~/json_export_201612141435.tgz .

      3. tar -xzf json_export_201612141435.tgz

      4. mv export import

    4. Configure sufia6_user and sufia6_password in config/application

    5. Run the import

      1. $ RAILS_ENV=production bundle exec sufia_import -d tmp/import --json_mapping Chf::Import::GenericFileTranslator=generic_file_,Sufia::Import::CollectionTranslator=collection_
      2. Or, to time it and capture a log: $ time RAILS_ENV=production bundle exec sufia_import -d tmp/import --json_mapping Chf::Import::GenericFileTranslator=generic_file_,Sufia::Import::CollectionTranslator=collection_ >> import.log 2>&1
  12. Run the (currently nonexistent) verification job

    1. Currently the job itself is hung up in a continuous integration mess / awaiting code review. Here's how to do it manually:
      1. $ bin/rails c production

      2. validator = Sufia::Migration::Validation::Service.new
         validator.call
         Sufia::Migration::Survey::Item.migration_statuses.keys.each { |st| puts "#{st}: #{Sufia::Migration::Survey::Item.send(st).count}" }
         [:missing, :wrong_type].each do |status|
           puts "#{status} ids:"
           Sufia::Migration::Survey::Item.send(status).each do |obj|
             puts "  #{obj.object_id}"
           end
         end
  13. Migrate dump.rdb
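    1. dump.rdb is the Redis persistence file. A sketch, assuming the stock Ubuntu redis-server install (data dir /var/lib/redis), following the same copy pattern as the minter statefile:
      1. On production: sudo service redis-server stop; sudo cp /var/lib/redis/dump.rdb ~; sudo chown ubuntu:ubuntu dump.rdb
      2. From your own machine: scp -3 -i ~/.ssh/test.pem ubuntu@production_ip:~/dump.rdb ubuntu@new_box_ip:~
      3. On the new box: sudo service redis-server stop; sudo mv ~/dump.rdb /var/lib/redis/; sudo chown redis:redis /var/lib/redis/dump.rdb; sudo service redis-server start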

  14. Reindex solr
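    1. Assuming a full reindex from Fedora is acceptable, this can be done from a production rails console:
      1. $ RAILS_ENV=production bundle exec rails c
      2. ActiveFedora::Base.reindex_everything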

...

  1. Spin up machine

    1. Use ansible playbook, set groupvars/all to have
      1. ec2_instance_type: m4.large

  2. On the migration and downsize machine, stop tomcat.

    1. sudo service tomcat7 stop
  3. Install mdadm

    1. sudo apt-get install mdadm
  4. Create two new disks for the new machine

    1. In the AWS visual interface, go to EC2

    2. Go to Volumes

    3. Select Create Volumes

    4. Make two volumes with the following features:

      1. Magnetic

      2. 1 TB

      3. Availability Zone b

    5. Once each one is made, select it and under Actions choose Attach Volume. Type the name or id of the machine and attach the volume.

    6. ssh into the box

    7. sudo fdisk -l

      1. You should see /dev/xvdg and /dev/xvdh

      2. If not, check if the volumes are attached

    8. Create the filesystem for each disk

      1. sudo mkfs.xfs /dev/xvdg

      2. sudo mkfs.xfs /dev/xvdh

  5. Build a RAID 1 array with the two disks

    1. sudo mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/xvdg /dev/xvdh

  6. Mount the array
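    1. A sketch, assuming the array will hold the Fedora data at /opt/fedora-data (where the data is copied in step 15):
      1. sudo mkfs.xfs /dev/md0 (add -f if mkfs complains about an existing signature)
      2. sudo mkdir -p /opt/fedora-data
      3. sudo mount /dev/md0 /opt/fedora-data
      4. To make this persistent: sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf, and add /dev/md0 /opt/fedora-data xfs defaults 0 0 to /etc/fstab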

  7. Migrate postgres from old machine to new machine

    1. Run the following to generate the export.
      1. pg_dump -U postgres chf_hydra -Fp > chf_hydra_dump.sql
      2. pg_dump -U postgres fcrepo -Fp > fcrepo_dump.sql
    2. Copy the files to the new machine

      1. scp -3 -i ~/.ssh/test.pem ubuntu@pbig:~/chf_hydra_dump.sql ubuntu@small:~
      2. scp -3 -i ~/.ssh/test.pem ubuntu@pbig:~/fcrepo_dump.sql ubuntu@small:~
    3. Drop the current chf_hydra and fcrepo
      1. psql -U postgres
        1. drop database chf_hydra;
        2. drop database fcrepo;
    4. Import the files

      1. psql -U postgres chf_hydra < chf_hydra_dump.sql
      2. psql -U postgres fcrepo < fcrepo_dump.sql
    5. Grant database permissions
      1. GRANT Create,Connect,Temporary ON DATABASE chf_hydra TO chf_pg_hydra;
      2. GRANT ALL privileges ON DATABASE fcrepo to trilby;
  8. Move the minter statefile from old machine to new machine

    1. On Bigbox
      1. sudo cp /var/sufia/minter-state ~
      2. sudo chown ubuntu:ubuntu minter-state
    2. Then copy the file
      1. scp -3 -i ~/.ssh/test.pem ubuntu@big_ip:~/minter-state ubuntu@small_box_ip:~
    3. On Small Box
      1. sudo mv minter-state /var/sufia
      2. sudo chown hydep:deploy /var/sufia/minter-state
  9. Move the derivative files from the old machine to the new machine
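    1. The source path depends on the configured derivatives directory (sufia's default is tmp/derivatives under the app root); assuming that, the same tar + scp -3 pattern as the Fedora export works:
      1. On the old box: cd /opt/sufia-project/current/tmp; tar -czf ~/derivatives.tgz derivatives
      2. From your own machine: scp -3 -i ~/.ssh/test.pem hydep@big_ip:~/derivatives.tgz hydep@small_box_ip:~
      3. On the new box: unpack into the corresponding tmp directory and chown to hydep:deploy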

  10. Move dump.rdb from old machine to new machine

  11. Detach the fedora drive on the old machine

  12. Stop tomcat

  13. Attach the fedora drive on the new machine

    1. See visual guide
  14. Mount the fedora drive to /mnt

    1. sudo mount /dev/xvd? /mnt
  15. Copy the data from the fedora drive to the RAID array.

    1. sudo cp -ar /mnt/* /opt/fedora-data/
    2. sudo chown -R tomcat7:tomcat7 /opt/fedora-data/*
  16. Backup Solr and move the backup to the new machine

    1. curl 'http://localhost:8983/solr/collection1/replication?command=backup&name=migration&location=/solr-backup'
  17. Restore Solr on the new machine

    1. curl 'http://localhost:8983/solr/collection1/replication?command=restore&name=migration&location=/solr-backup'
  18. Go to Production and copy SSL certs and keyfiles
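    1. The file locations depend on the Apache vhost config (check the SSLCertificateFile / SSLCertificateKeyFile directives on production); the same sudo cp + scp -3 pattern used for the minter statefile works, e.g.:
      1. On production: sudo cp [path_to_cert] [path_to_key] ~; sudo chown ubuntu:ubuntu ~/[cert_file] ~/[key_file]
      2. From your own machine: scp -3 -i ~/.ssh/test.pem ubuntu@production_ip:~/[cert_file] ubuntu@new_box_ip:~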

  19. Set up SSL redirection
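    1. A minimal sketch of the port-80 Apache vhost that redirects to HTTPS (the server name is a placeholder; mirror the existing vhost config on production):
      1. <VirtualHost *:80>
           ServerName [server_name]
           Redirect permanent / https://[server_name]/
         </VirtualHost>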

...