Spin up machine
Run ansible scripts
- Make sure group_vars/all has
ec2_instance_type: c4.2xlarge
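For example, an invocation along these lines (inventory and playbook names are placeholders for whatever this repo actually uses):
ansible-playbook -i hosts site.yml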
Activate maintenance mode on old server
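How maintenance mode is toggled depends on the deploy; a sketch assuming the capistrano-maintenance tasks are installed (check this app's Capfile before relying on it):
bundle exec cap production maintenance:enable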
Export Fedora data (in sufia 6 instance)
Run audit script (takes 4 or 5 mins)
RAILS_ENV=production bundle exec sufia_survey -v
Make sure you have the latest deployment
- Make sure tmp/export is empty
Run json export (takes < 10 mins)
$ RAILS_ENV=production bundle exec sufia_export --models GenericFile=Chf::Export::GenericFileConverter,Collection=Chf::Export::CollectionConverter
Open up the Fedora port to the new server so it can grab the binaries
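On EC2 this means adding an inbound rule to the old server's security group. A sketch with the AWS CLI (group IDs are placeholders; the port assumes Fedora is served by Tomcat on 8080):
$ aws ec2 authorize-security-group-ingress --group-id sg-oldserver --protocol tcp --port 8080 --source-group sg-newserver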
Change all the 127.0.0.1 URIs to reflect internal IPs, e.g.
$ find tmp/export -type f -name "*.json" -print0 | xargs -0 sed -i "s/127\.0\.0\.1/[internal_ip_of_prod]/g"
- The internal IP of prod is: 172.31.48.168
- The internal IP of staging is: 172.31.58.101
- Move the resulting directory full of exported data from tmp/export to the new server's tmp/import (or wherever desired; this can be provided to the import script)
- $ cd tmp; tar -czf json_export_201612141435.tgz export
- Then from your own machine:
- $ scp -3 -i ~/.ssh/test.pem hydep@staging:/opt/sufia-project/current/tmp/json_export_201612141435.tgz hydep@new_box_ip:~/.
Migrate postgres
- Run the following to generate the export.
pg_dump -U postgres chf_hydra -Fp > chf_hydra_dump.sql
Copy the file to the new machine
scp -3 -i ~/.ssh/test.pem ubuntu@production_ip:~/chf_hydra_dump.sql ubuntu@new_box_ip:~
Import the file
psql -U postgres chf_hydra < chf_hydra_dump.sql
Deploy chf-sufia to new server
Create Drives
In the AWS console, go to EC2
Go to Volumes
Select Create Volume
Make two volumes with the following features:
General Purpose SSD
150 GB
Availability Zone b
Once each one is made, select it and under Actions choose Attach Volume. Type the name or id of the machine and attach the volume.
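The same can be done with the AWS CLI (volume and instance IDs are placeholders; on these instances /dev/sdg and /dev/sdh surface as /dev/xvdg and /dev/xvdh):
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdg
aws ec2 attach-volume --volume-id vol-yyyyyyyy --instance-id i-xxxxxxxx --device /dev/sdh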
ssh into the box
sudo fdisk -l
You should see /dev/xvdg and /dev/xvdh
If not, check if the volumes are attached
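For example, from a machine with the AWS CLI configured (instance ID is a placeholder):
aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=i-xxxxxxxx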
Create the filesystem for each disk
sudo mkfs.xfs /dev/xvdg
sudo mkfs.xfs /dev/xvdh
Edit the fstab file to retain these mounts
sudo vi /etc/fstab
/dev/xvdg /opt/fedora-data xfs defaults 0 0
/dev/xvdh /opt/sufia-project/releases/XXXX/tmp xfs defaults 0 0
- mount the disks
- sudo mount -a
Change the owner of the two mount locations
sudo chown -R tomcat7:tomcat7 /opt/fedora-data
sudo chown -R hydep:deploy /opt/sufia-project/releases/XXXX/tmp
Restart Solr
Solr now runs outside of Tomcat, so if this is the first time sufia has been deployed to this box, it needs to be restarted after deployment.
sudo service solr restart
Ensure apache is off on new server
We don't want anyone using the site before we're ready.
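For example, assuming the stock Ubuntu service name:
sudo service apache2 stop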
Restart Tomcat on new server
- sudo service tomcat7 restart
Move over minter statefile
- On Production
- sudo cp /var/sufia/minter-state ~
- sudo chown ubuntu:ubuntu minter-state
- Then copy the file
scp -3 -i ~/.ssh/test.pem ubuntu@production_ip:~/minter-state ubuntu@new_box_ip:~
- On New Box
- sudo mv minter-state /var/sufia
- sudo chown hydep:deploy /var/sufia/minter-state
Import Fedora data (in sufia 7 instance)
Start a screen or tmux session
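For example (session name is arbitrary):
tmux new -s import    # or: screen -S import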
Become hydep
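For example, from the ubuntu account:
sudo su - hydep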
Unpack the exported json files
cd /opt/sufia-project/current/tmp/
cp ~/json_export_201612141435.tgz .
tar -xzf json_export_201612141435.tgz
mv export import
configure sufia6_user and sufia6_password in config/application
run the import
- $ RAILS_ENV=production bundle exec sufia_import -d tmp/import --json_mapping Chf::Import::GenericFileTranslator=generic_file_,Sufia::Import::CollectionTranslator=collection_
- or, to time the run and keep a log:
- $ time RAILS_ENV=production bundle exec sufia_import -d tmp/import --json_mapping Chf::Import::GenericFileTranslator=generic_file_,Sufia::Import::CollectionTranslator=collection_ >> import.log 2>&1
run verification job
- Currently the job itself is hung up in a continuous integration mess / awaiting code review. Here's how to do it manually
$ bin/rails c production
validator = Sufia::Migration::Validation::Service.new
validator.call
Sufia::Migration::Survey::Item.migration_statuses.keys.each { |st| puts "#{st}: #{Sufia::Migration::Survey::Item.send(st).count}" }
[:missing, :wrong_type].each do |status|
  puts "#{status} ids:"
  Sufia::Migration::Survey::Item.send(status).each do |obj|
    puts "  #{obj.object_id}"
  end
end
Migrate dump.rdb (the Redis snapshot)
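- A sketch, assuming stock Ubuntu Redis paths and service name (verify /var/lib/redis and redis-server on these boxes); Redis must be stopped before the file is swapped in, or it will overwrite it on shutdown
- On Production
- redis-cli save
- sudo cp /var/lib/redis/dump.rdb ~ && sudo chown ubuntu:ubuntu ~/dump.rdb
- Then copy the file
scp -3 -i ~/.ssh/test.pem ubuntu@production_ip:~/dump.rdb ubuntu@new_box_ip:~
- On New Box
- sudo service redis-server stop
- sudo mv ~/dump.rdb /var/lib/redis/dump.rdb && sudo chown redis:redis /var/lib/redis/dump.rdb
- sudo service redis-server start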
Reindex Solr
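A sketch, assuming ActiveFedora's bulk reindex is the intended route; it walks every object in Fedora, so expect it to take a while on a large repository:
RAILS_ENV=production bundle exec rails runner 'ActiveFedora::Base.reindex_everything'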
...