General outline
- Move over minter
- Fedora Export - see below
- migrate postgres
- Fedora Import - see below
- run (currently nonexistent) verification job
- migrate dump.rdb
- Reindex solr
Spin up machine
Run ansible scripts
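If you're running the plays by hand, the invocation is the usual one; the inventory and playbook names here are placeholders for whatever this project actually uses:
$ ansible-playbook -i production.ini site.yml --limit new_sufia_box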
Create Drives
In the AWS visual interface, go to EC2
Go to Volumes
Select Create Volumes
Make two volumes with the following features:
- General Purpose SSD
- 150 GB
- Availability Zone b
Once each one is made, select it and under Actions choose Attach Volume. Type the name or id of the machine and attach the volume.
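If you'd rather script this than click through the console, the AWS CLI equivalent looks roughly like this (the zone, volume ID, and instance ID are placeholders):
$ aws ec2 create-volume --volume-type gp2 --size 150 --availability-zone us-east-1b
$ aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdg
(A volume attached as /dev/sdg shows up inside the instance as /dev/xvdg.)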
ssh into the box
sudo fdisk -l
You should see /dev/xvdg and /dev/xvdh
If not, check if the volumes are attached
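lsblk is another quick way to double-check which block devices the kernel can see:
$ lsblk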
Create the filesystem for each disk
sudo mkfs.xfs /dev/xvdg
sudo mkfs.xfs /dev/xvdh
Mount each disk
sudo mount /dev/xvdg /opt/fedora-data
sudo mount /dev/xvdh /opt/sufia-project/releases/XXXX/tmp
Edit the fstab file to retain these mounts
sudo vi /etc/fstab
/dev/xvdg /opt/fedora-data xfs defaults 0 0
/dev/xvdh /opt/sufia-project/releases/XXXX/tmp xfs defaults 0 0
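You can sanity-check the new fstab entries without rebooting; mount -a mounts everything listed that isn't already mounted:
$ sudo mount -a
$ df -h | grep xvd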
Change the owner of the two mount locations
sudo chown -R tomcat7:tomcat7 /opt/fedora-data
sudo chown -R hydep:deploy /opt/sufia-project/releases/XXXX/tmp
Deploy Sufia
Restart Solr
Note for the first Sufia 7 deploy: Solr now runs outside of Tomcat, so it needs to be restarted after deployment.
sudo service solr restart
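To confirm Solr came back up, hit the core admin status endpoint (8983 is the standalone default port; adjust if this deployment differs):
$ curl 'http://localhost:8983/solr/admin/cores?action=STATUS'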
Ensure apache is off
We don't want anyone hitting the application before we're ready.
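On Ubuntu that's the apache2 service (httpd on RedHat-family boxes):
$ sudo service apache2 stop
$ sudo service apache2 status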
Activate maintenance mode on old server
Move over minter statefile
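Assuming the default statefile location of /tmp/minter-state (check Sufia.config.minter_statefile in the initializer if it was customized), something like:
$ scp old_box_ip:/tmp/minter-state /tmp/minter-state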
Fedora export
Export Fedora data (in the Sufia 6 instance)
Run audit script
$ RAILS_ENV=production bundle exec sufia_survey -v
Run json export
$ RAILS_ENV=production bundle exec sufia_export --models GenericFile=Chf::Export::GenericFileConverter,Collection=Chf::Export::CollectionConverter
Open up fedora port to the other server so it can grab the binaries
Change all the 127.0.0.1 URIs to reflect internal IPs, e.g.
$ find tmp/export -type f -name "*.json" -print0 | xargs -0 sed -i "s/127\.0\.0\.1/[internal_ip_of_prod]/g"
- The internal IP of prod is:
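Afterwards, spot-check that no loopback URIs remain; this should print 0:
$ grep -rl '127.0.0.1' tmp/export | wc -l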
- Move the resulting directory full of exported data from tmp/export to the new server's tmp/import (or wherever desired; this can be provided to the import script)
- $ cd tmp; tar -czf json_export_201611141510.tgz export
- Then from your own machine:
- $ scp staging:/opt/sufia-project/current/tmp/json_export_201611141510.tgz new_box_ip:/opt/sufia-project/current/tmp/.
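It's worth checksumming the tarball on both ends to make sure nothing got mangled in transit:
$ md5sum json_export_201611141510.tgz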
Migrate postgres
Fedora import
Import Fedora data (in the Sufia 7 instance)
- sudo chown hydep:deploy /opt/sufia-project/releases/XXXXX/tmp
Unpack the exported json files
cd /opt/sufia-project/current/tmp/
tar -xzf json_export_201611141510.tgz
mv export import
Configure sufia6_user and sufia6_password in config/application
Run the import
$ RAILS_ENV=production bundle exec sufia_import -d tmp/import --json_mapping Chf::Import::GenericFileTranslator=generic_file_,Sufia::Import::CollectionTranslator=collection_
- You can use the little bash script I wrote to create batches of files if you want. It's at /opt/sufia-project/batch_imports.sh
$ RAILS_ENV=production bundle exec sufia_import -d /opt/sufia-project/import/gf_batch_0 --json_mapping Chf::Import::GenericFileTranslator=generic_file_
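For reference, the batching idea is just splitting the exported json files into numbered directories. This is a minimal sketch of that approach; the batch size of 500 is an assumption and it may not match what batch_imports.sh actually does:
# split generic_file json files into gf_batch_0, gf_batch_1, ... of 500 files each
cd /opt/sufia-project/import
i=0; batch=0
mkdir -p gf_batch_$batch
for f in generic_file_*.json; do
  if [ $i -eq 500 ]; then
    i=0; batch=$((batch + 1)); mkdir -p gf_batch_$batch
  fi
  mv "$f" gf_batch_$batch/
  i=$((i + 1))
done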
Postgres export/import
On Staging (see the pg_dump sketch below)
...
- Then enter \q to quit
- Finally import the data you copied over with scp
psql -U postgres chf_hydra < chf_hydra_dump.sql
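The elided staging-side step is presumably a pg_dump along these lines (user and database names taken from the import command above):
$ pg_dump -U postgres chf_hydra > chf_hydra_dump.sql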
- run (currently nonexistent) verification job
- migrate dump.rdb
- Reindex solr
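One way to kick off a full reindex, assuming ActiveFedora's built-in helper is acceptable here:
$ RAILS_ENV=production bundle exec rails runner 'ActiveFedora::Base.reindex_everything'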
How to check the statefile
...