Process documentation

Deploy the application with Capistrano

  •  cd into the root dir of our code repository

...

 

Deploy with downtime

  1. Enable maintenance mode:
     bundle exec cap staging maintenance:enable REASON="a test of maintenance mode" UNTIL="12pm Eastern Time"
  2. Deploy as usual / desired
  3. Do anything else on the server that required the downtime
  4. Disable maintenance mode:
     bundle exec cap staging maintenance:disable
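The sequence above can be sketched as a single script. This is a hedged sketch, not something in the repo: the REASON/UNTIL values are the example ones, cap staging deploy stands in for "deploy as usual", and the run/DO_IT dry-run convention is ours.

```shell
#!/bin/bash
# Sketch of the downtime deploy as one script. By default it only prints the
# commands (dry run); set DO_IT=1 to actually execute them.
set -euo pipefail
DO_IT="${DO_IT:-0}"

run() {
  echo "+ $*"
  if [ "$DO_IT" = "1" ]; then "$@"; fi
}

run bundle exec cap staging maintenance:enable \
  REASON="a test of maintenance mode" UNTIL="12pm Eastern Time"
run bundle exec cap staging deploy
# ...anything else on the server that required the downtime goes here...
run bundle exec cap staging maintenance:disable
```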


Delete all the data

(Don't do this on prod!)

Shut down tomcat

...

psql -U chf_pg_hydra -d chf_hydra
delete from mailboxer_receipts where created_at < '2015-11-9';
delete from mailboxer_notifications where created_at < '2015-11-9';
delete from mailboxer_conversations where created_at < '2015-11-9';
delete from trophies where created_at < '2015-11-9';
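The same cleanup can be run inside one transaction, so a mistake can be rolled back before anything is permanent. A sketch only – the table names, cutoff date, and psql invocation come from the commands above, but the cleanup_sql helper is ours:

```shell
#!/bin/bash
# Emits the cleanup statements wrapped in a transaction; pipe the output
# into psql (see usage below).
cleanup_sql() {
  local cutoff="$1"
  cat <<SQL
BEGIN;
delete from mailboxer_receipts where created_at < '$cutoff';
delete from mailboxer_notifications where created_at < '$cutoff';
delete from mailboxer_conversations where created_at < '$cutoff';
delete from trophies where created_at < '$cutoff';
-- if pasting interactively: check the reported row counts, and run
-- ROLLBACK instead of COMMIT if they look wrong
COMMIT;
SQL
}

# usage (NOT on prod!):
#   cleanup_sql '2015-11-9' | psql -U chf_pg_hydra -d chf_hydra
```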




Building a new machine on AWS with Ansible

  1. (Note: ansible-vault password and all current AWS keys are in shared network drive)
  2. Generate a new ssh key on AWS (EC2 > Keypairs)
    1. Place it in ~/.ssh
    2. chmod 0600
    3. Useful command if you're having problems with the key: $ openssl rsa -in chf_prod.pem -check
  3. Check ansible variables
    1. $ ansible-vault edit group_vars/all
    2. Look for # Use these temporarily for new instances
    3. ensure your ssh key is listed under keys_to_add
  4. run the ansible playbook
    1. $ ansible-playbook -i hosts create_ec2.yml --private-key=/Users/aheadley/.ssh/chf_prod.pem --ask-vault-pass
    2. OR, if you're re-running scripts on an existing machine: 
      1. $ ansible-playbook -i hosts [playbook] --ask-vault-pass
    3. Note that if there's a failure during postgres setup, handlers may not run – watch out for this. If this happens, it's potentially best to start over completely.
  5. Assign an elastic IP to the new box
  6. Ask IT to give you a DNS entry for the elastic IP if desired
  7. Consider naming the AWS volumes for 'root' and 'data' – this isn't done in the scripts (but probably could be!)
  8. Set up to use capistrano (below) or just deploy with capistrano (above)
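Step 2 above (install the key, fix its permissions, sanity-check it) looks like this in practice. The demo below generates a throwaway key so it's runnable anywhere; for real use, substitute the .pem you downloaded from EC2 > Keypairs (e.g. chf_prod.pem):

```shell
#!/bin/bash
set -euo pipefail

# Throwaway key standing in for the .pem downloaded from AWS
KEY="$(mktemp -d)/demo.pem"
openssl genrsa -out "$KEY" 2048 2>/dev/null

chmod 0600 "$KEY"                      # ssh refuses keys with looser permissions
openssl rsa -in "$KEY" -check -noout   # prints "RSA key ok" if the key parses
```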

Set up Capistrano (first-time use)

Create an entry for the deploy user in your .ssh/config:

...

  • This keeps us from publishing server names, etc., in the cap config files, which live in our public repo.
  • Don't change the Host designation without:
    • Changing it in the capistrano config, e.g. deploy/staging.rb, to match
    • Clearing it with everyone who might deploy (they'll have to change their ssh config as well)
  • This will use your personal ssh key – the one that matches your public key on GitHub, which is added to the deploy user by the ansible scripts.
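For illustration, an entry of the kind meant here – every value below is a placeholder, since the real host names and IPs are deliberately kept out of the public repo:

```
# ~/.ssh/config -- placeholder values only
Host chf-deploy-staging          # must match what deploy/staging.rb uses
  HostName xx.xx.xx.xx           # the elastic IP or DNS name from IT
  User deploy
  IdentityFile ~/.ssh/id_rsa     # your personal key (matches your GitHub key)
```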

List all vars in ansible repo!

 

#!/bin/bash
# List every ansible role's default variables: print the path of each
# defaults/main.yml, then its contents.
for F in $(find . -type d -name defaults)
do
  for G in $(find "$F" -name main.yml)
  do
    echo
    echo "$G"
    cat "$G"
  done
done


General Notes

Notes from conversations with Alicia

AWS Web Console

Administration & security > IAM

...

User - can change access keys if these credentials are leaked

 

AWS Architecture

Pricing – unless you make a specific decision, it's pay-as-you-go

Architecture design: 1 dev machine, 1 higher-end machine. Fedora data on a separate volume, backed up every 24 hrs. Keep daily backups for 7 days, plus the first of every month (keep 2 of those)

Instances

Root device (disc at '/') typically ~8G, just for OS. EBS -> persistent storage

...

Use DNS or elastic IPs to swap machines

Volumes

See mounted discs. State shows whether they're attached

Misc

  • snapshots
  • elastic IPs - external IPs you can move between machines to avoid messing with DNS
  • AMI - Amazon Machine Image. like a snapshot of a machine. Can be used to script ways to do as-needed load balancing, e.g.

Security

Keep names of users secret, along with keys, other obvious things

The stack

Web server will be Apache, due to Alicia's greater experience with that server

Installing Ruby from source

  • Advantages: standard, stable, known version of ruby (no problems with apt updates coming at a bad time)
  • Disadvantage: security bugs – updates are manual, so you tend to stay a version behind

Postgres uses peer authentication

  • superuser is the postgres user, who can log in to db without a password (auto-authenticates when you are this user)
  • Alicia changes the default auth to MD5 and makes a new user, with restricted permissions, for the rails database
  • settings in database.yml
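The auth change described above corresponds to a pg_hba.conf edit along these lines. A sketch only – the file's path and exact surrounding lines vary by install; the user and database names are taken from the psql command earlier on this page:

```
# pg_hba.conf (sketch)
# default peer auth lets the matching OS user in with no password:
#   local   all         all            peer
# the restricted rails user authenticates with an MD5-hashed password instead:
local   chf_hydra   chf_pg_hydra   md5
```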

Solr

app/config and tomcat both need to know the name of the solr instance. Alicia has called it hydra.xml, but she's making it a variable. Not a security issue b/c we close the port (8080) to all but localhost. Not sure why I have a note that says $ curl localhost:8080/my_context/

...

  • $ ssh -L 2020:localhost:8080 target_machine
  • Set up target_machine in your ssh config, or remember you have to specify user, ip, cert, etc.
  • In your browser, go to localhost:2020/solr_context
  • (Things might break if there are redirects with ports in them)

Capistrano

shared files (linked files) are for two types of things

...

But for production environments, sharing the log directory and rotating the files makes more sense – especially if you test everything in a staging environment first, so you only deploy successful code.
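In Capistrano 3 terms, the two kinds of shared things are declared in config/deploy.rb roughly like this (a sketch – the file names are illustrative, not necessarily what our repo links):

```ruby
# config/deploy.rb (sketch)
# 1. secrets that must not live in the repo:
append :linked_files, "config/database.yml"
# 2. state that should persist across releases, e.g. logs:
append :linked_dirs, "log"
```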

Ansible

FYI, the ansible.cfg file included in the repo turns off host checking for ec2 instances. If you get rid of that, your playbook will fail after prompting you to add the RSA fingerprint to your known hosts list.
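The switch in question is ansible's host_key_checking setting; turning it off in ansible.cfg looks like this (a sketch of the standard knob, not a copy of the repo's file):

```
# ansible.cfg
[defaults]
# skip the interactive RSA-fingerprint prompt for brand-new EC2 hosts
host_key_checking = False
```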

...

  • the launch-ec2 one does the heavy lifting - creates the instance and then creates and attaches a volume
  • then the ec2 role puts aws-specific tools on the box for backups - for obvious reasons, that had to come later in the process
  • Backup script pulls the same variable used to create the backups, so it can vary by machine (i.e. the production machine could use "CHF-prod" and the staging machine "CHF-stage") and each machine will delete its own backups over time.
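So, for example, each machine can find (and prune) only its own snapshots by filtering on that shared variable. A sketch – the snapshot_filter helper and the tag key "Name" are our assumptions; the CHF-prod / CHF-stage values come from the note above:

```shell
#!/bin/bash
# Build the aws-cli filter expression for one machine's snapshots.
snapshot_filter() {
  printf 'Name=tag:Name,Values=%s' "$1"
}

# usage (requires AWS credentials):
#   aws ec2 describe-snapshots --owner-ids self --filters "$(snapshot_filter CHF-prod)"
```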

vault

The credentials in the vaulted files are all new. For backups, I generated a new IAM user, new credentials, and a policy that only has access to snapshots. For creating instances, I generated new credentials on the existing IAM user and turned off the old credentials (because they will be on GitHub now if you know how to find them).

...