We don’t currently have “infrastructure-as-code” for our Heroku setup; it is just configured on the Heroku system (and third-party systems) by hand via the GUI and/or CLIs, and there isn’t any kind of script to recreate our Heroku setup from nothing.

...

  • Delete all failed jobs in the Resque admin pages.

  • Make a rake task to enqueue all the jobs to the special_jobs queue (a sketch of one possible task appears after this list).

    • (lightbulb) The task should be smart enough to skip items that have already been processed. That way, you can interrupt the task at any time, fix any problems, and run it again later without having to worry.

    • (lightbulb) Make sure you have an easy way to run the task on individual items manually from the admin pages or the console.

    • (lightbulb) The job that the task calls should print the IDs of any entities it’s working on to the Heroku logs.

    • (lightbulb) It’s very helpful to be able to enqueue a limited number of items and run them first, before embarking on the full run. For instance, you could add an extra boolean argument only_do_10 (defaulting to false) and add a variation on:

      Code Block
      scope = scope[1..10] if only_do_10
  • Test the rake task in staging with only_do_10 set to true.

  • Run the rake task in production, but with only_do_10 set to true, as a trial run.

  • Spin up a single special_jobs dyno and watch it process 10 items.

  • Run the rake task in production.

  • The jobs are now in the special_jobs queue, but no work will actually start until you spin up dedicated dynos.

  • Our default is 2 workers per special_jobs dyno, which works nicely with standard-2x dynos; if you want more, try setting the SPECIAL_JOB_WORKER_COUNT env variable to 3.

  • The maximum number of special_jobs dynos you can run at once is limited by the smaller of our max postgres connections and our max redis connections, counting connections already in use by web workers. Currently we have 500 max redis connections and 120 max postgres connections, so postgres is the effective ceiling. You may want to monitor the redis statistics during the job.

  • Manually spin up a set of special_worker dynos of whatever type you want at Heroku's "resources" page for the application. Heroku will alert you to the cost. (10 standard-2x dynos cost roughly $1 per hour, for instance; with the worker count set to two, you’ll see up to 20 items being processed simultaneously).

  • Monitor the progress of the resulting workers. Work goes much faster than you are used to, so pay careful attention:

    • (lightbulb) If there are errors in any of the jobs, you can retry them in the Resque admin pages, or rerun them from the console.

  • Monitor the number of jobs still pending in the special_jobs queue. When that number goes to zero, it means the work will complete soon and you should start getting ready to turn off the dynos. It does NOT mean the work is complete, however! (A few console commands for checking on the queue are sketched after this list.)

  • When all the workers in the special_jobs queue complete their jobs and are idle:

    • (lightbulb) rake scihist:resque:prune_expired_workers will get rid of any expired workers, if needed

    • Set the number of special_worker dynos back to zero.

    • Remove the special_jobs queue from the Resque admin pages.
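
For reference, here is a minimal sketch of what the enqueueing rake task described above could look like. The task name, the Work model, the derivatives_created? check, and the SpecialJob class are hypothetical placeholders, not our actual code; the point is the pattern: skip items that have already been processed, take an only_do_10 flag for trial runs, and log the ID of everything enqueued.

Code Block
# lib/tasks/special_jobs.rake -- hypothetical sketch, not our actual task
namespace :scihist do
  desc "Enqueue a SpecialJob for every Work that still needs processing"
  task :enqueue_special_jobs, [:only_do_10] => :environment do |_t, args|
    only_do_10 = args[:only_do_10] == "true"

    # Skipping already-processed items makes the task safe to interrupt and re-run.
    scope = Work.all.reject { |work| work.derivatives_created? }
    scope = scope.first(10) if only_do_10

    scope.each do |work|
      Resque.enqueue_to(:special_jobs, SpecialJob, work.id)
      puts "Enqueued SpecialJob for Work #{work.id}"
    end
  end
end

To run a single item manually from the console, you can enqueue the same job directly, e.g. Resque.enqueue_to(:special_jobs, SpecialJob, some_id), or call the job class’s perform method inline.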

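Once dynos are running, a few calls from a Rails console are handy for checking progress. This sketch uses Resque’s standard API; only the special_jobs queue name is specific to us:

Code Block
# From a Rails console, check how the run is going.
Resque.size("special_jobs")   # jobs still waiting in the queue
Resque.workers.count          # workers currently registered
Resque.working.count          # workers actively processing a job right now
Resque::Failure.count         # failed jobs (also visible in the Resque admin pages)

Remember that a queue size of zero only means nothing is left waiting; check Resque.working to see whether jobs are still in progress before scaling the dynos down.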
...

  • Some config variables are set by Heroku itself or by Heroku add-ons, such as DATABASE_URL (set by the postgres add-on) and REDIS_URL (set by the Redis add-on). These should not be edited manually. Unfortunately, there is no completely clear documentation of which is which. (An illustrative sketch of how the app consumes one of these appears after this list.)

  • Some config variables include sensitive information such as passwords. If you run heroku config to list them all, be careful where you put or store the output, if anywhere.
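
As an illustration of why the add-on-managed variables shouldn’t be hand-edited: the app just reads them from the environment at boot, so whatever the add-on sets is what gets used. A sketch of a typical initializer (not necessarily our exact code):

Code Block
# config/initializers/resque.rb -- illustrative sketch, not necessarily our exact initializer
# REDIS_URL is set (and can be rotated) by the Redis add-on; the app only reads it.
Resque.redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379"))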

...

  • Heroku Postgres (an RDBMS; the standard-0 plan is enough for our needs)

  • StackHero Redis via the Heroku marketplace (Redis is a key/value store, used for our background job queue)

    • We are currently using the premium-1 plan of StackHero Redis. Our redis needs are modest, but we want enough redis connections to run lots of temporary background workers without running out (we seemed to run out of redis connections on hirefire autoscale-up of workers with premium-0, their smallest $20/month plan). At 500 connections, this plan means postgres, not redis, is the connection bottleneck.

      • Note that a “not enough connections” error from redis can actually show up as an OpenSSL::SSL::SSLError, we are pretty sure: https://github.com/redis/redis-rb/issues/980

      • The numbers don’t quite add up for this; I think resque_pool may be temporarily using too many connections, or something similar. But for now we just pay for premium-1 ($30/month).

  • Memcached via the Memcached Cloud add-on

    • Used for Rails.cache in general – the main thing we are using Rails.cache for initially is rack-attack’s rate-limit tracking. Now that we have a cache store, we may use Rails.cache for other things. (A configuration sketch appears at the end of this list.)

    • In staging, we currently have a free memcached add-on; we could also just NOT have it in staging if the free one becomes unavailable.

    • In production we still have a pretty small Memcached Cloud plan; if we’re only using it for rack-attack, we hardly need anything.

  • Heroku scheduler (used to schedule nightly jobs; free, although you pay for job minutes).

  • Papertrail – used for keeping Heroku’s unified log history with a good UX (otherwise Heroku only gives you the most recent 1500 log lines, and not a very good UX for viewing them!). We aren’t sure what size Papertrail plan we’ll end up needing for our actual log volume.

  • Heroku’s own “deployhooks” add-on is used to notify Honeybadger of deploys, for deploy tracking. See https://docs.honeybadger.io/lib/ruby/getting-started/tracking-deployments.html#heroku-deployment-tracking and https://github.com/sciencehistory/scihist_digicoll/issues/878
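
For reference, wiring the Memcached Cloud add-on into Rails.cache (which rack-attack then uses for its rate-limit counters by default) generally looks something like the sketch below. The MEMCACHEDCLOUD_* variables are the ones the add-on sets; the exact options shown are an illustration, not necessarily our production configuration.

Code Block
# config/environments/production.rb -- illustrative sketch, not necessarily our exact config
# Memcached Cloud sets MEMCACHEDCLOUD_SERVERS / _USERNAME / _PASSWORD; the dalli gem talks to it.
config.cache_store = :mem_cache_store,
                     ENV.fetch("MEMCACHEDCLOUD_SERVERS", "").split(","),
                     { username: ENV["MEMCACHEDCLOUD_USERNAME"],
                       password: ENV["MEMCACHEDCLOUD_PASSWORD"] }
# rack-attack uses Rails.cache by default, so the rate-limit counters end up in memcached.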

...