...

When you deploy to Heroku, you don’t deal with OS installs, upgrades, security patches, or creating user accounts. You just tell Heroku “Here is an app I want to deploy”, and ask for the quantity of resources you need for deployment (CPU, memory, etc.), expressed in terms of a Heroku abstraction called a “dyno”, which is a kind of “container” available in different sizes.

  • https://devcenter.heroku.com/articles/dyno-types

  • The “standard” dynos actually run on shared infrastructure; other customers may be on the same hardware/VM as you, which means actual CPU power can be variable. You have to pay a serious premium for exclusive “performance” dynos, which we don’t plan to do.

In addition to sparing you the OS-level management of your dynos, heroku provides additional “plumbing”: for instance, support for different components easily finding each other within your heroku deploy, and load-balancing for a horizontally-scaled web app (you don’t have to, and essentially can’t, run your own load balancer as you would on EC2 directly).

...

  • While heroku gets us out of the systems administration business, developers will still need to spend time on “operations”, especially to plan and implement the migration, but also on an ongoing basis.

  • As our usage changes (in volume or functionality), heroku costs could increase at a steep slope that we might not be prepared for.

  • Heroku plug-ins and SaaS offerings can be usage-metered in ways that running our own services on raw EC2 is not, for instance in terms of number of connections or number of requests per time period. This kind of metering tries to charge you the “right” price for your “size” of use, but our usage patterns could be a poor match for how they price things, leading to unaffordable pricing.

    • The SaaS Solr offerings in particular are fairly expensive, and potentially metered in ways that will be a problem for us. We might end up wanting to run Solr on our own EC2 anyway, meaning we’d still need in-house or out-sourced systems administration competencies to some extent.

  • We might need to rewrite/redesign some parts of our app to work better or more affordably on heroku infrastructure; for instance, our fixity checking. This should be do-able, it will just take some time; we might find more such things during the migration process, and it might involve some additional heroku expenses (e.g. a bigger redis holding fixity checks in the queue).

    • Our ingest process is very CPU-intensive (file analysis, derivative and DZI creation). This may not be a good fit for the shared infrastructure of heroku “standard” dynos. Is it possible heroku will get mad at us for using “too much CPU” for sustained periods? We don’t think so, but we may find ingest slower than we expect, or slower than our current setup.

  • Some of our potential plans for dealing with access-controlled originals and derivatives for OH involved using nginx web server features which would not ordinarily be present in a heroku deploy. You could hypothetically run nginx on a dyno, but it would mean additional cost and setup work.
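    As a rough sketch of what that setup might look like (assuming the community-maintained nginx buildpack would meet our needs; the puma command is a hypothetical stand-in for our actual app process):

    ```shell
    # Add the community nginx buildpack, which runs nginx in front of
    # the app process on the dyno, proxying via a shared socket.
    heroku buildpacks:add heroku-community/nginx
    # The Procfile web entry then wraps the app server, e.g.:
    #   web: bin/start-nginx bundle exec puma -C config/puma.rb
    # The nginx config itself would live in the repo (config/nginx.conf.erb),
    # which is where we'd have to port any custom access-control rules.
    ```
    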

  • We require some custom software for media analysis/conversion (imagemagick, vips, mediainfo, etc). It should be possible to get these installed on heroku dynos using custom “buildpacks”, but if they are maintained by third parties as open source, they may be less reliable, or may require us to get into the systems administration task of “getting packages compiled/installed” after all.
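  For instance, a sketch assuming the community apt buildpack would suffice (the exact Ubuntu package names would need to be verified against what our app actually requires):

  ```shell
  # Add the community apt buildpack before the main language buildpack,
  # so OS packages are installed prior to the app build.
  heroku buildpacks:add --index 1 heroku-community/apt
  # Then list the desired Ubuntu packages in an Aptfile at the repo root:
  #   imagemagick
  #   libvips-tools
  #   mediainfo
  ```

  If a package isn’t available (or is too old) in the stock Ubuntu repos, that’s where we’d be back in the business of compiling/installing software ourselves.
  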

  • We need to make sure our heroku deploy will reliably remain in AWS us-east-1, because if heroku were to move it, it would deleteriously affect our S3 access costs and performance.

...