...

  • Use heroku's experimental log-runtime-metrics add-on (https://devcenter.heroku.com/articles/log-runtime-metrics) to get more precise logging of our RAM use over time as we trigger actions (example commands for enabling it are after this list).

  • Try passenger on heroku instead of puma, to compare apples to apples (a sketch of the Procfile change is after this list)

  • Try the heroku buildpack for jemalloc and running ruby with it, which some people say makes ruby use RAM more efficiently. (We didn't do that on our EC2, though.) https://elements.heroku.com/buildpacks/gaffneyc/heroku-buildpack-jemalloc (example setup commands are after this list)

  • Try a heroku standard-0 postgres and a standard-1x dyno, so we're testing on the actual resources we will be using, in case the 'hobby' tiers we are testing on have different performance characteristics (provisioning commands are after this list)

    • A standard-1x dyno can easily be turned on and off temporarily, but the db will probably stay there at $50/month

  • Actually analyze our app and try to optimize its RAM usage and performance

    • Make the fixity report run on a cron job and serve stored results, instead of running when you click on it (a rough Ruby sketch is after this list)

    • Make many-child pages use the "infinite scroll" technique, loading only the first X children and loading more as you scroll down, instead of trying to load them all at once

    • More efficient production of each child page element on work pages (hard-code URLs etc)

    • Use the derailed_benchmarks gem to figure out which parts are using so much RAM and fix them (example commands are after this list): https://github.com/schneems/derailed_benchmarks

  • While we can probably optimize our app, the fact that we weren't forced to on our manual EC2 but will be on heroku worries us: are we raising the skill level and time needed to maintain a working app on heroku? (We actually already HAVE spent time optimizing the app, but apparently it's not yet good enough for heroku?)
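
For the log-runtime-metrics idea above, enabling it is a heroku labs feature; something like the following (the app name is a placeholder):

    heroku labs:enable log-runtime-metrics -a our-app-name
    heroku restart -a our-app-name
    heroku logs --tail -a our-app-name | grep sample#memory

Each dyno then periodically logs lines roughly like sample#memory_total=316.00MB sample#memory_rss=312.00MB sample#memory_swap=0.00MB, which is the kind of reading used in the section below.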
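
For trying passenger instead of puma, Phusion's heroku instructions make it mostly a Gemfile plus Procfile change; a sketch (the pool size is just an example value):

    # Gemfile
    gem "passenger"

    # Procfile
    web: bundle exec passenger start -p $PORT --max-pool-size 3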
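
For the jemalloc buildpack, the setup we'd expect (based on the buildpack's README) is to add it ahead of the ruby buildpack and switch it on with a config var; it appears to preload jemalloc rather than recompile ruby:

    heroku buildpacks:add --index 1 https://github.com/gaffneyc/heroku-buildpack-jemalloc.git -a our-app-name
    heroku config:set JEMALLOC_ENABLED=true -a our-app-name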
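
For the standard-tier test, provisioning and resizing should be along these lines, and the dyno can be sized back down when we're done:

    heroku addons:create heroku-postgresql:standard-0 -a our-app-name
    heroku ps:type web=standard-1x -a our-app-name
    heroku ps:type web=hobby -a our-app-name   # back down after testing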
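
For the fixity-report-on-cron idea, a minimal Ruby sketch of the shape we have in mind; FixityReport and its generate method are placeholders, not necessarily what our app actually calls these:

    # lib/tasks/fixity_report.rake (sketch)
    namespace :scihist do
      desc "Pre-compute the fixity report and cache it for the admin page"
      task store_fixity_report: :environment do
        # Placeholder for however the report is actually built today:
        report = FixityReport.generate
        # Store it so the admin page can just read the cached result:
        Rails.cache.write("fixity_report", report, expires_in: 25.hours)
      end
    end

The admin page would read Rails.cache.read("fixity_report") instead of computing on click, and Heroku Scheduler (or cron on EC2) could run the task daily.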
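
For derailed, the two commands we'd probably start with (from the gem's README) are bundle:mem, which shows memory used at require time by each gem, and perf:mem_over_time, which hits the booted app repeatedly while tracking RSS, which is where a leak would show up:

    bundle exec derailed bundle:mem
    bundle exec derailed exec perf:mem_over_time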

RAM measurement investigations

Using heroku log-runtime-metrics, we confirmed that our 1-worker-with-two-threads puma instance starts at 316MB (a sketch of that puma config follows the list below).

  • After just accessing home page, it’s up to 346.74MB

  • Accessing 115-child work ysnh5if, it’s up to 375MB, a few more times 386MB, then 392MB

  • Accessing ramelli it’s up to 444MB, a couple more times 493MB, then 511MB!!!
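
For reference, "1 worker with two threads" corresponds to a config/puma.rb along these lines (a sketch; the env var names and defaults are illustrative, not necessarily our exact file):

    # config/puma.rb (sketch)
    workers Integer(ENV.fetch("WEB_CONCURRENCY", 1))       # one worker process
    max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 2))
    threads max_threads, max_threads                        # two threads per worker
    preload_app!
    port ENV.fetch("PORT", 3000)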

We may have a memory leak or bad memory behavior – but why isn't it affecting us with passenger on our manual EC2s?

Wait, it may be bad on passenger too! And yet it works on our EC2…

To measure on passenger, ssh as ubuntu to the staging web server, then:

  • run sudo passenger-memory-stats

  • run sudo PASSENGER_INSTANCE_REGISTRY_DIR=/opt/scihist_digicoll/shared passenger-status

passenger-memory-stats on web is showing instance VMSize from 536MB to 738MB. Has something happened to raise our memory usage since we last looked? And why isn't this machine swapping horribly? But it also reports Total private dirty RSS: 463.93 MB, so maybe the "Private" value matters more than the "VMSize" value… but not on heroku, which measures actual VMSize? (passenger-status shows only 200MB and down; the two tools show different things, and neither may be exactly what heroku measures, but either way things are working okay on our raw EC2.)

Wed Oct 7

We have a semi-functional app deployed to heroku – no Solr (so no searching), no background jobs, lots of edge case issues. But something to look at.

...