Create a new user

Run a rake task, e.g.:

RAILS_ENV=production bundle exec rake chf:user:create['eatson@chemheritage.org']
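If you'd rather not ssh in, the same task can likely be run through the cap remote rake pattern used for reindexing below. This is a sketch only: the quoting of the bracketed argument is an assumption and may need adjusting for your shell, and the email address is just an example.

cap production invoke:rake TASK="chf:user:create['someone@chemheritage.org']"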

Resque admin panel

If a file doesn't get characterized correctly, the first thing to do is check the Resque admin panel. There you can view failures and restart jobs. If you see a clear stack trace pointing to a bug, restarting may not help, but if you see a vague error about a "deadlock", it might. Restart the job and then check the site to see whether the derivatives came through.

  1. ssh to the machine
  2. edit /opt/sufia-project/current/config/initializers/resque_admin.rb
  3. Add 'return true' before the end of the method
  4. Save the file and restart the application
  5. Navigate in the browser to, e.g., hydra.chemheritage.org/admin/queues
  6. Make sure to change the initializer back when you are done

This process will improve once we have admin users in place; you'll just be able to log in and navigate straight to this page.
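Roughly, the sequence above as shell commands. The host name, path, and deploy user are taken from elsewhere on this page; the editor is just an example:

$ ssh deploy@hydra.chemheritage.org
$ vi /opt/sufia-project/current/config/initializers/resque_admin.rb   # add 'return true' before the end of the method
$ passenger-config restart-app
# browse to hydra.chemheritage.org/admin/queues, then revert the edit and restart again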

What version of Fedora is running?

It's shown in the footer of the Fedora landing page, for example http://localhost:8080/fedora/.
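To check from the command line instead, a quick sketch (this assumes the footer text contains the word "version", which may vary between Fedora releases):

$ curl -s http://localhost:8080/fedora/ | grep -i version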

Restart the application without restarting Apache

This will reload config files.

$ passenger-config restart-app

What version of the app is deployed?

$ cat /opt/sufia-project/current/REVISION 

Run the metadata report

# ssh to the prod machine as deploy user
$ cd /opt/sufia-project/current
$ bundle exec rake chf:metadata_report RAILS_ENV=production

Reindex all of Solr

# ssh to server as deploy user
$ cd /opt/sufia-project/current
$ RAILS_ENV=production bundle exec rake chf:reindex

Or, try using cap remote rake:  `cap (production|staging) invoke:rake TASK=chf:reindex`

Note: If reindexing due to a server move, import the Postgres database of users prior to reindexing; otherwise you will need to reindex again once the users have been moved over.
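A sketch of moving the user database over before reindexing, using the database and role names that appear later on this page (the dump file name is a placeholder, and you'll be prompted for the password from the Ansible vault):

# on the old server
$ pg_dump -U chf_pg_hydra chf_hydra > chf_hydra.sql
# copy the dump to the new server, then load it there
$ psql -U chf_pg_hydra -d chf_hydra -f chf_hydra.sql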


Delete all the data

(Don't do this on prod!)

Optional: stop Apache or use Capistrano's maintenance mode

Shut down Tomcat and Solr
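As a sketch, on a systemd host that might look like the following; the actual unit names (and whether you use systemctl or service) depend on how the box is provisioned:

$ sudo systemctl stop httpd    # optional, or use Capistrano's maintenance mode instead
$ sudo systemctl stop tomcat
$ sudo systemctl stop solr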

rm -rf /opt/fedora-data/*
rm -rf /opt/solr/collection1/data/* # solr 4
rm -rf /var/solr/data/collection1/data/* # solr 5

If using Sufia 7, also run:

psql -U trilby -d fcrepo -c 'DELETE FROM modeshape_repository'

The temporary testing password for trilby is porkpie2

Delete database stuff (notifications, mostly)

(You'll need the password; it's in the Ansible vault.)

psql -U chf_pg_hydra -d chf_hydra
delete from mailboxer_receipts where created_at < '2015-11-9';
delete from mailboxer_notifications where created_at < '2015-11-9';
delete from mailboxer_conversations where created_at < '2015-11-9';
delete from trophies where created_at < '2015-11-9';

Turn Tomcat back on (and Apache if needed)
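Again as a sketch, with the same assumed service names as above:

$ sudo systemctl start solr
$ sudo systemctl start tomcat
$ sudo systemctl start httpd   # only if you stopped it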

Inspect stuff

Note: when using the Rails console to look at actual live production data, it's possible to change and delete things! Please be very careful before submitting commands if you are working with live data. Consider a dry run on the staging server before doing anything on the production box.

$ bundle exec rails c[onsole] production
# Or if you use my dev box, mess around on a development instance with just: $ bundle exec rails c
# Get a count of users
> User.all.count
# List all users (you can also work with users directly in pgsql)
> User.find_each { |u| p u.email } 
# Get a count of files
> GenericFile.all.count
# Inspect a file
> f = GenericFile.find('3b5918567')
> f.depositor
> f.filename
# etc.


State File

The state file (formerly in /tmp) has been moved to /var/sufia. It is currently backed up nightly. It must be included in any server migration to avoid generating errors when uploading (a new state file may try to use an already-used Fedora ID).
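When migrating servers, something along these lines copies it over (the path is from this page; the destination host is a placeholder):

$ rsync -av /var/sufia/ deploy@new-server:/var/sufia/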
