Create a new user
First, an administrator should log into the jobs server (production or staging, as needed) and run the user-creation rake task, for example:
cd /opt/sufia-project/current/
RAILS_ENV=production bundle exec rake chf:user:create['username@sciencehistory.org']
Then ask the new user to do the following:
- Visit digital.sciencehistory.org/login or http://staging.digital.sciencehistory.org/login
- Enter their email address in the "Email" field.
- Click on the "Forgot your password?" link.
- The user will get an email allowing them to then set their password to whatever they like.
*After new account has been created, file Help Desk ticket to add new user to Hydra User Group email list*
Lock out a user
On the jobs server, run the lock-out rake task, for example:
RAILS_ENV=production bundle exec rake chf:lock_out[eatson@chemheritage.org]
Resque admin panel
If a file doesn't get characterized correctly, the first thing to do is check the Resque admin panel, where you can view failures and restart jobs. If you are logged in as an admin user, the panel is at `digital.chemheritage.org/admin/queues`
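If the web panel is unavailable, the same failure queue can be inspected from a Rails console using Resque's standard API (a sketch; assumes the console has direct access to the app's Redis):

```
> Resque::Failure.count          # how many failed jobs
> Resque::Failure.all(0, 5)      # details of the first five failures
> Resque::Failure.requeue(0)     # retry the first failed job
> Resque::Failure.remove(0)      # drop it from the failure list
```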
What version of Fedora is running?
It's in the footer of Fedora's landing page, for example http://localhost:8080/fedora/
Restart application without restarting apache
This will reload config files.
$ passenger-config restart-app
What version of the app is deployed?
$ cat /opt/sufia-project/current/REVISION
Reindex all of solr:
# ssh to server as deploy user
$ cd /opt/sufia-project/current
$ RAILS_ENV=production bundle exec rake chf:reindex
Or, try using cap remote rake: `cap (production|staging) invoke:rake TASK=chf:reindex`
Note: if reindexing due to a server move, import the Postgres database of users prior to reindexing; otherwise you will need to reindex again once the users have been moved over.
Reindex just the works in solr:
# ssh to jobs server (either jobs-prod or jobs-stage)
$ cd /opt/sufia-project/current
$ RAILS_ENV=production bundle exec rake chf:reindex_works
Or, try using cap remote rake: `cap (production|staging) invoke:rake TASK=chf:reindex_works`
Note: the reindex_works task only works when you already have a complete solr index, unlike the (much slower) full reindex, which can be run starting from an empty index.
Note: make sure to use either "screen" or "nohup", so that if/when you get disconnected from your terminal on the jobs server, the task is still running.
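For example, with screen (the session name `reindex` is arbitrary):

```shell
# start a named screen session on the jobs server
screen -S reindex
cd /opt/sufia-project/current
RAILS_ENV=production bundle exec rake chf:reindex_works
# detach with Ctrl-a d; log back in later and reattach with:
screen -r reindex
```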
Delete all the data
(Don't do this on prod!)
Optional: stop apache or use capistrano's maintenance mode
Shut down tomcat and solr
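A sketch of that step; the exact service names are assumptions and vary by host (httpd vs apache2, tomcat vs tomcat7, etc.):

```shell
sudo service apache2 stop   # optional, if taking the app fully offline
sudo service tomcat7 stop   # check the actual service name on this host
sudo service solr stop
```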
rm -rf /opt/fedora-data/*
rm -rf /opt/solr/collection1/data/* # solr 4
rm -rf /var/solr/data/collection1/data/* # solr 5
If using Sufia 7, also run:
psql -U trilby -d fcrepo -c 'DELETE FROM modeshape_repository'
The temporary testing password for trilby is porkpie2
Delete database stuff (notifications, mostly)
(You'll need the password; it's in the Ansible vault.)
psql -U chf_pg_hydra -d chf_hydra
delete from mailboxer_receipts where created_at < '2015-11-9';
delete from mailboxer_notifications where created_at < '2015-11-9';
delete from mailboxer_conversations where created_at < '2015-11-9';
delete from trophies where created_at < '2015-11-9';
Turn tomcat back on (and apache if needed)
Inspect stuff
Note: when using the rails console to look at actual live production data, it's possible to change and delete things! Please be very careful before submitting commands if you are working with live data. Consider a dry run on the staging server before doing anything on the production box.
$ bundle exec rails c[onsole] production
# Or if you use my dev box, mess around on a development instance with just
$ bundle exec rails c
# Get a count of users
> User.all.count
# List all users (you can also work with users directly in pgsql)
> User.find_each { |u| p u.email }
# Get a count of files
> GenericFile.all.count
# Inspect a file
> f = GenericFile.find(id='3b5918567')
> f.depositor
> f.filename
# Inspect a GenericWork as stored in Fedora
> w = GenericWork.find('some_work_id')
> puts w.resource.dump :ttl
# etc.
State File
The state file (formerly in /tmp) has been moved to /var/sufia. It is currently being backed up nightly. It must be included in any server migrations to avoid generating errors when uploading (a new state file may try to use an already used fedora ID).
Rights statements
It's useful to periodically check that all publicly available works have rights statements. As of Summer 2018 this was in fact true, but if you want to quickly check for the IDs of any public works that still need rights statements, log onto jobs_stage or jobs_production, open a console (see "Inspect stuff" above) and paste the following directly into the console:
GenericWork.search_in_batches('read_access_group_ssim'=>'public') do |group|
group.each do |gw|
if gw["rights_tesim"] == nil || gw["rights_tesim"].count == 0
puts gw["id"]
end #if
end #group.each
end #search_in_batches
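The nil-or-empty check in that loop is easy to get subtly wrong; here is the same predicate extracted as a small helper you can sanity-check outside the console (the method name `missing_rights?` is just for illustration):

```ruby
# True when a Solr document hash has no usable rights statement.
# Mirrors the check in the console loop above, but works on plain hashes.
def missing_rights?(solr_doc)
  rights = solr_doc["rights_tesim"]
  rights.nil? || rights.empty?
end

missing_rights?("id" => "abc123")                       # => true (field absent)
missing_rights?("id" => "def456", "rights_tesim" => []) # => true (empty list)
missing_rights?("id" => "ghi789",
                "rights_tesim" => ["http://rightsstatements.org/vocab/InC/1.0/"]) # => false
```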
Adding and removing items from large collections
This is a known bug as of Summer 2018; see https://github.com/sciencehistory/chf-sufia/issues/1068 for more details. We seldom need this functionality, but if you do, here's how to do it in the Rails console.
Assuming the work you want to add or remove has ID work_id, and the collection you want to add it to or remove it from has ID collection_id:
Removing:
the_collection = Collection.find(collection_id)
the_collection.members.delete(GenericWork.find(work_id))
the_collection.save
Adding:
the_collection = Collection.find(collection_id)
the_collection.members.push(GenericWork.find(work_id))
the_collection.save
For large collections, expect these operations to take five to ten minutes and place considerable load on the server.
Regenerating derivatives on a fileset
Log into the jobs server (prod or staging, depending on the situation.)
$ sudo su
$ cd /opt/sufia-project/current
$ bundle exec rails c production
irb(main) > file_set = FileSet.find("z029p569t")
irb(main) > file_set_id = file_set.files[0].id
irb(main) > CHF::CreateDerivativesOnS3Service.new(file_set, file_set_id).call
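To regenerate derivatives for every file set attached to a work, the same service can be called in a loop (a sketch: `file_sets` comes from Hydra::Works, and the work ID is a placeholder):

```
irb(main) > work = GenericWork.find("some_work_id")
irb(main) > work.file_sets.each do |fs|
irb(main) >   CHF::CreateDerivativesOnS3Service.new(fs, fs.files[0].id).call
irb(main) > end
```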