Create a new user in production
First, an administrator should run the following from a local dev environment:
cd /path/to/my/dev/chf-sufia
bundle exec cap production invoke:rake TASK=chf:user:create['username@sciencehistory.org']
Then, send the new user the following instructions. ("You" refers to the new user of course.)
- Visit digital.sciencehistory.org/login or http://staging.digital.sciencehistory.org/login
- Enter your email address in the "Email" field.
- Click the "Forgot your password?" link.
- You will then get an email allowing you to set your password to whatever you like.
*After the new account has been created, file a Help Desk ticket to add the new user to the Hydra User Group email list.*
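For convenience, the admin invocation above can be generated by a small helper; `create_user_cmd` is a hypothetical name, and it only prints the command (a dry run) so you can review it before running cap:

```shell
# Hypothetical helper: build the cap invocation for creating a user.
# Dry run: prints the command instead of executing it.
create_user_cmd() {
  email="$1"
  printf "bundle exec cap production invoke:rake TASK=chf:user:create['%s']\n" "$email"
}

# Example:
# create_user_cmd alice@sciencehistory.org
```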
Lock out user
Likewise, from a local dev environment, run the following:
bundle exec cap production invoke:rake TASK=chf:lock_out['username@sciencehistory.org']
Resque admin panel
If a file doesn't get characterized correctly, the first thing to do is check the Resque admin panel, where you can view failures and restart jobs. If you are logged in as an admin user, you can view the panel at `digital.chemheritage.org/admin/queues`
What version of Fedora is running?
It's in the footer of the main page, for example http://localhost:8080/fedora/
Restart the application without restarting Apache
This will reload config files.
$ passenger-config restart-app
What version of the app is deployed?
$ cat /opt/sufia-project/current/REVISION
Reindex all of Solr
See the README for scihist_digicoll.
Tips for rebuilding Solr with zero downtime
For scihist_digicoll, we can easily build and swap in a new Solr server, but this results in downtime until the index is rebuilt. Reindexing itself takes only a minute or two, but applying the server change to the jobs and web hosts can take a while, so there may be many minutes during which one host points at the new Solr server and the other does not, and we can't reindex during that window.
To minimize downtime during Solr changes, the preferred method is to take a backup from the old Solr version and restore it on the new Solr server, so that public users always get content. (If the backup will be restored into a different Solr version, test that it is compatible first.)
- Core name: scihist_digicoll
- Backup location: /backups/solr-backup (built by ansible)
- Backup name: anything you like
On the old Solr machine, run:
curl 'http://localhost:8983/solr/CORENAME/replication?command=backup&name=BACKUPNAME&location=/backups/solr-backup'
sudo tar czf /backups/solr-backup/solr-backup.tar.gz /backups/solr-backup/snapshot.BACKUPNAME
Then move or copy the backup tar to the new server via whatever method you prefer.
On the new Solr machine:
- Extract the tar to /backups/solr-backup (or anywhere, as long as the solr user can access it)
- Make sure all the files are owned by the solr user
- Run:
curl 'http://localhost:8983/solr/CORENAME/replication?command=restore&name=BACKUPNAME&location=/PATH'
Now the new machine has a recent backup, and when you update the server IP address users will always get search results. Staff who recently added or edited items may notice those items look off if the changes took place after the backup.
Once the servers are switched, run a reindex to catch any changes made during that time.
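The backup and restore calls above can be sketched as parameterized dry-run helpers. `solr_backup_cmd` and `solr_restore_cmd` are hypothetical names, the core name and location are the values from this page, `mybackup` is just an example backup name, and the functions print the curl commands instead of executing them:

```shell
# Dry-run sketch of the Solr replication calls above.
CORE="scihist_digicoll"
BACKUP="mybackup"          # backup name can be anything you like
DIR="/backups/solr-backup" # backup location built by ansible

solr_backup_cmd() {
  printf "curl 'http://localhost:8983/solr/%s/replication?command=backup&name=%s&location=%s'\n" \
    "$CORE" "$BACKUP" "$DIR"
}

solr_restore_cmd() {
  printf "curl 'http://localhost:8983/solr/%s/replication?command=restore&name=%s&location=%s'\n" \
    "$CORE" "$BACKUP" "$DIR"
}
```

Printing the commands first makes it easy to eyeball the URL (the one place a stray space or typo tends to creep in) before pasting it onto the Solr host.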
Clearing out the tmp directory (removes everything older than 8 days)
This is invoked by a cron job on app prod, but just in case...
find /tmp/* -mtime +8 -delete
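To rehearse what that find invocation does without touching the real /tmp, you can try the same shape of command in a scratch directory (this sketch assumes GNU coreutils, as on the Linux hosts):

```shell
# Rehearse the cleanup in a scratch directory instead of /tmp.
demo=$(mktemp -d)
touch "$demo/fresh.txt"
touch -d '10 days ago' "$demo/stale.txt"   # backdated past the 8-day cutoff

# Same shape as the cron command above, scoped to the scratch dir:
find "$demo"/* -mtime +8 -delete
```

After this runs, stale.txt is gone while fresh.txt survives; `-mtime +8` matches files whose age, in whole 24-hour periods, exceeds 8.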