Monday, February 26, 2018

More Python, CSS, and some Classes

Finally got back to more of a systems focus this week, even though I spent a little time in the classroom.

  1. My newest Python code is pretty cool, as it will now automatically restore and reindex the various collections in our eXist database, and if there are any errors, you see those as well. These are the basic guts of the restore process:

    import subprocess

    import config as cfg

    # restore, backup_date, restore_program, and the report file f are set earlier
    for directory in restore:
        directory_backup = backup_date + "/db/" + directory + "/__contents__.xml"
        arguments = ["-u", cfg.login['user'], "-p", cfg.login['password'], "-r", directory_backup]
        command = [restore_program] + arguments
        try:
            # restore the directory; stdout and stderr both go to the report file
            subprocess.check_call(command, stdout=f, stderr=f)
        except subprocess.CalledProcessError as e:
            # record the failure in the report rather than aborting the whole run
            f.write("Restore of %s failed: %s\n" % (directory, e))

    Basically it writes the standard output and errors to a file, which it later emails upon completion.

  2. I also spent an afternoon last week dropping in the new CSS code for the Alma mash_up that controls the Get It / View It windows in our Primo instance. Thank you to Paul Ojennus of Whitworth University for creating it. Below is what our Get It window now looks like, a serious upgrade from before:

  3. Also got to drop into two classes this week, one a Museum Studies course that might use Omeka, and the other a Civic Communication & Media class where I showed them how to use Zotero.
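Circling back to item 1: the script's emailing step isn't shown above. Here is a minimal Python 3 sketch of how the finished report file could be wrapped in a message with the standard library (the file name, addresses, and SMTP host below are hypothetical, not the script's actual values):

```python
from email.message import EmailMessage

def build_report_email(report_path, sender, recipient):
    """Read the restore report file and wrap it in an email message."""
    with open(report_path) as report:
        body = report.read()
    msg = EmailMessage()
    msg["Subject"] = "eXist restore report: " + report_path
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    return msg

# Sending is then a single call once the restores finish (host is an assumption):
#   import smtplib
#   with smtplib.SMTP("localhost") as smtp:
#       smtp.send_message(build_report_email("2018-02-26.txt",
#                                            "systems@example.edu", "me@example.edu"))
```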

Wednesday, February 21, 2018

Instagram, Zotero Table, Omeka Class Session

A lot going on this week not related directly to library systems work, so my work on systems has slipped a bit. Here are three non-systems highlights from this past week:

  1. Through my work with the Marketing committee, we now have a library Instagram account; we are using the strategy in this article as our approach.
  2. Wednesday of this last week was Valentine's Day, so I decided to use a "Zotero loves students" sign at a table in Goudy. Got a couple of people interested, and now I also plan to do this at least once a month.
  3. Did an introductory Omeka session for a history class who will be working on creating exhibits on the history of Willamette. 

Monday, February 12, 2018

encodeURIComponent, ssh keys, and Gift Books

  1. This first one surprised me, in that no one mentioned it before, but I think that is because most librarians go straight to Advanced Search for their catalog searches.

    If you searched our catalog from the main page for a search string with a "&" in it, it would chop the search string and only pass on the first part of the string.

    I think we were okay with this until we switched to the new UI. To fix it, I just added a JavaScript call to encode the URI components in the query:

    query = encodeURIComponent(query);
  2. SSH keys are going to save my fingers some work. I was able to use the technique described here to create public/private RSA key pairs, so now I can just type ssh bkelm@libtest and I'm directly connected. I then also set up a shortcut in a bookmark to open an SSH connection, ssh://bkelm@libtest-1; now I just choose that bookmark on my Mac and a shell opens.
  3. I have always wanted to at least return a link to the list of gift books by a given donor. I knew I could do this if I could grab the data from the PNX record, and thanks to Corinna Baksik of Harvard University, who shared some examples at ELUNA 2016, I was able to try out the following. First I add a component, bound to prmBriefResultAfter, whose controller has access to the data model of its parent, which is the PNX record. Very cool. The script appends a link under our Gift Book line to look at all gift books from a donor.

    I added the script to our GitHub repository.
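And on the SSH shortcut in item 2: the same alias can also live in ~/.ssh/config, so a plain ssh libtest works from any terminal. A hypothetical entry (the real domain isn't shown in the post, so example.edu stands in):

```
# Hypothetical entry; only "bkelm" and the libtest name come from the post.
Host libtest
    HostName libtest-1.example.edu
    User bkelm
    IdentityFile ~/.ssh/id_rsa
```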
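Back to the chopped-query problem in item 1: the fix itself is the JavaScript one-liner above, but the failure mode is easy to reproduce in Python if you want to see why an unencoded "&" truncates the search (urllib here is just an illustration, not part of the Primo fix):

```python
from urllib.parse import parse_qs, quote

query = "cats & dogs"

# Unencoded, "&" is read as a parameter separator and the query gets chopped:
parse_qs("q=" + query)         # {'q': ['cats ']} -- everything after "&" is lost

# Percent-encoding the value first keeps the full string intact:
parse_qs("q=" + quote(query))  # {'q': ['cats & dogs']}
```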

Monday, February 5, 2018

Python, eXist, Alliance Share the Work

1. Python

    Did a little work in Python this week to set up a script that we can run with one command to restore the various directories in our eXist database. It prompts the user for which backup to use and then iterates through the directories we have indicated need to be restored.

#!/usr/bin/env python

import subprocess

# Directories to restore
directories = ["apps/METSALTO", "bulletincatalogs", "collegian", "commencement",
               "handbooks", "puritan", "scene", "scrapbooks", "wallulah",
               "system/config/db"]

# Prompt the user for the backup directory
backup_date = raw_input('Enter the backup directory: ')
# Note: to run from cron, set the date here and comment out the line above
#backup_date = ""

# Report file that collects the output from every restore
report_date = backup_date + ".txt"
rout = open(report_date, 'w')

# Path to the eXist client (the script name is cut off in the post)
program = "/var/lib/exist/eXist-db/bin/"

for directory in directories:
    directory_backup = "/backup/" + backup_date + "/db/" + directory + "/__contents__.xml"
    arguments = ["-u", "admin", "-p", "XXXXX", "-r", directory_backup]
    command = [program] + arguments

    # Restore each directory and send output (and errors) to the report file
    subprocess.check_call(command, stdout=rout, stderr=rout)

rout.close()

2. eXist

I wrote a simple XQuery that can be run in eXide and that runs through the different eXist collections we want to reindex after the restore command above.

xquery version "3.0" encoding "UTF-8";
declare option exist:serialize "method=xhtml media-type=text/html indent=yes";

let $data-collection := '/db'
let $login := xmldb:login($data-collection, 'admin', 'XXXXXX')
for $directory in ("/bulletincatalogs/fulltext", "/bulletincatalogs/mets",
                   "/collegian/fulltext", "/collegian/mets",
                   "/handbooks/fulltext", "/handbooks/mets",
                   "/puritan/fulltext", "/puritan/mets",
                   "/scene/fulltext", "/scene/mets",
                   "/scrapbooks/fulltext", "/scrapbooks/mets",
                   "/wallulah/fulltext", "/wallulah/mets")
let $start-time := util:system-time()
let $collection := concat($data-collection, $directory)
let $reindex := xmldb:reindex($collection)
let $runtime-ms := ((util:system-time() - $start-time)
                     div xs:dayTimeDuration('PT1S')) * 1000
return
    <p>The index for {$collection} was updated in {$runtime-ms} milliseconds.</p>

So this works just fine, but my supervisor would prefer we not run it through the eXide interface over HTTP. So I finished the week by trying to write an Ant task to accomplish that; I'll share my success or failure with that next week.

3. Alliance Sharing the Work

On Friday morning I had a great call with the DUX leaders, Anne at UW and Molly at PSU, and Cassie from the Alliance offices. In my opinion, our group was doing too much hand-holding for the other libraries with each Primo upgrade. Ex Libris puts out plenty of information for each upgrade, and in the past we had been doing a bunch of customization of that information, which in my opinion just was not necessary. Anyone who cares enough can go through the information from Ex Libris and gather what they need from it.

The folks on the DUX call agreed, and we also agreed to let people add their own issues to a spreadsheet for tracking problems with each upgrade. If an issue is important to you, document it in the spreadsheet; there is no need to send it to me to document, since you can edit the spreadsheet just as easily as I can. If you care about the upgrade, you will do testing and put your results in the Google Docs folder that everyone can edit and read.

I now get to share this information with my group; maybe I should have told them about this first, but I have to think they will be for the change as well. Then I will present the changes on the Alliance Discovery call on the 15th.

Monday, January 29, 2018

eXist backup, Zotero and Apache Satisfy Any

Another fun week of more systems stuff:

  1. Finally got Scott Pike's Zotero to work; it was the Word version the whole time!
  2. Started working on the eXist backup; it took way too many tries to get right, but not all of it was on me. First I got this error message: "System maintenance task reported error: ERROR: failed to create report file in /backup". I figured it was a permissions thing, since eXist now runs as the user exist, so I changed the backup directory to exist:exist, but got the same error the next morning, so it was not just permissions. I then became the user exist, tried to write a file in the directory, and got: "This account is currently not available." Found this, and it turns out the user was created with no login shell: exist:x:56808:56808::/home/exist:/sbin/nologin. Ran sudo chsh -s /bin/bash exist, and then was all set. However, while I kept changing the day of the week on each attempt, I failed to notice we were no longer in the third week of the month: the trigger cron-trigger="0 0 23 ? 1/1 SUN#3" fires on the third Sunday, and it needed to be #4. D'oh! Finally got it on Thursday night.
  3. Apache Satisfy Any - this is another one of those things you can do a number of ways. Satisfy Any was not working when we had both a CAS user requirement and LDAP attribute requirements; whichever Require you put first took over as authoritative. So I decided to just use the ldap-user attribute instead of the CAS user. Mike found another way to do it: setting AuthzLDAPAuthoritative off lets the CAS user element work too. Both ways work.
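For reference, a sketch of the two variants in Apache 2.2-style config (the directory, LDAP URL, IP range, and attribute values are assumptions for illustration, not our actual settings):

```
<Directory /var/www/example>
    AuthType CAS
    AuthName "Library"
    AuthLDAPURL "ldap://ldap.example.edu/ou=people,dc=example,dc=edu?uid"

    # Allow on-campus IPs without login; otherwise require authentication
    Order deny,allow
    Deny from all
    Allow from 10.0.0.0/8
    Satisfy Any

    # Option 1 (what I went with): skip the CAS "Require user" and rely on LDAP only
    Require ldap-attribute eduPersonAffiliation=staff

    # Option 2 (Mike's): keep both Requires, but stop mod_authnz_ldap from
    # authoritatively denying when its own Require lines don't match
    #AuthzLDAPAuthoritative off
    #Require user bkelm
</Directory>
```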

Thursday, January 18, 2018

Lesson Learned, Recommendations, Omeka, Libstats Google It

Pretty busy week, but I think a good one all around as school started back up.

  1. Lesson learned from last week's attempt at multi-tasking while a system component was down. Doreen could not get access to the costumes and images collection, which apparently no one else uses or reports problems with, as Mike had made a change back in October 2017. In the GWT application you set configuration on Tomcat in /var/lib/tomcat/webapps/; there is a setting in there for the SSL secure server. In the end it turned out the request was being made from libtomcat to the campus CAS server as http, not https, and then returned as https, which caused a mismatch in the tickets exchanged. Mike reverted to the pre-October change, made the Tomcat call use http, and then the two tickets matched. We needed Casey to look at the SSL log file to diagnose the problem. However, the important thing here is that I held off on my lunch run until 2:30, when we got the system back up.
  2. Turned on the recommendations in Primo; they're pretty cool, and I think they will get people to resources a bit faster than before.
  3. Omeka made for a trying week, as I learned that before you can invite someone to our shared hosted site, they first need to set up an Omeka trial account. A little hokey to me, but it explains why we do not see usernames in our admin interface like we do in our own installation of Omeka.
  4. Finally worked on a report in libstats, from a request made last October. A couple of times I was tempted to quit and just get the report another way, as I kept getting an error message saying a PHP class did not exist when I was sure it did. About to give up, I decided to Google part of the error message along with "libstats", and I hit the Google Code archive, which described the exact error I was getting. Added their fix, and it worked like a charm. It had to do with the way the file for the report class I was working on was created and not being recognized. Thanks, Google.

Friday, January 12, 2018

Kernel Upgrades, System Down, Communication, EZProxy Upgrade

Well, this was an eventful last three days of the week.

  1. Started with WITS notifying us that 12 of our library servers needed kernel updates due to the Intel chip security issues. Decided to do that early Thursday morning; all went well except for our DSpace server, which had issues with the Tomcat server running on it. After spending all day looking at the issue with WITS, Mike solved it: some Solr statistics existed that should not have been around. Those statistics were generated on the first of the year, so there is no way we could have known about this issue beforehand.

  2. Also learned that if a system component is down, don't try to multi-task and take care of other stuff while it is down; work solely on that problem. Deciding to go to the faculty retreat and just listen while still researching the problem was not acceptable to my supervisor. Okay, lesson learned now. This after I came in at 6 a.m. to do the upgrades and worked the entire day trying to help fix the issue. It is what it is.

  3. JIRA sucks for back-and-forth communication on projects unless both people are actively responding to issues and questions and taking the time to completely read comments. Guess I need to start loving JIRA as a tool, but man, it can get in the way sometimes.

  4. Also conducted a very easy EZProxy upgrade; I just included it to show another piece of software that I get to deal with.