Friday, November 16, 2018

Week 12: Kaltura, ArchivesSpace, and Google Data Studio.

Three items of systems-related success this week:

  1. Kaltura issues: we are using up all the disk space on the server. I decided to try deleting files through Kaltura to get enough room, so that if I have to remove content it also gets updated properly in the database.

    As I might have expected, I removed files and made space one day with no luck; then the next day I moved one of the videos over to WU's hosted server and prepped a page for it on my development server. I remembered that there are some interviews on our pages that I might have to bring over as well, and sure enough, the interview now loads fine, as do the admin pages. So we'll keep things as they are until next week, when we meet with Casey regarding disk space.
  2. ArchivesSpace issues: every once in a while the ArchivesSpace software seems to lose its connection to the MySQL database. For a lot of the content it just relies on the Solr indexes, but the main repositories query (clicking on "Collections") and searching within a specific collection both failed with an error that it was unable to connect to the database. A reboot of the software fixed the problem and restored the collections listing and search (a sketch of automating that check follows this list).
  3. Google Data Studio: last week I got to attend a great conference, the NWACC 2018 Instruction Technology Roundtable (NWACC is the Northwest Academic Computing Consortium). One of the best sessions was a breakout session on technology tools, and one of my top picks, besides Adobe Spark, was Google Data Studio. I have started working on a number of different reports with different data sources, and it looks like I might even be able to sync some of the data up with our Google Analytics, but we'll save that for another blog post. Here is a link to LibGuides usage at Willamette, with the data source being a Google Docs spreadsheet.
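
Here is the sketch of automating the ArchivesSpace check mentioned in item 2. It is only a rough idea: the health-check URL and the restart command are assumptions, not our actual setup.

# Hypothetical watchdog, run from cron: fetch a page that needs the
# database and bounce ArchivesSpace if the request fails.
import subprocess
import urllib.request

CHECK_URL = "http://archivesspace.willamette.edu:8081/repositories"  # stand-in URL

try:
    with urllib.request.urlopen(CHECK_URL, timeout=10) as resp:
        ok = resp.status == 200
except Exception:
    ok = False

if not ok:
    # the same fix as doing it by hand; assumes the init script supports restart
    subprocess.call(["sudo", "/sbin/service", "archivesspace", "restart"])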

Friday, November 2, 2018

Week 10 - Fall 2018 (cron jobs, analytics, and ArchivesSpace OAI)

Three items of systems-related success this week:

  1. Cron jobs are now running to back up the ArchivesSpace database. Using cron and the .my.cnf file, this is now working reasonably well, with a backup every morning at 6:30 a.m.; we are going to keep just the last two created (a sketch of the kind of job cron runs follows this list).
  2. The second thing I worked on was getting the new finding aids from ArchivesSpace piped into our Primo instance. I had to delete all of the old WUARCHIVES scope, and then harvest the new finding aids from the OAI endpoint (port 8082) of our server, which for some reason unknown to me was not open. Once WITS opened it permanently for me (I could open it temporarily by editing the iptables rules), I had to set up a proxy for it, since ExLibris has to do the harvesting from off campus. That also had me create a new normalization rule, similar to the CONTENTdm one with some minor modifications.
  3. Finished off the week with some Alma Analytics, trying to clean up some old messy records on our system and also putting together a demo for the upcoming systems call on analytics.
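
For reference, the kind of script the cron job runs looks roughly like the sketch below. The paths, file naming, and database name are assumptions; mysqldump picks its credentials up from .my.cnf, so none appear in the script, and cron fires it with an entry like 30 6 * * *.

#!/usr/bin/env python
# Sketch of the 6:30 a.m. backup job: dump the ArchivesSpace MySQL
# database and keep only the two newest dumps.
import glob
import os
import subprocess
import time

backup_dir = "/backup/archivesspace"  # stand-in path
dump_file = os.path.join(backup_dir, "aspace-" + time.strftime("%Y%m%d") + ".sql")

# credentials come from ~/.my.cnf, so none are passed here
with open(dump_file, "w") as out:
    subprocess.check_call(["mysqldump", "archivesspace"], stdout=out)

# prune everything but the two newest dumps (the date stamp sorts correctly)
dumps = sorted(glob.glob(os.path.join(backup_dir, "aspace-*.sql")))
for old in dumps[:-2]:
    os.remove(old)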

Friday, October 26, 2018

Week 9 - Fall 2018 (Kaltura, ArchivesSpace, Google Tag Manager)

Here are three systems-related tasks/actions for the ninth week of the fall semester.



  1. This week I was dealing with a Kaltura player and playlist that we wanted to be responsive in its layout. Well, I spent way too much time on this one trying to customize the player and playlist using HTML and the Kaltura API. As it turned out, it was already a responsive player set to a 16x9 ratio. However, its width was actually larger than the div it was in, so it would pooch out the side. Mike shared a way to make a player responsive, but to be honest, I don't think you need that trick anymore. Good thing this one got finished this week.
  2. Did some organizing of our Google Analytics and Tag Manager. It's in much better shape now: I went over with Mike how we had it set up, and he showed me that while we want to capture all pages, with the way the application is set up we also need to capture the History Change events. We verified this is in place and working for the Academic Commons, and we also set it up for the new instance of ArchivesSpace.
  3. Finished the week off with another Kaltura-related challenge: I'm looking at integrating Kaltura into Omeka, and Will at the IU Digital Libraries had written an Omeka plugin for Kaltura. It's not official, but I thought I would give it a try. It basically takes the Kaltura information, uiconfID and partnerID, and then based on your parameters it will play a video. Turns out it's really designed for PHP 7, and the libapps server is just on 5.3. So I'm going to move my development to libtest-1, which is running PHP 7. Something positive from last week's challenge.

Friday, October 12, 2018

Summer Academic Commons Usability...

So I wanted to at least capture what we came up with during our testing of the Academic Commons of Summer 2018.

Here are the test folks and the date and time we had them test the site.

This was the moderator script and these were the notetaker sheets.

We had three folks take the test; the only problem I had with it is that they were all library employees. I'm not sure it affects the outcomes, but to me it just seems like it must.

These were the outcomes of the testing.

Mike does not believe we can do much with these now, so we will just sit on them. If we ever return to looking at using the Academic Commons, we may revisit these test results.

Friday, July 13, 2018

IE and Conditional Styles, eXist indexing...

So two challenges this week:


  1. I had created an Angular JavaScript addition to the catalog that displays the gift book information for a title, so you can click on it and get a listing of all gift books from that donor. Well, I have had this in place for most of the year, and just happened to notice that in IE it was not working: it was placing the words "Gift of" on every title, since the stylesheet declaration to not display it was not working.

    For the longest time, I thought it was strictly a CSS-and-IE issue, but the more I looked into it, the more I saw that my conditional style attribute was not getting populated with anything and was just literally style="$ctrl.show".

    So I started looking for information on CSS, Angular ctrl elements, and IE. Sure enough, I found that you must use ng-style instead of style; otherwise some browsers, like IE, will remove invalid style attribute values (the presence of {} and the like makes them invalid) before Angular even has a chance to render them. When you use ng-style, Angular will calculate the expression and add the inline style attributes for you.

    However, even flipping to ng-style was not enough; I needed to make it ng-attr-style, <span ng-attr-style="$ctrl.show">, and now it works properly in both browsers. I also rewrote what I had as a module, which is better than just a component.
  2. The other challenge has been eXist: I could not figure out why I was not able to get any results from my eXist full-text queries, and I knew there must have been something I was doing differently, as it had worked before.

    Below is an example:

    http://exist.willamette.edu:8080/exist/apps/METSALTO/api/SearchQuery.xquery?q=all^Editorial^phrase&collection=studentpubs&type=search

    This returns a JSON string of matches, but for some reason it would work for any collection except the one I had been working on, studentpubs. I think it came down to which user I was logged in as when I ran the full-text indexing query. Run as administrator, I would never get results; re-run as myself, I get results just fine now.
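
    For reference, hitting that endpoint from Python looks something like the sketch below; the parameters match the URL above, and the shape of the JSON is just whatever SearchQuery.xquery returns.

    import requests

    resp = requests.get(
        "http://exist.willamette.edu:8080/exist/apps/METSALTO/api/SearchQuery.xquery",
        params={"q": "all^Editorial^phrase", "collection": "studentpubs", "type": "search"},
    )
    resp.raise_for_status()
    # an empty result set here was the tip-off that the index was bad
    print(resp.json())
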
I'm just glad I did not have to ponder these issues over the weekend.

Friday, July 6, 2018

ArchivesSpace Proxy...

This task should have been easier than it was. I needed to proxy ArchivesSpace similar to how I proxy Kaltura and Omeka from libmedia.

It is the usual Apache proxy, but for ArchivesSpace there are some config settings in the Ruby application that need to be set:

AppConfig[:frontend_url] = "http://libtest-1.willamette.edu:8080/archives/asadmintest"

AppConfig[:public_url] = "http://libtest-1.willamette.edu:8081/archives/astest"

So in their terms, the staff side is the frontend URL. Once I had these set up, I then set the proxy pass on libmedia:

ProxyPass /archives/asadmin http://archivesspace.willamette.edu:8080/archives/asadmin
ProxyPassReverse /archives/asadmin http://archivesspace.willamette.edu:8080/archives/asadmin

ProxyPass /archives/as http://archivesspace.willamette.edu:8081/archives/as
ProxyPassReverse /archives/as http://archivesspace.willamette.edu:8081/archives/as

I thought that would be it, but there was still a link on the main page of the interface displaying archivesspace.willamette.edu.

So I added this to the config.rb file:

AppConfig[:frontend_proxy_url] = 'https://libmedia.willamette.edu/archives/asadmin'
AppConfig[:public_proxy_url] = 'https://libmedia.willamette.edu/archives/as'

Now my front page link is properly pointing at the proxied address.

Chrysalis, Chat Widget

I worked on a couple of systems projects over the last two weeks.


  1. The Chrysalis
  2. New Primo Chat widget

    This one I had been wanting to get in place for a long time. Since we moved to the new Primo interface last year, we were just using a link in the interface to our chat page, and it was not really a widget. Last Friday someone was nice enough to alert the ExLibris community that NYU Libraries had posted code to GitHub that uses the libraryh3lp service.

    I was able to install it with npm, then I grabbed the code and integrated it as a custom addition to our Primo Explore environment. The chat widget came out well; I imagine we will tweak colors and location as the summer goes on.

Friday, May 11, 2018

Usability Tests, Card Sorting, and Databases

Pretty busy week, even though finals are wrapping up next week at Willamette. We finished the month of April with a week-long sprint for me on usability testing. Sara volunteered WU to work with the Alliance on Digital Objects in Primo, and that ended up being the same week that we were doing the usability testing for the new Academic Commons.

Usability Testing Takeaways:
    1. We need to offer more than a $10 gift card if we want to get folks' time for 20 to 30 minutes.
    2. Fewer tasks in a session are better than more.
    3. Take the best notes you can, but having it videotaped would be so much easier. We have the technology; let's use it.
    4. I hope we can actually test the main website, but I have a feeling we are going to have to wait until the end of the summer.

Card Sorting:


    1. Do not have a card sorting task with 66 cards in it; that's a bit of overkill.
    2. OptimalSort, which the Alliance used and which comes from Optimal Workshop, is pretty slick; I have now signed up for a free account, and we may use it for our next round of testing.
Databases:


    1. I finished going through all of our database entries, so we are now up to date on whether or not each vendor has flipped to SSL. It was also just a good cleanup project, as a number of databases were not functioning. Each time you flip a URL to SSL, you also have to update the EZproxy configuration.

Wednesday, April 11, 2018

Google Feed Burner, SSL certs, and Google Analytics

I believe all three of these qualify as systems-related tasks for the first part of April.


  1. RSS feeds with ExLibris have always been a challenge, be it getting the proper link or the elements we want included in the RSS feed. About a month ago, Blake Galbreath at Washington State University posted a question on one of our Alliance Slack channels to see if anyone had had luck getting an RSS feed from Alma to work in LibGuides. I decided to try it, as I knew we would already be using it if it worked, and sure enough it did not: it gave a 200 error message, and when you use their tool to verify the RSS feed, it does not validate.

    When I searched the Primo listservs, I noticed someone else in the Midwest was bumping into the same problem. I then wondered: if I took the feed and pumped it through another feed tool, say Google FeedBurner, could I then get a validated RSS feed? Boohyah, got it working, wrote it up in Confluence, and Blake and I also shared it at the systems call on 4/11/18.
  2. SSL certificates: nothing ever goes easy for me with SSL certs, but I think it's getting easier. I was working on the Kaltura server, which eventually I should be able to move to Let's Encrypt; that was a pain right now, as they removed the easy way to install on sites that already have SSL. I decided to just place a new CSR request and have Casey submit it to InCommon. I got the certs back and first just copied the new certs into the same spots on the server under the old file names. However, even when I did that, checking the SSL cert still showed the old date, even after a reboot of the system. Mike suggested it could be as simple as the filenames still being the same. So the second time, I copied in the new certificates with their new names, updated the Kaltura SSL config to point to the new certs, and sure enough that fixed it. Nice.
  3. The final highlight came at the start of the week: Archives needed some usage numbers on the PNAA EADs, so I turned to the Google Analytics I have set up, and sure enough, we got some solid usage numbers. I was even able to show one of the archives assistants how to use Google Analytics to get more detail on where our actual use is coming from.

Friday, March 30, 2018

Ides of March: Kaltura upgrade, OAI harvesting, Primo Explore

Three different systems things that I worked on in the first couple of weeks of March:


  1. I upgraded our Kaltura instance. Usually when I do the upgrade, I run the install script and pass it an answer file; this time I decided to create a brand new answer template from scratch, and I got all the question entries correct for the first time. It's like 50 answers, so I was pretty impressed.
  2. I delved back into the world of Primo harvesting, probably at a bad time, as there were a lot of Primo changes going on, but I was able to set up a pipe for the new Academic Commons and also update the normalization rules so the journal articles in DSpace are typed as articles and the LibGuides PNX records as websites.
  3. I was also able to get our GitHub repository of Primo customizations up here on GitHub, and to get the JavaScript working that displays our Gift Book links.

Personal Archiving Workshop, Library Carpentry Workshop, and Git

Not a super systems post, but it has some definite systems elements. Before spring break, I got to do two workshops, one at Willamette and another at Oregon State.


  1. The personal archiving workshop was organized by Sara Amato and presented by Danielle Mericle, and it had some great tips in it. I think the most important was that you need to set aside time to do your personal archiving each week, and not try to bite off too much at the start. I really liked the LibGuide she shared from Cornell.
  2. At the Library Carpentry workshop, we worked on regular expressions and on a tool called OpenRefine. I had recently used OpenRefine for our library breaklist at the end of the fall semester, so it was nice to see some features I had not used before.
  3. Then two thoughts on Git. I had a pull request declined, but it had been waiting a while and I had already done some other work on my repository. I finally got it through my head that I can just cancel the current pull request, push my new changes to my branch, and then make a new pull request. You can probably stack pull requests, but this method seemed to make more sense. Also, I agree that we can keep old code around in a Git or Bitbucket repository, but let's label it as retired or in some way indicate it is no longer in production.

Monday, February 26, 2018

More Python, CSS, and some Classes

Finally got back to more of a systems focus this week, even though I spent a little time in the classroom.

  1. My newest Python code is pretty cool: it will now automatically restore and reindex the various collections in our eXist database, and if there are any errors you see them as well. The basic guts of the restore process are below:

    import subprocess

    import config as cfg

    # restore, backup_date, restore_program, and the report file handle f
    # are set up earlier in the full script
    for directory in restore:

        directory_backup = backup_date + "/db/" + directory + "/__contents__.xml"
        arguments = ["-u", cfg.login['user'], "-p", cfg.login['password'], "-r", directory_backup]
        command = [restore_program]
        command.extend(arguments)
        # restore the directory, logging stdout and stderr to the report file
        try:
            subprocess.check_call(command, stdout=f, stderr=f)
        except subprocess.CalledProcessError:
            pass  # errors are already captured in the report file


    Basically it writes the standard output and errors to a file, which it later emails upon completion (a rough sketch of that emailing step follows this list).

  2. I also spent an afternoon last week dropping in the new CSS code for the Alma mashup that controls the Get It / View It windows in our Primo instance. Thank you to Paul Ojennus of Whitworth University for creating it. Our Get It window is now a serious upgrade from before.


  3. Also got to drop into two classes this week: a Museum Studies course that might use Omeka, and a Civic Communication & Media class where I showed them how to use Zotero.
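
Here is the rough sketch of the emailing step mentioned in item 1. The addresses and SMTP host are placeholders, not our real settings.

import smtplib
from email.mime.text import MIMEText

def mail_report(report_path, smtp_host="localhost"):
    """Email the report file the restore loop wrote."""
    with open(report_path) as f:
        msg = MIMEText(f.read())
    msg["Subject"] = "eXist restore report"
    msg["From"] = "exist@example.edu"  # placeholder addresses
    msg["To"] = "me@example.edu"
    server = smtplib.SMTP(smtp_host)
    server.send_message(msg)
    server.quit()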

Wednesday, February 21, 2018

Instagram, Zotero Table, Omeka Class Session

A lot went on this week not directly related to library systems work, so my work on systems slipped a bit. Here are three non-systems highlights from the past week:


  1. Through my work with the Marketing committee, we now have a library Instagram account; we are using the strategy in this article as our approach.
  2. Wednesday of this last week was Valentine's Day, so I decided to use a "Zotero loves students" sign at a table in Goudy. Got a couple of people interested, and now I plan to do this at least once a month.
  3. Did an introductory Omeka session for a history class who will be working on creating exhibits on the history of Willamette. 

Monday, February 12, 2018

encodeURIComponent, ssh keys, and Gift Books


  1. This first one surprised me in that no one had mentioned it before, but I think that is because most librarians go straight to Advanced Search for their catalog searches.

    If you searched our catalog from the main page with a search string containing an "&", it would chop the search string and only pass on the first part.

    I think we were okay with this until we switched to the new UI. To fix it, I just added a JavaScript call that encodes the URI components in the query:

    query = encodeURIComponent(query);
  2. SSH keys are going to save my fingers some work. I was able to use the technique described at http://www.rebol.com/docs/ssh-auto-login.html to create public/private RSA key pairs, so I can just type ssh bkelm@libtest and I'm directly connected. I then also set up a bookmark shortcut that opens an SSH connection, ssh://bkelm@libtest-1; now I just choose that bookmark on my Mac and I get a shell opened.
  3. I have always wanted to at least return a link to the list of gift books from a given donor. I knew I could do this if I could grab the data from the PNX record, and thanks to Corinna Baksik at Harvard University, who shared some examples at ELUNA 2016, I got to try out the following. First I add a component bound to prmBriefResultAfter, which uses a controller that has access to the data model of its parent, the PNX record. Very cool. The code below appends a link under our Gift Book line to look at all gift books from a donor.

    I added the script to our GitHub repository:

    https://github.com/hatfieldlibrary/primo-new-ui-components/blob/master/custom/js/7_gift_book.js

Monday, February 5, 2018

Python, eXist, Alliance Share the Work


1. Python

    I did a little work in Python this week to set up a script that we can run with one command to restore the various directories in our eXist database. It prompts the user for which backup to use and then iterates through the directories we have indicated we need to restore.

#!/usr/bin/env python

import subprocess

#Directories to restore list

directories = ["apps/METSALTO", "bulletincatalogs", "collegian", "commencement", "handbooks", "puritan", "scene", "scrapbooks", "wallulah", "system/config/db"]

#Prompt user for backup directory

backup_date = raw_input('Enter the backup directory: ')

#Note to run by cron just place the date in the file here and comment line above out

#backup_date = ""

report_date = backup_date + ".txt"

rout = open(report_date, 'w')

program = "/var/lib/exist/eXist-db/bin/backup.sh"

for directory in directories:

    directory_backup = "/backup/" + backup_date + "/db/" + directory + "/__contents__.xml"
    arguments = ["-u", "admin", "-p", "XXXXX", "-r", directory_backup]
    command = [program]
    command.extend(arguments)

    #restore each directory and send output to file
    subprocess.call(command, stdout=rout)

2.  eXist

 I wrote a simple XQuery that can be run in eXide; it runs through the different eXist collections that we want to reindex after the restore command from above.

xquery version "3.0" encoding "UTF-8";
declare option exist:serialize "method=xhtml media-type=text/html indent=yes";
    let $data-collection := '/db'
    let $login := xmldb:login($data-collection, 'admin', 'XXXXXX')
      for $directory in ("/bulletincatalogs/fulltext","/bulletincatalogs/mets","/collegian/fulltext","/collegian/mets","/handbooks/fulltext","/handbooks/mets","/puritan/fulltext","/puritan/mets","/scene/fulltext","/scene/mets","/scrapbooks/fulltext","/scrapbooks/mets","/wallulah/fulltext","/wallulah/mets")
        let $start-time := util:system-time()
        let $collection := concat($data-collection,$directory)
          let $reindex := xmldb:reindex($collection)
            let $runtime-ms := ((util:system-time() - $start-time)
                                 div xs:dayTimeDuration('PT1S'))  * 1000
                return
                  <html>
                      <head>
                         <title>Reindex</title>
                      </head>
                      <body>
                      <h1>Reindex</h1>
                      <p>The index for {$collection} was updated in {$runtime-ms} milliseconds.</p>
                      </body>
                  </html>

So this works just fine, but my supervisor would prefer we did not run this through the eXide interface over HTTP. So I finished the week trying to write an Ant task to accomplish the same thing; I'll share my success or failure with that next week.

3. Alliance Sharing the Work

On Friday morning I had a great call with the DUX leaders, Anne at UW and Molly at PSU, and Cassie from the Alliance offices. In my opinion, our group was doing too much hand-holding for the other libraries with each Primo upgrade. ExLibris puts out plenty of information at each upgrade, and in the past we had been doing a bunch of customization of that information, which in my opinion just was not necessary. If anyone cared enough, they would be able to go through the information from ExLibris and gather what they need from it.

The folks on the DUX call agreed, and we also agreed to let people add their own issues to a spreadsheet for tracking issues with each upgrade. If an issue is important to you, document it in the spreadsheet; there is no need to send it to me to document, since you can edit the spreadsheet just as easily as I can. If you care about the upgrade, you will do testing and put your results and calls in the Google Docs folder that everyone can edit and read.

I now get to share this information with my group; maybe I should have told them about this already, but I have to think they will be for the change as well. Then I will present the changes on the Alliance Discovery call on the 15th.

Monday, January 29, 2018

eXist backup, Zotero and Apache Satisfy Any

Another fun week of more systems stuff:


  1. Finally got Scott Pike's Zotero to work; it was the Word version the whole time!
  2. Started working on the eXist backup. It took way too many tries to get correct, but not all of that was on me. First I got this error message: System maintenance task reported error: ERROR: failed to create report file in /backup. I figured it was a permission thing, since eXist is now running as user exist, so I changed the backup directory to exist:exist, and then the next morning I got the same error again, so it was not just permissions. So I became user exist and tried to just write a file in the directory, and I got: This account is currently not available. I found this, http://heelpbook.altervista.org/2017/this-account-is-currently-not-available-linux/, and it looks like the user was created with no login shell: exist:x:56808:56808::/home/exist:/sbin/nologin. I ran sudo chsh -s /bin/bash exist and then was all set. However, although I kept changing the day of the week each time, I failed to remember that we were no longer in the third week of the month: cron-trigger="0 0 23 ? 1/1 SUN#3" needed to be #4. D'oh! Finally got it on Thursday night.
  3. Apache Satisfy Any: this is another one of those things you can do a number of ways. Satisfy any was not working when we had both the CAS user and LDAP attributes; no matter which Require you put first, it would take over as authoritative. So I decided to just use the ldap-user attribute instead of the CAS one. Mike found another way to do it: if you make sure AuthzLDAPAuthoritative is set to off, the CAS user element works too. Both ways work.

Thursday, January 18, 2018

Lesson Learned, Recommendations, Omeka, Libstats Google It

Pretty busy week, but I think a good one all around as school started back up.


  1. Lesson learned from last week's attempt at multitasking while a system component was down. Doreen could not get access to the costumes and images collection, which evidently no one uses or reports when they can't access it, as Mike had made a change back in October 2017. In the GWT application, you set the configuration on Tomcat in /var/lib/tomcat/webapps/cview-1.4.2.dev/WEB-INF/web.xml; there is a setting in there for the SSL secure server. In the end, it turned out the request was being made from libtomcat to the campus CAS server as http, not SSL, and then returned as https, which was causing a mismatch in the tickets exchanged. Mike reverted to the pre-October setup and made the Tomcat call go over http, and then the two tickets matched. We needed Casey to see the SSL log file to diagnose the problem. The important thing here is that I held off on my lunch run until 2:30, when we got the system back up.
  2. Turned on the recommendations in Primo; they're pretty cool, and I think they will get people to resources a bit faster than before.
  3. Omeka: it was a trying week, as I learned that before you can invite someone to our shared hosted site, they first need to set up an Omeka trial account. A little hokey to me, but it explains why we do not see usernames in our admin interface like we do in our own instance of Omeka.
  4. Finally worked on a report in Libstats, from a request back in October. A couple of times I was tempted to quit and just get the report another way, as I kept getting an error message saying a PHP class did not exist when I was sure it did. About to give up, I decided to Google part of the error message along with "libstats", and I hit the Google Code archive, which described the exact error I was getting. I added their fix, and it worked like a charm. It had to do with the way the file for the report class I was working on was created and not being recognized. Thanks, Google.

Friday, January 12, 2018

Kernel Upgrades, System Down, Communication, EZProxy Upgrade

Well, this was an eventful last three days of the week.

  1. It started with WITS notifying us that 12 of our library servers needed their kernels updated due to the Intel chip security issues. We decided to do that early Thursday morning, and all went well except for our DSpace server, which had issues with the Tomcat server running on it. After spending all day looking at the issue with WITS, Mike solved it: some Solr statistics existed that should not have been around. Those statistics were generated on the first of the year, so there is no way we would have known about this issue.

  2. I also learned that if a system component is down, don't try to multitask and take care of other stuff while it is down; work solely on that problem. My decision to go to the faculty retreat and just listen while still researching the problem was not acceptable to my supervisor. Okay, lesson learned now. This after I came in at 6 a.m. to do the upgrades and worked the entire day trying to help fix the issue. It is what it is.

  3. JIRA sucks for back-and-forth communication on projects unless both people are actively responding to issues and questions and taking the time to completely read comments. I guess I need to start loving JIRA as a tool, but man, it can get in the way sometimes.

  4. Also conducted a very easy EZproxy upgrade; I just include it to show another piece of software I get to deal with.

Tuesday, January 9, 2018

Digital Collection Stats, Zotero Posters, and OpenRefine

Worked on a number of things to start this week:

  1. Gathered our annual digital collection stats, available here. I ran into a new issue with our DSpace Solr statistics having been lost, so I used Google Analytics instead to get the views of items. The new API also has some really long IDs for community IDs.
  2. I decided that I want to start pushing Zotero more on campus, and a quick Google search brought up some great marketing posters made by Kyle Denlinger at Wake Forest. I added a new font, Open Sans Light, to my Windows machine so I could edit them with Adobe Acrobat, and I now have eight new posters in my H: drive in the Projects -> Marketing-Zotero folder. Going to take them to the faculty luncheons this semester.
  3. Sara recommended using OpenRefine to deal with an XML issue I was having creating a CSV file for our student accounts office. I had been using an Excel XML import, but the newer versions no longer support that option. So as long as I have the older Excel I will use that process, and if it becomes unavailable we now also have OpenRefine. It's a pretty awesome tool, and a possible presentation topic on Alma and fines export to a business office.

Wednesday, January 3, 2018

Omeka and Omeka S

Well, I am not as involved as I would like to be with the class that is going to be using Omeka for an upcoming class project.

It will just have to wait till we get to a point where they need some help and instruction.

Mike had me order an Omeka subscription for the University; the only issue is that we probably should have waited to order it until they started using it. Our one-year subscription clock started ticking before Christmas, and I don't see them touching this till at least the start of classes.

But whatever. On a side note, I was able to get Omeka S installed on my local machine with no issues. They (the Omeka folks) are not going to be offering it as a hosted service for quite a long time, so there is no real need to learn it yet.

Fall Breaklist...XML with Python to CSV

Oh, the joys of the breaklist; I did almost all of it by myself this year. Not that bad, but it is still another system you have to know.

Then once the final file is created in Alma, after the charges are removed from patron records, you have to convert the XML file to a CSV file. I used a Windows 7 Excel technique where you map the incoming XML file to specific columns. I think I would like to write a Python script to do this next time; a sketch of what that could look like is below.
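
None of the tag or column names in this sketch come from Alma's actual export; they are placeholders to show the shape of the approach.

import csv
import xml.etree.ElementTree as ET

tree = ET.parse("breaklist_export.xml")  # placeholder file name
root = tree.getroot()

with open("breaklist.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["patron_id", "amount", "title"])  # placeholder columns
    for fine in root.iter("fine"):  # placeholder element name
        writer.writerow([
            fine.findtext("patron_id", ""),
            fine.findtext("amount", ""),
            fine.findtext("title", ""),
        ])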

Christmas Break and Primo Upgrade

I was hoping to take most of the break off, but I knew we had the November release being installed on Primo.

It had some issues again, and I spent most of Christmas Eve going back and forth with ExLibris support staff. This was the final explanation of why our search results were so wacky:

Hi,

To clarify the delay in the full resolution of the issue was an error in the WOU_TEST view, where there was a tab with no scope defined (probably deleted).

Resolving the tab/scope issue of WOU_TEST view allowed the Deploy to finish, and resolve the issue. The Primo on-call analyst found this, but it took some time.

Apologies for the delays.

So I'm thinking: no way can some mistake accidentally made at Western Oregon University actually break our system alliance-wide, right?

Alon was nice enough to write back:

Case Title: No Summit or Local Results...

Last comment:
Hi Bill,

WOU_TEST, from the logs:
2017-12-24 17:20:32.917 - Preparing ui components information for view :WOU_Test; 63 out of 151
2017-12-24 17:20:32.932 - Cannot deploy Views to Front-End: java.lang.NullPointerException

So wow, that made for a fun Christmas.

Speaking of Primo: over break I also had to flip it to start using the logo upload option instead of embedding the element into the page with JavaScript. For some reason, in Safari our logo was getting stretched across the screen.

ArchivesSpace Upgraded / ILLiad HTTPS

I upgraded ArchivesSpace to version 2.2.0, following all of the proper steps.

I also figured out why ILLiad was not going to https: we had a web config file, set up in the inetpub directory, forwarding to http and not https.

ArchivesSpace Properly Configured

We now have ArchivesSpace set up to run as user aspace, starting and stopping as a Unix daemon.


  1. Created a symbolic link: /usr/local/archivesspace -> /usr/local/archivesspace-2.1.2/archivesspace/
  2. Added an init.d script: ln -s /usr/local/archivesspace/archivesspace.sh archivesspace
  3. Added it as a service: sudo chkconfig --add archivesspace
  4. It now starts as a service, sudo /sbin/service archivesspace start, and should come up on a reboot.


Archivist Toolkit Migration to ArchivesSpace

We are looking at migrating to ArchivesSpace, a JRuby application. Very simple install, described well here: http://archivesspace.github.io/archivesspace/
                  We are looking at migrating to ArchivesSpace a jruby application. Very simple install, described well here: http://archivesspace.github.io/archivesspace/