Monday, January 29, 2018

eXist backup, Zotero and Apache Satisfy Any

Another fun week of more systems stuff:


  1. Finally got Scott Pike's Zotero to work; it was the Word version causing the problem the whole time!
  2. Started working on the eXist backup; it took way too many tries to get correct, but not all of it was on me. First I got this error message: System maintenance task reported error: ERROR: failed to create report file in /backup. I figured it was a permissions thing since eXist is now running as user exist, so I changed the backup directory's ownership to exist:exist, and then got the same thing the next morning: System maintenance task reported error: ERROR: failed to create report file in /backup. So not just permissions. I became user exist and tried to just write a file in the directory, and I got: This account is currently not available. Found this, http://heelpbook.altervista.org/2017/this-account-is-currently-not-available-linux/, and it turns out the user was created with no login shell: exist:x:56808:56808::/home/exist:/sbin/nologin. Ran sudo chsh -s /bin/bash exist and then was all set. However, while I kept changing the day of the week on each attempt, I failed to notice that we were no longer in the third week of the month: cron-trigger="0 0 23 ? 1/1 SUN#3" needed to be #4. D'oh! Finally got it working Thursday night.
  3. Apache Satisfy Any - this is another one of those things you can do a number of ways. Satisfy Any was not working when we had both a CAS user and LDAP attributes; no matter which Require you put first, it would take over as authoritative. So I decided to just use the ldap-user attribute instead of the CAS one. Mike found another way to do it: if you make sure AuthzLDAPAuthoritative is set to off, the CAS user element works too. Both ways work.
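For reference, the backup schedule from item 2 lives in eXist's conf.xml scheduler section. This is only a sketch of what the corrected entry might look like; the job name, task class, and parameters are typical eXist settings and are assumptions here, with only the cron-trigger value taken from the post. In Quartz cron syntax, SUN#3 means the third Sunday of the month and SUN#4 the fourth:

```xml
<!-- Sketch only: task class and parameters are assumed, not confirmed -->
<job type="system" name="backup-check"
     class="org.exist.storage.ConsistencyCheckTask"
     cron-trigger="0 0 23 ? 1/1 SUN#4">
    <parameter name="output" value="/backup"/>
    <parameter name="backup" value="yes"/>
</job>
```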
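Mike's approach from item 3 might look roughly like this in Apache 2.2-style config. The location, LDAP attribute, and username are hypothetical placeholders; only the directives named in the post (Satisfy Any, AuthzLDAPAuthoritative, the LDAP-attribute and CAS-user requirements) come from it:

```apache
# Hypothetical sketch: combining CAS and LDAP requirements.
<Location "/protected">
    AuthType CAS
    # Without this, a failed LDAP Require rejects the request outright
    # before the CAS user rule is ever consulted
    AuthzLDAPAuthoritative off
    Require ldap-attribute memberOf="cn=library-staff,ou=groups,dc=example,dc=edu"
    Require user jdoe
    Satisfy Any
</Location>
```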
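A quick way to confirm the no-login problem from item 2 is to check the shell field of the account's /etc/passwd entry. This sketch just parses the entry quoted above rather than touching the real file:

```shell
# The service account's passwd entry, as quoted in the post:
entry='exist:x:56808:56808::/home/exist:/sbin/nologin'

# The seventh colon-separated field is the login shell; /sbin/nologin is
# what produces the "This account is currently not available" message.
shell=$(echo "$entry" | cut -d: -f7)
echo "$shell"

# The fix from the post: give the account a real shell (requires root)
# sudo chsh -s /bin/bash exist
```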

Thursday, January 18, 2018

Lesson Learned, Recommendations, Omeka, Libstats Google It

Pretty busy week, but I think a good one all around as school started back up.


  1. Lesson learned from last week's attempt at multi-tasking while a system component was down. Doreen could not get access to the costumes and images collection, which evidently no one uses or reports they can't access, as Mike had made a change back in October 2017 and nobody noticed until now. In the GWT application you set configuration on Tomcat in /var/lib/tomcat/webapps/cview-1.4.2.dev/WEB-INF/web.xml; there is a setting in there for the SSL secure server. In the end it turned out the request to the campus CAS server was being made from Tomcat as http, not https, but returned as https, and that caused a mismatch in the tickets exchanged. Mike reverted to the pre-October configuration and made the Tomcat call go to http, and then the two tickets matched. We needed Casey to see the SSL log file to diagnose the problem. The important thing here is I held off on my lunch run until 2:30, when we got the system back up.
  2. Turned on the recommendations in Primo, they're pretty cool and I think will get people to resources a bit faster than before.
  3. Omeka made for a trying week, as I learned that before you can invite someone to our shared hosted site, they first need to set up an Omeka trial account. A little hokey to me, but it explains why we do not see usernames in our admin interface like we do in our own installation of Omeka.
  4. Finally worked on a report in libstats from a request last October. A couple of times I was tempted to quit and just get the report another way, as I kept getting an error message, but I was sure the PHP class it claimed did not exist actually did. About to give up, I decided to Google part of the error message along with "libstats", and I hit the Google Code archive, which described the exact error I was getting. Added their fix, and it worked like a charm. It had to do with the way the file for the report class I was working on was created and not being recognized. Thanks, Google.
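The ticket mismatch in item 1 comes down to the scheme of the service URL the CAS filter reports about itself. Assuming a stock Java CAS client setup (the hostnames below are placeholders, and this fragment is a sketch, not the actual cview web.xml), the relevant piece looks something like this:

```xml
<!-- Hypothetical sketch of a Java CAS client filter in WEB-INF/web.xml.
     The serverName value sets the scheme/host of the service URL sent to
     CAS; if it says http here but the ticket is issued against https (or
     vice versa), ticket validation fails with a mismatch. -->
<filter>
    <filter-name>CAS Authentication Filter</filter-name>
    <filter-class>org.jasig.cas.client.authentication.AuthenticationFilter</filter-class>
    <init-param>
        <param-name>casServerLoginUrl</param-name>
        <param-value>https://cas.example.edu/cas/login</param-value>
    </init-param>
    <init-param>
        <param-name>serverName</param-name>
        <param-value>http://cview.example.edu</param-value>
    </init-param>
</filter>
```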

Friday, January 12, 2018

Kernel Upgrades, System Down, Communication, EZProxy Upgrade

Well, this was an eventful last three days of the week.

1. Started with WITS notifying us that 12 of our library servers needed their kernels updated due to the Intel chip security issues. Decided to do that early Thursday morning; all went well except for our DSpace server, which had issues with the Tomcat server running on it. After spending all day looking at the issue with WITS, Mike solved it: some SOLR statistics existed that should not have been around. Those statistics were generated on the first of the year, so there is no way we could have known about this issue in advance.

2. Also learned that if a system component is down, don't try to multi-task and take care of other stuff while it is down; work solely on that problem. My decision to go to the faculty retreat and just listen while still researching the problem was not acceptable to my supervisor. Okay, lesson learned now. This after I came in at 6 a.m. to do the upgrades and worked the entire day trying to help fix the issue. It is what it is.

3. JIRA sucks for back-and-forth communication on projects unless both people are actively responding to issues and questions and taking the time to completely read comments. Guess I need to start loving JIRA as a tool, but man, it can get in the way sometimes.

4. Also conducted a very easy EZProxy upgrade; I just include it here to show another piece of software I get to deal with.

Tuesday, January 9, 2018

Digital Collection Stats, Zotero Posters, and OpenRefine

Worked on a number of things to start this week:

  1. Gathered our annual digital collection stats, available here. Ran into a new issue with our DSpace SOLR statistics having been lost, so I used Google Analytics instead to get the item views. The new API also uses some really long IDs for community IDs.
  2. I decided that I want to start pushing Zotero more on campus, and a quick Google search brought up some great marketing posters made by Kyle Denlinger at Wake Forest. I added a new font, Open Sans Light, to my Windows machine to be able to edit them with Adobe Acrobat, and I now have eight new posters in my H: drive in the Projects-> Marketing-Zotero folder. Going to take them to the faculty luncheons this semester.
  3. Sara recommended using OpenRefine to deal with an XML issue I was having while creating a CSV file for our student accounts office. I had been using an Excel XML import, but the newer versions no longer support that option. So as long as I have the older Excel I will use that process; if it becomes unavailable, we now also have OpenRefine. It's a pretty awesome tool, and could be a possible presentation topic for Alma and fine export to a business office.

Wednesday, January 3, 2018

Omeka and Omeka-S

Well, I'm not as involved as I would like to be with the class that is going to use Omeka for an upcoming class project.

It will just have to wait till we get to a point that they need some help and instruction.

Mike had me order a University subscription for Omeka; the only issue was we probably should have waited to order it until they started using it. Our year-long subscription clock started ticking before Christmas, and I don't see them touching this till at least the start of classes.

But whatever. On a side note, I was able to get Omeka-S installed on my local machine with no issues. They (the Omeka folks) are not going to be offering it as a hosted service for quite a long time, so no real need to learn it yet.

Fall Breaklist...XML with Python to CSV

Oh, the joys of the Breaklist; I did almost all of it by myself this year. Not that bad, but it's still another system to have to know.

Then once the final file is created in Alma after the charges are removed from patron records, you have to convert the XML file to a CSV file. I used a Windows 7 Excel technique where you map the incoming XML file to specific columns. I think I would like to write a Python script to do this next time.
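A Python version of that XML-to-CSV mapping could be quite short. This is only a sketch: the `<record>`, `patron_id`, and `amount` element names are made-up stand-ins, since the actual Alma export layout isn't shown here.

```python
import csv
import io
import xml.etree.ElementTree as ET

def xml_to_csv(xml_text, fields, out):
    """Write one CSV row per <record> element, one column per field name."""
    root = ET.fromstring(xml_text)
    writer = csv.writer(out)
    writer.writerow(fields)  # header row
    for record in root.iter("record"):
        # Missing child elements become empty cells
        writer.writerow([record.findtext(f, default="") for f in fields])

# Tiny worked example with made-up patron-fine data
sample = """<export>
  <record><patron_id>12345</patron_id><amount>7.50</amount></record>
  <record><patron_id>67890</patron_id><amount>2.00</amount></record>
</export>"""
buf = io.StringIO()
xml_to_csv(sample, ["patron_id", "amount"], buf)
print(buf.getvalue())
```

For a real file you would swap the string parsing for `ET.parse(path)` and write to an open CSV file instead of a StringIO buffer.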

Christmas Break and Primo Upgrade

I was hoping to take most of the break off, but I knew we had the November Release being installed on Primo.

It had some issues again, and I spent most of Christmas Eve going back and forth with Ex Libris support staff. This was the final explanation of why our search results were so wacky:

Hi,

To clarify the delay in the full resolution of the issue was an error in the WOU_TEST view, where there was a tab with no scope defined (probably deleted).

Resolving the tab/scope issue of WOU_TEST view allowed the Deploy to finish, and resolve the issue. The Primo on-call analyst found this, but it took some time.

Apologies for the delays.

So I'm thinking: no way can some mistake accidentally made at Western Oregon University actually break our system alliance-wide, right?

Alon was nice enough to write back:

Case Title: No Summit or Local Results...

Last comment:
Hi Bill,

WOU_TEST, from the logs:
2017-12-24 17:20:32.917 - Preparing ui components information for view :WOU_Test; 63 out of 151
2017-12-24 17:20:32.932 - Cannot deploy Views to Front-End: java.lang.NullPointerException

So wow, that made for a fun Christmas.

Speaking of Primo, over break I also had to flip it to start using the logo upload option instead of embedding the element into the page with JavaScript. For some reason in Safari our logo was getting stretched across the screen.

ArchivesSpace Upgraded / ILLiad HTTPS

I upgraded ArchivesSpace to version 2.2.0, following all of the proper steps.

I also figured out why ILLiad was not going to https. We had a web config file set up in the inetpub directory forwarding to http and not https.
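Since ILLiad sits on IIS, the forwarding behavior lives in a web.config under inetpub. A corrected version forcing https might look roughly like this, assuming the IIS URL Rewrite module is installed; this is a generic sketch, not the actual file:

```xml
<!-- Hypothetical sketch: redirect any plain-http request to https -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Redirect to HTTPS" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
                  redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```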

ArchivesSpace Properly Configured

We now have ArchivesSpace set up to run as user aspace, starting and stopping as a Unix daemon.


  1. Created a symbolic link: /usr/local/archivesspace -> /usr/local/archivesspace-2.1.2/archivesspace/
  2. Added an init.d script: ln -s /usr/local/archivesspace/archivesspace.sh archivesspace
  3. Added it as a service: sudo chkconfig --add archivesspace
  4. It now starts as a service with sudo /sbin/service archivesspace start and should come up on a reboot


Archivist Toolkit Migration to ArchivesSpace

We are looking at migrating from Archivist Toolkit to ArchivesSpace, a JRuby application. Very simple install, described well here: http://archivesspace.github.io/archivesspace/