CMERIG 2016 Midwinter Session Report

The Collection Management and E-Resource Interest Group held a 1-hour session at the 2016 ALA Midwinter Conference in Boston on Sunday, January 10, from 3-4 pm. Chair Jennifer Bazeley (Interim Head, Technical Services at Miami University Libraries) and Vice-Chair Sunshine Carter (Electronic Resources Librarian at the University of Minnesota Libraries) co-moderated the session. The session topic was troubleshooting workflows for e-resources, and approximately 60 people attended. An announcement was made soliciting volunteers for the CMERIG vice-chair position, a term that will begin after ALA Annual 2016 in Orlando.

The session format included a short 15-minute presentation by the vice-chair of CMERIG, Sunshine Carter, followed by a 30-minute facilitated discussion in which attendees divided into three groups, each taking one of three discussion questions. In the final 15 minutes of the session, each of the three groups reported back to the entire room to share discussion points and solutions.

The 15-minute introductory presentation by Sunshine Carter looked at existing troubleshooting workflows in libraries, both at the University of Minnesota Libraries and in the library literature. The slides for this presentation can be viewed at https://docs.google.com/a/miamioh.edu/presentation/d/1cKV2DM_tVJEV0oABWo2ycuTuQrepGRPVBGK5-yQuqNA/edit?usp=sharing.

The attendees then divided into three discussion groups to focus on three questions posed by the CMERIG co-chairs. Each group was assigned a facilitator who led the discussion and took notes.

Question 1

Proactive troubleshooting (facilitator: Jennifer Bazeley)

Reported, unreported and undiscovered e-access issues exist. Broken access is a waste of acquisitions dollars. How is your institution proactively checking access? What steps can your institution take to increase proactive troubleshooting?

Participants in this group generally agreed that proactively checking access is difficult, even at institutions with large staffs, due to the sheer volume of e-resources. Participants noted that checking access often comprises several checks: that an e-resource link goes to a valid page, that the institution actually has access to the material at the link, and that the coverage dates for continuing resources actually reflect what can be accessed.

Solutions for checking access included a wide variety of strategies. Many institutions reported checking access for discrete packages at e-resource transition points such as annual renewal season or publisher platform changes (e.g., when the University of Chicago Press journals changed platforms from JSTOR to Atypon in winter 2015, most institutions checked access for the titles in that package). Schools with student workers reported using them to check access in an annual or semi-annual process, or on a small project basis. One school reported using a random number generator to select e-resources to spot-check for approximately one hour per week. A few schools had employed open source or commercial link checker products to locate links that were completely broken, especially for open access materials. The main caveat with these tools is that if they hit a commercial site numerous times to check access, the publisher may mistake those hits for systematic downloading and shut down access. All participants agreed that while automated solutions are helpful, proactively checking access almost always requires human intervention.

Many schools also reported checking access at the point of purchase and at the point of cataloging. Some participants found tools like the Project Transfer database (http://www.niso.org/workrooms/transfer/), which can be used to track which journals are changing platforms at any given time, helpful in tracking access issues. For libraries that use LibGuides, it was noted that the link checking capabilities of version 2 of the software are quite powerful and an improvement over the same feature in version 1. One librarian at the University of Kentucky had written a program to check access to electronic books at her institution. This solution generated a lot of interest among participants; the librarian who wrote it noted that the only drawback was that the program had to be customized for each individual vendor, due to the disparity in how e-book vendors code their e-book platforms.
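
To illustrate the kind of automated checking described above, here is a minimal, hypothetical sketch of a link checker that flags completely broken links. The file names, delay, and timeout values are assumptions for the example; it uses the Python requests library and deliberately pauses between requests so the traffic is not mistaken for systematic downloading.

    import csv
    import time
    import requests

    # Sketch: flag e-resource links that return errors or fail to connect.
    # Reads one URL per line from urls.txt and writes link_check_report.csv;
    # both file names and the pacing values are illustrative.
    def check_links(path="urls.txt", delay_seconds=2.0, timeout=15):
        with open(path) as f:
            urls = [line.strip() for line in f if line.strip()]
        results = []
        for url in urls:
            try:
                # HEAD keeps traffic light; fall back to GET for platforms that reject HEAD.
                resp = requests.head(url, allow_redirects=True, timeout=timeout)
                if resp.status_code >= 400:
                    resp = requests.get(url, allow_redirects=True, timeout=timeout)
                status = resp.status_code
            except requests.RequestException as exc:
                status = f"error: {exc}"
            results.append((url, status))
            time.sleep(delay_seconds)  # pace requests to avoid looking like bulk downloading
        with open("link_check_report.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["url", "status"])
            writer.writerows(results)

    if __name__ == "__main__":
        check_links()

A script like this only catches links that are outright broken; it cannot confirm that the institution is actually entitled to the content or that coverage dates are correct, which is why participants agreed that human checking is still required.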

Question 2

Increasing troubleshooting staff (facilitator: Jessica Brangiel, Electronic Resources Management Librarian, McCabe Library, Swarthmore College)

The number of staff troubleshooting access issues is low, but access issues are plentiful. More staff will be needed to handle proactive troubleshooting. What does your institution do to compensate for having few troubleshooting staff? How can your institution incorporate staff from outside technical services into the troubleshooting workflow?

Increasing troubleshooting staff:

Most attendees agreed that no one has enough staff for e-resource troubleshooting. ILL staff are key in helping to troubleshoot e-resource issues, and it can be helpful for e-resources staff to spend time with ILL staff discussing common problems. All group members commented that cross-training and collaboration with public services and circulation staff are key to expanding troubleshooting of electronic resource issues.

How is training for other departments done?

Some libraries reported doing one-on-one training; some provided basic handouts or workflow diagrams to help walk front-line staff through the process of troubleshooting e-resource problems. Documentation can be challenging because front-line staff don’t want detailed descriptions of technological issues when they’re trying to help patrons.

Google Forms was mentioned as a tool that can help funnel an access problem through a workflow so that it reaches the correct person. Project management tools like Trello were also reported as being used to track e-resource problems.

Having good documentation is key; however, it is a challenge to keep documentation up to date with limited staff, especially given the number of changes that occur in the world of e-resources.

The group discussed methods of troubleshooting e-resource problems at the point of need. Wouldn’t it be great if a form were available to report an access problem at the moment the user encounters it, rather than the user having to know to go back to a librarian or to a separate link to report a problem? This question led to the mention of an e-resources troubleshooting LibGuide created by Rachel Erb (Electronic Resources Management Librarian, Colorado State University Libraries), which has helped staff at Colorado State University Libraries. The LibGuide is available at http://libguides.colostate.edu/eresources.

One library reported utilizing EZproxy logs to respond to users when they encounter EZproxy error messages. The group also discussed how small libraries can get started with training in e-resource troubleshooting; a simple FAQ linked from the library website was suggested.
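
For libraries that want to experiment with this approach, the following is a minimal, hypothetical sketch of scanning an EZproxy log for error responses. It assumes an NCSA/Apache-style line layout; the actual fields depend on the local EZproxy LogFormat configuration, and the file name and 20-line report limit are illustrative, not part of any product.

    import re
    from collections import Counter

    # Sketch: summarize HTTP error responses found in an EZproxy log.
    # Assumes an NCSA/Apache-style line layout; adjust the pattern to match
    # the local EZproxy LogFormat. The file name is illustrative.
    LINE = re.compile(
        r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+)'
    )

    def error_summary(path="ezproxy.log"):
        error_counts = Counter()
        with open(path, errors="replace") as log:
            for raw in log:
                match = LINE.match(raw)
                if not match:
                    continue  # skip lines that do not fit the assumed format
                status = int(match.group("status"))
                if status >= 400:
                    # Tally error responses by status code and requested resource.
                    error_counts[(status, match.group("request"))] += 1
        for (status, request), count in error_counts.most_common(20):
            print(f"{count:5d}  {status}  {request}")

    if __name__ == "__main__":
        error_summary()

A report like this can point to resources that generate repeated errors before a patron ever files a problem report, complementing the point-of-need reporting form discussed above.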

The conversation turned to how many e-resource issues are actually vendor usability issues. Participants reported seeing more dedicated staff for user experience and/or usability in libraries, and many electronic resources librarians and staff work closely or collaborate with these positions at their institutions.

What’s the best way to track problems reported in person? Some libraries use IT ticketing systems or Google Docs.

Question 3

Troubleshooting metrics (facilitator: Amy Dumouchel, Electronic Resources Librarian, O’Neill Library, Boston College)

Few libraries collect troubleshooting metrics beyond issue counts. To analyze staffing capacity, efficiency and workflows, metrics will be necessary. What statistics, beyond counts, does your institution utilize or desire to analyze troubleshooting? How can you increase the diversity of metrics coming from troubleshooting?

The group began by discussing use cases for classifying troubleshooting statistics. Ideas included justifying additional staffing and gathering requirements for choosing a new system. Other ideas were discussed at greater length: for example, reporting back to vendors that seem to have particularly egregious numbers of access issues, and providing vendors with feedback on the usability of resources when patrons encounter difficulties. Finally, categorizing statistics could help identify additional training needs. If users encounter what seem to be problems but the resource is actually working as expected, that can lead to targeted instruction, the creation of FAQs, or usability feedback to vendors. Beyond patron education, problems reported to the wrong department can identify areas for increased staff training, so that staff know when an issue should be reported to the metadata or systems departments instead of to the acquisitions or e-resources departments.

Another issue that came up was whether ticketing systems allow the creation of categories to apply to tickets. There was some discussion of recent presentations about LibAnswers and LibAnalytics suggesting that those products can do this, but it was observed that many libraries use systems provided by a broader IT department or intended for use by multiple departments in the library. Of the individuals in the discussion, roughly half used a ticketing system of some sort. While these systems often come at no cost to our departments and allow us to pass tickets on to other departments in the library, they do not necessarily allow the granularity of categories that might be desirable within the department for the use cases we discussed.

The act of referring a ticket to another department also raised another question: when a ticket is transferred, who receives credit for the statistic? Some of the ticketing systems used, such as ServiceNow, cannot track tickets that have been transferred to another department. Since one of the use cases we had discussed was the ability to track how many tickets are transferred to other departments, this was particularly problematic.

It was noted that many of the individuals using ticketing systems were from larger, research-level institutions. One piece of advice these larger libraries offered to smaller ones was that the cost of many ticketing systems is based on the number of seats needed, which means these products might be more affordable than expected for smaller libraries.

Finally, the discussion turned to questions of response time. One of the most difficult issues is that it can be hard to judge when a ticket is complete and can be closed. This uncertainty can arise for several reasons, including that at a certain point the issue may be referred to a vendor and is out of the library’s hands, or that the reporting patron does not reply. Often, a patron will not reply to subsequent follow-ups or requests to test because they perceive the problem as resolved on their end. For instance, when a user reports being unable to access an article, it is common practice to send a copy of the article while still troubleshooting the issue. Once the user has the article, there is less incentive to reply.

Best practices for response time were also discussed. One idea was an automatic reminder, similar to those used in vendor CRM systems, under which an open ticket is closed if there is no further response. While the time to resolve an issue can vary based on its cause, it is ideal to follow up as soon as possible, so that the user is aware of our efforts to resolve the problem. This can also help clarify to other departments that, while time to resolution varies, our departments place a high priority on troubleshooting access.
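
As a rough illustration of the auto-close idea, the following hypothetical sketch closes tickets that are awaiting a patron reply after a period of inactivity. The record layout, status labels, and 14-day window are assumptions for the example, not features of any particular ticketing product.

    from datetime import datetime, timedelta

    # Sketch: close tickets awaiting a patron reply after a period of no activity.
    # The field names, status labels, and 14-day window are illustrative.
    STALE_AFTER = timedelta(days=14)

    def close_stale_tickets(tickets, now=None):
        now = now or datetime.now()
        closed = []
        for ticket in tickets:
            inactive = now - ticket["last_activity"]
            if ticket["status"] == "awaiting patron reply" and inactive > STALE_AFTER:
                ticket["status"] = "closed - no response"
                closed.append(ticket["id"])
        return closed

    # Example: ticket 101 has been idle for over two weeks and is closed;
    # ticket 102 is still within the window and stays open.
    tickets = [
        {"id": 101, "status": "awaiting patron reply", "last_activity": datetime(2016, 1, 2)},
        {"id": 102, "status": "awaiting patron reply", "last_activity": datetime(2016, 1, 20)},
    ]
    print(close_stale_tickets(tickets, now=datetime(2016, 1, 25)))  # prints [101]

Closing stale tickets consistently, paired with a final follow-up message to the user, also makes response-time metrics easier to interpret.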

A final issue brought up was that often the most urgent access problems are not sent through a ticketing system. Instead of opening a ticket, especially for urgent problems such as a major platform outage, a user might make a phone call, send an email, or appear at the office of an e-resources librarian, because tickets can be perceived as time-consuming to create and slower to receive a response. Unfortunately, these major outages are the occurrences we most want to capture and categorize in our ticketing systems.