
RUSA RSS Evaluation of Reference and User Services (Reference Services Section) Committee


Online Doc: Measuring and Assessing Reference Services and Resources: A Guide

by Rebecca Graff on Mon, Jul 10, 2017 at 04:47 pm

Measuring and Assessing Reference Services and Resources: A Guide

Introduction

Measuring and Assessing Reference Services and Resources: A Guide offers an expansive definition of reference service, assessment planning advice, and measurement tools to assist managers in evaluating reference services and resources. The measurement tools presented here are fully analyzed for validity and reliability in The Reference Assessment Manual, RASD and Pierian Press, 1995. Where formally validated tools were not available, bibliographic references to assessment methods reported in the literature are provided.

For a more comprehensive analysis of reference service assessment, consult these key reference works:

  • Reference Assessment & Evaluation. Diamond, Tom and Mark Sanders (eds). Routledge, 2006.
  • Assessing Reference and User Services in a Digital Age. Novotny, Eric. Haworth, 2005.
  • Understanding Reference Transactions: Transforming an Art into a Science. Saxton, Matthew L. and John V. Richardson, Jr. Academic Press, 2002.
  • Evaluating Reference Services: A Practical Guide. Whitlatch, Jo Bell. American Library Association, 2000.
  • The Reference Assessment Manual. RASD and Pierian Press, 1995.

1.0 Definition of Reference

Reference Transactions are information consultations in which library staff recommend, interpret, evaluate, and/or use information resources to help others to meet particular information needs. Reference transactions do not include formal instruction or exchanges that provide assistance with locations, schedules, equipment, supplies, or policy statements.
 
Reference Work/Services includes reference transactions and other activities that involve the creation, management, and assessment of information or research resources, tools, and services.

  • Creation and management of information resources includes the development and maintenance of research collections, research guides, catalogs, databases, web sites, search engines, etc., that patrons can use independently, in-house or remotely, to satisfy their information needs.
  • Assessment activities include the measurement and evaluation of reference work, resources, and services.

Approved by RUSA Board of Directors, January 14, 2008

2.0 Planning Reference Assessment

Before beginning an assessment project, develop a clear statement of the specific questions you want to answer, the measurable data needed to answer your questions, and the performance or quality standards you will use to measure your success. Next, choose assessment tools that are relevant to your stated goals. Modify existing tools to meet your needs and always pretest your tool on a small representative sample of data or subjects. Finally, to have greater confidence in the validity of your results, use more than one assessment tool.

Basic Questions to Consider When Assessing Reference Services and Sources

What questions are you trying to answer?
Clearly define your questions before proceeding toward measurement since the questions themselves will help determine the standards of performance or quality you will set, the instrument(s) you will use to collect data and the techniques you will use to analyze your data.

What performance or quality standards will you use to measure your success?
Always develop goals and measurable objectives that you can use as a benchmark before beginning an assessment project. Comparing your results to such standards will determine whether your objectives have been met. RUSA provides a wide array of standards that can be used for the assessment of college libraries and of reference services. Data from other colleges and universities or from sources such as the Integrated Postsecondary Education Data System (IPEDS) surveys can also be used as benchmarks for comparative purposes.

How are you going to use the data generated?
Your questions will drive the type of data that you need to collect. In addition, the level of data collected (i.e., nominal, ordinal, interval, or ratio) will determine the power of the statistical tests you can use. For example, categorical data such as a respondent's academic status or major permit grouping and cross-tabulation, while continuous data such as the number of reference questions asked support analyses of means, correlations, and differences between groups.

What measurements will you need to generate the data that you want?
There are many ways to collect data, but the way data is measured impacts how it can be used in analyses. Consider whether you need both qualitative and quantitative measures, since each provides valuable data for analyses. For example, if you want to collect data on the number of reference questions asked, you can use quantitative measures. If you wish to explore the reference interaction itself, you may want to consider qualitative measures.

Can you use other measures to triangulate your data?
Triangulation means collecting data using several different methods so that you have greater support for the results of your analyses. The Wisconsin-Ohio Reference Evaluation Program (WOREP) is one example of an instrument that uses triangulation by collecting data from two different sources (patron and librarian) for each transaction. The more sources of data, the better your analyses will be. Often, you can use qualitative data to support quantitative data and vice versa, but beware of comparing different types of data since they may actually be measuring different things. Thus, triangulation increases the validity of your analyses and results.

What methodology do you need to use?
The type of data desired will help determine the data collection instruments required. For example, if you want to measure satisfaction, a survey might be used. If you are examining how to improve your services, a focus group may be the best method. If you wish to determine how to staff a service point, unobtrusive counting measures can be used. These data collection instruments, in turn, help determine the analytical techniques that can be employed to interpret the data.

Have you pre-tested all your data collection instruments?
Always pretest your instruments to ensure that they can be understood by those who will be completing them and that they are actually measuring what you want them to measure. For example, before administering a survey, pretest the survey instrument on a group similar to those who will be completing the survey. Do they understand the questions? Do the given choices cover all the possible responses? Can you code the results easily? Then, test how you plan to analyze the final data. Is the methodology appropriate for the data?

What statistical analytical techniques do you want to use?
The data and its method of measurement will help determine how the data itself is analyzed. Do you have groups of respondents to a survey? If you have two groups, then t-tests may be used; if you have more than two groups, then F-tests (ANOVAs) may be employed. Do you have data that can be correlated? Then a Pearson test of correlation may be used. Statistical analysis software packages, such as SPSS or SAS, can make this step much easier, but make sure you are using the appropriate analytical methods for the data that you have generated. Consult researchers with statistical knowledge to help you run analyses and to help you understand the results.
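
For those who script their own analyses, the minimal sketch below (not part of the original Guide; all ratings and counts are invented) shows how these choices map onto tests in Python's SciPy library; SPSS or SAS offer the same tests through their own interfaces.

    # Minimal illustration (invented data): matching the test to the number of groups.
    from scipy import stats

    # Hypothetical 5-point satisfaction ratings from three respondent groups.
    undergrad = [4, 5, 3, 4, 5, 4]
    graduate = [5, 4, 5, 5, 4, 3]
    faculty = [3, 4, 4, 5, 5, 5]

    # Two groups: independent-samples t-test.
    t, p = stats.ttest_ind(undergrad, graduate)
    print(f"t-test: t = {t:.2f}, p = {p:.3f}")

    # More than two groups: one-way ANOVA (F-test).
    f, p = stats.f_oneway(undergrad, graduate, faculty)
    print(f"ANOVA: F = {f:.2f}, p = {p:.3f}")

    # Two continuous variables: Pearson correlation.
    questions_asked = [12, 30, 7, 22, 18, 25]
    minutes_at_desk = [15, 42, 10, 31, 20, 38]
    r, p = stats.pearsonr(questions_asked, minutes_at_desk)
    print(f"Pearson: r = {r:.2f}, p = {p:.3f}")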

Who is the audience for this assessment or research?
The audience will determine the format that the presentation of the results will take. If you are making a presentation, then the use of software such as PowerPoint with graphs and charts may be appropriate. If you are compiling an annual report, using spreadsheet software such as Excel to generate the charts may be helpful. The audience will also help you determine the type of analyses to perform and how these analyses are actually presented.

3.0 Measuring and Assessing Reference Transactions and Services

3.1 Reference Transactions – Volume, Cost, Benefits, and Quality

Simple tallies of reference transactions, collected daily or sampled, can be interpreted to describe patterns of use and demand for reference services. Managers commonly use transaction statistics to determine appropriate service hours and staffing. Often, volume statistics are reported to consortia to compare local patterns of use and demand to peer libraries and to calculate national norms.
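
As a minimal sketch of how such tallies might be summarized (assuming a hypothetical export file, reference_tally.csv, with one row per transaction and an ISO-formatted timestamp column), the Python example below counts transactions by hour of day and day of week, the figures most often consulted when setting service hours and desk staffing.

    # Minimal sketch: summarize a hypothetical per-transaction tally file by hour and weekday.
    import csv
    from collections import Counter
    from datetime import datetime

    by_hour = Counter()
    by_weekday = Counter()

    with open("reference_tally.csv", newline="") as f:   # assumed export, one row per transaction
        for row in csv.DictReader(f):                    # assumed column name: "timestamp"
            ts = datetime.fromisoformat(row["timestamp"])
            by_hour[ts.hour] += 1
            by_weekday[ts.strftime("%A")] += 1

    print("Busiest hours:", by_hour.most_common(3))
    print("Busiest days:", by_weekday.most_common(3))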

Analysis of reference transactions by type, location, method received, sources used, and subject can be used for collection development, staff training/continuing education, and budget allocation. Analysis of accuracy, behavioral performance, interpersonal dynamics, and patron satisfaction during the reference interview can be used for staff training and continuing education.

Selected Measurement Tools and Bibliographic References
(from Saxton and Richardson, Appendix D)

I. Dependent Variables:

  • Accuracy: Answering Success
  • Client Satisfaction
  • Successful Probe
  • Efficiency – Accuracy/Time
  • Librarian Satisfaction
  • Cost benefit analysis – US$/Unit of Service
  • Unique dependent variables
    • Bunge’s Composite
    • Illinois Index of Reference Performance

II. Independent Variables:

A. The Reference Environment

  • Size of collection
  • Type of library
  • Size of staff
  • Size of professional staff
  • Size of nonprofessional staff
  • Number of volunteers
  • Library expenditures
  • Library income
  • Hours of service
  • Size of service population
  • Circulation
  • Fluctuation in collection
  • Institution’s bureaucratic service orientation
  • Staff availability
  • Level of referral service
  • Arrangement of service points
  • Administrative evaluation of services
  • Use of paraprofessionals at the reference desk
  • Volume of questions

B. The Librarian

  • Experience of librarian
  • Education of librarian
  • For paraprofessionals, amount of in-service training
  • Question-answering duties
  • Librarian’s attitude toward question-answering duties
  • Duties other than question answering
  • Librarian’s service orientation
  • Librarian’s perception of the collection adequacy
  • Librarian’s perception of personal education
  • Librarian’s perception of other duties
  • Outside reading
  • Membership in associations and committees
  • Age of librarian
  • Sex of librarian

C. The Client

  • User participation in process
  • User perception of librarian’s service orientation

D. The Question

  • Subject knowledge of librarian
  • Subject knowledge of client
  • Number of sources used to answer question
  • Source of answer named
  • Type of question

E. The Dialogue

  • Business at the reference desk
  • Communication effectiveness between patron and librarian
  • Amount of time spent with user by reference librarian
  • Type of assistance provided
  • Amount of time willing to be spent by patron

Descriptive Statistics and Measures

  • Number of digital reference questions received
  • Number of digital reference responses
  • Number of digital reference answers
  • Number of questions received digitally but not answered or responded to by completely digital means
  • Total reference activity – questions received
  • Percentage of digital reference questions to total reference questions
  • Digital reference correct answer fill rate (see the worked example at the end of this list)
  • Digital reference completion time
  • Number of unanswered digital reference questions
  • Type of digital reference questions received
  • Total number of referrals
  • Saturation rate
  • Sources used per question
  • Repeat users
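
A minimal worked example of two of the ratios above, using invented counts; here the correct answer fill rate is taken to be correctly answered digital questions divided by digital questions received.

    # Invented figures for illustration only.
    digital_questions_received = 412
    digital_answered_correctly = 301
    total_reference_questions = 2960

    pct_digital = digital_questions_received / total_reference_questions * 100
    fill_rate = digital_answered_correctly / digital_questions_received * 100

    print(f"Digital share of all reference questions: {pct_digital:.1f}%")
    print(f"Digital reference correct answer fill rate: {fill_rate:.1f}%")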

Log Analysis

  • Number of digital reference sessions
  • Usage of digital reference service by day of the week
  • Usage of digital reference service by time of the day
  • User’s browser
  • User’s platform

User Satisfaction Measures

  • Awareness of service
  • Accessibility of service
  • Expectations for service
  • Other sources user tried
  • Reasons for use
  • Reasons for non use
  • Improvements needed/Additional services that need to be offered
  • Satisfaction with staff service
  • Delivery mode satisfaction
  • Impact of service on users
  • User demographic data

Cost

  • Cost of digital reference service
  • Cost of digital reference service as a percent of total reference budget
  • Cost of digital reference service as a percent of total library or organizational budget

Staff Time Expended

  • Percent of staff time spent overseeing technology
  • Percent of staff time spent assisting users with technology

Other Assessment Options

  • Peer Review
  • Enhanced reference transaction logs
  • Librarian discussion groups

Quality Standards (examples)

  • Courtesy
  • Accuracy
  • Satisfaction
  • Repeat Users
  • Awareness
  • Cost
  • Completion Time
  • Accessibility

Tools:

  • Variables Used to Measure Question-Answering Performance [see the complete list of variables, operational definitions, literature review, and statistical formulae in: Understanding Reference Transactions: Transforming an Art into a Science. Saxton, Matthew L. and John V. Richardson, Jr. Academic Press, 2002, Appendix D, p. 130-189]
  • Cost in Staffing Time per Successful Question (Murfin, Bunge, 1989). The Reference Assessment Manual, 1995. Use with the Wisconsin-Ohio Reference Evaluation Program (WOREP) to determine the cost in staff time per successful reference question.
  • Encountering Virtual Users: A Qualitative Investigation of Interpersonal Communication in Chat Reference (Radford, Marie L., 2006). Journal of the American Society for Information Science and Technology 57 (8): 1046-1059.
  • Frustration Factor and Nuisance Factor (Kantor, 1980). The Reference Assessment Manual, 1995. Use to estimate reference service accessibility (Frustration Factor) and patron time spent waiting (Nuisance Factor).
  • LAMA–NDCU Experimental Staffing Adequacy Measures (Parker, Joseph, Clark, Murfin, 1992). The Reference Assessment Manual, 1995. Used to estimate reference desk staffing adequacy through data comparison with national norms.
  • Reference Effort Assessment Data (READ) Scale (Gerlich, Bella Karr, 2003). A six-point scale for recording supplemental qualitative data when reference librarians assist users with inquiries or research-related activities, emphasizing the effort, skills, knowledge, teaching, techniques, and tools the librarian employs during a reference transaction.
  • Patron Satisfaction Survey PaSS™ - (Schall, Richardson, 2002). 7-point Likert scale survey of patron satisfaction with an online reference transaction (librarian’s comprehension of question, friendliness, helpfulness, promptness, satisfaction with answer).
  • Unobtrusive Data Analysis of Digital Reference Questions and Service at the Internet Public Library: An Exploratory Study (Carter, David S., Janes, Joseph, 2000). Library Trends, 49 (2): 251-265. Study conducted to establish a methodology for the unobtrusive analysis of a digital reference service. Logs of over 3,000 questions were analyzed on the basis of questions asked (subject area, means of submission, self-selected demographic information), how those questions were handled (professional determination of subject and question nature, questions sent back to users for clarification), and how they were answered (including time to answer) or rejected, as well as which answers received unsolicited thanks.
  • Wisconsin-Ohio Reference Evaluation Program (WOREP) – (Bunge, Murfin, 1983). WOREP is designed to assess the outcome of the reference transaction and to identify factors related to success or lack of success. WOREP provides diagnostic information based on input factors: collections, staff skill and knowledge, subject strengths, types of staff, types of questions; and process factors: communication effectiveness, time spent, technical problems, assistance by directing or searching with the patron, and instruction. The WOREP report also provides both a profile of the users of a specific reference service and a comparison of the library with other libraries that have used WOREP. Note: WOREP was discontinued in 2011, but the questions remain available.

References:

Assessing Reference and User Services in a Digital Age. Novotny, Eric. New York: Haworth, 2006.

Assessing Service Quality: Satisfying the Expectations of Library Customers. Hernon, Peter and Ellen Altman. Chicago: ALA, 2010.

Breidenbaugh, Andrew. Budget planning and performance measures for virtual reference services. The Reference Librarian 46 (95/96): 113-24, 2006.

Fu, Zhuo, Mark Love, Scott Norwood, and Karla Massia. Applying RUSA guidelines in the analysis of chat reference transcripts. College & Undergraduate Libraries 13 (1): 75-88, 2006.

Garrison, Judith. Making reference service count: collecting and using reference service statistics to make a difference. The Reference Librarian 51 (3): 202-211, 2010.

Hernon, Peter. Research and the use of statistics for library decision-making. Library Administration & Management 3: 176-80, Fall 1989.

Larson, Carole A. and Laura K. Dickson. Developing behavioral reference desk performance standards. RQ 33: 349-357, 1994.

Measuring Library Performance: Principles and Techniques. Brophy, Peter. London: Facet, 2006.

McLaughlin, Jean. Reference transaction assessment: a survey of New York state academic and public libraries. Journal of the Library Administration & Management Section 6 (2): 5-20, 2010.

Murfin, Marjorie E., and Charles A. Bunge. A Cost Effectiveness Formula for Reference Service in Academic Libraries. Washington, D.C.: Council on Library Resources, 1989.

Murfin, Marjorie E., and Gary M. Gugelchuk. Development and testing of a reference transaction assessment instrument. College and Research Libraries 48 (4): 314-39, 1987.

Novotny, Eric and Emily Rimland. Using the Wisconsin-Ohio Reference Evaluation Program (WOREP) to improve training and reference services. The Journal of Academic Librarianship 33 (3): 382-392, 2007.

Radford, Marie L. In synch? Evaluating chat reference transcripts. Virtual Reference Desk 5th Annual Digital Reference Conference, San Antonio, TX, November 17-18, 2003.

Radford, Marie L. Encountering virtual users: A qualitative investigation of interpersonal communication in chat reference. Journal of the American Society for Information Science and Technology 57 (8): 1046-1059, June 2006.

Richardson, John. Good models of reference service transactions: Applying quantitative concepts to generate nine characteristic attributes of soundness. The Reference Librarian 50 (2): 159-77, 2009.

Rimland, Emily L. Do we do it (good) well? A Bibliographic essay on the evaluation of reference effectiveness. The Reference Librarian 47 (2): 41-55, 2007.

Ryan, Susan M. Reference transactions analysis: The cost-effectiveness of staffing a traditional academic reference desk. The Journal of Academic Librarianship 34 (5): 389-99, 2008.

3.2 Reference Service and Program Effectiveness

Cost, benefit, and quality assessments of reference services provide meaningful and practical feedback for the improvement of services, staff training, and continuing education. To determine levels of service effectiveness, costs, benefits, and quality, data must be judged in light of specific library goals, objectives, missions, and standards. A variety of measures such as quality or success analysis, unobtrusive, obtrusive or mixed observation methods, and cost and benefit analysis provide invaluable information about staff performance, skill, knowledge, and accuracy, as well as overall program effectiveness.

3.2.1 Cost/Benefits Analysis

In cost-benefit studies, costs are compared to the benefits derived by the patrons served. Patron benefits may be measured in terms of actual or perceived outcomes, such as goals and satisfaction achieved, time saved, failures avoided, money saved, productivity, creativity, and innovation.
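
The sketch below is a simplified, hypothetical cost-benefit calculation; every figure, including the dollar value assigned to patron time, is an invented assumption, and it does not reproduce the published formulas cited under Tools.

    # Invented assumptions for a simple weekly cost-benefit comparison.
    staff_hours_per_week = 60            # desk hours staffed
    cost_per_staff_hour = 38.00          # assumed loaded cost per staff hour
    successful_transactions = 240        # successful transactions per week
    minutes_saved_per_success = 12       # assumed patron time saved per success
    patron_time_value_per_hour = 25.00   # assumed dollar value of patron time

    weekly_cost = staff_hours_per_week * cost_per_staff_hour
    cost_per_success = weekly_cost / successful_transactions
    weekly_benefit = successful_transactions * (minutes_saved_per_success / 60) * patron_time_value_per_hour

    print(f"Cost per successful transaction: ${cost_per_success:.2f}")
    print(f"Benefit-to-cost ratio: {weekly_benefit / weekly_cost:.2f}")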

Tools:

Cost Effectiveness Measures (McClure, 1989). Use to measure the cost effectiveness of traditional desk reference service. The Reference Assessment Manual, 1995.

Cost Benefit Formula (Murfin, Bunge, 1977). Use with Reference Transaction Assessment Instrument (RTAI) success data to determine the cost of staff time in relation to the benefit of patron time saved. The Reference Assessment Manual, 1995.

Costing of All Reference Operations (Murphy, 1973). Used to generate profiles of departmental functions and create a dollar estimate for reference service functions. The Reference Assessment Manual, 1995.

"Helps" Users Obtain from Their Library Visits (Dervin, Fraser, 1985). Use to collect data on how library visits specifically helped users in the context of their lives. The Reference Assessment Manual, 1995.

Statistics, measures, and quality standards for assessing digital reference library services: Guidelines and procedures (McClure, Lankes, Gross, Choltco-Devlin, 2002). Includes a variety of assessment tools.

References:

Abels, Eileen. Improving reference service cost studies. Library & Information Science Research 19 (2): 135-52, 1997.

Bunge, Charles A. Gathering and using patron and librarian perceptions of question-answering success. Reference Librarian 66:115-140, 1999.

Bunge, Charles A., and Marjorie E. Murfin. Reference questions--data from the field. RQ 27 (Fall): 15-18, 1987.

Gremmels, Gillian, and Karen S. Lehmann. Assessment of student learning from reference service. College & Research Libraries, 68 (6): 488-491, 2007.

Hubbertz, Andrew. The fallacy in the 55 percent rule. DttP, 35 (3): 15-17, 2007.

Ishihara, Mari. Evaluation of quality of reference services in public libraries. Library and Information Science, 59: 41-67, 2008.

Kuruppu, Pali U. Evaluation of reference services - A review. The Journal of Academic Librarianship, 33 (3): 368-381, 2007.

Marsteller, Matthew and Susan Ware. Models for measuring and evaluating reference costs: A Comparative analysis of traditional and virtual Reference Services. Virtual Reference Desk 5th Annual Conference, San Antonio, Texas, November 17-18, 2003.

McClure, Charles, R. David Lankes, Marilyn Gross, and Beverly Choltco-Devlin. Statistics, Measures, and Quality Standards for Assessing Digital Reference Library Services: Guidelines and Procedures. Information Institute of Syracuse, School of Information Studies; School of Information Studies, Information Use Management and Policy Institute, Florida State University, 2002.

Murfin, Marjorie. Cost analysis of library reference services. Advances in Library Administration and Organization 11: 1-36, 1993.

Powell, Ronald. Impact assessment of university libraries: a consideration of issues and research methodologies. Library & Information Science Research 14: 245-57, July/Sept. 1992.

3.2.2 Quality Analysis - Patron Needs and Satisfaction

The perceptions and needs of patrons are important measures of the quality and impact of reference services. Surveys, combined with other measures such as numerical counts, observation, and focus groups, are commonly used to conduct comprehensive assessments of service performance and patron needs.

Traditional Reference Services

Tools:

  • LibQUAL+™ (Association of Research Libraries, 2001). Use to measure user perceptions and expectations of library service quality. LibQUAL+™ surveys are used to solicit, track, understand, and act upon users' opinions of library service quality. http://www.libqual.org/
  • Library Anxiety Scale (Bostick, 1993). Use to measure the construct of library anxiety in college students of all ages. The Reference Assessment Manual, 1995.
  • Reference Satisfaction Survey (Van House, Weil, McClure, 1990). Use to evaluate the success of reference as determined through user opinion of the services offered. The Reference Assessment Manual, 1995.
  • Survey of Public Library Users (Yocum, Stocker, 1969). Use to obtain data on patron use of services and how important they consider those same services. The Reference Assessment Manual, 1995.

References:

Cook, Colleen, Fred Heath, and Bruce Thompson. 'Zones of Tolerance' in perceptions of library service quality: A LibQUAL+™ study. portal: Libraries and the Academy 3 (1): 113-123, 2003.

Evaluating Reference Services: A Practical Guide. Whitlatch, Jo Bell. American Library Association, 2000. [Chapter 4: Surveys and Questionnaires; Chapter 5: Observation; Chapter 6: Individual Interviews and Focus Group Interviews; Chapter 7: Case Studies; Chapter 8: Data Analysis]

Identifying and Analyzing User Needs: A Complete Handbook and Ready-to-use Assessment Workbook with Disk. Westbrook, Lynn. New York: Neal-Schuman, 2001.

Miller, Jonathan. Quick and easy reference evaluation: Gathering users' and providers' perspectives. Reference & User Services Quarterly, 47 (3): 218-222, 2008.

Norlin, Elaina. Reference evaluation: A three-step approach – surveys, unobtrusive observations, and focus groups. College and Research Libraries 61 (6): 546-53, 2000.

Electronic Reference Services

References:

Arnold, Julie and Neal Kaske. Evaluating the quality of a chat service. portal: Libraries and the Academy 5 (2): 177-193, 2005.

Carter, David and Joseph Janes. Unobtrusive data analysis of digital reference questions and service at the Internet Public Library: An exploratory study. Library Trends 49 (2): 251-265, 2000.

Coughley, Karen. Digital reference services: how do the library-based services compare with the expert services? Library Review 53 (1): 17-23, 2004.

Gross, Melissa and Charles McClure. Assessing quality in digital reference services: Overview of key literature on digital reference. Information Use Management and Policy Institute, Florida State University. http://dlis.dos.state.fl.us/bld/Research_Office/VRDphaseII.LitReview.doc

Harrington, Deborah Lynn and Xiaodong Li. Utilizing Web-based case studies for cutting-edge information services issues: A pilot study. Reference & User Services Quarterly 41 (4): 364-379, 2002.

Luo, Lili. Chat reference evaluation: A framework of perspectives and measures. Reference Services Review, 36 (1): 71-85, 2008.

Luo, Lili. Toward sustaining professional development: Identifying essential competencies for chat reference service. Library & Information Science Research, 30 (4): 298-311, 2008.

Mon, Lorri and Joseph W. Janes. The thank you study: User feedback in e-mail thank you messages. Reference & User Services Quarterly, 46 (4): 53-59, 2007.

Pomerantz, Jeffrey, Lorri Mon, and Charles R. McClure. Evaluating remote reference service: A practical guide to problems and solutions. portal: Libraries and the Academy, 8 (1): 15-30, 2008.

Pomerantz, Jeffrey. Evaluation of online reference services. Bulletin of the American Society for Information Science and Technology, 34 (2): 15-19, December 2007/January 2008.

Novotny, Eric. Evaluating electronic reference services: Issues, approaches and criteria. The Reference Librarian 74: 103-120, 2001.

Radford, Marie. In Synch? evaluating chat reference transcripts. Virtual Reference Desk 5th Annual Conference, San Antonio, Texas, November 17-18, 2003.

Ruppel, Margie and Jody Condit Fagan. Instant messaging reference: Users' evaluation of library chat. Reference Services Review 30 (3): 183-197, 2002.

Shachaf, Pnina, Shannon M. Oltmann, and Sarah M. Horowitz. Service equality in virtual reference. Journal of the American Society for Information Science and Technology, 59 (4): 535-550, February 15, 2008.

Shachaf, Pnina and Sarah Horowitz. Virtual reference service evaluation: Adherence to RUSA behavioral guidelines and IFLA digital reference guidelines. Library & Information Science Research, 30 (2): 122-137, 2007.

Stoffel, Bruce and Toni Tucker. E-mail and chat reference: assessing patron satisfaction. Reference Services Review 32 (2), 120-140, 2004.

Ward, David. Measuring the completeness of reference transactions in online chats: Results of an unobtrusive study. Reference & User Services Quarterly 44 (1): 46-56, 2004.

Ward, David. Using virtual reference transcripts for staff training. Reference Services Review 31 (1): 46-56, 2003.

4.0 Measuring and Assessing Reference Resources – Use, Usability, and Collection Assessment

As print and electronic reference collections grow in size and format, they must be continually assessed to determine their relevance, utility, and appropriateness to patrons. Use and usability tests examine how often and how well visitors navigate, understand, and use web sites, electronic subscription databases, free Internet resources, library subject web pages, and other web-based tools such as bibliographies, research guides, and tutorials.

Print Reference Resources

Tools:

  • In-Library Materials Use (Van House, Weil, McClure, 1990). Use to determine total number of items used in the library but not circulated. The Reference Assessment Manual, 1995.
  • Reference Collection Use Study (Arrigona, Mathews, 1988). Use to evaluate which subject areas are most used by librarians to assist patrons and then identify any correlation with the subject areas most used by patrons to answer their own questions. The Reference Assessment Manual, 1995.
  • Strother’s Questionnaire A and B (Strother, 1975). Use to determine faculty awareness and use of reference tools. The Reference Assessment Manual, 1995.

Web-based Reference Resources

Tools:

  • Formal Usability Testing – Observe as patrons use a site to perform given tasks or achieve a set of defined goals.
  • Inquiry – Use interviews, surveys, and focus groups to gather information about patron preferences and use of a particular site.
  • Inspection – Use to evaluate a site against a checklist of heuristics and design principles or simulations of typical user tasks.

References:

Battleson, Brenda, Austin Booth and Jane Weintrop. Usability testing of an academic library Web site: a case study. Journal of Academic Librarianship 27 (3): 188-198, 2001.

Kovacs, Diane K. Building a core Internet reference collection. Reference & User Services Quarterly 39 (3): 233-239, Spring 2000.

Rettig, James. Beyond cool: analog models for reviewing digital resources. Online 20 (6): 52-64, 1996.

Rubin, Jeffrey. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. New York: Wiley, 1994.

Smith, Alastair. Evaluation of Information Sources. (Webliography of information evaluation resources) http://departments.kings.edu/edtutorial/web_%20evaluation/evalinfosources.htm

Usability Testing of Library-Related Websites: Methods and Case Studies. Campbell, N., ed. LITA Guide #7. Chicago: LITA/American Library Association, 2001.

Collection Assessment

References:

Bucknall, Tim. Getting more from your electronic collections through studies of user behavior. Against the Grain, 17 (5): 1, 18, 20, November 2005.

Dee, Cheryl, and Maryellen Allen. A survey of the usability of digital, reference services on academic health science library web sites. Journal of Academic Librarianship, 32 (1): 69-78, January 2006.

Drane, Simon. Portals: Where we are and the road ahead. Legal Information Management, 5 (4): 219-222, Winter 2005.

Keller, Michael A. Reconstructing collection development. Against the Grain, 16 (6): 1, 16, 18, 20, December 2004/January 2005.

Puacz, Jeanne Holba. Electronic vs. print reference sources in public library collections. The Reference Librarian, no. 91/92: 39-51, 2005.

Stemper, James A., and Janice M. Jaguszewski. Usage statistics for electronic journals: An analysis of local and vendor counts. Collection Management, 28 (4): 3-22, 2003.

Strohl, Bonnie. Collection evaluation techniques: A short, selective, practical, current, annotated bibliography, 1990-1998. Chicago: Reference and User Services Association, ALA, 1999.

Acknowledgements

The following RUSA/RSS Evaluation of Reference and User Services Committee members spent many hours researching, writing, and reviewing the Guide.

Lisa Horowitz (MIT), Chair, 2002-2003
Lanell Rabner (Brigham Young), Guidelines Project co-chair
Susan Ware (Pennsylvania State), Guidelines Project co-chair
Gordon Aamot (University of Washington)
Jake Carlson (Bucknell)
Chris Coleman (UCLA)
Paula Contreras (Pennsylvania State)
Leslie Haas (University of Utah)
Suzanne Lorimer (Yale)
Barbara Mann (Emory)
Elaina Norlin (University of Arizona)
Cindy Pierard (University of Kansas)
Nancy Skipper (Cornell)
Judy Solberg (George Washington)
Lou Vyhnanek (Washington State)
Chip Stewart (CUNY)

Jake Carlson, ERUS Chair, 2004
Barbara Mann, ERUS Chair, 2005
Jill Moriearty, ERUS Chair, 2006
Gregory Crawford, ERUS Chair, 2007
David Vidor, ERUS Chair, 2008

Tiffany Walsh, 2010-2011
Robin Kinder, 2010-2011
Jan Tidwell, 2010-2011
Richard Caldwell, 2010-2011


Discussion: How are reference data collected and used in 2016?

by Rebecca Graff on Wed, Jun 1, 2016 at 12:38 pm

How are reference data collected and used in 2016?

That's what we want to know in order to assess and improve service quality. Please help us by completing our brief (3 minutes max) survey. You will be asked to respond to questions and upload a screenshot (instructions included!).

Please complete our survey, https://smu.az1.qualtrics.com/SE/?SID=SV_0TjVwci8VtboGOh, by June 15.

 

Also, if you'd like to do a little extra, we'd appreciate it if you would forward this message to other groups so we get responses from the widest pool possible.

 

Thank you! We appreciate your assistance.

ALA, RUSA, RSS, Evaluation of Reference & User Services committee


Online Doc: Meeting Minutes, 2/26/2016

by Rebecca Graff on Fri, Mar 11, 2016 at 05:06 pm

 

 


Attendees:

Ellen Keith, Jane Stephens, Jenise Overmier, Jeremy Walker, Jerilyn Marshall, Paula Dempsey, Rebecca Graff

 

1) Research Project

We will:

  • Request screenshots or PDFs of their data collection forms to ascertain what information is collected and extrapolate what has changed from the hash-mark days.
  • Get basic background information, such as type of library (academic, archive, medical, public, school, special, etc.) and how they use data (reporting, staffing decisions, hours, training, etc.).
  • Ask a few open questions, such as: What is the most useful thing you do with your data? What would you like to learn about your service that you do not currently measure?
  • Code the submissions into categories and analyze the results.

It was noted that ACRL has been updating what data it collects and how it defines reference; see Instructions and Definitions for 2015 ACRL Annual Survey, https://acrl.countingopinions.com/docs/acrl/Instructions_definitions_2015.pdf

 

 

Timeline & Responsibilities

What | Who | When
Create Google doc (https://docs.google.com/document/d/1TWf9aRJCbdukqiZDu-vwU3ADDDMb2Vt4Cr1OGmYElt4/edit) | Jenise | 2/26
Develop possible questions | Everyone | 3/11
Set up meeting for determining questions | Rebecca | 3/8
Determine actual questions for survey | Everyone | Mid-March
IRB Approval | Rebecca | March
Setting up Qualtrics Survey | Rebecca | Late March
What lists should we send to?* | Jenise | March
Disseminate Survey | | April
Analyze data | Everyone | May-June
Report findings to all lists surveyed, etc. | Everyone | Late June

 

*rusa-l, libref-l, public library ref list???, rss-l, ...heads of ref depts.?  Community college, special libs, and more

 

2) Committee Review - Comments? Clarifications?

Overall, looked accurate and raised good questions. There is, generally, a move toward research guide content encouraging the process of research rather than just producing a bibliography. Similarly, reference interactions should be considered as part of the research process - instruction & metacognition.

 

3) RUSA webinar proposals: Submit by March 8, 2016.  Do we want to make a proposal?

Not yet. After we have completed our survey, we can have a more structured discussion.


Online Doc: Meeting Minutes, October 23, 2015

by Rebecca Graff on Fri, Oct 23, 2015 at 12:46 pm

Introductions

  • Fay Verburg, Georgia Regents University/Reese Library
  • Jane Stephens, Texas A & M University, Evans Library
  • Jenise Overmier, American University Library
  • Jeremy Walker, Weill Cornell Medical College in Qatar
  • Jerilyn Marshall, University of Northern Iowa Rod Library
  • Paula Dempsey, University of Illinois at Chicago
  • Rebecca Graff, SMU (Southern Methodist University)
  • Terri Miller, Michigan State University Library

 

      * Name in italics = present

 

Goal/s, due by 10/30

  • One suggestion is to find out what data is collected at reference, how it's used, and even where it goes.  With this, we might put together a best practices guide to data collection.
  • Work with the data collected in 2014 regarding reference service models
  • Qualitatively assessing whether reference is meeting our service goals - what's not captured by the numbers? How are people doing this?
  • How to assess service staff (reference, research help, basic info), especially during periods of transition or in tiered service models? Consider affect, correctness, willingness to return, appropriateness to audience, and peer mentoring. We could do a lit review this year and possibly a follow-up survey next year. We'd like to do this, but need to determine whether this has already been done.

Scheduling Virtual Midwinter Meeting for January
We'll have a Doodle Poll in November/December.

 

To do

Rebecca:    Send link to previous survey info asap

All:             Quick lit search by Wednesday, 10/28, to determine viability of this goal

Rebecca:    Set up method of sharing search results (possibly ProQuest Flow)

 


Discussion: Meeting Today!

by Rebecca Graff on Fri, Oct 23, 2015 at 11:11 am

This is a reminder that we are meeting today, in about 50 minutes.

 

1.  Please join my meeting.

https://global.gotomeeting.com/join/590726621

 

2.  Use your microphone and speakers (VoIP) - a headset is recommended.  Or, call in using your telephone.

 

Dial +1 (646) 749-3131

Access Code: 590-726-621

Audio PIN: Shown after joining the meeting

 

Meeting ID: 590-726-621

Event: RUSA RSS Evaluation of Reference and User Services Committee

by ALAConnect Helpdesk (staff) on Mon, Jun 8, 2015 at 11:31 am

Meeting of the RUSA RSS Evaluation of Reference and User Services Committee, held in conjunction with the RUSA RSS All-Committee Meeting and Open House.

Discussion: Current Activities 2014-2015

by Jason Kruse on Fri, Jun 5, 2015 at 01:21 pm

A group made up of members of ERUS and Virtual Reference Services Committee distributed a survey in August/September 2013 with the goal to assess the state of the profession in the provision of virtual reference services. The analysis stage was completed and initial results were reported to ERUS at our ALA Annual meeting. Results will be shared more broadly, and recommendations for regular surveys on this topic will be explored. 


 
ERUS also conducted a survey looking at reference service models and evaluation of service. At the close of the survey, we received a very large number of responses. Initial analysis of the results has begun and the goal is to disseminate the findings broadly. The committee will submit a proposal for a discussion forum at Midwinter 2015 on this topic, with the hope that the survey results can be incorporated or used to inform the discussion.
 
Update: ERUS did compile a short report based on initial survey analysis and did hold a discussion forum at Midwinter 2015 titled: “Reference Services in Transition: Changing Models and Assessing Success.” 


Discussion: Goals 2014-2015

by Jason Kruse on Fri, Apr 10, 2015 at 02:05 pm

ERUS Goals 2014-2015

Status of Goals as of November 4, 2014

1. ERUS submitted a proposal for a discussion forum at ALA Midwinter 2015.

Status of Goals as of September 9, 2014


  1. Based on the large number of responses we received from the Reference Service Models survey, a goal for the coming year is to propose a discussion forum for ALA Midwinter 2015. This discussion will be about changing reference service models and involve the analyzed survey results. The committee will begin working on a proposal for Midwinter in September/October 2014.
  2. The committee will work on compiling a report on results and disseminate the findings broadly on listservs and other channels.
  3. Based on the results of the survey on the current state of virtual reference services, recommendations have been made for the future of this survey. The survey was conducted by an ad hoc group made up of members of ERUS and the Virtual Reference Services Committee. ERUS will work with VRS to determine next steps and the future home of this survey.

 Status update for Goals as of April 10, 2015

  1. ERUS held a discussion session at ALA Midwinter 2015, Reference Services in Transition: Changing Models and Assessing Success. There was a great turnout and a lot of good discussion. We started the session by distributing a report based on the survey results from 2014, which we then sent out widely over listservs. This report and the notes from the session are available on ALA Connect.
  2. The committee is in the process of determining if further analysis of the survey results will be done.
  3. Following the work of the ad hoc group made up of members of ERUS and the Virtual Reference Services Committee, ERUS will work with VRS to determine plans for a regular survey in the future.

 


Online Doc: RUSA RSS-ERUS Results from Reference Service Models Survey

by Jason Kruse on Mon, Feb 16, 2015 at 10:13 am

In January 2014, the RUSA RSS Evaluation of Reference and User Services Committee conducted a survey on current reference service models. The goal of the survey was to provide an overall view of current practices in providing reference services, how service models are changing, and how libraries are assessing their reference services. Attached is a short report based on the initial analysis of the results.

Discussion: Notes from Reference Services in Transition: Changing Models and Assessing Success Discussion at Midwinter 2015

by Jason Kruse on Wed, Feb 11, 2015 at 11:26 am


Attached are the notes from the Reference Services in Transition: Changing Models and Assessing Success discussion at Midwinter 2015, led by the Evaluation of Reference and User Services Committee (RSS).

The session was held on Saturday, January 31, 2015, 3:00-4:00 pm.

Also attached is an initial report from a survey ERUS conducted in January 2014. These results were used as a jump-starter for the discussion session.

Please send any questions to

Jason Kruse

Chair, Evaluation of Reference and User Services Committee (RSS)

jkruse@northwestern.edu



To collect, analyze, and disseminate information to the RUSA membership and profession on qualitative evaluation and quantitative measurements of service which will be used to assist in responsible managerial planning and decision making in reference and adult services; to support research in this area.
