Measuring and Assessing Reference Services and Resources: A Guide

Introduction

Measuring and Assessing Reference Services and Resources: A Guide offers an expansive definition of reference service, assessment planning advice, and measurement tools to assist managers in evaluating reference services and resources. The measurement tools presented here are fully analyzed for validity and reliability in The Reference Assessment Manual, RASD and Pierian Press, 1995. Where formally validated tools were not available, bibliographic references to assessment methods reported in the literature are provided.

For a more comprehensive analysis of reference service assessment, consult these key reference works:

  • Reference Assessment & Evaluation. Diamond, Tom and Mark Sanders (eds). Routledge, 2006.
  • Assessing Reference and User Services in a Digital Age. Novotny, Eric. Haworth, 2006.
  • Understanding Reference Transactions: Transforming an Art into a Science. Saxton, Matthew L. and John V. Richardson, Jr. Academic Press, 2002.
  • Evaluating Reference Services: A Practical Guide. Whitlatch, Jo Bell. American Library Association, 2000.
  • The Reference Assessment Manual. RASD and Pierian Press, 1995.

1.0 Definition of Reference

Reference Transactions are information consultations in which library staff recommend, interpret, evaluate, and/or use information resources to help others to meet particular information needs. Reference transactions do not include formal instruction or exchanges that provide assistance with locations, schedules, equipment, supplies, or policy statements. Reference Work/Services includes reference transactions and other activities that involve the creation, management, and assessment of information or research resources, tools, and services.

  • Creation and management of information resources includes the development and maintenance of research collections, research guides, catalogs, databases, web sites, search engines, etc., that patrons can use independently, in-house or remotely, to satisfy their information needs.
  • Assessment activities include the measurement and evaluation of reference work, resources, and services.

Approved by RUSA Board of Directors, January 14, 2008

2.0 Planning Reference Assessment

Before beginning an assessment project, develop a clear statement of the specific questions you want to answer, the measurable data needed to answer your questions, and the performance or quality standards you will use to measure your success. Next, choose assessment tools that are relevant to your stated goals. Modify existing tools to meet your needs and always pretest your tool on a small representative sample of data or subjects. Finally, to have greater confidence in the validity of your results, use more than one assessment tool.

Basic Questions to Consider When Assessing Reference Services and Sources

What questions are you trying to answer?
Clearly define your questions before proceeding to measurement, since the questions themselves will help determine the standards of performance or quality you will set, the instrument(s) you will use to collect data, and the techniques you will use to analyze your data.

What performance or quality standards will you use to measure your success?
Always develop goals and measurable objectives that you can use as a benchmark before beginning an assessment project. Comparing your results to such standards will determine whether your objectives have been met. RUSA provides a wide array of standards that can be used for the assessment of college libraries and of reference services. Data from other colleges and universities or from sources such as the Integrated Postsecondary Education Data System (IPEDS) surveys can also be used as benchmarks for comparative purposes.

How are you going to use the data generated?
Your questions will drive the type of data that you need to collect. In addition, the level of data collected (i.e., nominal, ordinal, interval, or ratio) will determine the power of the statistical tests you can use. For example, categorical data such as a respondent’s academic status or major permit grouping and comparison across groups, while continuous data such as the number of reference questions asked support analyses of averages, trends, and correlations.

What measurements will you need to generate the data that you want?
There are many ways to collect data, but the way data is measured impacts how it can be used in analyses. Consider whether you need both qualitative and quantitative measures, since each provides valuable data for analyses. For example, if you want to collect data on the number of reference questions asked, you can use quantitative measures. If you wish to explore the reference interaction itself, you may want to consider qualitative measures.

Can you use other measures to triangulate your data?
Triangulation means collecting data using several different methods so that you have greater support for the results of your analyses. The Wisconsin-Ohio Reference Evaluation Program (WOREP) is one example of an instrument that uses triangulation by collecting data from two different sources (patron and librarian) for each transaction. The more sources of data, the better your analyses will be. Often, you can use qualitative data to support quantitative data and vice versa, but beware of comparing different types of data since they may actually be measuring different things. Thus, triangulation increases the validity of your analyses and results.
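
A minimal sketch of this idea in Python, assuming each transaction has been rated separately by the patron and by the librarian (the transaction IDs, rating scale, and field names are hypothetical, not WOREP's own forms):

    # Triangulation sketch: compare two independent ratings of the same transactions.
    # All values are hypothetical; WOREP collects its data on its own paired forms.
    patron_ratings = {"T001": 5, "T002": 3, "T003": 4}      # patron success rating, 1-5
    librarian_ratings = {"T001": 4, "T002": 3, "T003": 2}   # librarian success rating, 1-5

    shared = set(patron_ratings) & set(librarian_ratings)
    close = sum(abs(patron_ratings[t] - librarian_ratings[t]) <= 1 for t in shared)

    print(f"Transactions rated by both parties: {len(shared)}")
    print(f"Ratings within one point of each other: {close} ({close / len(shared):.0%})")

Where the two sources diverge sharply, the transaction is a candidate for closer qualitative review.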

What methodology do you need to use?
The type of data desired will help determine the data collection instruments required. For example, if you want to measure satisfaction, a survey might be used. If you are examining how to improve your services, a focus group may be the best method. If you wish to determine how to staff a service point, unobtrusive counting measures can be used. These data collection instruments, in turn, help determine the analytical techniques that can be employed to interpret the data.

Have you pre-tested all your data collection instruments?
Always pretest your instruments to ensure that they can be understood by those who will be completing them and that they are actually measuring what you want them to measure. For example, before administering a survey, pretest the survey instrument on a group similar to those who will be completing the survey. Do they understand the questions? Do the given choices cover all the possible responses? Can you code the results easily? Then, test how you plan to analyze the final data. Is the methodology appropriate for the data?

What statistical analytical techniques do you want to use?
The data and its method of measurement will help determine how the data itself is analyzed. Do you have groups of respondents to a survey? If you have two groups, then t-tests may be used; if you have more than two groups, then F-tests (ANOVAs) may be employed. Do you have data that can be correlated? Then a Pearson test of correlation may be used. Statistical analysis software packages, such as SPSS or SAS, can make this step much easier, but make sure you are using the appropriate analytical methods for the data that you have generated. Consult researchers with statistical knowledge to help you run analyses and to help you understand the results.
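
The tests named above can be run in any statistical package; the following minimal sketch uses Python's scipy library with hypothetical satisfaction scores (1-5) simply to show where each test applies:

    # Hypothetical survey data: satisfaction scores grouped by respondent status.
    from scipy import stats

    undergrad = [4, 5, 3, 4, 4, 5, 2, 4]
    graduate = [3, 4, 4, 5, 3, 4]
    faculty = [5, 5, 4, 4, 5]

    # Two groups: independent-samples t-test
    t, p = stats.ttest_ind(undergrad, graduate)
    print(f"t-test (undergraduate vs. graduate): t={t:.2f}, p={p:.3f}")

    # More than two groups: one-way ANOVA (F-test)
    f, p = stats.f_oneway(undergrad, graduate, faculty)
    print(f"ANOVA (three groups): F={f:.2f}, p={p:.3f}")

    # Two continuous variables: Pearson correlation
    questions_asked = [10, 25, 40, 55, 70]
    satisfaction = [3.1, 3.4, 3.9, 4.2, 4.4]
    r, p = stats.pearsonr(questions_asked, satisfaction)
    print(f"Pearson correlation: r={r:.2f}, p={p:.3f}")

As noted above, the results are only as meaningful as the design behind them, so have someone with statistical training confirm that the test matches the data.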

Who is the audience for this assessment or research?
The audience will determine the format that the presentation of the results will take. If you are making a presentation, then the use of software such as PowerPoint with graphs and charts may be appropriate. If you are compiling an annual report, using spreadsheet software such as Excel to generate the charts may be helpful. The audience will also help you determine the type of analyses to perform and how these analyses are actually presented.

3.0 Measuring and Assessing Reference Transactions and Services

3.1 Reference Transactions – Volume, Cost, Benefits, and Quality

Simple tallies of reference transactions, collected daily or sampled, can be interpreted to describe patterns of use and demand for reference services. Managers commonly use transaction statistics to determine appropriate service hours and staffing. Often, volume statistics are reported to consortia to compare local patterns of use and demand to peer libraries and to calculate national norms.
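
A minimal sketch of such a tally in Python, assuming a simple log with one ISO-format timestamp per transaction (the file name and format are hypothetical):

    # Count reference transactions by hour of day and day of week to inform
    # decisions about service hours and desk staffing. File format is hypothetical.
    from collections import Counter
    from datetime import datetime

    by_hour, by_weekday = Counter(), Counter()
    with open("reference_transactions.log") as f:
        for line in f:
            ts = datetime.fromisoformat(line.strip())
            by_hour[ts.hour] += 1
            by_weekday[ts.strftime("%A")] += 1

    print("Busiest hours:", by_hour.most_common(3))
    print("Busiest days:", by_weekday.most_common(3))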

Analysis of reference transactions by type, location, method received, sources used, and subject can be used for collection development, staff training/continuing education, and budget allocation. Analysis of accuracy, behavioral performance, interpersonal dynamics, and patron satisfaction during the reference interview can be used for staff training and continuing education.

Selected Measurement Tools and Bibliographic References
(from Saxton and Richardson, Appendix D)

I. Dependent Variables:

  • Accuracy: Answering Success
  • Client Satisfaction
  • Successful Probe
  • Efficiency – Accuracy/Time
  • Librarian Satisfaction
  • Cost benefit analysis – US$/Unit of Service
  • Unique dependent variables
    • Bunge’s Composite
    • Illinois Index of Reference Performance

II. Independent Variables:

A. The Reference Environment

  • Size of collection
  • Type of library
  • Size of staff
  • Size of professional staff
  • Size of nonprofessional staff
  • Number of volunteers
  • Library expenditures
  • Library income
  • Hours of service
  • Size of service population
  • Circulation
  • Fluctuation in collection
  • Institution’s bureaucratic service orientation
  • Staff availability
  • Level of referral service
  • Arrangement of service points
  • Administrative evaluation of services
  • Use of paraprofessionals at the reference desk
  • Volume of questions

B. The Librarian

  • Experience of librarian
  • Education of librarian
  • For paraprofessionals, amount of in-service training
  • Question-answering duties
  • Librarian’s attitude toward question-answering duties
  • Duties other than question answering
  • Librarian’s service orientation
  • Librarian’s perception of the collection adequacy
  • Librarian’s perception of personal education
  • Librarian’s perception of other duties
  • Outside reading
  • Membership in associations and committees
  • Age of librarian
  • Sex of librarian

C. The Client

  • User participation in process
  • User perception of librarian’s service orientation

D. The Question

  • Subject knowledge of librarian
  • Subject knowledge of client
  • Number of sources used to answer question
  • Source of answer named
  • Type of question

E. The Dialogue

  • Busyness at the reference desk
  • Communication effectiveness between patron and librarian
  • Amount of time spent with user by reference librarian
  • Type of assistance provided
  • Amount of time the patron is willing to spend

Descriptive Statistics and Measures

  • Number of digital reference questions received
  • Number of digital reference responses
  • Number of digital reference answers
  • Number of questions received digitally but not answered or responded to by completely digital means
  • Total reference activity – questions received
  • Percentage of digital reference questions to total reference questions
  • Digital reference correct answer fill rate
  • Digital reference completion time
  • Number of unanswered digital reference questions
  • Type of digital reference questions received
  • Total number of referrals
  • Saturation rate
  • Sources used per question
  • Repeat users
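
Several of the measures above are simple ratios of counts already being collected. A minimal sketch, with hypothetical figures and the ratio definitions implied by the measure names (e.g., fill rate as correctly answered digital questions over digital questions received):

    # Hypothetical annual counts; definitions follow the measure names above.
    digital_questions_received = 420
    digital_questions_answered_correctly = 311
    total_reference_questions = 5250
    completion_times_hours = [2.5, 18.0, 4.0, 26.5, 1.0]   # per answered digital question

    pct_digital = digital_questions_received / total_reference_questions * 100
    fill_rate = digital_questions_answered_correctly / digital_questions_received * 100
    avg_completion = sum(completion_times_hours) / len(completion_times_hours)

    print(f"Digital share of all reference questions: {pct_digital:.1f}%")
    print(f"Correct answer fill rate: {fill_rate:.1f}%")
    print(f"Average completion time: {avg_completion:.1f} hours")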

Log Analysis

  • Number of digital reference sessions
  • Usage of digital reference service by day of the week
  • Usage of digital reference service by time of the day
  • User’s browser
  • User’s platform

User Satisfaction Measures

  • Awareness of service
  • Accessibility of service
  • Expectations for service
  • Other sources user tried
  • Reasons for use
  • Reasons for non-use
  • Improvements needed/Additional services that need to be offered
  • Satisfaction with staff service
  • Delivery mode satisfaction
  • Impact of service on users
  • User demographic data

Cost

  • Cost of digital reference service
  • Cost of digital reference service as a percent of total reference budget
  • Cost of digital reference service as a percent of total library or organizational budget

Staff Time Expended

  • Percent of staff time spent overseeing technology
  • Percent of staff time spent assisting users with technology

Other Assessment Options

  • Peer Review
  • Enhanced reference transaction logs
  • Librarian discussion groups

Quality Standards (examples)

  • Courtesy
  • Accuracy
  • Satisfaction
  • Repeat Users
  • Awareness
  • Cost
  • Completion Time
  • Accessibility

Tools:

  • Variables Used to Measure Question-Answering Performance. See the complete list of variables, operational definitions, literature review, and statistical formulae in: Understanding Reference Transactions: Transforming an Art into a Science. Saxton, Matthew L. and John V. Richardson, Jr. Academic Press, 2002, Appendix D, p. 130-189.
  • Cost in Staffing Time per Successful Question (Murfin, Bunge, 1989). The Reference Assessment Manual, 1995. Use with the Wisconsin-Ohio Reference Evaluation Program (WOREP) to determine the cost in staff time per successful reference question.
  • Encountering Virtual Users: A Qualitative Investigation of Interpersonal Communication in Chat Reference (Radford, Marie L., 2006). Journal of the American Society for Information Science and Technology
  • Frustration Factor and Nuisance Factor (Kantor, 1980). The Reference Assessment Manual, 1995. Use to estimate reference service accessibility (Frustration Factor) and patron time spent waiting (Nuisance Factor).
  • LAMA–NDCU Experimental Staffing Adequacy Measures (Parker, Joseph, Clark, Murfin, 1992). The Reference Assessment Manual, 1995. Used to estimate reference desk staffing adequacy through data comparison with national norms.
  • Reference Effort Assessment Data (READ) Scale (Gerlich, Bella Karr, 2003). A six-point scale for recording supplemental qualitative data when reference librarians assist users with inquiries or research-related activities, emphasizing the effort, skills, knowledge, teaching moments, techniques, and tools the librarian uses during a reference transaction.
  • Patron Satisfaction Survey PaSS™ (Schall, Richardson, 2002). A 7-point Likert-scale survey of patron satisfaction with an online reference transaction (librarian’s comprehension of the question, friendliness, helpfulness, promptness, satisfaction with the answer).
  • Unobtrusive Data Analysis of Digital Reference Questions and Service at the Internet Public Library: An Exploratory Study (Carter, David S., Janes, Joseph, 2000). Library Trends, 49 (2): 251-265. Study conducted to establish a methodology for the unobtrusive analysis of a digital reference service. Logs of over 3,000 questions were analyzed on the basis of questions asked (subject area, means of submission, self-selected demographic information), how those questions were handled (professional determination of subject and question nature, questions sent back to users for clarification), and whether they were answered (including time to answer) or rejected, as well as which answers received unsolicited thanks.
  • Wisconsin-Ohio Reference Evaluation Program (WOREP) – (Bunge, Murfin, 1983). WOREP is designed to assess the outcome of the reference transaction and to identify factors related to success or lack of success. WOREP provides diagnostic information based on input factors (collections, staff skill and knowledge, subject strengths, types of staff, types of questions) and process factors (communication effectiveness, time spent, technical problems, assistance by directing or searching with, and instruction). The WOREP report also provides both a profile of the users of a specific reference service and a comparison of the library with other libraries that have used WOREP. Note: WOREP was discontinued in 2011, but the questions remain available.

References:

Assessing Reference and User Services in a Digital Age. Novotny, Eric. New York: Haworth, 2006.

Assessing Service Quality: Satisfying the Expectations of Library Customers. Hernon, Peter and Ellen Altman. Chicago: ALA, 2010.

Breidenbaugh, Andrew. Budget planning and performance measures for virtual reference services. The Reference Librarian 46 (95/96): 113-24, 2006.

Fu, Zhuo, Mark Love, Scott Norwood, and Karla Massia. Applying RUSA guidelines in the analysis of chat reference transcripts. College & Undergraduate Libraries 13 (1): 75-88, 2006.

Garrison, Judith. Making reference service count: collecting and using reference service statistics to make a difference. The Reference Librarian 51 (3): 202-211, 2010.

Hernon, Peter. Research and the use of statistics for library decision-making. Library Administration & Management 3: 176-80, Fall 1989.

Larson, Carole A. and Laura K. Dickson. Developing behavioral reference desk performance standards. RQ 33: 349-357, 1994.

Measuring Library Performance: Principles and Techniques. Brophy, Peter. London: Facet, 2006.

McLaughlin, Jean. Reference transaction assessment: a survey of New York state academic and public libraries. Journal of the Library Administration & Management Section 6 (2): 5-20, 2010.

Murfin, Marjorie E., and Charles A. Bunge. A Cost Effectiveness Formula for Reference Service in Academic Libraries. Washington, D.C.: Council on Library Resources, 1989.

Murfin, Marjorie E., and Gary M. Gugelchuk. Development and testing of a reference transaction assessment instrument. College and Research Libraries 48 (4): 314-39, 1987.

Novotny, Eric and Emily Rimland. Using the Wisconsin-Ohio Reference Evaluation Program (WOREP) to improve training and reference services. The Journal of Academic Librarianship 33 (3): 382-392, 2007.

Radford, Marie L. In synch? Evaluating chat reference transcripts. Virtual Reference Desk 5th Annual Digital Reference Conference, San Antonio, TX, November 17-18, 2003.

Radford, Marie L. Encountering virtual users: A qualitative investigation of interpersonal communication in chat reference. Journal of the American Society for Information Science and Technology 57 (8): 1046-1059, June 2006.

Richardson, John. Good models of reference service transactions: Applying quantitative concepts to generate nine characteristic attributes of soundness. The Reference Librarian 50 (2): 159-77, 2009.

Rimland, Emily L. Do we do it (good) well? A Bibliographic essay on the evaluation of reference effectiveness. The Reference Librarian 47 (2): 41-55, 2007.

Ryan, Susan M. Reference transactions analysis: The cost-effectiveness of staffing a traditional academic reference desk. The Journal of Academic Librarianship 34 (5): 389-99, 2008.

3.2 Reference Service and Program Effectiveness

Cost, benefit, and quality assessments of reference services provide meaningful and practical feedback for the improvement of services, staff training, and continuing education. To determine levels of service effectiveness, costs, benefits, and quality, data must be judged in light of specific library goals, objectives, missions, and standards. A variety of measures such as quality or success analysis, unobtrusive, obtrusive or mixed observation methods, and cost and benefit analysis provide invaluable information about staff performance, skill, knowledge, and accuracy, as well as overall program effectiveness.

3.2.1 Cost/Benefits Analysis

In cost-benefit studies, costs are compared to the benefits derived by the patrons served. Patron benefits may be measured in terms of actual or perceived outcomes, such as goals and satisfaction achieved, time saved, failures avoided, money saved, productivity, creativity, and innovation.
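
A minimal sketch of the general idea follows; this is not the Murfin-Bunge Cost Benefit Formula or any published instrument, and every figure is a hypothetical assumption:

    # Illustrative cost-benefit comparison; all figures are hypothetical assumptions.
    staff_hours_at_desk = 1200            # annual hours of reference desk staffing
    hourly_staff_cost = 38.00             # salary plus benefits, per hour
    successful_transactions = 9500        # transactions judged successful
    minutes_saved_per_transaction = 20    # patron time saved, e.g. from survey estimates
    patron_time_value_per_hour = 25.00    # assumed value of an hour of patron time

    cost = staff_hours_at_desk * hourly_staff_cost
    benefit = successful_transactions * (minutes_saved_per_transaction / 60) * patron_time_value_per_hour

    print(f"Cost per successful transaction: ${cost / successful_transactions:.2f}")
    print(f"Estimated benefit-to-cost ratio: {benefit / cost:.2f}")

The published tools listed below ground these inputs in validated data collection rather than assumed values.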

Tools:

Cost Effectiveness Measures (McClure, 1989). Use to measure the cost effectiveness of traditional desk reference service. The Reference Assessment Manual, 1995.

Cost Benefit Formula (Murfin, Bunge, 1977). Use with Reference Transaction Assessment Instrument (RTAI) success data to determine the cost of staff time in relation to the benefit of patron time saved. The Reference Assessment Manual, 1995.

Costing of All Reference Operations (Murphy, 1973). Used to generate profiles of departmental functions and create a dollar estimate for reference service functions. The Reference Assessment Manual, 1995.

"Helps" Users Obtain from Their Library Visits (Dervin, Fraser, 1985). Use to collect data on how library visits specifically helped users in the context of their lives. The Reference Assessment Manual, 1995.

Statistics, measures, and quality standards for assessing digital reference library services: Guidelines and procedures (McClure, Lankes, Gross, Choltco-Devlin, 2002). Includes a variety of assessment tools.

References:

Abels, Eileen. Improving reference service cost studies. Library & Information Science Research 19 (2): 135-52, 1997.

Bunge, Charles A. Gathering and using patron and librarian perceptions of question-answering success. Reference Librarian 66:115-140, 1999.

Bunge, Charles A., and Marjorie E. Murfin. Reference questions--data from the field. RQ 27 (Fall): 15-18, 1987.

Gremmels, Gillian, and Karen S. Lehmann. Assessment of student learning from reference service. College & Research Libraries, 68 (6): 488-491, 2007.

Hubbertz, Andrew. The fallacy in the 55 percent rule. DttP, 35 (3): 15-17, 2007.

Ishihara, Mari. Evaluation of quality of reference services in public libraries. Library and Information Science, 59: 41-67, 2008.

Kuruppu, Pali U. Evaluation of reference services - A review. The Journal of Academic Librarianship, 33 (3): 368-381, 2007.

Marsteller, Matthew and Susan Ware. Models for measuring and evaluating reference costs: A Comparative analysis of traditional and virtual Reference Services. Virtual Reference Desk 5th Annual Conference, San Antonio, Texas, November 17-18, 2003.

McClure, Charles, R. David Lankes, Marilyn Gross, and Beverly Choltco-Devlin. Statistics, Measures, and Quality Standards for Assessing Digital Reference Library Services: Guidelines and Procedures. Information Institute of Syracuse, School of Information Studies; School of Information Studies, Information Use Management and Policy Institute, Florida State University, 2002.

Murfin, Marjorie. Cost analysis of library reference services. Advances in Library Administration and Organization 11: 1-36, 1993.

Powell, Ronald. Impact assessment of university libraries: a consideration of issues and research methodologies. Library & Information Science Research 14: 245-57, July/Sept. 1992.

3.2.2 Quality Analysis - Patron Needs and Satisfaction

The perceptions and needs of patrons are important measures of the quality and impact of reference services. Surveys, combined with other measures such as numerical counts, observation, and focus groups, are commonly used to conduct comprehensive assessments of service performance and patron needs.

Traditional Reference Services

Tools:

  • LibQual+™ - (Association of Research Libraries, 2001). Use to measure user perceptions and expectations of library service quality. LibQUAL+ ™ surveys are used to solicit, track, understand, and act upon users' opinions of library service quality. http://www.libqual.org/
  • Library Anxiety Scale (Bostick, 1993). Use to measure the construct of library anxiety in college students of all ages. The Reference Assessment Manual, 1995.
  • Reference Satisfaction Survey (Van House, Weil, McClure, 1990). Use to evaluate the success of reference as determined through user opinion of the services offered. The Reference Assessment Manual, 1995.
  • Survey of Public Library Users (Yocum, Stocker, 1969). Use to obtain data on patron use of services and how important they consider those same services. The Reference Assessment Manual, 1995.

References:

Cook, Colleen, Fred Heath and Bruce Thompson. ‘Zones of Tolerance’ in perceptions of library service quality: A LibQUAL+™ study. portal: Libraries and the Academy 3 (1): 113-123, 2003.

Evaluating Reference Services: A Practical Guide. Whitlatch, Jo Bell. American Library Association, 2000. [Chapter 4: Surveys and Questionnaires; Chapter 5: Observation; Chapter 6: Individual Interviews and Focus Group Interviews; Chapter 7: Case Studies; Chapter 8: Data Analysis]

Identifying and Analyzing User Needs: A Complete Handbook and Ready-to-use Assessment Workbook with Disk. Westbrook, Lynn. New York: Neal-Schuman, 2001.

Miller, Jonathan. Quick and easy reference evaluation: Gathering users' and providers' perspectives. Reference & User Services Quarterly, 47 (3): 218-222, 2008.

Norlin, Elaina. Reference evaluation: A three-step approach- surveys, unobtrusive observations, and focus groups. College and Research Libraries 61 (6): 546-53, 2000.

Electronic Reference Services

References:

Arnold, Julie and Neal Kaske. Evaluating the quality of a chat service. portal: Libraries and the Academy 5 (2): 177-193, 2005.

Carter, David and Joseph Janes. Unobtrusive data analysis of digital reference questions and service at the Internet Public Library: An exploratory study. Library Trends 49 (2): 251-265, 2000.

Coughley, Karen. Digital reference services: how do the library-based services compare with the expert services? Library Review 53 (1): 17-23, 2004.

Gross, Melissa and Charles McClure. Assessing quality in digital reference services: Overview of key literature on digital reference. Information Use Management and Policy Institute, Florida State University. http://dlis.dos.state.fl.us/bld/Research_Office/VRDphaseII.LitReview.doc

Harrington, Deborah Lynn and Xiaodong Li. Utilizing Web-based case studies for cutting-edge information services issues: A pilot study. Reference & User Services Quarterly 41 (4): 364-379, 2002.

Luo, Lili. Chat reference evaluation: A framework of perspectives and measures. Reference Services Review, 36 (1): 71-85, 2008.

Luo, Lili. Toward sustaining professional development: Identifying essential competencies for chat reference service. Library & Information Science Research, 30 (4): 298-311, 2008.

Mon, Lorri and Joseph W. Janes. The thank you study: User feedback in e-mail thank you messages. Reference & User Services Quarterly, 46 (4): 53-59, 2007.

Pomerantz, Jeffrey, Lorri Mon, and Charles R. McClure. Evaluating remote reference service: A practical guide to problems and solutions. portal: Libraries and the Academy, 8 (1): 15-30, 2008.

Pomerantz, Jeffrey. Evaluation of online reference services. Bulletin of the American Society for Information Science and Technology, 34 (2): 15-19, December 2007/January 2008.

Novotny, Eric. Evaluating electronic reference services: Issues, approaches and criteria. The Reference Librarian 74: 103-120, 2001.

Radford, Marie. In Synch? evaluating chat reference transcripts. Virtual Reference Desk 5th Annual Conference, San Antonio, Texas, November 17-18, 2003.

Ruppel, Margie and Jody Condit Fagan. Instant messaging reference: Users' evaluation of library chat. Reference Services Review 30 (3): 183-197, 2002.

Shachaf, Pnina, Shannon M. Oltmann, and Sarah M. Horowitz. Service equality in virtual reference. Journal of the American Society for Information Science and Technology, 59 (4): 535-550, February 15, 2008.

Shachaf, Pnina and Sarah Horowitz. Virtual reference service evaluation: Adherence to RUSA behavioral guidelines and IFLA digital reference guidelines. Library & Information Science Research, 30 (2): 122-137, 2007.

Stoffel, Bruce and Toni Tucker. E-mail and chat reference: assessing patron satisfaction. Reference Services Review 32 (2), 120-140, 2004.

Ward, David. Measuring the completeness of reference transactions in online chats: Results of an unobtrusive study. Reference & User Services Quarterly 44 (1): 46-56, 2004.

Ward, David. Using virtual reference transcripts for staff training. Reference Services Review 31 (1): 46-56, 2003.

4.0 Measuring and Assessing Reference Resources – Use, Usability, and Collection Assessment

As print and electronic reference collections grow in size and format, they must be continually assessed to determine their relevance, utility, and appropriateness to patrons. Use and usability tests examine how often and how well visitors navigate, understand, and use web sites, electronic subscription databases, free Internet resources, library subject web pages, and other web-based tools such as bibliographies, research guides, and tutorials.

Print Reference Resources

Tools:

  • In-Library Materials Use (Van House, Weil, McClure, 1990). Use to determine total number of items used in the library but not circulated. The Reference Assessment Manual, 1995.
  • Reference Collection Use Study (Arrigona, Mathews, 1988). Use to evaluate which subject areas are most used by librarians to assist patrons and then identify any correlation with the subject areas most used by patrons to answer their own questions. The Reference Assessment Manual, 1995.
  • Strother’s Questionnaire A and B (Strother, 1975). Use to determine faculty awareness and use of reference tools. The Reference Assessment Manual, 1995.

Web-based Reference Resources

Tools:

  • Formal Usability Testing – Observe as patrons use a site to perform given tasks or achieve a set of defined goals.
  • Inquiry – Use interviews, surveys, and focus groups to gather information about patron preferences and use of a particular site.
  • Inspection – Use to evaluate a site against a checklist of heuristics and design principles or simulations of typical user tasks.

References:

Battleson, Brenda, Austin Booth and Jane Weintrop. Usability testing of an academic library Web site: a case study. Journal of Academic Librarianship 27 (3): 188-198, 2001.

Kovacs, Diane K. Building a core Internet reference collection. Reference & User Services Quarterly 39 (3): 233-239, Spring 2000.

Rettig, James. Beyond cool: analog models for reviewing digital resources. Online 20 (6): 52-64, 1996.

Rubin, Jeffrey. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. New York: Wiley, 1994.

Smith, Alastair. Evaluation of Information Sources. (Webliography of information evaluation resources) http://www.vuw.ac.nz/staff/alastair_smith/evaln/evaln.htm

Usability Testing of Library-Related Websites: Methods and Case Studies. Campbell, N., ed. LITA Guide #7. Chicago: LITA/American Library Association, 2001.

Collection Assessment

References:

Bucknall, Tim. Getting more from your electronic collections through studies of user behavior. Against the Grain, 17 (5): 1, 18, 20, November 2005.

Dee, Cheryl, and Maryellen Allen. A survey of the usability of digital, reference services on academic health science library web sites. Journal of Academic Librarianship, 32 (1): 69-78, January 2006.

Drane, Simon. Portals: Where we are and the road ahead. Legal Information Management, 5 (4): 219-222, Winter 2005.

Keller, Michael A. Reconstructing collection development. Against the Grain, 16 (6): 1, 16, 18, 20, December 2004/January 2005.

Puacz, Jeanne Holba. Electronic vs. print reference sources in public library collections. The Reference Librarian, no. 91/92: 39-51, 2005.

Stemper, James A., and Janice M. Jaguszewski. Usage statistics for electronic journals: An analysis of local and vendor counts. Collection Management, 28 (4): 3-22, 2003.

Strohl, Bonnie. Collection evaluation techniques: A short, selective, practical, current, annotated bibliography, 1990-1998. Chicago: Reference and User Services Association, ALA, 1999.

Acknowledgements

The following RUSA/RSS Evaluation of Reference and User Services Committee members spent many hours researching, writing, and reviewing the Guide.

Lisa Horowitz (MIT), Chair, 2002-2003
Lanell Rabner (Brigham Young), Guidelines Project co-chair
Susan Ware (Pennsylvania State), Guidelines Project co-chair
Gordon Aamot (University of Washington)
Jake Carlson (Bucknell)
Chris Coleman (UCLA)
Paula Contreras (Pennsylvania State)
Leslie Haas (University of Utah)
Suzanne Lorimer (Yale)
Barbara Mann (Emory)
Elaina Norlin (University of Arizona)
Cindy Pierard (University of Kansas)
Nancy Skipper (Cornell)
Judy Solberg (George Washington)
Lou Vyhnanek (Washington State)
Chip Stewart (CUNY)

Jake Carlson, ERUS Chair, 2004
Barbara Mann, ERUS Chair, 2005
Jill Moriearty, ERUS Chair, 2006
Gregory Crawford, ERUS Chair, 2007
David Vidor, ERUS Chair, 2008

Tiffany Walsh, 2010-2011
Robin Kinder, 2010-2011
Jan Tidwell, 2010-2011
Richard Caldwell, 2010-2011