ACRL Assessment Discussion Group


Charge: To provide a forum for assessment librarians – and those with responsibility for, and interest in, library assessment – to discuss methods, training, results, impact, institutional needs and challenges, and seek solutions to common problems faced by the library assessment community.

Notes from ACRL Assessment Discussion Group, 2014 ALA Annual


    Posted Jul 02, 2014 02:17 PM

    ACRL Assessment Discussion Group
    Saturday, June 28, 1:30 pm to 3:00 pm
    Las Vegas Convention Center


    Part 1: Best Practices in Survey Design


    Presenter: Nisa Bakkalbasi, Assessment Coordinator at Columbia University Libraries.


    How can we be sure that the measures we take are free of error?



    • All we can do is try to reduce the error

    • Survey development process is collaborative and iterative

    • Share draft with colleagues 

    • Write down clear and specific objectives


    Be clear about what you want to know. Develop a matrix that maps each question to the objective it addresses. Pilot your survey with others.
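
    A minimal sketch of such a mapping, assuming hypothetical objectives and question IDs (none of the names below come from the session), might look like this in Python:

        # Sketch of an objectives-to-questions matrix.
        # Objectives and question IDs are hypothetical examples.
        objectives_to_questions = {
            "Measure awareness of interlibrary loan": ["Q1", "Q2"],
            "Identify barriers to using study spaces": ["Q3", "Q4", "Q5"],
            "Gauge satisfaction with research consultations": ["Q6"],
        }

        # Flag questions that map to no objective (and, by inspection, objectives
        # with no questions) -- both suggest the draft needs revision before piloting.
        all_questions = [f"Q{i}" for i in range(1, 8)]
        mapped = {q for qs in objectives_to_questions.values() for q in qs}
        print("Questions with no objective:", [q for q in all_questions if q not in mapped])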


    Principles for Question Wording



    • Keep questions short and easy to read - aim for an 8th-grade reading level (even if participants are not in 8th grade)

    • Questions should be clear, specific and precise

    • Ask only one question at a time 

    • Avoid or define any acronyms, jargon or abbreviations 

    • Construct questions objectively and avoid leading language


    Question types



    • Open ended – Used to ask for problems and solutions

    • Close ended

      • Multiple choice (response options should be exhaustive and mutually exclusive)

      • Rating scale - make it clear what the numbers mean; keep the scale odd, with 5 points usually sufficient (a minimal sketch follows)
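
    A minimal sketch of a fully labeled 5-point scale (the label wording is an illustrative assumption, not from the session):

        # Hypothetical 5-point satisfaction scale with every point labeled,
        # so respondents know what each number means (odd scale, neutral midpoint).
        satisfaction_scale = {
            1: "Very dissatisfied",
            2: "Dissatisfied",
            3: "Neither satisfied nor dissatisfied",
            4: "Satisfied",
            5: "Very satisfied",
        }

        question = "How satisfied are you with the library's study spaces?"
        print(question)
        for value, label in satisfaction_scale.items():
            print(f"  {value} = {label}")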




    Content evaluation: Is this information we can use? Don't ask a question just because we are curious.


    Overall survey design  



    • Include an opening statement and introduction to the survey, with a statement about confidentiality

    • Keep survey short and to the point

    • Keep required questions to a minimum

    • There should be a logical flow to questions:

      • Put demographic questions at the end unless you will be disqualifying people based on a particular demographic 

      • Use skip logic to route respondents past questions that do not apply (see the sketch after this list)

      • Thank users for taking the survey
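
    A minimal sketch of skip logic, assuming a hypothetical screening question (question IDs and wording below are illustrative only):

        # Hypothetical skip logic: respondents who have not used interlibrary loan
        # skip the follow-up questions about that service.
        ORDER = ["Q1", "Q2", "Q3", "Q4", "Q5", "Q6"]

        def next_question(current_id, answer):
            """Return the ID of the next question to display, or None at the end."""
            if current_id == "Q3" and answer == "No":  # Q3: "Have you used interlibrary loan?"
                return "Q6"                            # skip Q4-Q5, the ILL follow-ups
            idx = ORDER.index(current_id)
            return ORDER[idx + 1] if idx + 1 < len(ORDER) else None

        print(next_question("Q3", "No"))   # -> Q6
        print(next_question("Q3", "Yes"))  # -> Q4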




    If you plan to measure change over time, you need to:



    • Use exact question wording from survey to survey

    • Use exact sequencing to maintain context


    Additional Resources related to Survey Design



    • Pew Research Internet Project (http://www.pewinternet.org/datasets/)

    • SurveyMonkey has good documentation (www.surveymonkey.com)

    • Educause has questions that have been tested (www.educause.edu/library/surveys)

    • American Evaluation Association (http://www.eval.org/)

    • Don A. Dillman (books and articles by this expert in survey design)

    • The University of Michigan is offering a six-week MOOC through Coursera (to start July 7) on Questionnaire Design for Social Surveys. (www.coursera.org)


     


    Part 2: Large Effects With Small Effort: The Quest to Leverage Library Data


    Presenter: Joe Zucca, Director for Planning and Organizational Analysis, Univ. of Pennsylvania Libraries.


    Zucca asked us to think about creating a machine that could generate “business intelligence” from all the kinds of library data we are already collecting. The machine would provide an environment for storing data and converting it into information. What do we currently collect?



    • ILMS

    • Apache server

    • COUNTER usage data, non-COUNTER usage data

    • ILLiad

    • Atlas

    • BePress

    • Summon

    • Ares

    • Aeon 


    Our challenge is the scope and complexity of the data and the diversity of its architecture.


    Scenarios 


    The ideal assessment machine would solve these kinds of problems/questions:



    • Audience penetration 

    • Modes of engagement 

    • Resource implications 

    • User satisfaction 

    • Library building use

    • Benchmarking against peers

    • Circulation analysis (print and electronic)

    • Impact of implementing a discovery service

    • Impact of information literacy instruction on student success

    • Faculty usage of library

    • Links to university KPIs

    • Gate swipe logs 


    With so many different units of analysis, we need to make a distinction between data and statistics. Think in terms of an event modeled with a star schema.


    For example, "I logged into PsychInfo" – This is an event that has an environment and can be associated with: 



    • time stamp

    • location

    • demographics

    • acquisition

    • academic credential

    • metadata about the resource


    Using the “event” as the unit of analysis allows us to pull together disparate data.
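
    One way to picture the event-as-unit-of-analysis idea is a star schema: a central fact table of events surrounded by dimension tables for the context listed above. The sketch below uses SQLite from Python; every table and column name is an assumption for illustration, not a description of any actual library system.

        # Sketch of an event-centered star schema: one fact table of usage events,
        # with dimension tables for user, resource, and location context.
        # All table and column names are hypothetical.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE dim_user     (user_id INTEGER PRIMARY KEY, status TEXT, department TEXT);
        CREATE TABLE dim_resource (resource_id INTEGER PRIMARY KEY, title TEXT, vendor TEXT);
        CREATE TABLE dim_location (location_id INTEGER PRIMARY KEY, building TEXT, ip_range TEXT);

        -- One row per event (e.g. a database login), keyed to the dimensions
        -- that describe its environment.
        CREATE TABLE fact_event (
            event_id    INTEGER PRIMARY KEY,
            event_type  TEXT,
            occurred_at TEXT,
            user_id     INTEGER REFERENCES dim_user(user_id),
            resource_id INTEGER REFERENCES dim_resource(resource_id),
            location_id INTEGER REFERENCES dim_location(location_id)
        );
        """)

        # Example of a question the schema can answer: database logins by user status.
        rows = conn.execute("""
            SELECT u.status, COUNT(*) AS logins
            FROM fact_event e JOIN dim_user u ON e.user_id = u.user_id
            WHERE e.event_type = 'database_login'
            GROUP BY u.status
        """).fetchall()
        print(rows)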


    There is “no shrink-wrapped solution to this problem.”  But what are some scalable solutions? 


    The library is a lens on what faculty do, and the administration needs to know this. For instance, the library has business intelligence that the institution should want about what faculty are doing: the demographics of users cross-tabulated with the resources they are using. And the data need not be limited to one institution. Systems like VIVO (really a researcher-intelligence platform) are emerging that help us understand faculty collaborations and information use and consumption.


    We need to:



    • Get mechanical advantage over what we already have

    • Think of this as a way of leveraging library services.


    --Nancy B. Turner