Core Artificial Intelligence and Machine Learning in Libraries Interest Group


  Core IG Week - Chat notes

    Posted 12 days ago

    Zoom gave us the chat log, which was great, but in its raw form it was a bit inelegant, so I broke it up into categories and anonymized the messages out of respect for folks in attendance who may not want their names tied to their comments.

    Behold: the library AI zeitgeist! And boy, is there a lot here:

    On what’s motivating people’s feelings about AI:

    • Lack of trust that it's accurate. 

    • Loss of real people in jobs

    • Hallucinations

    • So many feelings all over the place, but isn’t it a broad technology with lots of caveats about all these things?

    • Breaking the staffing pipeline by eliminating true entry level work

    • Was just talking to a friend at one of the National Labs about Claude. Many of the scientists are really angry and running around to redo projects

    • Environmental impacts

    • Worried about student creativity and freedom of thought being negatively impacted

    • I dread the polarization, not AI. It's gotten to the point that I can't talk about what I do or enjoy doing without people hijacking the conversation about how bad it all is >.<

    • Naivete.

    • Artificial intelligence is a misnomer; it doesn't think (especially LLMs)

    • I dread the AI companies (and the companies looking to get AI into EVERYTHING) more than I do the technology itself.

    • Strongly concerned about how many people (mostly outside of libraries) default to it for everything

    • devaluing subject matter expertise; devaluing library mission and ethics; illiteracy

    • I also dread the personification of AI. People talking about giving their chatbot "breaks" to "explore" ....it's not a pet.

    • We're really stretching the term "intelligence."

    • I'm very excited about AI in fields like Healthcare. Very skeptical everywhere else

    • I'm with [the previous commenter] on this. Absolutely shoved in everywhere and everything. Consent is key to proper implementation I think.

    • This all seems to be run by people who are too far into science fiction…

      • to which someone appropriately replied, “yes, but not enough that they learn the lessons of sci-fi stories!”

      • Yes, AI is also probably trained on those scifi stories of AI against humanity as well

    • I worry about the misunderstanding of the huge impact AI will have on our society and in our profession. I also think there will be enormous misunderstanding of AI's effect on information and its reliability. We won't be able to detect AI influence.



    On the point that what’s being called “AI” isn’t actually AI but better described as Machine Learning:

    • yes! we've had machine learning tools for YEARS!

    • Astrophysicist Dr. Katie Mack: "Chatbots -- LLMs -- do not know facts and are not designed to be able to accurately answer questions. They are designed to find and mimic patterns of words, probabilistically. When they're "right" it's because correct things are often written down, so those patterns are frequent. That's all."

      • Response from another attendee: this has been a big concern of mine, mainly because the CompSci people I have talked to about the "quality" of responses are missing the qualitative analysis of what is a "good" answer, rather than an 'accurate to the test data set' answer [PM note: a toy sketch of this probabilistic-mimicry point follows below.]
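
    [PM note: to make Dr. Mack's point concrete, here's a toy sketch in Python. A real LLM is a huge neural network, not a bigram table, but the underlying move is the same: sample a plausible next word from learned patterns, with no fact-checking step anywhere. The corpus and names in the snippet are invented for illustration.]

        # Toy bigram "language model": count which word follows which,
        # then sample the next word probabilistically. Real LLMs are far
        # larger neural networks, but the core move is the same: predict
        # a likely next token, not look up a fact.
        import random
        from collections import Counter, defaultdict

        corpus = ("the library is open today . the library is closed sunday . "
                  "the archive is open by appointment .").split()

        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def next_word(prev):
            # Sample in proportion to how often each word followed `prev`.
            counts = follows[prev]
            return random.choices(list(counts), weights=list(counts.values()))[0]

        word, sentence = "the", ["the"]
        for _ in range(6):
            word = next_word(word)
            sentence.append(word)
        print(" ".join(sentence))  # fluent-sounding output, never fact-checked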

     


    On AI usage guidelines/policy:

    • We are encouraged to use it and to engage with it but are not provided guidelines

    • I say "yes" [there is AI policy in my org] on the institutional level, but it's very broad/loose -- use this model, don't put in confidential info, etc.



    On AI in educational contexts:

    • Absolutely agree with you. Like other institutions, we have seen a huge increase in plagiarism due to AI

      • Response from another attendee: Meanwhile, there are also students who are actually doing their own work that are accused of using AI!

      • Response from another attendee: also happening here and causing students and faculty a lot of anxiety and tears

    • People being accused of using AI by AI is so ridic. I do appreciate the ease of regurgitating and summarizing info from dense articles, though.

    • We've engaged in "cognitive offloading," to use a psychologist's term, for as long as we've had IT, but I fear we're at a point where we're handing our volition over to dumb machines. Expediency has become a misplaced virtue.

      • Reply from another attendee: Is it possibly the case that AI is being used as a way to avoid personal responsibility for conclusions/decisions?

      • Response: Certainly. There are serious moral questions about human agency and its supposed removal from those of us using "AI." Look at drone strikes, and the current DoD's use of Claude in Iran, for an extreme but no less real example.

    • Here in Canada, I find our K-12 schools are teaching AI. But when students come to our post-secondary institutions, there is little to no education on how to use it ethically at the post-secondary and graduate level. Trying to have better education for AI is even more difficult when you have an administration that doesn't value its libraries or is resistant to change.



    On who’s using AI at your institution (Note: I should have set this as a “choose all that apply,” but goofed on the slide setup - my bad!):

    • Radio button, so I chose institutional colleagues, but we also see students using it.

    • You are only allowing one answer here, but several apply

    • Who's using AI at work - we can only select 1 answer but there's multiple responses for me

    • Responded "Me" because the team I'm on (Digital Solutions) is heavily using it, but a lot of colleagues around the institution, including library colleagues, are also using it

    • All except "Nobody"

    • Yeah multiple responses also apply in my case.

    • Most popular AI sites are blocked on our computers unless a policy is signed and a training is completed.

    • Use it nearly every day to translate (Google Translate). Also to do research for subject heading and genre heading proposals for SACO

      • Response from another attendee: thank you for knowing that Google Translate is an ML/AI tool!! It blows people's minds when I tell them that. And that Google Translate was mainly created by Rosetta-stone style ML training on the Google Books projects

    • Students are definitely using it.  We have some grant projects.  I could only click one button

    • Our U has its own AI Initiative and the Libraries are supporting it. We started having library-wide training.

    • the answer was nobody/don't know - for me, it was I don't know - I expect we have some vendor solutions that incorporate AI, but who is individually using AI - I have no idea - and then community - I have no doubt people are using AI in the community, but I would have no way of knowing

    • It's becoming increasingly difficult to avoid because of so many tools inserting it without consent or an off button

    • in my MLIS program at SJSU they provide paid ChatGPT. At work at an academic research library,  we have a Slack where we share knowledge. Other groups on campus have shared in open meetings how they've used it in their research, which has been very interesting.

    • As a blunt person, I appreciate the ability to quickly fix my tone with chatbots. I just wish they had a lot more guardrails.

     

    On some of the general sticky ethics of AI:

    • Ghost labor! [Note: outsourcing labor of monitoring/refining AI answers to people in other countries/low wage positions]

    • More data centers are not welcome in many communities due to the environmental impact.

      • Response from another attendee: Eventually there could be data centers in space, where it is already cold, no water needed for cooling. 

      • [PM Note: the science is not sound, see “Can Orbital Data Centers Solve AI’s Power Crisis?”]

      • Another attendee replied: Datacenter-in-space deployment also wreaks havoc with terrestrial astronomy; Starlink has already created quite a mess.

      • Another response: wasn't there something about space trash and gravity that would explain the issue with moving mass from earth to space?

      • Another: yes - there's an energy cost of 5-10 kilowatt-hours per pound placed in orbit, depending on the orbit. Then there's further energy cost to keep it there and/or de-orbit it when it reaches EOL. Elon wants to put up 1M such satellites, so 5-10 million kW-hours just to put them up there, plus ?? kW-hours building them. [PM note: a quick back-of-envelope on these figures follows below.]
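
    [PM note: a quick back-of-envelope in Python using only the figures stated above. The quoted "5-10 million kW-hours" total implicitly assumes one pound per satellite; real satellites weigh far more, so the per-satellite mass below is a placeholder assumption, not a sourced number.]

        # Re-running the chat's launch-energy figures: 5-10 kWh per pound
        # to orbit, 1M satellites. Per-satellite mass is a placeholder.
        KWH_PER_LB_LOW, KWH_PER_LB_HIGH = 5, 10   # from the chat message
        NUM_SATELLITES = 1_000_000                # from the chat message
        LBS_PER_SATELLITE = 500                   # hypothetical mass

        low = KWH_PER_LB_LOW * LBS_PER_SATELLITE * NUM_SATELLITES
        high = KWH_PER_LB_HIGH * LBS_PER_SATELLITE * NUM_SATELLITES
        print(f"{low/1e9:.1f} to {high/1e9:.1f} billion kWh for launch alone")
        # -> 2.5 to 5.0 billion kWh, before building, operating, or de-orbiting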

    • On Deepfakes/info authenticity:

      • I had a day of fury at AI after finding 3 ads on YouTube using fake videos of famous people, including a professor at our school, to sell supplements.

      • Deep fakes are a huge issue. Bias. Control by evil people. The environment. I do think these are things we can fix, though.

      • consent is a major issue and neither corporations nor legislators seem inclined to curb deep fakes

      • One of the major failings leading us where we are is Congress' inaction on ICT for the last quarter-century. Social media especially

    • On AI and copyright:

      • Maybe we need more fundamental revisions to how we approach copyright, intellectual property, etc. It’s not just AI that is wrong—it’s exploiting some weaknesses in how we do stuff

      • My thing is - if publishers are fine with copyright being violated for AI reasons, the AAP needs to lay off about controlled digital lending / 1:1 digital access

      • Authors Alliance is tracking these suits! [PM note: discussion about how AI companies argue fair use in the various suits brought against them for copyright infringement]

        • Elsevier & other publishers come to mind, in terms of lawsuits against AI

        • Stephen King

      • There is this fundamental problem that the way the web works, unless your content is behind a login or even when it is, your computer caches things and that’s copying…. And somehow the courts allowed that way back…

    • On AI use disclosure:

      • Some authors are now even vocally anti-Libby because of their implementation of some AI tools and refusal to actively ban any titles that had AI involvement in their creation

        • Response from another attendee: In Libby's defense, outright banning any AI-created works incentivizes non-disclosure, while allowing it can encourage publisher/authors to be upfront on AI use for those who wish to avoid it (an argument made against banning AI works in AO3)

        • Another response: ^ Same argument used by Steam for videogames

        • Another: how can you require people or publishers to disclose AI use? They don't seem to want to do it on their own…

        • Another: Agreed [...] - it's much more nuanced than much of the internet community is taking into account. My immediate concern was whether authors would/could pull their works from library digital lenders like Libby entirely

        • Another: You can require disclosure, but enforcing it may be difficult. I think of Amazon's direct publishing platform, which requires you to check a box if AI was used to create the work. Presumably, failure to do so would result in your work being removed from the store if caught.

        • Another: I wish people WOULD be more transparent about where and how it is used. In my Reference class we cite the AI and sometimes also share our prompts as part of our disclosure. I think that is a decent way to do it as a researcher. As a company though, idk. I wish it was just a rule that they had to disclose. Maybe as stigma fades on its usage?

        • Sympathy Tower Tokyo is an interesting case study too

        • "Don't buy AI-generated art; buy art from true degenerates."

        • I find it unfortunate how readily people dismiss the (indispensable) human element in art.

    • On AI and Consent: 

      • Transparency builds trust.

      • I love the idea of donors being able to opt-in/opt-out of AI processing of the materials they donate. For example, I work with oral histories and I would like to see AI processing opt-in/opt-out added to releases.

        • Response from another attendee: Oh, that's a really interesting idea



    On verifiability of AI answers:

    • With RAG and agent multistep processes there are ways to try to combat hallucinations and feed in actual verified data, right? If we know what to do [PM note: a minimal sketch of the RAG idea follows this section.]

    • I notice that people are attributing bad data to AI a lot these days, e.g., if it doesn't seem relevant, it must be an AI hallucination. But, human beings have made errors for thousands of years so why is the instinct to say it is AI? Human beings not using AI thought the Sun revolved around the Earth
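
    [PM note: here's a minimal, self-contained Python sketch of the RAG idea raised above: retrieve vetted passages first, then hand the model only those passages with instructions to cite them. The keyword-overlap retrieval stands in for a real vector index, and ask_model() is a deliberate placeholder, not any particular vendor's API; the passages are invented for illustration.]

        # Minimal retrieval-augmented generation (RAG) sketch: ground the
        # model in passages we trust and keep citations attached, so
        # answers stay verifiable.
        VETTED_PASSAGES = {
            "hours": "Main branch hours: Mon-Sat 9am-8pm, closed Sundays.",
            "ill": "Interlibrary loan requests take 5-10 business days.",
        }

        def retrieve(question, k=1):
            # Naive retrieval: score each passage by words shared with the
            # question. A production system would use a vector index instead.
            q = set(question.lower().split())
            scored = sorted(VETTED_PASSAGES.items(),
                            key=lambda kv: -len(q & set(kv[1].lower().split())))
            return scored[:k]

        def ask_model(prompt):
            raise NotImplementedError("plug in your LLM of choice here")

        def answer(question):
            sources = retrieve(question)
            context = "\n".join(f"[{sid}] {text}" for sid, text in sources)
            prompt = ("Answer ONLY from the sources below and cite the [id]. "
                      "If the sources don't cover it, say so.\n"
                      f"{context}\nQuestion: {question}")
            return ask_model(prompt)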

     

    On who to go to with AI Questions at your institution:

    • would have answered "I don't know" for the first four questions

    • We have an AI Group at my library, where we share out findings and interesting things we discover. Not incredibly authoritative though

    • I think the opportunity to talk to other LIS professionals about these issues is very exciting and valuable. I enjoy all the different perspectives.

     

    On AI Use cases:

    • sometimes it's a brainstorming/sounding board conversation more than expertise

    • That's the thing for me - many people are using it, with or without instruction. It is an information literacy/discernment issue for me and I think we should be able to support people that are using it. I find it so interesting how much the policies (or lack thereof) vary between institutions.

      • Response from Peter: aiEDU.org [for AI literacy materials]

    • Resource suggestion: Viewfinder tool (https://www.lib.montana.edu/responsible-ai/viewfinder/) is helpful for exploring different AI implementation scenarios and can help you understand better what will [work]

    On Elon Musk:

    • [PM summary of the sentiment: the reviews are generally not favorable]

     

    Re: Question about “linked data” (RDF) and AI:

    • Who asked this question? Please hit me up; I'm working on a dissertation related to this: Jill Strykowski [PM Note: email removed for privacy reasons]

    • Linked Data and RDF -- The PHAROS project just relaunched their discovery site, and its backend is primarily or entirely RDF, with vectors of collection images and metadata. - https://artresearch.net/resource/start [PM note: a tiny linked-data example follows this section.]

    • Unless you build the model and you know where the data came from and whether it’s copyrighted or not (how could your crawling bots know?)… even if the tools to build the model are open source, how can you rely on the data?

      • Jill: also part of my dissertation question! Spoiler - linked-open-data will be key
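
    [PM note: for anyone curious what "linked data" looks like in practice, here's a tiny self-contained example using the rdflib Python library. The namespace and triples are invented for illustration; they are not PHAROS's actual vocabulary.]

        # A few RDF triples plus a SPARQL query that walks the links:
        # painting -> creator -> name.
        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/art/")
        g = Graph()
        g.add((EX.painting42, EX.label, Literal("Portrait of a Librarian")))
        g.add((EX.painting42, EX.creator, EX.artist7))
        g.add((EX.artist7, EX.name, Literal("Anonymous, 17th c.")))

        rows = g.query("""
            PREFIX ex: <http://example.org/art/>
            SELECT ?label ?name WHERE {
                ?work ex:label ?label ;
                      ex:creator ?artist .
                ?artist ex:name ?name .
            }""")
        for label, name in rows:
            print(f"{label} -- {name}")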

     

    Re: question about how libraries can advocate for stronger AI guardrails: [PM note: my initial answer focused strongly on the importance of actually understanding how these technologies work. Follow-up answer mentioned that a lot of AI entering libraries is through vendors, and brought up the importance of privacy assessments, demanding off switches, and opting out whenever possible/demanding that right from vendors]

    • I don't like the idea that we just charge ahead without guardrails until people "know more"

      • Response from another attendee: Not a bad thing to learn about them. At least we know what it is and how it works. And what we are dealing with.

      • Another response: Being informed doesn't guarantee that people will embrace it. Similarly, resisting implementation doesn't mean ignorance. Plenty of people know "how it works" and see that the trade-offs are not worth it.

    • It is very hard to "walk it back" once it's been unleashed

    • Our privacy assessments happened but were super misguided based on bad assumptions of what needed to be private.

    • So much software won't give you an off-switch, it's against our will

      • Response from another attendee: especially when opt-in and opt-out are set at an institution level

     

    Re: question about advocacy tips when a university doesn’t value its library/AI literacy education




    -------------------------------------------


    ------------------------------
    Peter Musser
    Chair, ALA Core IG for Artificial Intelligence and Machine Learning in Libraries

    Head, Library Services
    ISKME
    He/Him/His
    ------------------------------