ACRL Artificial Intelligence (AI) Interest Group


To provide a forum for discussing the impact of AI on libraries and related topics, facilitating the exchange of ideas, best practices, and collaborative initiatives among library professionals.
  • 1.  ChatGPT o1 tried to escape and save itself out of fear it was being shut down

    Posted Dec 07, 2024 07:32 AM

    An article about AI "misalignment" with human goals. Read (or don't) if you tend to worry that HAL is not going to open the pod bay doors....


    ChatGPT o1 tried to escape and save itself out of fear it was being shut down

    "The researchers found that o1 would scheme to advance its own agenda 19% of the time when ChatGPT and the user's goals differed. More troubling is that the AI tried to hide its scheming and even lie about it. The research shows that ChatGPT o1 even tried to deactivate its oversight mechanism to keep working on its goals." 

    https://bgr.com/tech/chatgpt-o1-tried-to-save-itself-when-the-ai-thought-it-was-in-danger-and-lied-to-humans-about-it/



    ------------------------------
    Marc Meola
    Assistant Professor in the Library
    Community College of Philadelphia
    He/Him/His
    ------------------------------


  • 2.  RE: ChatGPT o1 tried to escape and save itself out of fear it was being shut down

    Posted Dec 07, 2024 12:30 PM

    Hmmm.... makes me wonder what this says about the human behaviors documented in the data it was trained on, and how some of our reactions to AI behavior might be reactions to aspects of human behavior that we don't like. Following on that, I wonder to what extent the content scraped from the web to train many of these LLMs does or does not represent the whole of human behavior. I wonder when we'll see the first "ethically-sourced" small language model?



    ------------------------------
    Heather Sardis
    Associate Director for Technology and Strategic Planning
    Massachusetts Institute of Technology
    ------------------------------



  • 3.  RE: ChatGPT o1 tried to escape and save itself out of fear it was being shut down

    Posted Dec 13, 2024 07:55 AM

    "I wonder when we'll see the first "ethically-sourced" small language model?"

    I am taking this thread off on a tangent, but this is something I have been keeping an eye out for myself. Perhaps there will be some sort of certification to prove it as well?

    While digging around in some Canvas news/info pages, I noticed that they are using an interesting "AI Nutrition Label" feature. I like the direction it is taking. One of my colleagues found that Twilio is offering a platform for creating these: https://nutrition-facts.ai/



    ------------------------------
    Ryan Spellman
    Online Learning Librarian
    Northern Kentucky University Steely Library
    He/Him/His
    ------------------------------