catrionagold , to AcademicChatter group
@catrionagold@mastodon.social avatar

An academic/activist crowdsourcing request:

Who is critically researching, writing, or doing cool activism on the environmental impacts of AI?

I’m particularly interested in finding UK-based folks, but all recommendations are appreciated 💕 🙏

@academicchatter

JustCodeCulture , to anthropology group
@JustCodeCulture@mastodon.social avatar

New review essay on @lmesseri's tremendous new book: ethnography & tech, social hopes, & false dreams of tech solutionism. Also discussing the work of André Brock, Zeynep Tufekci & Kelsie Nabben on Black Twitter, Twitter & ethnographies of DAOs.

@histodons
@commodon
@anthropology
@sociology

https://z.umn.edu/EthnographicSublime

bibliolater , to science group
@bibliolater@qoto.org avatar

Backstabbing, bluffing and playing dead: has AI learned to deceive? – podcast

“Dr Peter Park, an AI existential safety researcher at MIT and author of the research, tells Ian Sample about the different examples of deception he uncovered, and why they will be so difficult to tackle as long as AI remains a black box.”

https://www.theguardian.com/science/audio/2024/may/14/backstabbing-bluffing-and-playing-dead-has-ai-learned-to-deceive-podcast

@science

attribution: Orion 8, Public domain, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Icon_announcer.svg

bibliolater , to science group
@bibliolater@qoto.org avatar

"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."

Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633

@science @technology

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

bibliolater , to science group
@bibliolater@qoto.org avatar

AI deception: A survey of examples, risks, and potential solutions

"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."

DOI: https://doi.org/10.1016/j.patter.2024.100988

@science

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

JustCodeCulture , to sociology group
@JustCodeCulture@mastodon.social avatar

Congratulations to Harvard University History of Science doctoral candidate Aaron Gluck-Thaler on the 2024-25 CBI Tomash Fellowship. We are thrilled to have Aaron as a fellow in the upcoming academic year!

@histodons
@sociology
@commodon

https://z.umn.edu/2024-25-Tomash

bibliolater , to science group
@bibliolater@qoto.org avatar

DeepMind’s AI can ‘predict how all of life’s molecules interact with each other’

"AlphaFold 3 is able to envision how the complex shapes and networks of molecules – present in every cell in the human body – are connected and how the smallest of changes in these can affect biological functions that can lead to diseases."

https://www.independent.co.uk/news/science/deepmind-dna-london-university-of-oxford-university-of-birmingham-b2541665.html

@science

bibliolater , to Archaeodons group
@bibliolater@qoto.org avatar

‘Second renaissance’: tech uncovers ancient scroll secrets of Plato and co

"The project belongs to a new wave of efforts that seek to read, restore and translate ancient and even lost languages with cutting-edge technologies. Armed with modern tools, many powered by artificial intelligence, scholars are starting to read what had long been considered unreadable."

https://www.theguardian.com/books/article/2024/may/03/how-scholars-armed-with-cutting-edge-technology-are-unfurling-secrets-of-ancient-scrolls

@archaeodons

bibliolater , to science group
@bibliolater@qoto.org avatar

"Even though it was hoped that machines might overcome human bias, this assumption often fails due to a problematic or theoretically implausible selection of variables that are fed into the model and because of small size, low representativeness, and presence of bias in the training data [5.]."

Suchotzki, K. and Gamer, M. (2024) 'Detecting deception with artificial intelligence: promises and perils,' Trends in Cognitive Sciences [Preprint]. https://doi.org/10.1016/j.tics.2024.04.002

@science @psychology

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

bibliolater , to science group
@bibliolater@qoto.org avatar

"While ChatGPT-4 correlates closely with established risk stratification tools regarding mean scores, its inconsistency when presented with identical patient data on separate occasions raises concerns about its reliability."

Heston TF, Lewis LM (2024) ChatGPT provides inconsistent risk-stratification of patients with atraumatic chest pain. PLOS ONE 19(4): e0301854. https://doi.org/10.1371/journal.pone.0301854

@science
