catrionagold , to AcademicChatter group
@catrionagold@mastodon.social avatar

An academic/activist crowdsourcing request:

Who is critically researching, writing or doing cool activism on the environmental impacts of AI?

I’m particularly interested in finding UK-based folks, but all recommendations are appreciated 💕 🙏

@academicchatter

DigitalHistory , to histodons group
@DigitalHistory@fedihum.org avatar

, but prompto! 🤖

In tomorrow's session, Torsten Hiltmann, Martin Dröge & Nicole Dresselhaus (HU Berlin, ) use the example of the 1921 Baedeker travel guide to demonstrate the potential of & prompt-based approaches for in historical text sources.

Open to all!

🔜 When? Wed., 26.06., 4-6 pm, Zoom
ℹ️ Abstract: https://dhistory.hypotheses.org/7870


@nfdi4memory @histodons
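
For anyone curious what a prompt-based NER pass over guidebook prose can look like, here is a minimal sketch assuming an OpenAI-compatible chat API. The model name, prompt wording, and sample sentence are illustrative stand-ins, not the presenters' actual pipeline.

```python
# A sketch of prompt-based NER, assuming an OpenAI-compatible chat API.
# Model name, prompt, and sample sentence are illustrative stand-ins.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Invented guidebook-style sentence, for illustration only.
passage = "Vom Bahnhof führt die Kaiserstraße in 10 Min. zum Hotel Erbprinz."

prompt = (
    "Extract the named entities from the historical German passage below. "
    'Answer with a JSON object {"entities": [{"text": ..., "type": ...}]}, '
    "where type is PERSON, PLACE, or ORGANIZATION.\n\n" + passage
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model works
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # machine-readable output
)

for entity in json.loads(response.choices[0].message.content)["entities"]:
    print(entity["type"], "->", entity["text"])
```
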

UlrikeHahn , to linguistics group
@UlrikeHahn@fediscience.org avatar

Interesting read on representation analysis for Anthropic's model

(just a shame that all analyses are inaccessible, so not computationally reproducible)

@cogsci @linguistics

https://www.anthropic.com/research/mapping-mind-language-model
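
The reproducibility complaint invites at least a toy illustration: the report's method is dictionary learning with sparse autoencoders over model activations, and the shape of that technique can be sketched even without access to the real data. Everything below is assumed: random vectors stand in for the model's activations, and the dimensions and sparsity penalty are invented.

```python
# Toy sparse autoencoder in the spirit of dictionary learning over
# model activations; random vectors stand in for the real (inaccessible)
# activations, and all hyperparameters are illustrative.
import torch
import torch.nn as nn

D_MODEL, D_DICT = 128, 512  # activation dim, dictionary size (assumed)

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        return self.decoder(f), f

sae = SparseAutoencoder(D_MODEL, D_DICT)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3  # sparsity penalty weight (assumed)

for step in range(200):
    x = torch.randn(64, D_MODEL)  # stand-in for real activations
    x_hat, f = sae(x)
    # reconstruction error plus L1 penalty encourages few active features
    loss = ((x_hat - x) ** 2).mean() + l1_coeff * f.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Each learned dictionary direction is then inspected as a candidate interpretable "feature"; with real activations, that inspection step is what the linked report reports on.
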

bibliolater , to science group
@bibliolater@qoto.org avatar

AI deception: A survey of examples, risks, and potential solutions

"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."

DOI: https://doi.org/10.1016/j.patter.2024.100988

@science

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

ModernDayBartleby , to bookstodon group
@ModernDayBartleby@mstdn.plus avatar

And so it begins -
PASSING by Nella Larsen (1929) via Oshun Publishing imbibed at Yanaka Coffee
@bookstodon

ModernDayBartleby OP ,
@ModernDayBartleby@mstdn.plus avatar

ATLAS OF AI by Kate Crawford via Yale University Press care of Arakawa Public Library imbibed at Mr Hippo Coffee
@bookstodon
