newsguardtech.com

jaden, to Technology in Top 10 Generative AI Models Mimic Russian Disinformation Claims A Third of the Time, Citing Moscow-Created Fake Local News Sites as Authoritative Sources

I wonder how well that percentage matches up with the percent of Americans who believe those sites, too. Would an LLM trained on the raw internet have a fairly proportional spectrum of beliefs to the American public?

SnotFlickerman, to Technology in Top 10 Generative AI Models Mimic Russian Disinformation Claims A Third of the Time, Citing Moscow-Created Fake Local News Sites as Authoritative Sources

I don't know how many times we have to keep saying this:

LLMS HAVE NO INTENTION OR ABILITY TO TELL TRUTH FROM FICTION, SO WHEN THEY APPEAR CONFIDENT THEY ARE BULLSHITTING, EVEN WHEN THEY ARE CORRECT.

Even a bullshitter can be correct sometimes; that doesn't suddenly make it not bullshit. Even when LLMs get it right, they're still bullshitting.

This isn't complicated. They don't think. They have no concept of truth. They just fabricate sentences from previously copied sentences; there is no intention, no thought, no planning, no reflection.

The groups producing these LLMs are just scraping the entire internet; they don't care how much of it is lies. There seems to be very little curation going on.

Anyone who expected anything other than this outcome is an idiot who isn't paying attention.

jaden,

It's just weird that we get so much humanlike reasoning from them anyway. The jury's still out on whether our brains learn in an autoregressive manner like that, too. I'm finding a lot of really cool results in my research by tinkering with the idea that a developing brain might just be constantly trying to guess what's happening next.

Seems pretty plausible to me that passive learning in humans works similarly to next-token prediction in transformers.
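For anyone unfamiliar with what "next-token prediction" actually means in practice, here's a minimal sketch of the training objective (assuming PyTorch; the vocab_size, embed_dim, and seq_len values and the random stand-in corpus are purely illustrative, and the single linear layer stands in for a real transformer stack):

```python
# A minimal sketch of the autoregressive next-token objective:
# given tokens t_1..t_k, predict t_{k+1}.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, embed_dim, seq_len = 50, 32, 16  # illustrative values

# Toy "model": embedding -> linear head over the vocabulary.
# Real transformers put attention blocks in between; the objective is the same.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (8, seq_len))  # stand-in for real text

for step in range(100):
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # targets shifted by one
    logits = model(inputs)                           # (batch, seq-1, vocab)
    # The model is scored only on how likely its next-token guess is;
    # nothing in this loss asks whether the continuation is *true*.
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The loss only measures how well the model imitates the distribution of its training text, which is exactly why a corpus seeded with disinformation yields confident-sounding disinformation.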
