Excrubulent:

No, he's right that it's unsolved. Humans aren't great at reliably telling truth from fiction either. If you've ever been in a highly active comment section, you'll notice certain "hallucinations" developing, usually because someone came along sounding confident and everyone just believed them.

We don't even know how to get actual people to do this reliably, so how would a fancy Markov chain do it? It can't. I don't think you solve this problem without AGI, and that's something AI evangelists don't want to think about, because then the conversation changes significantly. They're in this for the hype bubble, not the ethical implications.
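To make the "fancy Markov chain" jab concrete, here's a toy word-level Markov text generator (my own illustrative sketch, not any particular LLM's architecture). It only records which word followed which in its training text, so it can produce fluent-sounding output while having no representation of whether any of it is true:

```python
import random
from collections import defaultdict

# Toy training text; the model only learns word-to-word transitions.
corpus = ("the model sounded confident so everyone believed it "
          "the model was wrong but it sounded confident").split()

# Map each word to the list of words observed to follow it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8):
    """Walk the chain: repeatedly sample a next word from observed followers."""
    word, out = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The output is locally plausible because every transition was seen in the training text, but the model has no mechanism for checking claims against reality, which is the point of the analogy.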
