Jimmyeatsausage,

LLMs are not general AI. They are not intelligent. They aren't sentient. They don't even really understand what they're spitting out. They can't even reliably do the one thing computers are typically very good at (computational math), because they're just assembling sequences of characters that are meaningless to them, in the most statistically likely order based on their training data.
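Concretely, the core loop is nothing more than "pick the token that most often followed this context in the training data." Here's a toy sketch of that idea in Python (the vocabulary and probabilities are made-up numbers for illustration, not from any real model):

```python
# Toy sketch of greedy next-token prediction.
# Hypothetical conditional probabilities: given the context so far,
# how likely is each candidate next token? (Invented values.)
next_token_probs = {
    ("2", "+", "2", "="): {"4": 0.81, "5": 0.07, "22": 0.05, "four": 0.07},
}

def predict_next(context):
    # The model never *computes* 2 + 2; it just emits whichever token
    # most frequently followed this context in its training corpus.
    probs = next_token_probs[tuple(context)]
    return max(probs, key=probs.get)

print(predict_next(["2", "+", "2", "="]))  # "4" -- by statistics, not arithmetic
```

The answer comes out right only because "2 + 2 = 4" shows up a lot in text, which is exactly why the math gets unreliable once the numbers get unusual.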

When LLMs feel sentient or intelligent, that's your brain playing a trick on you. We're hard-wired to look for patterns and group things together based on those patterns. LLMs are human-speech prediction engines, so it's tempting and natural to group them with the thing they're emulating.
