leftzero,

LLMs are incapable of "recognising" any patterns they haven't been trained on.

And they don't really even recognise those; they're just fancy autocomplete engines, outputting whichever token scores highest, given the input, according to the statistics of their training data.
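To make the "fancy autocomplete" point concrete, here's a minimal sketch of that decoding loop, using a toy bigram counter in place of a real model (the corpus and names here are made up for illustration; an actual LLM scores tokens with a neural network over a long context, but the loop has the same shape: score every candidate next token, emit the top one, repeat):

```python
from collections import Counter, defaultdict

# Toy "training data": the model only ever sees token co-occurrence counts.
corpus = "the cat sat on the mat the cat ate the rat".split()

# "Training": count how often each token follows each other token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def complete(token, steps=5):
    """Greedy decoding: always emit the highest-scoring next token."""
    out = [token]
    for _ in range(steps):
        if token not in counts:
            break  # nothing in the training data follows this token
        token = counts[token].most_common(1)[0][0]  # argmax over scores
        out.append(token)
    return " ".join(out)

print(complete("the"))  # -> "the cat sat on the cat"
```

Nothing in complete() understands anything; it just replays statistics. A real model replaces the counter with a network and the argmax with sampling, but the shape of the process is the same.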

They're pattern-matching machines; there's no recognition, no inner modelling of new knowledge, no self-reference, no understanding of any kind, merely blind statistics.

They're just bigger, fancier ELIZAs, and just as far from any practical form of intelligence, artificial or natural, as ELIZA was.

While I personally do believe that achieving AGI¹ on a Turing machine is possible, LLMs and the way they work are an excellent example in support of John Searle's arguments against it, as laid out in his Chinese room thought experiment.

¹ Or at least something equivalent to human intelligence, or better, by the measures by which we consider ourselves to be intelligent; though it's arguable whether we can really be considered intelligent at all, or whether we're just better, more complex Chinese rooms.
