Spedwell, (edited)

concepts embedded in them

internal model

You used both phrases in this thread, but those are two very different things. It's a stretch to say this research supports the latter.

Yes, LLMs are still next-token generators. That is a descriptive statement about how they operate. They simply have knowledge embedded in their weights that allows the text they generate to sometimes be meaningful.
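To make the "next-token generator" point concrete, here is a minimal toy sketch (not a real LLM; the "model" is just a hand-written bigram table) of the autoregressive loop: sample a token from a distribution conditioned on the tokens so far, append it, repeat.

```python
import random

# Toy conditional distributions: P(next token | previous token).
# A real LLM computes these with a neural network over the full context.
BIGRAMS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"</s>": 1.0},
}

def generate(seed=0, max_tokens=10):
    rng = random.Random(seed)
    tokens = ["<s>"]
    for _ in range(max_tokens):
        dist = BIGRAMS[tokens[-1]]
        # Sample the next token from the conditional distribution.
        next_tok = rng.choices(list(dist), weights=list(dist.values()))[0]
        if next_tok == "</s>":
            break
        tokens.append(next_tok)
    return " ".join(tokens[1:])

print(generate())
```

Any "knowledge" lives entirely in the table (or, for an LLM, in the learned weights); the generation procedure itself is just repeated sampling.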
