5gruel,

I'm not convinced by the claim that "a human can say 'that's a little outside my area of expertise', but an LLM cannot." I'm sure the training data contains plenty of examples of qualified answers and expressions of uncertainty, so why wouldn't the model be able to generate that kind of output? I don't see why that in particular would require "understanding". I would suspect that better human reinforcement would make such answers possible.
