nyan,

Yes, but as a solution it's far inferior to not presenting questionable output to the public at all.

(There are a few specific AI/LLM types whose output we might be able to "human-proof"—for instance, if we don't allow image generators to make photorealistic images of any sort for any purpose, they become much more difficult to abuse—but I can't see how you would do it for search engine adjuncts like this without having a human curate their training sets.)
