eager_eagle

Well, it's not exactly impossible for that reason; it's just unlikely they'd use a discriminator for the task, because a large share of generated content is effectively indistinguishable from human-written text, either because the model was prompted to avoid "LLM speak" or because the output was heavily edited. As a result, they'd risk a high false positive rate.
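To make the false-positive concern concrete, here's a minimal sketch of the Bayes-rule arithmetic behind it. All numbers (detection rate, false positive rate, fraction of LLM-generated posts) are hypothetical assumptions for illustration, not measurements of any real detector:

```python
# Hedged sketch: why even a modest false positive rate makes an
# LLM-text discriminator risky for moderation. All numbers below
# are hypothetical assumptions.

def flagged_precision(tpr: float, fpr: float, base_rate: float) -> float:
    """Fraction of flagged posts that are actually LLM-generated (Bayes' rule)."""
    true_flags = tpr * base_rate            # LLM posts correctly flagged
    false_flags = fpr * (1.0 - base_rate)   # human posts wrongly flagged
    return true_flags / (true_flags + false_flags)

# Assume 30% of posts are LLM-generated, the detector catches 80% of
# them, and it mislabels 20% of human-written posts.
p = flagged_precision(tpr=0.8, fpr=0.2, base_rate=0.3)
print(f"{p:.2f}")  # roughly 0.63: over a third of flagged posts are human-written
```

Under these assumed numbers, acting on the detector's flags would wrongly accuse a human author more than one time in three, which is why a high false positive rate makes the approach impractical.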
