gravitas_deficiency (edited)

That’s great. But that’s not how it’s being marketed and sold to the public. It’s being sold as an oracle (as in crystal ball, not database). And it’s misleading and hurting people as a result.

I’ll reiterate: An LLM has no comprehension of what it says.

It’s a matter of engineering ethics, on multiple levels:

  • the training data in the vast majority of cases is outright stolen
  • it’s being sold as something that it’s not, and the result is causing real damage to people and society in a ton of ways we’re still discovering
  • most people deeply involved in developing LLMs, and basically all of the technical leadership, are categorically ignoring and abdicating any and all responsibility for this “magical” new system they’ve made. We’ve seen this before with social networking. We know where this road leads.

I’m not saying the tech should be banned. That’s obviously idiotic. Neural nets can be - and are - used for tons of fascinating and excellent applications. It’s just my staunch opinion that LLMs are a terrible application of the tech at this stage of development, and it’s particularly terrible that OpenAI/Microsoft/etc are aggressively foisting this technology on the public while simultaneously refusing to take any ethical responsibility for it.
