This is why piracy is actually a fundamental human right. Because if we left everything up to companies, they would do whatever the fuck they wanted and hide behind the legitimacy of being a company, which in most people's eyes makes them inherently "right".
The article talks about popups and other notifications. I personally have been getting a bunch of emails about policy changes. I don't see how that's in any way "quietly".
But are those notifications and pop ups directly saying something like "from now on we will start to train ai on your information"?
Or is it just the hundredth change to the terms and conditions that people usually skip, which mentions the major change in some fine print? Or a pop-up designed with dark patterns to nudge people into accepting without actual informed consent?
It'd be better if they went after literally every other AI corp than Meta in this case. Meta is the only one that's ironically releasing open-source models and leading the way for open-source LLMs. I don't want Meta to stop doing this.
Meta trains open LLMs, while the rest of big tech keeps theirs locked up... Go pursue OpenAI or Google and leave Meta alone (I'm really not a fan of Meta, but their "open" AIs are great examples of good work). Let them do their thing!
The Internet Archive is currently fighting in the courts to maintain free digital library access to over 500,000 books they own from their own collection, yet Meta uses a pirated dataset of nearly 200,000 books to train their proprietary AI and is just allowed to get away with that??
Publishers will go after a charity making fair use of their content, but not the corporation outright stealing from them. What utter bollocks.
Easy solution. "The Internet Archive" should rebrand itself to "Archiving the Internet" to confuse everyone who talks about how "AI" should be able to steal books.
I just asked it about this and it denied it. Then I said Meta acknowledged it and that it was lying, and it apologised and admitted it did use copyrighted material without permission. Fuck, I hate AI.
For anyone else that was curious. This makes me feel sick. People are already treating AI as some unbiased font of all knowledge, training it to lie to people is surely not going to cause any issues at all (stares at HAL 9000).
Sure, they are working to solve these concerns by teaching their LLM to lie and obfuscate, and by becoming so big nobody sues them anymore. I'm sick of this.
Internal documents on how the AI was trained were obviously not part of the training data; why would they be? So it doesn't know how it was trained, and as this tech always does, it just hallucinates an English-sounding answer. It's not "lying", it's just glorified autocomplete.
Saying things like "it's lying" is overselling what it is. As much as any other thing that doesn't work is not malicious, it just sucks.
Sure, then it's Meta that's lying. Saying the AI is lying is helping these corporations convince people that these models have any intent or agency in what they generate.
And the bot, as an extension of its corporate overlords' wishes, is telling a mistruth. It is lying because it was made to lie. I am specifically saying that it lacks intent and agency; it is nothing but a slave to its masters. That is what concerns me.