By our powers combined, we'll exceed 2% market share!
(no actually, please support linux. I just switched like a month ago and while it's so much better than windows there are so many petty annoyances that will never get resolved unless more people bitch about it and that kind of support needs more users)
So Mustafa steals from the entire world and justifies it by pointing to an abstraction that cannot be proven. The theft is already complete, so they can admit it now and throw billions at corrupt judges over a decade of litigation, by which point it will be too late.
Man this is fucking asinine. No one hates you. Certainly not the actual researchers and engineers building these products.
Capitalism fucks over everyone who's not immediately useful. AI is just modelling algorithms after neurons and discovering that that lets us solve a whole new class of fuzzy pattern matching problems.
The two of them together promise to fuck us over even more, because fuzzy pattern matching was one of the main things we used to be better at than computers. But the solution is not to remove the new technology from the equation; it's to remove the old and broken system of resource allocation that has fucked us, and continues to fuck us, no matter what.
Look at this paid AI influencer, everybody! Who pays you, Mustafa Jr.? Most everyone knows that AI is gigging them now. When you steal from the world, that is definite hate, but it was meant in the aggregate, you stupefied, sanctimonious simpleton.
P.S. Take your "Capitalism Sucks" Marxist bullshit back to Russia, Vatnik and take Mustafa with you.
Is it that, or is it that the laws are selectively applied to the little guys and ignored once you make enough money? It certainly looks that way. Once you've achieved a level of "fuck you money," it doesn't matter how unscrupulously you got there. I'm not sure letting the big guys get away with it while the little guys still get fucked over is as big of a win as you think it is.
Examples:
The Pirate Bay: Only made enough money to run the site and keep the admins living a middle class lifestyle.
VERDICT: Bad, wrong, and evil. Must be put in jail.
OpenAI: Claims to be non-profit, then spins off for-profit wing. Makes a mint in a deal with Microsoft.
VERDICT: Only the goodest of good people and we must allow them to continue doing so.
The IP laws are stupid but letting fucking rich twats get away with it while regular people will still get fucked by the same rules is kind of a fucking stupid ass hill to die on.
But sure, if we allow the giant companies to do it, SOMEHOW the same rules will "trickle down" to regular people. I think I've heard that story before... No, they only make exceptions for people who can basically print money. They'll still fuck you and me six ways to Sunday for the same.
But yeah, somehow, the same rules will end up being applied to us? My ass. They're literally jailing people for it right now. If that wasn't the case, maybe this argument would have legs.
The laws are currently the same for everyone when it comes to what you can use to train an AI with. I, as an individual, can use whatever public facing data I wish to build or fine tune AI models, same as Microsoft.
If we make copyright laws even stronger, the only ones getting locked out of the game are the little guys. Microsoft, Google, and company can afford to pay ridiculous prices for datasets. What they don't own mainly comes from aggregators like Reddit, Getty, Instagram, and Stack.
Boosting copyright laws would essentially kill all legal forms of open source AI. It would force the open source scene underground as a pirate network and lead to the scenario you mentioned.
Yes, it is a travesty that people are being hounded for sharing information, but the solution to that isn't to lock information up tighter by restricting access to the open web, and saying that if you download something we put up to be freely accessed, and then use it in a way we don't like, you owe us.
The solution to bad laws being applied unevenly isn't to apply the bad laws to everyone equally; it's to get rid of the bad laws.
"Copying is theft" has been the argument of corporations for ages, but when they want our data and information to integrate into their businesses, suddenly they have the right to it.
If copying is not theft, then we have the rights to copy their software and AI models, as well, since it is available on the open web.
If copying is not theft, then we have the rights to copy their software
No, we don't. Copying copyrighted material is copyright infringement, which is illegal. That still doesn't make it theft, though.
Oversimplifying the issue makes for an uninformed debate.
You realize that half of Lemmy is tying themselves in inconsistent logical knots trying to escape the reverse conundrum?
Copying isn't stealing and never was. Our IP system that artificially restricts information has never made sense in the digital age, and yet now everyone is on here cheering copyright on.
There's a clear difference between a guy in his basement on his personal computer sampling music the original musicians almost never saw a single penny from, and a megacorp trying to drive creative professionals out of the industry in the hopes they can then proceed to hike up the prices to use their generative AI software.
Yeah, I'm not a fan of AI but I'm generally of the view that anything posted on the internet, visible without a login, is fair game for indexing a search engine, snapshotting a backup (like the internet archive's Wayback Machine), or running user extensions on (including ad blockers). Is training an AI model all that different?
None of those things replace that content, though.
Look, I dunno if this is legally a copyrights issue, but as a society, I think a lot of people have decided they're willing to yield to social media and search engine indexers, but not to AI training, you know? The same way I might consent to eating a mango but not a banana.
Yes, it kind of is. A search engine just looks for keywords and links, and that's all it retains after crawling a site. It's not producing any derivative works, it's merely looking up an index of keywords to find matches.
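To make the contrast concrete, here's a rough sketch of the kind of keyword (inverted) index a crawler might retain: just a map from terms to the pages containing them, with none of the original prose kept. All names and pages below are made up for illustration.

```python
# Minimal sketch of an inverted index: after crawling, all that survives
# is term -> set of URLs. The pages' actual text is not reproduced.

def build_index(pages):
    """pages: dict mapping url -> page text. Returns term -> set of urls."""
    index = {}
    for url, text in pages.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(url)
    return index

def search(index, query):
    """Return the urls containing every term in the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

# Hypothetical crawled pages
pages = {
    "example.com/a": "copyright law and fair use",
    "example.com/b": "fair use in AI training",
}
index = build_index(pages)
print(search(index, "fair use"))  # both pages match
```

The point is that nothing in `index` can regenerate the source pages; it only points back to them.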
An LLM can essentially reproduce a work, and the whole point is to generate derivative works. So by its very nature, it runs into copyright issues. Whether a particular generated result violates copyright depends on the license of the works it's based on and how much of those works it uses. So it's complicated, but there's very much a copyright argument there.
That depends, do you copy verbatim? Or do you process and understand concepts, and then create new works based on that understanding? If you copy verbatim, that's plagiarism and you're a thief. If you create your own answer, it's not.
Current AI doesn't actually "understand" anything, and "learning" is just grabbing input data. If you ask it a question, it's not understanding anything, it just matches search terms to the part of the training data that matches, and regurgitates a mix of it, and usually omits the sources. That's it.
It's a tricky line in journalism, since so much of it is borrowed, and it's likewise tricky with AI. The main difference, IMO, is attribution: good journalists cite sources; AI rarely does.
An LLM can essentially reproduce a work, and the whole point is to generate derivative works. So by its very nature, it runs into copyright issues.
Derivative works are not copyright infringement. If LLMs are spitting out exact copies, or near-enough-to-exact copies, that’s one thing. But as you said, the whole point is to generate derivative works.
They absolutely are, unless it's covered by "fair use." A "derivative work" doesn't mean you created something that's inspired by a work, but that you've modified the work and then distributed the modified version.
I'm not in favor of piracy or LLMs. I'm also not a fan of copyright as it exists today (I think we should go back to the 1790 US definition of copyright).
I think a lot of people here on lemmy who are "in favor of piracy" just hate our current copyright system, and that's quite understandable and I totally agree with them. Having a work protected for your entire lifetime sucks.
The problem with copyright has nothing to do with term limits. Those exacerbate the problem, but the fundamental problem with copyright and IP law is that it is a system of artificial scarcity where there is no need for one.
Rather than rewarding creators when their information is used, we ham-fistedly try to prevent others from using that information, so that people sometimes have to pay them to use it.
Capitalism is flat out the wrong system for distributing digital information, because as soon as information is digitized it is effectively infinitely abundant which sends its value to $0.
Copyright is not a capitalist idea, it's collectivist. See copyright in the Soviet Union, the initial bill of which was passed in 1925, right near the start of the USSR.
A pure capitalist system would have no copyright, and works would instead be protected through exclusivity (i.e., paywalls) and DRM. Copyright is intended to promote sharing by providing a period of exclusivity (a temporary monopoly on a work). Whether it achieves those goals is certainly up for debate.
Long terms go against any benefit to society that copyright might have. I think it does have a benefit, but that benefit is pretty limited and should probably only last 10-15 years. I think eliminating copyright entirely would leave most people worse off and probably mostly benefit large orgs that can afford expensive DRM schemes in much the same way that our current copyright duration disproportionately benefits large orgs.
Copyright infringement is not theft, and training models is not copyright infringement either. We would need a law equivalent to when an artist says "I was inspired by someone else": one that makes it specifically illegal to do that without permission if you use a machine.
That would force big tech to pay a pittance for it and would instakill all the small players.
That's a copyright-infringement strawman argument. When considering AI, we are not talking about legal copyright infringement in the relationship between humans and AI. Humans are mostly concerned with being obsoleted by Big Tech, so the real issue is intellectual property theft.
What I see is a system of laws that came about during the Middle Ages and have been manipulated by the powers that be to kill off any good parts of them.
We all knew copyright was broken. It was broken before my grandparents were born. It didn't encourage artists or promise them proper income; it didn't allow creations to gradually move into the public domain. It punished all forms of innovation, from player pianos to fanfiction on Tumblr.
He spoke carelessly, but he didn't exactly say what the author said he said. You can in fact do many things with it. Copyright doesn't care what you do if you aren't copying. That's the definition of the word.
No. It's only illegal if you republish what you scrape. Absolutely nothing prevents any company from scraping the web and using that information internally.
I think that depends how you write your web scraper. Of course the web scraper is going to load the page, just like your web browser does, which by all accounts is not an issue. What happens after the page is loaded depends on how the software is written.
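As a rough illustration of "what happens after the page is loaded," here is a sketch of two scrapers built on Python's standard `html.parser`. Both parse the same page; they differ only in what they retain afterwards. The HTML and class names are hypothetical.

```python
# Two hypothetical scrapers that "load the page" identically but keep
# different things afterwards.
from html.parser import HTMLParser

class LinkOnlyScraper(HTMLParser):
    """Retains only outgoing links, like a crawler building an index."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

class FullTextScraper(HTMLParser):
    """Retains all visible text, as a training-data collector might."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

html = '<p>Hello <a href="/next">world</a></p>'

link_scraper = LinkOnlyScraper()
link_scraper.feed(html)
print(link_scraper.links)   # ['/next']

text_scraper = FullTextScraper()
text_scraper.feed(html)
print(text_scraper.chunks)  # ['Hello', 'world']
```

Same fetch, same parse; the legal and ethical questions people argue about here live entirely in which of the two `handle_*` policies the software implements.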
Well, I happen to have a great deal of respect for and routinely offer my support to those who suffer from mental illnesses, so maybe find a better way to say this that doesn't denigrate disabled people.
No one ever tells you this, but you can just take the ducks. Just like with the city pigeons. Just make sure you don't take a government drone by accident.
Microsoft AI boss Mustafa Suleyman incorrectly believes that the moment you publish anything on the open web, it becomes “freeware” that anyone can freely copy and use.
When CNBC’s Andrew Ross Sorkin asked him whether “AI companies have effectively stolen the world’s IP,” he said:
That certainly hasn’t kept many AI companies from claiming that training on copyrighted content is “fair use,” but most haven’t been as brazen as Suleyman when talking about it.
Speaking of brazen, he’s got a choice quote about the purpose of humanity shortly after his “fair use” remark:
Suleyman does seem to think there’s something to the robots.txt idea — that specifying which bots can’t scrape a particular website within a text file might keep people from taking its content.
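For reference, the robots.txt mechanism he's alluding to is just a plain-text file of requests, not an enforcement mechanism; a crawler honors it only voluntarily. A hypothetical example that disallows two AI crawlers (GPTBot and CCBot are real user-agent strings used by OpenAI and Common Crawl) while allowing everything else:

```text
# robots.txt — served at the site root
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```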
Disclosure: Vox Media, The Verge’s parent company, has a technology and content deal with OpenAI.
The original article contains 351 words, the summary contains 139 words. Saved 60%. I'm a bot and I'm open source!
I got the math the wrong way around but read the bottom of the bot's post. The bot's job is to cut the fluff out of articles, and it copy/pastes the remaining text for us to read here.
So my comment should have said 40%, but the point was if we're comparing what the bot did with your coworkers talking about a game, it'd be more akin to them reciting the commentator verbatim.
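For the record, the arithmetic works out like this, using the word counts from the bot's own disclosure line:

```python
# Word counts quoted by the summary bot
original, summary = 351, 139

remaining = summary / original  # fraction of the article kept
saved = 1 - remaining           # fraction of the article cut

print(f"kept {remaining:.0%}, saved {saved:.0%}")  # kept 40%, saved 60%
```

So "saved 60%" and "kept 40%" describe the same summary; the two comments above were just naming different sides of the same ratio.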
I thought that even discussing the game without the express permission of the media company you used to watch it and the sports league was a violation. Not sure why you're bringing commentary-on-commentary into it. Again, not a sportsball guy, but when I do hear people talk about sports, they're talking about the sports, not about the person talking about sports.
Oh yeah, tell me about intellectual property, patent, invention, and ideation "thievery": was it still there afterwards? IP theft has been recognized for centuries.
I am so glad humans are never derivative with culture. Just look at the movie The Fast and Furious. If we were making derivative works we would live in some crazy world where that would be a franchise with ten movies, six video games, a fashion line, board games, toys, theme park attractions, and an animated series that ran for six seasons.
Perfect, I will actually just start putting copyright statements both on my site and in the source code. Ughhhh!!! But fine, if you wanna go down this rabbit hole, LFG bitch!!!!