I can understand why a project might want to do this until the law is fully implemented and tested in court, but I can tell most of the people in this thread haven’t actually figured out how to use LLMs effectively. They’re not about to replace software engineers, but as a software engineer, I find tools like GitHub Copilot and ChatGPT excellent for speeding up a workflow. ChatGPT, for example, is an excellent search engine that can give you a quick understanding of a topic. It’ll generate small amounts of code more quickly than I could write it by hand. Of course I’m still going to review that code to ensure it’s the same quality hand-written code would be, but overall this is still a much faster process.
The Luddites who hate on LLMs would have complained about the first compilers too, because they could write marginally faster assembly by hand.
Open-source does not equal free. There is probably a way to circumvent their paywall, but according to their website, the self-hosted free version limits calls to 1-on-1.
If you want a group call, you need to pay, even if you self-host.
Wait, which calls do you mean? We're hosting our own free Mattermost and I've never had that problem.
We're pretty happy with Mattermost overall, but we're also a small, tech-savvy team.
Ok but seriously, that is a very good reason to ban it. Who knows what would happen if the AI just fully ripped off someone else’s code that’s supposed to be GPL-licensed or something. If humans can plagiarize, then AIs can plagiarize.
But also, how are they still using CVS? CVS is so slow and so bad. Even Subversion would be an upgrade.
I use Slack at work every day. I suppose this does feel off in some way, but I'm not sure I'm the right amount of upset about this? I don't really mind if they use my data if it improves my user experience, as long as the platform doesn't reveal anything sensitive or personal in a way that can be traced back to me.
Slack already allows your admin to view all of your conversations, which is more alarming to me.
The problem is where you said "as long as" because we already know companies AND the AI itself can't be trusted to not expose sensitive info inadvertently. At absolute best, it's another vector to be breached.
You can't really know whether the genAI is synthesizing from thousands of inputs or just outright reciting copyrighted code. Not kosher if it's the latter.