It’s actually simple to detect: if the code sucks (or is clearly written by a bad programmer) but the docstrings are perfect, it’s AI. I’ve seen this more than once and it never fails.
So your results are biased, because you're not going to see the decent programmers who are just using it to take mundane tasks off their back (like generating boilerplate functions) while staying in control of the logic. You're only ever going to catch the noobs trying to cheat without fully understanding what it is they're doing.
Not specific to AI, but someone flat out told me they didn't even run the code to see if it worked. They didn't understand why I would run it, or expect them to, before accepting code. This was someone submitting code to a widely deployed open source project.
So, I would expect the answer is yes or very soon to be yes.
Around me, most beginners who use it don't have the skills to understand or even test what they get. They don't want to learn, I guess; ChatGPT is easier.
I recently suspected a new guy was using ChatGPT because everything seemed perfect (grammar, code formatting, classes made with design patterns, etc.) but the code was very wrong. So I did some pair programming with him and asked if we could debug his simple application. He didn't know where the debug button was.
How do they know that you wrote it yourself and didn't just steal it?
This is a rule to protect themselves. If there is ever a case around this, they can push the blame onto the person who committed the code for breaking that rule.
I mean, generally rules exist either to strongly discourage people from doing a thing, or to justify measures that WOULD prevent people from doing it.
A purely conceptual rule by itself won't magically stop someone from doing a thing, but expecting that is kind of a weird way to think about rules.
Docstrings based on the method signature and literal contents of a method or class are completely pointless, and that's all Copilot can do. It can't intuit any of the things docstrings are actually there for.
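As a made-up illustration (the function and the business rule here are hypothetical), compare what a tool can derive from the code with what a docstring is actually for:

    def apply_discount(price: float, customer_id: str) -> float:
        """Apply a discount to price for customer_id and return the result."""
        # ^ the signature-restating kind: anyone reading the code
        #   already knows all of this.
        ...

    def apply_discount(price: float, customer_id: str) -> float:
        """Apply the loyalty discount from the 2023 enterprise contract.

        Must NOT stack with seasonal promotions (finance requirement),
        which is why callers are expected to check promo eligibility first.
        """
        # ^ the useful kind: intent and constraints that live outside
        #   the code, which no tool can infer from the body alone.
        ...

The function bodies are omitted; the point is only the contrast between the two docstrings.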
Definitely not my experience. With a well-structured code base it can be pretty uncanny. I think its context is limited to the files currently open in the editor, so that may be your issue if you're coding with just one file open?
GitHub Copilot introduced a new keyword a little while ago, "@workspace", which lets it see everything in your project. The code it generates uses your own functions and variables from across your libraries, and it figures out how to use them correctly.
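For instance, in Copilot Chat you can ask something like (the function name here is just a made-up example):

    @workspace where is validate_session() defined, and which callers would break if I changed its return type?

and it pulls the answer from your whole project instead of just the open files.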
There was one time where I totally went "WTF", because it spat out Python. In a C++ project. But those kinds of hallucinations are getting rarer and rarer. The more code you write, the better it gets. It really does become sort of like a "copilot", sitting there coding alongside you. The mistake people make is assuming it's going to come up with ideas and algorithms for them without them spending any mental energy at all.
I'm not trying to shill. I'm not a programmer by trade, just a hobbyist who started on QBasic in the ancient times. But I've been trying to learn it off and on for the past 30 years, and I've never learned so much or had so much fun as in the last 1.5 years with AI help. I can just think of stuff to do, and shit will just flow out now.
Instead of working on their platform to get Discord users to jump ship, they decide to go in the same direction.
Also pretty sure training LLMs after someone opts out is illegal?
Why? There have been a couple of lawsuits launched in various jurisdictions claiming LLM training is copyright violation but IMO they're pretty weak and none of them have reached a conclusion. The "opting" status of the writer doesn't seem relevant if copyright doesn't apply in the first place.
Nor is it up to you. But the fact remains: it's not illegal until there are actually laws against it. The court cases that might determine whether current laws apply to it are still ongoing.
If copyright applies, only you and Stack own the data. You can opt out, but 99% of users don't. No users get any money. Google or Microsoft buys Stack so only they can use the data. We only get subscription-based AI, and open source dies.
If copyright doesn't apply, everyone owns the data. The users still don't get any money, but they get free open source AI built off their work instead of closed source AI built off their work.
Having the website hold the copyright on the content, in the context of AI training, would be a fucking disaster.
1. It's not illegal. 2. "Law" isn't a real thing in an oligarchy, except insofar as it can be used by those with capital and resources to oppress and subjugate those they consider their lessers, and to further perpetuate the system for self-gain.
Lots of stupid people asking "how would they know?"
That's not the fucking point. The point is that if they catch you they can block future commits and review your past commits for poor quality code. They're setting a quality standard, and establishing consequences for violating it.
If your AI-generated code isn't setting off red flags, you're probably fine, but if something stupid slips through and the maintainers believe it to be the result of generative AI, they will remove your code from the codebase and you from the project.
It's like laws against weapons. If you have a concealed gun on your person and enter a public school, chances are that nobody will know and you'll get away with it over and over again. But if anyone ever notices, you're going to jail, you're getting permanently trespassed from school grounds, and you're probably not going to be allowed to own guns for a while.
And, it's a message to everyone else quietly breaking the rules that they have something to lose if they don't stop.
It was a fair question, but this is just going to turn out like universities failing or expelling people for alleged AI content in papers.
They can't prove it. They try to use AI-detection tools to prove it, but those same tools will say a thesis paper from a decade ago is AI generated too. I'm pretty sure I saw a story of a professor accusing a student based on such a tool, only to have his own past paper fail the same tool.
Short of an admission of guilt, it's a witch hunt.
Slack [...] will never identify any of our customers or individuals as the source of any of these improvements to any third party, other than to Slack’s affiliates or sub-processors.
So you sign up to confirm that your IP is yours, while simultaneously agreeing to sell it off, but the source will be anonymous except to whoever it's sold to, or anyone else Slack decides they want to know. These tech contracts and TOS should just say "we will (try to) not do bad, but you agree to let us do bad, and if bad happens it's your fault".
This is a good move for international open source projects. With multiple lawsuits currently ongoing in multiple countries around the globe, the intellectual property status of code made using AI isn't really settled enough to open yourself up to the liability.
I've done the same internally at our company.
You're free to use whatever tool you want, but if the tool you use spits out copyrighted code, and the law eventually decides that model users rather than model trainers are liable for model output, then that's on you, buddy.
Yup. We don't allow AI tools on our codebase, but I allow them in interviews. I honestly haven't been impressed by it at all; it just encourages not understanding the code.
Does this mean you have indicated to your employees and/or contractors that you intend to hold them legally liable in the case someone launches litigation against you?
No, but it's the only one that big corporations know/care about... primarily because of MS's aggressive "look, it's basically free and TOTALLY the same thing, we promise" marketing strategy.
For Matrix specifically, I recommend FluffyChat on mobile and Cinny for web/desktop. Most notably, they both support the not-yet-official spec for custom emojis and stickers, which I think is important for any Slack-like.
For the server (since you want to self-host), you'd probably want to go with Synapse - it supports disabling federation as well as SSO. Also, it wasn't mentioned by mp3, but XMPP is another protocol that's used by many large companies for internal chat systems as well.
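For reference, here's a minimal sketch of the relevant bits of Synapse's homeserver.yaml (values are placeholders; double-check the option names against the Synapse docs for your version):

    listeners:
      - port: 8008
        type: http
        resources:
          - names: [client]          # no "federation" resource exposed

    federation_domain_whitelist: []  # empty whitelist = federate with nobody

    oidc_providers:                  # SSO via your OIDC identity provider
      - idp_id: company_sso
        idp_name: "Company SSO"
        issuer: "https://sso.example.com"
        client_id: "synapse"
        client_secret: "CHANGEME"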
Does IRC still exist? I remember laughing when I first saw Slack and its early competitors because people were excited about it and when I finally used it I realized it was basically just IRC with a nicer interface. I’m assuming these offer improvements like encryption?
Nah, there are tons of features that Slack has over IRC. To start with, inline media (images, audio, video), but most importantly lots of out-of-the-box external integrations and webhooks.
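The webhook part really is as simple as it sounds. A minimal sketch in Python (the URL is a placeholder; you get a real one when you create an incoming webhook in Slack's app settings):

    import requests

    # Incoming webhooks accept a JSON payload with a "text" field.
    WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder
    resp = requests.post(WEBHOOK_URL, json={"text": "Build #42 passed"})
    resp.raise_for_status()  # Slack returns HTTP 200 with body "ok" on success

Try doing that in three lines against a bare IRC server.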
Yeah, now there is, but I don’t think a lot of those features were in when I first used it over a decade ago. It became a lot more useful over the years.
It didn't require using arcane commands just to sign up and log in. I love IRC and will always remember it fondly, but it wasn't easy for a novice to use and that's why things like slack and discord took off.
A subscription for software is not mutually exclusive with self-hosting. Developers deserve to earn money, especially those who don't rely on collecting data, showing ads, and enshittifying their cloud platform.
It’s a business with 300 users, but only about 50 of them would even use it. The others DO need accounts for the one time per month they log in. But with their pricing and SSO plan, that’s $3,000/month.
So proud of you, NetBSD; this is why I sponsor you. Slam dunk for the future. I'm working on a NetBSD hardening script and rice as we speak. Great OS with some fantastically valuable niche applications and, I think, a new broad approach I'm cooking up: a University Edition. I did hardening for all the other BSDs; I saved the best for last!
[EDIT 5/16/2024 15:04 GMT -7] NetBSD got Odin lang support yesterday. That totally seals the NetBSD deal for me if I can come up with something cool for my workstation with Odin.
If you would like to vote on whether, or by what year, AI will be in the Linux Kernel on Infosec.space:
I'm sorry, can everyone not read the actual link? They specifically say they are not training generative AIs, i.e. LLMs.
They are using the data to train non-generative AIs for stuff like emoji and channel suggestions. I.e. "you use this emoji a lot, so it's displayed first" or "people that are in these channels of yours also join these other channels you're not in yet".
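To be clear about how unexciting that kind of model can be, here's a toy sketch (entirely hypothetical; Slack hasn't published how theirs works) of frequency-based emoji ranking:

    from collections import Counter

    # Hypothetical: suggest the emojis this user reaches for most often.
    usage = Counter(["thumbsup", "tada", "thumbsup", "eyes", "thumbsup", "tada"])
    suggestions = [emoji for emoji, _ in usage.most_common(3)]
    print(suggestions)  # ['thumbsup', 'tada', 'eyes']

Nothing generative about it - it never produces content, it just ranks what's already there.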
This is class-A misinformation being spread here; good job. It's unbelievable that I'm the first one to actually verify the truth of this post, because I wanted to share it further, and verifying is what you do before sharing.
The posted screengrab also explicitly states that an opted-out user's workspace won't contribute to the underlying models. How would that be separate from using info from their workspace as training data for any kind of model? My interpretation is that the data would be used for inference with the models, not to train them.