404media.co

Mbourgon , to Espresso in Scientists Use Ultrasound to Make Cold Brew Coffee in 3 Minutes Instead of 24 Hours

Honestly, this dude is pretty bad at making cold brew. Toddy takes 2 days. 12oz of coffee in 2 quarts of water makes enough to have a coffee each day for a month, and it will last that long in the fridge.

That said - cool! More options at a coffee shop.

BeigeAgenda , to Espresso in Scientists Use Ultrasound to Make Cold Brew Coffee in 3 Minutes Instead of 24 Hours
@BeigeAgenda@lemmy.ca avatar

I hope this method also works with tea and yerba mate.

Hackworth , to Technology in Here Is What Axon’s Bodycam Report Writing AI Looks Like

As long as the footage is still accessible, sounds great to me!

Karyoplasma , to Technology in Here Is What Axon’s Bodycam Report Writing AI Looks Like

Click on article, see that I require a "free account", leave site, block 404media.co on my DNS.

SnotFlickerman , (edited )
@SnotFlickerman@lemmy.blahaj.zone avatar

404media has done some of the best in-depth tech journalism there is lately, but I guess paying them for that is a bridge too far for most (or even making a free account...).

It's literally a worker owned cooperative and your first response is to block them at the DNS level? Great research there, boyo.

https://en.wikipedia.org/wiki/404_Media

The publication covers topics such as hacking, sex work, niche online communities, and the right to repair movement. The publication is worker-owned.

404 Media was founded in 2023 by former staff of Vice Media's Motherboard after it filed for bankruptcy.

During the Taylor Swift deepfake pornography controversy, a 404 Media investigation discovered that the images originated from 4chan and were being distributed on Telegram before making it onto social media platforms.

In an article about 2024 media industry layoffs, the Financial Times highlighted 404 Media as a successful new media venture amid an "existential crisis" in the industry. The article stated that the publication has been noted for "publishing an eye-catching range of stories about the tech sector", and noted that "Not only is it producing good stories but its founders say it is breaking even".

Sorry they're trying to actually get paid for real journalism. I guess you'd prefer clickbait from a corporate-owned bullshit company that pays its writers squat and uses AI to write articles?

Finally, this is arguably a really important subject to be having journalists look at, but I guess clickbait is more appealing.

southsamurai ,
@southsamurai@sh.itjust.works avatar

It is possible to respect their efforts, but refuse to sign up for things on principle.

And, if the account is "free", then why did you need to give them an email in the first place? If they aren't getting money from you, then needing a login that would require an email address is sketchy as hell on the surface, and there's no explanation given.

Yeah, blocking the site in its entirety is kinda weird, seems like extra effort for no benefit at all when you can just not use the site. But objecting to what is a pointless "account" unless they're monetizing the information makes plenty of sense. Worker owned is not a guarantee of good behavior. It certainly helps, and it's the superior business model imo, but it isn't inherently going to mean they aren't doing dumb shit.

sbv ,

needing a login that would require an email address is sketchy as hell on the surface, and there's no explanation given.

The link to the explanation is right beside the text saying you need an account.

https://www.404media.co/why-404-media-needs-your-email-address/

SnotFlickerman ,
@SnotFlickerman@lemmy.blahaj.zone avatar

But that would require research, reading, and most importantly, actually giving a shit.

Far easier to just swallow AI generated swill, apparently.

AltheaHunter , to Technology in Here Is What Axon’s Bodycam Report Writing AI Looks Like

Let me guess, they used existing footage and reports as training data, and it produced an incredibly racist ai model that routinely ignores police misconduct. They'll spend a couple years working on bandaids for the problem while police departments across the country use the original model to create reports and use the "unbiased ai reports" as an excuse to hide the raw footage.

SnotFlickerman ,
@SnotFlickerman@lemmy.blahaj.zone avatar

Also: "That was just an AI hallucination, it's not admissible in court" when it was absolutely not an AI hallucination when the cop shot someone for mouthing off.

Deceptichum , to Technology in FBI Arrests Man For Generating AI Child Sexual Abuse Imagery
@Deceptichum@sh.itjust.works avatar

What an oddly written article.

Additional evidence from the laptop indicates that he used extremely specific and explicit prompts to create these images. He likewise used specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.”

They make it sound like the prompts are important and/or more important than the 13,000 images…

ricecake ,

In many ways they are. The image generated from a prompt isn't unique, and is actually semi-random. It's not entirely in the user's control. The person could argue "I described what I like but I wasn't asking it for children, and I didn't think they were fake images of children," and based purely on the image it could be difficult to argue that the image is not only "child-like" but actually depicts a child.

The prompt, however, very directly shows what the user was asking for in unambiguous terms, and the negative prompt removes any doubt that they thought they were getting depictions of adults.
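For what it's worth, the mechanics back this up: a negative prompt isn't a separate filter bolted on afterwards. In classifier-free guidance (the sampling trick Stable Diffusion-style models use), the prediction conditioned on the negative prompt stands in for the unconditioned prediction, so every denoising step is actively steered away from it. A toy sketch of that arithmetic (the vectors are made-up stand-ins for the model's noise predictions, and 7.5 is just a commonly used default guidance scale, not anything specific to this case):

```python
import numpy as np

# Toy stand-ins for the denoiser's noise predictions at one sampling step.
pred_positive = np.array([1.0, 0.5])  # conditioned on the user's prompt
pred_negative = np.array([0.2, 0.9])  # conditioned on the negative prompt

guidance_scale = 7.5  # common default in Stable Diffusion-style samplers

# Classifier-free guidance: start from the negative-prompt prediction and
# push the result toward the positive prompt, away from the negative one.
guided = pred_negative + guidance_scale * (pred_positive - pred_negative)

# guided overshoots pred_positive in exactly the direction away from
# pred_negative, which is why negative prompts work so reliably.
print(guided)
```

So using a negative prompt is an active, per-step instruction to the model, not a passive preference, which is why it reads as intent.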

Glass0448 ,
@Glass0448@lemmy.today avatar

And also it's an AI.

13k images before AI involved a human with Photoshop or a child doing fucked up shit.

13k images after AI is just forgetting to turn off the CSAM auto-generate button.

Obonga ,

Having an AI generate 13,000 images does not even take 24 hours (depending on hardware and settings ofc).
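Back-of-the-envelope, that checks out. Assuming something like 6-7 seconds per image on a midrange consumer GPU (an assumed throughput, not a measured benchmark):

```python
images = 13_000
seconds_per_image = 6.5  # assumed midrange-GPU throughput, not a measured figure

# Total wall-clock time if generation runs back to back.
hours = images * seconds_per_image / 3600
print(f"{hours:.1f} hours")  # just under a day at this assumed rate
```

Faster hardware or lower-step sampling settings would cut that well below a day.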

WILSOOON , to Technology in FBI Arrests Man For Generating AI Child Sexual Abuse Imagery

Fuckin good job

DmMacniel , to Technology in FBI Arrests Man For Generating AI Child Sexual Abuse Imagery

Mhm I have mixed feelings about this. I know that this entire thing is fucked up but isn't it better to have generated stuff than having actual stuff that involved actual children?

Retoffelnoster ,
@Retoffelnoster@lemmy.world avatar

You know whats better? Having none of this shit

DmMacniel ,

Yeah as I also said.

MxM111 ,
@MxM111@kbin.social avatar

Better for whom and why?

Thorny_Insight , (edited )

Nirvana fallacy

Yeah, would be nice. Unfortunately it isn't so, and it's never going to be. Chasing after people generating distasteful AI pictures is not making the world a better place.

BruceTwarzen ,

Did you just fix mental health?

Petter1 ,

Better only means less worse in this case, I guess

Murvel ,

It feeds and evolves a disorder which in turn increases risks of real life abuse.

But if AI generated content is to be considered illegal, so should all fictional content.

SigHunter ,
@SigHunter@lemmy.kde.social avatar

Or, more likely, it feeds and satisfies a disorder which in turn decreases risk of real life abuse.

Making it illegal so far helped nothing, just like with drugs

Murvel ,

That's not how these addictive disorders work... they're never satisfied and always need more.

Norgur ,

Two things:

  1. Do we know if it fuels the urge to get real children? Or do we just assume that through repetition, like the myth of "gateway drugs"?
  2. Since no child was involved and harmed in the making of these images... On what grounds could it be forbidden to generate them?
Thorny_Insight , (edited )

An alternative perspective is to ask: does watching normal porn make heterosexual men more likely to rape women? If not, then why would it be different in this case?

The vast majority of pedophiles never offend. Most people in jail for child abuse are just plain old rapists with no special interest towards minors; they're just an easy target. Pedophilia just describes what they're attracted to. It's not a synonym for child rapist. It usually needs to coincide with psychopathy to create the monster that most people think about when hearing that word.

ricecake ,

That's a bit of a mismatched comparison.
A better comparison would be "does watching common heterosexual porn make common heterosexual men more interested in performing common heterosexual sexual acts?" or "does viewing pornography long term satiate a man's sex drive?" or "does consumption of nonconsensual pornography correlate to an increase in nonconsensual sex acts?"

Comparing "viewing child sexual content might lead to engaging in sexual acts with children" to "viewing sexual activity with women might lead to rape" is disingenuous, apples to oranges.

https://wchh.onlinelibrary.wiley.com/doi/full/10.1002/tre.791

a review of 19 studies published between 2013 and 2018 found an association between online porn use and earlier sexual debut, engaging with occasional and/or multiple partners, emulating risky sexual behaviours, assimilating distorted gender roles, dysfunctional body perception, aggression, anxiety, depression, and compulsive porn use. Another study has shown that compulsive use of sexually explicit internet material by adolescent boys is more likely in those with lower self-esteem, depressive feeling and excessive sexual interest.

some porn use in adult men may have a positive impact by increasing libido and desire for a real-life partner, relieving sexual boredom, and improving sexual satisfaction by providing inspiration for real sex.

As for child porn, it's not a given that there's no relationship between consumption and abusing children. There are studies that indicate both outcomes, and are made much more complicated by one of both activities being extremely illegal and socially stigmatized making accurate tracking difficult.
It's difficult to justify the notion that "most pedophiles never offend" when it can be difficult to identify both pedophiles and abuse.

https://pubmed.ncbi.nlm.nih.gov/21088873/ for example. It looks at people arrested for possession of child pornography. Within six years, 6% were charged with a child contact crime.
Likewise, you can find research with a differing conclusion

Point being, you can't just hand wave the potential for a link away on the grounds that porn doesn't cause rape amongst typical heterosexual men. There's too many factors making the statistics difficult to gather.

HopeOfTheGunblade ,
@HopeOfTheGunblade@kbin.social avatar

I would love to see research data pointing either way re #1, although it would be incredibly difficult to do so ethically, verging on impossible. For #2, people have extracted originals or near-originals of inputs to the algorithms. AI generated stuff - plagiarism machine generated stuff, runs the risk of effectively revictimizing people who were already abused to get said inputs.

It's an ugly situation all around, and unfortunately I don't know that much can be done about it beyond not demonizing people who have such drives, who have not offended, so that seeking therapy for the condition doesn't screw them over. Ensuring that people are damned if they do and damned if they don't seems to pretty reliably produce worse outcomes.

pavnilschanda ,
@pavnilschanda@lemmy.world avatar

A problem that I see getting brought up is that generated AI images make it harder to notice photos of actual victims, making it harder to locate and save them.

TranscendentalEmpire ,

Well that, and the idea of cathartic relief is increasingly being dispelled. Behaviour once thought to act as a pressure relief for harmful impulsive behaviour is more than likely just a pattern of escalation.

Seleni ,

Source? From what I’ve heard, recent studies are showing the opposite.

TranscendentalEmpire ,

Catharsis theory predicts that venting anger should get rid of it and should therefore reduce subsequent aggression. The present findings, as well as previous findings, directly contradict catharsis theory (e.g., Bushman et al., 1999; Geen & Quanty, 1977). For reducing anger and aggression, the worst possible advice to give people is to tell them to imagine their provocateur's face on a pillow or punching bag as they wallop it, yet this is precisely what many pop psychologists advise people to do. If followed, such advice will only make people angrier and more aggressive.

Source

But there are a lot more studies that have essentially said the same thing. The cathartic hypothesis is mainly a byproduct of the Freudian era of psychology, when a hypothesis mainly just had to sound good to someone on too much cocaine.

Do you have a source of studies showing the opposite?

blanketswithsmallpox , (edited )

Yes, but I'm too lazy to sauce everything again. If it's not in my saved comments someone else will have to.

E: couldn't find it on my reddit either. I have too many saved comments lol.

9bananas ,

your source is exclusively about aggressive behavior...

it uses the term "arousal", which is not referring to sexual arousal, but rather a state of heightened agitation.

provide an actual source in support of your claim, or stop spreading misinformation.

TranscendentalEmpire ,

Lol, my source is about the cathartic hypothesis. So your theory is that it doesn't work with anger, but does work for sexual deviancy?

Do you have a source that supports that?

9bananas ,

you made the claim that the cathartic hypothesis is poorly supported by evidence, which your source supports, but is not relevant to the topic at hand.

your other claim is that sexual release follows the same patterns as aggression. that's a pretty big claim! i'd like to see a source that supports that claim.

otherwise you've just provided a source that provides sound evidence, but is also entirely off-topic...

TranscendentalEmpire ,

but is not relevant to the topic at hand.

The belief that indulging in AI-created child porn relieves the deviant urge of being attracted to actual minors relies on cathartic theory. Cathartic theory is typically understood to relate to an array of emotions, not just anger. "Further, the catharsis hypothesis maintains that aggressive or sexual urges are relieved by 'releasing' aggressive or sexual energy, usually through action or fantasy."

follows the same patterns as aggression. that's a pretty big claim! i'd like to see a source that supports that claim.

That's not a claim I make, it's a claim that cathartic theory states. As I said the cathartic hypothesis is a byproduct of Freudian psychology, which has largely been debunked.

Your issue is with the theory in and of itself, which my claim is already stating to be problematic.

but is also entirely off-topic...

No, you are just conflating colloquial understanding of catharsis with the psychological theory.

9bananas ,

and your source measured the effects of one single area that cathartic theory is supposed to apply to, not all of them.

your source does in no way support the claim that the observed effects apply to anything other than aggressive behavior.

i understand that the theory supposedly applies to other areas as well, but as you so helpfully pointed out: the theory doesn't seem to hold up.

so either A: the theory is wrong, and so the association between aggression and sexuality needs to be called into question also;

or B: the theory isn't wrong after all.

you are now claiming that the theory is wrong, but at the same time, the theory is totally correct! (when it's convenient to you, that is)

so which is it now? is the theory correct? then your source must be irrelevant.

or is the theory wrong? then the claim of a link between sexuality and aggression is also without support, until you provide a source for that claim.

you can't have it both ways, but you're sure trying to.

TranscendentalEmpire ,

i understand that the theory supposedly applies to other areas as well, but as you so helpfully pointed out: the theory doesn't seem to hold up.

My original claim was that cathartic theory in and of itself is not founded on evidence based research.

but at the same time, the theory is totally correct! (when it's convenient to you, that is)

When did I claim it was ever correct?

I think you are misconstruing my original claim with the claims made by the cathartic theory itself.

I don't claim that cathartic theory is beneficial in any way, you are the one claiming that Cathartic theory is correct for sexual aggression, but not for violence.

Do you have a source that claims cathartic theory is beneficial for satiating deviant sexual impulses?

then the claim of a link between sexuality and aggression is also without support, until you provide a source for that claim.

You are wanting me to provide an evidence based claim between the two when I've already said the overarching theory is not based on evidence?

The primary principle to establish is the theory of cathartic relief itself, not whether it works for one emotion or another. You have not provided any evidence to support that claim; I have provided evidence that disputes it.

Maggoty ,

Let's see here, listen to my therapist who has decades of real experience or a study from over 20 years ago?

Sorry bud, I know who I'm going with on this and it ain't your academic.

TranscendentalEmpire ,

Let's see here, listen to my therapist who has decades of real experience or a study from over 20 years ago?

Your therapist is still utilizing Freudian psychoanalysis?

Well, if age is a factor in your opinion about the validity of the care you receive, I have some bad news for you.....

Maggoty ,

You're still using 5,000 year old Armenian shoes?

Of course not. Stop being reductive.

TranscendentalEmpire ,

Lol, you were the one who first dismissed evidence because it was 20 years old.....

Maggoty ,

The point is you can reduce anything to its origin. That does not mean it's still the same thing.

TranscendentalEmpire ,

The point is you can reduce anything to its origin.

Okay, but how does the modern version of cathartic theory differ from what freud postulated?

I agree you can't dismiss things based on their origin alone, which is why I included a scientific source as evidence......

Maggoty ,

I don't know, that's why I have a therapist, I'm not educated in psychology. But I do recognize a logical fallacy when I see one.

TranscendentalEmpire ,

But I do recognize a logical fallacy when I see one.

I doubt that, so far your argument has been based on the anecdotal fallacy mixed with a bit of the appeal to authority fallacy.

Maggoty ,

Lmao. Says the guy who tried to use a study on aggression to address sexual urges.

TranscendentalEmpire ,

Reading comprehension is still hard for you? My argument was about cathartic theory, which covers several emotions including sexual urges...... It is a theory from Freud; of course it covers sexual urges.

You and the other guy just have no idea what you're talking about.
How about providing any kind of source instead of talking out of your ass?

NatakuNox ,
@NatakuNox@lemmy.world avatar

And doesn't the AI learn from real images?

pavnilschanda ,
@pavnilschanda@lemmy.world avatar

True, but by their very nature their generations tend to create anonymous identities, and the sheer amount of them would make it harder for investigators to detect pictures of real, human victims (which can also include indicators of crime location).

ricecake ,

It does learn from real images, but it doesn't need real images of what it's generating to produce related content.
As in, a network trained with no exposure to children is unlikely to be able to easily produce quality depictions of children. Without training on nudity, it's unlikely to produce good results there as well.
However, if it knows both concepts it can combine them readily enough, similar to how you know the concept of "bicycle" and that of "Neptune" and can readily enough imagine "Neptune riding an old-fashioned bicycle around the sun while flaunting its top hat".

Under the hood, this type of AI is effectively a very sophisticated "error correction" system. It changes pixels in the image to try to "fix" it so it matches the prompt, usually starting from a smear of random colors (static noise).
That's how it's able to combine different concepts from a wide range of images to create things it's never seen.
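That "error correction" loop can be caricatured in a few lines: start from random static and repeatedly nudge the "image" toward whatever the guidance says it should look like. Here the guidance is just a fixed target vector standing in for "matches the prompt", and the step size of 0.2 is an arbitrary illustrative choice, not anything from a real sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

target = np.array([0.8, 0.1, 0.5])  # stand-in for "what the prompt describes"
image = rng.standard_normal(3)      # start from pure static noise

# Iteratively "correct the error" between the current image and the guidance,
# a small step at a time, just like a denoising sampler refines its output.
for step in range(50):
    error = target - image
    image = image + 0.2 * error

# After enough steps the noise has been pulled almost exactly onto the target.
print(np.round(image, 3))
```

Real diffusion models do this with a learned neural network predicting the "error" at each noise level, but the shape of the loop is the same.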

helpImTrappedOnline ,

Basically if I want to create ...
(I'll use a different example for obvious reasons, but I'm sure you could apply it to the topic)

... "an image of a miniature denim airjet with Taylor Swift's face on the side of it", the AI generators can, despite no such thing existing in the training data.
It may take multiple attempts and effort with the text prompt to get exactly what you're looking for, but you could eventually get a convincing image.

AI takes loads of preexisting data on airplanes, T.Swift, and denim and combines it all into something new.

Catoblepas ,

Did we memory hole the whole ‘known CSAM in training data’ thing that happened a while back? When you’re vacuuming up the internet you’re going to wind up with the nasty stuff, too. Even if it’s not a pixel by pixel match of the photo it was trained on, there’s a non-zero chance that what it’s generating is based off actual CSAM. Which is really just laundering CSAM.

DmMacniel ,

I didn't know that, my bad.

Catoblepas ,

Fair but depressing, it seems like it barely registered in the news cycle.

Ragdoll_X ,
@Ragdoll_X@lemmy.world avatar

IIRC it was something like a fraction of a fraction of 1% that was CSAM; the researchers identified the images through their hashes, but the images themselves weren't actually available in the dataset because they had already been removed from the internet.

Still, you could make AI CSAM even if you were 100% sure that none of the training images included it since that's what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI's hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That's the power and danger of these things.

Catoblepas ,

What % do you think was used to generate the CSAM, though? Like, if 1% of the images were cups it’s probably drawing on some of that to generate images of cups.

And yes, you could technically do this with no CSAM training material, but we don’t know if that’s what the AI is doing because the image sources used to train it were mass scraped from the internet. They’re using massive amounts of data without filtering it and are unable to say with certainty whether or not there is CSAM in the training material.

retrospectology ,
@retrospectology@lemmy.world avatar

The arrest is only a positive. Allowing pedophiles to create AI CP is not a victimless crime. As others point out it muddies the water for CP of real children, but it also potentially would allow pedophiles easier ways to network in the open (if the images are legal they can easily be platformed and advertised), and networking between abusers absolutely emboldens them and results in more abuse.

As a society we should never allow the normalization of sexualizing children.

nexguy ,
@nexguy@lemmy.world avatar

Interesting. What do you think about drawn images? Is there a limit to how good the artist can be at drawing/painting? Stick figures vs. lifelike paintings. Interesting line to consider.

retrospectology ,
@retrospectology@lemmy.world avatar

If it was photoreal and difficult to distinguish from real photos? Yes, it's exactly the same.

And even if it's not photo real, communities that form around drawn child porn are toxic and dangerous as well. Sexualizing children is something I am 100% against.

littlewonder ,

It feels like driving these people into the dark corners of the internet is worse than allowing them to collect in clearnet spaces where drawn csam is allowed.

acockworkorange ,

I’m in favor of specific legislation criminalizing drawn CSAM. It’s definitely less severe than photographic CSAM, and it’s definitely harmful.

lily33 , (edited )

Actually, that's not quite as clear.

The conventional wisdom used to be, (normal) porn makes people more likely to commit sexual abuse (in general). Then scientists decided to look into that. Slowly, over time, they've become more and more convinced that (normal) porn availability in fact reduces sexual assault.

I don't see an obvious reason why it should be different in case of CP, now that it can be generated.

Obonga ,

It should be different because people can not have it. It is disgusting, makes them feel icky, and that's just why it has to be bad. Conventional wisdom sometimes really is just conventional idiocracy.

littlewonder ,

I wonder if religiosity is correlated.

NewNewAccount ,

networking between abusers absolutely emboldens them and results in more abuse.

Is this proven or a common sense claim you’re making?

bassomitron ,

I wouldn't be surprised if it's a mixture of the two. It's kind of like if you surround yourself with criminals regularly, you're more likely to become one yourself. Not to say it's a 100% given, just more probable.

Zorque ,

So... its just a claim they're making and you're hoping it has actual backing.

bassomitron ,

I'm not hoping anything, haha wtf? The comment above me asked if it was a proven statement or common sense and I said I wouldn't be surprised if it's both. I felt confident that if I googled it, there would more than likely be studies backing up a common sense statement like that, as I've read in the past how sending innocent people or people who committed minor misdemeanors to prison has influenced them negatively to commit crimes they might not have otherwise.

And look at that, there are academic articles that do back it up:

https://www.waldenu.edu/online-bachelors-programs/bs-in-criminal-justice/resource/what-influences-criminal-behavior

Negative Social Environment

Who we’re around can influence who we are. Just being in a high-crime neighborhood can increase our chances of turning to crime ourselves.4 But being in the presence of criminals is not the only way our environment can affect our behaviors. Research reveals that simply living in poverty increases our likelihood of being incarcerated. When we’re having trouble making ends meet, we’re under intense stress and more likely to resort to crime.

https://www.law.ac.uk/resources/blog/is-prison-effective/

Time in prison can actually make someone more likely to commit crime — by further exposing them to all sorts of criminal elements.

Etc, etc.

Turns out that your dominant social group and environment influences your behavior, what a shocking statement.

Zorque ,

But you didn't say you had proof with your comment, you said it was probable. Basically saying it's common sense that it's proven.

Why are you getting aggressive about actually having to provide proof of something when you're saying it's obvious?

Also, that seems to imply that locking up people for AI offenses would then encourage truly reprehensible behavior by linking them with those who already engage in it.

Almost like lumping people together as one big group, instead of having levels of grey area, means people are more likely to just go all in instead of sticking to something more morally defensible.

bassomitron ,

Because it's a casual discussion, I think it's obnoxious when people constantly demand sources to be cited in online comments section when they could easily look it up themselves. This isn't some academic or formal setting.

And I disagree; only the second source mentioned prisons explicitly. The first source mentions social environments as well. So it's a damned-if-you-do, damned-if-you-don't situation. Additionally, even if you consider the second source, that source mentions punishment reforms to prevent that undesirable side effect from occurring.

I find it ironic that you criticized me for not citing sources and then didn't read the sources. But, whatever. Typical social media comments section moment.

NewNewAccount ,

I think it's obnoxious when people constantly demand sources to be cited in online comments section when they could easily look it up themselves.

People request sources because people state their opinions as fact. If that’s how it’s presented then asking for a source is ok. Its either ask for a source or completely dismiss the comment.

bassomitron ,

Again, in casual conversation where no one was really debating, it's obnoxious. When you're talking to friends in real life and they say something, do you request sources from them? No, because it'd be rude and annoying. If you were debating them in earnest and you both disagreed on something, sure, that would be expected.

But that wasn't the case here, the initial statement was common sense: If pedophiles are allowed to meet up and trade AI generated child sex abuse material, would that cause some of them to be more likely to commit crimes against real kids? And I think the answer is pretty obvious. The more you hang around people who agree with you, the more an echo chamber is cultivated. It's like an alcoholic going into a bar without anyone there to support them in staying sober.

Anyway, it's your opinion to think asking for sources from strangers in casual conversation is okay, and it's mine to say it can be annoying in a lot of circumstances. We all have the Internet at our fingertips, look it up in the future if you're unsure of someone's assertion.

moitoi ,
@moitoi@lemmy.dbzer0.com avatar

The far right in France normalized its discourse, and it is now at the top of the vote.

Also in France, people talked about pedophilia at the TV in the 70s, 80s and at the beginning of the 90s. It was not just once in a while. It was frequent and open without any trouble. Writers would casually speak about sexual relationships with minors.

The normalization will blur the limits between AI and reality for the worse. It will also make it more popular.

The other point is also that people will always end up with the original. Again, politics is a good example. Conservatives try to mimic the far right to gain votes, but in the end people vote for the far right...

And, someone has a daughter. A pedophile takes a picture of her without asking and asks an AI to produce CP based on her. I don't want to see things like this.

quindraco ,

Yes, but the perp showed the images to a minor.

Cybermonk_Taiji ,

Is "better than" the same as totally cool and legal?

DmMacniel ,

No?

Zorque ,

Is everything completely black and white for you?

The system isn't perfect, especially where we prioritize punishing people over rehabilitation. Would you rather punish everyone equally, emphasizing that if people are going to risk the legal implications (which, based on legal systems the world over, people are going to do) they might as well just go for the real thing anyways?

You don't have to accept it as morally acceptable, but you don't have to treat them as completely equivalent either.

There's gradations of questionable activity. Especially when there's no real victims involved. Treating everything exactly the same is, frankly speaking, insane. Its like having one punishment for all illegal behavior. Murder someone? Death penalty. Rob them? Straight to the electric chair. Jaywalking? Better believe you're getting the needle.

Cybermonk_Taiji ,

Wow. I didn't say any of that, cool story though.

Go read what I said again and try replying to that instead of whatever this rant is on about

Mastengwe ,

I got your back here.

Mastengwe ,

Ironically, you ask if everything is completely black and white for someone without accepting that there's nuance to the very issue you're calling out. And "everything" is a very black-and-white term itself, isn't it?

No, not EVERYTHING, but some things. And this is one of those things. Both forms should be illegal. Period. No nuance, no argument, NO grey area.

This does not mean that nuance doesn’t exist. It just means that some believe that it SHOULDN’T exist within the paradigm of child porn.

BrianTheeBiscuiteer ,

I have trouble with this because it's like 90% grey area. Is it a pic of a real child but inpainted to be nude? Was it a real pic but the face was altered as well? Was it completely generated but from a model trained on CSAM? Is the perceived age of the subject near to adulthood? What if the styling makes it only near realistic (like very high quality CG)?

I agree with what the FBI did here mainly because there could be real pictures among the fake ones. However, I feel like the first successful prosecution of this kind of stuff will be a purely moral judgement of whether or not the material "feels" wrong, and that's no way to handle criminal misdeeds.

Chee_Koala ,

If it's not trained on CSAM and not inpainted but fully generated, I can't really think of any other real legal arguments against it except for: "this could be real". Which has real merit, but in my eyes not enough to prosecute as if it were real. Real CSAM has very different victims and abuse, so it needs different sentencing.

Zorque ,

Everything is 99% grey area. If someone tells you something is completely black and white you should be suspicious of their motives.

PM_Your_Nudes_Please ,

Yeah, it’s very similar to the “is loli porn unethical” debate. No victim, it could supposedly help reduce actual CSAM consumption, etc… But it’s icky so many people still think it should be illegal.

There are two big differences between AI and loli though. The first is that AI would supposedly be trained with CSAM to be able to generate it. An artist can create loli porn without actually using CSAM references. The second difference is that AI is much much easier for the layman to create. It doesn’t take years of practice to be able to create passable porn. Anyone with a decent GPU can spin up a local instance, and be generating within a few hours.

In my mind, the former difference is much more impactful than the latter. AI becoming easier to access is likely inevitable, so combatting it now is likely only delaying the inevitable. But if that AI is trained on CSAM, it is inherently unethical to use.

Whether that makes the porn generated by it unethical by extension is still difficult to decide though, because if artists hate AI, then CSAM producers likely do too. Artists are worried AI will put them out of business, but then couldn’t the same be said about CSAM producers? If AI has the potential to run CSAM producers out of business, then it would be a net positive in the long term, even if the images being created in the short term are unethical.

Ookami38 ,

Just a point of clarity, an AI model capable of generating csam doesn't necessarily have to be trained on csam.

assassin_aragorn ,

That honestly brings up more questions than it answers.

Ookami38 ,

Why is that? The whole point of generative AI is that it can combine concepts.

You train it on the concept of a chair using only red chairs. You train it on the color red, and the color blue. With this info and some repetition, you can have it output a blue chair.

The same applies to any other concepts. Larger, smaller, older, younger. Man, boy, woman, girl, clothed, nude, etc. You can train them each individually, gradually, and generate things that then combine these concepts.

Obviously this is harder than just using training data of what you want. It's slower, it takes more effort, and results are inconsistent, but they are results. And then, you curate the most viable of the images created this way to train a new and refined model.

todd_bonzalez ,

Yeah, there are photorealistic furry photo models, and I have yet to meet an anthropomorphic dragon IRL.

Glass0448 ,
@Glass0448@lemmy.today avatar

so many people still think it should be illegal

It is illegal. https://www.thefederalcriminalattorneys.com/possession-of-lolicon

PM_Your_Nudes_Please , (edited )

I wasn’t arguing about current laws. I was simply arguing about public perception, and whether the average person believes it should be illegal. There’s a difference between legality and ethicality. Something unethical can be legal, and something illegal can be ethical.

Weed is illegal, but public perception says it shouldn’t be.

uis ,
@uis@lemm.ee avatar

Weed is illegal, but public perception says it shouldn’t be.

Alcohol is worse than weed, yet alcohol is not banned.

todd_bonzalez ,

The mitochondria is the powerhouse of the cell.

TheObviousSolution ,

Why are you assuming everyone lives in the US? Your article even admits that it is legal elsewhere (Japan).

This is a better one: https://en.wikipedia.org/wiki/Legal_status_of_fictional_pornography_depicting_minors

JovialMicrobial ,

I think one of the many problems with AI generated CSAM is that as AI becomes more advanced it will become increasingly difficult for authorities to tell the difference between what was AI generated and what isn't.

Banning all of it means authorities don't have to sift through images trying to decipher between the two.
If one image is declared to be AI generated and it's not...well... that doesn't help the victims or create less victims. It could also make the horrible people who do abuse children far more comfortable putting that stuff out there because it can hide amongst all the AI generated stuff. Meaning authorities will have to go through far more images before finding ones with real victims in it. All of it being illegal prevents those sorts of problems.

PM_Your_Nudes_Please ,

And that’s a good point! Luckily it’s still (usually) fairly easy to identify AI generated images. But as they get more advanced, that will likely become harder and harder to do.

Maybe some sort of required digital signature for AI art would help; something like a cryptographic signature in the metadata that can't be falsified after the fact. Anything without that known and trusted AI signature would by default be treated as the real deal.

But this would likely require large scale rewrites of existing image formats, if they could even support it at all. It’s the type of thing that would require people way smarter than myself. But even that feels like a bodged solution to a problem that only exists because people suck. And if it required registration with a certificate authority (like an HTTPS certificate does) then it would be a hurdle for local AI instances to jump through. Because they would need to get a trusted certificate before they could sign their images.

Kalcifer ,
@Kalcifer@sh.itjust.works avatar

But it’s icky so many people still think it should be illegal.

Imo, not the best framework for creating laws. Essentially, it's an appeal to emotion.

cley_faye ,

Apparently he sent some to an actual minor.

Glass0448 ,
@Glass0448@lemmy.today avatar
PhlubbaDubba ,

I think the point is that child attraction itself is a mental illness and people indulging it even without actual child contact need to be put into serious psychiatric evaluation and treatment.

Mastengwe ,

It’s better to have neither.

forensic_potato ,
@forensic_potato@lemmy.world avatar

This mentality smells of "just say no" for drugs or "just don't have sex" for abortions. This is not the ideal world and we have to find actual plans/solutions to deal with the situation. We can't just cover our ears and hope people will stop

Darkard , to Technology in FBI Arrests Man For Generating AI Child Sexual Abuse Imagery

And the Stable diffusion team get no backlash from this for allowing it in the first place?

Why are they not flagging these users immediately when they put in text prompts to generate this kind of thing?

muntedcrocodile ,

Not everything exists on the cloud (someone else's computer)

DmMacniel ,

You can run the SD model offline, so on what service would that User be flagged?

aniki ,

That's not how any of this works

yukijoou ,

my main question is: how much csam was fed into the model for training so that it could recreate more

i think it'd be worth investigating the training data used for the model

Ragdoll_X , (edited )
@Ragdoll_X@lemmy.world avatar

This did happen a while back, with researchers finding thousands of hashes of CSAM images in LAION-2B. Still, IIRC it was something like a fraction of a fraction of 1%, and they weren't actually available in the dataset because they had already been removed from the internet.

You could still make AI CSAM even if you were 100% sure that none of the training images included it since that's what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI's hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That's the power and danger of these things.

DarkThoughts ,

Because what prompts people enter on their own computer isn't their responsibility? Should pencil makers flag people writing bad words?

Glass0448 ,
@Glass0448@lemmy.today avatar

Stable Diffusion has been distancing themselves from this. The model that allows for this was leaked from a different company.

NeoNachtwaechter , to Technology in FBI Arrests Man For Generating AI Child Sexual Abuse Imagery

Bad title.

They caught him not simply for creating pics, but also for trading such pics etc.

rickyrigatoni ,
@rickyrigatoni@lemm.ee avatar

You can get away with a lot of heinous crimes by simply not telling people and not sharing the results.

quindraco ,

You consider it a heinous crime to draw a picture and keep it to yourself?

catloaf ,

Read the article. He was arrested for sending the pictures to at least one minor.

quindraco ,

Re-read RickyRigatoni's comment.

Frozengyro ,

That's sickening to know there are bastards out there who will get away with it since they are only creating it.

NeoNachtwaechter ,

I'm not sure. Let us assume that you generate it on your own PC at home (not using a public service) and don't brag about it and never give it to anybody - what harm is done?

Frozengyro ,

Even if the AI didn't train itself on actual CSAM that is something that feels inherently wrong. Your mind is not right to think that's acceptable IMO.

DarkThoughts ,

Laws shouldn't be about feelings though, and we shouldn't prosecute people for victimless thought crimes. How often have you thought something violent when someone really pissed you off? Should you have been prosecuted for that thought too?

Frozengyro ,

This goes way further than a thought

DarkThoughts ,

Who are the victims of someone generating such images privately then? It's on the same level as all the various fan fiction shit that was created manually over all the past decades.

And do we apply this to other depictions of criminalized things too? Would we ban the depiction of violence & sexual violence on TV, in books, and in video games too?

GBU_28 , (edited )

Society is not ok with the idea of someone cranking to CSAM, then just walking around town. It gives people wolf-in-sheep-clothing vibes.

So the notion of there being "ok" CSAM-style ai content is a non starter for a huge fraction of people because it still suggests appeasing a predator.

I'm definitely one of those people that simply can't accept any version of it.

Glass0448 ,
@Glass0448@lemmy.today avatar
horncorn , to Technology in FBI Arrests Man For Generating AI Child Sexual Abuse Imagery

Article title is a bit misleading. Just glancing through, I see he texted at least one minor in regards to this and distributed those generated pics in a few places. Putting it all together, yeah, the arrest is kind of a no-brainer.
The ethics of generating CSAM are pretty much the same as drawing it. Not much we can do about it aside from education.

retrospectology ,
@retrospectology@lemmy.world avatar

Lemmy really needs to stop justifying CP.
We can absolutely do more than "eDuCaTiOn". AI is created by humans, the training data is gathered by humans, it needs regulation like any other industry.

It's absolutely insane to me how laissez-faire some people are about AI, it's like a cult.

autonomoususer , (edited )

[Thread, post or comment was deleted by the author]

  • retrospectology ,
    @retrospectology@lemmy.world avatar

    The fuck are you talking about? No one's "enslaving" you because they're trying to stop you from generating child porn.

    Fucking libertarians dude.

    autonomoususer , (edited )

    [Thread, post or comment was deleted by the author]

  • retrospectology ,
    @retrospectology@lemmy.world avatar

    Ah yes, we need child porn because it's a slippery slope.

    msage ,

    While I agree with your attitude, the whole 'laissez-faire' thing is probably a misunderstanding:

    There is nothing we can do to stop the AI.

    Nothing.

    The genie is out of the bottle, the Pandora's box has been opened, everything is out and it won't ever return. The world will never be the same, and it's irrelevant what people think.

    That's why we need to better understand the post-AI world we created, and figure out what do to now.

    Also, to hell with CP. (feels weird to use the word 'fuck' here)

    retrospectology , (edited )
    @retrospectology@lemmy.world avatar

    Thats not the question, the question is not "can we stop AI entirely" it's about regulating its development and yes, we can make efforts to do that.

    This attitude of "it's inevitable, can't do anything about it" is eerily similar logic to what is used in climate denial and other right-wing efforts. It's a really poor attitude to have, especially about something as consequential as AI.

    We have the best opportunity right now to create rules about its uses and development. The answer is not "do nothing" as if it's some force of nature, as opposed to a tool created by humans.

    msage ,

    I hear you, and I don't necessarily disagree with you, I just know that's not how anything works.

    Regulations work for big companies, but there isn't a big company behind this specific case. And those small-time users have run away and you can't stop them.

    It's like trying to regulate cameras to not store specific images. Like, I get the sentiment, but sorry, no. It's not that I would not like that, it's just not possible.

    retrospectology ,
    @retrospectology@lemmy.world avatar

    This argument could be applied to anything though. A lot of people get away with murder; we should still try and do what we can to stop it from happening.

    You can't sit in every car and force people to wear a seatbelt, we still have seatbelt laws and regulations for manufacturers.

    msage ,

    Physical things are much easier to regulate than software, much less serverless.

    We already regulate certain images, and it matters very little.

    The bigger payoff will be from educating the public and accepting that we can't win every war.

    retrospectology ,
    @retrospectology@lemmy.world avatar

    So accept defeat from the start, that's really just a non-starter. AI models run on hardware, they are developed by specific people, their contents are distributed by specific individuals, code bases are hosted on hardware and on specific outlets.

    It really does sound like you're just trying to make excuses to avoid regulation, not that you genuinely have a good reason to think it's not possible to try.

    GBU_28 , (edited )

    Dude, the amount of open-source, untrackable, distributed AI models is off the charts. This isn't just about the models offered by subscription from the big players.

    retrospectology ,
    @retrospectology@lemmy.world avatar

    This is still one of the weaker arguments.
    There is a lot of malware out there too, people are still prosecuted when they're caught developing and distributing it, we don't just throw up our hands and pretend there's nothing that can be done.

    Like, yeah, some pedophile who also happens to be tech-savvy might build his own AI model to make CP, but that's not some self-evident argument against attempting to stop them.

    GBU_28 ,

    No, like, the tools to do these things are common and readily available. It's not malware, it's generalized AI tooling, completely intertwined with non-image AI work.

    Pandora's box is wide open. All of this work can be done trivially, completely offline with a basic PC. Anyone motivated can be offline and up and running in a weekend

    You're asking to outlaw something like a spreadsheet.

    You download a general-purpose image AI model, then train and prompt it completely offline

    L_Acacia , (edited )

    The models used are not trained on CP. The model weights are distributed freely and anybody can train a LoRA on their own computer. It's already too late to ban open-weight models.

    autonomoususer , (edited )

    One of two classic excuses, virtue signalling to hijack control of our devices, our computing, an attack on libre software (they don't care about CP). Next, they'll be banning more math, encryption, again.

    It says gullible at the start of this page, scroll up and see.

    DarkThoughts ,

    You don't need CSAM training data to create CSAM images. If your model knows what children look like and what naked human bodies look like, then it can create naked children. That's simply how generative models like this work, and it has absolutely nothing to do with models specifically trained on actual CSAM material.

    So while I disagree with him that lack of education is the cause of CSAM or pedophilia... I'd say it could help with the general hysteria about LLMs, like the ones coming from you, who just let their emotions run wild when those topics arise. You people need to understand that the goal should be the protection of potential victims, not the punishment of victimless thought crimes.

    ricecake ,

    Legally, a sufficiently detailed image depicting csam is csam, regardless of how it was produced. Sharing it is why he got caught, inevitably, but it's still illegal even if he never brought a minor into it.

    Glass0448 ,
    @Glass0448@lemmy.today avatar

    Making the CSAM is illegal by itself https://www.thefederalcriminalattorneys.com/possession-of-lolicon

    Title is pretty accurate.

    sugartits , to Technology in FBI Arrests Man For Generating AI Child Sexual Abuse Imagery

    No no no guys.

    It's perfectly okay to do this as this is art, not child porn, as I was repeatedly told and downvoted when I stated the fucking obvious

    So if it's art, we have to allow it under the constitution, right? It's "free speech", right?

    SeattleRain ,

    Well yeah. Just because something makes you really uncomfortable doesn't make it a crime. A crime has a victim.

    Also, the vast majority of children are victimized because of the US' culture of authoritarianism and religious fundamentalism. That's why far and away children are victimized by either a relative or in a church. But y'all ain't ready to have that conversation.

    sugartits ,

    That thing over there being wrong doesn't mean we can't discuss this thing over here also being wrong.

    So perhaps pipe down with your dumb whataboutism.

    SeattleRain ,

    It's not whataboutism. He's being persecuted because of the idea that he's hurting children, all the while law enforcement refuses to truly prosecute actual institutions victimizing children and is often colluding with traffickers. For instance, LE throughout the country were well aware of the scale of the Catholic church's crimes for generations.

    How is this whataboutism.

    sugartits ,

    Because it's two different things.

    We should absolutely go after the Catholic church for the crimes committed.

    But here we are talking about the creation of child porn.

    If you cannot understand this very simple premise, then we have nothing else to discuss.

    SeattleRain ,

    They're not two different things. They're both supposedly acts of pedophilia, except one would take actual courage to prosecute (churches), while the other, which doesn't have any actual victims, is easy and is a PR get because certain people find it really icky.

    sugartits ,

    I guess we're done here then.

    todd_bonzalez ,

    Yes, case closed. You were wrong. Sucks to suck.

    sugartits ,

    Very mature response. Well done.

    DarkThoughts ,

    Just to be clear here, he's not actually persecuted for generating such imagery like the headline implies.

    catloaf ,

    He might be persecuted, but he's not prosecuted for it.

    DarkThoughts ,

    Fair enough. lol

    Glass0448 ,
    @Glass0448@lemmy.today avatar
    todd_bonzalez ,

    First of all, it's absolutely crazy to link to a 6 month old thread just to complain that you got downvoted in it. You're pretty clearly letting this site get under your skin if you're still hanging onto these downvotes.

    Second, none of your 6 responses in that thread are logical, rational responses. You basically just assert that things that you find offensive enough should be illegal, and then just type in all caps at everyone who explains to you that this isn't good logic.

    The only way we can consider child porn prohibition constitutional is to interpret it as a protection of victims. Since both the production and distribution of child porn hurt the children forced into it, we ban it outright, not because it is obscene, but because it does real damage. This fits the logic of many other forms of non-protected speech, such as the classic "shouting 'fire' in a crowded theatre" example, where those hurt in the inevitable panic are victims.

    Expanding the definition of child porn to include fully fictitious depictions, such as lolicon or AI porn, betrays this logic because there are no actual victims. This prohibition is rooted entirely in the perceived obscenity of the material, which is completely unconstitutional. We should never ban something because it is offensive, we should only ban it when it does real harm to actual victims.

    I would argue that rape and snuff film should be illegal for the same reason.

    The reason people disagree with you so strongly isn't because they think AI generated pedo content is "art" in the sense that we appreciate it and defend it. We just strongly oppose your insistence that we should enforce obscenity laws. This logic is the same logic used as a cudgel against many other issues, including LGBTQ rights, as it basically argues that sexually disagreeable ideas should be treated as a criminal issue.

    I think we all agree that AI pedo content is gross, and the people who make it and consume it are sick. But nobody is with you on the idea that drawings and computer renderings should land anyone in prison.

    sugartits , (edited )

    First of all, it's absolutely crazy to link to a 6 month old thread just to complain that you go downvoted in it. You're pretty clearly letting this site get under your skin if you're still hanging onto these downvotes.

    No, I just... Remembered the thread? Wasn't difficult to remember it. Took me a minute to find it.

    This may surprise you but CP isn't something I discuss very often.

    I don't lose sleep over people defending CP as "art", nor did it get under my skin. I just think these are fucking idiots and are for some baffling reason trying to defend the indefensible and go about my day. I'm not going to do anything about it, but I'm sure glad I don't have such dumb comments linked to a public account with my IP address logged somewhere...

    I just raised it to make my point.

    I didn't bother reading the rest of your essay. Its pretty clear from the first paragraph where you're going to land.

    Greg , to Technology in FBI Arrests Man For Generating AI Child Sexual Abuse Imagery
    @Greg@lemmy.ca avatar

    This is tough, the goal should be to reduce child abuse. It's unknown if AI generated CP will increase or reduce child abuse. It will likely encourage some individuals to abuse actual children while for others it may satisfy their urges so they don't abuse children. Like everything else AI, we won't know the real impact for many years.

    LadyAutumn , (edited )
    @LadyAutumn@lemmy.blahaj.zone avatar

    How do you think they train models to generate CSAM?

    Some of yall need to lookup what an LoRA is

    Dkarma ,

    Lol you don't need to train it ON CSAM to generate CSAM. Get a clue.

    LadyAutumn ,
    @LadyAutumn@lemmy.blahaj.zone avatar

    It should be illegal either way, to be clear. But you think they're not training models on CSAM? You're trusting in the morality/ethics of the people creating AI generated child pornography?

    Greg ,
    @Greg@lemmy.ca avatar

    The use of CSAM in training generative AI models is an issue no matter how these models are being used.

    L_Acacia ,

    The training doesn't use CSAM; there's a 0% chance big tech would use that in their dataset. The models are somewhat able to link concepts like red and car, even if they have never seen a red car before.

    AdrianTheFrog ,
    @AdrianTheFrog@lemmy.world avatar

    Well, with models like SD at least, the datasets are large enough and the employees are few enough that it is impossible to have a human filter every image. They scrape them from the web and try to filter with AI, but there is still a chance of bad images getting through. This is why most companies install filters after the model as well as in the training process.

    DarkThoughts ,

    You make it sound like it is so easy to even find such content on the web. The point is, they do not need to be trained on such material. They are trained on regular kids, so they know their sizes, faces, etc. They're trained on nude bodies, so they also know what hairless genitals or flat chests look like. You don't need to specifically train a model on nude children to generate nude children.

    barsquid ,

    https://purl.stanford.edu/kh752sm9123

    I don't know if we can say for certain it needs to be in the dataset, but I do wonder how many of the other models used to create CSAM are also trained on CSAM.

    DarkThoughts ,

    I suggest you actually download stable diffusion and try for yourself because it's clear that you don't have any clue what you're talking about. You can already make tiny people, shaved, genitals, flat chests, child like faces, etc. etc. It's all already there. Literally no need for any LoRAs or very specifically trained models.

    crazyminner , to Technology in FBI Arrests Man For Generating AI Child Sexual Abuse Imagery

    I had an idea when these first AI image generators started gaining traction: flood the CSAM market with AI generated images (good enough that you can't tell them apart). In theory this would put the actual creators of CSAM out of business, thus saving a lot of children from the trauma.

    Most people downvote the idea on their gut reaction tho.

    Looks like they might do it on their own.

    jaschen ,

    It's also a victimless crime. Just like flooding the market with fake rhino horns and dropping the market price to a point that it isn't worth it.

    Itwasthegoat ,

    My concern is why would it put them out of business? If we just look at legal porn there is already beyond huge amounts already created, and the market is still there for new content to be created constantly. AI porn hasn't noticeably decreased the amount produced.

    Really flooding the market with CSAM makes it easier to consume and may end up INCREASING the amount of people trying to get CSAM. That could end up encouraging more to be produced.

    crazyminner ,

    The market is slightly different tho. Most CSAM is images, while with porn there's a lot of video as well as images.

    DarkThoughts ,

    It's such an emotional topic that people lose all rationale.
    I remember the Reddit arguments in the comment sections about pedos, already equating the term with actual child rapists, while others would argue to differentiate, because the former didn't do anything wrong and shouldn't be stigmatized for what's going on in their heads, but rather offered help to cope with it. The replies are typically accusations of those people making excuses for actual sexual abusers.

    I always had the standpoint that I do not really care about people's fictional content. Be it lolis, torture, gore, or whatever other weird shit. If people are busy & getting their kicks from fictional stuff then I see that as better than using actual real life material, or even getting some hands on experiences, which all would involve actual real victims.

    And I think that should be generally the goal here, no? Be it pedos, sadists, sociopaths, whatever. In the end it should be not about them, but saving potential victims. But people rather throw around accusations and become all hysterical to paint themselves sitting on their moral high horse (ironically typically also calling for things like executions or castrations).

    Cupcake1972 ,

    Yeah, exact same feelings here. If there is no victim then who exactly is harmed?

    Glass0448 ,
    @Glass0448@lemmy.today avatar

    It would be illegal in the United States. Artistic depictions of CSAM are illegal under the PROTECT Act of 2003.

    TheGrandNagus , (edited )

    And yet it's out there in droves on mainstream sites, completely without issue. Drawings and animations are pretty unpoliced.

    Ibaudia , to Technology in FBI Arrests Man For Generating AI Child Sexual Abuse Imagery
    @Ibaudia@lemmy.world avatar

    Isn't there evidence that as artificial CSAM is made more available, the actual amount of abuse is reduced? I would research this but I'm at work.
