theatlantic.com

Marsupial , to Technology in 'LLM-free' is the new '100% organic' - Creators Are Fighting AI Anxiety With an ‘LLM-Free’ Movement
@Marsupial@quokk.au avatar

Good thing about this is it’s self-selecting: all the luddites who refuse to use AI will find themselves at a disadvantage, just as refusing to use a computer doesn’t do anyone any favours.

SkyNTP ,

The benefit of AI is overblown for a majority of product tiers. Remember how everything was supposed to be blockchain? And metaverse? And Web 3.0? And dot.com? This is just the next tech trend for dumb VCs to throw money at.

Zaktor ,

Except those things didn't really solve any problems. Well, dotcom did, but that actually changed our society.

AI isn't vaporware. A lot of it is premature (so maybe overblown right now) or just lies, but ChatGPT is 18 months old and look where it is. The core goal of AI is replacing human effort, which IS a problem wealthy people would very much like to solve and has a real monetary benefit whenever they can. It's not going to just go away.

BurningRiver ,

Can you trust whatever AI you use, implicitly? I already know the answer, but I really want to hear people say it. These AI hype men are seriously promising us capabilities that may appear down the road, without actually demonstrating use cases that are relevant today. “Some day it may do this, or that”. Enough already, it’s bullshit.

Zaktor , (edited )

Yes? AI is a lot of things, and most have well-defined accuracy metrics that regularly exceed human performance. You're likely already experiencing it as a mundane tool you don't really think about.

If you're referring specifically to generative AI, that's still premature, but as I pointed out, the interactive chat form most people worry about is 18 months old and making shocking levels of performance gains. That's not the perpetual "10 years away" it's been for the last 50 years, that's something that's actually happening in the near term. Jobs are already being lost.

People are scared about AI taking over because they recognize it (rightfully) as a threat. That's not because they're worthless. If that were the case you'd have nothing to fear.

PeteBauxigeg ,

ChatGPT didn't begin 18 months ago; the research it originates from has been ongoing for years. How old is AlexNet?

Zaktor ,

I'm comparing ChatGPT's initial benchmarks to its capabilities today. Observable improvements have been made in less than two years. Even if you just want to track time from the development of modern LLM transformers (Attention Is All You Need/BERT), it's still a short history with major gains (AlexNet isn't really meaningfully related). These haven't been incremental changes on a slow and steady march to AI sometime in the sci-fi-scale future.

PeteBauxigeg ,

AlexNet is related, it was the first use of consumer GPUs to train neural networks, no?

Zaktor ,

No, not even remotely. And that's kind of like citing "the first program to run on a CPU" as the start of development for any new algorithm.

PeteBauxigeg ,

As far as I can find out, there was only one use of GPUs for CNNs prior to AlexNet, and it certainly didn't have the impact AlexNet had. Besides, running this stuff on GPUs rather than CPUs is a relevant technological breakthrough; imagine how slow ChatGPT would be running on a CPU. And it's not at all as obvious as it seems: most weather forecasts still run on CPU clusters despite being obvious targets for GPUs.

Zaktor ,

What? AlexNet wasn't a breakthrough in that it used GPUs; it was a breakthrough for its depth and performance on image recognition benchmarks.

We knew GPUs could speed up neural networks in 2004. And I'm not sure that was even the first.

PeteBauxigeg ,

Okay, so some of the advances that ChatGPT uses (consumer GPUs for training) are even older? 😁

CanadaPlus ,

Yes, it's very hyped and being overused. Eventually the bullshit artists will move on to the next buzzword, though, and then there's plenty of tasks it is very good at where it will continue to grow.

Kedly ,

Yeah, but the dot com bubble didn't kill the internet entirely, and the video game bubble that prompted Nintendo to create its own quality seal of approval didn't kill video games entirely. This fad already has useful applications, and when the bubble pops, those applications will survive.

jarfil ,
@jarfil@beehaw.org avatar

Blockchain is used in more places than you'd expect... not the P2P version, or the "cryptocurrency" version, just the "signature-based chained list" one. For example, all signed Git commits form a blockchain.
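
That "chained list" reading can be sketched in a few lines. This is a generic hash chain in plain Python, purely illustrative (it is not Git's actual object format, and the payload strings are made up):

```python
import hashlib

def entry_hash(prev_hash: str, payload: str) -> str:
    # Each entry commits to its predecessor's hash, the way a Git
    # commit references its parent: change one link, break the rest.
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

GENESIS = "0" * 64
chain = []
prev = GENESIS
for payload in ["commit A", "commit B", "commit C"]:
    prev = entry_hash(prev, payload)
    chain.append((payload, prev))

# Rewriting an early entry yields a different hash, which would
# cascade through every later entry in the chain.
tampered = entry_hash(GENESIS, "commit A (edited)")
print(tampered != chain[0][1])  # → True
```

The cryptocurrency versions add consensus and proof-of-work on top, but the tamper-evident linking above is the part Git shares.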

The Metaverse has been bubbling on and off for the last 30 years or so, each iteration it gets slightly better... but it keeps failing at the same points (I think I wrote about it 20+ years ago, with points which are still valid).

Web 3.0, not to be confused with Web3, is the Semantic Web, in the works for the last 20+ years. Web3 is a cool idea for a post-scarcity world, pretty useless right now.

Dot.com was the original Web bubble... and here we are, on the Web, post-bubble.

MayonnaiseArch ,
@MayonnaiseArch@beehaw.org avatar

Luddites were not idiots, they were people who understood the only use of tech at their time was to fuck them. Like this complete garbage shit is going to be used to fuck people. Nobody is opposed to having tools, we just don't like Musk fanboys blowing spit bubbles while trying to get peepee hard

Marsupial ,
@Marsupial@quokk.au avatar

If capitalism is shit you attack capitalism not a technology.

All the misplaced rage and wasted effort.

rysiek ,
@rysiek@mstdn.social avatar
Marsupial ,
@Marsupial@quokk.au avatar

That literally tells the story of a people losing their livelihood due to capitalist usage of technology, and targeting the technology instead of the systemic issue.

princessnorah ,
@princessnorah@lemmy.blahaj.zone avatar

You say this like destroying the technology wasn't their way of targeting the systemic issue? Destroying expensive machinery has been used by labour time and again as a tactic to get the capitalist class to cough up.

MayonnaiseArch ,
@MayonnaiseArch@beehaw.org avatar

Maybe he wants guillotines

princessnorah ,
@princessnorah@lemmy.blahaj.zone avatar

Capitalists have an answer to this, it's called the Pinkertons...

Kedly ,

So, do textile machines not exist anymore? It doesn't sound like them burning down factories stopped the textile factories in the long run.

Rozauhtuno ,
@Rozauhtuno@lemmy.blahaj.zone avatar

Good thing about this is it’s self-selecting: all the technobros who obsess over AI will find themselves bankrupted, like when the blockchain bubble burst.

echodot ,

The blockchain bubble burst because everyone with a brain could see from the start that it wasn't really a useful technology. AI actually does have some advantages so they won't go completely bust as long as they don't go completely mad and start declaring that it can do things it can't do.

Rozauhtuno ,
@Rozauhtuno@lemmy.blahaj.zone avatar

they won’t go completely bust as long as they don’t go completely mad and start declaring that it can do things it can’t do.

Which is exactly what's happening.

echodot ,

The fact that it is useful technology means they'll always have a fallback. It's not going to go away like Bitcoin, I guarantee it.

technocrit ,

Bitcoin went away? It's at like $67k today. Personally I prefer sustainable cryptos but unfortunately Bitcoin is far from dead.

And sure, there's lots of data processing and statistics that's extremely useful. That's been the case for a long time. But anybody talking about "intelligence" is a con.

Zaktor ,

GameStop also went up. It doesn't mean GameStop is a good company that's valuable to own, it just means that dumb people will buy things without value if they think they can eventually pass the bag to someone else. If someone purchased every share of Amazon they'd own a massive asset that would continually produce value for them. If someone bought every outstanding Bitcoin, it not only wouldn't produce ongoing value; the value would actually go to zero.

sonori ,
@sonori@beehaw.org avatar

Like say, treating a program that shows you the next most likely word to follow the previous one on the internet like it is capable of understanding a sentence beyond this is the most likely string of words to follow the given input on the internet. Boy it sure is a good thing no one would ever do something so brainless as that in the current wave of hype.

It’s also definitely because autocompletes have made massive progress recently, and not just because we’ve fed simpler and simpler transformers more and more data to the point we’ve run out of new text on the internet to feed them. We definitely shouldn’t expect that the field as a whole should be valued at what it was back in, say, 2018, when there were about the same number of practical uses and the focus was on better programs instead of just throwing more training data at it and calling that progress that will continue to grow rapidly even though the amount of said data is very much finite.
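
For readers unfamiliar with the mechanism being parodied here, this is a toy bigram model in plain Python. Real LLMs learn transformer weights rather than raw counts, so this is only the statistical framing ("predict the most likely next word from training-data frequencies"), with a made-up corpus:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram frequencies).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    # Return the single most frequent successor seen in the training data.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # → "cat" (follows "the" twice; "mat" and "fish" once each)
```

Nothing in this table encodes what a cat or a mat *is*; it only records which strings tend to follow which, which is the point being made above.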

uis ,
@uis@lemm.ee avatar

AIs are fancy matrix multiplications
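
That quip is close to literal: the core of a neural-network layer is a matrix multiply followed by a simple nonlinearity. A dependency-free sketch with made-up weights (real models just do this at enormous scale):

```python
def matmul(A, B):
    # Naive matrix multiply: each output cell is a row of A
    # dotted with a column of B.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def relu(M):
    # The nonlinearity between layers; without it, stacked
    # matrix multiplies would collapse into a single one.
    return [[max(0.0, x) for x in row] for row in M]

x = [[1.0, -2.0]]               # one input vector
W = [[0.5, -1.0], [-2.0, 0.0]]  # one layer's (made-up) weights
h = relu(matmul(x, W))          # a single "fancy" layer
print(h)  # → [[4.5, 0.0]]
```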

jarfil ,
@jarfil@beehaw.org avatar

Check Q*; Google's Gemini is already using a similar approach.

technocrit ,

When did the blockchain bubble burst? Is crypto dead again? I missed it.

uis ,
@uis@lemm.ee avatar

It dies every year since 2010.

stephen01king ,

Only NFTs died, so I guess part of crypto did.

Kedly ,

How does using free software to play dress up with anime characters bankrupt me financially?

Soundhole , (edited ) to Technology in 'LLM-free' is the new '100% organic' - Creators Are Fighting AI Anxiety With an ‘LLM-Free’ Movement

As a "creator" myself, I'd like to say to my fellow artists who are anti-AI, get over it. AI artists are artists too. Yes there is bad AI art, but there's bad art in every medium. If done with care and skill, AI art can be completely awesome and if you have an open mind, you might even find some space for it in your work. But even if you don't, have some respect for the AI artists out there who put time and effort into their craft. There's room for everyone.

darkphotonstudio ,

Exactly, and if you are a trained artist, you can mop the floor with someone who only uses prompts. I've been using the diffusion plugin for Krita and it is so powerful. You have the ability to paint, use layers and filters, and near real-time AI fills. It's awesome and fun.

Ilandar ,

But even if you don’t, have some respect for the AI artists out there who put time and effort into their craft.

What kind of time and effort? How is AI art a skill that is comparable to real art? I am genuinely asking here, I'd like to understand your work process.

I am not a visual artist, but I have composed my own music and the amount of time and/or effort needed to create a comparable piece using generative AI is not even close to being the same. I think there is a place for AI tools that assist artists, but people generating entire pieces using AI and then referring to themselves as "artists" is honestly delusional and sad. I hope that's not what you are referring to here.

unconfirmedsourcesDOTgov ,

Not OP but familiar enough with open source diffusion image generators to be able to chime in.

Now I'd argue that being an artist comes down to being able to envision something in your mind's eye and then reproduce it in the real world using some medium, whether it's a graphite pencil, oil paint, a block of marble, Wacom tablet on a pc, or even through a negotiation with an AI model. Your definition might be different, but for the sake of conversation this is how I'm thinking about it.

The workflow for an AI-generated image can have a few steps before it sufficiently aligns with your vision. Prompting for specific details can be tricky, so usually step 1 is to generate the basic outline of the image you're after. Depending on your GPU or cloud service, this could take several minutes or hours before you get a basis that you can work with. Once you have the basic image, you can then use inpainting tools to mask specific areas of the image and change specific details, colors, etc. This again can take many, many generations before you land on something that sufficiently matches your vision.

This is all also after you go through the process of reviewing and selecting one of the hundreds of models that have been trained specifically for different types of output. Want to generate anime-style art? There's a model for that, want something great at landscapes? There's a different one for that. Surely you can use an all-purpose model for everything, but some models simply don't have the training to align to your vision, so you either choose to live with 'close enough' or you start downloading new options, comparing them with your existing work flow, etc.

There's certainly skill associated with the current state of image generation. Perhaps not the same level of practice you need to perfectly represent a transparent veil in graphite, but as with other formats I have a hard time suggesting that when someone represents their vision in the real world that it's automatically "not art".

Pandemanium ,

So if I walked into a restaurant that specialized in a certain cuisine (choosing the right one out of hundreds is a skill, right?) and wrote down a list of ingredients, and the restaurant made me a meal with those ingredients according to however the restaurant functions (nobody can see into the kitchen, after all), does this make me a chef?

unconfirmedsourcesDOTgov ,

Is there any chance you're at a kbbq or hotpot restaurant? Because then you get to cook the meal yourself, which is arguably chef-like.

Jokes aside, I see the comparison you're making and it's not a bad one. I'd counter by giving the example of a menu - when you get to a restaurant you're given a menu with text descriptions of the food you can receive from the kitchen. Since this is an analogy and not an exact comparison, let's say that a meal on the menu is like the starting point of the workflow I described.

Based on that you have an idea of what the output will be when you order - but let's say you don't like mushrooms and you prefer your sauce on the side. When you make your order you provide those modifications - this is like inpainting.

Certainly you're not a 'chef', but if the dish you design is both bespoke and previously unimaginable, I'd argue that at the very least you contributed to the creative process and participated in creating something new that matches your internal vision.

Not exactly the same but I don't think it's entirely different.

Ilandar ,

You keep using the word "vision", but I have a hard time understanding how an AI artist has a vision equivalent to that of a traditional artist based on the explanation you've provided. It still sounds like they are just cycling through AI-generated options until they find something they like/that looks good. That is not the same as seeing something in your mind and then manually recreating that to the best of your ability.

Zaktor ,

Is a photographer an artist? They need to have some technical skill to capture sharp photos with good lighting, but a lot of the process is designing a scene and later selecting among the photos from a shoot for which one had the right look.

Or to step even further from the actual act of creation, is a creative director an artist? There's certainly some skill involved in designing and recognizing a compelling image, even if you were not the one who actually produced it.

Ilandar ,

You're sort of stepping around the issue here. Are you confirming that AI art is about cycling through options blind until you stumble across something you like?

Zaktor ,

No, both of those examples involve both design and selection, which is reminiscent of the AI art process. They're not just typing in "make me a pretty image" and then refreshing a lot.

Ilandar ,

They’re not just typing in “make me a pretty image” and then refreshing a lot.

The only explanation I've received so far sounded exactly like this, just with more steps to disguise the underlying process.

Zaktor ,

It isn't. People design a scene and then change and refine the prompt to add elements. Some part of it could be refreshing the same prompt, but that's just like a photographer taking multiple photos of a scene they've directed to catch the right flutter of hair or a dress or a creative director saying "give me three versions of X".

Ready to get back to my original questions?

Soundhole ,

Well now that's just close minded!

Go back and read discussions about synthesisers when they first arrived on the scene and you will see much wailing and gnashing of teeth about how synths are not real instruments, and so on. Then do the same for when hip-hop went mainstream and people said it wasn't "real" music because the musicians didn't perform with "real" instruments.

You see where I'm going with this? There's lots of examples like these in music and visual arts and they nearly always stem from ignorance.

I don't know anything about AI music generation, but visual art can be generated by AI models on local machines with a great amount of fine tuning and depth. Further, people feed their original artwork into the AI and manipulate that, so it's not so cut and dry. This idea that folks just write a sentence and the computer barfs out an image is uninformed.

Anyways, I'm blabbing. Hope that helps.

Ilandar ,

You see where I'm going with this?

No, I'm sorry but those are terrible examples. Synthesisers still require full creative control and an understanding of sound production techniques to create a custom sound. Some musicians rely on presets and samples, but even then they still need to be capable of actually composing a piece of music. Also, the debate was largely about whether synthesisers could be considered real instruments, not whether the music created by synthesisers was real music. The Hip Hop comparison is completely irrelevant and an even worse attempt at conflating genuine criticism of AI "musicians" with "old people are just mad".

I don't know anything about AI music generation

It's literally just prompts AFAIK, so the people making it don't require any musical talent, ability or creativity. They are just asking someone/something else to make them music that has a certain sound. It's the equivalent of a monarch commissioning a piece of work from their court musician and then claiming they are a musician too.

visual art can be generated by AI models on local machines with a great amount of fine tuning and depth.

Are there specific pieces of AI art software people use? Any popular ones you can recommend to help me understand the process better?

Soundhole ,

Ah, you are picking apart the examples instead of taking in the point. Well, I tried.

To answer your question, yes. Automatic1111 and ComfyUI are two of the most popular.

Ilandar ,

It was a terrible and irrelevant point, as I explained. Thanks for the links though, I will check them out.

Soundhole ,

It's really not.

Maybe someday you'll do some research into the history of art and music and get some context on how technology has influenced both, and on the repeating pattern of reactionary art that tends to get produced, by artists you've never heard of, whenever that happens.

Or maybe you won't!

Either way, good luck.

Ilandar ,

Err, you admitted yourself that you are absolutely clueless when it comes to AI music generation. So yes, your "point" was a bad one and clearly came from a place of complete ignorance.

jarfil ,
@jarfil@beehaw.org avatar

What kind of time and effort?

AI art can require training a model, or a LoRA for a model, which requires choosing a series of samples and annotating them for the parts you want to incorporate. After that, writing a prompt can involve several paragraphs defining what you want it to output, followed by a series of iterations and a personal choice among the outputs.

How is AI art a skill that is comparable to real art?

How is stacking 10 buckets of sand and letting them fall in an art gallery, comparable to real art? Dunno, but they call it that: "real art".

Art is a communication act that requires some sort of vision, intended to elicit some sort of emotional response in the receiver, and a series of steps to achieve that.

As long as there is a vision and an intent, the series of steps required to create art with AI, are comparable to any other series of steps conducting to the creation of art with any other medium.

For a rough estimate, you can compare the number and difficulty of the steps, and the effectiveness of the communication.

people generating entire pieces using AI and then referring to themselves as "artists" is honestly delusional and sad

Let me refer you to the aforementioned sand bucket... sculpture? or the renowned orchestral piece "A minute of silence", or paintings like "Black square", or more performative pieces like "Banana duct taped to a wall".

There will always be artists, and "artists".

Ilandar ,

I'm not sure equating AI art to sand bucket man is the glowing endorsement you think it is.

jarfil ,
@jarfil@beehaw.org avatar

I think you misunderstood: "sand bucket man" is the bar for human art.

AI art has been above that bar for at least a decade, maybe two. Modern AI art is orders of magnitude beyond it, even with the simplest of prompts.

Ilandar ,

How is stacking 10 buckets of sand and letting them fall in an art gallery, comparable to real art? Dunno, but they call it that: “real art”.

Your insinuation here was that AI art is "real art" because someone once stacked 10 buckets of sand and called it "real art". It comes across as pretty desperate that you relied on a comparison with something as questionable as this to argue that AI art is the equivalent of traditional art. As you said, there will always be artists and "artists". Sounds like AI "artists" fit in quite well with the latter group.

jarfil ,
@jarfil@beehaw.org avatar

Let me clarify: I've seen the sand bucket guy's art featured twice on the news in the past few days, filmed at an art gallery, described as art, commented as being art. It's not some random event, it's the current publicly accepted definition of "art".

My statement, not insinuation, as to why AI art is comparable to "traditional" art, comes after that.

What comes across as desperate however, is generalizing all AI output and disparaging it, without considering the quality of input from the person behind it. Reminds me of how photography used to not be art, how electric instruments couldn't be art, or how using a computer couldn't be art either. Tools don't make or break an artist.

teawrecks , to Technology in 'LLM-free' is the new '100% organic' - Creators Are Fighting AI Anxiety With an ‘LLM-Free’ Movement

So this could go one of two ways, I think:

  1. the "no AI" seal is self-ascribed using the honor system and over time enough studios just lie about it or walk the line closely enough that it loses all meaning and people disregard it entirely. Or,
  2. getting such a seal requires 3rd party auditing, further increasing the cost to run a studio relative to their competition, on top of not leveraging AI, resulting in those studios going out of business.
lvxferre , (edited )
@lvxferre@mander.xyz avatar

3. If you lie about it and get caught people will correctly call you a liar, ridicule you, and you lose trust. Trust is essential for content creators, so you're spelling your doom. And if you find a way to lie without getting caught, you aren't part of the problem anyway.

CanadaPlus ,

And if you find a way to lie without getting caught, you aren’t part of the problem anyway.

I was about to disagree, but that's actually really interesting. Could you expand on that?

lvxferre , (edited )
@lvxferre@mander.xyz avatar

Do you mind if I address this comment alongside your other reply? Both are directly connected.

I was about to disagree, but that’s actually really interesting. Could you expand on that?

If you want to lie without getting caught, your public submission should have neither the hallucinations nor stylistic issues associated with "made by AI". To do so, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.

In other words, to lie without getting caught you're getting rid of what makes the output problematic in the first place. The problem was never people using AI to do the "heavy lifting" to increase their productivity by 50%; it was instead people increasing their output by 900% and submitting ten really shitty pics or paragraphs that look a lot like someone else's, instead of one decent and original piece. Those are the ones who'd get caught, because they're doing what you called "dumb" (and I agree): not proof-reading their output.

Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art.

CanadaPlus , (edited )

Yes, sorry, I didn't realise I was replying to the same user twice.

The problem was never people using AI to do the “heavy lifting” to increase their productivity by 50%; it was instead people increasing the output by 900%, and submitting ten really shitty pics or paragraphs, that look a lot like someone else’s, instead of a decent and original one.

Exactly. I guess I'm conditioned to expect "AI is smoke and mirrors" type comments, and that's not true. They're genuinely quite impressive and can make intuitive leaps they weren't directly trained for. What they're not is aligned; they just want to create human-like output, regardless of truth, greater context or morality, because that's the only way we know how to train them.

I definitely hate searching something, and finding a website that almost reads as human with fake "authors", but provides no useful information. And I really worry for people who are less experienced spotting AI errors and filler. That's a moral issue, though, as opposed to a practical one; it seems to make ad money perfectly well for the "creators".

Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art.

TIL. They're going to have trouble identifying rulebreakers if contributors use the tool correctly the way we've discussed, though.

teawrecks ,

I think the first half of yours is the same as my first, and I think a lot of artists aren't against AI that produces worse art than them; they're against AI art that was generated using stolen art. They wouldn't be part of the problem if they could honestly say they trained using only ethically licensed/their own content.

Melody ,
Mac , to Technology in 'LLM-free' is the new '100% organic' - Creators Are Fighting AI Anxiety With an ‘LLM-Free’ Movement

we should punish tech companies for ruining the Internet with AI.

darkphotonstudio ,

The internet was ruined before AI.

lvxferre , to Technology in 'LLM-free' is the new '100% organic' - Creators Are Fighting AI Anxiety With an ‘LLM-Free’ Movement
@lvxferre@mander.xyz avatar

For writers, that "no AI" is not just the equivalent of "100% organic"; it's also the equivalent of saying "we don't let the village idiot write our texts when he's drunk".

Because, even as we shed off all paranoia surrounding A"I", those text generators state things that are wrong, without a single shadow of doubt.

Zaktor ,

Sometimes. Sometimes it's more accurate than anyone in the village. And it'll be reliably getting better. People relying on "AI is wrong sometimes" as the core plank of opposition aren't going to have a lot of runway before it's so much less error-prone than people that the complaint is irrelevant.

The jobs and the plagiarism aspects are real and damaging and won't be solved with innovation. The "AI is dumb" criticism is already only selectively true, and almost all the technical effort is going toward reducing that. ChatGPT launched a year and a half ago.

lvxferre ,
@lvxferre@mander.xyz avatar

Sometimes. Sometimes it’s more accurate than anyone in the village.

So does the village idiot. Or a tarot player. Or a coin toss. And you'd still be a fool if your writing relied on the output of those three. Or of an LLM bot.

And it’ll be reliably getting better.

You're distorting the discussion from "now" to "the future", and then vomiting certainty on future matters. Both things make me conclude that reading your comment further would be solely a waste of my time.

Zaktor ,

You're lovely. Don't think I need to see anything you write ever again.

Ilandar ,

Yes, I always get the feeling that a lot of these militant AI sceptics are pretty clueless about where the technology is and the rate at which it is improving. They really owe it to themselves to learn as much as they can so they can better understand where the technology is heading and what the best form of opposition will be in the future. As you say, relying on "haha Google made a funny" isn't going to cut it forever.

Zaktor ,

Yeah. AI making images with six fingers was amusing, but people glommed onto it like it was the savior of the art world. "Human artists are superior because they can count fingers!" Except then the models updated and it wasn't as much of a problem anymore. It felt good, but it was just a pleasant illusion for people with very real reasons to fear the tech.

None of these errors are inherent to the technology; they're just bugs to correct, and there's plenty of money and attention focused on fixing bugs. What we need is more attention focused on either preparing our economies to handle this shock or greatly strengthening enforcement of copyright (to stall development). A label like the one this post is about is a good step, but given how artistic professions already weren't particularly safe and "organic" labeling only has modest impacts on consumer choice, we're going to need more.

sonori ,
@sonori@beehaw.org avatar

Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word to appear in the sentence, based on the frequency with which said words appeared in the training data, is a fundamental limitation of the technology.

So long as a model has no regard for the actual, you know, meaning of the word, it definitionally cannot create a truly meaningful sentence. Instead, in order to get a coherent output the system must be fed training data that closely mirrors the context; this is why groups like OpenAI have been met with so much success by simplifying the algorithm while progressively scraping more and more of the internet into said systems.

I would argue that a similar inherent technological limitation also applies to image generation: until a generative model can both model a four-dimensional space and conceptually understand everything it has created in that space, a generated image can only be as meaningful as the parts it has regurgitated from the work of the tens of thousands of people who do those things effortlessly.

This is not required to create images that can pass as human-made, but it is required to create ones that are truly meaningful on their own merits and not just the merits of the material they were created from, and nothing I have seen said by experts in the field indicates that we have found even a theoretical pathway to get there from here, much less that we are inevitably progressing on that path.

Mathematical models will almost certainly get closer to mimicking the desired parts of the data they were trained on with further instruction, but it is important to understand that is not a pathway to any actual conceptual understanding of the subject.

Zaktor ,

Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word in the sentence, based on the frequency with which those words appeared in the training data, is a fundamental limitation of the technology.

So long as a model has no regard for the actual, you know, meaning of the word, it definitionally cannot create a truly meaningful sentence.

This is a misunderstanding of what "probabilistic word choice" can actually accomplish, and of the non-probabilistic systems incorporated into these models. People also make mistakes and don't actually "know" the meanings of words.

The belief system that humans have special cognizance unlearnable by observation is just mysticism.

sonori ,
@sonori@beehaw.org avatar

To note the obvious, a large language model is, by definition, at its core a mathematical formula plus a massive collection of values between zero and one which, when combined, give a weighted probability that word B follows word A, crossed against another weighted word cloud given as the input 'context'.

A neuron in machine-learning terms is a matrix (i.e. a table) of numbers between zero and one. By contrast, a single human neuron is a biomechanical machine with literally hundreds of trillions of moving parts that dwarfs any machine humanity has ever built in terms of complexity. And that's just a single one of the 86 billion neurons in an average human brain.
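For concreteness, the machine-learning "neuron" being contrasted here really is this small; a minimal sketch in Python with made-up weights (real networks stack millions of these):

```python
import math

# A "neuron" in machine-learning terms: a weighted sum of inputs
# passed through a simple squashing function. Weights are made up.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid: output in (0, 1)

out = neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1)
print(out)  # a single number between 0 and 1
```

The entire unit is one dot product and one squashing function, which is the "high school math problem" side of the comparison being made.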

LLMs and organic brains are completely different in design, complexity, and function, and to treat them as closely related, much less synonymous, betrays a complete lack of understanding of how one or both of them fundamentally function.

We do not teach a kindergartner to write by having them read for thousands of years until they recognize the exact mathematical odds that string of letters B comes after string A and is followed by string C x percent of the time. Indeed, humans don't naturally compose sentences one word at a time starting from the beginning; instead we start with the key concepts we wish to express and then fill in the phrasing and grammar.

We also would not expect that increasing from hundreds of years of reading to thousands would improve things, and the fact that this is the primary way we've seen progress in LLMs over the last half decade is yet another example of why animal learning and a word cloud are very different things.

For us, a word actually correlates to a concept of what that word represents. We might make mistakes and misunderstand which concept a given word maps to in a given language, but we do generally expect it to correlate to something. To us, a chair is an object made to sit on, and not just the string of letters that comes after the word 'the' in 0.0021798 percent of cases, weighted against the 0.0092814 percent of cases related to the collection of strings being used as the 'context'.

Do I believe there is something intrinsically impossible for a mathematical program to replicate about human thought? Probably not. But this is not that, and it is nowhere close to that on a fundamental level. It's comparing apples to airplanes and saying that soon this apple will inevitably take anyone it touches to Paris because they're both objects you can touch.

Zaktor , (edited )

None of these appeals to relative complexity, low-level structure, or training corpora bears on whether a human or a NN "knows" the meaning of a word in some special way. A lot of your description of what "know" means could be mistaken for a description of how Word2Vec encodes words. This just indicates ignorance of how ML language processing works. It's not remotely on the same level as a human brain, but your view of how things work and what its failings are is simply wrong.

localhost ,

technology fundamentally operates by probabilistically stringing together the next most likely word in the sentence, based on the frequency with which those words appeared in the training data

What you're describing is a Markov chain, not an LLM.

So long as a model has no regard for the actual, you know, meaning of the word

It does, that's like the entire point of word embeddings.

sonori ,
@sonori@beehaw.org avatar

Generally the term Markov chain is used to describe a model with a few dozen weights, while the "large" in large language model refers to having millions or billions of weights, but the fundamental principle of operation is exactly the same; they just differ in scale.
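Scale aside, the counting mechanism under debate fits in a few lines; a toy bigram Markov chain over a made-up corpus (an LLM replaces this count table with billions of learned weights and a much longer context window):

```python
import random
from collections import defaultdict

# Toy corpus; the counting principle is the same at any scale
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often word B follows word A
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def next_word(word):
    # Sample the next word in proportion to its observed frequency
    followers = counts[word]
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation starting from "the"
w = "the"
out = [w]
for _ in range(5):
    if not counts[w]:
        break  # dead end: this word was never seen with a follower
    w = next_word(w)
    out.append(w)
print(" ".join(out))
```

This sketch literally is "word B follows word A with some frequency"; whether scaling that idea up changes it in kind is exactly the point the two commenters disagree on.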

Word embeddings are when you associate a mathematical vector with each word as a way of weighting similar words together. I don't think anyone would argue that the general public can even solve a matrix equation, much less that they can only comprehend a stool by going down a row in a matrix to get the mathematical similarity between a stool, a chair, a bench, a floor, and a cat.

Subtracting vectors from each other can give you a lot of things, but not the actual meaning of the concept represented by a word.
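Whether or not this amounts to meaning, the vector arithmetic being referenced is concrete; a toy sketch with hypothetical 4-dimensional embeddings (real models learn hundreds of dimensions from co-occurrence statistics):

```python
import math

# Hypothetical 4-d embeddings; the values here are invented for
# illustration, not taken from any trained model.
emb = {
    "stool": [0.9, 0.8, 0.1, 0.0],
    "chair": [1.0, 0.9, 0.2, 0.1],
    "cat":   [0.0, 0.1, 0.9, 0.8],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: math.sqrt(sum(a * a for a in x))
    return dot / (norm(u) * norm(v))

# Nearby vectors encode that "stool" is used like "chair", not like "cat"
print(cosine(emb["stool"], emb["chair"]))  # ~0.99
print(cosine(emb["stool"], emb["cat"]))    # ~0.12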

localhost ,

I don’t think that anyone would argue that the general public can even solve a mathematical matrix, much less that they can only comprehend a stool based on going down a row in a matrix to get the mathematical similarity between a stool, a chair, a bench, a floor, and a cat.

LLMs rely on billions of precise calculations and yet they perform poorly when tasked with calculating numbers. Just because we don't calculate anything consciously to get a meaning of a word doesn't mean that no calculations are actually done as part of our thinking process.

What's your definition of "the actual meaning of the concept represented by a word"? How would you differentiate a system that truly understands the meaning of a word vs a system that merely mimics this understanding?

sonori ,
@sonori@beehaw.org avatar

No part of a human or animal brain operates on subtracting tables of cleanly defined numbers from each other so I think it’s pretty safe to say that no matrix calculation is done on a handful of numbers as part of much less as our sole means of understanding concepts or objects.

I don’t know exactly how one could tell true understanding from minicry, far smarter and more well researched people than me have debated that for decades, i’m just pretty sure what we think an kindness is boils down to something a bit more complex than a high school math problem discribing a word cloud.

localhost ,

So you're basically saying that, in your opinion, tensor operations are too simple of a building block for understanding to ever appear out of them as an emergent behavior? Do you feel that way about every mathematical and logical operation that a high school student can perform? That they can't ever in whatever combination create a system complex enough for understanding to emerge?

sonori ,
@sonori@beehaw.org avatar

They are definitely to simple to represent the entirety of an concepts meaning on their own. Yep, I don’t believe it’s likely that such an incrediblely intricate thing as a nuron, much less the idea of conceptual meaning, can be replicated by a high school math problem. Maybe they could be a part, but your off by about a half a dozen order of magnitude at least from where we are now with love being a matrix with a few hundred numbers in it.

CanadaPlus ,

Occasionally. If you aren't even proofreading it that's dumb, but it can do a lot of heavy lifting in collaboration with a real worker.

For coders, there's actually hard data on that. You're worth about a coder and a half using CoPilot or similar.

theangriestbird , to Technology in 'LLM-free' is the new '100% organic' - Creators Are Fighting AI Anxiety With an ‘LLM-Free’ Movement
@theangriestbird@beehaw.org avatar

I hate how the Atlantic will publish well-thought pieces like this, and then turn around and publish op-eds like this that are practically drooling with lust for AI.

averyminya ,
WldFyre ,

That's what op-eds are for though haha

Ilandar ,

From the article:

The Atlantic has a corporate partnership with OpenAI. The editorial division of The Atlantic operates independently from the business division.

CanadaPlus , (edited )

Shouldn't we be glad they're publishing both viewpoints? (On a real, actively debated issue, I don't mean the "both sides" shit)

Etterra , to Technology in EVs Could Last Nearly Forever—If Car Companies Let Them

Good luck with that. Planned obsolescence is a key ingredient in capitalism. I mean what better way to make line go up than to turn a one-time purchase into a repeat purchase? This shareholders and executives will never be able to step on the working class if they can't gouge customers. Won't anyone think of the shareholders?

normanwall ,

As soon as a car company figures out autonomous taxis you will see them go super modular for repairability

It will be too profitable

mojofrododojo , to Technology in EVs Could Last Nearly Forever—If Car Companies Let Them

power density just needs to grow until someone can easily kit-swap a range of battery and motor options into any platform - then we can ev-ify whatever we want to drive around.

Snapz , (edited ) to Technology in EVs Could Last Nearly Forever—If Car Companies Let Them

"EVs won't last nearly forever."

Mio , to Technology in EVs Could Last Nearly Forever—If Car Companies Let Them

It would be wonder if they last forever and easly could be repaired. Making it better to keep the car then buy a new one. It just need to be upgradedable to the latest standards that might be more safe, efficient and agree with current law.

But I am pretty that would never exist - too hard.

Venator , (edited )

There's not much room for improvement in terms of efficiency for EVs, except maybe lower rolling resistance tyres and better aero. You generally have to replace the whole car for better aero though unless you don't mind having some bolt on mods 😂

Venator , (edited )

Batteries capacity per m^3 and/or per KG is improving over time though, so that's where the main reason to upgrade an EV would come from.

reksas , to Technology in EVs Could Last Nearly Forever—If Car Companies Let Them

Obviously they wont "let" them. Why would they ever do that? They have to be made to do it. But I hope i'm wrong, we will see.

dantheclamman , to Technology in EVs Could Last Nearly Forever—If Car Companies Let Them
@dantheclamman@lemmy.world avatar

I think people need to start being educated about how their climate influences how they can use the electric car. Many people know if they live by the sea or where roads are salted that corrosion is an issue. But people might not be aware that with some EVs, they should leave it plugged in if they're in an extreme climate, so the car can air condition or heat the battery. I caused some battery degradation to my Volt because I wasn't able to leave it plugged in living in Tucson.

the_third ,

That is too general of a statement. I have three EVs in my family, none of them do any temp condition of the battery just by being plugged in. However, EVCC turns off the wallbox when they reach 75% SoC and there is no appointment that day in our shared calendar. Sitting at high SoCs kills batteries, especially in warm climates.

Techranger ,

You have a point; some EVs like the Leaf don't even have conditioning. The Volt does have active conditioning, and being a PHEV instead of a BEV has battery charge and discharge limits which were limited by the factory to preserve longevity at the expense of being able to charge to a true 100%. If extra range is needed the ICE is activated instead of stressing the traction battery.

NaoPb ,

Have you ever been driven the Desert Bus from Tucson to Las Vegas on that Genesis game?

tonyn , to Technology in EVs Could Last Nearly Forever—If Car Companies Let Them

Same goes for light bulbs

Aux ,

LED bulbs last pretty much forever.

dan ,
@dan@upvote.au avatar

Yeah I've only ever had one LED bulb die, and I think that was because it was faulty in some way. I've had a much better experience with them compared to CFLs.

m0darn ,

I've had lots of led bulbs die. I think it's because I bought them at the dollar store.

RippleEffect ,

And finding quality ones that will last a long time is more difficult than you might think.

Many of them are made cheaply.

Krauerking ,

Usually it's a badly designed heat sink that's meant to cause an eventual short so that it has to be replaced. Or just shoddy low material builds. LEDs really can last an obscene amount of time and they don't die another part does.

Defectus ,

They get dimmer over time. And they do it gradually so you don't notice it until you buy a new one and realize how dim the old one was

themeatbridge ,

Most LEDs run on DC, and the built-in transformer is the most likely component to fail. If the LED is failing and getting dimmer, it's most likely due to poor heat dissipation.

If we had little 12v adapters and separate LED modules, you could reduce waste by only replacing the part that fails, and manufacturers would have greater incentive to improve build quality. Instead, we get cheaply manufactured bulb-shaped disposable units that need to be thrown away when one part fails.

fruitycoder ,

Honestly considering going to DC lighting after my solar conversion completes at my house for this reason

themeatbridge ,

I have some dc lighting in my basement. It's great, but there aren't as many options out there and electricians don't want to touch it.

fruitycoder ,

I was looking at rv lighting as some options over wise just doing custom jobs (LEDs in whatever fixtures I think look nice). It helps like domes, reccesed, and ambiant lighting I think.

Oh yeah electricians are allergic to DC lol (I used to be one, and yeah that was big knowledge gap in codes, breakers, etc).

Defectus ,

Yeah. Its about 50/50 for the ones who failed me. Gets too hot and burn out or the power supply fails. More prevalent in the compact formats like spots and g8 or g4.

kuhore ,

Well yes, but the light would be very dim, if we are talking about incandescent bulbs.

Technology connections had an episode about it.

KingThrillgore , to Technology in EVs Could Last Nearly Forever—If Car Companies Let Them
@KingThrillgore@lemmy.ml avatar

Well that's just not going to happen.

Pacmanlives , to Technology in EVs Could Last Nearly Forever—If Car Companies Let Them

“Unlike gas-powered engines—which are made up of thousands of parts that shift against one other—a typical EV has only a few dozen moving parts. That means lessdamage and maintenance, making it easier and cheaper to keep a car on the road well past the approximately 200,000-mile average lifespan of a gas-powered vehicle. And EVs are only getting better. “There are certain technologies that are coming down the pipeline that will get us toward that million-mile EV,” Scott Moura, a civil and environmental engineer at UC Berkeley, told me. That many miles would cover the average American driver for 74 years. The first EV you buy could be the last car you ever need to purchase.“

No way a car would last me and my family 74 years. First year I owned my car I put on almost 35k. Was driving 100 miles back and forth to work at that time. We typically take a road trip from colorado to near Vermont every year for a vacation.

A lot of midwesterns will drive 14 hours to get some where

BlackAura ,

At best case 60 miles an hour... Your commute was more than 90 mins? Ugh. That's awful.

You weren't clear if that was round trip or not, so possibly more than 180 mins? How did you find time to sleep!?

dan ,
@dan@upvote.au avatar

In the San Francisco Bay Area, it's not uncommon for people that work here but can't afford to live here to have commutes of over an hour with good traffic (2+ hours with heavy traffic) each way. That's the case in a few major metro areas in countries like the USA and Australia.

Pacmanlives ,

Yeah Bay Area and LA traffic is next level. My condolences to those souls who make that drive every day

Pacmanlives ,

Round trip was 100 miles every day. This was rural Ohio driving to Columbus so it was not to bad 2 and 4 lane roads till you hit the city most of them time. If we got a lot of snowfall it could super suck but I was from NE Ohio so most of the time it was not that much white knuckle driving. You just listen to a lot of audiobooks and podcasts or call some friends on your hour or so drive home

asret ,

Sure, there's always going to be outliers. Most people live and work in the same metropolitan area though - they're not driving 50,000km+ a year.
Besides, having a vehicle with 5 times the effective lifetime is going to be a big win regardless of how much you drive it.

  • All
  • Subscribed
  • Moderated
  • Favorites
  • kbinchat
  • All magazines