Do you mean AI, just Generative models, or LLMs in particular? I'm pretty thoroughly convinced that AI is a general solution to automation, while generative models are only a partial but very powerful solution.
I think the larger issue is actually that displacement from the workforce causes hardship to those who have been displaced. If that were not the case, most people either wouldn't care or would actively celebrate their jobs being lost to automation.
As someone who has sometimes been accused of being an AI cultist, I agree that it's being pursued far too recklessly, but the people I argue with don't usually give very good arguments. Specifically, I keep running into people who argue from the assumption that AI "aren't real minds" and try to draw moral conclusions from that. This fails for two reasons: 1. we cannot know whether AI have internal experiences, and 2. a tool being sapient would have more complicated moral dynamics than the alternative. I don't know how much this helps you, but if you didn't know before, you know now.
Edit: y'all're seriously downvoting me for pointing out that a question is unanswerable when it's been known to be such for centuries. Read a fucking philosophy book, ffs.
You're making the implicit assumption that an entity that lacks memory necessarily has no internal experience, which is not something we can know or test for. Furthermore, there's no law of the universe stating that something created by humans cannot have an internal experience, and we have no way of knowing whether something we create does.
You can think of LLMs like a hyper-advanced autocorrect.
Yes; this is functionally what LLMs are, but the scope of the discussion extends beyond LLMs, and this doesn't address my core complaint about how these arguments are being conducted. Generally, though maybe not universally, if a core premise of your argument is "X works differently than humans," your argument won't be valid. I'm not currently making a claim of substance; I'm critiquing a tactic and pointing out that, among other things, it relies on a bad foundation.
If you want another way to make the argument, consider focusing on the practical implications of current and future technologies under current and hypothetical ways of structuring society. For example: generative AI, as a novel form of automation, making images will lead to the displacement of artists; art is being used without consent to train these models, which are then used for profit; and so on.
Not "by my definitions" but by the simple fact that we can't test for it. Technically, no one knows whether any other individual has internal experiences. I know for a fact that my sensorium provides me data, and if I assume that data is at all accurate, I can be reasonably confident that other entities that look and behave similarly to me exist. However, I can't verify that any of them have internal experiences the way I do. Sure, it's reasonable to expect that, so we can add it to the pile of assumptions we've been working with so far without much issue. What about other animals, like dogs? They have the same computational substrate and the same mechanism for making those computations. I think it's reasonable to say animals probably have internal experiences, but I've met multiple people who insist they somehow know they don't, and so animal abuse is a myth. Now, if we assume animals have internal experiences, what about nematodes? Nematode brains are simple enough that you can run them on a computer. If animals have internal experiences, does that include nematodes, and if so, does the simulated nematode brain have internal experiences? If a computer's subroutine can have internal experiences, what about the computer?
Do you now understand what I'm saying and why? Where's the line drawn? As far as I can tell, the only honest answer is to admit ignorance.
Beijing ramps up pressure over ‘crime of secession’ while Taipei says China has no jurisdiction over Taiwan and urges its people not to be intimidated...
Degenerate, 88, 14, the Roman salute, multiple names, the fasces, shaven heads, lightning motifs, runic symbols.
That's just what I came up with off the top of my head. The other person is right, and I say we should reclaim every symbol, because those fuckers shouldn't be allowed to call anything their own or have anything to rally around or identify each other with. The only symbol I'm aware of that they made is the black sun, which is itself simply the SS symbol repeated around a circle, and the SS symbol is itself an appropriated rune.
Does the author think LLMs are Artificial General Intelligence? Because they're definitely not.
AGI is, at minimum, capable of taking input and giving output in any domain that a human can, which no generative neural network is currently capable of. For one thing, generative networks are incapable of reciting facts reliably, which immediately disqualifies them.
Russian bots with a Kremlin disinformation network published 120,000 fake anti-Ukraine quotes falsely attributed to celebrities, including Jennifer Aniston and Scarlett Johansson, in one day, the independent Russian media outlet Agentsvo reported June 15....
Also: voting is important because it lets you choose your enemy. Progressive liberals and social democrats won't fight against you as hard as conservatives and fascists.
Putting this here because some people might read this and think "Voting doesn't matter."
Emmanuel Macron, the French president, has announced that he is dissolving the national assembly, and calling for legislative elections on June 30 and July 7....
A woman has been found dead inside the belly of a snake after it swallowed her whole in central Indonesia, a local official said Saturday, marking at least the fifth person to be devoured by a python in the country since 2017....
To my knowledge: they constrict, which could theoretically result in crushing, but typically the prey dies of suffocation or lack of circulation.
I was also under the impression that they couldn't eat humans because our shoulders are so much wider than our heads, but that's clearly wrong, so take it with a grain of salt.
For all your boycotting needs. I'm sure there's some mods caught in lemmy.ml's top 10 that are perfectly upstanding and reasonable people, my condolences for the cross-fire....
Lemmy.ml is full of tankie creeps, and there's a big debate about defederating from it. One of the big talking points is that ml has a bunch of popular communities. These are alternatives to them.
Pug Jesus summarized it well enough. I didn't think I'd have a stronger stance on it, but I am strongly in favor of defederating. I also have a very strong personal opposition to MLs in general, since I essentially regard them as traitors due to the faction's pattern of conduct.
What? How do you read this as white-washing Hamas? My comment didn't claim Hamas was ever virtuous; in fact, it was entirely silent on whether Hamas is good or not. I thought that was part of the subtext, but if it's not clear: Hamas is bad. The reason Netanyahu chose Hamas as his enemy is that they're evil and hard to cheer for, which makes the genocide easier.
You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)...
I'm a mathematician who's been following this stuff for about a decade or more. It's not just expensive. Generative neural networks cannot reliably evaluate truth values; it will take time to research how to improve AI in this respect. This is a known limitation of the technology. Closely controlling the training data would certainly make the information more accurate, but that won't stop it from hallucinating.
The real answer is that they shouldn't be trying to answer questions using an LLM, especially because they had a decent algorithm already.
This is true, but when we're talking about something that limited, you'll probably get better results with less work by using human-curated answers rather than generating a reply with an LLM.
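A human-curated store can be as simple as a lookup table with a similarity threshold and an honest "no answer" fallback, which is exactly the failure mode an LLM lacks. Here's a minimal sketch in Python; the questions, answers, and threshold are all hypothetical, not anything Google actually uses:

```python
# Minimal sketch: serve human-curated answers, and refuse to answer
# (return None) instead of generating something plausible-but-wrong.
from difflib import SequenceMatcher

# Hypothetical curated question -> answer pairs.
CURATED_ANSWERS = {
    "how do i keep cheese on pizza": "Let the pizza rest a few minutes after baking.",
    "is glue safe to eat": "No. Do not put glue on food.",
}

def best_answer(query: str, threshold: float = 0.6):
    """Return the curated answer whose question best matches the query,
    or None if nothing clears the similarity threshold."""
    q = query.lower().strip()
    best_score, best = 0.0, None
    for question, answer in CURATED_ANSWERS.items():
        score = SequenceMatcher(None, q, question).ratio()
        if score > best_score:
            best_score, best = score, answer
    # Below the threshold we admit ignorance rather than hallucinate.
    return best if best_score >= threshold else None

print(best_answer("How do I keep cheese on pizza?"))
print(best_answer("completely unrelated query"))  # None
```

The design point is the `None` branch: a curated system can say "I don't know," while a generative model will confidently produce an answer either way.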
Are you talking about epistemics in general or alethiology in particular?
Regardless, the deep philosophical concerns aren't really germane to the practical issue of just getting people to stop falling for obvious misinformation, or to stop being wantonly disingenuous to score points in the most consequential game of numbers-go-up.
This is good, and I'm piggybacking so I can add on:
Get a passport, make friends in another country.
Vote to slow them down, yell at anyone who tries to say that it's better to not vote or that Trump would be better for Palestinians.
If you want to engage in electoral reform, you need to start years in advance.
If you live in Texas: vote for Biden and encourage GOP voters not to vote for Trump. If you're in a place with lots of GOP weirdos, try publicly and loudly watching his rallies at 1.5x speed; I've heard it breaks the spell because his cadence gets disrupted. If Texas goes blue, Biden wins, and all the DNC voters suddenly know they can win, which makes future wins immediately more likely.
If all else fails, remember someone might [Comment Cannot Legally Be Finished], which would solve multiple problems.
The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii and looked at 517 programming questions on Stack Overflow that were then fed to ChatGPT....
I mean, AI eventually will take our jobs, and with any luck it'll be a good thing when that happens. Just because ChatGPT v3 (or whatever) isn't up to the task doesn't mean v12 won't be.
Yes, that's exactly the scenario we need to avoid. Automated gay space communism would be ideal, but social democracy might do in a pinch. A sufficiently well-designed tax system coupled with a robust welfare system should make the transition survivable, but the danger with making that our goal is allowing the private firms enough political power that they can reverse the changes.
This is a fallacy. Specifically, I think you're committing the informal fallacy of confusing necessary and sufficient conditions. That is to say, we know that if we can reliably simulate a human brain, then we can make an artificial sophont (true by definition). However, we have no idea what the minimum hardware requirements are for a sufficiently optimized program that runs a sapient mind. Note: I'm setting aside the definition of sapience, because if you ask 2 different people you'll get 20 different answers.
We shouldn't take for granted it's possible.
I'm pulling from a couple decades of philosophy and conservative estimates of the upper limits of what's possible, as well as some decently founded plans for how it's achievable. Suffice it to say, after immersing myself in these discussions for as long as I have, I'm pretty thoroughly convinced that AI is not only possible but likely.
The canonical argument goes something like this: if brains are magic, we cannot say if humanlike AI is possible. If brains are not magic, then we know that natural processes can create sapience. Since natural processes can create sapience, it is extraordinarily unlikely that it will prove impossible to create it artificially.
So with our main premise (AI is possible) cogently established, we need to ask: since it's possible, will it be done, and if not, why? There are a great many advantages to AI, and while there are many risks, the barrier to entry for making progress is shockingly low. We are talking about the potential to create an artificial god, with all the wonders and dangers that implies. It's like a nuclear weapon where you didn't need to source the uranium: everyone wants to have one, and no one wants their enemy to decide what it gets used for. So everyone has an incentive to build it (it's really useful), and everyone has a very powerful disincentive against forbidding the research (there's no way to stop everyone who wants to, so the only people who'd comply are exactly the people who would build a friendly AI). So what possible scenario would mean strong general AI (let alone the simpler systems that'd replace everyone's jobs) never gets developed? The answers range from total societal collapse to extinction, all of which are worse than a bad transition to full automation.
So either AI steals everyone's job or something worse happens.
Russian President Vladimir Putin is ready to halt the war in Ukraine with a negotiated ceasefire that recognises the current battlefield lines, four Russian sources told Reuters, saying he is prepared to fight on if Kyiv and the West do not respond....
Sure, but what you're describing isn't freedom of speech; freedom of speech is the prohibition against the government taking action over the content of the opinions you express. It has nothing to do with what a non-government platform allows or disallows.
A platform that allows Nazis is a Nazi platform, plain and simple.
I realize you're probably a dishonest pos, so this is for the benefit of whoever else reads it.
1.3 million people died from TB in 2022
It's been around for millions of years, but because it's slow it never gets press coverage.
AI bell curve
Neo-Nazis Are All-In on AI ( www.wired.com )
China threatens death penalty for Taiwan independence ‘diehards’ ( www.theguardian.com )
It's called attaining divinity ( sh.itjust.works )
Not everything can be done in constant time, that's O(k) ( sh.itjust.works )
Lemons(?) of Lemmy, what is something that feels so obvious to you that you just get lowkey pissed at the world for not knowing?
GenAI more buzz than biz as tech barely dents jobs ( www.theregister.com )
Casting practice ( lemmy.world )
Kremlin bots spam internet with fake celebrity quotes against Ukraine ( kyivindependent.com )
How a Draconic Bloodline is started ( lemmy.world )
Patrick Breyer and Pirate Party lose EU Parliament seats ( stackdiary.com )
Patrick Breyer, a staunch defender of digital rights, laments the Pirate Party’s exit from the EU Parliament as a blow to online privacy.
EU elections 2024 live: Emmanuel Macron dissolves French parliament and calls snap elections after huge far-right gains ( www.theguardian.com )
Missing mother found dead inside 16-foot-long python after it swallowed her whole in Indonesia ( www.cbsnews.com )
18+ (CW: Suicide reference) It's the honorable thing to do ( lemmy.world )
PSA: Alternatives for the most popular lemmy.ml communities
We're in medieval Europe ( lemmy.world )
No end to Gaza war until ’destruction’ of Hamas, says Netanyahu ( www.theguardian.com )
CEO of Google Says It Has No Solution for Its AI Providing Wildly Incorrect Information ( futurism.com )
Total Recall ( lemmy.world )
cross-posted from: https://lemmy.world/post/15748792...
Americans, what's the plan if Trump wins the election in November? (serious)
ChatGPT Answers Programming Questions Incorrectly 52% of the Time: Study ( gizmodo.com )
Exclusive: Putin wants Ukraine ceasefire on current frontlines ( www.reuters.com )
The US is thinking about letting Ukraine use its weapons to strike Russia, even if it enrages Putin: report ( www.businessinsider.com )
US officials are considering letting Ukraine strike Russia with US weapons, The New York Times reports....
Former Green Bay Packers Quarterback Aaron Rodgers Suggests Religion Is Used To Manipulate People ( wisportsheroics.com )
Telegram Reportedly "Ready to Fight Piracy" According to Govt. Official * TorrentFreak ( torrentfreak.com )