
Leate_Wonceslace

@Leate_Wonceslace@lemmy.dbzer0.com

This profile is from a federated server and may be incomplete.

Leate_Wonceslace ,

As soon as Kurzgesagt said John Green, I knew what the video would be about. John Green has made TB awareness his mission for a couple of years now.

Leate_Wonceslace ,

Do you mean AI in general, just generative models, or LLMs in particular? I'm pretty thoroughly convinced that AI is a general solution to automation, while generative models are only a partial but very powerful solution.

I think the larger issue is actually that displacement from the workforce causes hardship to those who have been displaced. If that were not the case, most people either wouldn't care or would actively celebrate their jobs being lost to automation.

Leate_Wonceslace , (edited )

As someone who has sometimes been accused of being an AI cultist, I agree that it's being pursued far too recklessly, but the people I argue with don't usually give very good arguments about it. Specifically, I kept getting people who argued from the assumption that AI "aren't real minds" and tried to draw moral reasons not to use it based on that. This fails for two reasons: 1. We cannot know whether AI have internal experiences, and 2. A tool being sapient would have more complicated moral dynamics than the alternative. I don't know how much this helps you, but if you didn't know before, you know now.

Edit: y'all're seriously downvoting me for pointing out that a question is unanswerable when it's been known to be such for centuries. Read a fucking philosophy book ffs.

Leate_Wonceslace ,

You're making the implicit assumption that an entity that lacks memory necessarily does not have any internal experience, which is not something that we can know or test for. Furthermore, there's no law of the universe that states that something created by humans cannot have an internal experience; we have no way of knowing whether something we create has an internal experience or not.

You can think of LLMs like a hyper advanced auto correct.

Yes; this is functionally what LLMs are, but the scope of the discussion extends beyond LLMs, and that comparison doesn't address my core complaint about how these arguments are being conducted. Generally, though maybe not universally, if a core premise of your argument is "x works differently than humans," your argument won't be valid. I'm not currently making a claim of substance; I'm critiquing a tactic being used and pointing out that it, among other things, relies on a bad foundation.

If you want another way to make the argument, consider focusing on the practical implications of current and future technologies given current and hypothetical ways of structuring society. For example: the fact that generative AI (being a novel form of automation) making images will lead to the displacement of artists, the fact that art is being used without consent to train these models which are then used for profit, etc.

Leate_Wonceslace , (edited )

Not "by my definitions" by the simple fact that we can't test for it. Technically, no one knows if any other individual has internal experiences or not. I know for a fact that my sensorium provides me data, and if I assume that data is at all accurate, I can be reasonably confident that other entities that look and behave similarly to me exist. However, I can't verify that any of them have internal experiences the way I do. Sure, it's reasonable to expect that, so we can just add that to the pile of assumptions we've been working with so far without much issue. What about other animals, like dogs? They have the same computational substrate, and the same mechanism for making those computations. I think it's reasonable to say animals probably have internal experiences, but I've met multiple people who insist they somehow know they don't, and so animal abuse is a myth. Now if we assume animals have internal experiences, what about nematodes? Nematode brains are simple enough that you can run them on a computer. If animals have internal experiences, does that include nematodes, and if so does that mean the simulated Nematode brain has internal experiences? If a computer's subroutine can have internal experiences, what about the computer?

Do you now understand what I'm saying, and why? Where's the line drawn? As far as I can tell, the only honest answer is to admit ignorance.

Leate_Wonceslace ,

Degenerate, 88, 14, the Roman salute, multiple names, the fasces, shaven heads, lightning motifs, runic symbols.

That's just what I came up with off the top of my head. The other person is right, and I say we should reclaim every symbol, because those fuckers shouldn't be allowed to call anything their own or have anything to rally around or identify each other with. The only symbol I'm aware of that they made themselves is the black sun, which is simply the SS symbol repeated around a circle, and that symbol is itself an appropriated rune.

Reclaim every symbol.

Leate_Wonceslace ,

I believe that's correct, yes.

Leate_Wonceslace ,

Why do you even want those things?

I'm a mathematician. 88 is a number. You think letting them have an entire number isn't damaging?

Leate_Wonceslace ,

Idk, if someone's trying to do a mass shooting, a little intervention could do some good.

Leate_Wonceslace ,

Reality is that which continues to exist when you stop believing in it.

Leate_Wonceslace ,

If everyone were immortal, and no one needed to worry about resources, that would be awesome and pretending otherwise is the biggest cope.

Leate_Wonceslace ,

That's entirely fair.

Leate_Wonceslace ,

No, but I've argued with enough people IRL to know that my opinion isn't exactly common. Most people who agree with me are my fellow transhumanists.

Leate_Wonceslace ,

Does the author think LLMs are Artificial General Intelligence? Because they're definitely not.

AGI is, at minimum, capable of taking input and giving output in any domain that a human can, which no generative neural network is currently capable of. For one thing, generative networks are incapable of reciting facts reliably, which immediately disqualifies them.

Leate_Wonceslace ,

Yes; I misunderstood what the author meant. Ty for letting me know.

Kremlin bots spam internet with fake celebrity quotes against Ukraine ( kyivindependent.com )

Russian bots with a Kremlin disinformation network published 120,000 fake anti-Ukraine quotes falsely attributed to celebrities, including Jennifer Aniston and Scarlett Johansson, in one day, the independent Russian media outlet Agentstvo reported June 15....

Leate_Wonceslace ,

Also: voting is important because it lets you choose your enemy. Progressive liberals and social democrats won't fight against you as hard as conservatives and fascists.

Putting this here because some people might read this and think "Voting doesn't matter."

Leate_Wonceslace ,

Go to your local library and read a book. Any book.

Leate_Wonceslace ,

"Hey guys, plowing this field won't feed you, we should just gather."

Missing mother found dead inside 16-foot-long python after it swallowed her whole in Indonesia ( www.cbsnews.com )

A woman has been found dead inside the belly of a snake after it swallowed her whole in central Indonesia, a local official said Saturday, marking at least the fifth person to be devoured by a python in the country since 2017....

Leate_Wonceslace ,

To my knowledge: they constrict, which could theoretically result in crushing, but typically the prey dies of suffocation or lack of circulation.

I was also under the impression that they weren't able to eat humans because our shoulders are so much wider than our heads, but that's clearly wrong so take that with salt.

Leate_Wonceslace ,

That's a really bizarre way to spell "I know literally nothing about psychology" but you do you.

Leate_Wonceslace ,

Lemmy.ml is full of tankie creeps, and there's a big debate about defederating from it. One of the big talking points is that ml has a bunch of popular communities. These are alternatives to them.

Leate_Wonceslace ,

Pug Jesus summarized it well enough. I didn't think I'd have a stronger stance on it, but I am strongly in favor of defederating. I also have a very strong personal opposition to MLs in general, since I essentially regard them as traitors due to the faction's pattern of conduct.

Leate_Wonceslace ,

Isn't an antimeme something that's impossible to remember?

It's an anti-joke that uses an image macro that is itself a meme.

Leate_Wonceslace ,

In case anyone here is not already aware, the war is causing Hamas. Word is that previously, Hamas was caused by Netanyahu's direct support.

Leate_Wonceslace ,

white washing of Hamas

What? How do you read this as white-washing Hamas? My comment didn't claim Hamas was ever virtuous; in fact, its text was entirely silent on whether Hamas was good or not. I thought that was part of the subtext, but if it's not clear: Hamas is bad. The reason Netanyahu chose Hamas as his enemy is that they're evil and hard to cheer for, which makes the genocide easier.

CEO of Google Says It Has No Solution for Its AI Providing Wildly Incorrect Information ( futurism.com )

You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)...

Leate_Wonceslace ,

it's just expensive

I'm a mathematician who's been following this stuff for a decade or more. It's not just expensive. Generative neural networks cannot reliably evaluate truth values; it will take time to research how to improve AI in this respect. This is a known limitation of the technology. Closely controlling the training data would certainly make the information more accurate, but that won't stop it from hallucinating.

The real answer is that they shouldn't be trying to answer questions using an LLM, especially because they had a decent algorithm already.

Leate_Wonceslace ,

only return results for a specific set of topics.

This is true, but when we're talking about something that limited, you'll probably get better results with less work by using human-curated answers rather than generating a reply with an LLM.

Leate_Wonceslace ,

Are you talking about epistemics in general or alethiology in particular?

Regardless, the deep philosophical concerns aren't really germane to the practical issue of just getting people to stop falling for obvious misinformation, or to stop being wantonly disingenuous to score points in the most consequential game of numbers-go-up.

Leate_Wonceslace ,

This comment made me feel unspeakably old. Like I was some demon from the infinite abyss of time.

Leate_Wonceslace ,

This is good, and I'm piggybacking so I can add on:

Get a passport, make friends in another country.

Vote to slow them down, yell at anyone who tries to say that it's better to not vote or that Trump would be better for Palestinians.

If you want to engage in electoral reform, you need to start years in advance.

If you live in Texas: Vote for Biden and encourage GOP voters not to vote for Trump. If you're in a place with lots of GOP weirdos, try publicly and loudly watching his rallies at 1.5x speed; I've heard it breaks the spell b/c his cadence gets disrupted. If Texas goes blue, Biden wins, and all the DNC voters suddenly know they can win, making it immediately more likely.

If all else fails, remember someone might [Comment Cannot Legally Be Finished], which would solve multiple problems.

ChatGPT Answers Programming Questions Incorrectly 52% of the Time: Study ( gizmodo.com )

The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii and looked at 517 programming questions on Stack Overflow that were then fed to ChatGPT....

Leate_Wonceslace ,

I mean, AI will eventually take our jobs, and with any luck it'll be a good thing when that happens. Just because ChatGPT v3 (or w/e) isn't up to the task doesn't mean v12 won't be.

Leate_Wonceslace ,

Yes, that's exactly the scenario we need to avoid. Automated gay space communism would be ideal, but social democracy might do in a pinch. A sufficiently well-designed tax system coupled with a robust welfare system should make the transition survivable, but the danger with making that our goal is that it allows private firms enough political power to reverse the changes.

Leate_Wonceslace , (edited )

It suggests to me that AI

This is a fallacy. Specifically, I think you're committing the informal fallacy of confusing necessary and sufficient conditions. That is to say, we know that if we can reliably simulate a human brain, then we can make an artificial sophont (this is true by mere definition). However, we have no idea what the minimum hardware requirements are for a sufficiently optimized program that runs a sapient mind. Note: I am setting aside what the definition of sapience is, because if you ask 2 different people you'll get 20 different answers.

We shouldn't take for granted it's possible.

I'm pulling from a couple decades of philosophy and conservative estimates of the upper limits of what's possible, as well as some decently-founded plans for how it's achievable. Suffice it to say, after immersing myself in these discussions for as long as I have, I'm pretty thoroughly convinced that AI is not only possible but likely.

The canonical argument goes something like this: if brains are magic, we cannot say if humanlike AI is possible. If brains are not magic, then we know that natural processes can create sapience. Since natural processes can create sapience, it is extraordinarily unlikely that it will prove impossible to create it artificially.

So with our main premise (AI is possible) cogently established, we need to ask the question: "since it's possible, will it be done, and if not, why?" There are a great many advantages to AI, and while there are many risks, the barrier to entry for making progress is shockingly low. We are talking about the potential to create an artificial god, with all the wonders and dangers that implies. It's like a nuclear weapon if you didn't need to source the uranium; everyone wants to have one, and no one wants their enemy to decide what it gets used for. So everyone has an incentive to build it (it's really useful) and everyone has a very powerful disincentive against forbidding the research (there's no way to stop everyone who wants to build one, so the people who'd listen to a ban are exactly the people who would make an AI that'll probably be friendly). So what possible scenario do we have that would mean strong general AI (let alone the simpler things that'd replace everyone's jobs) never gets developed? The answers range from total societal collapse to extinction, which are all worse than a bad transition to full automation.

So either AI steals everyone's job or something worse happens.

Leate_Wonceslace ,

You're welcome! I'm always happy to learn someone re-evaluated their position in light of new information that I provided. 🙂

Exclusive: Putin wants Ukraine ceasefire on current frontlines ( www.reuters.com )

Russian President Vladimir Putin is ready to halt the war in Ukraine with a negotiated ceasefire that recognises the current battlefield lines, four Russian sources told Reuters, saying he is prepared to fight on if Kyiv and the West do not respond....

Leate_Wonceslace ,

Here's a 100% foolproof way for Russia to liberate Ukraine.

Leate_Wonceslace ,

"Freezing"? This take is so cold that upon uttering it hydrogen condenses into a 0-viscosity liquid.

Leate_Wonceslace ,

Sure, but what you're describing isn't freedom of speech; freedom of speech is the prohibition against the government taking action against you for the content of the opinions you express. It has nothing to do with what a non-government platform allows or disallows.

A platform that allows Nazis is a Nazi platform, plain and simple.

I realize you're probably a dishonest pos, so this is for the benefit of whoever else reads it.

Leate_Wonceslace , (edited )

Hopefully the package containing all the fucks I give doesn't get lost in the mail.

Edit: There's either a bunch of Nazis supporting this fucker or they're using alt accounts.

Leate_Wonceslace ,

💝
