
frog , (edited )

The other thing that needs to die is hoovering up all data to train AIs without the consent and compensation to the owners of the data. Most of the more frivolous uses of AI would disappear at that point, because they would be non-viable financially.

frog ,

I'm feeling the need to do a social media detox, including Beehaw. Pro-AI techbros are getting me down.

Shockingly, keeping Instagram active. My feed there is nothing but frogs, greyhounds, and art from local artists, and detoxing from stuff that is improving my mood rather than making it worse seems unnecessary.

frog ,

Kind of depressing that the answer to not being replaced by AI is "learn to use it and spend your day fixing its fuckups", like that's somehow a meaningful way to live for someone who previously had an actual creative job.

frog ,

AI is also going to run into a wall because it needs continual updates with more human-made data, but the supply of all that is going to dry up once the humans who create new content have been driven out of business.

It's almost like AIs have been developed and promoted by people who have no ability to think about anything but their profits for the next 12 months.

frog ,

Yep. Life does just seem... permanently enshittified now. I honestly don't see it ever getting better, either. AI will just ensure it carries on.

frog ,

Yep. I used to be an accountant, and that's how trainees learn in that field too. The company I worked at had a fairly even split between clients with manual and computerised records, and trainees always spent the first year or so almost exclusively working on manual records because that was how you learned to recognise when something had gone wrong in the computerised records, which would always look "right" on a first glance.

frog ,

But this is the point: the AIs will always need input from some source or another. Consider using AI to generate search results. Those will need to be updated with new information and knowledge, because an AI that can only answer questions related to things known before 2023 will very quickly become obsolete. So it must be updated. But AIs do not know what is going on in the world. They have no sensory capacity of their own, and so their inputs require data that is ultimately, at some point in the process, created by a human who does have the sensory capacity to observe what is happening in the world and write it down. And if the AI simply takes that writing without compensating the human, then the human will stop writing, because they will have had to get a different job to buy food, rent, etc.

No amount of "we can train AIs on AI-generated content" is going to fix the fundamental problem that the world is not static and AIs don't have the capacity to observe what is changing. They will always be reliant on humans. Taking human input without paying for it disincentivises humans from producing content, and this will eventually create problems for the AI.

frog ,

The scales of the two are nowhere near comparable. A human can't steal and regurgitate so much content that they put millions of other humans out of work.

frog ,

I did not know the exact wording of this guidance, but this is basically the strategy I use. I've always figured that because I prepare for my journeys, I am never in such a rush that I need to put someone else's life at risk in order to pass them quicker - it's not like it's going to make a difference to my day if I arrive at my destination 2 minutes later, but it'll make a huge difference to someone else's day if I rush past a cyclist when it's not safe.

frog ,

I honestly don't get why so many people are so reckless and impatient on the roads. I've seen some people being really fucking stupid around cyclists and motorcyclists. One incident haunts me, because I know someone would have been severely injured, maybe killed, if I hadn't been quick enough to get out of the way of an impatient person overtaking in a stupid place.

And it's just like... why? Just leave home a few minutes earlier!

frog ,

There may not have been any intentional design, but humans are still meant to eat food, drink water, and breathe oxygen, and going against that won't lead to a good end.

frog ,

Just gonna say that I agree with you on this. Humans have evolved over millions of years to emotionally respond to their environment. There's certainly evidence that many of the mental health problems we see today, particularly at the scale we see, are in part due to the fact that we evolved to live in a very different way to our present lifestyles. And that's not about living in cities rather than caves, but more to do with the amount of work we do each day, the availability and accessibility of essential resources, the sense of community and connectedness with small social groups, and so on.

We know that death has been a constant of our existence for as long as life has existed, so it logically follows that dealing with death and grief is something we've evolved to do. Namely, we evolved to grieve for a member of our "tribe", and then move on. We can't let go immediately, because we need to be able to maintain relationships across brief separations, but holding on forever to a relationship that can never be continued would make any creature unable to focus on the needs of the present and future.

AI simulacra of the deceased give the illusion of maintaining the relationship with the deceased. It is entirely possible that this will prolong the grieving process artificially, when the natural cycle of grieving is to eventually reach a point of acceptance. I don't know for sure that's what would happen... but I would want to be absolutely sure it's not going to cause harm before unleashing this AI on the general public, particularly vulnerable people (which grieving people are).

Although I say that about all AI, so maybe I'm biased by the ridiculous ideology that new technologies should be tested and regulated before vulnerable people are experimented on.

frog ,

Sure, you should be free to make one. But when you die and an AI company contacts all your grieving friends and family to offer them access to an AI based on you (for a low, low fee!), there are valid questions about whether that will cause them harm rather than help - and grieving people do not always make the most rational decisions. They can very easily be convinced that interacting with AI-you would be good for them, but it actually prolongs their grief and makes them feel worse. Grieving people are vulnerable, and I don't think AI companies should be free to prey on the vulnerable, which is a very, very realistic outcome of this technology. Because that is what companies do.

So I think you need to ask yourself not whether you should have the right to make an AI version of yourself for those who survive your death... but whether you're comfortable with the very likely outcome that an abusive company will use their memories of you to exploit their grief and prolong their suffering. Do you want to do that to people you care about?

frog ,

Think of how many family recipes could be preserved. Think of the stories that you can be retold in 10 years. Think of the little things that you’d easily forget as time passes.

An AI isn't going to magically know these things, because these aren't AIs based on brain scans preserving the person's entire mind and memories. They can only learn from the data they're given. And fortunately, there's a much cheaper way for someone to preserve family recipes and other memories that their loved ones would like to hold onto: they could write it down, or record a video. No AI needed.

frog ,

I also suspect, based on the accuracy of AIs we have seen so far, that their interpretation of the deceased's personality would not be very accurate, and would likely hallucinate memories or facts about the person, or make them "say" things they never would have said when they were alive. At best it would be very Uncanny Valley, and at worst would be very, very upsetting for the bereaved person.

frog ,

Given the husband is likely going to die in a few weeks, and the wife is likely already grieving for the man she is shortly going to lose, I think that still places both of them into the "vulnerable" category, and the owner of this technology approached them while they were in this vulnerable state. So yes, I have concerns, and the fact that the owner is allegedly a friend of the family (which just means they were the first vulnerable couple he had easy access to, in order to experiment on) doesn't change the fact that there are valid concerns about the exploitation of grief.

With the way AI techbros have been behaving so far, I'm not willing to give any of them the benefit of the doubt about claims of wanting to help rather than make money - such as using a vulnerable couple to experiment on while making a "proof of concept" that can be used to sell this to other vulnerable people.

frog ,

I absolutely, 100% agree with you. Everything I have seen about the development of AI so far has suggested that the vast majority of its uses are grotesque. The few edge cases where it is useful and helpful don't outweigh the massive harm it's doing.

frog ,

Nope, I'm just not giving the benefit of the doubt to the techbro who responded to a dying man's farewell posts online with "hey, come use my untested AI tool!"

frog ,

Yeah, I think you could be right there, actually. My instinct on this from the start is that it would prevent the grieving process from completing properly. There's a thing called the gestalt cycle of experience: a normal, natural mechanism by which a person moves through a new experience, whether it's good or bad, and a lot of unhealthy behaviour patterns stem from a part of that cycle being interrupted. You need to go through the cycle for everything that happens in your life, reaching closure so that you're ready for the next experience to begin (most basic explanation), and when that doesn't happen properly, it creates unhealthy patterns that influence everything that happens after that.

Now I suppose, theoretically, there's a possibility that being able to talk to an AI replication of a loved one might give someone a chance to say things they couldn't say before the person died, which could aid in gaining closure... but we already have methods for doing that, like talking to a photo of them or to their grave, or writing them a letter, etc. Because the AI still creates the sense of the person still being "there", it seems more likely to prevent closure - because that concrete ending is blurred.

Also, your username seems really fitting for this conversation. :)

frog ,

As many a person has said before when an outlandish and unproven claim is made: pics or it didn't happen.

frog ,

Having flicked through a few spots in the video, and being British, my conclusion is this:

Britain has got some major problems, many of which there is a lack of political will to fix, to the point that I could identify the general subject of many sections of this video just by the title on the timestamps. But the video is still pretty rubbish and overly sensationalised, with some of the opinions presented (that smoking bans are bad, that switching to American-style insurance-based healthcare would be a good idea) being just straight-up idiotic.

I would still rather live here than America though. Although Britain has been badly mismanaged by the Conservatives over the last 14 years, it is less polarised than the US, and the electorate as a whole are broadly tolerant and compassionate people who have very little tolerance of or respect for culture wars. The Conservatives insisting on trying to make this election about culture wars is a contributing factor into why their poll ratings are getting worse.

frog ,

UK citizens can also opt out, as the Data Protection Act 2018 is the UK's implementation of GDPR and confers all of the same rights.

In my opt out, I have also reminded them of their obligation to delete data when I do not consent to its use, so since I have denied consent, any of my data that has been used must be scrubbed from the training sets and resulting AI outputs derived from the unauthorised use of my data.

Sadly, having an Instagram account is unavoidable for me. Networking is an important part of many creatives' careers, and if the bulk of your connections are on Instagram, you have to be there too.

frog ,

Techbros once again surprised at how their technology is used.

The other breaking headlines for today:

Shock discovery that water is wet.

Toddler discovers that fire is hot after touching it.

Bear shits in woods.

Pope revealed to be Catholic.

frog ,

Ah, the old "the only way to stop a bad person with a gun is for all the good people to have guns" argument.

Were the dictators even working on their own large language models, or do these tools only exist because OpenAI made one and released it to the public before all the consequences had been considered, thus sparking an arms race where everyone felt the need to jump in on the action? Because as far as I can see, ChatGPT being used to spread disinformation is only a problem because OpenAI were too high on the smell of their own arses to think about whether making ChatGPT publicly available was a good idea.

frog ,

It really is. I'm also not a huge fan of "everyone needs to have access to their own personal open source AI, otherwise only corporations will be able to use it", like somehow the answer to corporations being shit is to give everyone else a greater ability to be shit too. What the world really needs is even more shit!

frog ,

Just don't complain when the world becomes even more shit than it already is. Open source AIs that rely on scraping content without paying the creator are just as exploitative of workers as corporate AIs doing the exact same thing.

frog ,

And probably also that water is hot and fire is wet?

frog ,

The metaphoric argument is exactly on point, though: the answer to "bad actors will use it for evil" is not "so everybody should have unrestricted access to this really dangerous thing." Sorry, but in no situation you can possibly devise is giving everyone access to a dangerous tool the correct answer to bad people having access to it.

frog ,

AI programs are already dominated by bad actors, and always will be. OpenAI and the other corporations are every bit the bad actors as Russia and China. The difference between Putin and most techbros is as narrow as a sheet of paper. Both put themselves before the planet and everyone else living on it. Both are sociopathic narcissists who take, take, take, and rely on the exploitation of those poorer and weaker than themselves in order to hoard wealth and power they don't deserve.

frog ,

Had OpenAI not released ChatGPT, making it available to everyone (including Russia), there are no indications that Russia would have developed their own ChatGPT. Literally nobody has made any suggestion that Russia was within a hair's breadth of inventing AI and so OpenAI had better do it first. But there have been plenty of people making the entirely valid point that OpenAI rushed to release this thing before it was ready and before the consequences had been considered.

So effectively, what OpenAI have done is start handing out guns to everyone, and is now saying "look, all these bad people have guns! The only solution is everyone who doesn't already have a gun should get one right now, preferably from us!"

frog ,

Well, let's see about the evidence, shall we? OpenAI scraped a vast quantity of content from the internet without consent or compensation to the people that created the content, and leaving aside any conversations about whether copyright should exist or not, if your company cannot make a profit without relying on labour you haven't paid for, that's exploitation.

And then, even though it was obvious from the very beginning that AI could very easily be used for nefarious purposes, they released it to the general public with guardrails that were incredibly flimsy and easily circumvented.

This is a technology that required being handled with care. Instead, its lead proponents are of the "move fast and break things" mentality, when the list of things that can be broken is vast and includes millions of very real human beings.

You know who else thinks humans are basically disposable as long as he gets what he wants? Putin.

So yeah, the people running OpenAI and all the other AI companies are no better than Putin. None of them care who gets hurt as long as they get what they want.

frog ,

I wonder how much of this stuff may still be around on harddrives somewhere.

Probably quite a lot!

Just as an example, I'm a part of an art and writing focused community that's been around off-and-on since the late 90s. Typically each member has/had their own website. So a few years ago when we went from an "off" phase back to "on" again, a major project became reconstructing the stuff that used to be on Geocities, the various smaller platforms of the 90s and 00s, and ISP-provided webhosting. And obviously it's hard to judge how much stuff we don't remember and therefore don't know we're missing, but well over half of what we have reconstructed has come from "I found my external hard drive from 2006 and it had X, Y and Z on it!" I personally had ~3000 files sitting on my NAS, which I had moved off my own hard drive at some point, but had been unwilling to delete, so I just dumped it into long-term storage. Four years into the reconstruction project, we still occasionally find files we thought were lost forever, usually when someone's found an old hard drive in a box in their attic/basement. The found content was often created by someone else, but downloaded and archived by the hard drive owner.

Although this is representative of just one community, given how apparently common it was for people to download offline copies of websites they liked, it could well be that large swathes of the old internet are sitting on people's hard drives, waiting to be rediscovered.

frog ,

I live next to a railway line in the south west that is similar. A single train runs up and down the line. If you're on one of the stations, you wave to the train so it'll stop for you. If you're on the train and want to get off, you ask the driver to stop.

frog ,

They're definitely trains. I live next to a similar one. It is physically a train, with exactly the same hardware as trains on busier lines (though typically only hauling 1-2 carriages instead of 4+). It's just more fuel-efficient for a train to keep going through a station if nobody is getting on or off, so when passenger numbers are low, the practice is to let the driver know if you need on or off.

frog ,

It seems to be a feature that many techbros and financebros share: they all have incredibly punchable faces.

frog ,

I'm very glad that my definitely-100%-legit copy of Windows 10 seems to have no idea how to upgrade to 11. It still gets other updates, and my hardware is definitely compatible; the thought of upgrading to 11 just never seems to enter its mind. I suspect I'll be sticking with Windows 10 for a long, long time, until either Microsoft give up on this ridiculous idea in response to customer backlash, or Linux becomes a viable option for my usecase (Nvidia GPU, lots of proprietary software that I need to use for university and future career). It wouldn't be the first time I've held onto an older version of Windows for a protracted period of time, skipping a dreadful iteration or two, and then upgrading when Microsoft have learned their lesson.

frog ,

I've been a late adopter of every version of Windows I've ever used - and I skipped 8 too, switching to 10 around the same time you did because my software required it. It does seem the best way to avoid most of the problems: Microsoft has moved on to pulling its old tricks on the newest version, and there are more tools for modifying the old version. So I figure I'll switch to 11 or 12 when Microsoft is doing awful things with 13.

frog ,

I am definitely happy to be friends. :)

frog ,

Hello friend! It's lovely to meet you! :)

frog ,

Not too bad. How are you?

frog ,

Yeah, I do feel this reality has been severely missold. There should be an inquiry.

frog ,

Group project is due tomorrow, including the presentation of the completed animation to the client. After one person on the team (who has been thoroughly documented in these threads over the last six months) got caught lying about how much of his sequence he had done, he was given an ultimatum: a hard deadline that passed fifteen minutes ago, and if he failed to meet it, someone else is doing his scene and his name is getting taken out of the credits. We could justify this as he hasn't contributed significantly to any other part of the project.

He failed to meet the deadline.

I would like to note at this point that his scene is two shots totalling about 15 seconds. My scene was eight shots totalling 45 seconds and I was done last Friday.

We have another assignment due at midnight tonight, which I sensibly/foolishly completed and handed in on Friday. Since everybody else is finishing that assignment this evening, I am the only one with any time available to animate and render this scene, so I get to rig and animate the final scene of our animation. That's why we can't just cut the scene and work around it: the story would not have a conclusion without it. In retrospect we probably should never have trusted him with it, but it's not like there was anything else that was short and simple he could have done.

I am very angry with this guy, and I'm not convinced I'll be able to hold my tongue if he turns up for the presentation tomorrow.

frog ,

Thank you. :)

We did, amazingly, get it done on time, and the end results were pretty amazing. For first year student work, anyway.

frog ,

I mean, I'd like to be surprised that a technology driven by a techbro with the "move fast and break things" mentality has broken because of moving too quickly into human trials, but....

I guess we should just count ourselves lucky that the poor human test subject patient wasn't permanently harmed by Musk's raging arrogance.

frog ,

Yeah, I'm surprised as well. I assume it's a reflection of how weakened regulators have become that no one was able to say "no" to Musk.

frog ,

Railways and public transport are grouped under infrastructure because even if climate change was not an issue, public transport is infrastructure that's good for people and the economy. There's plenty of statistics to support the idea that good public transport infrastructure has a wide range of benefits, including improved economic growth, that pre-dates climate change by decades, and will still be the case long after climate change is fixed. The Victorians didn't build railway lines all over Europe because trains are better for the climate than cars. :)

frog ,

It probably helped that Susan Hall is deranged, and the UK as a whole prefers its politicians as close to the centre as possible, rejecting extremes on both ends.

frog ,

Final week on the final group project of the academic year. Deadline is Monday. And I am fucking pissed off.

  • Team leader and sub-team leader for the production phase of the project are incapable of providing leadership, because the former is lovely but timid, and the latter is just never fucking there. With just days to go and important decisions and instructions just not happening, I have simply taken over and started telling everyone what to do. But this now means that on top of my own work, everyone is now coming to me with questions, including the team leader and sub-team leaders.

  • The useless, obstructive, narcissistic, lazy, arrogant piece of utter shite who I had to work with on the last project. Well, it transpires he has basically done absolutely fucking nothing on this project since January, apart from 3D modelling half of a rock (someone else finished the rock) and modelling 80% of one character (it's shit and the texture job is half-arsed). But this week he actually had to do something, which was building one set and rigging one character. I got a phone call at 8:30am this morning from the person who had to animate that one scene, and... yeah, surprise surprise, it's only half done. Lighting, cameras, and rigging are not done. I hope the guy who has to clean up this mess calms down by Monday, otherwise there's going to be a murder.

  • After spending all day yesterday rendering shots, it turns out I rendered them at the wrong resolution. The resolution wasn't included in the assignment brief, so I made a judgement call based on the previous project, because we were unable to get a response from the teacher when we contacted them to ask. Nope, wrong resolution. So everything that was rendered yesterday needs to be rendered again in a different resolution and format, which takes twice as long: shots that took 2.5 hours yesterday require 5.5 hours today. So while I set up the remaining shots today, I've got both my laptop and my spouse's laptop re-rendering all of yesterday's work. My desk is a chaotic collection of three computers, six screens, three keyboards, two mice, and a specialist 3D mouse.

Yeah, I am extremely fucking pissed off and if my teammate opts for murder I might just join him, because right now an awful lot of people are looking incredibly stabbable. I hate group projects.
