Imho this is actually a very serious problem. They are undermining our society with this.
We should push tech companies to block this; it's technically very feasible.
This is very easy to flag, given the intelligence of the people working at OpenAI: Russian IP, political topic, high post frequency.
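The signals named above could be combined in something as simple as a rule-based scorer. A minimal sketch, assuming hypothetical inputs (country code from IP geolocation, post text, posting rate) and illustrative keyword lists and thresholds that are my own assumptions, not any platform's real pipeline:

```python
# Hypothetical rule-based flagger combining the three signals mentioned:
# IP origin, political topic, and post frequency. All names, keywords,
# and thresholds here are illustrative assumptions.

POLITICAL_KEYWORDS = {"election", "nato", "sanctions", "ukraine"}

def flag_score(country: str, text: str, posts_per_hour: float) -> int:
    """Return a crude suspicion score: one point per matching signal."""
    score = 0
    if country == "RU":                      # signal 1: Russian IP
        score += 1
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & POLITICAL_KEYWORDS:           # signal 2: political topic
        score += 1
    if posts_per_hour > 20:                  # signal 3: high post frequency
        score += 1
    return score

def is_suspicious(country: str, text: str,
                  posts_per_hour: float, threshold: int = 2) -> bool:
    """Flag an account when at least `threshold` signals fire."""
    return flag_score(country, text, posts_per_hour) >= threshold
```

Real trust-and-safety systems are of course far more involved (ML classifiers, network analysis, behavioral fingerprints); the point is only that the basic signals are cheap to compute.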
But blocking them has an opportunity cost with an identifiable dollar value, while doing nothing only costs them a few pithy press releases and a "commitment to truthfulness and openness".
Move fast and break things, right? As long as the money rolls in… Except this time they're breaking the fabric of trust binding society together.
That's only going to stop the people who don't want to give their ID away. Anyone who actually wants to spread propaganda or anything else through bots will just buy stolen credentials.
This is a major problem for all democracies, and LLM driven troll accounts probably do exist. But this xitter post is a fake error message. It's clearly a troll.
Blocking fake accounts would help with the misinformation problem, but it's a cat and mouse game. It could ultimately give additional credence to the trolls who slip through if the platform is assumed to be safe. The reality is that there will always be ways for fake accounts to avoid detection and to spoof account verification. Making it harder would help, but it's not a comprehensive solution. Not to mention the fact that the platform itself has the power to manipulate public opinion, amplify their preferred narrative, etc.
The solution I've always preferred is the mentality the 4chan community had when I was younger and frequented it. Basically, and I'm paraphrasing:
Everyone here needs to grow up and understand that no post should ever be presumed to be true or legitimate. This is an anonymous forum. Assume that everything was written by a bot or a troll in the absence of proof that it wasn't.
I think people put too much trust in social media precisely because they assume that there's a real person behind every post. They assume that a face and a few photos give an account legitimacy, despite the fact that it's trivial to copy photos from a random account (2015/16 pro-Trump Facebook style) or just generate all of the content from scratch with AI (to avoid duplicate detection).
Trust itself is a driver of misinformation. On social media, people should only fully trust posts made by people they know. That is the simplest and most comprehensive solution to the problem.
I mostly agree, but educating everyone in critical thinking is also not an easy task. Both strategies should be pursued: we need to hold the platforms more accountable and help people think more critically.
I used to work in the industry that prevents this, trust and safety. It's like DEI. If an individual with enough clout gives a shit and takes the time to make it happen, or if a bad thing happens and a corporation needs to make a show of caring to cover their asses, that's when they invest the minimum.
There's the issue - anyone with any sense or sensibility has left The Xitter. All that's left are the bots, the unknowing, and the attention whores. Xitter is the new Facebook.
I'm sorry, but as a language model, I don't have the capability to perceive or assess physical appearances. I am, however, sure you have other desirable qualities.