However, Wikipedia editors consider Media Bias/Fact Check "generally unreliable" and recommend against citing it, partly because some see its use as breaking Wikipedia's neutral point of view.
Huh, is that so? It was there last January. It used to follow this paragraph (which is still there today), which contains a similar criticism, with citations:
It is widely used and has sometimes been criticised for its methodology.[4] Scientific studies[5] using its ratings note that ratings from Media Bias/Fact Check show high agreement with an independent fact checking dataset from 2017,[6] with NewsGuard[7] and with BuzzFeed journalists.
So if those are considered fact-based, there's no need to delve further.
Or as Dijkstra put it: "The question of whether a machine can think is about as relevant as the question of whether a submarine can swim."
Alan Turing put it similarly: the question as posed is nonsense. However, if you define "machine" and "thinking", and recast the question as whether machine thinking is distinguishable from human thinking, you can, in theory, answer affirmatively (rough paraphrasing). Though current evidence suggests otherwise (e.g. AI trained on the output of other AI drifts toward nonsense).
Used to know someone who looked for cars around a restaurant, or long lines waiting to get into a tiny cafe, asked wait staff for interesting places they liked to go; went into non-chain stores where locals shopped (off the main streets); asked walkers and service station workers for directions. Always had wild stories about what happened, if you could get past their private nature. Weird fucker, unpredictable, never could get used to'm. Likeable enough, though.
Let's extend this thought experiment a little. Consider just forum posts; the numbers will be somewhat similar for articles and other writings, as well as photos and videos.
How many more posts does a bot create than a human? Being (ridiculously) conservative, we'll say 10x more.
On day one: 10 humans are posting (for simplicity's sake) 10 times a day each, totaling 100 posts. The bot is posting 100 a day. That's 200 posts in total, 50% of them from the bot.
Extending the example: by the end of a year the bots have multiplied to 10. The 10 humans are still posting 100 times a day in total, while the 10 bots are posting 1,000 times a day. Bots are now at roughly 91%, humans 9%.
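Here's a minimal sketch of that arithmetic, assuming (as above) 10 humans at 10 posts a day each, and a bot population that grows from 1 to 10 over the year:

```python
# Toy model of the thought experiment above.
# Assumptions beyond the original post: bot count grows from 1 to 10
# over the year while the human population stays fixed at 10.

HUMANS = 10
POSTS_PER_HUMAN = 10   # posts per human per day
POSTS_PER_BOT = 100    # 10x a single human's output

def bot_share(num_bots: int) -> float:
    """Fraction of all daily posts that come from bots."""
    human_posts = HUMANS * POSTS_PER_HUMAN
    bot_posts = num_bots * POSTS_PER_BOT
    return bot_posts / (human_posts + bot_posts)

print(f"Day one  (1 bot):   {bot_share(1):.0%} bot posts")   # 50%
print(f"Year end (10 bots): {bot_share(10):.0%} bot posts")  # 91%
```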
This statistic can lead you to think human participation on the Internet is difficult to find.
Returning to reality, consider how inhuman the scale of AI bots is: each is probably able to outpost humans by millions or billions of times, under millions of aliases. If you find search engines, articles, forums, reviews, and such bonkers now, just wait a few years. Predicting general chaotic nonsense for the Internet, with very few islands of humanity, is a rational conclusion. Unless bots are stopped.
How Africa’s War on Disinformation Can Save Democracies Everywhere (foreignpolicy.com)
This year marks 30 years since the Rwandan genocide in 1994, when a Hutu-majority government and a privately owned radio station with close ties to the government colluded to murder 800,000 people....
"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded ( www.vox.com )
Stack Overflow and OpenAI Partner (files.mastodon.online)
cross-posted from: https://lemmy.ml/post/15315562...
Why people don't talk about Google Maps' privacy issues (www.youtube.com)
Title is editorialized because the original is, frankly, clickbait garbage
GIMP 2.10.38 Released with Much-Requested Backports of GTK3 Features (9to5linux.com)
Humans share the web equally with bots, report warns amid fears of ‘dead internet’ (www.independent.co.uk)