dustyData ,

The Turing test isn't actually meant to be a scientific or accurate test. It was proposed as a thought experiment to demonstrate a philosophical argument, mainly in support of the machine input-output paradigm and the black-box construct. It wasn't meant to say anything about humans either. Running this kind of experiment without any sort of self-awareness is just proof that epistemology is a weak topic in computer science academia.

Especially since, from psychology, we know that there's much more complexity riding on such tests. To name one example, we know expectations alter perception. A Turing test suffers from a loaded-question problem: if you prime a person by telling them they'll talk with a human, or with a computer program, or announce beforehand that they'll have to decide whether they're talking with a human or not, and so on through all possible combinations, you'll get different results each time.

Also, this is not the first chatbot to pass the Turing test. Technically speaking, if even one human is fooled by a chatbot into thinking they're talking with a person, then it has passed the Turing test. That is the extent to which the argument was originally elaborated; anything beyond that is an alteration added to the central argument to serve the authors' own interests. But this is OpenAI; they're all about marketing and fuck all about the science.

EDIT: Just finished reading the paper. Holy shit! They wrote this: “Turing originally envisioned the imitation game as a measure of intelligence” (p. 6, Jones & Bergen), and that is factually wrong. That is a lie. “A variety of objections have been raised to this idea.” Yeah, no shit, Sherlock; maybe because he never said such a thing, and there's absolutely no one and nothing you can quote to support such an outrageous claim. This shit shouldn't ever see publication; it should not pass peer review. Turing never said such a thing.

NutWrench ,

Each conversation lasted a total of five minutes. According to the paper, which was published in May, the participants judged GPT-4 to be human a shocking 54 percent of the time. Because of this, the researchers claim that the large language model has indeed passed the Turing test.

That's no better than flipping a coin, and we have no idea what the questions were. This is clickbait.

Hackworth ,

On the other hand, the human participant scored 67 percent, while GPT-3.5 scored 50 percent, and ELIZA, which was pre-programmed with responses and didn’t have an LLM to power it, was judged to be human just 22 percent of the time.

54% to 67% is the relevant gap, not 54% to 100%.

NutWrench ,

The whole point of the Turing test is that you should be unable to tell whether you're interacting with a human or a machine. Not 54% of the time. Not 60% of the time. 100% of the time. Consistently.

They're changing the conditions of the Turing test to promote an AI model that would get an "F" on any school test.

bob_omb_battlefield ,

But you have to select whether it was human or not, right? So if you can't tell, you'd expect 50%. That's different from "I can tell, and I know this is a human" but being wrong...

Now that we know the bots are so good, I'm not sure how people will decide how to answer these tests. They're going to encounter something that seems human-like and then essentially try to guess based on minor clues... so there will be inherent randomness.

If something were a really crappy bot, it wouldn't ever fool anyone and the result would be 0%.
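To make that concrete, here's a toy simulation (my own sketch, not the paper's methodology):

```python
import random

# Toy model of the point above: every witness is a bot. An interrogator who
# cannot tell at all flips a mental coin, so the bot gets labeled "human"
# about half the time; a giveaway bot that everyone spots lands at 0%.
trials = 100_000
judged_human = sum(random.random() < 0.5 for _ in range(trials)) / trials
print(f"undetectable bot judged human: {judged_human:.1%}")  # ~50.0%
print("obvious bot judged human: 0.0%")                      # nobody is fooled
```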

tourist ,

The participants judged GPT-4 to be human a shocking 54 percent of the time.

ELIZA, which was pre-programmed with responses and didn’t have an LLM to power it, was judged to be human just 22 percent of the time

Okay, 22% is ridiculously high for ELIZA. I feel like any half-sober adult could clock it as a bot by the third response, if not immediately.

Try talking to the thing: https://web.njit.edu/~ronkowit/eliza.html

I refuse to believe that 22% didn't misunderstand the task or something.
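For context on why it's so easy to clock, here's a stripped-down sketch of the ELIZA idea: keyword patterns mapped to canned reflection templates, with a stock deflection when nothing matches. (Illustrative only; Weizenbaum's 1966 script had a much larger rule set plus pronoun swapping.)

```python
import re

# Each rule pairs a keyword pattern with a canned reflection template.
RULES = [
    (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI (?:want|need) (.*)", re.I), "What would it mean to you to get {0}?"),
    (re.compile(r"\b(mother|father|family)\b", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return FALLBACK  # no keyword matched: deflect

print(eliza_reply("I am feeling stuck"))   # -> Why do you say you are feeling stuck?
print(eliza_reply("Hello! How are you?"))  # -> Please go on.
```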

CaptainBasculin ,

This is the same bot. There's no way this passed the test.

https://lemmy.ml/pictrs/image/ce2fa109-b52a-48ea-8c46-f2f1d92f3b96.jpeg

catastrophicblues ,

Forget psychotherapy, it can barely understand anything:

> Hello, I am Eliza. I'll be your therapist today. 
* Hello! How are you?
> Would you prefer if I were not ? 
* Not what?
> Have you asked such questions before? 
* No
> You are being a bit negative and that won't help us move forward. 
* What are you smoking?
> Perhaps in your fantasies I am smoking?

Yeah, it took me one message lol

HerzogVonWiesel ,

ITT: nobody understands what the Turing Test really is

webghost0101 ,

To clarify:

People seem to legit think the jury talks to the bot in real time and can ask about literally whatever they want.

It's rather insulting to the scientists who put a lot of thought into organizing a controlled environment to properly test defined criteria.

technocrit ,

It's rather insulting to the scientists who put a lot of thought into organizing a controlled environment to properly test defined criteria.

lmao. These "scientists" are frauds. 500 people is not a legit sample size. 5 minutes is a pathetic amount of time. 54% is basically the same as guessing. And most importantly, the "Turing test" is not a scientific test that can be "passed" with one weak study.

Instead of bootlicking "scientists", we should be harshly criticizing the overwhelming tide of bad science and pseudo-science.

kogasa ,

I don't think the methodology is the issue with this one. 500 people can absolutely be a legitimate sample size. Under basic assumptions about the sample being representative and the effect size being sufficiently large, you do not need more than a couple hundred participants to make statistically significant observations. 54% being close to 50% doesn't mean the result is inconclusive. With an ideal sample, it means people couldn't reliably differentiate the human from the bot, which is presumably what the researchers believed is of interest.
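A quick back-of-the-envelope check (the counts here are hypothetical, since the paper reports rates rather than raw judgments per condition): with ~500 judgments, a 54% "human" rate has a 95% confidence interval that straddles the 50% chance line, i.e. indistinguishable from chance, while ELIZA's 22% sits far below it.

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% normal-approximation confidence interval for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Hypothetical counts for illustration:
print(wald_ci(270, 500))  # GPT-4 at 54%: ~(0.496, 0.584) -> includes 0.50
print(wald_ci(110, 500))  # ELIZA at 22%: ~(0.184, 0.256) -> far below 0.50
```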

NeoNachtwaechter ,

Turing test? LMAO.

I simply asked it to recommend a supermarket in the next bigger city here.

It came up with a name and listed a few of its qualities. Easy, I thought. Then I found out that the name doesn't exist. It was all made up.

You could argue that humans lie, too. But only when they have a reason to lie.

Chozo ,

The Turing test doesn't factor for accuracy.

Lmaydev ,

That's not what LLMs are for. That's like hammering a screw and being irritated it didn't twist in nicely.

The Turing test is designed to see if an AI can pass for human in a conversation.

NeoNachtwaechter ,

The Turing test is designed to see if an AI can pass for human in a conversation.

I'm pretty sure that I could ask a human that question in a normal conversation.

The idea of the Turing test was to have a way of telling humans and computers apart. It is NOT meant for putting some kind of 'certified' badge on that computer, and ...

That's not what LLMs are for.

...and you can't cry 'foul' if I decide to use a question for which your computer was not programmed :-)

Lmaydev ,

It wasn't programmed for any questions. It was trained hehe

phoneymouse ,

Easy, just ask it something a human wouldn’t be able to do, like “Write an essay on The Cultural Significance of Ogham Stones in Early Medieval Ireland“ and watch it spit out an essay faster than any human reasonably could.

webghost0101 ,

The Turing test isn't an arena where anything goes; most renditions have a strict set of rules on how questions must be asked and what they can be about. Pretty sure the response times also have a fixed delay.

Scientists ain't stupid. The Turing test has been passed so many times that news stopped covering it (till this clickbait, of course). The test has simply been made more difficult and cheat-proof as a result.

technocrit ,

most renditions have a strict set of rules on how questions must be asked and what they can be about. Pretty sure the response times also have a fixed delay. Scientists ain't stupid. The Turing test has been passed so many times that news stopped covering it.

Yes, "scientists" aren't stupid enough to fail their own test. I'm sure it's super easy to "pass" the "turing test" when you control the questions and time.

TheBigBrother ,

Oh no!! The AImageddon is closer every day... Skynet is coming for us!!
