voracitude,

In no way true AI

I'm not so sure about that. One of my friends has really high-end hardware and is experimenting with a Llama 3 120B model. It isn't "right" much more often than the 70B models, but it will sometimes spot a wrong answer caused by an error in its lower-level reasoning: it recognises there's a flaw somewhere even as it repeatedly fails to generate the correct answer, even lamenting that it keeps getting it wrong.

This of course makes sense when you think about the flow - it has an output check built in, meaning there are multiple layers at which it's "solving" the problem before synthesising the outputs from each layer into a cohesive natural-language response.
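To make that flow concrete, here is a minimal toy sketch of a generate-then-check loop. Everything in it is a stand-in: `solve` and `check` are hypothetical placeholders for a model's lower-level reasoning and its output check, not any real model API.

```python
# Toy sketch of a generate-then-check loop. The solver and checker are
# deliberately simplistic stand-ins, not a real LLM interface.

def solve(question: str, attempt: int) -> str:
    # Stand-in "lower-level reasoning": wrong on early attempts.
    answers = ["41", "43", "42"]
    return answers[min(attempt, len(answers) - 1)]

def check(question: str, answer: str) -> bool:
    # Stand-in output check: can tell an answer looks wrong even
    # without being able to produce the right one directly.
    return answer == "42"

def answer_with_self_check(question: str, max_attempts: int = 5) -> str:
    candidate = ""
    for attempt in range(max_attempts):
        candidate = solve(question, attempt)
        if check(question, candidate):
            return candidate
        # The "lamenting" step: the flaw is recognised, so we retry.
        print(f"Attempt {attempt + 1}: {candidate!r} fails the check, retrying...")
    return candidate

print(answer_with_self_check("What is 6 * 7?"))
```

The point of the sketch is only the shape of the loop: the checker can reject an output it cannot itself correct, which is exactly the behaviour described above.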

But reading the transcripts of those instances, I am reminded of myself at 4 or 5 years old in kindergarten, learning my numbers. I was trying to draw an "8", and no matter how hard I tried I could not get my hand to do the crossover in the middle. I had a page full of "0"s. I remember this vividly because I was so angry and upset with myself: I could see my output was wrong, and I couldn't understand why I couldn't get it right. Eventually, my teacher had to guide my hand, and then, knowing what it "felt" like to draw an 8, I could reproduce it by reproducing the sensation of the mechanical movement of drawing one.

So, it seems to me those "sparks" of AGI are getting just a little brighter.
