• Perspectivist@feddit.uk · 6 days ago

AlphaGo became better than humans at playing Go by playing against itself - there was no “dumb human” teaching it the right moves. The broader point holds for deep learning in general: these systems acquire their capabilities from data and experience rather than from hand-coded rules. You can, of course, always move the goalposts by redefining what intelligence even means, but when I use the term, I’m referring to the ability to acquire, understand, and use knowledge.
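
To make the self-play idea concrete, here’s a toy sketch in Python: tabular value learning on tic-tac-toe, loosely in the spirit of the classic Sutton & Barto example. Everything in it - the epsilon-greedy exploration, the learning rate, the value-table setup - is an illustrative choice of mine, not how AlphaGo itself was built (AlphaGo combined self-play with deep networks and tree search):

```python
import random
from collections import defaultdict

# Tic-tac-toe self-play: the agent improves purely by playing itself and
# nudging a value table toward observed outcomes. No human games involved.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

# values[state] = estimated final result for X (1 win, 0.5 draw, 0 loss).
values = defaultdict(lambda: 0.5)

def play_one_game(epsilon=0.2, alpha=0.1):
    board, history, player = ["."] * 9, [], "X"
    while True:
        legal = moves(board)
        if random.random() < epsilon:          # explore
            move = random.choice(legal)
        else:                                  # exploit the value table
            def score(m):
                nxt = board.copy()
                nxt[m] = player
                return values["".join(nxt)]
            pick = max if player == "X" else min
            move = pick(legal, key=score)
        board[move] = player
        history.append("".join(board))
        w = winner(board)
        if w or not moves(board):
            outcome = 1.0 if w == "X" else (0.0 if w == "O" else 0.5)
            # Monte Carlo update: pull every visited state toward the result.
            for state in history:
                values[state] += alpha * (outcome - values[state])
            return outcome
        player = "O" if player == "X" else "X"

if __name__ == "__main__":
    results = [play_one_game() for _ in range(20000)]
    avg = sum(results[-1000:]) / 1000
    print(f"average result for X over the last 1000 games: {avg:.2f}")
```

After tens of thousands of games the table should play passably for both sides - knowledge acquired entirely from its own experience, which is the point of the AlphaGo comparison.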

By that definition, a chess bot is intelligent - it knows the rules of chess, can observe the pieces on the board, think ahead, and make decisions. It’s not generally intelligent, but within its domain, it’s a genius. The same applies to LLMs. The issue isn’t that they’re bad; it’s that they’re not what people thought they would be. When an average person hears “AI,” they picture HAL 9000, Samantha, or Jarvis - but those are AGI systems. LLMs are not. They’re narrow-intelligence systems designed to produce natural-sounding language, and at that, they’re exceptionally good.
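
The “think ahead” part is easy to make concrete too. Below is a minimal sketch of the lookahead in a chess bot: depth-limited minimax over legal moves, with a bare material count as the evaluation. It uses the python-chess library (pip install chess) for the rules; the 2-ply depth and the piece values are arbitrary illustration choices, not how any real engine is tuned:

```python
# Depth-limited minimax with a raw material count. Real engines add pruning,
# quiescence search, and far richer evaluation (note, e.g., that checkmate
# isn't scored specially here).
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Material balance from White's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def minimax(board: chess.Board, depth: int) -> int:
    """White maximizes, Black minimizes, looking `depth` plies ahead."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    scores = []
    for move in board.legal_moves:
        board.push(move)
        scores.append(minimax(board, depth - 1))
        board.pop()
    return max(scores) if board.turn == chess.WHITE else min(scores)

def score_after(board: chess.Board, move: chess.Move, depth: int) -> int:
    """Minimax value of the position reached by playing `move`."""
    board.push(move)
    score = minimax(board, depth - 1)
    board.pop()
    return score

def best_move(board: chess.Board, depth: int = 2) -> chess.Move:
    """Pick the legal move whose subtree looks best for the side to move."""
    pick = max if board.turn == chess.WHITE else min
    scored = [(score_after(board, m, depth), m) for m in board.legal_moves]
    return pick(scored, key=lambda pair: pair[0])[1]

if __name__ == "__main__":
    print(best_move(chess.Board(), depth=2))
```

Even this crude searcher “makes decisions” in exactly the sense above: it observes the position, simulates possible futures, and picks the move whose consequences look best.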

The fact that they also often get things right is a byproduct of being trained on a huge amount of largely correct text - it’s not the task they were designed for. If anything, the fact that a language bot can also give accurate answers this often should make people more worried, not less. That’s like a chess bot also turning out to be pretty good at conversation.