• Perspectivist@feddit.uk · 7 days ago

    I don’t see any reason to assume humans are anywhere near the far end of the intelligence spectrum. We already have narrow-intelligence systems that are superhuman in specific domains. I don’t think comparing intelligence to something like a wheel is fair - there are clear geometric limits to how round a wheel can be, but I’ve yet to hear any comparable explanation for why similar limits should exist for intelligence. It doesn’t need to be infinitely intelligent either - just significantly more so than we are.

    Also, as I said earlier - unless some other catastrophe destroys us before we get there. That doesn’t conflict with what I said, nor does it give me any peace of mind. It’s simply my personal view that AGI or ASI is the number one existential risk we face.

    • NuraShiny [any]@hexbear.net · 6 days ago

      Okay, granted. But if we are on the stupid side of the equation, why would we be able to make something smarter than us? One does not follow from the other.

      I also disagree that we have made anything that is actually intelligent. A computer can do math billions of times faster than a human can, but doing math is not smarts. Without human intervention and human input, the computer would just idle and do nothing. That is not intelligence. At no point has code shown the ability to self-improve and grow, and the current brand of shitAI is no different. They call what they do to it training, but it’s really just telling it how to weight the reams of data it’s eating, and without humans it would not do even that.
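
      To put that “weighting the data” point in concrete terms, here’s a toy sketch - one made-up weight, three made-up data points, plain Python, nothing like a real LLM - of what a “training” step amounts to. Both the data and the loop come from a human; take either away and the weight never changes.

      ```python
      # Toy illustration only: "training" as a human-written loop that nudges
      # a weight until it fits human-supplied data.
      data = [(1, 2), (2, 4), (3, 6)]   # made-up (x, y) pairs - the "reams of data"
      w = 0.0                           # the single weight being adjusted

      for _ in range(200):              # the loop people call "training"
          for x, y in data:
              error = w * x - y         # how wrong the current weight is on this example
              w -= 0.01 * error * x     # nudge the weight to shrink that error

      print(w)                          # ends up near 2.0, the pattern hidden in the data
      ```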

      Ravens and octopuses can solve quite complex puzzles. Are they intelligent? What even is the cutoff for intelligence? We don’t even have a good definition of intelligence that encompasses everything. People cite IQ, which is obviously bunk. People try to section it into several types of intelligence - social, logical, and so on. If we don’t even know what the objective definition of intelligence is, I am not worried about us creating it from whole cloth.

      • Perspectivist@feddit.uk · 6 days ago

        AlphaGo Zero became better than any human at playing Go purely by playing against itself - there was no “dumb human” teaching it. The same self-play approach later produced superhuman play in chess and shogi as well. You can, of course, always move the goalposts by redefining what intelligence even means, but when I use the term, I’m referring to the ability to acquire, understand, and use knowledge.
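
        Self-play is easy to show at toy scale. The sketch below is hypothetical code, nothing to do with AlphaGo’s actual architecture - just simple value learning on the old “21 counters” game (take 1-3 counters per turn, whoever takes the last one wins). It starts with no strategy at all and, purely by playing both sides against itself and rewarding whichever side took the last counter, it rediscovers the known optimal strategy of always leaving the opponent a multiple of four.

        ```python
        import random

        N = 21                      # counters at the start of each game
        MOVES = (1, 2, 3)           # a move removes 1, 2 or 3 counters
        Q = {}                      # (counters_left, move) -> learned value of that move
        ALPHA, EPSILON = 0.1, 0.1   # learning rate, exploration rate

        def choose(pile, greedy=False):
            """Pick a legal move: usually the best known one, sometimes a random one."""
            legal = [m for m in MOVES if m <= pile]
            if not greedy and random.random() < EPSILON:
                return random.choice(legal)
            return max(legal, key=lambda m: Q.get((pile, m), 0.0))

        def train(games=100_000):
            for _ in range(games):
                pile, history = N, []
                while pile > 0:                   # the program plays both sides
                    move = choose(pile)
                    history.append((pile, move))
                    pile -= move
                reward = 1.0                      # whoever moved last won
                for seen_pile, move in reversed(history):
                    key = (seen_pile, move)
                    Q[key] = Q.get(key, 0.0) + ALPHA * (reward - Q.get(key, 0.0))
                    reward = -reward              # alternate win/loss back through the game

        train()
        # No strategy was supplied, yet the greedy policy converges on the known one:
        # from these positions it always leaves the opponent a multiple of 4.
        print({pile: choose(pile, greedy=True) for pile in (5, 6, 7, 9, 10, 11)})
        ```

        Whether that counts as “intelligence” is exactly the disagreement above, but the strategy it ends up with was never handed to it by a human.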

        By that definition, a chess bot is intelligent - it knows the rules of chess, it can observe the pieces on the board, think ahead, and make decisions. It’s not generally intelligent, but within its domain, it’s a genius. The same applies to LLMs. The issue isn’t that they’re bad; it’s that they’re not what people thought they would be. When an average person hears “AI,” they picture HAL 9000, Samantha, or Jarvis - but those are AGI systems. LLMs are not. They’re narrow-intelligence systems designed to produce natural-sounding language, and at that, they’re exceptionally good.
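
        “Think ahead and make decisions” also fits in a few lines for a toy game. A real chess engine searches an astronomically larger tree with pruning and evaluation heuristics, but the core idea is the exhaustive look-ahead sketched below - hypothetical code, reusing the same 21-counter game: assume the opponent always makes their best reply, then pick the move whose outcome is best for you.

        ```python
        from functools import lru_cache

        MOVES = (1, 2, 3)   # remove 1, 2 or 3 counters; whoever takes the last one wins

        @lru_cache(maxsize=None)
        def outcome(pile):
            """+1 if the player to move can force a win from this position, else -1."""
            if pile == 0:
                return -1                     # no counters left: the previous player won
            return max(-outcome(pile - m) for m in MOVES if m <= pile)

        def best_move(pile):
            """Look ahead through every continuation and pick the strongest move."""
            legal = (m for m in MOVES if m <= pile)
            return max(legal, key=lambda m: -outcome(pile - m))

        print(best_move(21))   # 1 - leaves the opponent 20, a multiple of 4, a forced loss
        ```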

        The fact that LLMs also often get things right is a byproduct of being trained on a huge amount of correct information - not something they were designed to do. If anything, the fact that a language bot can also give accurate answers this often should make people more worried, not less. That’s like a chess bot also turning out to be kind of good at conversation.