• Perspectivist@feddit.uk
    6 days ago

    I’ve been worried about this since around 2016 - long before I’d ever heard of LLMs or Sam Altman. The way I see it, intelligence is just information processing done in a certain way. We already have narrowly intelligent AI systems performing tasks we used to consider uniquely human - playing chess, driving cars, generating natural-sounding language. What we don’t yet have is a system that can do all of those things.

    And the thing is, the system I’m worried about wouldn’t even need to be vastly more intelligent than us. A “human-level” AGI would already be able to process information so much faster than we can that it would effectively be superintelligent. I think that at the very least, even if someone doubts the feasibility of developing such a system, they should still be able to see how dangerous it would be if we actually did stumble upon it - however unlikely that might seem. That’s what I’m worried about.

    • silasmariner@programming.dev
      6 days ago

      Yeah, see, I don’t agree with that base premise - that it’s as simple as information processing. I think sentience - and, therefore, intelligence - is a more holistic process that requires many more tightly coupled external feedback loops, and an embedding of those processes in a way that makes the processing analogous to the world being modelled. But who can say, eh?

      • Perspectivist@feddit.uk
        6 days ago

        It’s not obvious to me that sentience has to come along for the ride. It’s perfectly conceivable that there’s nothing it’s like to be a superintelligent AGI system. What I’ve been talking about this whole time is intelligence — not sentience, or what I’d call consciousness.