“Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently,” the IDAIS statement continues. “Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity.”

  • hendrik@palaver.p3x.de · 3 months ago (edited)

    You’re right. They’re more than stochastic parrots, and some people here don’t realize that. They can do a lot of things. But as is, they lack any substantial internal state, and hence things like consciousness, the ability to learn while in operation, and a body. So while AI content can harm people and society, we’re still far away from the robot apocalypse.

    • DarkCloud@lemmy.world · 3 months ago (edited)

      They’re more than stochastic parrots. And some people here don’t realize that. They can do a lot of things.

      They can only do what they’re trained to do. There have been no proven new capabilities that aren’t already present in the training data. Many of the “novel functions”, such as discovering they can speak other languages, exist because that data was online already; it was in the scraped information they were trained on.

      So whilst no doubt they’re a technology that will be applied to many data sets, they will always rely on those data sets to produce content/outputs. Otherwise they would no longer be LLMs; they’d be augmented. So far no augmentation written in their code produces intelligence…
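      The claim that every output token must already exist in the training data can be illustrated with the simplest possible "stochastic parrot": a toy bigram model. This is a hypothetical sketch of the idea, not how real LLMs work (they learn sub-word representations rather than lookup tables), but by construction it can only ever recombine words it was trained on:

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": a bigram (word-pair) model.
# Every word it can emit must appear in its training text;
# it recombines tokens, but never invents new ones.
training_text = "the cat sat on the mat the dog sat on the rug"
words = training_text.split()

model = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    model[prev].append(nxt)

def generate(start, length, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        nexts = model[out[-1]]
        if not nexts:  # dead end: word only appears at the end of the corpus
            break
        out.append(random.choice(nexts))
    return " ".join(out)

sample = generate("the", 5)
# Every generated word is guaranteed to come from the training data:
assert set(sample.split()) <= set(words)
print(sample)
```

      Real LLMs differ in scale and mechanism, but the dependence on the training distribution that the comment describes is the same in kind.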

      …to go further: we have never had a means of “coding” something into sentience, and likely never will. Sentience from semantics is a pipe dream (akin to sigils or magic enchanted rituals/symbols). We need more than semantic models/theories.

      Some people just wish to argue from faith in future possibilities, rather than what’s currently possible/happening.

      • Zexks@lemmy.world · 3 months ago

        You can only do what you’re trained to do as well. The only difference is you get to continue to exist after you’ve completed whatever task you were assigned at the moment. I still remember people incapable of seeing any future in the web. That is the kind of mentality pervading this space. But as with most things in tech, and programming in particular: garbage in, garbage out.

        • DarkCloud@lemmy.world · 3 months ago (edited)

          Nope, I can re-train for small tasks at a moment’s notice. I can learn as I go and retain that information long-term, then make choices much later based on what I’ve learned.

          I think and have an internal world which chugs along constantly because I am autonomous.

          These are all characteristics a human intelligence has that language models don’t.

          These are the hurdles.

          …but also: obviously this is a huge field with huge possibilities (no one is denying that). It’s just not intelligence yet.

          Potential doesn’t equate to reality - and it’s only potential until it does. Then it’s reality.

          Right now in reality, there’s no intelligence there. Regardless of whether there might be one day.

      • hendrik@palaver.p3x.de · 3 months ago (edited)

        Sure. What I’m referring to is that they don’t just generate random garbage. They actually store knowledge and have the ability to combine and apply it. Sure, they get trained on some datasets; that’s what AI and machine learning is all about. But it has complex implications and consequences.

        LLMs work very unlike “intelligent” living creatures. However, that doesn’t mean they can’t generate “intelligent” text; they do it a different way. There are some severe limitations as of now, and I haven’t found good use for them in my real-world tasks yet, as they’re just not intelligent enough to do anything useful. Except translation and role-play games. Those work very well, and I’m glad I have something outperforming Google Translate by quite some degree.

        Intelligence isn’t well defined. And it’s not set in stone that you need human-like intelligence for lots of tasks. I mean, even a human can only do things they’ve learned before, or infer things from other things they’ve learned. So fundamentally it’s not that different. For example, I’m not a lawyer. If I wanted to write some legal document, I’d need to read a lot of material and study the matter. An LLM would need to do exactly the same to be able to generate text that sounds like a legal document. And the “intelligence” part we’re talking about is finally understanding the subject and being able to connect things, so to speak: infer, and apply learned knowledge to new things. And we have some evidence that AI can do exactly that.

        So… it’s a bit crude, and not there yet. But it’s more than a stochastic parrot. The fundamental parts of a subarea of intelligence are there, and not by accident: machine learning was invented to infer patterns from datasets.

        And I’m not sure about the sentience part either. Sure, it’s completely impossible with the current approach. But is there a fundamental barrier? Didn’t nature already “code” it into existence with the structure of our brains? And we found out it’s just physics: a bit of chemistry and electricity in a complex structure of interconnected cells. It’s utter sci-fi, but why wouldn’t we be able to do the same with silicon chips?

        I know people regularly deny the possibility. But I’ve never seen a good argument or a scientific paper ruling it out. I think it’s still debated whether there are fundamental barriers, or what makes sentience in the first place. Just stating some uninformed opinion on that doesn’t prove anything. And for a positive proof we’re missing a good idea, the research, any hardware that’d be remotely capable of doing the calculations, and lots of money and energy. So we’re far away from even thinking about it. Maybe we’ll know in 100 years. Or you give me some mathematical proof that rules it out?!

          • DarkCloud@lemmy.world · 3 months ago

          I’m not really here to debate the possible, just to state the reality: it’s not intelligent. Regardless of the fact it produces “intelligent text”… which I take it is short for “intelligent-sounding text”… which of course it does; that’s what it was trained on.

            • hendrik@palaver.p3x.de · 3 months ago (edited)

            Fair enough. Yeah, the article was about the future and hypothetical advancements in science in the decades to come. But I’d agree: as of now I wouldn’t call it intelligent. I tried letting ChatGPT write my emails, and despite everyone hyping AI to no end and calling the newest one a PhD-level student… I don’t see that at all.