Sure, it’s a bit clickbaity; he does that often. It’s not a real attempted murder, of course. AI chatbots can’t do that without having access to, and power over, all the control systems. The only thing they “could” do is play psychological games in the chat to achieve a goal (maybe asking someone to murder someone else for them).

What unsettles me most is the idea of AI tools like these being used for advice on harming other people or gaining a position of power, with the LLM suggesting a few operations the person could carry out. That is the most alarming thing for me. Weak or dumb people, or people in a bad situation, are the real risk: the same people who would do it if a human told them to, and to whom it makes no difference whether a human or a robot is talking. Maybe they believe what the AI promises them.

Video description:


Hello guys and gals, it’s me Mutahar again! This time we take a look at something alarming I saw pop up in my feed. An AI was recently accused of letting a human being die in order to save itself. Is this just misinfo? Let’s find out! Thanks for watching!

    • thingsiplay@beehaw.orgOP (+5/-1) · 2 days ago

      “Intent” is not that well defined. For example, in Germany, if someone drives drunk and someone else gets killed as a result, the defendant (the person who drove the car) can be charged with “intent to murder”, even if that was not the intention at all. Negligence can constitute intent.

      So if the creators AND users of the LLM do not care that people get killed as a result, does that make them murderers? Of course the LLM isn’t the murderer here, that goes without saying. It’s the human who is responsible.

      • MotoAsh@piefed.social (+4) · 1 day ago

        That’s why it’s even more important to realize the machine has no intent. Its actions are solely the result of its creator’s actions in creating it.

        I point out anthropomorphization so much because not only will it inoculate people against the advertising that WILL anthropomorphize it, but it also means that when it fucks up, the appropriate people will be punished.

        This isn’t a thinking machine going postal. It’s a dangerous product being pushed out with little regard for consequences.

        Selling dangerous products used to mean something before billionaires bought the government…

      • MotoAsh@piefed.social (+3) · 1 day ago

        Nowhere at all anywhere did I ever say AI is totally not a problem.

        Maybe you should be less worried about reading between the lines and more worried about assuming what people didn’t say?

        The bot didn’t want anything. It didn’t try to murder anyone. At all. What happened was, rich fucks with unchecked power are allowed to release dangerous, unethical products based on nothing but hype and vapid promises.

        The only technology-related thing is the involvement of AI, and it’s all BS and stupid. The AI DOES NOT WANT. The AI is not the one in control.

        Without intent from the machine, this is EXACTLY THE SAME situation as every other time greedy capitalists pushed unsafe products.

        Is the 9000000th time capitalists directly harmed society and those in it the time when humanity FINALLY learns to not let horrible shitheads run free over the world based on lies of promises!? Stay tuned to find out!!

      • TheRtRevKaiser@beehaw.orgM (+12) · 2 days ago

        I think the problem with anthropomorphizing LLMs this way is that they don’t have intent, so they can’t have responsibility. If this piece of software had been given the tools to actually kill someone, I think we all understand that it wouldn’t be appropriate to put the LLM on trial. Instead, we need to be looking at the people who are trying to give more power to these systems and dodge responsibility for their failures. If this LLM had caused someone to be killed, then the person who tied critical systems into a black-box piece of software that is poorly understood and not fit for the purpose is the one who should be on trial. That’s my problem with anthropomorphizing LLMs: it shifts the blame and responsibility away from the people who are responsible for attempting to use them for their own gain, at the expense of others.

      • spit_evil_olive_tips@beehaw.org (+8) · 2 days ago

        If it had the power to do so it would have killed someone

        right…the problem isn’t the chatbot, it’s the people giving the chatbot power and the ability to affect the real world.

        thought experiment: I’m paranoid about home security, so I set up a booby-trap in my front yard, such that if someone walks through a laser tripwire they get shot with a gun.

        if it shoots a UPS delivery driver, I am obviously the person culpable for that.

        now, I add a camera to the setup, and configure an “AI” to detect people dressed in UPS uniforms and avoid pulling the trigger in that case.

        but my “AI” is buggy, so a UPS driver gets shot anyway.

        if a news article about that claimed “AI attempts to kill UPS driver” it would obviously be bullshit.

        the actual problem is that I took a loaded gun and gave a computer program the ability to pull the trigger. it doesn’t really matter whether that computer program was 100 lines of Python running on a Raspberry Pi or an “AI” running on 100 GPUs in some datacenter somewhere.

          • MotoAsh@piefed.social (+3) · 1 day ago

            It DOES matter. Directly. Fully.

            If people think that the unthinking “AI” actually has autonomy, they will be less likely to hold the people responsible to account.

            Why do you not understand that? It is a critical fact of the matter that modern-day “AI” neither thinks nor wants, because then responsibility for its actions rightfully falls on whoever set up the Rube Goldberg machine with machetes on it.

            This is not a machine going postal. It’s a dangerous product they’ve been allowed to sell.

            We’re trying to impress on you the importance of culpability. If it thinks for itself, then it becomes a defective product. If it doesn’t, it’s a dangerous product.

            It’s the difference between someone selling a car that happens to break down easily, and one where the brake lines randomly fall off because they fucked up the design and didn’t want to spend the money to do it right… It’s the difference between accidents and negligence. This “AI” shit? Pure greed-fed negligence.

            The wording in the article is on purpose. They want you to think it doesn’t matter while they’re anthropomorphizing it, FFS. They want you to blame the bot, not the guy who made the obviously dangerous bot and then sold it to the world for billions.

        • t3rmit3@beehaw.org (+1) · 2 days ago

          eh, using the “computer/software engineers aren’t certified PEs so they’re LYING” thing is such a silly argument. Government certification programs don’t dictate language, and easily half, if not more, of computer jobs are called “engineer” of some kind.

          He called it a ‘film degree’, but it’s actually a 2-year broadcasting and cinematography diploma.

          Sounds like a degree about filming stuff to me? Am I supposed to do some kind of elitist, “2-years? that’s not a real degree” thing?

          This is nitpicky stuff, and I’m not sure why I’m supposed to dislike (or care at all about) this guy in the first place. The thumbnail literally calls the guy a fraud, but it just seems like the creator has an axe to grind.

      • wizardbeard@lemmy.dbzer0.com (+3/-2) · 2 days ago

        He and his family were caught doing pretty serious charity fraud, iirc, and he kept throwing out completely absurd excuses rather than owning up to it.

        Entirely wrong youtuber. Sorry.

        • thingsiplay@beehaw.orgOP (+8/-1) · 2 days ago

          That was not Mutahar. The person you’re referring to with the charity fraud is “The Completionist”, whose real name is Jirard Khalil. Mutahar (the channel SomeOrdinaryGamers) covered this issue together with Karl Jobst and exposed “The Completionist”’s charity fraud.