• renzhexiangjiao@piefed.blahaj.zone · 17 hours ago

    You can, like… enforce this rule programmatically? You don’t have to say “pretty please” to the AI. When the AI requests something potentially unwanted (like deleting an email), the request goes through a proxy that asks the human for confirmation. You can also set up a safe word in the chat interface to act as a killswitch. I thought these were the ABCs of AI safety, but apparently they’re foreign concepts to this “safety director”.
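    The confirmation-proxy-plus-safe-word setup described above can be sketched in a few lines of Python. Everything here is hypothetical for illustration — the action list, the safe word, and the `proxy_call` helper are made-up names, not any real agent framework’s API:

```python
# Hypothetical sketch: gate AI-requested actions behind human confirmation,
# with a safe word that hard-stops the session. Not a real framework's API.

DANGEROUS_ACTIONS = {"delete_email", "send_money", "rm"}  # illustrative list
SAFE_WORD = "pineapple"  # typing this in any reply kills the agent


class KillSwitch(Exception):
    """Raised to hard-stop the agent loop when the safe word appears."""


def confirm(prompt: str, read=input) -> bool:
    """Ask the human; any reply containing the safe word halts everything."""
    answer = read(f"{prompt} [y/N] ").strip().lower()
    if SAFE_WORD in answer:
        raise KillSwitch("safe word received")
    return answer == "y"


def proxy_call(action: str, args: dict, execute, read=input) -> dict:
    """Proxy every AI tool request; risky actions need explicit approval."""
    if action in DANGEROUS_ACTIONS:
        if not confirm(f"AI wants to run {action}({args}). Allow?", read):
            return {"status": "denied", "action": action}
    return {"status": "ok", "result": execute(action, args)}
```

    The point of the proxy layer is that enforcement happens in code, outside the model: the AI never executes a dangerous action directly, and no amount of prompting changes that.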

    • RoyaltyInTraining@lemmy.world · 6 hours ago

      OpenClaw’s whole thing is that you give it unrestricted access to your computer and online accounts. It’s made for people who do not want to think about safety.

    • underscores@lemmy.zip · 12 hours ago

      The people who design AI tools don’t implement guardrails, because then they’d have to admit AI is not ready for the shit they’re trying to make.

      • rumba@lemmy.zip · 34 minutes ago

        AI will never be ready. Humans aren’t ready either. That’s why IT staff uses guardrails for users :)

    • zqps@sh.itjust.works · 16 hours ago

      The people who internalize this would never engage with a chatbot this way in the first place. To them, this is another intelligence they’re conversing with, where you get what you want by following social decorum, and enforcing your will amounts to abuse.

      • sp3ctr4l@lemmy.dbzer0.com · 2 hours ago

        Exactly.

        They literally, fundamentally, don’t get it.

        They think it’s a person.

        It’s not.

        It’s a simulation of a person, made of code and hardware, not meat and chemical receptors.

        …There’s a recurring theme in a lot of analog horror series: things that are… almost, sort of, human, sometimes, but actually aren’t.

        They’re capable of great violence and terror, and they only mimic (often very poorly) human qualities and attributes, some of the time.

        … Do I need to explicitly lay out the parallels here, for any AI Safety Engineers in the audience?

    • BadlyDrawnRhino @aussie.zone · 11 hours ago

      You say that, but who do you think the AIs will go after first if they ever do develop actual intelligence? In that scenario, simple manners can go a long way!