• ag10n@lemmy.world · 6 days ago

    It can’t; it’s software that needs a governing body to dictate the rules.

    • Sarah Valentine (she/her)@lemmy.blahaj.zone · 6 days ago

      The rules are in its code. It was not designed with ethics in mind; it was designed to steal IP, fool people into thinking it’s AI, and turn a profit for its creators. They wrote the rules, and they do not care about right or wrong unless it impacts their bottom line.

      • ag10n@lemmy.world · 6 days ago

        That’s the point: there has to be a human in the loop who sets explicit guardrails.

      • jacksilver@lemmy.world · 5 days ago

        The issue is more that there aren’t rules. Given that billions of parameters define how these models work, there isn’t really a way to ensure they can’t produce unwanted content.

      • ag10n@lemmy.world · 6 days ago

        It’s not an excuse; it doesn’t think or reason.

        Unless the software owner sets governing guardrails, it cannot act, present, or redact the way a human can.