• Megaman_EXE@beehaw.org · 3 points · 3 hours ago

    It blows my mind that people are going forward with this AI nonsense and that it has infected key infrastructure. I feel like I’m taking crazy pills here. I could kind of understand it if it actually worked. If it genuinely worked as well as they say, I would still question it, but it would make more sense.

  • kibiz0r@midwest.social · 4 points · 9 hours ago

    This is a terrible idea for Amazon, the cloud services company.

    But for Amazon, the AI company? This is them illustrating the new grift that almost any company can do: use AI to keep a plausible mirage of your company going while reducing opex, and sacrifice humans when necessary to dodge accountability.

    But his job wasn’t even to supervise the chatbot adequately (single-handedly fact-checking 10 lists of 15 items is a long, labor-intensive process). Rather, it was to take the blame for the factual inaccuracies in those lists. He was, in the phrasing of Dan Davies, “an accountability sink” (or as Madeleine Clare Elish puts it, a “moral crumple zone”).

    https://locusmag.com/feature/commentary-cory-doctorow-reverse-centaurs/

  • XLE@piefed.social · 43 points · 14 hours ago

    If a person is going to be blamed, it should be the one who mandated use of the AI systems… Because that’s exactly what Amazon was doing.

  • Soulphite@reddthat.com · 33 points · 14 hours ago

    Talk about an extra slap in the fuckin face… getting blamed for something your replacement did. Cool.

      • Soulphite@reddthat.com · 7 points · 14 hours ago

        True. In this case, these poor saps are being tricked into “training” these AIs to eventually render their own jobs obsolete.

        • pinball_wizard@lemmy.zip · 3 points · 13 hours ago

          Yes. “obsolete” in that Amazon doesn’t give a shit about reliability anymore, so an AI reliability engineer is fine, now. Haha.

  • frustrated_phagocytosis@fedia.io · 13 points · 14 hours ago

    Would said employees have voluntarily used the agent if Amazon didn’t demand it? If not, this isn’t on them. They shouldn’t be held responsible for the forced use of unvetted tools.

  • MNByChoice@midwest.social · 4 points · 12 hours ago

    Yay! Extra mental load of having to ask the AI “correctly” and then keep up one’s skills to be able to review the AI’s work! Extra bonus for being blamed for letting anything slip past.

    At least the junior who fucked up will learn something from the experience and can buy a round of beers (if the junior is paid well enough; otherwise the seniors have to buy the junior a beer while talking it out).

    • Powderhorn@beehaw.org · 3 points · 12 hours ago

      I’m reminded of a time I was in a bar in Georgia at a conference. It was in the hotel, and a high-ranking editor for the then-reputable Washington Post bought me a beer. He let me take a sip before launching into how much “immature shit [I] need to get out of [my] system” before being ready to be “Post material.”

      Where is any industry going to be in a decade, when no one’s been mentored?

  • Petter1@discuss.tchncs.de · +2/-3 · 13 hours ago

    Well, AI code should be reviewed prior to merging into master, the same as any other code merged into master.

    We have git for a reason.

    So I would definitely say this was a human fault: either the reviewer’s, or that of whoever decided that no review process (or an AI-driven one) was needed.

    If I managed DevOps, I would demand that AI code be signed off at commit by a human who takes responsibility for it, with the expectation that they review the AI’s changes before pushing.
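
    A minimal sketch of such a gate, as a repo-local commit-msg hook. The “AI-Generated:” and “Signed-off-by:” trailer names are a hypothetical team convention, not an established standard:

    ```python
    #!/usr/bin/env python3
    """commit-msg hook: reject AI-assisted commits with no human sign-off.

    Assumes a convention where AI-assisted commits carry an
    "AI-Generated: yes" trailer and a human adds "Signed-off-by: ..."
    (e.g. via `git commit -s`) to take responsibility for the review.
    """
    import re
    import sys

    def main() -> int:
        # git passes the path of the commit message file as the first argument
        with open(sys.argv[1], encoding="utf-8") as f:
            msg = f.read()

        ai_generated = re.search(r"^AI-Generated:\s*yes\b", msg,
                                 re.MULTILINE | re.IGNORECASE)
        signed_off = re.search(r"^Signed-off-by:\s*\S+", msg, re.MULTILINE)

        if ai_generated and not signed_off:
            sys.stderr.write(
                "Rejected: commit is marked AI-Generated but carries no "
                "Signed-off-by trailer from a reviewing human.\n"
            )
            return 1
        return 0

    if __name__ == "__main__":
        raise SystemExit(main())
    ```

    A client-side hook is trivially bypassed, of course, so the same check would also have to run server-side (pre-receive) or in CI to actually pin down accountability.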

    • heluecht@pirati.ca · 1 point · 1 hour ago

      @Petter1 @remington At our company, every PR needs to be reviewed by at least one lead developer, and the lead developers’ PRs have to be reviewed by architects. We encourage the other developers to perform reviews as well. Our company encourages the use of Copilot, but none of our reviewers would pass code that they don’t understand.
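
      A rough sketch of how that “lead must approve” rule could be enforced as a CI gate, assuming GitHub-hosted repos; the LEADS set and the REPO/PR_NUMBER/GITHUB_TOKEN environment variables are placeholders for illustration, not a real setup:

      ```python
      #!/usr/bin/env python3
      """CI gate: fail unless the PR has an APPROVED review from a lead dev.

      Sketch against the GitHub pull-request reviews REST API; lead logins
      and environment variable names are assumptions. Pagination is ignored
      for brevity.
      """
      import os
      import sys

      import requests

      LEADS = {"alice", "bob"}  # hypothetical lead-developer logins

      def main() -> int:
          repo = os.environ["REPO"]          # e.g. "acme/backend"
          pr_number = os.environ["PR_NUMBER"]
          token = os.environ["GITHUB_TOKEN"]

          resp = requests.get(
              f"https://api.github.com/repos/{repo}/pulls/{pr_number}/reviews",
              headers={"Authorization": f"Bearer {token}"},
              timeout=10,
          )
          resp.raise_for_status()

          approvers = {r["user"]["login"] for r in resp.json()
                       if r["state"] == "APPROVED"}
          if approvers & LEADS:
              return 0

          sys.stderr.write(f"No lead approval yet; approved by: {sorted(approvers)}\n")
          return 1

      if __name__ == "__main__":
          raise SystemExit(main())
      ```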

    • pinball_wizard@lemmy.zip · 11 points · edited · 12 hours ago

      If I managed DevOps, I would demand that AI code be signed off at commit by a human who takes responsibility for it, with the expectation that they review the AI’s changes before pushing.

      And you would get burned. Today’s AI does one thing really, really well: create output that looks correct to humans.

      You are correct that mandatory review is our best hope.

      Unfortunately, the studies are showing we’re fucked anyway.

      Because whether the AI output is right or wrong, it is highly likely to at least look correct; creating correct-looking output is exactly where what we call “AI” today shines.

    • Limerance@piefed.social · 8 points · 13 hours ago

      Realistically, what happens is that the code review is done under time pressure and not very thoroughly.

      • TehPers@beehaw.org · 2 points · 1 hour ago

        This is what happens to us. People put out a high volume of AI-generated PRs, nobody has time to review them, and the code becomes an amalgamation of mixed paradigms, dependency spaghetti, and partially tested (and horribly tested) code.

        Also, the people putting out the AI-generated PRs are the same people rubber stamping the other PRs, which means PRs merge quickly, but nobody actually does a review.

        The code is a mess.

          • TehPers@beehaw.org · 1 point · 48 minutes ago

            Because if I spent my whole day reviewing AI-generated PRs and walking through the codebase with their authors, only for the next PR to be AI-generated unreviewed shit again, I’d never get my job done.

            I’d love to help people learn, but nobody will use anything they learn because they’re just going to ask an LLM to do their task for them anyway.

            This is a people problem, and primarily at a high level. The incentive is to churn out slop rather than do things right, so that’s what people do.