• squaresinger@lemmy.world
    7 hours ago

    The problem with AI here is that it tends to prefer agreeing with you over being correct, and it's very likely to teach you patterns and terminology that don't exist.

    For example, I just asked ChatGPT to explain a “backflip” in the context of agile development (I claimed I was a university student and that a teacher had used the term in the context of moving tickets), and it came up with this:

    If your teacher linked “backflip” to moving tickets in a predictable fashion, they might have been emphasizing:

    The importance of minimizing rework or regressions.

    Understanding why work items move backward (if they do) and making that visible in your process.

    Managing workflow policies so that tickets don’t repeatedly “backflip” between stages, which can disrupt predictability.

    How This Might Help You Move Tickets More Predictably: If “backflip” means work moving backward:

    Track the Cause of Backflips: Identify why tickets are moving backward (incomplete acceptance criteria, insufficient definition of done, unclear requirements).

    Improve Definition of Ready/Done: Tighten entry/exit criteria to reduce backflows.

    Add Checkpoints: Build small validation steps earlier to catch issues sooner.

    Visualize Flow: Use a Kanban board to visualize backward movement and analyze bottlenecks or quality gaps.

    It just takes the nonsensical word, makes something up, and claims that it’s right.

    • paraphrand@lemmy.world
      5 hours ago

      I believe you and agree.

      I have to be careful not to ask the AI too many leading questions. It’s very happy to go off and fix things that don’t need fixing when I suggest there is a bug, when in reality it’s user error or a configuration error on my part.

      It’s so eager to please.

      • squaresinger@lemmy.world
        4 hours ago

        Yeah, as soon as the question could be interpreted as leading, it will directly follow your lead.

        I had a weird issue with GitHub the other day, and after Google and the documentation failed me, I asked ChatGPT as a last-ditch effort.

        My issue was that a file that really can’t have an empty newline at the end kept ending up with one, no matter what I did to the file before committing. I figured that something was adding the newline, and ChatGPT confirmed that almost enthusiastically. It was so sure that GitHub did it and told me it’s a frequent complaint.

        Turns out, no, it doesn’t. All that had happened was that I first committed the file with an empty newline by accident, and GitHub’s raw file view has a caching mechanism with a fairly long expiry. So all I had to do was wait for a bit.
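
        For what it’s worth, a minimal sketch of how one could have checked the bytes locally instead of trusting the cached raw URL (the filename is just a placeholder; `git show HEAD:somefile.txt` would also show the committed version directly):

        ```python
        # Minimal sketch: check whether a file ends with a trailing newline,
        # without relying on GitHub's (possibly cached) raw view.
        from pathlib import Path

        def ends_with_newline(path: str) -> bool:
            # Read raw bytes so the check is exact, regardless of encoding.
            return Path(path).read_bytes().endswith(b"\n")

        if __name__ == "__main__":
            print(ends_with_newline("somefile.txt"))  # hypothetical filename
        ```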

        Wasted about an hour of my time.