• NuXCOM_90Percent@lemmy.zip
    2 days ago

    Bad developers just do whatever. It doesn’t matter if they wrote the code themselves or if a tool wrote it for them. They aren’t going to be more or less detail-oriented whether it is an LLM, a Doxygen plugin, or their own fingers that made the code.

    Which is the problem with claims like that. It is nonsense, and anyone who has ACTUALLY worked with early-career staff can tell you… those kids aren’t writing much better code than chatgpt, and there is a reason so many of them have embraced it.

    But it also fundamentally changes the conversation. It stops being “We should heavily limit the use of generative AI in coding because it prevents people from developing the skills they need to evaluate code” and instead becomes “We need generative AI to be better”.

    It was the exact same thing with “AI can’t draw hands”. Everyone and their mother insisted on that. Most people never thought about why basically all cartoons use four-fingered hands and so forth. So, when the “studio ghibli filter” was made? It took off because “Now AI can do hands!”, and there was no thought about the actual implications of generative AI.

    • Feyd@programming.dev
      2 days ago

      Nothing outside of the first paragraph here is terribly meaningful, and the first paragraph is just trying to talk past what I said before. I’ll reiterate, very clearly.

      I have observed several of my coworkers who used to be really good at their jobs get worse at their jobs (and make me spend more time ensuring code quality) since they started using LLM tools. That’s it. That’s all I care about. Maybe they’ll get better. Maybe they won’t. But right now I’d strongly prefer people not use them, because people using them has made my experience worse.