• NuXCOM_90Percent@lemmy.zip · 2 days ago

    That is the reality.

    The problem isn’t “vibe coding” (anyone who has ever managed early-career staff can attest that… the bar is REAL fucking low). The problem is a complete lack of testing, or of any sort of “investment” in caring whether production breaks.

    A lot of it is general apathy induced by… gestures around. But it very much goes beyond the obnoxious brain drain from “vibe coding”. Personally speaking, I am THIS fucking close to driving over to my company’s head of IT’s house and burning it down with him in it (for legal purposes, this is a joke), because that entire team continues to think “we’ll just wait until people tell us what is broken” is at all fucking acceptable.

    But pretty much any SDLC is going to be built around code review. And code review is how you handle developers of different skill and sanity levels, whether they are old hats who have been in the basement since before you were born, youngins who can’t stop talking about Rust, or chatbots.

    • Feyd@programming.dev · 2 days ago

      Unfortunately a lot of people are trying to outsource code review to LLMs as well. Also, LLM generated code is more likely to have subtle errors that a human would be very unlikely to make in otherwise mundane code. Errors that are easy to gloss over if you don’t take a magnifying glass to it. My current least favorite thing is LLM generated unit tests that don’t actually test what they say they do.
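      A hypothetical sketch of the pattern (function and test names invented for illustration): a test whose name promises a behavioral check, but whose assertion is a tautology that can never fail, next to a test that actually checks the claimed behavior:

```python
import math

def apply_discount(price: float, rate: float) -> float:
    """Apply a fractional discount (e.g. 0.1 means 10% off) to a price."""
    return price * (1 - rate)

# Anti-pattern: the name claims a correctness check, but the assertion
# compares the function's output to itself, so it passes no matter how
# broken the implementation is.
def test_discount_is_applied_correctly():
    assert apply_discount(100.0, 0.1) == apply_discount(100.0, 0.1)

# A test that pins down the behavior the name claims, using a
# float-safe comparison instead of exact equality.
def test_discount_is_applied_correctly_for_real():
    assert math.isclose(apply_discount(100.0, 0.1), 90.0)

# Run directly for illustration (a real suite would use a test runner).
test_discount_is_applied_correctly()
test_discount_is_applied_correctly_for_real()
```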

      • NuXCOM_90Percent@lemmy.zip · 2 days ago

        Shit code review is not code review. If you just rubber stamp everything or outsource it to someone who will, you aren’t doing code review.

        Aside from that:

        LLM generated code is more likely to have subtle errors that a human would be very unlikely to make in otherwise mundane code.

        Citation requested

        My current least favorite thing is LLM generated unit tests that don’t actually test what they say they do.

        If I had a nickel for every single time I had to explain to someone that their unit test doesn’t do anything, or that they literally just copied the output and checked against it (and that they are dealing with floating points, so that is actually really stupid)… I’d probably go buy some Five Guys for lunch.
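        A minimal sketch of the floating-point version of this (function name invented): the “copied the output and checked against it” test versus a tolerance-based one:

```python
import math

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# Anti-pattern: the expected value was copy-pasted from one run's
# printout and asserted exactly. A different summation order (or a
# refactor that changes rounding) breaks the test even though the
# code is still correct.
def test_mean_copied_output():
    assert mean([0.1, 0.2, 0.3]) == 0.20000000000000004

# Comparing within a tolerance tests the intent (the mean is 0.2),
# not one particular run's last-bit rounding.
def test_mean_with_tolerance():
    assert math.isclose(mean([0.1, 0.2, 0.3]), 0.2, rel_tol=1e-9)
```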


        It’s like saying that the problem is that you are using robots to assemble Cybertrucks rather than people. The problem isn’t who is super-gluing sharp jagged metal together. The problem is that your product is fundamentally shite and should never have reached production in the first place, and you need to REALLY work through your design flows and so forth.

        • Feyd@programming.dev · 2 days ago

          Citation requested

          I keep seeing it over and over again. Anyone that actually has to deal with coworkers using this bullshit that isn’t also in the cult is going to recognize it.

          If I had a nickel for every singl yada yada yada

          Sure, there have always been better and worse developers. LLMs are making developers that used to be better, worse.

          • NuXCOM_90Percent@lemmy.zip · 2 days ago

            Bad developers just do whatever. It doesn’t matter if they wrote the code themselves or if a tool wrote it for them. They aren’t going to be more or less detail oriented whether it is an LLM, a doxygen plugin, or their own fingers that made the code.

            Which is the problem with claims like that. It is nonsense, and anyone who has ACTUALLY worked with early-career staff can tell you: those kids aren’t writing much better code than ChatGPT, and there is a reason so many of them have embraced it.

            But it also fundamentally changes the conversation. It stops being “we should heavily limit the use of generative AI in coding because it prevents people from developing the skills they need to evaluate code” and becomes “we need generative AI to be better”.

            It was the exact same thing with “AI can’t draw hands”. Everyone and their mother insisted on that. Most people never thought about why basically all cartoons use four-fingered hands and so forth. So, when the “Studio Ghibli filter” was made? It took off, because “now AI can do hands!”, and there was no thought toward the actual implications of generative AI.

            • Feyd@programming.dev · 2 days ago

              Nothing outside of the first paragraph here is terribly meaningful, and the first paragraph is just trying to talk past what I said before. I’ll reiterate, very clearly.

              I have observed several of my coworkers who used to be really good at their jobs get worse at their jobs (and make me spend more time ensuring code quality) since they started using LLM tools. That’s it. That’s all I care about. Maybe they’ll get better. Maybe they won’t. But right now I’d strongly prefer people not use them, because people using them has made my experience worse.

        • veni_vedi_veni@lemmy.world · 1 day ago

          The problem isn’t who is super-gluing sharp jagged metal together.

          I know it’s not related, but I’m curious about this part.

          I know it has an aluminum-based frame, which should limit its use for hauling heavy loads, but what else?

    • mx_smith@lemmy.world · 2 days ago

      At least 1 out of every 5 comments I see from coderabbitai leads me down a rabbit hole checking whether the suggestion is correct. It can waste so much time trying to validate its suggestions, only to find out they’re complete BS.