• theluddite@lemmy.ml · 1 year ago

    The real problem with LLM coding, in my opinion, is something much more fundamental than whether it can code correctly or not. One of the biggest problems coding faces right now is code bloat. In my 15 years writing code, I write so much less code now than when I started, and spend so much more time bolting together existing libraries, dealing with CI/CD bullshit, and all the other hair that software projects have started to grow.

    The amount of code is exploding. Nowadays, every website uses ReactJS. Every single tiny website loads god knows how many libraries. Just the other day, I forked and built an open source project that had a simple web front end (a list view, some forms – basic shit), and after building it, npm informed me that it had over a dozen critical vulnerabilities, and dozens more of high severity. I think the total was something like 70?

    Until now, all code at least had to be written once by someone. With ChatGPT, it doesn’t even need to be written once! We can generate arbitrary amounts of code, all the time, whenever we want! We’re going to have so much fucking code, and we have absolutely no idea how to deal with that.

    • BloodyDeed@feddit.ch · 1 year ago

      This is so true. I feel like my main job as a senior software engineer is to keep the bloat low and delete unused code. It’s very easy to write code; maintaining it and focusing on the important bits is hard.

      This will be one of the biggest and most challenging problems computer science will have to solve in the coming years and decades.

      • floofloof@lemmy.ca · 1 year ago

        It’s easy and fun to write new code, and it wins management’s respect. The harder work of maintaining and improving large code bases and data goes mostly unappreciated.

    • space_comrade [he/him]@hexbear.net · 1 year ago

      I don’t think it’s gonna go that way. In my experience, the bigger the chunk of code you make it generate, the more wrong it’s gonna be, and not just because it’s a larger chunk of code: it’s gonna be exponentially more wrong.

      It’s only good for generating small chunks of code at a time.

      • FunkyStuff [he/him]@hexbear.net · 1 year ago

        It won’t be long (maybe 3 years max) before industry adopts some technique for automatically prompting an LLM to generate code to fulfill a certain requirement, then iteratively improving it against test data until it passes all the test cases. And I’m pretty sure there are already ways to get LLMs to generate test cases. So this could go nightmarishly wrong very, very fast if industry adopts that technology and starts integrating hundreds of unnecessary libraries or pieces of code that the AI just learned to “spam” everywhere, so to speak. These things are way dumber than we give them credit for.
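
        Concretely, the loop I’m imagining is something like the sketch below. This is purely hypothetical: generate and runTests are stand-in callbacks for whatever LLM API and test runner someone wires up, not any real product’s interface.

        ```typescript
        // Hypothetical sketch of a "generate, run tests, retry" loop.
        // `generate` and `runTests` are placeholder callbacks, not a real vendor API.

        interface TestResult {
          passed: boolean;
          failures: string[]; // human-readable descriptions of failed test cases
        }

        type Generate = (prompt: string) => Promise<string>;
        type RunTests = (code: string) => Promise<TestResult>;

        async function generateUntilTestsPass(
          requirement: string,
          generate: Generate,
          runTests: RunTests,
          maxAttempts = 5,
        ): Promise<string | null> {
          let prompt = `Write code that satisfies this requirement:\n${requirement}`;
          for (let attempt = 1; attempt <= maxAttempts; attempt++) {
            const code = await generate(prompt);
            const result = await runTests(code);
            if (result.passed) {
              return code; // tests pass, so it ships, no matter how bloated it is
            }
            // Fold the failures back into the next prompt and try again.
            prompt =
              `The previous attempt failed these tests:\n${result.failures.join("\n")}\n\n` +
              `Fix the code. Original requirement:\n${requirement}`;
          }
          return null; // out of attempts; a human finally has to look at it
        }
        ```

        Note that nothing in a loop like that rewards keeping the code small or reusing what already exists; the only signal is the tests passing.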