• space_comrade [he/him]@hexbear.net
    1 year ago

    I don’t think it’s gonna go that way. In my experience, the bigger the chunk of code you make it generate, the more wrong it’s gonna be. It’s not just that a larger chunk has more room for mistakes; it gets exponentially more wrong.

    It’s only good for generating small chunks of code at a time.

    • FunkyStuff [he/him]@hexbear.net
      1 year ago

      It won’t be long (maybe 3 years max) before industry adopts some technique for automatically prompting an LLM to generate code that fulfills a given requirement, then iteratively improving it against test data until it passes all the test cases. And I’m pretty sure there are already ways to get LLMs to generate the test cases themselves. So this could go nightmarishly wrong very, very fast if industry adopts that technology and starts integrating hundreds of unnecessary libraries or pieces of code that the AI just learned to “spam” everywhere, so to speak. These things are way dumber than we give them credit for.
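The generate-then-iterate loop described in that comment can be sketched in a few lines. Everything below is hypothetical: `query_llm` is a stand-in for a real model call and just replays canned candidate implementations so the sketch actually runs.

```python
# Sketch of an iterative "generate, run tests, feed failures back" loop.
# In a real system, query_llm would call an actual model with the prompt
# (including the failing-test feedback); here it replays canned outputs.

CANDIDATES = [
    "def add(a, b):\n    return a - b",   # first (wrong) attempt
    "def add(a, b):\n    return a + b",   # corrected attempt
]

def query_llm(prompt, attempt):
    # Hypothetical model call: ignores the prompt, returns canned code.
    return CANDIDATES[min(attempt, len(CANDIDATES) - 1)]

def run_tests(source, cases):
    ns = {}
    exec(source, ns)  # never exec untrusted model output outside a sandbox
    return [(args, expected) for args, expected in cases
            if ns["add"](*args) != expected]

def refine(prompt, cases, max_iters=5):
    # Loop: generate code, test it, append failures to the prompt, retry.
    for attempt in range(max_iters):
        code = query_llm(prompt, attempt)
        failures = run_tests(code, cases)
        if not failures:
            return code, attempt + 1
        prompt += f"\nThese cases failed: {failures}"
    return None, max_iters

cases = [((1, 2), 3), ((0, 0), 0)]
code, iters = refine("Write add(a, b).", cases)
print(iters)  # → 2 (the canned model "fixes" the bug on the second try)
```

The danger the comment points at lives in `run_tests`: the loop only optimizes for passing the given cases, so anything the tests don’t check (bloat, pointless dependencies, weird idioms the model likes to spam) sails straight through.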