• 0 Posts
  • 7 Comments
Joined 2 months ago
Cake day: January 27th, 2025



  • That’s a lot better than it could be. But I’m also talking about training costs. Models have to be updated to work swimmingly with new languages, conventions, libraries, etc. Models are not future-proof.

    There are more efficient training methods being employed. See: the techniques R1 used. And existing models can be retooled. But it’s still an intrinsic problem.

    Perhaps most importantly, training a half-decent LLM from scratch is out of the reach of common consumer-grade hardware. It’s a tech that exists mostly in the scope of concentrated power, among people who care little for the environmental ramifications. Relying on this in the short term puts influence and power in the hands of people willing to burn our planet. Quite the hard sell, as you might imagine.

    Also see: the other points I made.



  • Energy and water costs for development and usage alone are completely incompatible with that. Come back in 20 years when it’s not batshit insane ecologically.

    Not to mention reducing the power usage of programs isn’t going to be very feasible based simply on an LLM’s output. LLMs are biased towards common coding patterns, and those are demonstrably inefficient (if the scourge of Electron-based web apps is any tell). Thus your code wouldn’t work well on lower-grade hardware. Hard sell.

    Theoretically they could be an efficient method of helping build software in the future. As it is now, that’s a pipe dream.

    More importantly, why is the crux of your focus on not understanding the code you’re making? That’s intrinsically contrived from the perspective of a solarpunk future where applications are designed to help people efficiently - without much power, heat, etc. Weird, man.