• wccrawford@discuss.online · 3 days ago

    Most of the people I hear being critical of AI Coding are very clear about what it’s good for, and what it isn’t.

    If someone is wholly for or against something, their advice generally isn’t very good.

    • tyler@programming.dev · 3 days ago

      The majority of the time, people are talking about AI in general, not AI coding. AI coding might be useful, but if it comes with all of the baggage of LLMs in general then it might not be worth it no matter how much it helps you code.

      It’s like buying software from someone who is actively committing genocide. They might have the best software on the entire planet, even the entire universe, but that doesn’t mean you should use it.

      • setsubyou@lemmy.world · 3 days ago

        “I don’t understand how people can look at the insane progress GPT has made in the last 3 years and just arbitrarily decide that this is its maximum capability.”

        So this is not entirely arbitrary; part of it is that the critics aren’t just looking at the progress, but also at systemic issues.

        For example, we know that larger models trained on more data are more powerful. That’s probably the biggest contributing factor to the insane pace at which they’ve developed. But we’re also at a point where AI companies are saying they are running out of data. The models we have now are already trained on basically the entire open internet and a lot of non-public data too. So we can’t expect their capabilities to keep scaling with data unless we find ways to get humans to generate more of it. At the same time, the quality of data on the open internet is decreasing because more and more of it is AI-generated.
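
        For a rough sense of why the data term matters, the “Chinchilla” scaling analysis (Hoffmann et al., 2022) fits model loss as approximately

            L(N, D) ≈ E + A/N^α + B/D^β

        where N is the parameter count, D the number of training tokens, and E an irreducible floor. Once D stops growing, the B/D^β term becomes a fixed contribution that adding parameters alone can’t reduce.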

        On the other hand, making them larger also has physical requirements, most of all power. We are already at a point where AI companies are buying nuclear power plants for their data centers. So scaling in this way is close to the limit too. Building new nuclear power plants takes ages.

        Another issue is that LLMs can’t learn: once trained, their weights are fixed, and nothing from a conversation is written back into the model. They don’t have to be able to learn to be useful, and obviously we can use the current ones just fine for at least some tasks. But it does limit how far they can progress.

        And then there is the whole AI bubble. On the economic side, we have an entire circular economy built on the idea that companies like OpenAI can spend billions on data centers. But they are losing money. Pretty much none of the AI companies are profitable, other than the ones that only provide the infrastructure. Right now investors are scared enough of missing out on AGI to keep investing, but if they stopped, it would be over.

        And all of this is super fragile. The current big players are all using the same approach. If one company takes the next step and finds a better architecture than transformer LLMs, the others are toast. Or if some Chinese company makes another breakthrough in energy efficiency. Or if there is a hardware breakthrough and the incentive to pay for hosted LLMs goes away. Even progress can pop the bubble: if we can all run AI at home that does a good enough job, the AI companies will never hit their revenue targets. Then the investment stops, and companies that bleed billions every quarter without investors backing them can die very quickly.

        Personally, I don’t think they will stop getting better right now. And even if the models themselves plateau, I’m not convinced we understand them well enough yet to have exhausted the ways we can improve how we use them. But when people say this is the peak, they’re looking at that bigger picture: they say LLMs can’t get closer to human intelligence because we fundamentally don’t have a way to make them learn, that the development model is not sustainable, and other reasons like that.

      • m532@lemmygrad.ml · 2 days ago

        “AI bad” is dogma to them. They don’t reason forward, but backward: everything that contradicts the dogma must be fake.

        I broke out of it after realizing that they made me side with the copyright industry I had always hated.