• jasoman@lemmy.world (+3/-1) · 2 hours ago

    Going to be funny when I get to tell Linux people that if they don’t want AI code, they should switch to Windows lol

  • XLE@piefed.social (+82/-2) · 2 days ago

    How did I end up on a timeline where Microsoft is talking about rolling back AI in its OS and practically acknowledging vibe coding caused problems… and Linux developers are talking about ramping up its usage?

    Obviously Microsoft is still worse here, but what are these trajectories?

    • kreskin@lemmy.world (+30/-1) · 22 hours ago

      What I think you are also seeing is AI sucking at some things and doing better than humans in others.

      AI is pretty great at adding unit tests to code, for example, where humans do a just-OK job. Or at writing a very direct, well-scoped solution to a small problem.

      AI is just OK at understanding product nuance and choices during larger implementations, or getting end to end coding right for any complex use cases.

      • XLE@piefed.social (+2/-6) · 22 hours ago

        Just assuming this is all true (i.e. that AI can produce both good and bad code), why would Linux development be able to succeed at something that Microsoft (which has an inside track with AI, far more money, and far more maturity) failed at?

        • Buddahriffic@lemmy.world (+1) · 26 minutes ago

          The same reason any passion project can do better than a commercial one (not to diminish what Linux projects are, but the people working on them do it because they want the project to progress, not because of any financial incentive): that’s where the passion is.

          Someone just looking to get paid is more likely to say “ok this is good enough” and move on to the next thing. They are more likely to have managers breathing down their necks to get something done by some arbitrary deadline, too.

          It’s why indie games have been able to compete with AAA games. The latter are following a formula to get paid, plus are more willing to make compromises in the name of either saving costs or increasing revenue. The former just want to make their fun idea reality.

          Also, MS has invested a ton of money into AI and seems to be getting desperate for a return on it, which means there’s a certain amount of denial about the quality. It’s not just a tool to them, but a tool they desperately need to work, to prove it was worth throwing a ton of money at.

          But for anyone that it’s simply a tool for, it can be useful. They are great rubber duckies. Like my last interaction with one was a case where it did horribly and was completely wrong about what “we were discussing”, but I still got to the right conclusion despite it because going through the conversation helped me think it through.

          And though it makes a lot of mistakes, its feedback isn’t always wrong. The fact that it can rehash previous things from its history means it’s good at spotting new instances of problems that have already been solved. So accepting bug reports should be fine, just with the understanding that each one needs to be looked at, and some reports will need to be rejected because they are wrong.

        • ExperiencedWinter@lemmy.world (+8/-1) · 4 hours ago

          If you take a step back, why would Linux development be able to succeed at all when Microsoft has far more money, more maturity, and more employees?

          • XLE@piefed.social (+1/-1) · 1 hour ago

            @ExperiencedWinter@lemmy.world, my question was simple: do you have a reason to assume Linux developers will succeed?

            Instead, you’ve jumped to whataboutisms, misdirections (Linux exists, therefore…?), even trying to shift the burden of proof back onto the skeptic.

            If you can’t back up your opinion with evidence, say so from the beginning.

          • XLE@piefed.social (+1/-2) · 4 hours ago

            Last time I checked, Linux is not as successful as Microsoft, and that’s before considering that Linux development is apparently courting closed-source cloud tooling built with Microsoft money.

            • ExperiencedWinter@lemmy.world (+5/-1) · 4 hours ago

              Sure, if you only focus on the desktop market I guess you could make that argument, but IDK why you would ignore servers and phones. There are plenty of examples of Linux kicking Microsoft’s ass. You think Microsoft is happy they don’t sell server licenses for every server on earth? What about Android?

              • XLE@piefed.social (+1/-3) · 3 hours ago

                What about Android…?
                Sure, what about Google?

                Do you actually have a reason Linux will be able to pull off using AI when Microsoft cannot, or is your sole argument that Linux has done other things in the past?

                • ExperiencedWinter@lemmy.world (+1/-1) · 3 hours ago

                  My argument is they are different groups of people with completely different incentive structures, so of course they will be different. You’re acting like Microsoft is failing because they use AI, not because they have management forcing the use of AI.

                  I’m definitely not an “AI is going to write all the code” kind of person, but LLMs are definitely a useful tool for prototyping and other development processes. A project with a “No AI” rule is not inherently better than a project that uses AI as a tool.

        • fruitycoder@sh.itjust.works (+6/-1) · 6 hours ago

          Motivation is a powerful influence on development. The Linux kernel is largely driven by UX and a desire for technical excellence (there are ulterior motives from some major factions, but overall this is true, and actions are judged publicly as such).

          Microsoft is, like most companies, driven by stockholder value creation.

          One produces an environment in which cautious adoption of new tech is constant: a slow trickle of use where it seems most applicable.

          The other demands that the perception of exclusive capital be created through vertical integration with proprietary IP, and that the promise of cost reductions is underway. AKA Microslop trying to add a buzzword to every IP (perceived capital creation) and promising massive layoffs.

        • Soup@lemmy.world (+2) · 5 hours ago

          Microsoft has had a lot of resources for decades and has sucked at the most basic stuff the whole time. Not taking a stance on AI usage here, just saying that a company having more money is rarely connected to the quality of the product it creates and, in fact, chasing profits often leads to products being worse.

        • kreskin@lemmy.world (+7) · 21 hours ago

          Could be a lot of reasons. A big one I see, working at a large company myself, is that AI needs to draw from a lot of data to do its work. A huge amount of contextual data, too. A company like MSFT inevitably needs to provide AI with a walled-off, curated set of data and prevent any of it from leaking. Its AIs will not have the same amount of data to draw from as an AI outside MSFT.

          • XLE@piefed.social (+2) · 19 hours ago

            Leaking? Microsoft basically owns OpenAI. They pull the data in and don’t need it to go out. The whole industry is fighting to close off competition, meaning they know they’re on top.

            So do you have any reason to assume the open-source community’s use of these (closed-source) other models is somehow bucking all real-world evidence to the contrary, or are we just hoping and praying?

    • justgohomealready@sh.itjust.works (+26/-37) · 1 day ago

      The variable you’re missing is time. There was a big shift in quality by Christmas, and the latest models are much better programmers than models from one year ago. The quality is improving so fast that most people still think of AI as a “slop generator”, when it can actually write good code and find real bugs and security issues now.

      • Zangoose@lemmy.world (+34/-5) · 1 day ago

        As someone who has to sift through other people’s LLM code every day at my job I can confirm it has definitely not gotten better in the past three months

        • TrippinMallard@lemmy.ml (+5) · 1 day ago

          We require you to submit a markdown plan before working on a feature, which must have full context, scope, and implementation details (a sketch of the shape is below). Also a verification-tests markdown file covering the happy path and the critical failure modes that would affect customers, and how the tests were performed. Both must be checked in with the commit.

          If your plan or verification docs have wrong context, obvious implementation flaws, bad coupling, architecture, interfaces, or boundary conditions, missing test cases, etc., then the PR is rejected.
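
          For illustration, a minimal plan file looks something like this (the section names here are a paraphrase, not our exact template):

              # Feature: <name>

              ## Context
              What exists today and why the change is needed.

              ## Scope
              What is in and out of scope for this PR.

              ## Implementation
              Modules touched, interfaces, coupling, boundary conditions.

              ## Verification
              Happy path: steps run and observed results.
              Failure modes: customer-affecting cases and how each was exercised.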

          • Zangoose@lemmy.world (+5/-2) · 15 hours ago

            That’s the thing though. Even if the code is good, the plans are good, the outputs are good, etc, it still devolves into chaos after some time.

            If you use AI to generate a bunch of code you then don’t internalize it as if you wrote it. You miss out on reuse patterns and implementation details which are harder to catch in review than they are in implementation. Additionally, you don’t have anyone who knows the code like the back of their hand because (even if supervised) a person didn’t write the code, they just looked over it for correctness, and maybe modified it a little bit.

            It’s the same reason why sometimes handwritten notes can be better for learning than typed notes. Yeah, one is faster, but the intentionality of slowing down and paying attention to little details goes a long way toward making code last longer.

            There’s maybe something to be said about using LLMs as a sort of sanity check code reviewer to catch minor mistakes before passing it on to a real human for actual review, but I definitely see it as harmful for anything actually “generative”

            • TrippinMallard@lemmy.ml (+1) · 3 hours ago

              That’s a good point. We’ve been using the UML diagrams as a tool to catch behavioral red flags, but the reuse and implementation details of that are left undefined.

              Maybe the answer lies in also explicitly spending a few passes focusing on code health, explainability, and maintainability. This is something I go through at the end and then retry verification tests, but not something we explicitly require in our process at the moment.

      • Peruvian_Skies@sh.itjust.works (+21/-2) · 1 day ago

        The other missing variable is actually knowing how to use the tools. Vibe coding still produces slop. Good AI-generated code requires understanding what you’re trying to achieve and giving the AI clear context on what design paradigms to follow, what libraries to use and so on. Basically, if you know how to write good code without AI, it can help you to do so faster. If you don’t, it’ll help you to write slop faster. Garbage in, garbage out.
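
        For example (an illustrative prompt; the file and library names here are made up), the difference between slop and usable output is often context like:

            “In utils/retry.py, add exponential backoff to fetch_with_retry.
            Follow the existing style, use the tenacity library that’s already
            in requirements.txt, and don’t add new dependencies. Max 5
            attempts, base delay 200 ms.”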

        • Erik@discuss.online (+10/-1) · 1 day ago

          This is a good answer. AI tools won’t make someone who has not yet developed programming skills into a good programmer. For someone who has a good grasp of implementation patterns and the toolkit for a given tech stack, they can speed things up by putting you into the role of a senior programmer reviewing code from multiple newbies.

          I’m finding that for it to work well, you have to split things up into very small pieces. You also have to really own your AI automation prompts and scripts. You can’t just copy what some YouTuber did and expect it to work well in your environment.

      • thedeadwalking4242@lemmy.world (+10/-2) · 1 day ago

        I used to feel the same way, but I’ve come to realize it’s slop that just looks better on the surface, not slop that is actually better.

        At least it compiles most of the time now. But it’s never quite right… Every time I have Claude write some section of code, 6 more things spring up that need to be fixed in the new code. A never-ending cycle. On the surface the code appears more readable, but it’s not.

  • Mongostein@lemmy.ca (+111/-3) · 2 days ago

    Linux kernel czar?

    I’m curious about this but I refuse to click the link because that just sounds so fucking stupid.

    • wewbull@feddit.uk (+7) · 1 day ago

      We Brits use Czar as a colloquialism for “person in charge of…”.

      So the head of the water regulator might be referred to as the water Czar (and they deserve a similar fate).

    • inari@piefed.zip (+73/-2) · 2 days ago

      The headline is stupid but the article is interesting. Greg is saying that since last month, for some unknown reason, AI bug reports have gotten good and useful, and are something current Linux maintainers can handle.

        • inari@piefed.zip (+24/-1) · 2 days ago

          Greg says they’re mostly small bug fixes and that the current maintainers can handle it, not sure where you’re getting the “reams” bit from

            • inari@piefed.zip (+26/-2) · 2 days ago

              Yeah I mean, the goal is not to replace code maintainers, only to assist them in their work. Greg in general seems optimistic about it:

              “I did a really stupid prompt,” he recounted. “I said, ‘Give me this,’ and it spit out 60: ‘Here’s 60 problems I found, and here’s the fixes for them.’ About one-third were wrong, but they still pointed out a relatively real problem, and two-thirds of the patches were right.” Mind you, those working patches still needed human cleanup, better changelogs, and integration work, but they were far from useless. “The tools are good,” he said. “We can’t ignore this stuff. It’s coming up, and it’s getting better.”

      • Em Adespoton@lemmy.ca (+23/-26) · 2 days ago

        It’s not just bug reports; in the last month, AI driven development has actually gone from slop to reliably better than the average human.

        That’s not saying it’s writing better code, just that managing the development process and catching regular bugs is now better than when run by a junior analyst.

        Makes sense that a properly balanced model with randomization turned down should be able to recognize when something is being done outside the acceptable parameters.
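
        (“Randomization turned down” is the sampling temperature. A minimal sketch of the knob with the OpenAI Python client; the model name is just an example:)

            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # example model, use whatever you have
                temperature=0,        # (near-)greedy decoding: strongly favor the most likely token
                messages=[{"role": "user", "content": "Review this diff for bugs: ..."}],
            )
            print(resp.choices[0].message.content)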

        • tomalley8342@lemmy.world (+6/-2) · 23 hours ago

          Makes sense that a properly balanced model with randomization turned down should be able to recognize when something is being done outside the acceptable parameters.

          I don’t know how you gathered such a sense when that not being true has been the main laughing point for AI since its inception. Meta AI security and safety researcher Summer Yue’s “Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox” was just last month btw.

        • The_Decryptor@aussie.zone (+22/-6) · 2 days ago

          It’s not just bug reports; in the last month, AI driven development has actually gone from slop to reliably better than the average human.

          Funny, I heard that same claim about 6 months ago.

          And I’m sure I’ll hear it again in another 6 months or so.

          • justgohomealready@sh.itjust.works (+14/-12) · 1 day ago

            I’m a xennial developer. I’ve been coding for 30 years. AI now codes better (and a thousand times faster) than most mid-level developers. The company I work for has not hired a single junior dev for months now. The new paradigm is a senior dev controlling a team of AI agents. It feels like it doesn’t even make sense to think of training juniors, because at this rate even seniors will be obsolete in a year or two.

            AI in the software dev world is not hype.

            • Den Vennlige Fyren@europe.pub (+13/-5) · 1 day ago

              Every single comment made by this person in the past three months is pro-AI. Every. Single. One.

              Do you work for Anthropic? Perhaps, you are an LLM?

              AI now codes better (and a thousand times faster) than most mid-level developers.

              You, if you are indeed a real person, might be overestimating your proficiency; it happens.

              • AA5B@lemmy.world (+2/-2) · 22 hours ago

                Huh, and here I am thinking I’m dumb because it’s such a struggle getting the AI to produce usable code.

                I mean, it clearly helps in some well-defined areas, but actual code? Like for a feature? Of a product you expect people to pay for? And that you have to maintain?

            • RuBisCO@slrpnk.net (+5/-1) · 1 day ago

              I have a few questions.

              Who ultimately owns/controls this particular AI? A single company? Is this a local agent they’re running themselves or are they renting?

              Who’s supposed to replace the senior running all the AI?

              Besides the senior, who can discern error from function?

              Are they fabricating their own chips?

              • Peruvian_Skies@sh.itjust.works (+5/-1) · 1 day ago

                And how will we continue to have senior devs to coordinate teams of AI agents if there’s no more room for junior devs? Regardless of how good a tool is, it needs to be wielded by someone who knows what they’re doing.

    • deadbeef79000@lemmy.nz (+15) · 2 days ago

      It’s an affectation of The Register; they like reporting real news in a sometimes quirky voice. It’s also British, so some of the language and humour doesn’t quite work as well in other parts of the world.

    • frongt@lemmy.zip (+11) · 2 days ago

      That’s The Register’s style. They’re a little weird with their copy, but their reporting has been solid, in my experience.

  • Riskable@programming.dev (+38/-18) · 2 days ago

    Either a lot more tools got a lot better,

    That’s what it was. Even the free, open source models are vastly superior to the best of the best from just a year ago.

    People got into their heads that AI is shit when it was shit and decided at that moment that it was going to be stuck in that state forever. They forget that AI is just software and software usually gets better over time. Especially open source software which is what all the big AI vendors are building their tools on top of.

    We’re still in the infancy of generative AI.

    • frongt@lemmy.zip (+28/-4) · 2 days ago

      I tried one for the first time yesterday. It was mediocre at best. Certainly not production code. It would take just as much effort to refine it as it would to just write it in the first place.

    • XLE@piefed.social (+14/-5) · 2 days ago

      If you read AI critics, you will see people presenting solid financial evidence of the failure of AI companies to do what they promised. Remember Sam Altman promised AGI in 2025? I certainly do, and now so do you.

      Do you have any concrete evidence that this financial flop will turn around before it runs out of money?

      • Riskable@programming.dev (+11/-3) · 1 day ago

        Assume all the big AI firms die: Anthropic, OpenAI, Microsoft, Google, and Meta. Poof! They’re gone!

        Here would be my reaction: “So anyway… have you tried GLM-7? It’s amazing! Also, there’s a new workflow in ComfyUI I’ve been using that works great to generate…”

        Generative AI is here to stay. You don’t need a trillion dollars worth of data centers for progress to continue. That’s just billionaires living in an AGI fantasy land.

        • prole@lemmy.blahaj.zone (+3/-1) · 5 hours ago

          You don’t need a trillion dollars worth of data centers for progress to continue

          Bullshit

          • Riskable@programming.dev (+3) · 5 hours ago

            I just added up how much it would cost (in theory, assuming everything is in stock and ready to ship) to build out a data center capable of training something like qwen3.5:122b from scratch in a few months: $66M. That’s how much it would cost for 128 Nvidia B200 nodes (they have 8 GPUs each), InfiniBand networking, all-flash storage (SSDs), and 20 racks (the hardware).

            If OpenAI went bankrupt, that would result in a glut of such hardware which would flood the market, so the cost would probably drop by 40-60%.
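
            To make the arithmetic explicit (a back-of-the-envelope sketch; the only inputs are the $66M figure, the node count, and my 40-60% guess above):

                nodes = 128
                gpus = nodes * 8              # 1,024 B200 GPUs in total
                total_usd = 66_000_000        # hardware-only, at today's inflated prices
                per_node = total_usd / nodes  # 515,625.0 -> roughly $516K per 8-GPU node
                glut_low = total_usd * 0.40   # ~$26.4M if prices fell 60%
                glut_high = total_usd * 0.60  # ~$39.6M if prices fell 40%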

            Right now, hardware like that is all being bought up and monopolized by Big AI. This has resulted in prices going up for all these things. In a normal market, it would not cost this much! Furthermore, the reason why Big AI is spending sooooo much fucking money on data centers is because they’re imagining demand. It’s not for training. Not anymore. They’re assuming they’re going to reach AGI any day now and when they do, they’ll need all that hardware to be the world’s “virtual employee” provider.

            BTW: Anthropic has a different problem than the others with AGI dreams… Claude (for coding) is in such high demand that their biggest cost is inference. They can’t build out hardware fast enough to meet the demand (inference, specifically). For every dollar they make, they’re spending a dollar to build out infrastructure. Presumably—some day—they’ll actually be able to meet demand with what they’ve got and on that day they’ll basically be printing money. Assuming they can outrun their debts, of course.

        • XLE@piefed.social (+8/-7) · 1 day ago

          I’m sick and tired of AI fans making statements like

          Generative AI is here to stay

          without evidence.

          Citation needed.

          • Riskable@programming.dev (+7) · 23 hours ago

            Um… Where would it go? I’ve got about 30 models on my machine right now and I download new ones to try out all the time.

            Are you suggesting that they’d all just magically disappear one day‽

              • Riskable@programming.dev (+5) · 19 hours ago

                Same places as usual: Academia and open source foundations.

                That’s where 99% of all advancements in AI come from. You don’t actually think Big AI is paying as many people to do computer science and mathematics research as all the universities in the world (with computer science programs)?

                It’s the same shit as always: Big companies commercialize advancements and discoveries made by scientists and researchers from academia (mostly) and give almost nothing back.

                Big AI has partnerships with tons of schools and if it weren’t for that, they wouldn’t be advancing the technology as fast as they are. In fact, the only reason why many of these discoveries are made public at all is because of the agreements with the schools that require the discoveries/papers be published (so their school, professors, researchers, and students can get credit).

                Like I was saying before: You don’t need a trillion dollars in data centers to do this stuff. Almost all the GPUs and special chips being used (and preordered, sigh) by Big AI are being used to serve their customers (at great expense). Not for training.

                Training used to be expensive, but so many advancements have been made that this is no longer the case. Instead, most of the resources being used in “AI data centers” (and research) are about making inference more efficient. That’s the step that comes after you give an AI a prompt.

                Training a super modern AI model can be done with a university’s data center or a few hundred thousand to a few million dollars of rented GPUs/compute. It doesn’t even take that long!

                Generative AI improves at a ridiculously fast rate. In nearly all the ways you could think of: Training, inference (e.g. figuring out user intent), knowledge, understanding, and weirder, fluffier stuff like “creativity” (the benchmarks of which are dubious, BTW).

            • XLE@piefed.social (+4/-2) · 1 day ago

              Oh wow, comparing a thing to a completely different thing without demonstrating the comparison is valid.

              Exactly the non-evidence I expected.

      • azuth@sh.itjust.works (+12) · 1 day ago

        Whether AI can reliably detect issues and generate working code is a whole different thing from CEOs’ delusions and hyperbole to game the market. Their financial success is also irrelevant; in fact, it’s better if the sub/token model fails and we are left with locally run models.

    • AliasAKA@lemmy.world (+4/-4) · 2 days ago

      Traditional software was developed by humans as an artifact that got better to the degree that humans improved it for some task, but improvement was never guaranteed. Windows 11 is proof of that, and there is a laundry list of regressions and bugs introduced into software developed by humans. I acknowledge you say usually, and especially for open source; I lukewarm-agree with that statement but disagree that large LLMs or other generative models will follow this trend, and merely want to point out that software usually introduces bugs as it’s developed, which are hopefully fixed by people who can reason over the code.

      Which brings us to AI models, and really they should just be called transformer models; they are statistical tensor product machines. They are not software in a traditional sense. They are trained to match their training input in a statistical sense. If the input data is corrupted, the model will actually get worse over time, not better. If the data is biased, it will get worse over time, not better. With the amount of slop generated on the web, it is extraordinarily hard to denoise and decide what’s good data and what’s bad data that shouldn’t be used for training. Which means the scaling we’ve seen with increased data will not necessarily hold. And there’s not a clear indication that scaling the model size, which is largely already impractical, is having some synergistic or emergent effect as hoped and hyped.

      Also, we’re really not in the infancy of AI. Maybe the infancy of widespread hype for it, but the idea of using tensor products for statistical learning algorithms goes back at least as far as Smolensky, maybe before, and that was what, 1990?

      We are in the infancy of, I’d say, quantum-style compute, so we really don’t have much to draw on beyond theoretical models.

      Generative LLM models have largely plateaued in my opinion.

      • Peruvian_Skies@sh.itjust.works (+3) · 1 day ago

        We’re in the infancy of AI in the sense that widespread use, testing and properly-funded development of these technologies only began a few years ago when massively parallelized processing became affordable enough, even though the concepts are older. You could say we’re in the infancy of practical AI, not theoretical.