• RedstoneValley@sh.itjust.works · ↑49 ↓1 · 9 hours ago

    The scenario begins with AI agents undergoing a “jump in capability”.

    Might as well stop reading there. Another fluff piece about how useful and capable AI supposedly is, disguised as a doomsday scenario. I’m so sick of reading this bullshit. “Agentic AI” based on LLMs does not work reliably yet and very likely never will.

    If you complain about bugs in traditional (deterministic) software, you ain’t seen nothing yet. A probabilistic system such as an LLM might or might not book the correct flight for you. It might give you the information you have asked for or it might delete your inbox instead.

    As a consequence of a system being probabilistic, anything you do with it works or fails based on probabilities. This really is the dumbest timeline.
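    A toy sketch of the difference (hypothetical "booking" functions, nothing to do with any real agent API): deterministic code gives the same answer every run, while anything built on sampling only gives you the right answer with some probability.

```python
import random

def deterministic_book(flights):
    # Classic software: same input, same output, every single time.
    return min(flights, key=lambda f: f["price"])

def probabilistic_book(flights, temperature=1.0):
    # LLM-style: probability mass spread over the options, then sample.
    weights = [1.0 / (f["price"] ** (1.0 / temperature)) for f in flights]
    return random.choices(flights, weights=weights, k=1)[0]

flights = [{"id": "A", "price": 120}, {"id": "B", "price": 450}]

# The deterministic path is reproducible:
assert deterministic_book(flights)["id"] == "A"

# The sampled path usually picks the cheap flight, but sometimes it
# doesn't -- the failure mode is probabilistic, not a "bug" you can
# patch in the usual sense.
picks = [probabilistic_book(flights)["id"] for _ in range(1000)]
print(picks.count("B"))  # usually a few hundred, almost never zero
```

    Even with the toy weighting above, the wrong flight gets picked roughly a fifth of the time, and no amount of testing makes that go to zero.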

    • magikmw@piefed.social · ↑10 ↓1 · 5 hours ago

      Not to mention agents not being immune to confabulation, or what we’d call it if a human did it: “making shit up”.

  • baseball2020@sopuli.xyz · ↑24 ↓1 · 10 hours ago

    My favourite take so far is the comparison to the introduction of the microwave. Some people really believed that they’d never have to cook again. So what we got was actually a way to make crap quality meals or reheat things when we don’t have time. This is roughly analogous to the output I get from the LLM.

  • andallthat@lemmy.world · ↑15 ↓1 · 10 hours ago (edited)

    It’s almost funny how all those AI doomsday scenarios are actually meant to prop up investment in AI.

    See how Amodei and Altman are usually the ones pushing these narratives on how worried they are by the incredible advancements of their respective companies’ creatures. They are so, so worried about the demise of the human race and how fast it’s coming.

    And I sort of understand them because whatever disruption they are peddling needs to happen very fast or they will all run out of money. But what does it tell about the rest of the human race that we are actually buying into it and pouring money into creating a dystopian future?

    • lost_faith@lemmy.ca · ↑2 · 3 hours ago

      Just re-watched Tron last night, and a scene really struck me: Dumont telling Lora that since the computers are able to think, the humans will stop. That scene had more impact this time through.

      • andallthat@lemmy.world · ↑4 · 10 hours ago

        It’s like watching a real-life version of Avengers, but one where Tony Stark says “hey, this Thanos guy is disrupting industries here!” and teams up with… Thiel and Musk to fund his quest for the Infinity Stones. You know, we can’t let China get them first!

  • TropicalDingdong@lemmy.world · ↑104 ↓3 · 14 hours ago

    I just…

    Am I wrong here? Like, look, shame me. I work in machine learning and have since 2012. I don’t do any of the llm shit. I do things like predicting wildfire risk from satellite imagery, or biomass in the Amazon, soil carbon, shit like that.

    I’ve tried all the code assistants. They’re fucking crap. There’s no building an economy around these things. You’ll just get dogshit. There’s no building institutions around these things.

    • WanderingThoughts@europe.pub · ↑28 · 12 hours ago

      Heh, that’s the joke going around now.

      AI works, it replaces workers, we lose our jobs.

      AI doesn’t work, bubble pops, we lose our jobs.

    • v_krishna@lemmy.ml · ↑16 ↓6 · 11 hours ago

      You’ve worked in ML since 2012 but don’t think transformers have had an absolutely insane impact, for example in NLP and machine translation? (I have worked in those fields longer than that, and while I don’t think AGI or anything like that is coming from transformers and deep neural nets, I think you are full of it if you don’t admit they have revolutionized a large number of [highly technical] fields.)

      • TropicalDingdong@lemmy.world · ↑1 · 8 minutes ago

        Tldr at the bottom

        I’m literally submitting a transformers paper for publication this week. They’re truly incredible. They’re a huge step forward from where we were. But so was YOLO, and U-Net, and LSTMs (kinda, they were a bit meh).

        But there is a secondary claim about LLMs, chatbot/agentic LLMs specifically, that they’re doing things they simply aren’t. And I do pay for higher-tier access, so I at least think I’m using some of the state of the art of these things.

        I think you are full of it if you dont admit they have revolutionized a large number of highly technical fields

        I’m specifically saying they haven’t, at least, that if you are using Claude or chatgpt to do those things, you aren’t doing what you think you are doing. Domain experts who use these tools recognize their limitations, and limitation is a soft way of putting it. They just get shit fundamentally wrong. And sometimes, when you are working on a complex problem, if you don’t have the knowledge or experience to know when something is wrong, you’ll believe these machines are doing far more than they are.

        Look I use them regularly. I can support up to 128gb models locally. I understand the claim that these things have utility. But after several years of working with them, I genuinely don’t think they actually are capable of supporting the claims businesses are making about them.

        For one, while they can help you solve some problems faster, often, they just make the situation far, far worse, and you spend an inordinate amount of time trying to get the thing to do something a specific way, but it just won’t. I think this is related to the half glass of wine issue, which I’ll come back to.

        Second, they, as far as I’ve been able to use them, are utter dogshit at returning to a codebase. If you are trying to get them to have some kind of long-term comprehension of what’s happening in a project, good fucking luck. You end up with a codebase of constant refactors and stupid useless “sanity” checks that creates the appearance of good practices, but is all smoke and mirrors. They seem to work ok for single-shot demos, but you could never run a business or build a program that’s worth keeping around where the LLM is central to managing the process.

        And there is more to say on this, because when you are building up a codebase, the most fundamental thing you are really building up is a vision of how it all fits together. When you outsource this to LLMs, you don’t get the vision, and frankly they don’t either. What you end up with is maybe functional at first, but inevitably unstable, and unsustainable.

        Third, and maybe this is me, but I’ve never actually seen an LLM come up with a clever solution to anything. Like not once have I seen it come up with a truly elegant, efficient solution. It’s almost always the most banal solution, and more often than not, it’s not even a solution but a workaround that avoids the problem entirely while creating the impression of a solution.

        And to be clear: I’m not talking about mundane hello-world statements. I’m talking about things that undergrads and graduate students miss all the time. I’m talking about gotchas and problems where you need something like decades-plus experience to know that the fundamental assumptions are flawed. There is something more inherent to the issues they create.

        I think the half-glass-of-wine issue has been papered over and remains the core limitation with LLMs, and represents a fundamental issue with either transformers, or maybe gradient descent, and I don’t think this current architecture is going to get us past it. You are probably familiar with the issue; it got traction a while back, but they hotfixed the phenomenon and it lost media attention. However, if you know what you are looking for, you’ll find non-image-based examples of this all the time when using LLMs. They’ll constantly insist they’ve done something they haven’t. And there will be no obvious way to get them to recognize they haven’t done, or aren’t doing, the thing. I don’t believe any of the philosophy explanations given in the YouTube coverage of the issue. I think the problem is likely more core, more central to machine learning than credit is being given.

        The concern I have is that this is something more fundamental, and we’re only noticing it because image generation and natural language are things humans can comprehend and spot the issue in. But what about when it becomes something incomprehensible to humans, like a sequence of weather data or output from a sensor? We would have no ability to notice if an ML model is doing the same thing an LLM is doing, effectively lying about its output.

        Long rant, almost over.

        Tldr

        I don’t contend the massive advances transformers represent as an architecture. But there is clearly something rotten or missing at their core which makes them practically self-destructive to rely upon beyond superficial, well-solved issues. I think the rot is in the math or the approach to training, and I don’t think there is any amount of engineering that can unbake the sawdust out of the cake.

      • Passerby6497@lemmy.world · ↑7 · 3 hours ago

        I think you read way more into their comment than was written. They said nothing about transformers, only that these assistants are shit. Which, let’s admit, they are.

        The underlying technology is cool, current implementations are trash and have no long term economical path to viability unless things radically change quickly.

    • Zwuzelmaus@feddit.org · ↑10 ↓1 · 12 hours ago

      They’re fucking crap. There’s no building an economy around these things.

      You are right in every serious part of the world.

      But add “venture capital” to the equation and it works out stronger than anything else so far.

    • partofthevoice@lemmy.zip · ↑4 · 13 hours ago

      I think it’s supposed to work like, “well, even if you are right about the massive utility of AI, is that still what we should be aiming for?”

      It gets around the combative “you’re wrong, AI is garbage” argument. The people boosting AI because they believe that even if it does suck, it’ll get better… those people can probably understand this argument much more easily.

      • ageedizzle@piefed.ca · ↑3 · 5 hours ago

        It sucks, and it’s at the point now where we’re hitting diminishing returns, so I’m not sure if it will get better.

  • inclementimmigrant@lemmy.world (OP) · ↑109 ↓1 · 15 hours ago (edited)

    Really reinforces my belief that the stock market is driven by idiots.

    Reminds me of this old Kal cartoon:

    Granted, AI will probably doom us all, but not how the Substack post says it will.

  • Gsus4@mander.xyz · ↑19 · 15 hours ago

    Lol, they know it’s all castles in the clouds, and any spark, e.g. a Substack post, could trigger the loaded spring.

    • WanderingThoughts@europe.pub · ↑8 · 12 hours ago

      They kind of know. The dot-com crash killed many companies, and also gave rise to Amazon. They’re all just hoping they’re the one that invested in the next Amazon.

        • WanderingThoughts@europe.pub · ↑10 · 11 hours ago

          Amazon didn’t make any profit for a decade and made 360 billion last year. They tell investors that AI will be the same.

          • Passerby6497@lemmy.world · ↑1 · 3 hours ago

            How much of that profitless decade was just them reinvesting in their company, as opposed to burning money like you’re trapped on Everest and need every bit of heat you can get?

            • WanderingThoughts@europe.pub · ↑1 · 3 hours ago

              That’s the part they didn’t tell investors. Some call that the enshittification of the investment market. Lies everywhere.

          • HakFoo@lemmy.sdf.org · ↑7 · 10 hours ago

            The difference was that Amazon knew how to make a profit, but was reinvesting into infrastructure plays and bigger fish.

            If they had to, they could have been a modestly profitable bookshop in 2002. AWS and monster logistics might not have developed to put them in the 13-digit club though.

            Does any AI-centric play have that fundamental fallback? The services that seem most effective at direct monetization, the coding tools, are typically running at huge losses. If they raised prices to cover costs, precious few firms would pay basically the salary of a senior dev for an emulation of an enthusiastic junior dev with an affinity for footguns.

            The less enterprise-focused products (parasocial toys, image and video gen) will likely try to dip into consumer subs and advertising, but can that generate the cash volumes these platforms demand?

            • WanderingThoughts@europe.pub · ↑4 · 6 hours ago

              If people always demanded answers to those questions, we wouldn’t have speculative bubbles. For now, everybody seems to still believe the “it’s the worst it’ll ever be right now” and “just more scaling, bro” answers.