• CosmoNova@lemmy.world

    Welp, it was ‘fun’ while it lasted. Time for everyone to adjust their expectations to much more humble levels than what was promised and move on to the next scheme. After Metaverse, NFTs and ‘Don’t become a programmer, AI will steal your job literally next week!11’, I’m eager to see what they come up with next. And by eager I mean I’m tired. I’m really tired, and I hope the economy just takes a damn break from breaking things.

    • utopiah@lemmy.world

      move on to the next […] eager to see what they come up with next.

      That’s a point I’m making in a lot of conversations lately: IMHO the bubble hasn’t popped yet BECAUSE capital doesn’t know where to go next. Despite reports from big banks that there is a LOT of investment for not a lot of actual returns, people are still waiting to see where to put that money next. Until there is such a place, they believe it’s still more beneficial to keep the bet going.

    • Fetus@lemmy.world

      I just hope I can buy a graphics card without having to sell organs some time in the next two years.

      • macrocephalic@lemmy.world

        Don’t count on it. It turns out that the sort of stuff graphics cards do is good for lots of things: first it was crypto, then AI, and I’m sure whatever the next fad is will require a GPU to run huge calculations.

        • utopiah@lemmy.world

          I’m sure whatever the next fad is will require a GPU to run huge calculations.

          I also bet it will, cf. my earlier comment on render farms and looking for what “recycles” old GPUs (https://lemmy.world/comment/12221218), namely that it makes sense to prepare for it now and look for what comes next BASED on the current most popular architecture. It might not be the most efficient, but it will probably be the most economical.

        • Grandwolf319@sh.itjust.works

          AI is shit, but imo we have been making amazing progress in computing power; it’s just that we can’t really innovate atm, just more race to the bottom.

          ——

          I thought capitalism bred innovation, did the tech bros lie?

          /s

      • sheogorath@lemmy.world

        If there are even GPUs being sold. It’s much more profitable for Nvidia to just make compute-focused chips than to upgrade their gaming lineup. GeForce will just get the compute-chip rejects, and laptop GPUs for the lower-end parts. After the AI bubble bursts, maybe they’ll get back to their gaming roots.

      • catloaf@lemm.ee

        My RX 580 has been working just fine since I bought it used. I’ve not been able to justify buying a new (used) one. If you have one that works, why not just stick with it until the market gets flooded with used ones?

      • Zorsith@lemmy.blahaj.zone

        I’d love an upgrade for my 2080 Ti; I really wish Nvidia hadn’t pissed off EVGA into leaving the GPU business…

    • Honytawk@lemmy.zip

      AI doesn’t need to steal all programmer jobs next week, but I very much doubt there will still be many available in 2044, when even LLMs alone still have so many things they can improve on over the next 20 years.

    • DogWater@lemmy.world

      I’m not sure. These companies are building data centers with so many GPUs that they have to be geographically distributed with respect to the power grid, because if it were all done in one place it would take the grid down.

      And they are just building more.

      • viking@infosec.pub

        But the company doesn’t have that money. Stock value reflects investor valuation, not company funds.

        When a company goes public for the very first time, it gets money into its account, but from then on it’s just investors speculating and hoping for a nice return when they sell again.

        Of course there should be some correlation between the company’s profitability and the stock price, so ideally they do have quite a bit of money, but in an investment craze like this the correlation is far from 1:1. So whether they can still afford to build the data centers remains to be seen.

        • DogWater@lemmy.world

          Yeah, someone else commented with their financials and they look really good, so while I certainly agree that they are overvalued because we are in an AI training bubble, I don’t see it popping for a few years, especially given that they are selling the shovels. Every big player in the space is set on orders of magnitude of additional compute for the next 2 years or more. It doesn’t matter if a company they sold GPUs to fails if they’ve already sold them. Something big and unexpected would have to happen to upset that trajectory right now, and I don’t see it, because companies are in the exploratory stage of AI tech, so no one knows what doesn’t work until they get the compute they need. I could be wrong, but that’s what I see as a watcher of AI news channels on YouTube.

          The co-founder of OpenAI just got a billion dollars for his new 3-month-old AI startup. They are going to spend that money on talent and compute. X just announced a data center with 100,000 GPUs for Grok 2, and plans to build the largest in the world, I think? But that’s Elon, so grains of salt and all that are required there. Nvidia is working with robotics companies to make AI that can train robots virtually to do a task so that, in the real world, the robot succeeds on the first try. No more Boston Dynamics abuse compilation videos. Right now agentic AI workflows are supposed to be the next step, so there will be overseer AI algorithms to develop and train.

          All that is to say there is a ton of work that requires compute for the next few years.

          {Opinion here}: I feel like a lot of people are seeing grifters and a wobbly GPT-4o launch and calling the game too soon. It takes time to deliver the next product when it’s a new invention in its infancy and the training scale is jumping by orders of magnitude from gen to gen.

          I’m sure the structuring of payment for the compute devices isn’t as simple as my purchase of a gaming GPU from Micro Center, but Nvidia is still financially sound. I could see a lot of companies suffering from this long term, but Nvidia will be THE player in AI compute, whatever that looks like, so they are going to bounce back and be fine.

          • Bitswap@lemmy.world

            Couldn’t agree more. There is quite a bit of AI vaporware, but NVIDIA is the real deal and will weather whatever storm comes with ease.

        • Nomecks@lemmy.ca

          They’re not building them for themselves, they’re selling GPU time and SuperPods. Their valuation is because there’s STILL a lineup a mile long for their flagship GPUs. I get that people think AI is a fad, and its public form may be, but there are thousands of GPU-powered projects going on behind closed doors that are going to consume whatever GPUs get made for a long time.

          • utopiah@lemmy.world

            Their valuation is because there’s STILL a lineup a mile long for their flagship GPUs.

            Genuinely curious: how do you know where the valuation, any valuation, comes from?

            This is an interesting story, and it might be factually true, but as far as I know, unless someone has actually asked the biggest investors WHY they bet on a stock, nobody knows why a valuation is what it is. We might have guesses, and they might even be correct, but they also change.

            I mentioned it a few times here before, but my bet is yes, it’s what you mentioned, BUT it’s also because the same investors do not know where else to put their money yet and thus simply can’t jump ship. They are stuck there, and it might again be because they initially thought the demand was high and nobody else could fulfill it, but I believe that’s not correct anymore.

            • Bitswap@lemmy.world

              but I believe that’s not correct anymore.

              Why do you believe that? As far as I understand, other HW exists…but no SW to run on it…

              • utopiah@lemmy.world

                Right, and I mentioned CUDA earlier as one of the reasons for their success, so it’s definitely something important. Clients might be interested in e.g. Google TPUs, startups like Etched, Tenstorrent, Groq or Cerebras Systems, or heck, even designing their own chips, but they are probably limited by their current stack relying on CUDA. I imagine though that if the backlog does keep existing, there will be abstraction libraries, at least for the most popular frameworks, e.g. TensorFlow, JAX or PyTorch, simply because the cost of waiting is too high.

                Anyway, what I meant isn’t about hardware or software but rather ROI, namely when Goldman Sachs and others issue analyst reports saying that the promise itself isn’t up to par with actual usage by paying customers.
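
                To make the abstraction point concrete, here is a minimal, hypothetical sketch (my own illustration, not something from this thread) of what framework-level device abstraction already looks like in PyTorch: the model code only ever sees a device handle, never a vendor, which is exactly the seam a non-CUDA backend or an abstraction library can plug into.

                ```python
                # Minimal sketch; assumes a PyTorch build for whichever backend is present.
                import torch

                def pick_device() -> torch.device:
                    # An Nvidia/CUDA build reports its GPUs here; AMD's ROCm build of
                    # PyTorch exposes its devices through the same "cuda" API.
                    if torch.cuda.is_available():
                        return torch.device("cuda")
                    if torch.backends.mps.is_available():  # Apple silicon
                        return torch.device("mps")
                    return torch.device("cpu")

                device = pick_device()
                model = torch.nn.Linear(1024, 1024).to(device)
                x = torch.randn(8, 1024, device=device)
                print(model(x).shape, "on", device)
                ```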

                • Bitswap@lemmy.world

                  Those reports might affect investment from the smaller players, but the big names (Google, Microsoft, Meta, etc.) are locked in a race to the finish line. So their investments will continue until one of them reaches the goal…[insert sunk cost fallacy here]…and I think we’re at least 1-2 years from there.

                  Edit: posted too soon

            • Nomecks@lemmy.ca

              Well, I’m no stockologist, but I believe when your company has a perpetual sales backlog with a 15-year head start on your competition, that should lead to a pretty high valuation.

              • utopiah@lemmy.world

                I’m also no stockologist and I agree, but that’s not my point. The stock should be high, but that might already have been factored in; namely, this is not a new situation, so theoretically it’s been priced in since investors understood it. My point anyway isn’t about the price itself but rather the narrative (or reason, like the example you mention about the backlog and lack of competition) that investors themselves believe.

    • WoodScientist@lemmy.world

      I think they’re going to be bankrupt within 5 years. They have way too much invested in this bubble.

      • Knock_Knock_Lemmy_In@lemmy.world

        Fall in share price, yes.

        Bankrupt, no. Their debt-to-equity ratio is 0.1455. They could pay off their $11.23 B debt with two months of revenue. They can certainly afford the interest payments.
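
        As a rough sanity check on those numbers (a minimal sketch; the ~$30B quarterly revenue figure is my assumption, roughly what Nvidia was reporting around that time, and not something stated in this thread):

        ```python
        # Back-of-the-envelope check of the claim above.
        debt = 11.23e9            # total debt, from the comment
        quarterly_revenue = 30e9  # assumed figure, roughly Nvidia's reported quarterly revenue then
        monthly_revenue = quarterly_revenue / 3

        months_to_cover = debt / monthly_revenue
        print(f"Debt covered by about {months_to_cover:.1f} months of revenue")  # ~1.1 months, under two
        ```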

      • sugar_in_your_tea@sh.itjust.works

        I highly doubt that. If the AI bubble pops, they’ll probably be worth a lot less relative to other tech companies, but hardly bankrupt. They still have a very strong GPU business, they probably have an agreement with Nintendo for the next Switch (like they did with the OG Switch), and they could probably repurpose the AI tech in a lot of different ways, not to mention various other projects where they package GPUs into SoCs.

          • sugar_in_your_tea@sh.itjust.works

            Sure, but their deliveries have also been incredibly large. I’d be surprised if they haven’t already made enough from previous sales to cover all existing and near-term investments into AI. The scale of the build-out by big cloud firms like Amazon, Google, and Microsoft has been absolutely incredible, and Nvidia’s only constraint has been making enough of them to sell. So even if that demand completely evaporates, I think they’ll be completely fine.

      • leftytighty@slrpnk.net

        NVIDIA’s uses of AI technology aren’t going to pop; things like DLSS are here to stay. The value of the company and their sales are inflated by the bubble, but the core technology of NVIDIA is applicable way beyond the chatbot hype.

        Bubbles don’t mean there’s no underlying value. The dot-com bubble didn’t take down the internet.

    • hamsterkill@lemmy.sdf.org

      Nvidia is diversified in AI, though. Disregarding LLMs, it’s likely that other AI methodologies will depend even more on their tech or something similar.

    • Blackmist@feddit.uk

      Maybe we can have normal priced graphics cards again.

      I’m tired of people pretending £600 is a reasonable price to pay for a mid range GPU.

  • potentiallynotfelix@lemdro.id

    I don’t think AI is ever going to completely disappear, but I think we’ve hit the ceiling of its usefulness for now.

  • umbraroze@lemmy.world

    Have any regular users actually looked at the prices of the “AI services” and what they actually cost?

    I’m a writer. I’ve looked at a few of the AI services aimed at writers. These companies literally think they can get away with “Just Another Streaming Service” pricing, in an era where people are getting really, really sceptical about subscribing to yet another streaming service and are cancelling the ones they don’t care about that much. As a broke-ass writer, I was glad that, with the NaNoWriMo discount, I could buy Scrivener for €20 instead of the regular price of €40. [note: the regular price of Scrivener is apparently €70 now, and this is pretty aggravating.] So why is NaNoWriMo pushing ProWritingAid, a service that runs €10-€12 per month? This is definitely out of reach for broke-ass writers.

    Someone should tell the AI companies that regular people don’t want to subscribe to random subscription services any more.

    • Lenny@lemmy.world

      I work for an AI company that’s dying out. We’re trying to charge companies $30k a year and upwards for basically ChatGPT plus a few shoddily built integrations. You can build the same things we’re doing with Zapier for around $35 a month. The management is baffled as to why we’re not closing any of our deals, and it’s SO obvious to me: we’re too fucking expensive and there’s nothing unique about our service.

    • ameancow@lemmy.world

      As someone dabbling with writing, I bit the bullet and started looking into the tools to see if they’re actually useful, and I was impressed with the promised features like grammar help, sentence structure and making sure I don’t leave loose ends in the story. These are genuinely useful tools, if you’re not using the generative capability to let it write mediocre bullshit for you.

      But I noticed right away that I couldn’t justify a subscription of $20 - $30 a month, on top of the thousand other services we have to pay monthly for, including the writing software itself.

      I have lived fine and written great things in the past without AI; I can survive just fine without it now. If these companies want to actually sell a product that people want, they need to scale back the expectations, the costs and the bloated, useless bullshit attached to it all.

      At some point soon, the cost of running these massive LLMs versus the number of people actually willing to pay a premium for them is going to exceed reasonable expectations, and we will see the companies that host the LLMs start to scale everything back as they try to find some new product to hype and generate investment on.

  • billbennett@piefed.social

    I’ve spent time with an AI laptop over the past couple of weeks, and ‘overinflated’ seems a generous description of where end-user AI is today.

  • TropicalDingdong@lemmy.world

    It’s like the least popular opinion I have here on Lemmy, but I assure you, this is the beginning.

    Yes, we’ll see a dotcom-style bust. But it’s not like the world today wasn’t literally invented in that time. Do you remember where image generation was 3 years ago? It was a complete joke compared to a year ago, and today? Fuck, no one here would even know.

    When code generation goes through that same cycle, you’ll be able to put out an idea in plain language and get back code that just “does” it.

    I have no idea what that means for the future of my humanity.

    • rottingleaf@lemmy.world

      you can put out an idea in plain language, and get back code that just “does” it

      No you can’t. Simplifying it grossly:

      They can’t do the most low-level, dumbest detail, splitting hairs, “there’s no spoon”, “this is just correct no matter how much you blabber in the opposite direction, this is just wrong no matter how much you blabber to support it” kind of solutions.

      And that happens to be the main requirement that makes a task worth a software developer’s time.

      We need software developers to write computer programs because “a general idea”, even in a formalized language, is not sufficient; you need to address the details of actual reality. That is the bottleneck.

      That technology widens the passage in places which were not the bottleneck in the first place.

      • TropicalDingdong@lemmy.world

        I think you live in a nonsense world. I literally use it every day, and yes, sometimes it’s shit and it’s bad at anything that requires even a modicum of creativity. But 90% of shit doesn’t require a modicum of creativity. And my point isn’t about where we’re at, it’s about how far the same tech has progressed on another domain-adjacent task in three years.

        Lemmy has a “dismiss AI” fetish and does so at its own peril.

        • Jesus_666@lemmy.world

          And I wouldn’t know where to start using it. My problems are often of the “integrate two badly documented company-internal APIs” variety. LLMs can’t do shit about that; they weren’t trained for it.

          They’re nice for basic rote work but that’s often not what you deal with in a mature codebase.

          • TropicalDingdong@lemmy.world

            Again, dismiss at your own peril.

            Because “integrate two badly documented APIs” is precisely the kind of task that even the current batch of LLMs actually crushes.

            And I’m not worried about being replaced by the current crop. I’m worried about future frameworks on technology like Grayskull running 30, or 300, or 3000 uniquely trained LLMs and other transformers at once.

            • EatATaco@lemm.ee

              I’m with you. I’m a senior software engineer, and Copilot/ChatGPT have all but completely replaced me googling stuff, and replaced 90% of the time I used to spend writing the code for simple tasks I want to automate. I’m regularly shocked at how often Copilot will accurately autocomplete whole methods for me. I’ve even had it generate a whole child class near perfectly, although that is likely primarily due to my being very consistent with naming.

              At the very least it’s an extremely valuable tool that every programmer should get comfortable with. And the tech is just in its baby form. I’m glad I’m learning how to use it now instead of pooh-poohing it.

              • TropicalDingdong@lemmy.world

                Ikr? It really seems like the dismissiveness is coming from people who either aren’t experienced with it or are just politically angry at its existence.

        • rottingleaf@lemmy.world

          Are you a software developer? Or a hardware engineer? EDIT: Or anyone credible in evaluating my nonsense world against yours?

            • hark@lemmy.world

              That explains your optimism. Code generation is at a stage where it slaps together Stack Overflow answers and code ripped off from GitHub for you. While that is quite effective for letting at least a crappy programmer cobble together something that barely works, it is a far cry from having just anyone put out an idea in plain language and getting back code that just does it. A programmer is still needed in the loop.

              I’m sure I don’t have to explain to you that AI development over the decades has often reached plateaus where the approach needed to change significantly for progress to be made, and it could certainly be the case that LLMs (at least as they are developed now) aren’t enough to accomplish what you describe.

              • rottingleaf@lemmy.world

                It’s not about stages. It’s about the Achilles and tortoise problem.

                There’s extrapolation inside the same level of abstraction as the given data, and there’s extrapolation to new levels of abstraction.

                But frankly far smarter people than me are working on all that. Maybe they’ll deliver.

            • rottingleaf@lemmy.world

              So close, but not there.

              OK, you’ll know that I’m right when you somewhat expand your expertise to neighboring areas. Should happen naturally.

          • TropicalDingdong@lemmy.world

            Dismiss at your own peril is my mantra on this. I work primarily in machine vision, and the things that people were writing off as impossible or “unique to humans” in the 90s and 2000s ended up falling rapidly, and that generation of opinion pieces is now safely stored in the round bin.

            The same was true of agents for games like Go, chess and Dota. And now the same has been demonstrated to be coming true for language.

            And maybe that paper built in the right caveats about “human intelligence”. But that isn’t to say human intelligence can’t be surpassed by something distinctly inhuman.

            The real issue is that previously there wasn’t a use case with enough viability to warrant the explosion of interest we’ve seen, like with transformers.

            But transformers are, like, legit wild. They’re bigger than U-Nets. They’re way bigger than LSTMs.

            So dismiss at your own peril.

            • barsoap@lemm.ee

              But that isn’t to say human intelligence can’t be surpassed by something distinctly inhuman.

              Tell me you haven’t read the paper without telling me you haven’t read the paper. The paper is about T2 vs. T3 systems, humans are just an example.

              • TropicalDingdong@lemmy.world

                Yeah, I skimmed a bit. I’m on like 4 hours of in-flight sleep after like 24 hours of airports and flying. If you really want me to address the points of the paper, I can, but I can also tell it doesn’t diminish my primary point: dismiss at your own peril.

                • barsoap@lemm.ee

                  dismiss at your own peril.

                  Oooo, I’m scared. Just as much as I was scared of missing out on crypto or the last 10,000 hype trains VCs rode into bankruptcy. I’m both too old and too much of an engineer for that BS, especially when the answer to a technical argument, a fucking information-theoretical one on top of that, is “Dude, but consider FOMO”.

                  That said, I still wish you all the best in your scientific career in applied statistics. Stuff can be interesting and useful aside from AI BS. If OTOH you’re on that career path because of AI BS and not a love for the maths… let’s just say that vacation doesn’t help against burnout. Switch tracks instead; don’t do what you want but what you can.

                  Or do dive into AGI. But then actually read the paper, and understand why current approaches are nowhere near sufficient. We’re not talking about changes in architecture, we’re talking about architectures that change as a function of training and inference, that learn how to learn. Say goodbye to the VC cesspit, get tenure aka a day job, and maybe in 50 years there will be another sigmoid and you’ll have written one of the papers leading up to it, because you actually addressed the fucking core problem.

          • rottingleaf@lemmy.world

            I wrote something vague in another place in this thread which seemed like a good enough argument. But I didn’t expect that someone would link a literal scientific publication in the same very direction. Thank you, sometimes arguing on the Web is not a waste of time.

            EDIT: Have finished reading it. I started out thinking it was the same argument, got confused in the middle, and in the end realized that yes, it’s the same argument, but explained well by a smarter person. A very cool article, and fully understandable for a random Lemming at that.

      • Grandwolf319@sh.itjust.works

        this is just wrong no matter how much you blabber to support it" kind of solutions.

        When you put it like that, I might be a perfect fit in today’s loudest-voice-wins landscape.

        • rottingleaf@lemmy.world

          I regularly think and post conspiracy-theory thoughts about why “AI” is such a hype. And in line with them, a certain kind of people seem to think that reality doesn’t matter, because those who control the present control the past and the future. That is, they think that controlling the discourse can replace controlling the reality. The issue with that is that whether a bomb is set, whether a boat is seaworthy, whether a bridge will fall is not defined by discourse.

      • tetris11@lemmy.ml

        They’re pretty good, and the faults they have are improving steadily. I don’t think we’re hitting a ceiling yet, and I shudder to think where they’ll be in 5 years.

    • Grandwolf319@sh.itjust.works

      I agree with you but not for the reason you think.

      I think the golden age of ML is right around the corner, but it won’t be AGI.

      It will be image recognition and video upscaling: you know, the boring stuff that isn’t game-changing but is possibly useful.

      • zbyte64@awful.systems

        I feel the same about the code generation stuff. What I really want is a tool that suggests better variable names.
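
        That wish is pretty easy to prototype with any LLM API. Here is a hedged, hypothetical sketch using the OpenAI Python client; the model name, prompt wording and helper name are placeholders of mine, not anything the commenter endorsed.

        ```python
        # Hypothetical variable-name suggester; assumes the `openai` package (v1+)
        # and an OPENAI_API_KEY in the environment.
        from openai import OpenAI

        client = OpenAI()

        def suggest_names(snippet: str) -> str:
            # Ask the model for clearer names rather than letting it rewrite the code.
            prompt = (
                "Suggest clearer names for the variables in this code. "
                "Reply as lines of old_name -> new_name with a one-line reason each.\n\n"
                + snippet
            )
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model name
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

        print(suggest_names("def f(a, b):\n    c = a * b * 0.0825\n    return c"))
        ```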

  • RegalPotoo@lemmy.world

    Personally I can’t wait for a few good bankruptcies so I can pick up a couple of high end data centre GPUs for cents on the dollar

    • bruhduh@lemmy.world

      Search Nvidia P40 24GB on eBay: about $200 each and surprisingly good for self-hosted LLMs. If you plan to build an array of GPUs, then search for the P100 16GB instead; it’s the same price, but unlike the P40, the P100 supports NVLink, and its 16GB is HBM2 memory on a 4096-bit bus, so it’s still competitive in the LLM field. The P40’s selling point is the amount of memory for the money, but it’s GDDR5, so it’s rather slow compared to the P100, and it doesn’t support NVLink.
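
      For anyone wondering what “surprisingly good for self-hosted LLM” looks like in practice, here is a minimal sketch (my example, with an illustrative model id) using Hugging Face transformers to spread a model across whatever cards are present; it assumes transformers and accelerate are installed.

      ```python
      # Minimal sketch for a box with one or more 16-24 GB cards like the P40/P100.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example; pick whatever fits your VRAM

      tok = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          torch_dtype=torch.float16,  # these older cards lack bf16; fp16 or a quantized format is typical
          device_map="auto",          # let accelerate split the layers across the available GPUs
      )

      inputs = tok("Why are old datacenter GPUs popular for home LLM rigs?", return_tensors="pt").to(model.device)
      out = model.generate(**inputs, max_new_tokens=80)
      print(tok.decode(out[0], skip_special_tokens=True))
      ```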

      • RegalPotoo@lemmy.world

        Digging into it a bit more, it seems like I might be better off getting a 12GB 3060: similar price point, but much newer silicon.

        • bruhduh@lemmy.world

          It depends. If you want to run LLMs, data center GPUs are better; if you want to run general-purpose tasks, then newer silicon is better. In my case I prefer a build that offloads tasks, since I’m daily-driving Linux. My dream build is an AMD RX 7600 XT 16GB as the main GPU, an Nvidia P40 for LLMs, and a Ryzen 8700G whose 780M iGPU handles transcoding and light tasks. That way you have your usual gaming home PC that also serves as a server in the background while being used.

      • RegalPotoo@lemmy.world

        Thanks for the tips! I’m looking for something multi-purpose for LLM/stable diffusion messing about + transcoder for jellyfin - I’m guessing that there isn’t really a sweet spot for those 3. I don’t really have room or power budget for 2 cards, so I guess a P40 is probably the best bet?

        • Justin@lemmy.jlh.name

          The Intel A310 is the best $/perf transcoding card, but if the P40 supports NVENC, it might work for both transcoding and Stable Diffusion.

        • bruhduh@lemmy.world

          Try the Ryzen 8700G’s integrated GPU for transcoding, since it supports AV1, and these P-series GPUs for LLMs/Stable Diffusion; that would be a good mix, I think. Or if you don’t have the budget for a new build, buy an Intel A380 GPU for transcoding; you can attach it like a mining GPU through a PCIe riser. Linus Tech Tips tested this GPU for transcoding, as I remember.
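
          For a rough idea of what the transcoding side looks like on an Intel card, here is a hedged sketch of a VAAPI hardware transcode wrapped in Python (Jellyfin builds its own ffmpeg command line for you; the device path, bitrate and file names here are placeholders).

          ```python
          # Hypothetical VAAPI transcode; assumes ffmpeg built with VAAPI support and
          # the Intel GPU exposed at /dev/dri/renderD128.
          import subprocess

          cmd = [
              "ffmpeg",
              "-hwaccel", "vaapi",
              "-hwaccel_device", "/dev/dri/renderD128",
              "-hwaccel_output_format", "vaapi",   # keep decoded frames on the GPU
              "-i", "input.mkv",
              "-c:v", "h264_vaapi",                # hardware H.264 encode; recent ffmpeg + Arc also offer av1_vaapi
              "-b:v", "4M",
              "-c:a", "copy",                      # leave the audio untouched
              "output.mkv",
          ]
          subprocess.run(cmd, check=True)
          ```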

          • RegalPotoo@lemmy.world

            8700g

            Hah, I’ve pretty recently picked up an Epyc 7452, so not really looking for a new platform right now.

            The Arc cards are interesting, will keep those in mind

        • utopiah@lemmy.world

          Interesting. I did try a bit of remote rendering in Blender (just to learn how to use it via the CLI), so that makes me wonder who is indeed scraping the bottom of the barrel of “old” hardware and what they are using it for. Maybe somebody is renting old GPUs for render farms, maybe for other tasks; any pointers to such a trend?

      • Scipitie@lemmy.dbzer0.com

        The lowest price on eBay for me is 290 euros :/ The P100s are 200 each though.

        Do you happen to know if I could mix a 3700 with a P100?

        And thanks for the tips!

  • Grofit@lemmy.world

    A lot of the AI boom is like the dotcom boom of the Web era. The bubble burst and a lot of companies lost money, but the technology is still very much important and relevant to us all.

    AI feels a lot like that: it’s here to stay, maybe not in the ways investors are touting, but for voice, image and video synthesis/processing it’s an amazing tool. It also has lots of applications in biotech, targeting systems, logistics, etc.

    So I can see the bubble bursting and a lot of money being lost, but that is the point when actually useful applications of the technology will start becoming mainstream.

    • criticalthreshold@lemmy.world

      Google Search is such an important facet of Alphabet that they must invest as many billions as they can to lead the new generative-AI search. IMO, for Google it’s more than just a growth opportunity, it’s a necessity.

      • FlorianSimon@sh.itjust.works

        “AI” is what put Google into trouble to begin with… Sure, let’s double-down on the shittiness, I don’t see how anything could go wrong.

      • hamsterkill@lemmy.sdf.org

        I guess I don’t really see why generative AI is a necessity for a search engine? It doesn’t really help me find information any faster than a Wikipedia summary, and is less reliable.

        • RinseDrizzle@midwest.social

          So far…

          Obviously there’s still a fair share of dumb stuff happening with these systems today, but there have been some big steps in just the last few years. I wouldn’t be surprised if it was much spookier a decade from now.

          In general, it’s good to use as a tool, to be taken with a grain of salt and further review.

    • ipkpjersi@lemmy.ml

      I’m glad someone else is acknowledging that AI can be an amazing tool. Every time I see AI mentioned on Lemmy, people say that it’s entirely useless and they don’t understand why it exists or why anyone talks about it at all. I mention that I use ChatGPT daily for my programming job, that it’s helpful like having an intern do work for me, etc., and I just get people disagreeing with me all day long lol

    • UnderpantsWeevil@lemmy.world

      The bubble burst and a lot of companies lost money but the technology is still very much important and relevant to us all.

      The DotCom bubble was built around the idea of online retail outpacing traditional retail far faster than it actually did. But it was, at its essence, a system of digital bookkeeping. Book your orders, manage your inventory, and direct your shipping via a more advanced and interconnected set of digital tools.

      The fundamentals of the business - production, shipping, warehousing, distribution, the mathematical process of accounting - didn’t change meaningfully from the days of the Sears-Roebuck Catalog. Online was simply a new means of marketing. It worked well, but not nearly as well as was predicted. What Amazon did to achieve hegemony was to run losses for ten years, while making up the balance with a government-sponsored series of data centers (re: AWS) and capitalizing on discount bulk shipping through the USPS, before accruing enough physical capital to supplant even the big-box retailers. The digital front-end was always a loss-leader. Nobody is actually turning a profit on Amazon Prime. It’s just a hook to get you into the greater Amazon ecosystem.

      Pivot to AI, and you’ve got to ask… what are we actually improving on? It’s not a front-end. It’s not a data service that anyone benefits from. It is hemorrhaging billions of dollars at OpenAI alone (one reason why it was incorporated as a non-profit to begin with - THERE WAS NO PROFIT). Maybe you can leverage this clunky behemoth into… low-cost mass media production? But it’s also extremely low-rent production, in an industry where - once again - marketing and advertising are what command the revenue you can generate on a finished product. Maybe you can use it to optimize some industrial process? But it seems that every AI needs a bunch of human babysitters to clean up all the shit it leaves. Maybe you can get those robo-taxis at long last? I wouldn’t hold my breath, but hey, maybe?!

      Maybe you can argue that AI provides some kind of hook to drive retail traffic into a more traditional economic model. But I’m still waiting to see what that is. After that, I’m looking at AI the same way I’m looking at Crypto or VR: just a gimmick that’s scaring more people off than it drags in.

      • Grofit@lemmy.world

        I don’t mean it’s like the dotcom bubble in terms of the specifics, I mean in terms of feel. Dotcom had loads of investors scrambling to “get in on it”, many not really understanding why or what it was worth, just wanting quick wins.

        This has the same feel, a bit like crypto as you say, but I would say crypto is very niche in real-world applications at the moment, whereas AI does have real-world usages.

        They are not the ones we are being fed in the mainstream, like it replacing coders or artists; it can help in those areas, but that’s just them trying to keep the hype going. Realistically it can be used very well for some medical research and diagnosis scenarios, as it can correlate patterns very easily, showing the likelihood of genetic issues.

        The game and media industries are very much trialling voice and image synthesis for improving environmental design (texture synthesis) and providing dynamic voice synthesis based off actors’ likenesses. We have had people’s likenesses in movies for decades via CGI, but it’s only really now that we can do the same for voices. And this isn’t getting into logistics and/or finance, where it is also seeing a lot of application.

        It’s not going to do much for the end consumer outside of the guff you currently use Siri or Alexa for, but inside those industries AI is very useful.

        • UnderpantsWeevil@lemmy.world

          crypto is very niche in real world applications at the moment whereas AI does have real world usages.

          Crypto has a very real niche use, money laundering, which it does exceptionally well.

          AI does not appear to do anything significantly more effectively than a Google search circa 2018.

          But neither can justify a multi-billion-dollar market cap on these terms.

          The game and media industry are very much trialling for voice and image synthesis for improving environmental design (texture synthesis) and providing dynamic voice synthesis based off actors likenesses. We have had peoples likenesses in movies for decades via cgi but it’s only really now we can do the same but for voices and this isn’t getting into logistics and/or financial where it is also seeing a lot of application.

          Voice actors simply don’t cost that much money. Procedural world building has existed for decades, but it’s generally recognized as lackluster beside bespoke design and development.

          These tools let you build bad digital experiences quickly.

          For logistics and finance, a lot of what you’re exploring is solved with the technology that underpins AI (modern graph theory). But LLMs don’t get you that. They’re an extraneous layer that takes enormous resources to compile and offers very little new value.

            • UnderpantsWeevil@lemmy.world

              there are loads of white papers detailing applications of AI in various industries

              And loads more detailing its ineffectiveness and wastefulness.

              • Grofit@lemmy.world

                Are you talking specifically about LLMs or neural-network-style AI in general? Supercomputers have been doing this sort of stuff for decades without much problem, and tbh the main issue is training; for LLMs, inference is pretty computationally cheap.

                • UnderpantsWeevil@lemmy.world

                  Super computers have been doing this sort of stuff for decades without much problem

                  Idk if I’d point at a supercomputer system and suggest it was constructed “without much problem”. Cray has significantly lagged the computer market as a whole.

                  the main issue is training; for LLMs, inference is pretty computationally cheap

                  Again, I would not consider anything in the LLM marketplace particularly cheap. Seems like they’re losing money rapidly.

      • PaulBlartFartTart@lemmy.zip

        The funny thing about Amazon is that we are phasing it out of our home now, because it has become an online 7-Eleven. You don’t pay for shipping and it comes fast, but you are often paying 50-100% more for everything (and compared to AliExpress, 300-400% more)… just to get it a week or two faster. I would rather go to local retailers that are marking up Chinese goods for a 150% profit than go to Amazon and pay 300%. It just means I have to leave the house for 30 minutes.

        • UnderpantsWeevil@lemmy.world

          would rather go to local retailers that are increasing Chinese goods for a 150% profit, than Amazon and pay 300%

          A lot of the local retailers are going out of business in my area. And those that exist are impossible to get into and out of, due to the fixation on car culture. The Galleria is just a traffic jam that spans multiple city blocks.

          The thing that keeps me on Amazon, rather than Target, is purely the time cost of a shopping visit versus shipping.

    • ameancow@lemmy.world

      Do you have money and/or personal emotional validation tied up in the promise that AI will develop into a world-changing technology by 2027? With AGI in everyone’s pocket giving them financial advice, advising them on their lives, and romancing them like a best friend, with Scarlett Johansson’s voice whispering reassurances in their ear all day?

      If you are banking on any of these things, then yeah, you should probably be afraid.

  • givesomefucks@lemmy.world

    Well, they also kept telling investors that all they needed to simulate a human brain was to simulate the number of neurons in a human brain…

    The stupidly rich loved that, because they want computer backups for “immortality”, and they’d dump billions of dollars into making that happen.

    About two months ago though, we found out that the brain uses microtubules to put tryptophan into superposition, and that it can maintain that for a crazy amount of time, longer than we can manage in a lab.

    The only argument against a quantum component of human consciousness was that people thought there was no way to get even just regular quantum entanglement in a human brain.

    We’ll be lucky to be able to simulate that stuff in 50 years, and it’s probably going to be even longer.

    Every billionaire who wanted to “live forever” this way just got aged out. So they’ll throw their money somewhere else now.

    • half_built_pyramids@lemmy.world

      I used to follow the Penrose stuff and was pretty excited about QM as an explanation of consciousness. But if this is the kind of work they’re reaching at, it’s pretty sad. It’s not even anything. Sometimes you need to go with your gut, and my gut is telling me that if this is all the QM people have, consciousness is probably best explained by complexity.

      https://ask.metafilter.com/380238/Is-this-paper-on-quantum-propeties-of-the-brain-bad-science-or-not

      Completely off topic from AI, but it got me curious about quantum effects in the brain, and I found this discussion. Either way, AI still sucks shit and is just a shortcut for stealing.

      • givesomefucks@lemmy.world

        That’s a social media comment from some Ask Yahoo knockoff…

        Like, this isn’t something no one is talking about; you don’t have to learn about it solely from unpopular social media sites (including my comment).

        I don’t usually like linking videos, but I’m feeling like that might work better here:

        https://www.youtube.com/watch?v=xa2Kpkksf3k

        But that PBS video gives a really good background and then talks about the recent discovery.

        • Jordan117@lemmy.world

          some Ask Yahoo knockoff…

          AskMeFi predated Yahoo Answers by several years (and is several orders of magnitude better than it ever was).

          • givesomefucks@lemmy.world

            And that linked account’s last comment was advocating for Biden to stage a pre-emptive coup before this election…

            https://www.metafilter.com/activity/306302/comments/mefi/

            It doesn’t matter whether it was created before Ask Yahoo or how old it is.

            It’s random people making random social media comments; sometimes stupid people make the rare comment that sounds like they know what they’re talking about. And I already agreed no one has to take my word on it either.

            But that PBS video does a really fucking good job explaining it.

            Because if I can’t explain to you why a random social media comment isn’t a good source, I’m sure as shit not going to be able to explain anything like Penrose’s theory of consciousness to you.

            • Jordan117@lemmy.world

              It doesn’t matter if it was created before Ask Yahoo or if it’s older.

              It does if you’re calling it a “knockoff” of a lower-quality site that was created years later, which was what I was responding to.

              edit: btw, you’ve linked to the profile of the asker of that question, not the answer to it that /u/half_built_pyramids quoted.

              • givesomefucks@lemmy.world

                Great.

                So the social media site is older than I thought, and the person who made the comment on that site is a lot stupider than it seemed.

                Like, Facebook’s been around for about 20 years. Would you take a link to a Facebook comment over PBS?

                • Jordan117@lemmy.world

                  My man, I said nothing about the science or the validity of that comment, just that it’s wrong to call Ask MetaFilter “some Ask Yahoo knockoff”. If you want to get het up about an argument I never made, you do you.

  • sunbeam60@lemmy.one

    Argh, after 25 years in tech I am surprised this keeps surprising you.

    We’ve crested for sure. AI isn’t going to solve everything. AI stock will fall. Investor pressure to put AI into everything will subside.

    Then we will start looking at AI as a cost-benefit analysis. We will start applying it where it makes sense. Things will get optimised. Real profit and long-term change will happen over 5-10 years. And afterwards, the utterly magical will seem mundane while everyone is chasing the next hype cycle.

    • Bakkoda@sh.itjust.works

      I’m far, far more concerned about all the people who were deemed non-essential so quickly after being “essential” for so long, because AI will do so much work *slaps employees with 2 weeks’ severance*

      • sunbeam60@lemmy.one

        I’m right there with you. One of my daughters loves drawing and designing clothes, and I don’t know what to tell her in terms of the future. Will human designs be more valued? Less valued?

        I’m trying to remain positive; when I went into software, my parents barely understood that anyone could make a living off that “toy computer”.

        But I agree; this one feels different. I’m hoping they all feel different to the older folks (me).

    • ameancow@lemmy.world

      Truth. I would say the actual time scales will be longer, but this is the harsh, soul-crushing reality that will make all the kids and mentally disturbed cultists on r/singularity scream in pain and throw stones at you. They’re literally planning what they’re going to do once ASI changes the world into a Star Trek, post-scarcity civilization… in five years. I wish I was kidding.