• theluddite@lemmy.ml

    Investment giant Goldman Sachs published a research paper

    Goldman Sachs researchers also say that

    It’s not a research paper; it’s a report. They’re not researchers; they’re analysts at a bank. This may seem like a nit-pick, but journalists need to (re-)learn to carefully distinguish between the thing that scientists do and corporate R&D, even though we sometimes use the word “research” for both. The AI hype in particular has been absolutely terrible for this. Companies have learned that putting out AI “research” that’s just them poking at their own product but dressed up in a science-lookin’ paper leads to an avalanche of free press from lazy credulous morons gorging themselves on the hype. I’ve written about this problem a lot. For example, in this post, which is about how Google wrote a so-called paper about how their LLM performs compared to doctors, only for the press to uncritically repeat (and embellish) the results all over the internet. Had anyone in the press actually fucking bothered to read the paper critically, they would’ve noticed that it’s actually junk science.

    • tal@lemmy.today

      A big part of the problem – and this is not a new issue, goes back decades – is that a lot of terms in AI-land don’t correspond to concrete capabilities, so it’s easy to claim that you do X when X is generally-perceived to be a much-more-sophisticated thing than what you’re actually doing, even if your thing technically qualifies as X by some definition.

      None of this in any way conflicts with my position that AI has tremendous potential. But if people are investing money without having a solid understanding of what they’re investing in, there are going to be people out there misrepresenting their product.

    • dev_null@lemmy.ml

      Same with all cryptocurrencies having a “white paper”, as if it was anything other than marketing crap formatted like a scientific paper.

  • PenguinCoder@beehaw.org

    Go-dAmn Sachs is wrong often, but in this I think they’re on point. Learned from the Crypto insanity.

  • jarfil@beehaw.org

    AI has been overhyped since it first played tic-tac-toe in the 1950s. One definition of “AI” is: “an algorithm that people don’t understand… yet” 🤷

    • Letstakealook@lemm.ee

      The stuff they’re calling AI now is just predictive text algorithms. I really can’t wait to stop hearing about this, because it is all artificial with no intelligence.

      • jarfil@beehaw.org

        Not exactly.

        LLMs are predictive-associative token algorithms with a degree of randomness and some self-reflection. A key aspect is that anything can be a token: they can feed their own output back in, creating the basis for a thought cycle, and their output can act as control input for other algorithms. It remains to be seen whether the core of “(human) intelligence” is much more than that, and by how much.
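
        A minimal sketch of that loop in (toy) Python, for the curious; the toy_model, VOCAB and sample pieces are made up for illustration and stand in for a real network, but the shape is the same: score possible next tokens, pick one with a bit of randomness (temperature), append it, and repeat on your own output.

        ```python
        import math
        import random

        # Toy stand-in for an LLM: maps a context (list of tokens) to raw scores
        # over a tiny made-up vocabulary. A real model is a neural network over
        # subword tokens, not this hand-written rule.
        VOCAB = ["the", "cat", "sat", "on", "mat", "."]

        def toy_model(context):
            # Fake logits: mildly prefer tokens that haven't just appeared.
            return [1.0 if tok not in context[-2:] else -1.0 for tok in VOCAB]

        def sample(logits, temperature=0.8):
            # Temperature is the "degree of randomness": lower = more deterministic.
            scaled = [score / temperature for score in logits]
            peak = max(scaled)
            weights = [math.exp(score - peak) for score in scaled]
            return random.choices(VOCAB, weights=weights, k=1)[0]

        def generate(prompt, steps=8):
            context = list(prompt)
            for _ in range(steps):
                context.append(sample(toy_model(context)))  # self-feed the output
            return " ".join(context)

        print(generate(["the", "cat"]))
        ```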

        Stable Diffusion is a random image generator that refines its output based on perceptual traits associated with a prompt. It’s like a “lite” version of human dreaming, only with a super-human training set. Kind of an “uncanny valley” version of dreaming.
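
        And a deliberately toy sketch of the “refine noise toward the prompt” idea; here a made-up target vector stands in for the prompt, whereas a real diffusion model uses a trained network to predict and strip away noise at each step.

        ```python
        import random

        # Toy refinement loop: start from pure noise and repeatedly nudge the
        # sample toward a target, with the added noise shrinking over time.
        # The target vector is a stand-in for "perceptual traits of the prompt".
        def refine(target, steps=100, step_size=0.1):
            sample = [random.gauss(0.0, 1.0) for _ in target]  # pure noise
            for i in range(steps):
                noise_scale = 1.0 - i / steps  # less noise as refinement proceeds
                sample = [
                    s + step_size * (t - s) + noise_scale * random.gauss(0.0, 0.05)
                    for s, t in zip(sample, target)
                ]
            return sample

        print(refine([1.0, -2.0, 0.5]))  # ends up close to the target
        ```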

        It just so happens that both algorithms have been showcased at about the same time, and it’s the first time we can build a “set and forget” AI system that can both make decisions about its own next steps, and emulate human creativity… which has driven the hype into overdrive.

        I don’t think we’ll stop hearing about it, but I do think there is much more to be done, and it’s pretty much impossible to feed human experience data into any of these algorithms without recording at least one full human learning cycle, as in many years of data gathered from inside a humanoid robot.

        • AVincentInSpace@pawb.social

          LLMs are predictive associative token algorithms

          Ah, so they produce parts of words instead of whole words at a time. Totally different.

          with a degree of randomness and self reflection.

          And they’re hooked up to random number generators, so if you give them the same input twice you’ll get different output. Totally makes it smarter.

          A key aspect is that anything can be a token

          …much like predictive text. Rarely will you find one that doesn’t suggest punctuation on occasion.

          they can self feed their own output

          …much like predictive text.

          as well as output control input for other algorithms.

          Oh, so you can tell it to suggest certain tokens more or less often. How fancy.

          It remains to be seen whether the core of human intelligence is much more than that.

          I mean, I’d say the ability to visualize things and reason about scenarios it hasn’t experienced before is a good start.

          • jarfil@beehaw.org

            Not sure if you were unable or unwilling to understand anything of what I wrote, and I don’t like your tone. Feel free to come back with something more serious.

      • tyler@programming.dev

        LLMs have been shown to have emergent math capabilities (the opposite of what they were trained for), so you’re simplifying way too much. Yes, a lot of it is just “predictive text”, but there’s a ton of “this was not in the training and we don’t know how it knows this” as well.

        • anachronist@midwest.social

          Game of Life has cool emergent properties that are a lot more interesting and fun to play with than LLMs. LLMs also have emergent properties like, for instance, failing classification due to the manipulation of individual image pixels.
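
          For anyone who hasn’t played with it: a complete Game of Life step fits in a dozen lines (sketch below, with live cells kept as a set of coordinates), which is exactly why the emergent behaviour — gliders, oscillators, even universal computation — is so striking.

          ```python
          from collections import Counter

          # Minimal Conway's Game of Life step: live cells are a set of (x, y) pairs.
          def step(live_cells):
              neighbour_counts = Counter(
                  (x + dx, y + dy)
                  for (x, y) in live_cells
                  for dx in (-1, 0, 1)
                  for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)
              )
              # Alive next turn with exactly 3 neighbours, or 2 if already alive.
              return {
                  cell
                  for cell, n in neighbour_counts.items()
                  if n == 3 or (n == 2 and cell in live_cells)
              }

          # A "glider" keeps travelling even though no rule mentions movement.
          glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
          for _ in range(4):
              glider = step(glider)
          print(sorted(glider))  # same shape, shifted one cell diagonally
          ```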

      • EatATaco@lemm.ee

        You know, it’s funny: I’ve heard “it’s just predictive text algorithms!” as a dismissal so many times that I’m beginning to think we’re just predictive text algorithms.

        • CanadaPlus@lemmy.sdf.org

          Yep. All the reasons cited could pretty much apply to a person as well. GPT-4 is pretty damn smart by every reasonable measure.

        • Blóðbók@slrpnk.net

          We are prediction machines, but nothing like chatgpt. Current AI has no ability to learn, adapt, or even consider the future.

          • CanadaPlus@lemmy.sdf.org

            Current AI has no ability to learn, adapt, or even consider the future.

            BS. The first two are all a neural net does.

            • Blóðbók@slrpnk.net

              Once. They do not have the ability to learn or adapt on their own. They are created by humans through “deep learning”, but that is fundamentally different from continuously learning based on one’s own actions and experiences.

              • CanadaPlus@lemmy.sdf.org

                Yeah, once they’re out of training, that’s true. It’s almost like we grow this semi-intelligence, and then run it in something like a deep coma.

                I wouldn’t quite say it’s a one-time thing, though. It’s not only possible but typical to put it back in training to finetune it.
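
                A hypothetical sketch of that split in PyTorch terms, with a tiny made-up model standing in for an LLM: at inference the weights are frozen (the “deep coma”), and putting it “back in training” means unfreezing some of them and updating on new data.

                ```python
                import torch
                from torch import nn

                # Toy stand-in for a "pretrained" model; a real LLM is vastly larger.
                model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

                # Inference / "deep coma": weights frozen, nothing is learned.
                model.eval()
                for p in model.parameters():
                    p.requires_grad_(False)

                # "Back in training": unfreeze (here, only the last layer) and update.
                model.train()
                for p in model[-1].parameters():
                    p.requires_grad_(True)
                optimizer = torch.optim.Adam(
                    (p for p in model.parameters() if p.requires_grad), lr=1e-3
                )

                x = torch.randn(8, 16)             # made-up new data
                y = torch.randint(0, 4, (8,))
                loss = nn.functional.cross_entropy(model(x), y)
                loss.backward()
                optimizer.step()                   # the weights actually change again
                ```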

  • wagesj45@kbin.run

    Oh no, you mean the big “smart” money investors that manage to crash the economy every decade or so and ruin every business they touch are gonna leave generative AI alone? Oh nooo. How will the science progress without Goldman Sachs’s guiding hand?

    Good riddance.

    • Blackmist@feddit.uk

      Yep, as wildly expensive and unreliable as AI is, so are staff.

      Watch as loads of people get laid off, they realise the AI can’t do their jobs after all, but you know who can give it a go? Some guy in a third world country on $3 an hour.

    • coffeetest@beehaw.org

      That is so funny.

      chatgpt: “Artificial Intelligence (AI) represents a transformative investment opportunity, characterized by robust growth potential and broad applicability across industries. The AI market, projected to exceed $190 billion by 2025, offers substantial upside in sectors such as healthcare, finance, automotive, and e-commerce. As businesses increasingly adopt AI to enhance efficiency and innovation, associated firms are poised for significant returns. Key investment areas include machine learning, natural language processing, robotics, and AI-driven analytics. Despite risks like regulatory challenges and ethical concerns, the strategic deployment of capital in AI technologies holds promise for long-term value creation. Diversification within this space is advisable to mitigate volatility.”

    • anachronist@midwest.social

      Funny you should mention that: McKinsey published a paper a few months back concluding that GenAI will take over most of the jobs in America, because it was good at doing what McKinsey associates do. What the authors missed is that the job of a McKinsey associate is to confidently spout nonsense all day long, and that’s exactly what ChatGPT is programmed to do.

  • Handles@leminal.space

    Oh, so now we’re supposed to pay attention? Internet pundits came to the same realisation from the beginning, but we don’t have the same kind of purchasing power.

    • bluewing@lemm.ee

      Sometimes that bear shits in my yard. And then the little asshole trashes my garden. I might buy a tag and shoot the son of a bitch this fall if he keeps it up…

  • Blackmist@feddit.uk

    Goldman Sachs has not invested in AI.

    Their statement is factual though, on all three points. nVidia’s share price alone should alarm people. It’s the new dot com bubble.

    • Umbrias@beehaw.org

      The internet is a funny analogue!

      Because it experienced the dot com crash under almost the same sort of circumstances.

    • LukeZaz@beehaw.org

      I find comments like these on places like Beehaw almost amusing in a way. It’s like watching a drunk person stumble from a bar all the way to a courthouse and getting upset the clerk won’t sell them more liquor.

      Seriously though, I’m not sure what you hope to accomplish here. Just about everybody here disagrees and isn’t keen on a take like this, and I’d figure you’d have been able to tell as much before posting. So… are you just here to argue?

    • TehPers@beehaw.org

      You’re right. Once it settles into its niches and the hype dies down, it won’t be overhyped anymore because everyone will have moved on.

      I’ve been working with generative AI for years now and we still struggle to solve real-world problems with it. It isn’t useless or anything, but it’s way too unreliable, and this isn’t one of those things where time will solve it: it’s being used to solve problems that have no perfect solutions, like human interfacing and generating culturally-appropriate and visually-accurate images. I’d expect it to improve at those tasks over time, but the scope needs to drop from every problem humanity has ever faced to the problems these models are actually good at solving.

      • Milk_Sheikh@lemm.ee

        Correct. Dress it up however you like, but LLM and ML programs are probability gamblers all the way down. We’re building a conversation tool that doesn’t truly comprehend the language, because it’s a calculator at its core; it’s like asking your eyeballs to see in UHF frequencies.

        They’re called “computers” for a reason, and we are deep in the myopic tech tree of further and further complexity. The current wave of AI has solid potential, but not globally for all applications. It is great at “digital assistant” roles and is already killing it in CCTV monitoring software. Midjourney can make incredible images, but it can’t make art. ChatGPT can write, but it’s a terrible author or speechwriter.

        • Schadrach@lemmy.sdf.org

          Midjourney can make incredible images, but it can’t make art.

          Mostly because you’re defining “art” in such a way that being produced by MidJourney disqualifies it automatically.

          • Aelis@beehaw.org

            Sorry to break it to you, but there is no defining art without disqualifying AI; the subject is so old that it’s hardly an opinion at this point. Even the most imaginative mating rituals animals perform barely qualify… And mind you, those animals have emotions and cognitive capabilities, so something as bare-bones as the kind of “AI” we make now is nothing more than a joke, art-wise.

          • anachronist@midwest.social

            This is the same middlebrow dismissal that AI advocates have been using for years.

            “It’s just a stochastic parrot.” “How do you know that you aren’t just a stochastic parrot?”

            Well we do know. There are experts on human cognition. They have been studying it for decades. We may not know enough about it to know how to make a computer do it. But we certainly know enough about it to know when a computer chatbot is not doing it.

      • coffeetest@beehaw.org

        I agree with this. It’s wildly misunderstood, and it’s the name: AI is absolutely the most amazing marketing name for it, but it’s only a thin veneer of our sci-fi dreams. Over time that veneer might get a bit thicker, but it won’t be what people think it will be. It is good at certain things, like, you know, being a large language model, but that is a (very) limited subset of what human intelligence is.

        • Kichae@lemmy.ca

          It’s not “widely misunderstood”, it’s been widely hyped by the people actively selling it. The tech bros are pumping and dumping it, just like with every other tech panacea.

          It’s not the public, it’s the snake oil salesmen.

          • coffeetest@beehaw.org

            That’s what I am saying. The buyers wildly misunderstand it. The seller presents it with a very effective and misleading pitch.

            Look at the Intuit CEO, who just fired 10% of their workforce to pivot to AI to, um, “give financial advice.” He then goes on to say any other company that doesn’t do the same will fall behind and fail. Time will tell, but I’m going to go with: people will laugh when Intuit is on fire.

            • anachronist@midwest.social

              I suspect Intuit fired those workers for other reasons (free file) and are using AI as an excuse because to admit that free-file is an existential threat to their business is to admit that their company has no long term business prospects.

              • coffeetest@beehaw.org

                That seems entirely plausible as the reason for the staffing change. But Intuit is more than their tax software; QuickBooks, for example, isn’t going anywhere. I am sure they do other stuff, probably payment processing and I don’t know what else. So they will survive at some level; it would be hard to kill QuickBooks.

    • ShepherdPie@midwest.social

      I look at it more like autonomous driving, which we’ve been told is just around the corner for close to a decade now.