[I literally had this thought in the shower this morning so please don’t gatekeep me lol.]

If AI were something everyone wanted or needed, it wouldn’t be constantly shoved in your face by every product. People would just use it.

Imagine if printers were new and every piece of software was like “Hey, I can put this on paper for you” every time you typed a word. That would be insane. Printing is a need, and when you need to print, you just print.

  • kadu@scribe.disroot.org · 15 hours ago

    “LLMs have amazing potential.”

    That’s not what studies from most universities, Anthropic, OpenAI, Apple and Samsung show.

    Even if we didn’t have this data - and we do have it - are you truly impressed by a machine that can simulate what a Reddit user said 6 months ago? Really? Either you’re massively underselling the actual industrial revolution, or you’d be easily impressed by a child’s magic trick.

    • LeFantome@programming.dev · 14 hours ago

      The Industrial Revolution was literally “are you truly impressed by a machine that can weave cloth as well as your grandmother?” And the answer was yes, because one person could be trained to use that machine in much less time than it took to learn to weave, and could make ten times as much cloth in the same time.

      LLMs are literally the same kind of progress.

      Except we are not 200 years later, when the impact on the world is obvious and not up for debate. We are in the first few years, when the “machine” is broken half the time and its work has obvious defects.

    • grindemup@lemmy.world · 15 hours ago

      Honestly, yes, I am impressed when you compare it with what was possible in NLP prior to LLMs. Your question is akin to asking: are you truly impressed by a machine that can stick blocks together as well as some random person? Regardless of whether you are impressed, significant amounts of human labour can now be reproduced by machines in a manner that was previously impossible. Obviously there’s still a lot of undeserved hype, but let’s not pretend that replicating human language is trivial or worthless.

    • Boomer Humor Doomergod@lemmy.world · 15 hours ago

      I recently created a week-long IT training course with an AI. It got almost all of it right, hallucinating only on details I had to fix. And it turned a task that would have taken me a couple of months into one that took a couple of weeks. So for specific applications it is actually quite useful. (Because it’s basically rephrasing what a bunch of people wrote on Reddit.)

      For this use case I would call it as revolutionary as desktop publishing. Desktop publishing allowed people to produce in a couple of days what would have taken a team of designers and copy editors a couple of weeks.

      Everything else I’ve used it for, it’s been pretty terrible at, especially diagnosing issues. That’s mostly because it will just make shit up if it doesn’t know, so if you also don’t know, you can’t trust it, and you end up doing the research and experimentation yourself anyway.

      • akacastor@lemmy.world · 11 hours ago

        “It got almost all of it right, only hallucinating when it came to details I had to fix.”

        What does this even mean? It did a great job, the only problems were the parts I had to fix? 🤣

        • Boomer Humor Doomergod@lemmy.world · 11 hours ago

          Most of it was basic knowledge that it could get from its training on the web. The stuff it missed was details about things specific to the product.

          But generating 90% of the content and having me just edit a bit is still way less work than doing it all myself, even if my own version would be right the first time.

          It’s got intern-level intelligence.

          • BluescreenOfDeath@lemmy.world · 8 hours ago

            “It’s got intern-level intelligence.”

            The problem is, it’s not “intelligence”. It’s an enormous statistics-based autocorrect.

            AI doesn’t understand math; it just knows that the next character in a string starting “2+2=” is almost always “4” in the data it has statistically analyzed. If you ask it to solve an equation that isn’t commonly repeated, it can’t. Even when you train it on textbooks, it doesn’t ‘learn’ the math; it analyzes the word patterns in the text of the book and attempts to replicate them. That’s why it ‘hallucinates’, and also why it doesn’t matter how much data you feed it, it won’t become ‘intelligent’.

            It seems intelligent because we associate intelligence with language, and LLMs mimic language amazingly well. But it’s not ‘thinking’ in the way we associate with intelligence. It’s running complex math to pick the word that should come next in a sentence, based on similar sentences it has seen before.
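            A toy sketch of the idea (a bigram word counter I made up for illustration — real LLMs use transformers over subword tokens, but the “frequency lookup, not arithmetic” point is the same):

```python
from collections import Counter, defaultdict

# Toy "next word predictor": count which word followed which in a tiny
# corpus, then always emit the most frequent successor. It never does
# arithmetic -- it only replays the patterns it has counted.
corpus = "2 + 2 = 4 . 2 + 3 = 5 . 3 + 2 = 5 .".split()

follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def next_word(word):
    # Most frequent successor seen in the "training" data.
    return follows[word].most_common(1)[0][0]

# "=" was followed by "5" twice and "4" once, so this model "answers"
# every equation with 5 -- pattern frequency, not math.
print(next_word("="))  # prints 5
```

            A real model is enormously better at covering patterns, but a rare equation it hasn’t effectively seen falls off the frequency table the same way.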

            • Boomer Humor Doomergod@lemmy.world · 8 hours ago

              Interns aren’t that intelligent either, but they can generate content without being intelligent, and that’s helpful too.

              Having the right answer is a lot less useful than looking like you have the right answer, sadly.