• JackEddyfier@beehaw.org · 6 points · 16 hours ago

    I’ll start believing in AI when, and if, it’s able to eliminate error. When will AI be able to work out whether the training material it used is true, false, myth, or some other kind of narrative?

    • The Bard in Green@lemmy.starlightkel.xyz · 8 points · edited · 15 hours ago

      We tried to build systems that perform a kind of basic, rudimentary, extremely power intensive and inefficient mimicry of how (we think maybe) brain cells work.

      Then that system lies to us, makes epic bumbling mistakes, expresses itself with extreme overconfidence, and constantly, creatively misinterprets simple instructions. It recognizes patterns that aren’t there and regurgitates garbage information it picks up on the internet.

      Hmmm… Actually, maybe we’re doing a pretty good job of making systems that work similarly to the way brain cells work…

  • BlameThePeacock@lemmy.ca · 13 points · edited · 21 hours ago

    I really hate this headline.

    They aren’t wrong 70% of the time.

    The study found that they only successfully complete multi-step business tasks 30% of the time. Those tasks were made up by the researchers to simulate an office environment.

    The spread across different models is also massive, with some coming in at 1% completion and others at over 30%.

  • James R Kirk@startrek.website · +9/−1 · 22 hours ago

    This bit at the end, wow:

    Gartner still expects that by 2028 about 15 percent of daily work decisions will be made autonomously by AI agents, up from 0 percent last year.

    Agentic AI is wrong 70% of the time. Even if a human employee were barely competent, right just over half the time and wrong 49% of the time, would it really still be more efficient to replace them?

    • Cruxifux@feddit.nl · +9/−1 · 22 hours ago

      Honestly this whole argument is insane to me and indicative of the clown world we live in. If AI can do human jobs, even if it’s a little shittier, we should HAVE THAT and then have HUMANS WORK LESS. But this thing that should be making our lives awesome is absolutely going to be used to make them worse.

      • MountingSuspicion@reddthat.com · 7 points · 18 hours ago

        If you think we should offload to AI even when it’s worse, I have serious questions about your day-to-day life. What industry do you think could stand to be worse? Doctor’s offices? Lawyers? Mechanics? Accountants?

        The end users (aka the PEOPLE NEEDING A SERVICE) are the ones getting screwed over when companies offload to AI. You tell the AI to schedule an appointment tomorrow, and 80% of the time it does; the other 20% of the time it never does, or it puts it in for next week. That hurts both the office trying to maximize the number of people seen and helped and the person who needs the help.

        Working fewer hours thanks to tech advancement is awesome, but in the current work climate, offloading to AI is not going to result in working fewer hours. Additionally, how costly is each task the AI is doing? Are the machines running off renewables, or is using this going to contribute to worse air quality and worse climate outcomes for the very people you’re trying to save from working more? People shouldn’t have to work their lives away, but we have other problems that need to be solved before prematurely switching to AI.

      • James R Kirk@startrek.website · 5 points · 20 hours ago

        Right? It actually makes me feel insane that “humans working less” is never among the selling points of these products.

        Honestly I suspect that rather than some nefarious capitalist plot to enslave humanity, it is just more evidence that the software can’t actually do what the people selling it to big corporations claim it can do.

        • Cruxifux@feddit.nl · 2 points · 14 hours ago

          I mean, it kind of is, just less insidious-sounding when laid out plainly. Hiring people cuts into profits, so they want to hire fewer people and use AI, which is theoretically cheaper, to do the work. And then you don’t have to worry as much about pesky things like UNIONS or HR or SEXUAL HARASSMENT LAWSUITS. Different people have different opinions on how evil that absolute nihilism toward the people who lose those jobs is. I personally think it’s pretty fucking evil, but I think pretty much every capitalist value is pretty fucking evil, so I wouldn’t say my viewpoint on this is an especially nuanced one.

          • James R Kirk@startrek.website · 1 point · 13 hours ago

            Oh yeah, absolutely. But I also think the goal of the AI companies is not actually to create a functioning AI that could “do a job 20% as well as a human, but 90% cheaper,” but to sell fancy software, whether it works or not, and leave the smaller companies holding the bag after they lay off their workforce.

            • Cruxifux@feddit.nl · 2 points · 13 hours ago

              I think that’s also part of it. Lots of stupid moving parts in this giant idiot machine.