Dutch lawyers increasingly have to convince clients that they cannot rely on AI-generated legal advice because chatbots are often inaccurate, the Financieele Dagblad (FD) found when speaking to several law firms. A recent Deloitte survey showed that 60 percent of law firms see clients trying to perform simple legal tasks with AI tools, hoping for a faster turnaround or lower fees.

  • Lurking Hobbyist🕸️@lemmy.world · 2 hours ago

    I understand what you mean, but… *looks at Birgenair 301 and Aeroperú 603* *looks at Qantas 72* *looks at the 737 MAX 8 crashes* Planes have spat out false data, and of the five cases mentioned, only one avoided disaster.

    It is down to the humans in the cockpit to filter through the data and know what can be trusted. That could be similar to LLMs, except cockpits have a two-person team to catch errors and keep things safe.

    • ToTheGraveMyLove@sh.itjust.works · 54 minutes ago

      So you found five examples in the history of human aviation. How often do you think AI hallucinates information? Because I can guarantee you it's a hell of a lot more frequently than that.

      • Lurking Hobbyist🕸️@lemmy.world · 33 minutes ago

        You should check out Air Crash Investigation, amigo, all 26 seasons. You'd be surprised what humans in metal life support machines can cause when systems break down.