• Shayeta@feddit.org · 15 hours ago

    It doesn’t matter if you need a human to review. AI has no way of distinguishing between success and failure, so either way a human will have to review 100% of those tasks.

    • MangoCats@feddit.it · 2 hours ago

      I have been using AI to write (little, near-trivial) programs. It’s blindingly obvious that it could feed this code to a compiler and catch its mistakes before handing it to me, but it doesn’t… yet.
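
      A minimal sketch of that loop, assuming a hypothetical generate_code() wrapper around whatever model is in use (not a real API); the compile check itself is just gcc invoked through subprocess:

      ```python
      import os
      import subprocess
      import tempfile

      def compile_check(source: str) -> str | None:
          """Compile the generated C source; return error text, or None if clean."""
          with tempfile.NamedTemporaryFile(mode="w", suffix=".c", delete=False) as f:
              f.write(source)
              path = f.name
          try:
              result = subprocess.run(
                  ["gcc", "-Wall", "-Werror", "-c", path, "-o", os.devnull],
                  capture_output=True, text=True,
              )
              return None if result.returncode == 0 else result.stderr
          finally:
              os.unlink(path)

      def generate_until_it_compiles(prompt: str, attempts: int = 3) -> str:
          feedback = ""
          for _ in range(attempts):
              source = generate_code(prompt + feedback)  # hypothetical LLM call
              errors = compile_check(source)
              if errors is None:
                  return source  # only now hand the code to a human
              feedback = "\n\nThe previous attempt failed to compile:\n" + errors
          raise RuntimeError("model never produced code that compiles")
      ```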

    • Outbound7404@lemmy.ml · 5 hours ago

      A human can review something that is already close to correct far more easily than they can start the task from zero.

      • MangoCats@feddit.it · 2 hours ago

        In university I knew a lot of students who knew all the material but would say they “just don’t know where to start” - given a little direction about where to start, they could run it to the finish all on their own.

        • MangoCats@feddit.it · 2 hours ago

          It is harder to notice incorrect information in review than to make sure it is correct when writing it.

          That depends entirely on your writing method and attention span for review.

          Most people make stuff up off the cuff and skim anything longer than 75 words when reviewing, so the bar for AI to improve on that is really low.

        • loonsun@sh.itjust.works · 2 hours ago

          Depends on the context. There is a lot of work in the scientific-methods community on using NLP to augment traditionally fully human processes, such as thematic analysis and systematic literature reviews, and you can have validation protocols there without 100% human review.
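
          For example (a sketch only - the 10% audit rate and 95% agreement threshold are made-up numbers, not from any published protocol), you can audit a random sample of the model’s decisions instead of reviewing all of them:

          ```python
          import random

          def validate_batch(model_labels, human_review,
                             sample_rate=0.10, min_agreement=0.95):
              """Audit a random sample of model decisions instead of reviewing 100%."""
              items = list(model_labels)
              sample = random.sample(items, max(1, int(len(items) * sample_rate)))
              agreed = sum(human_review(i) == model_labels[i] for i in sample)
              return agreed / len(sample) >= min_agreement  # accept or escalate
          ```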

    • jsomae@lemmy.ml · 14 hours ago

      Right, so this is really only useful in cases where either it’s vastly easier to verify an answer than to posit one, or a conventional program can verify the result of the AI’s output.
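
      A toy illustration, with propose_factors() as a hypothetical stand-in for the model: factoring a semiprime is hard, but a conventional program can verify a proposed answer with a single multiplication:

      ```python
      def verify_factorization(n: int, p: int, q: int) -> bool:
          """Cheap, deterministic check of an expensive-to-find answer."""
          return p > 1 and q > 1 and p * q == n

      n = 2021
      p, q = propose_factors(n)             # hypothetical AI call
      assert verify_factorization(n, p, q)  # trivial for a program to verify
      ```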

      • MangoCats@feddit.it · 2 hours ago

        It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

        I’m envisioning a world where multiple AI engines create and check each other’s work… the first thing they need to make work to support that scenario is probably fusion power.

        • zbyte64@awful.systems · 2 hours ago

          It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

          I usually write 3x the code to test the code itself. Verification is often harder than implementation.

          • MangoCats@feddit.it · 2 hours ago

            Yes, but the test code “writes itself” - the path is clear: you just have to fill in the blanks.

            Writing the proper product code in the first place - that’s the valuable challenge.
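
            A sketch of what that looks like: in a table-driven test the scaffold is fixed and the blanks are the case rows (parse_version() is a hypothetical product function, used only for illustration):

            ```python
            import pytest

            @pytest.mark.parametrize("raw, expected", [
                ("1.2.3", (1, 2, 3)),    # each row is one "blank" to fill in
                ("10.0.1", (10, 0, 1)),
                ("0.0.0", (0, 0, 0)),
            ])
            def test_parse_version(raw, expected):
                assert parse_version(raw) == expected
            ```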