• conciselyverbose@sh.itjust.works

    I think this is a good thing.

    Pictures/video without verified provenance haven’t constituted legitimate evidence for anything with meaningful stakes for several years. Perfect fakes have already been within reach of serious actors.

    Putting it in everyone’s hands spreads the awareness that pictures aren’t evidence, lowering their impact over time. It would be great if it weren’t possible for anyone, but that isn’t and hasn’t been reality for a while.

    • Hacksaw@lemmy.ca

      I completely agree. This is going to free kids from someone taking a picture of them doing something relatively harmless and using it to extort them. “That was AI, I wasn’t even at that party 🤷”

      I can’t wait for childhood and teenage life to be a bit more free and a bit less constantly recorded.

      • gandalf_der_12te@lemmy.blahaj.zone

        Yeah, every time you go to a party and fun happens, somebody pulls out their smartphone and starts filming. It’s really bad. People can only relax when there’s privacy, and smartphones have been stealing that privacy from society for over ten years now. We need to either ban filming in general (which isn’t doable) or discredit photographs, which is what’s happening right now.

    • reksas@sopuli.xyz

      While this is a good thing, not being able to tell what is real and what is not would be a disaster. What if every comment here but yours were generated by some really advanced AI? What they can do now will look laughable compared to what they can do years from now, and at that point it will be too late to demand anything be done about it.

      AI-generated content should have some kind of tag or mark inherently tied to it that can be used to identify it as AI-generated, even if only part of it is used. No idea how that would work, though, or if it’s even possible.

        • reksas@sopuli.xyz

          It wouldn’t be a label; that wouldn’t do anything, since it could just be erased. It would have to be something like an invisible set of pixels in pictures, or some inaudible sound pattern in audio, that can be detected in some way.
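
          As a rough illustration of the “invisible pixels” idea, here is a toy least-significant-bit watermark in Python (Pillow + numpy). The TAG string and the embed/detect helpers are made-up names for the example, not any real standard; actual provenance schemes (signed C2PA metadata, model-side statistical watermarks) are far more involved, and a naive LSB mark like this one is wiped out by re-encoding, resizing, or cropping.

          ```python
          # Toy sketch only: hide a marker string in the least significant bits
          # of the red channel. TAG, embed() and detect() are hypothetical names.
          import numpy as np
          from PIL import Image

          TAG = "AI-GENERATED"

          def embed(img: Image.Image, tag: str = TAG) -> Image.Image:
              """Overwrite the LSBs of the first pixels with the tag's bits."""
              px = np.array(img.convert("RGB"))
              bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
              red = px[..., 0].flatten()          # copy of the red channel
              red[:bits.size] = (red[:bits.size] & 0xFE) | bits
              px[..., 0] = red.reshape(px.shape[:2])
              return Image.fromarray(px)

          def detect(img: Image.Image, tag: str = TAG) -> bool:
              """Read the LSBs back and compare against the expected tag."""
              px = np.array(img.convert("RGB"))
              n = len(tag.encode()) * 8
              bits = px[..., 0].flatten()[:n] & 1
              return np.packbits(bits).tobytes() == tag.encode()
          ```

          A lossless round trip (save as PNG, reopen, run detect()) recovers the mark; a single JPEG save or crop destroys it, which is exactly the fragility the reply below is pointing at.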

          • conciselyverbose@sh.itjust.works

            But it’s irrelevant. You can watermark all you want in the algorithms you control, but it doesn’t change the underlying fact that pictures have been capable of lying for years.

            People just recognizing that a picture is not evidence of anything is better.

            • reksas@sopuli.xyz

              Yes, but the reason people don’t already dismiss pictures is that manipulating one takes time and effort. With AI it’s not only fast, it can be automated. Of course you shouldn’t accept something so unreliable as legal evidence, but this will spill over into everything else too.

              • conciselyverbose@sh.itjust.works

                It doesn’t matter. Any time there are any stakes at all (and plenty of times there aren’t), there’s someone who will do the work.

                • reksas@sopuli.xyz

                  It doesn’t matter if you can’t trust anything you see? What if you couldn’t be sure whether you were talking to a bot right now?

                  • conciselyverbose@sh.itjust.works

                    Photos/video from unknown sources have already been completely worthless as evidence for a solid decade. If you used a random picture online to prove a point 5 years ago, you were wrong. This does not change that reality in any way.

                    The only thing changing is your awareness that they’re not credible.