• x00z@lemmy.world · 41 points · 1 month ago

    ALL conversations are logged and can be used however they want.

    I’m almost certain this “detector” is a simple lookup in their database.

  • DrCataclysm@lemmy.world · 109 points · 2 months ago

    The detection rate by itself is worthless; an algorithm that labels everything as ChatGPT would have a detection rate of 100%. What would be more interesting is the false positive rate, but they never talk about that.
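
    A minimal, purely illustrative sketch of that point (the texts and labels below are invented): a “detector” that simply flags everything trivially reaches a 100% detection rate, while its false positive rate is also 100%.

    ```python
    # Toy data: (text, was_it_written_by_ChatGPT) — labels invented for illustration.
    texts = [
        ("student essay about dogs", False),
        ("generated product blurb", True),
        ("hand-written lab report", False),
        ("generated plot summary", True),
    ]

    def always_flag(text):
        """A useless 'detector' that labels every input as ChatGPT-written."""
        return True

    ai_texts = [t for t, is_ai in texts if is_ai]
    human_texts = [t for t, is_ai in texts if not is_ai]

    detection_rate = sum(always_flag(t) for t in ai_texts) / len(ai_texts)
    false_positive_rate = sum(always_flag(t) for t in human_texts) / len(human_texts)

    print(f"detection rate:      {detection_rate:.0%}")       # 100%
    print(f"false positive rate: {false_positive_rate:.0%}")  # also 100%
    ```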

    • JohnEdwa@sopuli.xyz · 9 points · edited · 2 months ago

      The detector provides an assessment of how likely it is that all or part of the document was written by ChatGPT. Given a sufficient amount of text, the method is said to be 99.9 percent effective.

      That means that, given 100 pieces of text and asked whether each was written by ChatGPT or not, it gets maybe one of them wrong. Allegedly, that is, and with the caveat of a “sufficient amount of text”, whatever that means.

      • oktoberpaard@feddit.nl · 9 points · 1 month ago

        A false positive is when it incorrectly determines that a human-written text was written by AI. While a detection rate of 99.9% sounds impressive, it’s not very reliable if it comes with a false positive rate of, say, 20%.
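
        To make that concrete, here is a back-of-the-envelope sketch with entirely hypothetical numbers (1,000 submitted texts, 10% of them AI-written, the claimed 99.9% detection rate, and the 20% false positive rate used as an example above). Under those assumptions, only about a third of the flagged texts would actually be AI-written.

        ```python
        # All numbers are hypothetical, chosen only to illustrate the base-rate effect.
        total_texts = 1_000
        ai_fraction = 0.10           # assume 10% of submitted texts are ChatGPT-written
        detection_rate = 0.999       # the claimed 99.9%
        false_positive_rate = 0.20   # the 20% used as an example above

        ai_texts = total_texts * ai_fraction             # 100 AI-written texts
        human_texts = total_texts - ai_texts             # 900 human-written texts

        true_flags = ai_texts * detection_rate           # ~100 correctly flagged
        false_flags = human_texts * false_positive_rate  # 180 humans wrongly flagged

        precision = true_flags / (true_flags + false_flags)
        print(f"share of flagged texts that are actually AI: {precision:.0%}")  # ~36%
        ```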

    • PenisDuckCuck9001@lemmynsfw.com · 9 points · edited · 2 months ago

      My unpopular opinion is that when they’re assigning well beyond 40 hours per week of homework, cheating is no longer unethical. Employers want universities to get students used to working long hours.

      • Amanda@aggregatet.org · 1 point · 1 month ago

        I agree, and I teach. A huge part of learning is having the time to experiment and process what you’ve learnt. However, doing that in a way that can be controlled, examined, and so on is very difficult, so many institutions opt for tons of homework instead.

  • Etterra@lemmy.world · 8 points · 2 months ago

    If they have one, and that’s IF, then of course they won’t release it. They’re still trying to find a use case for their stupid toy so that they can charge people for it. Releasing the counter agent would be completely contradictory to their business model. It’s like Umbrella Corp. but even dumber.

  • Cyteseer@lemmy.world · 67 points · 2 months ago

    If they aren’t willing to release it, then the situation is no different from them not having one at all. All these claims OpenAI makes about having some system but hiding it are just attempts to increase hype and grab more investor money.

  • Flying Squid@lemmy.world · 10 points · 2 months ago

    I wonder if this means they’ve discovered a serious flaw that they don’t know how to fix yet?

    • ArbitraryValue@sh.itjust.works · 7 points · 2 months ago

      I think the more likely explanation is that being able to filter out AI-generated text gives them an advantage over their competitors at obtaining more training data.

    • MagicShel@programming.dev · 19 points · 2 months ago

      The flaw is in the training that makes it corporate-friendly. Everything it says eventually sounds like a sexual harassment training video, regardless of subject.

  • Alphane Moon@lemmy.world · 162 points · 2 months ago

    Given a sufficient amount of text, the method is said to be 99.9 percent effective.

    If that’s really the case, they should release some benchmarks. I am skeptical. Promising the world is a key component of their “business model”.