ALL conversations are logged and can be used however they want.
I’m almost certain this “detector” is a simple lookup in their database.
The detection rate is worthless on its own: an algorithm that says everything is ChatGPT would have a detection rate of 100%. What would be more interesting is the false positive rate, but they never talk about that.
The detector provides an assessment of how likely it is that all or part of the document was written by ChatGPT. Given a sufficient amount of text, the method is said to be 99.9 percent effective.
That means given 100 pieces of text and asked if they are made by ChatGPT or not, it gets maybe one of them wrong. Allegedly, that is, and with the caveat of “sufficient amount of text”, whatever that means.
It’s actually 1 in 1000; 99.0% would be 1 in 100.
A false positive is when it incorrectly determines that a human-written text was written by AI. While a detection rate of 99.9% sounds impressive, it’s not very reliable if it comes with a false positive rate of 20%.
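The point can be sketched with Bayes’ rule. Note the 20% false positive rate and the 10% share of AI-written texts below are illustrative assumptions, not OpenAI’s published numbers:

```python
def flagged_precision(detection_rate, false_positive_rate, ai_share):
    """Fraction of flagged texts that are actually AI-written (Bayes' rule).

    Assumed inputs, for illustration only:
      detection_rate      - chance an AI text is correctly flagged
      false_positive_rate - chance a human text is wrongly flagged
      ai_share            - fraction of all texts that are AI-written
    """
    true_flags = detection_rate * ai_share
    false_flags = false_positive_rate * (1 - ai_share)
    return true_flags / (true_flags + false_flags)

# 99.9% detection, 20% false positives, 10% of texts actually AI-written:
print(round(flagged_precision(0.999, 0.20, 0.10), 3))  # ~0.357
```

Under those assumed numbers, barely a third of flagged texts would actually be AI-written, which is why the false positive rate matters more than the headline detection rate.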
I know what a false positive is, and it’s not a thing when talking about effectiveness, they claim it gets it right 99.9% of the time.
Right, I see what you mean now. I misread your comment as explaining something that was already clear.
shhh, my professor may use it
If the assignment is so easy ChatGPT can do it, it’s too easy.
My unpopular opinion is when they’re assigning well beyond 40 hours per week of homework, cheating is no longer unethical. Employers want universities to get students used to working long hours.
I agree, and I teach. A huge part of learning is having the time to experiment and process what you’ve learnt. However, doing that in a way that can be controlled, examined, etc, is very difficult so many institutions opt for tons of homework etc.
If they have one, and that’s IF, then of course they won’t release it. They’re still trying to find a use case for their stupid toy so that they can charge people for it. Releasing the counter agent would be completely contradictory to their business model. It’s like Umbrella Corp. but even dumber.
This technology will not be published until the GPT-3 code is released.
There is no way it’s that accurate, which is why they don’t want to release it.
If they aren’t willing to release it, then the situation is no different from them not having one at all. All these claims openai makes about having whatever system but hiding it are just to try to increase hype and grab more investor money.
I wonder if this means they’ve discovered a serious flaw that they don’t know how to fix yet?
I think the more likely explanation is that being able to filter out AI-generated text gives them an advantage over their competitors at obtaining more training data.
The flaw is in the training to make it corporate friendly. Everything it says eventually sounds like a sexual harassment training video, regardless of subject.
Given a sufficient amount of text, the method is said to be 99.9 percent effective.
If that’s really the case, they should release some benchmarks. I am skeptical. Promising the world is a key component of their “business model”.
What is a sufficient amount? Most comments are short af.
I think given enough output I could probably detect it that accurately as well. ChatGPT has a particular voice and the longer it goes, the more that voice comes out.
I don’t think these grifters know what a benchmark is.
I trust you bro
Let me guess: too much processing power?
I call bullshit.
“It’s probably broken and I don’t believe you”
Did they claim it or prove it? I don’t believe anything tech says