“Admit,” like this was something they denied? Everyone always knew that AI text detectors don’t work, and they never claimed otherwise.
The issue is that schools have been using these detectors to flag AI-written essays. When students are (wrongly) caught up in them, they get penalized even if they never used any AI to help write the essay in question. It’s sort of like a plagiarism filter falsely flagging a paragraph as plagiarized even though the student didn’t plagiarize it.
It also disproportionately flags the work of neurodivergent students, to add a bonus reason that these detectors are a dogshit idea.
Not that I don’t believe you, because I do, but do you have a citation for this?
Neurodivergent people are secretly robots, and sometimes the AI text detectors can pick up on this, and risk ruining our cover.
In all seriousness, I looked it up out of curiosity and couldn’t find a study stating that, but that isn’t surprising since the use of AI detectors is relatively new. I do think there’s a high likelihood the statement is true, just based on how a neurodivergent person often writes compared to a neurotypical person. It’s not something you could state as a matter of fact, though, just anecdotally.
I’m glad I went to school (mostly) before all the automatic scanning…
In college I had a lot of stuff flagged as plagiarism, including a metaphysics paper I wrote in which I created a new hypothesis for consciousness (not necessarily a good one, mind, but entirely unique) because yup, robot.
It’s so weird that writing properly and without error gets you flagged as a cheater… that’s supposed to be the damned standard you’re being measured against…
Yeah, it sounds highly plausible, but I don’t tend to take assertions as fact without support, so thanks for looking into it for me :)
From what I understand, a lot of AI text detectors work by measuring “perplexity and burstiness,” which basically means lower randomness, flatter emotion, and more uniform sentences are more likely to get flagged as AI text. Those are all things that can be associated with neurodivergence, so I see where that statement would come from. That’s also exactly how you’re expected to write formal essays.
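Out of curiosity, here’s a minimal sketch of what perplexity-and-burstiness scoring can look like. This assumes the Hugging Face transformers library with GPT-2 as the scoring model; the model choice and the burstiness formula are just illustrative, not what any real detector necessarily uses.

```python
# Minimal sketch: score text by perplexity (predictability under a language
# model) and burstiness (how much that predictability varies by sentence).
# GPT-2 here is only an illustrative scoring model.
import math
import statistics

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity = more predictable under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return math.exp(loss.item())

def burstiness(sentences: list[str]) -> float:
    """One simple burstiness measure: variation in per-sentence perplexity.
    Low values = very uniform sentences, the kind of thing detectors flag."""
    scores = [perplexity(s) for s in sentences]
    return statistics.pstdev(scores) / statistics.mean(scores)

sentences = [
    "The results were consistent with the stated hypothesis.",
    "Further research is required to confirm these findings.",
]
print("perplexity:", perplexity(" ".join(sentences)))
print("burstiness:", burstiness(sentences))
```

The punchline is that careful, formal, low-variance human writing (the kind you’re taught to produce for essays) pushes both numbers in the same direction as machine-generated text, which is exactly why these tools misfire.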
As for plagiarism detectors, I don’t know much about them except that they false flag the shit out of everything.
Oh yeah, it’s an issue, but none of that is on OpenAI. There’s no admission here. It’s a statement from an authority meant to shut up the idiots, like a mapmaker saying the Earth is a sphere: something we already know, but the opposite is somehow still believed by many.
I figured this was the case from the start. Basically as useful as a polygraph.
This isn’t an admission, as that would imply fault. This is a statement of fact. Of course AI writing detectors don’t work: any human can write in any style, and an AI can replicate any writing style.
> AI can replicate any writing style.
This is false, mostly because AI outputs nonsense that almost looks like real writing. It’s all firmly in the uncanny valley of gibberish.
It’s true that an AI cannot spot AI writing, but for anything longer than a paragraph or two, a human can spot AI output most of the time.
As a professor who has to grade a lot of papers, I’m tempted to agree with this. But we probably need some well-conducted research to determine whether this conventional wisdom is actually correct.
This feels a little like people who think they can always spot plastic surgery, when really they can only spot the bad-to-mediocre cases and completely miss the good outcomes.
Have you seen AI output in recent months…? It’s really not that cut and dried anymore. You might see some hiccups here and there, but nowhere near the “uncanny valley of gibberish” levels you describe, at least not from the good models.