Why You Shouldn’t Trust AI Detectors

Many teachers are unhappy with the AI revolution, and it’s hard to blame them: ChatGPT has proven that you can feed an AI a prompt for, say, a high school essay, and it will produce a result in seconds. Sure, that essay may be riddled with errors, but hey, the homework is done. So when AI checkers advertise themselves as the new line of defense against AI-assisted cheating, it makes sense for concerned educators to start using them. The problem is that they aren’t perfect, and those imperfections hurt people.

How AI detectors work

All of today’s popular AI programs (e.g., ChatGPT) are built on large language models (LLMs). LLMs are trained on a huge amount of text and use that knowledge to respond to you. Simply put, all the AI does is predict which word should follow the last thing it said, based on patterns in its training data. It doesn’t know what the expression “Snow is cold and fire is hot” means, but it does know that “hot” often follows “fire is.” (Again, this is a super-simplified explanation.)
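To make that concrete, here’s a toy “predict the next word” model in Python. It’s a simple bigram counter over an invented corpus, nothing like a real LLM, but it shows the core idea: the model doesn’t understand “hot,” it has just counted how often “hot” follows other words.

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for the billions of words
# a real LLM trains on.
corpus = (
    "there is fire and the fire is hot . "
    "snow is cold and fire is hot . "
    "the stove is hot and the ice is cold ."
).split()

# Count which word follows which -- a bigram model, a drastically
# simplified stand-in for an LLM's learned next-token probabilities.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` during training."""
    return following[word].most_common(1)[0][0]

print(predict_next("fire"))  # -> "is"
print(predict_next("is"))    # -> "hot" (seen more often than "cold")
```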

AI detectors work in a similar way: they are trained on datasets of text—some written by humans, some written by bots, and some written by both. Detectors look for the telltale signs of LLM-generated text, such as repetitive language, word-choice probability, and tone. Just as an LLM learns patterns in order to generate text, the developers hope their detectors will learn the patterns that give generated text away.
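As a rough illustration, here’s a minimal sketch of one signal detectors of this kind reportedly rely on: how statistically predictable the text is, often measured as perplexity. The unigram model and the cutoff below are deliberately crude inventions of mine to show the principle, not any vendor’s actual method.

```python
import math
from collections import Counter

def perplexity(text: str, reference: Counter) -> float:
    """Average per-word 'surprise' under a unigram reference model.

    Real detectors score text with a full language model; counting
    word frequencies is a deliberately crude stand-in.
    """
    total = sum(reference.values())
    vocab = len(reference)
    words = text.lower().split()
    log_prob = 0.0
    for word in words:
        # Add-one smoothing so unseen words don't zero out the product.
        p = (reference[word] + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))

def looks_ai_generated(text: str, reference: Counter,
                       cutoff: float = 50.0) -> bool:
    # Highly predictable (low-perplexity) text gets flagged as likely
    # machine-generated. The cutoff is hypothetical -- choosing it is
    # exactly where false positives creep in.
    return perplexity(text, reference) < cutoff

# Illustrative usage with a made-up reference vocabulary.
reference = Counter(
    "ice cream is a popular frozen dessert enjoyed by people".split()
)
print(looks_ai_generated("ice cream is popular", reference))
```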

The developers claim their checkers can do exactly that. I’m not so sure. In fact, I asked ChatGPT whether it could tell if a given piece of text was written by an AI or a human. It replied:

As an AI language model, I can generate text that is hard to distinguish from human-written text. However, it is not always possible to definitively determine whether a piece of text was generated by AI or written by a human, especially if the text was well written and edited.

There are some indicators that can suggest text was generated by AI, such as unusual or repetitive patterns, lack of coherence, or unusual vocabulary choices. However, these indicators are not always present, and they can be overcome with advanced AI techniques.

It is also worth noting that the quality of AI-generated text is constantly improving, so in the future it may become increasingly difficult to distinguish human-written text from AI-generated text. Ultimately, the best way to determine whether a piece of text was generated by AI is careful analysis by a trained expert in the field.

That’s hardly a glowing endorsement from the godfather of AI bots. But let’s see how these detectors fare in action:

How AI detectors work in the wild

Turnitin, which I remember inspiring fear with its plagiarism-checking system, claims that its software now detects AI-generated content with 98% confidence. If true, that’s undoubtedly high. BestColleges tested the software on human-written, AI-generated, and hybrid writing samples. But even that 98% statistic means a teacher could wrongly accuse one in every 50 students of using AI tools to cheat.
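It’s worth spelling out the arithmetic behind that one-in-50 figure. The numbers below are illustrative only, and assume—as the article’s framing does—that the remaining 2% shows up as false positives on honest work:

```python
# Back-of-the-envelope math on what "98% confidence" means at scale.
# Illustrative numbers; assumes the 2% error lands on honest essays.
essays = 1000                # essays submitted, all human-written
false_positive_rate = 0.02   # the 2% left over from "98% confidence"

falsely_accused = essays * false_positive_rate
print(falsely_accused)       # 20.0 -> 20 honest students flagged, i.e. 1 in 50
```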

The Washington Post’s experience with Turnitin is even more damning, however. Students helped the Post create 16 writing samples composed of human-written, AI-generated, and hybrid text. In the end, Turnitin got more than half of the samples at least partly wrong. It tagged six correctly, but completely dropped the ball on three others. In a real class, Turnitin would have made nothing less than a mess.

GPTZero, one of the first AI detectors to go viral, failed the accuracy test in my own small experiment. I tried it by writing a paragraph about ice cream in a neutral tone. It told me: “Your text is likely to be written entirely by AI.”

Ice cream is a popular frozen dessert enjoyed by people all over the world. It is mostly eaten in warm to hot weather such as summer, but is enjoyed all year round. Ice cream comes in a variety of flavors and is often paired with toppings such as candies, nuts, fruits, or syrups.

But my deliberately bland paragraph is only the beginning. Another detector, confusingly named ZeroGPT, fell on its face when a Reddit user decided to run the US Constitution through it. According to ZeroGPT, the Constitution was 92.26% written by AI. Who knew the Philadelphia Convention relied so heavily on artificial intelligence when drafting the country’s laws? (If anything, I suppose that explains some of those amendments.)

One way to trick the detectors, however, is to run AI-generated text through an AI again. QuillBot, for example, paraphrases text for you, and students are already using it to dodge the checkers. If an AI detector is looking for how “average” a given text is to decide whether it was machine-written, having another AI add more variety to the wording throws a wrench into that system, as the sketch below illustrates. QuillBot comes up constantly in TikTok comment sections discussing AI detectors at school. Kids will find a way.
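Here’s a hand-wavy sketch of why paraphrasing can work. Some detectors reportedly also measure “burstiness”—how much sentence structure varies. The proxy below (variance in sentence length) and the sample texts are my own inventions for illustration, not QuillBot’s or any detector’s actual algorithm:

```python
import statistics

def burstiness(text: str) -> float:
    """Variance in sentence length: a rough proxy for the 'burstiness'
    some detectors measure. Uniform sentences read as more 'AI-like'
    under this metric; varied ones read as more 'human.'"""
    for mark in "!?":
        text = text.replace(mark, ".")
    sentences = [s for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pvariance(lengths)

uniform = "Ice cream is popular. It is eaten in summer. It has many flavors."
varied = ("Ice cream is popular. Although most people associate it with hot "
          "summer afternoons, it is enjoyed all year round. Flavors abound.")

print(burstiness(uniform))  # low variance -> 'AI-like' under this proxy
print(burstiness(varied))   # higher variance -> 'human-like'; a paraphraser
                            # pushes text in this direction
```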

AI detectors harm innocent students

All of these examples might seem like theory, but these checkers are not prototypes. They’re here, and they’re being used against real students. I’m sure the detectors have caught plenty of students who used tools like ChatGPT to cheat, but they also falsely accuse innocent students of the same, and the damage is real:

This tweet had a “good” resolution, as the instructor admitted the mistake and dropped the accusation. But in other situations, teachers treat AI checkers as gospel, ending the discussion at the first “AI-generated” result:

I won’t deny that we are facing a new world. Large language models mean that students can plug in an essay prompt and receive a fully written essay (of varying quality) in return. But perhaps the fact that students can game the system so easily is a sign that the system itself needs to change, rather than be patched with a stopgap that punishes innocent students just as easily as guilty ones.

Turnitin calls the moments when it flags human-written text as AI-generated “false positives.” Cute. The company stresses that it “does not define misconduct”; rather, it offers data and leaves the final say to the educator. That disclaimer seems lost on many, though: in their eyes, if an AI checker says you’re a cheater, you’re a cheater.
