The AI Puzzle: When Machines Mistake the US Constitution for an AI Creation

According to a recent report by Ars Technica, AI detectors are identifying the United States Constitution, a fundamental document of American democracy, as a creation of artificial intelligence.

This unexpected outcome has led to a wave of bemusement and confusion, with social media users joking that the Founding Fathers must have been futuristic bots. The same phenomenon has been observed with biblical texts, further highlighting the fallibility of these AI tools.

The Ars Technica report also brings to light the use of AI detectors in educational settings. There have been instances where overzealous professors have failed entire classes on the suspicion that students used AI writing tools. This has fueled a crisis in education, with teachers scrambling to preserve the status quo and keep relying on essays to gauge student mastery of a topic.

However, the report suggests that these AI detectors, including GPTZero, ZeroGPT, and OpenAI’s Text Classifier, are not entirely reliable due to their tendency to give false positives. This raises concerns about their use in educational settings and the potential for unjust accusations against students.

The crux of the issue, as explained by Ars Technica, lies in the methodology employed by AI detectors. These tools are trained on vast volumes of human-written and AI-generated text. They then use metrics like “perplexity” to judge whether a piece of writing is more likely to be human-written or AI-generated.

Perplexity, in the realm of machine learning, measures how surprising a piece of text is to a language model, that is, how much it deviates from the patterns the model learned during training. The theory is that AI models like ChatGPT will naturally gravitate toward what they know best, which is their training data. The more closely a passage tracks those familiar patterns, the lower its perplexity score.
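In its standard formulation (the report does not spell out the exact formula each detector uses, so this is the textbook definition), perplexity is the exponential of the average negative log-likelihood the model assigns to each token of the text:

$$
\mathrm{PPL}(x_1, \dots, x_N) = \exp\left(-\frac{1}{N} \sum_{i=1}^{N} \log p(x_i \mid x_1, \dots, x_{i-1})\right)
$$

A low value means the model found each successive token easy to predict; a high value means the text kept surprising it.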

However, the catch is that humans can also write with low perplexity, especially when adopting a formal style, as seen in legal or academic writing. Moreover, many phrases we use are surprisingly common, further blurring the lines between human and AI-generated text.
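To make the failure mode concrete, here is a minimal sketch of a perplexity-based check. It uses the open GPT-2 model from Hugging Face's transformers library as a stand-in for whatever models the commercial detectors actually run, and the threshold is a made-up number chosen purely for illustration; this is not any real detector's implementation.

```python
# Minimal perplexity-based "detector" sketch (illustrative only).
# Assumptions: GPT-2 stands in for the detectors' unknown models,
# and PPL_THRESHOLD is an arbitrary, hypothetical cutoff.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return exp(average negative log-likelihood) of the text under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the
        # mean cross-entropy of predicting each next token.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

PPL_THRESHOLD = 50.0  # hypothetical cutoff, not taken from any real detector

def naive_verdict(text: str) -> str:
    return "flagged as AI-generated" if perplexity(text) < PPL_THRESHOLD else "looks human"

# Formal, well-worn prose such as the Constitution's preamble tends to be
# highly predictable to a language model, so a naive threshold like this
# is liable to flag it, which is exactly the false-positive failure described above.
preamble = ("We the People of the United States, in Order to form a more "
            "perfect Union, establish Justice, insure domestic Tranquility...")
print(naive_verdict(preamble))
```

The point of the sketch is not the specific numbers but the shape of the problem: any rule that equates "predictable" with "machine-written" will inevitably sweep up famous, formal, or formulaic human prose.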

The recent misclassification of the US Constitution and biblical texts by AI detectors underscores the limitations of these tools. While they may seem like a convenient solution to detecting AI-generated writing, their propensity for false positives suggests that they cannot be trusted blindly.

Featured Image Credit: Photo / illustration by “geralt” via Pixabay
