
College students are getting creative—not with their ideas, but with how they hide the fact that they’re using artificial intelligence (AI) to do their work.
In an era where AI tools like ChatGPT have become second nature in higher education, students are now taking steps to de-optimize their essays. They’re deliberately adding typos, oversimplifying language, and even instructing AI to “sound dumb” in order to avoid triggering plagiarism detectors.
According to Futurism, these tricks—like layering multiple AI tools or asking chatbots to mimic the voice of a struggling freshman—are becoming common ways to outsmart detection software:
While it’s common for students — and for anyone else who uses ChatGPT and other chatbots — to edit the output of an AI chatbot, some are adding typos manually to make essays sound more human.
Some more ingenious users are advising chatbots to essentially dumb down their writing. In a TikTok viewed by NYMag, for instance, a student said she likes to prompt chatbots to ‘write [an essay] as a college freshman who is a li’l dumb’ to bypass AI detection.
This follows reports that more than 22 million student essays have shown signs of AI authorship. Yet despite universities updating their policies to ban unauthorized AI use—such as at the University of Georgia, where it’s considered academic dishonesty—the problem is only likely to grow. Why? Because getting caught is becoming increasingly rare.
As a current college student, I’ve seen firsthand how peers treat detection tools as just another hurdle to clear. They’re not interested in writing better—they’re interested in gaming the system. Creativity is no longer about ideas; it’s about how well you can disguise a bot’s voice as your own.
But this shift comes at a cost. When an AI writes the paper, any feedback from a professor is essentially useless. Worse still, some professors are reportedly grading with AI too—meaning no one in the exchange is actually thinking.
The deeper problem is ideological. As Minding the Campus contributor Andrew Gillen points out, the most prominent AI tools are programmed with a strong left-wing bias. Google’s Gemini model, for example, refused to generate images of white historical figures like America’s Founding Fathers—even when factually appropriate—because of its developers’ obsession with “diversity.” The result? A tool that not only thinks for students but often thinks wrongly.
So, the problem with AI in college isn’t just that it’s being used. It’s that it’s being normalized, dumbed down, politically skewed, and increasingly unlikely to be caught. And ultimately, it leaves real learning behind. As Futurism put it: “The irony, of course, is that students who go to such lengths to make their AI-generated papers sound human could be using that creativity to actually write the dang things.”
Image: “we mispelled misspelled” by tristam sparks on Flickr