AI’s Higher Ed Takeover Is Not Inevitable

The proliferation of artificial intelligence (AI) on campus has engendered a chorus of doom and gloom among conservative commentators. Daily Wire host Matt Walsh, citing a survey reported in the Guardian showing a sharp rise in AI cheating, said recently that “AI has killed what was left of the education system. It’s over.”

But is that really true? I suppose it could be. As a college writing teacher, I’ve worried about it myself. But I don’t believe the demise of higher education at the hands of AI is inevitable, and I don’t plan to give up without a fight.

Not that I’m opposed to AI, per se. I think it has its place. I’m sure there are many professions in which an ability to use AI well could be an advantage, even a necessity. My son the investment banker relies on AI to help generate the numerous mundane, boilerplate reports that are the bane of his existence.

He can do that effectively because he already knows how to write and think, skills he developed in his required undergraduate humanities courses, among other places. He utilizes AI as a tool to improve efficiency, not as a way to outsource his brain.

That’s why I believe we are missing the mark if we give up on teaching thinking and writing skills, assuming students will inevitably cheat anyway, and there’s nothing we can do about it because AI cheating is so difficult to detect. Even worse, I believe, is trying to teach writing using AI. To my way of thinking, that is not teaching writing at all. It’s acquiescing to the notion that the student’s original thoughts and unique voice are fundamentally irrelevant.

I refuse to accept that. I believe the most important thing we can do, as humanities professors in particular, is help students learn to develop and organize their ideas and then express them in a way that reflects, to some extent, their own personality. After all, the “humanities” are supposedly the study of what makes us human. Sounding like a robot is the opposite of that.

To be fair, the problem of students sounding like robots has been around as long as I’ve been teaching. I had a name for it long before AI came along: “AP Syndrome.” To maximize test scores, AP students are taught to organize their essays using prescribed formulas, sprinkle their sentences with polysyllabic words and trite phrases (“In today’s world…”), and at times be intentionally abstruse.

[RELATED: AI: Friend or Foe?]

Perhaps that’s why I haven’t yet despaired over AI. The strategies I’ve long used to help students overcome AP Syndrome seem to work well against overreliance on AI, too. It’s basically a three-pronged attack.

First, I tell students early on that I don’t want them to use AI in my class and explain what I’m trying to accomplish instead: In a world full of robots, I want them to sound like human beings, unique and discrete individuals, each with their own thoughts, perspectives, and ways of expressing themselves. I also point out some of the well-known problems with AI, such as the fact that it must be instructed very carefully and has been known to make stuff up, not to mention that, according to a recent MIT study, AI is actually making us stupider.

Second, I design my writing assignments to be as AI-proof as possible. Clever students might be able to get around this firewall, but I’m at least going to make it difficult for them. As I said, “programming” AI can be tricky, so I try to create assignments that are highly detailed and specific. A student can’t just tell a chatbot, “Write an essay comparing Hamlet and Macbeth,” because the result almost certainly won’t fulfill the exact assignment I gave. Even if I can’t prove the essay was written with AI, I can penalize it for not following directions.

Which brings me to my third prong: Grading. I tell my students up front that, if their essay sounds like it was written by a robot—whether they used AI or just have an advanced case of AP Syndrome—it’s going to get a lower grade. One of the main things I look for when evaluating their writing is authenticity. I want to hear their voice, even if it’s imperfect at times. And when I do hear their voice, I reward that. I’d much rather read an essay that is interesting but full of human mistakes than one written with dry, robotic precision.

Will this strategy work long-term as AI becomes increasingly adept at mimicking human speech? I don’t know. Perhaps the day will come when I can no longer tell the difference between a text produced by a machine and one written by a person. But that day is not today, and until then, I intend to keep fighting the good fight.

Follow Rob Jenkins on X.


Cover by Jared Gould using ChatGPT

Author

  • Rob Jenkins is an associate professor of English at Georgia State University – Perimeter College and a Higher Education Fellow at Campus Reform. He is the author or co-author of six books, including Think Better, Write Better, Welcome to My Classroom, and The 9 Virtues of Exceptional Leaders. In addition to Campus Reform Online, he has written for the Brownstone Institute, Townhall, The Daily Wire, American Thinker, PJ Media, The James G. Martin Center for Academic Renewal, and The Chronicle of Higher Education. The opinions expressed here are his own.
