
Editor’s Note: A version of this article was originally published on the author’s Substack on June 6, 2025. With edits to match Minding the Campus’s style guidelines, it is crossposted here with permission.
A new era is here. With ChatGPT, Claude, and Grok now widely available, artificial intelligence (AI) tools are starting to reshape how we write, learn, and work. These large language models (LLMs) can generate relatively high-quality essays, emails, and even poems with nothing more than a prompt. That’s both thrilling and unnerving—especially in higher education, where the stakes of knowing—or appearing to know—something are often high.
Some universities have responded by banning or restricting AI tools altogether. Schools in Alabama and New York, along with institutions like Cambridge and Imperial College London, are trying to contain what they see as a threat to academic integrity. But what if AI tools don’t just threaten higher education? What if they reveal something about what higher education really is?
Why Ban ChatGPT?
The main rationale behind banning AI is simple: students who outsource their writing won’t develop key skills. College, we’re told, is where students sharpen critical thinking, learn to write clearly, and build the intellectual foundation needed to thrive in a complex world. This is the “skills” explanation for higher ed.
Carnegie Mellon University says it aims to foster “deep disciplinary knowledge,” along with communication and leadership skills. Arizona State wants to graduate “critically thinking global citizens.” These goals track what many assume college is for: developing human capital.
Under this model, AI poses a clear problem. If students rely too much on ChatGPT to write their papers, they won’t learn to do it themselves. If AI writes your term paper, you might still get a degree—but you might not become the kind of person your degree is supposed to represent.
[RELATED: AI’s Higher Ed Takeover Is Not Inevitable]
Or Maybe It’s All About the Signal?
But what if that story is only half right? There’s another theory about college that casts a different light on ChatGPT: signaling theory.
In economics and evolutionary biology, signaling theory explains how people—and animals—communicate hard-to-observe traits through costly, hard-to-fake behaviors. Peacocks flaunt their feathers. Olympic athletes train for decades. People wear designer clothes or offer expensive engagement rings. The costliness of the signal is part of the point—it’s what makes it believable.
Michael Spence, who won a Nobel Prize for this idea, built his model around education. Most employers don’t really know how smart or diligent a job applicant is. So, they rely on signals—like degrees and GPAs—to reduce uncertainty. A diploma from a selective university says: “This person has enough intelligence, conscientiousness, and conformity to survive four years of institutional demands.”
That may sound cynical. But if you look at how students behave, it starts to make sense. Students often celebrate when class is canceled. They seek “easy A” classes. They worry more about grades than content retention. As economist Bryan Caplan noted in The Case Against Education, they’ll cheat—if they think they can get away with it.
If college were primarily about acquiring knowledge, these behaviors would be irrational. But if college is largely about signaling preexisting traits, then the behaviors make sense: what matters is getting the credential, not necessarily absorbing the content.
Enter ChatGPT
Which brings us back to ChatGPT.
If college is about learning skills, then AI is a major threat. But if college is mostly about signaling traits like intelligence and work ethic, then the game just changes a little. The signal evolves.
At schools that ban AI tools, students face a new kind of challenge: can you use AI in your work without getting caught? It turns out that many already are. A Columbia undergrad recently described how students are using ChatGPT to brainstorm, outline, or even co-write their papers before rewriting them to sound more “human.” The AI isn’t replacing students; it’s becoming a stealth tutor.
In this new world, the students who succeed are the ones who know how to integrate AI tools strategically and responsibly—enough to do well, but not so much that they get flagged by plagiarism software or professors. They’ll be the ones who understand the tool, not just use it.
In fact, being able to collaborate effectively with AI might itself become a new kind of signal: one that indicates tech-savviness, adaptability, and the ability to navigate complex institutional systems. And in a post-LLM economy, those might be exactly the traits employers want.
So What Happens Next?
As Caplan argues, signals don’t have to be perfect—they just have to be better than nothing. College still filters people, even if AI helps some students along the way. A degree still says something. It just might say something different now.
The smartest students won’t simply avoid using AI to cheat. They’ll use it to think more clearly, write more effectively, and work more efficiently. They’ll combine human insight with machine support. That’s a skill worth learning—and maybe worth signaling too.
Cover illustration by Jared Gould, created with Grok and based on the artwork that accompanied the author’s original essay.