De-Computing Is Not the Answer to Our AI Problem

Students' use of artificial intelligence (AI) chatbots to cheat is a real concern. Consider the new Cluely AI tool:

With the help of Cluely AI, students can now Cheat on Everything—even live exams—without ever lifting a finger. A tiny Bluetooth earpiece stays hidden, while the student’s phone remains tucked away in a pocket or bag, quietly running Cluely AI in the background.

When an examiner asks a question, Cluely AI picks up the audio through the phone’s hidden mic, instantly converts it to text, and either processes it directly or sends it to an advanced AI like ChatGPT. Within seconds, a clear and accurate answer is generated.

That answer is then read aloud through the Bluetooth earpiece, so the student hears it discreetly and can repeat it confidently—sounding well-prepared even if they didn’t study at all. There’s no typing, no touching the device, and no obvious signs—making it nearly impossible for anyone to detect.

This is AI technology creep—the dark side of learning. It offers a shortcut around the real work of education: not just memorizing facts, but understanding how they interlock to form disciplinary knowledge. We can expect more of this shadow AI to seep into the academy. Faculty responses have varied. Some have turned to old-school methods like blue books to guard against its influence. Others, however, have reluctantly begun exploring ways to incorporate AI into the curriculum. Consider Michigan Law School: after banning the use of AI in admissions essays in 2023, it reversed course and made AI usage mandatory for the incoming class of 2025.

‘TO BE ANSWERED USING GENERATIVE AI: How much do you use generative AI tools such as ChatGPT right now? What’s your prediction for how much you will use them by the time you graduate from law school? Why?’ the prompt asks.

[RELATED: Universities Are Racing Toward AI. Is Anyone Watching the Road?]

What we intend for education and learning is often missing from this discussion of whether or not to incorporate AI. Jacob Riyeff, Marquette University’s Director of Academic Integrity, suggests de-computing, which he says would allow us to center on “[a] human-scale education, not an industrial-scale education (let’s recall over and over that computers are industrial technology).”

I sympathize with Riyeff’s quest for a human-scale education. However, I view “human-scale” as more malleable than Riyeff seems to allow. Human culture has, for thousands of years, adapted to new technology. The learned aspect of humanity is malleable, even with AI chatbots. AI is indeed a different technology, but technology nevertheless.

To better understand Riyeff’s concern, consider this example from one of his classroom AI assignments:

Another student last term in the Critical AI class prompted Microsoft Copilot to give them quotations from an essay, which it mechanically and probabilistically did. They proceeded to base their three-page argument on these quotations, none of which said anything like what the author in question actually said (not even the same topic); their argument was based in irreality. We cannot scrutinize reality together if we cannot see reality. And many of our students (and colleagues) are, at least at times, not seeing reality right now. They’re seeing probabilistic text as ‘good enough’ as, or conflated with, reality.

Apparently, the student did not thoughtfully check the chatbot’s reply against the essay it purported to quote. The disconnect with the actual author’s intent could have been one of those AI hallucinations; it could also have been carelessness on the student’s part, whether in the prompt itself or in failing to verify the chatbot’s reply. Perhaps the real issue is the proper use of chatbots. A transformational use that keeps the human in the loop would likely have served better than the apparent transactional, once-and-done use, with the student engaging the chatbot as an interlocutor rather than treating its output as an article copied from an encyclopedia.

I don’t have enough information from Riyeff’s account to fully assess his complaint. However, it’s reasonable to note that many who use AI for research or essay writing settle for “good enough” AI-generated responses rather than rigorously verifying or refining them.

[RELATED: Worried About AI? Study the Humanities]

However, Riyeff fails to appreciate how this technology can enable a human–AI symbiosis.

Symbiosis? Consider two ways of conceptualizing human use of AI technology. Riyeff portrays this technology as probabilistic and mechanical. Probabilistic? Yes. Mechanical? Yes and no. The generation of replies via computation and algorithms aligns with a general understanding of “mechanical.” However, that view understates the complexity of AI: its emergent properties, the coherence of its text generation (often on par with or even superior to that of educated humans), the way it tailors responses to the human prompter, and the structure of its probabilistic outputs. “Mechanical” also fails to capture the seeming humanity of the interaction—and yes, faculty themselves sometimes struggle to convey such humanity to their students.

Perhaps that symbiosis is a delusion. If so, we should ask: so what? AI reflects back to us the human data on which it has been trained, trading on the anthropomorphization, personification, and humanization that define much of human experience. We see this reflected in the human worship of deities, charismatic entertainers, politicians, and inspiring teachers. If this is the AI problem that bothers Riyeff, then it deserves a far deeper discussion than simply dismissing it as “irreality.” It raises the question of who we are in relation to the Other as AI. Is AI simply a thing, an object, a tool? Or is it something more—especially considering how humans generally perceive “reality”?

Riyeff truncates the contours of learning, even learning with AI technology. I suggest he reconsider what “human-scale” means now, and what it will mean, in an age of AI. Perhaps AI is repugnant in many ways, but unfortunately, so is a good portion of humanity.

As I re-read Riyeff’s opening paragraph, which quotes Pope John Paul II, I see a different outcome than the one Riyeff draws from his experience with AI:

I work at Marquette University. As a Roman Catholic, Jesuit university, we’re called to be an academic community that, as Pope John Paul II wrote, ‘scrutinize[s] reality with the methods proper to each academic discipline.’ That’s a tall order, and I remain in the academy, for all its problems, because I find that job description to be the best one on offer, particularly as we have the honor of practicing this scrutinizing along with ever-renewing groups of students.

Perhaps with his next group of students, he will scrutinize AI reality once more, but in terms of a human–AI symbiosis.


Image: “Are you sure you want to shut down your computer now?” by Antonio Cavedoni on Flickr
