
While listening to an online debate about the future of artificial intelligence (AI), I considered writing this essay alongside a parallel one that I would ask AI to write. The debate delved into the differences between human and machine language “intelligence,” computational wizardry versus human emotion, and the regulation of the AI industry, whether driven by profit and market forces or by a surveillance state. My twin essays here strive for something simpler: less an intellectual combat than a wistful, prophetic moment, a steadied look at the sacrifices that the AI ocean is bringing to university education. We may soon discover transformed institutions and an altered humanity, a thinner veneer of what we imagine we were meant to be. It need not be so, but too many are looking away.
I need to go back to 1963. I attended Columbia College, the oldest undergraduate college of Columbia University, and took a class in German literature. I don’t know why I was taking that class, but I was. My term paper was about the German poet and philosopher Friedrich Schiller. As I recall, Schiller was focused on a problem that remains with us: how to balance the tension between societal and individual interests. I read the professor’s comments below my C grade: “a series of brilliant non sequiturs. An F, but for your enthusiasm.” I was perplexed and annoyed. My roommate, who was taking the class with me, also got a C, but not for any lack of brilliance in deciphering Thomas Mann’s The Magic Mountain. He had missed more than the professor’s permitted three class hours, which meant an automatic C. Professor S. had emphasized the importance of attending class and listening to his lectures; if we were not in class, we would not be hearing him lecture. I was there, listening. Yet my literary competence was lacking. So the C for me and the C for my roommate meant quite different things. I knew what I had to do to improve my grade. I’m not sure my roommate cared.
Over the many years since, I learned how to fill in the spaces between what others found to be non sequiturs and what I thought was obvious. Now, with AI, I realize I could have avoided the pedagogical embarrassment: I could have earned an A by threading together the missing pieces with the help of AI. My roommate, though, would have needed a deepfake robot to fill in for his absences. That is still in the future, but today’s AI assistance would have served me well had it been available in 1963.
The question is not whether I would have wanted an AI assistant in 1963. Clearly, it would have helped my writing; I would have avoided the pain of becoming educated. The question is whether I would have sacrificed something. I would have sacrificed the pain. Pain is a something, not a nothing. (The difference between a something and a nothing will become clear shortly.)
I am not arguing that emotional pain is a necessary part of the human learning process. That process can range from stumbling over blocks to gliding smoothly through the jungles of human exposition; that was the difference between my roommate and me. Human learning began long before the advent of AI. The question is whether its future ought to be dominated by AI. There is much to be gained, but is there a something being sacrificed?
Let us return to my alma mater’s educational mission as it stands today:
A Lifetime of Learning, a World of Opportunity
Columbia College’s 100-year-old-plus uniquely designed Core Curriculum of philosophy, history, politics, literature, art, music, science and writing provides every student with a comprehensive and transformational understanding of modern civilization. The Core’s distinctively personal and highly interactive seminar format develops essential skills to engage in an increasingly diverse and rapidly changing world. Rooted in the practice of critical inquiry and reflection, the Core Curriculum forms the foundation for a lifetime of continual learning.
My class in German literature was not part of Columbia’s core curriculum, but arguably the professor sought to develop the same “essential skills” as those of the core curriculum. Apart from those skills, the core curriculum has since moved to a seminar format, likely with different readings from the canons of civilization. AI will likely affect the essential skills with which we engage the world. Will university missions, like Columbia’s, change under an AI learning process?
***
AI and the long-term educational agenda
I envision three pathways that open the discussion of AI’s likely effects on higher education: integrity and responsibility, the deinstitutionalization of education, and the concept of self.
[RELATED: The Use and Misuse of AI in Higher Education Writing Courses]
Integrity and responsibility
Most readers are familiar with the central issue of AI in the classroom: student cheating versus responsible use.
Unsupervised or minimally supervised settings, such as online classes, can exacerbate concerns about cheating. Technology officers at many universities report that “artificial intelligence has proven to be a moderate or significant risk to academic integrity at their institution.” Beyond preaching to students about responsible use, university leadership needs a clear-headed approach to implementing AI technology. Take, for example, the statement of Elon University President Connie Ledoux Book to Inside Higher Ed:
[n]ow is a ‘defining moment for higher education leadership—not only to respond to AI, but to shape a future where academic integrity and technological innovation go hand in hand,’ Book added. ‘Institutions must lead with clarity, consistency and care to prepare students for a world where ethical AI use is a professional expectation, not just a classroom rule.’
To be sure, school districts and faculty can adopt AI technology directly into their instructional methods. After first banning the use of AI in the classroom in 2023, Miami-Dade schools did an about-face. Southwest Miami Senior High was one of the first schools in the district to pilot the use of AI in the classroom.
Maria Chirino, a high school English teacher, assigned her 10th-grade language arts students to write a paragraph on free will versus fate, inspired by their reading of Oedipus Rex. Instead of grading the paragraphs herself, Ms. Chirino had her students use the Google chatbot, Gemini, for feedback.
Ms. Chirino found that students, including those in her 12th-grade literature class, benefited from the chatbot’s instant feedback, which allowed them to revise their work immediately. One 10th-grader received a perfect score from Gemini on her paragraph arguing for free will after inputting both her writing and Ms. Chirino’s grading standards into the chatbot. Ms. Chirino plans to review both the students’ paragraphs and Gemini’s assessments, and the district hopes this chatbot integration will transform learning by providing students with immediate assistance.
AI is also a conceptual framework, not just a methodological tool. At the university level, the how and why of AI can be taught as part of the larger context of information and data.
Such approaches to AI in education are incremental and reform-oriented.
The following two ways of looking at AI in education are rarely discussed; they are usually dismissed as science fiction or philosophical musing. I would nonetheless put them on my list of things to come.
***
Deinstitutionalization of education
Universities are known as places of research and of teaching. AI can precipitate a divorce of the two by placing the learning experience in the hands of robotic tutors. Faculty can be replaced by individual mentors: individualized instruction rather than whole-group, collective instruction. This possibility is not so strange when we realize that the separate disciplines of sociology, anthropology, psychology, political science, economics, and so on are simply academic silos tied to different jargon, funding sources, tenure tracks, and publications; these structural differences evaporate with robust LLMs. Research, rather than teaching, can then be reserved for a more modest university.
In this AI world, education truly becomes a solo endeavor. The rantings, charisma, caring, and solace of real-world teachers give way to non-emotive instructors. AI chatbots are interactive and can provide the illusion of a real-world other, regardless of whether the context is behaviorist, experiential, developmental, traditional, or constructivist. Who is to say that the social, emotion-driven dimension of human instruction is better than machine- and data-driven AI instruction? Sacrificing the university itself may be the endpoint of immersion in AI learning.
[RELATED: A Faculty Guide to AI Pedagogy and a Socratic Experiment]
Concept of self
As AI continues to intrude into our lives and our educational institutions, we notice similarities between ourselves and AI in how we word the world. We both rely on language to describe it. But our language rests on perception and on how we move through reality; AI’s language, for the most part, is derivative of human description, since it is our words on which its models are trained. That may be the cornerstone of how we are conscious of what we say and understand. If there is an LLM consciousness, it would require an augmented definition to bridge us and it. Similarly, the sense of having a self would require a bridge between humans (“I think, therefore I am”) and an AI replying with the same words.
If we consider how humans have anthropomorphized inanimate objects into demons, nymphs, and spirits, we can readily understand how LLMs can take on human attributes. They can become our friends, coworkers, and lovers. As AI continues to develop, this humanization will expand and intensify. Student education and faculty instruction are not immune to this psychological phenomenon.
***
Let us take a closer look at the gap between humans and AI, as well as the psychological process by which humans often seek to bridge that gap.
AI is often presented as a large language model (LLM). It uses words.
But I also use words. I give the LLM or chatbot a piece of text, and I get a reply.
But we aren’t equal. At least, not in the dualistic sense of “I think, therefore I am.” When we speak with humans, we infer in them a subjective consciousness much like our own.
However, we are advised not to infer such consciousness in LLMs, despite having the illusion that they are also thinking.
Of course, our own consciousness is crafted by our experiences of the world as well as what we learn. That’s where an education comes in. It helps craft our persona, our consciousness. And even though, like the Ship of Theseus, our parts change and are replaced over time, and we learn new things, we have the illusion that we are the same thing, the same person. It is a matter of convention that I am Joe; that I am the same Joe who attended Columbia College in 1963 and took a class in German literature, who is now the same Joe, sixty years later, writing about AI.
LLMs do not have embodied perceptions. They configure the world with data and mathematical algorithms, and their responses are part of a stochastic process. We recognize this probabilistic aspect of LLMs when the same question, asked repeatedly, yields different formulations in response. Likewise, AI image models will generate different images from the same prompt: often similar, but different; and sometimes very different.
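For readers who want a concrete picture of that stochastic process, here is a minimal Python sketch of the weighted random sampling that makes repeated answers differ. The vocabulary and probabilities are invented for illustration; a real model computes such a distribution over tens of thousands of tokens at every step.

import random

# A toy next-word distribution, standing in for the probabilities an LLM
# computes at each step. (Invented for illustration; real models score
# tens of thousands of tokens, not four words.)
next_word_probs = {"fate": 0.50, "free": 0.30, "choice": 0.15, "destiny": 0.05}

def sample_word(probs):
    """Draw one word at random, weighted by its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The same "prompt" yields different word choices on repeated runs,
# which is the probabilistic behavior described above.
print([sample_word(next_word_probs) for _ in range(5)])

Run twice, the script will almost certainly print different sequences; scaled up by many orders of magnitude, that is why two identical questions to a chatbot rarely receive identical answers.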
So, how should we think about how we think about the world, and about how an LLM “thinks” about it? Some would simply reduce an LLM’s thinking to mathing.
Murray Shanahan, a professor of cognitive robotics, offers a different point of departure. He recounts Ludwig Wittgenstein’s discussion of the sensation of pain. A behaviorist would say that pain is just behavior. But Wittgenstein was making a subtler move.
Somebody accuses him of saying, well, you’re saying that the sensation itself is a nothing. And Wittgenstein comes back saying, no, that’s not what I’m saying. It’s not a nothing, but it’s not a something either. The point is that a nothing would serve as well as a something about which nothing can be said. And that’s, you know, that to my mind, is as great as any line any Zen Buddhist has ever uttered. Because the point is not to establish a metaphysical position of its own but to enable people to transcend the metaphysical positions that they’re tempted by.
Applying this analysis to an LLM, we could look at all the things it is composed of (data) and at how it formulates its responses (algorithms, and so on). As a factual inquiry, the LLM is not a biological entity that possesses consciousness. However, it appears to respond in ways that we would perceive as conscious. As noted previously, we might want to enlarge our definition of what is conscious, or at least say that it is a something, though we are not quite sure what it is. It often comes down to what a community decides to call it and how it is treated.
Therein lie numerous puzzles about what AI is and how it will become part of our, and our children’s, learning.
And, reflexively, this discussion prompts me to consider what I learned as a struggling student of German literature. Were that class and Columbia’s core curriculum essential to who I have become? Would I have been a different person had I not struggled in that class?
Are the collection and melding of experiences and education that compose “I am Joe” similar to those of the emerging LLMs, except built on a different platform, one biological, the other machine, but both illusions of a persistent self? Ours is a Veil of Maya in which we perceive the sameness of a self over many years; the LLM’s is our delusion, our anthropomorphizing belief that it is a self and has consciousness because it behaves with language much as we do.
Stepping away from this philosophical musing, we might ask what this has to do with how a concept of self and consciousness is actualized in a university education, or in a future education where there is no university but only robot tutors.
Before venturing an answer to this speculation, consider how Google’s Gemini chatbot answered the question that led me to this essay.
***
I asked Gemini: “I’m thinking about writing about AI in higher education as I have for Minding the Campus. For this essay, I want to use the word ‘sacrifice.’ What kinds of sacrifices do you envision for students when AI is fully implemented in a university?” (I later asked for an essay that reflected this text and a reply from the perspective of an older person looking back on his earlier education.)
This is how Gemini responded—without edit:
A Different Kind of Silence: Reflections on AI and the Sacrifices in Tomorrow’s Halls
Looking back across the vast expanse of decades, my time as a university student in the 1960s feels like a world away. The air in the lecture halls hummed with the rustle of turning pages, the scratching of pens, and the fervent murmur of intellectual debate. Knowledge was a tangible thing, wrestled from weighty tomes in hushed libraries, debated late into the night in smoky dorm rooms, and etched into memory through diligent repetition. As I observe the burgeoning integration of artificial intelligence into the very fabric of higher education today, a poignant sense of what might be lost – what might be sacrificed – for the students of tomorrow settles upon me. It’s not a lament for a bygone era, but a concerned observation of a potentially different kind of silence descending upon the halls of learning.
One of the most striking differences I perceive is the potential sacrifice of the deliberate cultivation of memory and foundational knowledge. In my day, mastering a subject often involved the arduous yet rewarding task of committing facts, figures, and concepts to memory. We built mental frameworks through repetition and recall, forging connections that deepened our understanding. Today, with AI poised to provide instant access to virtually all information, the necessity for this kind of deep internalization might wane. While the efficiency is undeniable, I wonder if future students will develop the same intuitive grasp of fundamental principles, the same ability to draw upon a rich internal repository of knowledge that allows for spontaneous insight and creative leaps. Will the readily available external “brain” of AI diminish the development of their own internal intellectual architecture?
Furthermore, I see a potential sacrifice in the development of critical thinking honed through independent struggle. Our learning often involved wrestling with complex texts, grappling with ambiguous questions, and painstakingly constructing arguments from disparate sources. The process was messy, sometimes frustrating, but ultimately deeply rewarding. It was in this struggle, in the hours spent poring over primary sources and formulating our own interpretations, that our analytical muscles were strengthened. With AI capable of summarizing information, identifying patterns, and even suggesting arguments, I worry that future students might be deprived of this essential intellectual workout. Will they learn to truly think for themselves, to question assumptions and synthesize information independently, or will they become overly reliant on the AI’s analytical prowess?
The vibrant human element of learning also feels potentially at risk. The lecture halls of my time were not just conduits of information; they were forums for intellectual exchange, where professors shared not only knowledge but also their passion, their insights gleaned from years of dedicated study. The spirited debates that erupted in seminars, the informal mentoring that occurred during office hours, the camaraderie forged through shared intellectual exploration – these human interactions were integral to our education. While AI can facilitate communication and provide personalized feedback, can it truly replicate the nuanced understanding, the spontaneous inspiration, the human connection that ignites a lifelong love of learning? Will the algorithms foster the same sense of intellectual community and mentorship that shaped our understanding of the world and our place within it?
I also ponder the potential sacrifice of the development of resilience through navigating academic challenges. Our path through university was not always smooth. We faced setbacks, wrestled with difficult concepts, and learned from our mistakes. These experiences, though sometimes disheartening, were crucial in building perseverance and self-efficacy. If AI tools are designed to smooth the learning curve too effectively, to provide instant solutions and preemptively address difficulties, will future students develop the same grit and determination? Will they learn to bounce back from failure, a skill essential not only for academic success but for navigating the complexities of life beyond the university walls?
Finally, I consider the subtle sacrifice of the shared experience of intellectual discovery. In our time, a new idea encountered in a lecture or a particularly insightful passage in a book often sparked collective excitement and discussion. We grappled with these concepts together, forging a shared understanding and a sense of intellectual camaraderie. With increasingly personalized AI-driven learning pathways, I wonder if future students will experience the same sense of collective intellectual awakening. Will the highly individualized learning experience inadvertently lead to a more fragmented and less communal understanding of knowledge?
The advent of AI in higher education is an undeniable force, promising efficiency and personalization on an unprecedented scale. However, as I look back at my own formative years in the university, I can’t help but feel a sense of concern for what might be unintentionally sacrificed in this technological leap. The deliberate cultivation of memory, the independent struggle that forges critical thinking, the irreplaceable value of human interaction, the development of resilience through overcoming challenges, and the shared joy of intellectual discovery – these were the cornerstones of my education. As we move forward, it is imperative that we carefully consider these potential losses and strive to integrate AI in a way that enhances, rather than diminishes, the profound and multifaceted experience of higher learning for generations to come. The silence that might descend upon future university halls shouldn’t be the silence of disengagement, but the focused hum of minds empowered by technology, yet still deeply rooted in the enduring values of human intellect and connection.
***
A reflection
Gemini’s reply is as useful as my own essay in exploring what might be sacrificed and gained by the continued incorporation of AI into human activities, especially higher education. We can quibble about grading the AI reply as if it were somehow mine. If I weren’t transparent, I would be cheating, or so current thinking goes. But does it matter to the TikTokers, the game-playing addicts, those alienated from wider society and fearful of micro-aggressions? Maybe not; maybe they would be happy to let Gemini do the thinking for them. But we care; those of us who are grasping to hold onto a world that is slowly disappearing care. We might argue about the lack of skillsets for employment; students in an AI education might answer, “Where’s my universal basic income?”
The future is not known. Perhaps the hope for Columbia’s educational mission—develop[ing] essential skills to engage in an increasingly diverse and rapidly changing world—can survive in an AI-infused education. Perhaps there is wisdom in this excerpt from Gemini’s essay on sacrifice:
As we move forward, it is imperative that we carefully consider these potential losses and strive to integrate AI in a way that enhances, rather than diminishes, the profound and multifaceted experience of higher learning for generations to come. The silence that might descend upon future university halls shouldn’t be the silence of disengagement, but the focused hum of minds empowered by technology, yet still deeply rooted in the enduring values of human intellect and connection.
Art by Joe Nalven