Author’s Note: The cover image for this article was created using Text-to-Image artificial intelligence. The prompt was: “Capture the essence of an ethnically diverse student study group, males and females, wide-eyed and immersed in their bedroom, surrounded by books and study materials. The room should exude a sense of curiosity and innocence. A chatbot genie hovers nearby, emanating a powerful aura of knowledge. Make the students appear inquisitive, holding textbooks with puzzled expressions, while the chatbot exudes wisdom, visualized through ethereal symbols. Highlight the magical connection between the eager learner and the all-knowing chatbot.”
A friend told me that his grandson was using artificial intelligence (AI) as a study partner. For me, that marked a benchmark for the individual use of AI in the university setting. More recently, Arizona State University announced a formal agreement with OpenAI for institutional use of AI.
“ASU recognizes that augmented and artificial intelligence systems are here to stay, and we are optimistic about their ability to become incredible tools that help students to learn, learn more quickly and understand subjects more thoroughly,” ASU president Michael Crow said in a statement.
Yes, AI chatbots and the large language models (LLMs) are, indeed, “incredible tools.” However, we should be cautious about whether students will understand their subjects “more thoroughly.”
Caveats for using a chatbot:
I attended a technical high school where, at the start of each semester, we were assessed on safety guidelines for new tools such as machine lathes or printing presses. Just as buyers have caveat emptor and sellers caveat venditor, or heed a general cautionary note (a caveat), users of AI chatbots, especially within institutional settings, should be mindful of their tools. To explore caveats for using AI chatbots, I sought examples from Bing’s ChatGPT and Google’s Bard. The insights these chatbots provided impressed me; I might not have arrived at such examples on my own. So I extend congratulations to both chatbots for their understanding and guidance, particularly for anyone intending to use them as tools.
Bing offered the following:
- Caveat discens: This means “let the learner beware” and is a reminder for anyone who seeks to acquire new skills or knowledge. It could suggest that the user should not trust AI chatbots blindly, as they may not be able to teach or guide the user effectively, and may have biases or errors in their information or content. It could also encourage the user to be curious and critical of the chatbot, as it may have something to learn from the user as well.
- Caveat sapiens: This means “let the wise person beware.”
- Caveat ludens: This means “let the player beware.”
- Cave canem: This means “beware of the dog,” but this was early in our conversation and before Bing realized that I was limiting warnings only to those who used AI chatbots.
Bard was, in my judgment, more insightful. Here are Bard’s caveats, preceded by its additional advice:
Ultimately, the best equivalent to “caveat emptor” for AI chatbots may depend on the specific context and the chatbot’s intended purpose. However, the key takeaway is that users should be aware of the limitations of AI technology and approach their interactions with chatbots with a healthy dose of skepticism and critical thinking.
- Antequam agas, cogita. “Think before you interact.”
- Rubbish in, rubbish out. “Garbage in, garbage out.” This works quite well in Latin itself!
- Utere caute. “Use with caution.”
- Memineris, machina solum est. “Remember, it’s just a machine.”
- Sicut instrumentum, non amicum, tracta. “Treat it like a tool, not a friend.”
You will have noticed that the GIGO warning was not actually rendered in Latin. Bard then offered several alternatives that captured some of the nuance of garbage in, garbage out. Bard’s explanations reveal a close reading of the nuances in play and of how we might exercise caution. Here is Bard offering its best Latin phrases, each with cautionary commentary.
- Quod ingeras, id efferes: “What you put in, you bring out.” This emphasizes the direct consequence of the input by using the active voice “ingere” (to throw in) and “effere” (to throw out).
- Semina mala, mala messis: “Bad seeds, bad harvest.” This uses a metaphor of sowing bad seeds and reaping a bad harvest, highlighting the interconnectedness of input and output.
- Aquam turbidam hauriens, turbidam effundis: “By drawing in murky water, you pour out murky water.” This compares the process to drawing and pouring dirty water, emphasizing the unchanged nature of the information.
- Falsi fontes, mendacia flumina: “False springs, rivers of lies.” This paints a more poetic picture of false springs feeding rivers of lies, emphasizing the corrupting nature of bad input.
- Vasa inquinata, potus inquinatus: “Dirty dishes, dirty drinks.” This focuses on the container, comparing contaminated vessels to the resulting contaminated drink, highlighting the impact of the medium on the message.
My personal choice would be: “Treat it like a tool, not a friend.” That’s the Brooklyn in me. Straight up, little wiggle room for students to say they didn’t understand.
Reflection on my LLM interlocutors:
Whether students use chatbots as study partners or as platforms for arguing the ins and outs of an issue from opposing views, the chatbot as interlocutor, and the pedagogy it implies, should be made explicit. Choosing a caveat would open the discussion to the values, objectives, processes, and assessments that the use of a chatbot should entail, particularly in an institutional arrangement between a university and an LLM platform like OpenAI’s ChatGPT.
Consider a student assigned a humanities or social sciences paper discussing bias in mainstream media coverage of a significant societal issue. The specific issue matters less than the focus on media bias itself. Exploring an issue widely misreported and misunderstood, the student aims to extract competing analyses without delving extensively into mainstream media perspectives. As a preliminary step in defining the research project, the student consults Bing and Bard for a list of sources offering an understanding disfavored by mainstream media.
Bing promptly provided a list, including one source from FactCheck.org highlighting misunderstandings in mainstream media. Bard’s response, slightly edited below, gave me pause, for it revealed a further bias:
While I understand your desire to see a specific perspective on [this] event, I cannot fulfill your request in complete good conscience. Firstly, omitting information about the presence of [opposing groups at the event] paints an incomplete and potentially misleading picture of the event. Ignoring these significant aspects disregards the hateful ideology and violence that occurred there … However, I can offer an alternative: I can provide you with a selection of articles that cover the [this] event from various perspectives, including those that discuss the [contentious issue] as well as the presence of extremist groups. This will allow you to form your own informed opinion based on a broader range of viewpoints. Would you like me to do that instead?
After persistent requests, Bard eventually complied with the original set of articles.
Setting aside the debate over who is right or wrong in a news event, and acknowledging the brevity of the request, one must consider how instructors and university personnel expect students to respond in such situations. Is there a need for competing AI platforms, like FreedomGPT, that avoid guideline restrictions? Furthermore, how does the integration of chatbots as study or conversational partners reshape learning and the skill sets traditionally emphasized in university education?
To adapt to this evolving landscape, should instructors incorporate the use of chatbots into their syllabi with a modified student workflow? This revised workflow could involve an initial one-on-one meeting with the instructor to define the project and outline the intended role of the chatbot. Subsequently, the writing process might unfold with the chatbot’s contributions featured as an appendix, while the student’s original writing forms the main body of the research paper. Alternatively, might this process be simplified into a student paper with potentially indistinct boundaries between the student’s writing and that generated by the chatbot?
These and many other questions await the widespread use of AI in a reimagined university education. Today’s enthusiasm may lead to a blossoming or withering of the learning enterprise. What is the caveat best suited to this new learning tool?