Don’t be a Luddite with ChatGPT

Historically, I’ve been a late adopter of technology. I was one of the last people I know to get a cell phone. And I continued to pay for gas inside the gas station for years after everyone else started paying at the pump with a credit card. But recognizing my own Luddite tendencies, I try to deliberately challenge my biases every once in a while.

ChatGPT, the newest general-purpose chatbot, seemed like a good test.

I was skeptical of ChatGPT before I even knew it existed. Like you, I have encountered numerous special-purpose chatbots on various websites. In general, I found engaging with these chatbots unhelpful, annoying, and time-wasting. No matter how trivial my request, the responses left me with the distinct impression that I was either crazy or being gaslit.

My skepticism was reinforced by some early reviews, such as that of Paul T. von Hippel, who wrote,

ChatGPT is often wrong but never in doubt. It acts like an expert, and sometimes it can provide a convincing impersonation of one. But often it is a kind of b.s. artist, mixing truth, error, and fabrication in a way that can sound convincing unless you have some expertise yourself.

But other users are much more optimistic. Writing here on Minding the Campus, Jonathan Sircy made a compelling case that ChatGPT can have many useful educational applications:

ChatGPT provides a solution to this problem by giving students personalized assessments that will improve their reading, writing, and thinking skills. The tool offers the educational freedom of bespoke feedback at scale. …

If the student provides a prompt, “Generate an exercise for me that would help me improve my use of transitional words,” the bot does just that. If the student asks the bot to provide feedback on the completed exercise, it will. …

[R]eal-time responses are an essential component of deliberate practice, the method of mastering any skill. ChatGPT offers a scalable method of reading, writing, and thinking instruction that can supplement traditional methods, making it a valuable resource for students and teachers alike. Yes, we know how to abuse the tool, but we should acknowledge its potential too.

Others such as Tyler Cowen and Arnold Kling are extremely bullish on this new technology. While these endorsements remind me a little bit of this famous scene, they did convince me to try ChatGPT for myself.

I have a few projects in various stages of progress, so I engaged with ChatGPT on several of them. Here are the four main lessons I learned from using ChatGPT—two don’ts and two dos.

[Related: “ChatGPT Can Help Students (and Teachers) Make the Grade”]

1. Don’t ask ChatGPT to settle factual disputes.

It hedges quite a bit, it’s easily swayed by the way you ask the question (much like humans), and it will fabricate an answer if one isn’t readily available. None of these responses make it very useful as an arbiter of truth, or even of conventional wisdom.

2. Don’t ask ChatGPT for reading suggestions.

I asked for some journal article and book recommendations on a topic that I knew well, and the results were barely relevant. Interestingly, ChatGPT will occasionally make up fictitious works, such as when answering “What is the most cited economics article?”

3. Do ask ChatGPT to summarize the pros and cons of arguments.

I have a forthcoming paper comparing state appropriations with student aid, so I asked the bot:

This is a decent response, and while it’s far from perfect, it would be a great starting point for anyone new to the topic.

4. Do ask follow-up questions, as further inquiries can yield substantial improvements.

I was recently preparing for a presentation on what is driving college costs. I asked the bot:

This was a bit unwieldy, so I asked it to add some organizational structure:

Again, there are some errors, but it’s a decent starting point. At a minimum, this would give a user a list of thoughts to consider. I then threw the bot a curveball:

Again, it’s not exactly reliable, but it’s a pretty impressive list of things to consider and research.

[Related: “Teaching Academic Integrity”]

Overall, it seems that ChatGPT is a lot like adaptive cruise control with lane assist for your car. If you’re not familiar, adaptive cruise control will speed up and slow down with traffic, and lane assist keeps the car centered in your lane. Together, this means that the car can essentially drive itself without crashing into anything for substantial chunks of time. These features make driving much less taxing, particularly during long, open stretches of road. But the technology can’t do everything. In particular, you still need to know how to drive so that you can recognize when to take control (such as when approaching a stoplight with no car in front of you). You also need to know where you’re trying to go.

And that, essentially, is my assessment of ChatGPT—it can’t do much on its own, but if you know what you want and when to override it, it can be a helpful co-driver.

Teachers are worried that students will abuse ChatGPT to cheat. This is, no doubt, a justifiable fear. But the challenges are no different from those posed to mathematical education by the introduction of handheld calculators. Calculators (for math) and ChatGPT (for writing) definitely pose challenges to teaching these subjects’ respective skills and can certainly be abused inside and outside the classroom. But the solution is not a futile attempt at suppression—it’s to reorganize assessments. Just as many math classes forbid calculators during exams, so will many writing classes need to forbid ChatGPT. But outside of assessments, ChatGPT writing “assistance” will be just as common as mathematical assistance through calculators. Even this Luddite will, grudgingly, have to adapt to our new world.


