Generative AI and Higher Education: Thinking Through Novel Classroom Tools

By Patrick Thaddeus Jackson


The only way we are ever going to have sensible discussions about so-called “generative AI” in the classroom is if we have those discussions without romantic notions about “the good old days” and the kind of real thinkers that students and faculty alike once were. It was never that simple, and there has never been a solid core of “real” humanity that has been conserved over time against technological encroachment. That’s a gesture that gets repeated over and over in different contexts, and it’s never true—but it’s often very effective as a frame.

I remember as a child stumbling upon my dad’s old slide rule, and being amazed at how it worked. But my pocket calculator could do basically everything it could do, and it was a lot easier to remember to punch in some numbers and symbols than to remember how to adjust the slide rule. I also remember hearing some people (not my dad) expressing nostalgia about the slide rule, with the idea that doing a calculation “by hand” gave them a more organic relationship to the process. This always puzzled me. Clearly the slide rule is a piece of technology, so why call calculating with it “by hand”?

Who would nowadays start off working on a dissertation by assembling a file card index? Yes, people used to do that. So, is using Zotero some kind of deviance or betrayal?

Along similar lines, there is great nostalgia for “handwriting” in some quarters. But handwriting was tedious, difficult for people without a certain level of fine motor control to produce, difficult to read, etc. “But it really connects you to your words.” I’m calling bullshit here, at least on the claim as a generally true one. It might be true for some people, but not for others (making my autistic son hold a pencil and “write by hand” is torture for all involved).

Holding on to traditions for their own sake can easily privilege some while leaving others behind.

Indeed, there is very often nostalgia for the old way of doing things once a new way comes along. The old way often seems—in retrospect, by comparison—to be more natural, more organic, more genuine. “Thinking” (and “naturalness”) belongs to what we once did, “automation” (and “artificiality”) to what we do now. But this is a trick of memory, nothing more. The line between what is an automated tool and what is “thinking” isn’t an absolute, but is historically and socially variable, and as we no longer have to think about some aspect of a process we think about others (the rest becomes, as John Dewey would put it, habit).

We romanticize the biologically individual human being. We put entirely too much emphasis on what that featherless biped can accomplish “alone.” As though we human beings hadn’t been using tools to accomplish our goals for thousands of years. And as though there were some identifiable “human” core that stood apart from every tool imaginable, as opposed to a human/tool boundary that gets continually renegotiated.


I give students writing assignments so they can play with words and concepts and ideas and in doing so figure out what they think. The quality of the product is considerably less relevant than the impact of having produced it upon the student. I don’t prevent them from using spell-check (although sometimes they don’t seem to use it very well), so why would I prevent them from using ChatGPT? Again, it’s a tool, and I am not so much interested in the quality of the written product as in the impact on the student, so anything they generate with such a tool has to be flagged and then reflected on.

The equipment with which I brew beer produces beer; it doesn’t “do my thinking for me.” The equipment automates certain aspects of the brewing process and frees me up to think about others. My searching the internet to see other recipes and then coming up with my own also doesn’t “do my thinking for me.” Likewise, asking ChatGPT to generate a recipe doesn’t “do my thinking for me.”

If a writer uses an LLM—a large language model—to generate versions of a text and then selects among them, how are they any less of a writer than if they generated several versions of a text “themselves” (which is silly: they always have models to work from, prior writings to reflect on, implicit audiences to relate to, etc.) and then selected among them?

LLM tools (and these are not “AI,” not in the generalized intelligence sense; they are really good mimics) are very efficient at creating sensible text. And if our goal is efficiently creating text—a.k.a. the overwhelming majority of the skills that we teach when we assign students the task of writing “policy memos” or “business plans” or similar things—then we, we humans, have already lost that competition; we’ve been replaced even if we don’t yet know it. No human being is ever going to be as efficient as a contemporary LLM when it comes to generating sensible text.

LLMs don’t think; they produce. They are tools. They cannot “do our thinking for us.” By definition, what a tool can do isn’t “thinking,” because thinking marks the non-tool side of a historically contingent boundary.

Our problem is that we evaluate the product, and presume that our job as educators is to train our students to produce better products. This is a dead-end exercise.

Our job as educators is to provide our students with the opportunity to become better thinkers, and to use the right tools for the job at hand.

One of the problems that haunts these conversations is the notion that a change of tool means an increase in precision or accuracy, as opposed to an alteration of a process. The introduction of instant replay into various sports doesn’t just give the referees or umpires a technique for “getting the call right”; it fundamentally changes what “getting the call right” means, and where in the game judgment has to be exercised (as opposed to the more mechanical task of reading a result off of an instrument). Likewise, with LLMs, the tool doesn’t just provide a more effective way of generating text; it fundamentally changes what “producing text” means. By automating the work of constructing grammatical sentences and paragraphs and summarizing bodies of written work, it frees us to stop confusing such linguistic acumen with thinking per se.

When we treat a tool as nothing but a more efficient means for achieving an older end, we have implicitly imposed later standards back onto earlier approaches, which is what makes the older approach into a less precise way of doing what the newer technology does. Layering on the nostalgia effect of how the newest technology always appears “artificial” (or perhaps “progressive”) compared to the “natural” older way of working makes this even worse. If we shed both illusions, then maybe we can ask what LLMs actually do and what they might be good for.

Perhaps LLMs will allow us to understand thinking as being more about reflection than it is about linguistic production.

Author Profile

Patrick Thaddeus Jackson (PTJ) is Professor of International Studies and Chair of the Department of Global Inquiry in the School of International Service. He was named the 2012 U.S. Professor of the Year for Washington D.C. by the Carnegie Foundation for the Advancement of Teaching.


Original version prepared for the SIS Teaching Café, 18 October 2023. Thanks to Betsy Cohn for the prompt and for the opportunity, to Aaron Boesenecker for presenting these remarks in my stead while I was fighting COVID, and to both for subsequent feedback and conversation. Thanks also to the editors of The CTRL Beat for their helpful comments on an earlier draft.


Dewey, John. 1985. How We Think, Revised Edition. Electronic Edition. The Later Works of John Dewey, 1925-1953, Volume 8, 1933. Carbondale and Edwardsville: Southern Illinois University Press.

Medina, José. 2004. “In Defense of Pragmatic Contextualism: Wittgenstein and Dewey on Meaning and Agreement.” The Philosophical Forum 35 (3): 341–69.

Warner, John. 2018. Why They Can’t Write. Kindle edition. Baltimore: Johns Hopkins University Press.