Last semester, my English professor acknowledged that he probably wouldn’t recognize an essay generated by artificial intelligence, but he also warned students that if this was the route they chose, they would only be cheating themselves. The value of a humanities education, he argued, should not come from a desire for a good grade, but rather from a love of learning and the process of developing original ideas — and he trusted that his students felt the same.
Princeton’s humanities departments are far from a consensus on the regulation of AI usage. Some have instituted department-wide policies, like the history department’s ban on using AI as a “text-generation tool,” while others, like the English department, have not yet implemented an official policy. Many of my own professors have expressed concerns over the capabilities of AI to author convincing essays, interpret dense sources, and create shortcuts.
From higher ed journalism to concerned professors, AI is often portrayed as an unprecedented frontier to be tamed, a danger to both the future of the humanities and the independent, critical thinking abilities of its scholars. But a strict AI ban is superfluous, not to mention difficult to enforce: If students are plugging essay prompts into ChatGPT en masse, there are larger issues at stake that a band-aid AI ban won’t fix.
In fact, AI does not destabilize higher education in the humanities or introduce any new threat to it. Upper-level humanities courses overwhelmingly self-select for students who are genuinely passionate about the material and do not want to replace their original thinking with AI. The demanding workload and complexity of these courses draw a body of students motivated to think for themselves, students unlikely to take AI shortcuts. Furthermore, the specificity, creativity, and originality these subjects demand are inherently difficult for AI’s homogenized, generic writing style to replicate.
In my current English seminar on film adaptation, for example, close reading and watching are essential for successful and substantive engagement. In each discussion and Canvas post, we analyze specific punctuation choices, the slight expression shift of an actor, and thematic parallels with a text discussed weeks ago. We frequently disagree and debate interpretive arguments, and the productivity of this conflict represents exactly the kind of insight that AI’s regurgitated material fails to produce.
Generative AI can summarize, but it cannot offer the highly individual and varied set of perspectives that emerges from a discussion among a group of advanced humanities students. It can give a homogenized, Western-centric, and bland perspective, but it cannot come up with the unique, differing personal viewpoints that are so central to a meaningful seminar. Studies have routinely shown that AI writing displays lower rates of ideological and stylistic diversity than writing by human subjects.
But even if AI eventually develops the ability to meaningfully close-read and come up with unique arguments, the future of the humanities will not necessarily be in jeopardy. While some students in humanities classes see no problem with AI usage — and would prioritize passing a course or minimizing effort over meaningful engagement — there still exist passionate humanities students who want to contribute to their fields with original thinking driven by their human perspective and experience. AI will not replace human scholars, even if it becomes capable of writing an A-level paper, because motivated students won’t allow AI such influence and power.
One may reasonably argue that this view is too idealistic — after all, the overbooked schedules of Princeton students make efficiency the academic ideal, driving students toward AI out of perceived necessity. Indeed, if students can get away with AI-generated essays for a decent grade, freeing up valuable time and mental effort, then why wouldn’t they? But underlying this question are the issues of profit-seeking and misaligned motives in education, which existed long before AI. AI is a new manifestation of the same problem, and no technology ban will solve it.
In suggesting that AI will not undermine the intellectual integrity of the humanities, I’m not saying that the humanities won’t evolve. But evolving is not the same as vanishing. To suggest that AI-generated analysis and sped-up research techniques will replace real humanistic scholarly work is not just to overestimate AI’s capabilities, but to underestimate the passion of the students who care to do this scholarship themselves.
Humanities departments should approach AI not as an unprecedented antagonist but rather as a tool with both admissible and inadmissible use cases. AI is not fundamentally revolutionary to human thought, nor capable of replacing it. It enables jaded students to cheat more easily, and it helps motivated scholars to pursue their original research with new and innovative strategies. In other words, it only amplifies and enables our existing tendencies. AI’s effects are more human — and less artificial — than its name may suggest.
Ava Chen is a contributing Opinion writer and an English major from Massachusetts who is pursuing a minor in Computing, Society, and Policy. She can be reached at ac5214[at]princeton.edu.