
The future of humanities teaching in the AI age, according to Princeton professors

[Photo: devices on a wooden table displaying images of Aristotle, Egyptian hieroglyphs, the cover of Fahrenheit 451, and a map. Caption: AI use is becoming increasingly common among students. Credit: Renee Cargill / The Daily Princetonian]

In a memo sent to faculty this summer, Dean of the College Michael Gordin was blunt: “this is the moment to reevaluate what you do and how you do it.” 

The numbers, Gordin wrote, showed sharp increases in generative AI use. According to data from entering students, more than half of the Class of 2028 said they had “rarely or never” used such products in the past year. In the Class of 2029, which had access to ChatGPT and other generative AI throughout most of high school, that figure was 28 percent.


“If students, especially the generation that’s coming in right now, went through high school using [AI], it’s going to be really, really hard for them to stop,” said Meredith Martin, an English professor and Faculty Director of the Center for Digital Humanities.

For many users, generative AI has revolutionized coding, automated simple tasks like emails, and outstripped Google search as a reference tool.

But for humanities professors, the new tools have generated significant anxiety. “Will the humanities survive artificial intelligence?” Professor of History D. Graham Burnett asked in The New Yorker this spring.

In interviews with The Daily Princetonian, humanities professors discussed how they have adapted their classrooms to fit the new world of ChatGPT. Some professors are outright banning AI, turning assessments that used to be papers into in-class exams. Others are trying to work alongside AI, asking students to be transparent when they have used AI in research and writing. Others still are encouraging AI use and embracing its potential. And yet, they all expressed fears that AI could deeply impact critical thinking and writing.

“For a long, long time, writing has been a way that we’ve had of teaching thinking,” Professor of English and the English Department’s Interim Director of Undergraduate Studies Jeff Dolven told the ‘Prince.’

“Maybe less homework is good, but to me, if you’re not reading novels, you don’t understand the world, and you don't understand people,” Chair of the Comparative Literature Department and Professor of African American Studies Wendy Belcher said.  


Can AI produce better papers?

At this point, many professors say no.

In a recent article in The Chronicle of Higher Education, Belcher outlined “10 Ways AI Is Ruining Your Students’ Writing,” including making “banal arguments,” producing sentences that are “pretty but empty,” and, fundamentally, getting facts “flat-out wrong.” 

“It cannot help you write a good paper,” Belcher told the ‘Prince.’ “It can probably do a B paper, but it’ll still be kind of an empty paper.” 


Still, Belcher and her colleagues were sympathetic to the reasons students are increasingly using AI to brainstorm, outline, research, and write for them.

“I think the students are being efficient,” Belcher told the ‘Prince.’ “A lot of students in my class are in STEM. [This class] is something they have to do; they’re just trying to get through it. You cannot get through Princeton [by] prioritizing all your classes.” 

“I don’t believe students are evil. Students are smart,” Belcher said. “This is a new tool. They’re trying it out.” 

While AI might be helpful when it comes to tasks like summarization, professors fear that it may harm students’ ability to think for themselves if they consistently use AI to write their assignments and papers.  

Professor of Literature Andrew Cole, for instance, mourned the “cognitive offloading that happens with AI.” 

“There are mental tasks that our brains are too tired to do, to extract this information or process this information. It’s easier for AI to do it, and it does it in milliseconds,” Cole explained. “The more infrequent [these mental tasks are], the more detrimental it is to our brains.” 

For Cole, this “cognitive offloading” can be understood as a lingering consequence of the COVID-19 pandemic and subsequent lockdowns, during which students’ education took place entirely online.

“Every student that I have encountered within my classes within the last five years has had this experience of intellectual challenges, intellectual dormancy,” Cole said. “AI [has] come in to fill the gap with this particular population who has experienced this very traumatic episode in our global history.” 

No comprehensive policy

Currently, University policy on AI usage is broad. According to Rights, Rules, Responsibilities (RRR), students cannot directly copy AI-generated output or misrepresent it as their own work. Even sanctioned uses come with a requirement: “If generative AI is permitted by the instructor (for brainstorming, outlining, etc.), students must disclose its use.”

In his memo to faculty, Gordin urged professors to clearly state their policies around AI. “[I]nstructors need to articulate what is permissible within their courses,” he wrote in bold-face type. In turn, departments and instructors have created their own policies.

The English department, for example, lets faculty set their own guidelines for their classes. For independent work, students must obtain written permission from their advisor and the Director of Undergraduate Studies before using generative AI tools, and must include a brief defense of their AI usage.

The History department’s policy, however, prohibits generative AI altogether as a text-generation tool for coursework, final examinations, and independent work. Instructors may grant exceptions for course assignments if a student follows the instructor’s guidelines and discloses AI use in writing.

Many faculty members said they appreciated the freedom to decide their own policies on AI, rather than working under an outright ban.

“The question of how much you tolerate and allow is really up to you,” Assistant Professor of History Michael Brinley said. “My sense among the faculty is that they are not interested in policy guidelines that would further constrict things that they do in the classroom.” 

“My impression is that faculty in English recognize the wisdom of the current University policy: that AI is not something that should be prohibited, but that there should be a very clear case-by-case understanding on how it’s being used,” Dolven said. 

Martin, a member of the 2023–24 Committee on Teaching and Learning convened by the McGraw Center, had hoped that the University would implement some of the committee’s recommendations for a more cohesive, University-wide AI policy. “There is a Princeton-specific issue with not wanting to get in the way of departmental autonomy,” Martin explained. “I think it would have been an occasion for departments to work together, and I think they need to.”

In the meantime, individual faculty members are already changing the way they administer assessments. 

Many professors have simply stopped assigning papers, especially as it becomes increasingly difficult not only to distinguish AI-generated work from student-generated work, but also to prove the distinction.

“I don’t think plagiarism detection works,” Martin said. “And I also don’t think that citational practices are robust enough yet to cover the full range of things that one might use a model for depending on the class.” 

Brinley’s class, HIS 362: The Soviet Century, used to have a required writing assignment. This semester, students will instead have a 15-minute oral midterm exam, three quizzes, and an in-person final exam, though students can opt to write a five-page paper in lieu of the oral exam. For Brinley, this assessment structure makes sense for his course, which is “much more about content delivery than necessarily about methodological training,” he said.

Even in seminar settings, the future of papers is uncertain. Cole’s ENG 306: History of Criticism, which has 17 students, includes an in-class exam for the first time in his 16 years teaching at Princeton. 

“It may sound weird for literature, but … there is an empirical element to literary study that can be measured in a thing that you might call an objective exam,” Cole said. 

Other professors maintain that papers are valuable in their own right and are unsure about the pedagogical implications of replacing papers with in-class essays, because writing on the spot and spending time researching, drafting, and re-writing are different skills. 

“All writing is in the re-writing,” Belcher said. “To me, the first draft is terrible.” She added that she would continue to assign papers while having students disclose their AI use — then point them to her Chronicle of Higher Education piece, which described AI-generated writing as filled with “bloated emptiness.”

Some faculty have sidestepped the consequences of AI-generated writing by assigning more creative, imaginative papers that AI is, at least for now, unable to produce effectively. Dolven, who teaches classes on poetry, assigns exercises that “involve creative imitation and different kinds of writing, not necessarily essay writing.”

“It is a kind of assignment that makes the sentence-by-sentence business of writing, the imaginative act of it, front and center,” Dolven said. 

Working alongside AI

While many faculty have balked at the use of generative AI for writing and reading, others have integrated it into their research and teaching.

In April this year, Burnett published an article in The New Yorker explaining how he integrated AI into a class he taught last spring. One assignment asked students to use AI chatbots to explore a topic they had already learned about, condense the text down to four pages, and turn it in. “That produced some extraordinary papers, co-written, in a way, by the students and the chatbots they chose because these were conversations,” Burnett told the ‘Prince.’

“Reading the results, on my living-room couch,” Burnett wrote in his article, “turned out to be the most profound experience of my teaching career.”

While some professors might disagree with Burnett’s methodology, his enthusiasm for AI made sense to Brinley, given that the class itself was about attention and media. 

“It’s very appropriate in a course like that to also bring a methodological component of interacting with [AI] and then reflecting on the value of how this changes the patterns of thought and writing,” Brinley said. 

At the Center for Digital Humanities (CDH), researchers engage with technology and computational tools as central elements of humanistic inquiry. One current CDH project, which Martin directs, is working to develop new large language models — the same infrastructure that powers chatbots like Perplexity and ChatGPT — to help study style in existing literature.

“Humanists should at least know where and how it’s useful, so that they can say, ‘here are places that [AI] can be useful,’” Martin said. 

Humanities professors, then, are in a moment of experimentation: while some, like Cole, deride the use of AI in the classroom, others are more open and optimistic about the technology’s potential. The University has largely left professors and departments to their own devices on the issue, and many are happy to have the freedom — for now. 

“I think there are ethical reasons that professors can opt out of all sorts of technology that they don’t want to use,” Martin said. “I think it’s not helpful for anybody to pretend [AI] doesn’t exist.”

“It’s a moment of challenge to traditional ways of teaching, but it is also a moment of possibility,” Dolven said. “[Teaching] is no longer something we can take for granted as a vocational skill. And I think that that’s really to the good. I think it requires of us an active imagination as faculty.”

“My approach would be,” Brinley added: “we’re all in this together. Let’s think together about how to protect ourselves, continue to pursue the things we want, and sustain certain scholarly environments which are threatened, in part, because of the complete devaluation of writing that comes as a result of its ease of production.” 

“From my perspective, [students] can do all the AI they want in their own time. But when they come to me, it’s going to be them,” Cole said. 

Nikki Han is an assistant News editor and a contributing Features writer for the ‘Prince.’ She runs the Faculty, Graduate Students, and Alumni coverage area.

Please send any corrections to corrections[at]dailyprincetonian.com.