
Humanities lag behind STEM in AI policy. They must catch up.

Engineering Quadrangle
Annie Rupertus / The Daily Princetonian

More than a quarter of the Class of 2025 admitted to using artificial intelligence on assignments where it was not permitted during their time at Princeton, according to The Daily Princetonian’s 2025 Senior Survey. Meanwhile, only 9.7 percent of seniors reported never using AI at Princeton. AI use is prevalent at Princeton and other institutions of higher education, and universities have been scrambling to articulate their responses.

Unfortunately, these responses have not been equal across the University’s departments. While departments in the natural sciences and in engineering and applied sciences have put thought into creating sensible policies on AI use, many courses in the humanities and social sciences include only a sentence or two in the syllabus telling students not to even think about using AI.


As this technology becomes increasingly prevalent in our modern world, the humanities and social sciences are falling further and further behind. To remain at the forefront of education and properly train students for a life outside of Princeton, humanities and social sciences departments must adapt by interrogating the technology and incorporating it into their disciplines.

In the past few years, I’ve seen STEM departments thoughtfully adapt their AI policies in response to technological developments. When I took COS 217: Introduction to Programming Systems three semesters ago, the rule on AI was simple: “You may not use … GitHub, Copilot, ChatGPT, or any other similar tool or source.” 

Today, if you check the course policies, there’s much more nuance: “You may consult with a Large Language Model … for assistance with conceptual topics for the class and for help with the assignments, subject to the full Generative AI policy below.” A multi-paragraph discussion provides further guidance on using AI in the course. Similarly, the syllabus for PSY 480: fMRI Decoding: Reading Minds Using Brain Scans details use cases and methods of interacting with AI across different assignments.  

But on the other side of the academic spectrum, the humanities and social sciences have taken a much different — and weaker — approach. Instead of treating AI with the nuance that it deserves, many departments and faculty have hastily rejected all uses of AI.

One department that prohibits AI nearly across the board is the history department. It bans AI as a “text-generation tool” and for editing. AI is also explicitly disallowed as a brainstorming tool for the department’s independent work, denying students a tool that could supercharge their research and analysis. Individual instructors retain some autonomy, but the department generally does not endorse AI use.

The AI disparity between STEM and non-STEM applies not only to academic departments, but also to students. The average A.B. student is less likely to embrace AI than a B.S.E. student. The Daily Princetonian’s Class of 2029 Frosh Survey suggests as much: only around 22 percent of incoming engineering students believed that AI is dangerous, while nearly half of incoming humanities majors said the same.


Perhaps this disparity in AI usage is inherent to the nature of these fields. In STEM, problems typically converge on a single correct answer, so an AI model can easily generate solutions to a problem set. When it comes to writing a paper for a humanities class, by contrast, models can struggle with the interpretation, contextualization, and open-endedness required of a complete A-grade essay. The burden of uniqueness and creativity is higher. This is why, though AI can write, analyze, and summarize to some extent, some humanities scholars assert that humans remain irreplaceable.

Yet AI is useful in the humanities: in literature, it can identify trends across centuries through textual analysis, analyze emotional states, and handle routine tasks. Instead of imposing an outright ban, humanities scholars and social scientists should recognize where AI can be constructive, treating it as a tool that builds on human analysis rather than a shortcut around it.

Incorporating AI into how humanities scholars go about their disciplines isn’t just good on its merits; it is necessary to avoid a “Shadow AI” culture. Students aren’t waiting for the University to provide guidance: some rulebreakers are already incorporating AI into their work. Ironically, an outright ban could thus worsen discrepancies in AI use, giving rulebreakers an unfair advantage over those who follow the guidelines. Faculty and departments should revise their AI policies to be more moderate, creating a more honest culture around AI.

As they incorporate AI into their courses, faculty could refer to the guidance posted by the McGraw Center for Teaching and Learning. Even within the humanities and social sciences, the intellectual activities of each discipline are distinct, so each department must analyze how AI can assist in educating and developing its students. Departments must respect the autonomy of individual instructors while helping them consider approaches to AI that they may not yet have considered. They should also weigh students’ perspectives through polling and structured conversations.


Promising and conscientious strides are already underway, and I’ve experienced them this semester. In the introductory French sequence, for example, the guidelines on using AI to correct one’s writing have grown more nuanced. We should recognize these existing efforts while acknowledging that thoughtful responses to AI remain the exception rather than the norm.

We’ve seen the rise of transformative technologies before, and many initially received the same blowback that AI is receiving now. In the next few years, AI will follow in the footsteps of the innovations that came before it, becoming just as essential to learning, if not more so. Instead of treating it with fear, we ought to embrace it and teach students how to use it responsibly.

Luqmaan Bamba is a staff Opinion writer and an electrical and computer engineering major from New York. He can be reached at luqmaanbamba[at]princeton.edu.