In an otherwise insightful, hopeful, and at times even beautiful piece in The New Yorker in April, Princeton Professor of History D. Graham Burnett makes one critical error: Compared to the rise of AI, he remarks, the Trump administration’s frightening intrusions into university affairs seem like a “sideshow.”
But these are not two separate problems on two parallel tracks. The rise of AI in research and higher education and the administration’s actions are born of the same cultural trends: the devaluing of authentic human curiosity in favor of a single voice of authority and the maximization of efficiency and usefulness in the generation and curation of knowledge. It’s no coincidence that the same administration pushing for increased AI usage in education is also pursuing a government takeover of universities. These problems lead to the same end: Academics will cease to decide what questions to ask, what research to pursue, and what concepts to teach.
To respond to these twin threats, we, as people with a great stake in free academic thought, must recommit to human-centered inquiry and appreciate its power. Academic freedom, and its capacity to help us better understand ourselves and one another, will remain in danger as long as we do not confront this displacement of intellectual power away from human scholars.
Burnett points to AI’s potential to support rather than endanger the humanities, describing an array of nuanced student research projects from his class that probed the depths of AI’s capabilities in order to highlight the untouchable power of authentic humanness. He also rightly highlights the ineffectuality of universities’ attempts to sweep AI under the rug and turn a blind eye to a revolution in both the production and synthesis of knowledge. And he suggests, profoundly, that better understanding AI may return us to ourselves by reminding us of our humanness, of the ways in which our intellects surpass mechanized or formulaic responses.
Yet simply getting better at using AI, and trusting that the public will collectively realize that mechanisms like ChatGPT can never compare to or threaten true humanness, is not sufficient action amid a surge of governmental efforts to undermine the authority of individual scholars and of university leadership devoted to the pursuit of truth.
That’s because the Trump administration’s attacks on research and higher education, as well as a more general blind reliance on AI chatbots, reflect a societal backslide: a commodification of knowledge that prioritizes “efficiency” over the messy process of getting to an answer, in which nuance is flattened and truth becomes fungible.
The logic is the same whether you’re a student plugging your essay prompt into ChatGPT or an administration cutting a research grant because it is perceived as “waste, fraud, and abuse.” Both rest on the presumption that everything worth knowing is already known and that there is an objective answer to every question.
When we let someone, or something, without true curiosity about an area of inquiry control not only what questions are asked but also what answers are produced, knowledge becomes a formulaic commodity. Rather than representing the inquiry and perspectives of the people, a tenet of democracy itself, the voice of intellectual authority becomes concentrated in a few isolated figures.
This means that discourse becomes not only less meaningful but also easier to control: yet another advantage for the current administration’s pursuit of unchecked power.
Thus, finding better ways to use AI does not ultimately address the collective attack on free human thought posed jointly by excessive AI use and the Trump administration. While there appears to be great eagerness to get better at living with the machines we have created, this is no substitute for greater attention and dedication to human-centered scholarship: We have to get better at living not just with AI, but with our fellow humans.
We must be wary of the awe, the magical quality and “stupefaction,” that Burnett finds may emerge from engaging with AI, because it is the first sign that human knowledge is becoming estranged from humans themselves. We must not grow so fascinated with experiments in how closely the inhuman can impersonate the human that we devalue the actual human.
Academia is at its best when it explores the human condition and thereby improves, and makes more meaningful, the lives of all people. That often means embracing change and extending boundaries, which includes learning to use AI. It does not, however, mean shunning authentic, imperfect human processes of discovery in favor of efficiency-optimizing shortcuts, whether technological or governmental, that devalue curiosity, discourse, and dissent. In an age when the integrity of intellectualism is at great risk, AI advancement must not be treated as an issue separate from our present government, but rather as another player in the larger, dangerous rearrangement of intellectual authority.

As students today, we must understand the power of our own intellectual work and of the discourse in which we participate with our peers on campus. In times like these, generating our own research and valuing our own processes of expression and exploration, even when they are messier than anything AI generates, is an act of resistance against an administration seeking to bury academic freedom for good. There is undoubtedly a place for awe in academia, but it doesn’t have to belong exclusively to machines, just as it certainly must not be granted to governmental bodies threatening universities with performances of strongman power.
We can instead find awe in ourselves, in human inquiry and discourse. We can and will live with AI. But every now and then, we should remember to turn away from the stupefying power of the machine and toward the intellectual power and fortitude of our own human academic communities, for the sake of our freedom and our humanity.
Lily Halbert-Alexander is an assistant Opinion editor and prospective English major from San Francisco. She can be reached by email at lh1157[at]princeton.edu.