At Princeton, you’d be hard-pressed to go a day without encountering generative AI. I’m constantly learning of a new student AI startup or reading a new story about AI in this newspaper, not to mention the sheer number of AI-focused club emails littering my inbox, which seem to have doubled in the past year.
In campus discourse, too, AI is ubiquitous: I often hear people speak of the technology as some unpredictable, inevitable mammoth that will soon master us if we don’t master it first. It seems we’ve all accepted AI as the undisputed “next big thing,” bound to upend every aspect of our academic and professional lives. This isn’t surprising, given a national discourse, stoked by industry leaders, that rampantly fantasizes about the technology’s supposedly endless potential.
But do we really need to be talking about AI in such a dramatic fashion? As much as ubiquitous access to chatbots has impacted our lives, generative AI technology is not so revolutionary — or new — that it must dominate the minds of Princetonians so extensively. When the cultural AI bubble inevitably bursts, we cannot be surprised; we must be equipped to pick up the pieces and move forward in other fields.
Part of the reason for this hysteria over AI is the promise of eventual “artificial general intelligence” (AGI): intelligence that matches or surpasses humans. But the hesitancy of industry leaders raises doubts about how soon, if ever, this technology might emerge. From 2023 to early 2025, companies like OpenAI were advertising AGI as a near-future inevitability, with Sam Altman signaling as late as January that he was confident about “how to build” it. Yet after GPT-5 was released to an underwhelming reception in August, Altman changed his tune, admitting that AI is a partial bubble and calling AGI “not a super useful term.”
This matches the views of academic experts: when surveyed by The New York Times, three-quarters of top AI researchers said they believe an AGI breakthrough is unlikely to occur within current technological frameworks. In the face of doubt from those who know AI best, Princetonians would be wise to keep their expectations in check instead of expecting a world-changing revolution.
And, in case you forgot, AI isn’t thinking, anyway. Even the most advanced large language models (LLMs) still rely on probabilistic next-word prediction over word vectors, trained on increasingly large data sets. The result is that AI doesn’t learn from its mistakes the way humans do: once its initial training period ends, its learning plateaus, with no new data to analyze.
None of this means AI doesn’t have vast practical applications. But it does raise doubts about the idea that AI will somehow develop into a revolutionary tool that will uproot every field, even ones that require human creativity or critical thinking. In the endless sea of emails and discussions at Princeton promising the revolutionary impact of AI, it sometimes feels like we’ve forgotten this simple fact.
Some would still argue that, regardless of the true value of the technology, AI is becoming so ubiquitous in the job market that it should be at the forefront of our education. This is a valid concern: if paralegal jobs can be mostly replaced by generative AI, for example, it is unclear how students will ever rise to become associates or partners at law firms. But it is exactly because of this that the world needs smart, ambitious people — like Princeton students — to figure out how to integrate real, human-centered thinking into those professions. That comes from engaging in disciplines across the liberal arts, not from joining three different AI clubs.
As for the unstoppable image projected by AI CEOs, it seems the bubble may burst. Industry leaders seeking to use AI to replace jobs keep running into unforeseen limitations. Experts stress that the technology’s effect on industry is likely to be evolutionary, bringing gradual shifts across the job market rather than revolutionary upheaval. It is important to push back against industry pressure to misuse AI in fields like healthcare and journalism. But panicking in advance about AI taking our jobs and killing us all only plays into the rhetoric of an AI revolution; the facts do not yet warrant the commotion.
It’s true that AI hysteria is not a problem unique to Princeton, but that’s exactly why the fixation on AI here is so perplexing. As high-achieving students at one of the best-resourced universities in the world, we’re uniquely positioned to break the trend. We shouldn’t pay outsized attention to a technology that isn’t all it’s cracked up to be.
None of this is to say that there aren’t genuine uses for — and genuine concerns about — AI. But to discuss these well, we must first have the right conception of the technology. Ultimately, the frequency with which we hear about AI doesn’t match the reality. Princetonians interested in the field of AI, of course, should pursue it. For the rest of us, there’s no need to buy the snake oil.
Shane McCauley is an Assistant Opinion Editor from Boston who, as you might guess, is a bit tired of hearing about AI. He can be reached at sm8000[at]princeton.edu.