
Princeton should think twice before banning ChatGPT

McCosh 50.
Candace Do / The Daily Princetonian

Over the last couple of weeks, a number of articles have appeared about ChatGPT, a new chatbot technology that can answer questions and provide information in a human-like fashion. These pieces, published in national outlets, seem to rest on the underlying assumption that education, among many other industries, is being rendered obsolete.

The titles of these think pieces spell destruction: “Will ChatGPT Make Me Irrelevant?” and “The New Chatbots Could Change the World. Can You Trust Them?” The primary concern for most is that this technology could allow people to replace work that has historically been done by students or professionals with output from artificial intelligence. When a chatbot can write an essay in seconds, it is easy to question the merit of the exercise, especially given that we will continue to have access to this technology long into the future.


The emergence of ChatGPT shows that as new technologies continue to develop, learning and testing processes need to evolve with them. Rather than trying to revert to a world in which this technology does not exist, we should focus on how education and testing should work alongside these language models.

The University administration will almost certainly institute a policy in response to this easily accessible chatbot, and that policy could play out in a few different ways. A seemingly easy option would be to add language to the Honor Code stipulating that any use of ChatGPT constitutes a violation. However, there are a number of reasons why a ban is not the best solution. While OpenAI may offer programs to catch this type of cheating, such a policy would open the door to a technological arms race between the University and its students. Expanding the role of the Honor Code and disciplinary processes may also have adverse effects on students: as The Daily Princetonian found last year, going through this process can be extremely harmful to all involved. A more nuanced solution is necessary, and examining past policies could provide a path forward.

Massive changes in technology are not a new phenomenon, and institutions and universities like Princeton have dealt with them in a variety of ways over the years. While some classes ban specific websites, others have embraced new technologies as a way to improve learning within the classroom. Language classes at Princeton have long dealt with Google Translate, with professors believing that it stymies learning and does not benefit student growth. Meanwhile, many classes permit students to use autocorrect without requiring them to acknowledge that they received help from that technology. Defining the line between the expected use of technology and gaining an unfair advantage through a new tool is a tall order.

However, that is precisely the task that many universities now face. ChatGPT and similar programs offer capabilities that can be tremendously beneficial: debugging and generating code, explaining challenging concepts, and any number of other things.

Presumably, Princeton students are learning skills that cannot be immediately reproduced by AI. If we aren’t, what are we doing here? Though automation may render obsolete some of the skills that used to be taught, there remain skills that AI cannot replicate, and those are the ones that should be prioritized in the classroom. The analysis that emerges from precept and seminar discussions cannot be replicated by today’s computers. Neither can personal thought and unique ideas.

Rather than banning ChatGPT, we need new approaches to assessment that evolve with the current technology. In the past, I have argued that more formative assessments should replace the summative assessment system frequently used in Princeton classrooms. Formative assessments would force students to engage more throughout the course, focusing on developing an understanding of the material. While large end-of-term papers would leave the door open to rule-breaking, routine discussions could demonstrate comprehension. Another option would be to introduce oral examinations, as my colleague Henry Hsiao suggested in a recent article.


While no solution is perfect, the next steps taken by Princeton and other universities must balance new technology with conventional teaching styles. Shifting assessments to focus on the aspects of education that AI cannot replicate is a good start. A stopgap solution is not. Programs like ChatGPT will only get better over the years, and the best plan going forward is to acknowledge that these new technologies can be a supplement to learning, rather than a replacement.

Mohan Setty-Charity is a junior from Amherst, Mass., concentrating in economics. He can be reached at ms99@princeton.edu. 
