Good Morning!
When ChatGPT was first released, many predicted that it would be a huge risk to education, as it seemed to be able to reproduce relatively well-flowing pieces of text, potentially ones that could be used by students. Over winter break, Princeton student Edward Tian ’23 created a tool to detect ChatGPT, saying, “Everyone should use these new technologies. But it’s important that they’re not misused.”
Yet as evidence has mounted that ChatGPT has serious drawbacks, especially in terms of accuracy, stances have shifted away from a firm ban. The University declined to ban the software on Jan. 25, 2023, instead suggesting that course instructors create their own policies surrounding ChatGPT. While senior columnist Mohan Setty-Charity published an article in December arguing against a ban on the grounds that it could create a "technological arms race" to detect it, technology columnist Christopher Lidard argued against a ban on a different set of grounds: that ChatGPT doesn't work well enough to pose a real threat to academic work. This reflects a shifting perspective, mirrored in the views of faculty members.
Faculty members caution that relying solely on ChatGPT will not lead to academic success: "It's built on GPT 3.5, but you'll have a 2.0," said Lecturer Steven Kelts.
READ THE REACTIONS →
Analysis by Aly Rashid