Opinion Roundtable: AI and the classroom

GPTZero finds text to be human generated instead of AI generated.
Julian Hartman-Sigall / The Daily Princetonian

In the first episode of Opinionated, new Opinion writers get together to discuss the role of AI in the classroom, taking on the implications of AI’s increased integration in academia for academic integrity and authenticity.

In this very first Opinion roundtable, staff debate the present and future of AI: what does it look like to be a student in the technology-dominated world of today? What are our responsibilities, and what are our expectations of ourselves and our fellow students?

Vitalia Spatola (0:32): I’m Vitalia Spatola, and I’m a member of the Great Class of 2028. 

Ravin Bhatia (0:37): I’m Ravin Bhatia, and I’m part of the Class of 2029.

Ana Boiangiu (0:40): My name is Ana Boiangiu, and I am also a member of the Class of 2029.

Audrey Tan (0:45): I’m Audrey Tan and I’m also a member of the Class of 2029. 

Lily Halbert-Alexander (0:48): I’m Lily Halbert-Alexander. I’m a member of the Class of 2028, and I will be moderating this evening. To start us off, it would be good if we could hear a little from all of you on this question. As a student, do you use AI? If so, what do you use it for, and if not, why don’t you use it?

Spatola (1:03): I mostly use AI as a grammar checker after I write an assignment. If I’m not sure if a comma is in the right place, I’ll just copy and paste that sentence.

Bhatia (1:10): In my school work, I don’t tend to use AI. All the assignments that I do are mostly writing-based, at least in my classes this year. And I think there’s just something about doing the readings for myself that really does help with retention, which puts me in a better position come exams, come future essays, things like that. 

To the extent that I do use AI, I mostly use it for cover letters outside of school, honestly. I think those can get a bit tedious to write after a while, but I do find that AI can be useful when writing them.

Boiangiu (1:40): I also use AI for academic purposes. As an international student, I struggle with language sometimes, so it’s been very helpful for finding the right word. Also, it has gotten very, very good at undergraduate math, so it’s been very helpful for explaining certain concepts that I might find tough to understand at first.

Tan (1:56): Since coming to Princeton, I will say I haven’t really used AI, but in the past, while I was in high school, I definitely did use AI to help out with math problems. I used it especially to generate math problems similar to the ones I was practicing, so that it could give me, sort of, targeted practice and explain solutions as well.

Halbert-Alexander (2:15): So now, thinking particularly about the classes you’re taking now and other classes you’ve taken at Princeton, how do you feel about the way that AI policies are laid out for you as students? Do you think there should be more standardization or less? On the whole, do you feel clear about how you’re supposed to use AI? Do you think those policies are reasonable? How do you feel about the way your classes tell you you should be engaging with these tools?

Spatola (2:32): In my personal experience, I think the classes have been pretty fair. In my creative writing classes, for example, the use of AI is just strictly forbidden, so that’s pretty self-explanatory. In some coding classes or some stats classes, AI use is sometimes permissible, so I think it can become a gray area for me there.

If you kind of prod your teachers and ask them for clarification, I think you’ll get the answers that you’re looking for. But I generally don’t have problems with policy here.

Bhatia (2:59): I agree. For the most part, my policies are very clear. They’re all very explicitly laid out on Canvas, and I think that they are reasonable: in all of my classes, which are all reading and writing based, the use of AI is strictly prohibited. 

And, I mean, I think that’s a good thing. Especially when it comes to reading, being able to synthesize information, and being able to write about said information, that’s a very good skill that a reliance on AI, you know, might take away from you. And especially at Princeton, where we’re expected to write a junior paper in a couple of years and a senior thesis after that, building that foundation freshman year is important.

Boiangiu (3:37): I haven’t had any problems with AI policy either. The math department’s policy is: as long as you write solutions in your own words, it’s fine. So I guess the means you use to get to those solutions don’t matter.

For COS 126, which is a notably strict class on policy all around, I think they’ve been very explicit about AI; in its case, the policy even prohibits you from looking up, say, error codes online, because Google has now integrated AI into its search system. For my other classes, though, I think the policy is fine and very explicit as well.

Tan (4:08): I feel like there hasn’t been a ton of issue around AI policy. I will say, among my classmates and peers, there has been some concern about that gray area. They’re unsure of how it varies class to class, and would maybe like more standardization. For me, though, I can definitely see why certain classes may want to use AI more than other classes.

I definitely agree that in terms of, like, creative writing classes, or any class that’s just literature or writing based, such as the Western Humanities sequence I’m taking, it makes sense for there to be a zero-tolerance AI policy.

Halbert-Alexander (4:40): And when you’re in environments with a zero-tolerance policy, do you think that students abide by it? Do you think it gets fairly respected? What do you make of zero tolerance in this day and age?

Spatola (4:53): There will always be people who break the rules. Even before AI was created, there were people that wrote their answers down on their hands or snuck a note into class. And I guess using AI when it’s prohibited is just a modern-day version of that. There will always be people who will just not care for the rules that are laid out. 

And with AI being so easy to access, I think that there must be people who are breaking those rules. But ultimately, I think there’s always going to be some of that.

Boiangiu (5:19): I think you put it very well, Vitalia. Cheating has always been around. AI is just a new and very, very efficient way to cheat. But I do have enough faith in my classmates and in academic communities, just overall, that AI won’t take too much of an important spot in the way we approach academic work.

Tan (5:37): Yeah, I definitely agree. I think generally, my classmates and peers tend to be very responsible. I think Princeton students in general have quite a bit of integrity and take pride in their academic work. I will say this is kind of where that gray area comes in, because I know a lot of my peers are kind of confused: what does a zero-tolerance policy mean?

For example, is someone who uses ChatGPT to generate their whole essay the same as somebody who uses Grammarly to grammar-check their essay afterwards? Where do we draw that line? Is one allowed and the other isn’t? And I know that so many different softwares right now are, kind of, jumping on that AI trend and trying to integrate it into their products.

So if somebody used that software before it integrated AI, and they’re still using it now, maybe they’re unaware of how deeply AI usage is integrated within that system.

Halbert-Alexander (6:28): Do you believe that there are stages of the thought process, of the work process, where AI is acceptable, where it might one day be a natural part of the way that we do work? 

Where should the limit be where we find AI acceptable? Is it okay in brainstorming and not so much in more refined writing? Is there a point at which it becomes a violation, and maybe a point at which it’s just a daily practice, a part of the lives of students?

Bhatia (6:56): I do think that AI in brainstorming can be a useful tool, especially, you know, if somebody is stuck on where to proceed. 

I think back to my own high school, for instance, where students were allowed to use AI in the brainstorming process. There were just specific parameters: if one were to use it, they would have to ask very open-ended questions. They wouldn’t be trying to ask AI to generate a question for them, or to generate a research idea for them. Rather, the AI tools would ask them questions that would prompt their own thinking and get them to approach their work in a certain direction, things like that, to help them develop a thesis on their own.

So I think that in the brainstorming process, you know, as long as AI isn’t doing the thinking for you, and it’s instead prompting you to do your own thinking, I think that there could be uses for it. 

Boiangiu (7:38): There’s a fine line between whether it’s prompting you to do your own thinking or whether you innocently start asking it more specific questions.

Bhatia: That’s true. Yeah. 

Boiangiu: And I think the brainstorming process is deeply human, and there is a lot of value in ideas being human and not being interfered with by AI. But I do think it is okay to use in later stages of projects: for polishing, say, write-ups when it comes to science, or for polishing writing when it comes to anything creative.

Tan (8:07): I can kind of see both sides to this question. If you’re, for example, saying, ‘give me a research thesis’ or ‘give me a research topic,’ I would think more deeply about why your assignment is even asking for a research thesis, and what the goal of a research paper is.

Really, the goal is for you to show your knowledge about something you’re deeply passionate about, and is a randomly generated thesis going to make you super passionate or super enthusiastic about that topic? I don’t necessarily think so. I think it has to come from your heart, and be something that you are genuinely passionate about.

I will say I think where AI does have a great spot is with studying: if you’re studying for exams or clearing up specific, targeted questions. For example, if you have a question about something that’s super, super niche, about the topic you’re writing about or the topic you’re studying, that maybe the internet can’t provide, it could at times be helpful to have AI apply general internet knowledge to your specific issue.

But I would again caution you when you are using this, because I know that sometimes AI will synthesize misleading answers, so that’s also something to be aware of.

Halbert-Alexander (9:10): Yeah, I’m interested in this idea that there’s some loss of passion or interest in the process of using AI for brainstorming. How do you think AI relates to student energy or student interest in the classroom? Do you think there’s a crisis of care going on here?

Bhatia (9:25): I do think so, yeah. I do think that it ties into this crisis of care. If you are passionate about something, you’re more willing to put in the work and the thinking necessary to write that paper, do that problem. And I think it’s when you’re not as interested in the subject that you might turn to AI, simply because you don’t have the time to do it, you don’t have the interest in doing it. So I think that, yes, turning to AI does stem from a crisis of care in your work.

Tan (9:51): I do agree with that, but I will say here, I think that when we are having these sorts of discussions, it’s very important not to berate or demonize people who do use AI for efficiency, because I think everyone is so busy. When you have a ton of assignments and you also want to go out with friends or spend time with family, I think it is very reasonable for you to say, hey, this one assignment I’m doing maybe doesn’t matter as much as spending time with my family or getting this really cool research opportunity in another field that I’m interested in, or something like that. 

So when everybody is getting bombarded with so many great opportunities, it’s not criminal for people to prioritize one thing over another. But I definitely do think in doing so, there is sort of this intentional loss of care. That’s someone’s choice. You can say what you want about it, but I think that can sometimes be a valid choice. But it’s also important to be aware that that is an intentional choice that’s being made. 

Halbert-Alexander (10:43): All right, maybe we can switch gears a little bit. When you’re talking about AI, when you’re thinking about AI with your professors, preceptors, and TAs, do you think there are generational differences? Do you feel like your professors understand and feel about AI the way that you and your peers do? Does it vary professor to professor? What does the student-faculty relationship look like in your classes?

Spatola (11:05): I think generally it just varies, professor to professor, preceptor to preceptor. Trying to make a generalization based on generation might be harmful. I think most professors, if they do talk about their AI policies, they are upfront about it, they are clear about it. 

And there is dialogue between students and professors about AI usage. I think generally, there’s a positive relationship between students and faculty surrounding AI usage, especially in classes which do permit some AI use. It might be hard for some students to accept that AI use is prohibited in some classes, but I’ve personally never seen that happen.

Bhatia (11:41): I do think that professors, just generally, being extremely experienced in their fields, whenever they talk about AI usage, there’s always the caveat of, ‘if you do use it, you should know that AI is misleading, that AI often features wrong information.’ I think about my politics course, for instance, American politics: we have an essay coming up, and we are allowed to use AI to help with the thinking process and the research process. My professor was very upfront that AI does make mistakes quite often.

And I think that, you know, having that kind of awareness that students do generally use AI for some purpose or another is a good thing for a professor to have. But I do also think that it’s important for them to make that distinction, that an overreliance on AI can often lead to misinformation or disinformation.

Boiangiu (12:35): Yeah. I think something that is often overlooked is that AI is a very new thing. We’re barely learning how to have these conversations. And I’ve found professors to be genuinely curious about, hey, how good is ChatGPT at my field? I’m already so good at it, obviously, because I’m Princeton faculty, but I would like to know, ‘how good is ChatGPT at undergrad math?’ 

And I think it comes from a sense of amazement, and the conversation goes both ways. I think they don’t claim the authority to come up with a very, very strict opinion, you know; they’re not being rigid about it. And I think that’s a very good thing, because it’s so new, and there’s a lot to learn still.

Tan (13:08): Yeah. I mean, I would have to agree with that. I think it does vary greatly by professor to professor, but overall, I think people are very interested in learning more, and there is, like, an active discussion going on between students and professors. 

And overall, professors seem quite willing to learn and quite willing to adapt as well. You know, for my freshman seminar, we took time out of class just to have a discussion about AI, and my professor was essentially like: what do you guys feel about AI? What do you guys want our policy to be? Let me work with you, and let’s figure out a policy that works for everybody.

So in that way, I think a lot of these professors are very genuine. They want to learn more, and they want to help facilitate that discussion. I also do think sometimes we overestimate the impact AI has had on classes overall. Really, I don’t think policymaking on AI within classes goes much past a quick discussion.

It’s very much like calculators: if you’re taking a math class, maybe they’ll tell you about the calculator policy, and if you want to go argue with the teacher about the calculator policy, then you go do that. But it’s not going to be the central topic of that course; the course isn’t going to repeatedly restate the calculator policy. So I think AI is treated very similarly.

Bhatia (14:21): I like that point, too, about the impact of AI, and I do agree. I think sometimes we do overestimate it. Especially in writing-based courses, it’s not as though, with the dawn of AI, we’ve no longer had to cite sources or synthesize arguments.

All those skills are still very much present throughout the writing process. And I think that even in the wake of, you know, increasing AI usage, students are still practicing those skills for themselves.

Halbert-Alexander (14:48): Yeah, I think on that idea of overestimation, there’s a lot of speculation about this utterly AI-dominated future. And so, when you think about your future careers, or even just your future as students, do you think that AI proficiency, that even just a certain level of comfort with its presence in academia, do you think about that as super important to your future as students or to your future careers? 

Or do you see this as more of a peripheral tool that you’ll have to, kind of, more passively contend with, like, how relevant does this feel for you and your futures?

Spatola (15:17): I think it really depends on the major. I know a lot of people who are thinking about careers in computer science — which, I am not one of those people — are pretty scared, because there’s this fear that AI is going to take their jobs. 

And I know that the computer science department here has been shrinking because of that; there aren’t as many students who are interested in pursuing it. And then conversely, other departments have been growing. I know ECE has been increasing because students from computer science are being funneled there instead.

I think for me, as someone who pursues fieldwork outside, AI can’t take my job, because somebody has to plant plants or collect data, and a robot cannot do that.

It is definitely a worry that, with a lot of jobs that aren’t hands-on, AI will dominate those fields. But I think that AI can never truly be original, and that’s something it will never be able to take away from us. I think originality is the cornerstone of a lot of research, and so if you have an original idea, you’ll be okay.

Bhatia (16:11): I agree with that. I echo this idea of hands-on work. I think that one day I’d like to be an attorney, a trial litigator, and I think that there’s a certain level of face-to-face interaction between attorneys and clients that you really just can’t achieve through the use of AI. 

You know, AI cannot replicate human relationships. And I think that at the end of the day, that’s what the law means to me, and that’s what politics means to me. Those are the fields I’d like to go into. They’re all about human interaction and human connection, and I think ultimately AI can’t replace that no matter how advanced it gets. 

Boiangiu (16:44): Yeah, I want to go into math academia. And for me, I think I am not scared about AI and the impact it’s going to have on higher math. I think AI is never going to get good enough to be able to replace actual researchers. However, it’s going to get good at doing the annoying parts, at grinding out the details, at solving that integral for you. 

But the ideas are still going to be yours. For STEM fields in general, I think it’s becoming more and more important not to know just one field: you can supplement computer science with some knowledge of physics, microbiology, even math, and your knowledge of computer science is not going to become obsolete. You just need to have something to back it up with in the age of AI. And I think that’s a very good thing, actually.

Tan (17:26): Yeah, I think there’s definitely levels to this. I definitely think all industries will be impacted by the use of AI, and I think that’s something that’s not going to go away and only going to be more integrated into our futures. 

But I think the sort of fear that AI is going to take away all our jobs is unfounded. I think a lot of the more tedious parts, or the office jobs, will be replaced. For example, I want to go into law, so I think most of the paralegal work will probably be replaced by AI, and maybe lower-level coders who do basic coding will be replaced too. But those top-tier lawyers, top-tier attorneys, or even top-tier software engineers who have those visions, those are always going to be wanted, and those are always going to be needed, because of their critical thinking skills and the vision they hold.

So I think it’s just going to depend on the industry. I definitely think there’s a threat to perhaps the lower levels, but the experts in each skill, so to speak, I think they’ll be okay.

Boiangiu (18:27): Which is why I would say we as Princeton students tend to not be as scared, because most of us have the dream of becoming the experts. So I think maybe we are biased in that way?

Tan (18:37): Yeah, definitely. And I think in this conversation, we do have to be aware of where we stand and the privilege we have, and be aware that the impact AI is going to have on us is going to be a lot less than the impact it’s going to have on some other people.

Halbert-Alexander (18:50): For sure, yeah. I think this idea that there’s a great difference between what generating expertise looks like and the kind of more basic labor that’s more easily automated is an important distinction.

So I think we can end with this: as you’re thinking about what future institutional policy should look like at Princeton, there’s a lot of back and forth between an absolute ban and accepting the, kind of, creative integration of AI into the curriculum. What do you think, on the whole? What direction should we be moving in, in terms of what we can allow AI to do for us and what we have to leave for ourselves, when we think about future policy?

Spatola (19:24): I think that there definitely is a group on campus who wants to move away from AI and who wants to just completely ban AI for all classes. And I do understand that sentiment, and I do understand what the fears are. But I do think that it’s something we have to accept. AI isn’t going to go away anytime soon. It’s only going to proliferate; there are only going to be more AI models released each year.

And as much as you try to run from it, it is going to be integrated into our society. And if you’re not moving alongside it, you might get left behind, because other companies, other schools, other organizations are going to implement it into how they do their work, and it is a beneficial tool in a lot of ways.

So I think, not necessarily a full ban, but not necessarily, ‘okay, it’s okay for every class in all situations,’ is probably the best decision, you know, just a really clear, really upfront policy on AI that is decided by both the faculty and the students for each class, you know, through dialogue, would probably be the best way to go.

Bhatia (20:21): I agree with that. I think that emphasis on dialogue between students and teachers is so important. And I think that leaving the lower-level, sort of menial thinking, what we’ve described as the more tedious part of the work, to AI wouldn’t necessarily be a detriment to any student’s education.

But I think ultimately what AI cannot replace, or should not replace, is critical thinking. And you know, obviously that looks different between a humanities class versus an engineering class versus a pure mathematics course, but I think ultimately that idea of critical thinking needs to be preserved in whatever form it takes.

Boiangiu (20:56): I do want to mention that to come up with new ideas and to produce research, you kind of have to get through that lower-level stuff first. So how much of this lower-level work we let AI replace in an academic setting is a very delicate question.

But I would also like to say that it feels really premature to make a firm decision on this matter, because we really don’t know how good AI is going to get. I feel like it would be wise to wait a bit, or at least be very fluid on the position that we take. 

Tan (21:28): Yeah, I definitely agree with that. AI is so new, and I think right now, the current discussions we’re having are definitely moving in a good direction. I think it’s going to depend on each class and what each student and teacher group decides. 

But I do want to add in just one caveat here, which is that I don’t necessarily think policy is the only thing we should be counting on, or the only thing we should be worrying about, because just because a class says one thing, or just because a teacher says something, doesn’t mean students are necessarily going to follow it. There’s also the responsibility of the student.

And I think if we’re at Princeton, we should be developing these critical thinking skills, and we should be developing our own morals and our own ideas of what we want to be. So in those cases, I would ask students: think about why you’re using AI, and think about, you know, in what cases would you like to use AI? Even if a course says, ‘oh, any and all AI is allowed’, I want you to think about: ‘will this further my learning process? Am I using it in a way that’s beneficial and that I think will enhance my work, or am I using it in a way that’s going to detract from my thinking?

Just because something's tedious, should I do it, or should I use AI to get rid of it? Do I really know what I’m talking about? Do I really know my stuff? When is my learning necessary for me to go through, and not for, you know, me to put this AI through?’ 

So I would say, like, the responsibility isn’t only on the teachers and the class, because no matter what they say, students can always find ways to get around that, right? It’s also going to depend on the students themselves. So I’d really encourage students to think about what they individually value and what they’re looking for in their learning.

Please send any corrections to corrections[at]dailyprincetonian.com.