During exam period, I ran into one of my classmates from a small computer science seminar I had taken last semester. He asked me how my final project had turned out.
I grimaced. "Not very well. I never actually got it working."
"Really? I'm so glad to hear that!" he said, sighing with relief. "Mine didn't work either!"
"That's awesome!" I replied, and we shared a happy moment between us, rejoicing in each other's failures. Neither of us was offended, of course. We knew that, under Princeton's grading system, two failed projects were much, much better than one.
Conversations like this are not so unusual at Princeton, especially after a big exam or assignment. It's not that we really want our classmates to fail; we just want some assurance that the curve will be generous. Still, considering how this school is always encouraging us not to compare ourselves with our peers, it seems odd that we have a grading system that does exactly that.
I had never been graded on a curve prior to coming to Princeton. Three years later, I still find the policy puzzling. I had always thought of "the curve" as a sort of insurance against exams that were accidentally made too difficult. But there's nothing accidental about excessively difficult exams at Princeton, especially in the first- and second-year math and science courses. I've taken an exam in which a score of 65 percent curved up to an "A," and I've heard of exams that set the "A" at 30 percent.
Are these exams really testing students on material they should reasonably be expected to know? If not, what does an "A" really mean?
In discussing grading policies with students and professors, I've heard two basic arguments in support of grading on a curve: 1) It can be difficult for professors to know what students will find difficult, and 2) Each semester brings a different set of students with different abilities.
I'm willing to believe that it's difficult for a professor to design a fair test, especially if the test material is very basic compared to the professor's area of research. But designing a test does not have to be all guesswork. In many courses, the material changes very little from year to year, so past years' exams (and students' performance on them) should give some clue as to what constitutes a fair exam.
Certainly some groups of students will perform better than others, and not just because of statistical variation. For example, because most prospective computer science majors take "General Computer Science" in the fall, many non-CS students will take it in the spring, hoping for an easier curve. Why should these two groups be held to two different standards? Should your grade in a class really depend on which semester you take it?
You might ask whether this issue is really worth complaining about. After all, curves usually help students, not hurt them. But I'm not asking for higher or lower grades, just for more meaningful ones. Ideally, grades should tell us how prepared we are for the more advanced material that will come in other classes and in our later work and research. To tell us this, our grades need to be based on more objective standards, not merely on the performance of one semester's group of students.

Thomas Ventimiglia is a computer science major from Princeton. He can be reached at tventimi@cs.princeton.edu.
