'Tis the season once again when Princeton undergraduates confront one of the more momentous decisions they must make in their lives, namely: Should they or should they not fill in the course evaluations distributed to them at the end of their courses? Less politely, the question is: Should these evaluations be taken seriously?
If the low response rates to these surveys are any guide, most Princeton students believe the answer is "No." They may have a point, for one may doubt that surveys like these would be taken seriously by anyone in the world outside of academia — by political pollsters or market researchers, for example — where common sense trumps brilliance.
When opinion surveys have response rates of less than 50 percent, as tends to be the case with our course evaluations, they may be subject to serious response bias. This bias arises when respondents to a survey differ systematically in their opinions from nonrespondents. Given the archaic manner in which our course evaluations are administered, the University could not possibly have a clue about the nature of any such response bias. How would one get at it without knowing who had and who had not completed the evaluations?
Serious survey researchers would wonder whether the opinions of students who skipped 20 percent or more of the lectures in a course should be given the same weight as those of students who attended most or all of the lectures, as they are now in our course evaluations. In a similar vein, it would certainly be instructive to know whether students who performed well in a course offer systematically different assessments of it than students who did not perform as well. Modern information technology has progressed far enough that such stratification would be eminently feasible. Not so with our paper-based surveys, which rival ancient chalkboards in quaintness.
The wonder is that academics, who never hesitate to tell the rest of the world how to reach perfection in its various endeavors, are content to apply such a flawed, outdated information system to their own work. More curious still is that academics pretend to use these flawed survey data for serious purposes.
Students should know that sizeable sections of the subcommittee reports prepared for faculty meetings on the promotion of junior faculty are devoted to "Teaching," as are goodly fractions of those faculty meetings. One must wonder what goes through the minds of professors as they pore over these flawed data at such faculty meetings. Do they ever worry about the low response rates to these surveys, or about the fact that in courses with only a dozen or so students, one outlier among, say, five or six respondents can seriously distort the averages? (If four respondents rate a course 4 and a fifth rates it 1, the average falls from 4.0 to 3.4.) And those averages, as noted, may be computed from a seriously biased sample to begin with.
In my courses, I do not usually ponder the numerical responses to these surveys, for reasons that should now be obvious. But I do read quite diligently the constructive (as distinct from destructive) comments offered by students on the white sheets accompanying these surveys. Comments such as "the homework exercises, like, totally sucked" will be dismissed as destructive. On the other hand, comments such as "more emphasis should be placed in the homework exercises on applications relevant to current events" or "homework exercises should focus more on material on the final exam" are constructive and will attract my attention. Professors may or may not respond positively to these comments, especially when the respondents exhibit a wide dispersion of mutually contradictory preferences, as is so often the case. But I, for one, have always welcomed such comments from students and have taken many of them into account in composing future versions of a course.
Students who would like to benefit future cohorts could also communicate with their professors directly via email, after course grades have been submitted, or, if they prefer anonymity, use the USG hotline for that purpose. That approach would permit lengthier comments than are feasible on the white sheets. I would imagine that most professors would welcome such comments. I certainly would.
But never mind all that. Because I am paid to do so, I shall dutifully, and with an utterly straight face, hand out this week the Registrar's hallowed survey instruments, along with the cute little pencils he has sent me for that purpose. They are the University's gift to students who choose to complete the forms. A randomly chosen student will collect whatever responses are offered and rush them over to the Registrar's office, where eventually they will be computer-read and tabulated, without any stratification or adjustments whatsoever, for momentous decisions by the faculty further down the line. It's one of those rituals that form part of our tradition.

Uwe E. Reinhardt is the James Madison Professor of Political Economy and a professor in the Wilson School. He can be reached at reinhardt@princeton.edu.
