The Evaluation of Teaching

This page written by Dan Styer, Oberlin College Physics Department;
last updated 11 April 2000.

In February 1997 I started a discussion concerning the evaluation of teaching on PHYSLRNR, a listserv devoted to physics education research. There were a number of interesting comments from which I learned a lot. In this page I summarize the discussion and my own thoughts.

Before you can assess the quality of teaching, you have to determine the goals of teaching. Theodore J. Marchese shows that this is more difficult than it sounds.

There are four main techniques for the evaluation of teaching. They can be used in parallel or in isolation. For an important decision, such as a tenure case, it might be reasonable to use all of them. For an annual or biannual salary review it is more reasonable to use one or two. The techniques are:

  1. Course evaluation surveys from current students.
  2. Teaching evaluation surveys from course alumni (either senior surveys or post-graduation surveys).
  3. Peer evaluation by faculty (either within the department or from outside).
  4. Portfolios.
Course evaluation surveys from current students.


Advantages:

  1. High response rate.
  2. Easy to do.
  3. Gives immediate feedback to teacher.
  4. Students are evaluating events in the recent past, so they have good recall.
Disadvantages:

  1. Student response lacks perspective: e.g. students might not understand the basis for some subject matter choices made by the teacher until months or even years after taking the course.
  2. Tends to reward entertaining or flashy presentations rather than straightforward yet valuable ones.
  3. Even when asked to "evaluate teaching per se" students often evaluate instead whether they like the subject matter covered in the course.
  4. Rewards teachers for treating simpler subjects, because it's easier to get good evaluations for "clear presentation" if the teacher presents easy material rather than hard material!
  5. Discourages innovation, because students usually rate highly the teaching formats that they've seen before. (This point was emphasized in the listserv discussion: One professor gave an instance where he presented arguments supporting and opposing a point of view. Student course evaluations came back with comments like "What do you expect us to do? Think?" Another stated frankly that "When I started cooperative groups in my large introductory course, my student evaluations plummeted.")

An important point concerning student course evaluations is the need for a variety of evaluation forms for a variety of course types. For example, the Oberlin Physics Department uses different forms for lecture and laboratory courses.

A richer discussion of student course evaluations, including research results that expose common misconceptions, is given in "What do they know, anyway?" by Richard Felder. (Richard's son Gary was an Oberlin College student and one of my honors students.)

Teaching evaluation surveys from course alumni.


Advantages:

  1. Alumni respond with perspective.
Disadvantages:

  1. Each alum responds with his/her own perspective, which might not be relevant to course goals.
  2. Low response rate.
  3. Hard to do.
  4. Delayed feedback to teacher.
  5. Tends to reward memorable presentations rather than straightforward yet valuable ones.
  6. Alumni are evaluating events from their distant past, so comments are likely to be vague and colored by intervening experience.
  7. Even more so than current students, alumni tend to evaluate subject matter rather than teaching. (This point is debatable.)
Peer evaluation by faculty. In this scheme each evaluator sits in on a few class meetings given by the professor under evaluation.


Advantages:

  1. Evaluators understand the difficulties involved in teaching and hence are less likely to be swayed by superficial gloss.
  2. Feedback to teacher can be immediate and excellent.
  3. Evaluators can themselves learn teaching techniques from those being evaluated.
  4. Good separation between evaluation of teaching and evaluation of subject matter.
Disadvantages:

  1. Time consuming and/or costly.
  2. It is easy for the evaluator to fall into the trap of asking "Does this professor teach the way I do?" rather than "Does this professor teach well?".
  3. Difficult to judge whether students are being excited, which is often a key course goal.
  4. Evaluator sees only one or two classes, but these are a small part of the total course, which consists of dozens of classes, as well as discussion groups, homework, readings, exams, papers, laboratories, etc.
  5. Difficult for evaluator to judge the overall structure and strategy of a course.
  6. Evaluators often find the presentation boring, but only because they already know the subject matter.

Richard Felder has written on this subject too: "It Takes One to Know One".

Portfolios. In this scheme the professor under evaluation prepares a "list of courses one has taught recently; changes that one has made as a result of reflection, or research, or attendance at conferences or workshops; sample course materials, evidence of student learning (e.g. extracts from assignments), papers on teaching, lists of workshops and/or education related conferences attended, letters of reference from students, and student evaluations." One could also include a statement of educational philosophy, syllabi, assignments and handouts given to students, computer programs used or written, class notes or lesson plans, etc. This portfolio is examined and evaluated by other faculty.


Advantages:

  1. Good rejection of the "superficial gloss" effect.
  2. Feedback to teacher can be excellent.
  3. Evaluators can themselves learn teaching techniques by examining portfolios.
  4. Good separation between evaluation of teaching and evaluation of subject matter.
Disadvantages:

  1. Time consuming to evaluate the portfolio. There is a great temptation to skip over important parts.
  2. Delayed feedback to teacher.
  3. Microscopic examination of the course can ignore overall course structure and strategy.
  4. Difficult to judge whether students are being excited, which is often a key course goal.
  5. Evaluator sees only part of the course, whereas student evaluators see all of it.

Richard Felder also has some thoughts on "The Uses and Abuses of Portfolios".