DESCRIPTION OF THE ASSESSMENT WORKSHOP
Judi Beinstein-Miller and Patty deWinstanley (Psychology Dept, Oberlin
College) developed and conducted an Assessment Techniques Workshop for
faculty at Oberlin College, June 2000.
Goals of the Workshop.
We had several goals for this workshop. We wanted faculty to understand
the importance of doing assessment and to grasp some of its basic
principles. We also wanted to teach faculty basic assessment techniques
and to help them develop tools for assessing their
own curricular innovations. We realized that most faculty members were
unfamiliar with assessment procedures and might be resistant to additional
work created by assessment requirements. We also realized that assessment
could pose threats to faculty members when the success of their innovations
was not certain. Consequently, an additional purpose of this workshop
was to enhance faculty members' commitment to assessment by affirming
their feelings as natural while demonstrating the benefits of assessment
work. As we indicated at the beginning of our workshop, we believed that
relatively small efforts could yield useful information about an innovation's
effectiveness and ways to improve its impact. In the long run, we hoped
that if faculty members came to appreciate and participate in assessment,
the assessment process at Oberlin College would persist after AIRE ran
its course.
The Assessment Workshop took place during the summer, soon after classes
ended. It was a short, one-day workshop: two hours in the morning, followed
by lunch and then two hours in the afternoon. Faculty members whose proposals
for curricular integration of research had been selected for funding by
the AIRE were strongly encouraged to attend and were paid for their time.
Other faculty from the College of Arts and Sciences were invited but not
compensated (except for a free lunch). Faculty participants were told
that this would be an active workshop during which they would develop
their tools for assessment and were asked to bring in a list of educational
goals for a course they wanted to assess. Fourteen faculty attended, eight
of whom were AIRE curriculum development grant awardees.
We began the workshop by discussing the purposes of assessment and some
of the problems with doing it. After affirming participants' likely assessment
concerns, we introduced the workshop as a relatively painless means to
learn assessment principles and techniques. We then emphasized that the
first step in assessment is the clarification of educational goals and
we talked about how and why one would do so (see below). We then emphasized
that one needs to develop measurable outcomes to be able to determine
how well those goals have been achieved and we described four types of
measurable outcomes (see below). Subsequently, we discussed two assessment
tools: self-report and instructor assessment (see below). Once we had
discussed these basic principles of assessment, we worked on some concrete
examples. First we used the Assessment Workshop as an example of how to
develop assessment tools (see Workshop Handouts, example 1). Next, we
discussed a component of a course as an example of how assessment tools
were developed (see Workshop Handouts, example 2). We then split the participants
into small groups, where they developed tools to measure educational goal
achievement in a hypothetical course called General Introduction to Science
(see Workshop Handouts, example 3). The course was described as an application
of science to real world problems, such as conservation of environmental
resources, disease prevention, and improvement in racial/ethnic relations.
Five course goals were provided and we assigned each group two of these
goals, for which they developed outcomes and outcome measures. When they
were finished, we discussed their measurement ideas and shared our own.
After sharing ideas for this course, we had lunch, and then reconvened
in small groups where participants worked together and used ideas from
the morning session to develop assessment tools for their proposed curriculum
development projects.
Clarifying Educational Goals and Measurable Outcomes (See also Assessment
Workshop Handouts).
Curricular innovation occurs because the course instructor
wants either to introduce new educational goals or to develop new ways
to help students better achieve existing goals. The first step
of assessment involves the clarification of these educational goals. Once
clarified, these goals must be further specified in ways that can be measured,
i.e., measurable outcomes must be determined. These outcomes can then be
measured through the use of assessment tools (see below). For example,
an educational goal might be to have students learn experimental design
and analysis, and a measurable outcome might be that students can detect
design flaws in an experiment. We recognized that curricular innovations
are often initiated with only general expectations of improvement, and
that more specific outcomes are often not apparent until the end of the
course. But we noted that post facto evaluations of this type, often unsystematic,
preclude the use of procedures, such as pretests and posttests, that can
supply important information about project impact. So we urged participants
to clarify their reasons for innovating at the outset. Once they recast
their reasons as educational goals, finding measures of their achievement
would not be difficult. For example, our goals for the workshop were to
create a favorable climate for assessment activities and to impart basic
assessment skills. However, to know whether we had achieved these goals,
we would need to measure the achievement of more specific outcomes such
as participants' changes in attitude about assessment and their abilities
to construct specific kinds of assessment tools.
We emphasized that there are often multiple reasons for curricular innovation,
using the workshop as an example. In an assessment of the workshop, we
would examine more than participants' acquired knowledge about assessment.
Because we also wished to encourage positive attitudes and intentions
regarding assessment, we would examine these outcomes as well. Similarly,
it is seldom that we wish students to acquire information only. We also
wish them to develop positive attitudes and intentions with regard to
what we teach. For these reasons, we usually use assessment tools to measure
more than the acquisition of knowledge alone.
We suggested that participants think in terms of four different types
of measurable outcomes. These types of outcomes include changes in attitudes,
intentions, knowledge, and skill. Attitudinal outcomes are so named because
they involve students' beliefs about course material and their consequent
feelings toward it. In general, we want our students to find course material
interesting, stimulating, and thought provoking so that they will feel
positively toward it.
The second type of outcome involves students' intentions to use the material
or seek out similar experiences in the future. Intentions could range
widely, from application of ideas outside the classroom, to taking additional
courses or expanding learning in other ways. One reason why student attitudes
are important is because they can influence intentions, for example, to
take additional courses, declare a major, or think about a related career.
The third type of outcome involves changes in knowledge. We want students
to learn something they didn't know before and to think differently because
of it, less like they did at the beginning of the course and more like
we think about the material. Of all outcomes, this is the one that is
most commonly evaluated, for example, through examinations and student
research papers. Here the desired outcomes are implicit in the grades
that we give for performance. However, for some outcomes it is only with
pretests and posttests that we can determine whether our particular course
had an effect.
The fourth type of outcome involves skill acquisition, including information
collection procedures, analytic procedures, problem-solving procedures,
and communication, both oral and written. As in the case of knowledge
outcomes, we grade student skills and therefore know, at least implicitly,
what the desired outcomes are.
The first step in developing assessment materials, then, was to specify
the educational goals and measurable outcomes that were relevant to the
curriculum development project. Their specification would involve making
implicit evaluation criteria explicit, so that these could be used to
develop assessment tools.
Assessment Tools (See also Assessment Workshop Handouts).
Once goals and measurable outcomes are specified, measures of their achievement
can be created. We asked participants to think about two types of tools,
those involving student self-reports and those involving instructor evaluations,
and stressed the value of using both. Attitudes and intentions are subjective
states, which could be estimated by student self-reports of beliefs and
feelings, but also by their future course-taking behavior, which could
be investigated by the instructor. Changes in knowledge and skill could
be estimated by relatively objective tests, similar to those administered
for midterm or final evaluations. These could also be estimated by student
self-evaluation of learning in the curriculum development project-related
areas relative to learning in other areas. Multiple measures that yielded
similar answers would increase confidence that the educational goals were
being achieved.
Again we used the workshop to provide assessment-related examples. If
we conducted an assessment of workshop outcomes, what kinds of tools could
we use? To measure participants' attitudes toward assessment, we could
ask them to rate the importance of assessment for teaching effectiveness
and other learning outcomes. To measure their intentions regarding assessment,
we could ask the likelihood of their using specific assessment tools.
To measure knowledge and skill acquisition, we could ask for self-ratings
of assessment knowledge and ability to construct assessment tools (surprise
tests might be awkward to administer to colleagues!). Ideally these questions
would be asked before the workshop and then again afterwards, to determine
whether there were short-term changes in outlook, knowledge, and skill.
But we could also investigate the ways they evaluated, or failed to evaluate,
their projects in the following semester, and this information would increase
our confidence about the workshop's impact, or lack thereof.
We asked participants to remember four important points when constructing
assessment tools. First we stressed the use of multiple items to measure
each outcome, because answers to multiple items would be more reliable
than answers to single items. Instead of asking one question to measure
an outcome, a handful of related questions could be asked. Second, we
stressed the use of multiple types of assessment tools to measure each
outcome, because these supplied different yet converging estimates of
change. Self-reports provide a subjective view and instructor evaluations
a relatively objective one. Third, we encouraged them to take pre-project
and post-project measures whenever possible, because these would assess
how much individual students changed relative to their starting point.
And finally, because changes in attitude, intention, knowledge, and skill
could occur for a variety of reasons, we suggested they use a relevant
comparison group whenever possible. If there were two sections of a course,
and research projects were introduced in only the first, then the second
could serve as a comparison group. Comparison of their answers to the
same questions would indicate whether the curriculum development project
had made a difference.
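The third and fourth points above (pre/post measures and a comparison
group) can be sketched numerically. The following Python illustration
uses invented scores and hypothetical section labels; it simply compares
the average pre-to-post gain in an innovated section against the gain in
a comparison section:

```python
# Hypothetical illustration of pre/post measures plus a comparison group.
# All scores are invented 1-5 ratings, each already averaged over
# multiple questionnaire items (point one above).

def mean(scores):
    return sum(scores) / len(scores)

# Average per-student ratings, before and after the course, per section.
innovated_pre = [2.4, 3.0, 2.8, 2.6, 3.2]
innovated_post = [3.8, 4.2, 3.9, 3.6, 4.4]
comparison_pre = [2.5, 2.9, 2.7, 3.1, 2.6]
comparison_post = [2.9, 3.1, 3.0, 3.3, 2.8]

# Gain within each section, relative to its own starting point.
innovated_gain = mean(innovated_post) - mean(innovated_pre)
comparison_gain = mean(comparison_post) - mean(comparison_pre)

# The difference in gains estimates the innovation's added impact,
# beyond changes that would have occurred in any section.
added_impact = innovated_gain - comparison_gain
print(f"Innovated section gain:  {innovated_gain:.2f}")
print(f"Comparison section gain: {comparison_gain:.2f}")
print(f"Estimated added impact:  {added_impact:.2f}")
```

Subtracting the comparison section's gain is what separates the project's
effect from improvement that any section might show over a semester.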
Evaluation of the Workshop.
The workshop was successful in a number
of ways. For faculty already familiar with assessment principles, it
encouraged them to develop and use assessment tools. They enjoyed
talking with other faculty members about their goals and outcome measurements.
Faculty who were not knowledgeable about assessment learned the importance
of making course goals explicit before planning an assessment strategy,
and learned some basic tools for implementing that strategy. Even those
faculty who have not yet followed through by assessing their curriculum
development have gained an appreciation of how assessment can be useful
and appear to be more willing to assist in assessment plans.