Author Archives: Don Sutherland

The Debate over Assessment Intensifies


On April 17, 2019, Inside Higher Ed reported:

Ask the many assessment haters in higher education who is most to blame for what they perceive as the fixation on trying to measure student learning outcomes, and they are likely to put accreditors at the top of the list.

Which is why it was so unexpected last week to hear a group of experts on student learning tell attendees at a regional accreditor’s conference here that most assessment activity to date has been a “hot mess” and that efforts to “measure” how much students learn should be used to help individual students and improve the quality of instruction, not to judge the performance of colleges and universities.

…”whenever we try to directly measure what students have learned, what they have gotten out of their education,” Etchemendy [former Stanford University Provost John Etchemendy] continued, “the effect is tiny, if any. We can see the overall effects, but we cannot show directly what it is, how it is that we’re changing the kids.”

The frustration over the seeming low returns on assessment when it comes to student learning might be the result of larger structural factors. First, learning is highly complex and not fully understood despite continuing advances in science. Second, researchers at Carnegie Mellon University and Temple University found that there are more than 205 trillion instructional options available. Third, perhaps the overall educational process is similar to chess in that an accumulation of nuances leads to decided outcomes over time.

The first structural factor could impede the ability of educational effectiveness assessment to determine causes of student learning and a college’s role in driving it. The second would leave a dizzying array of possibilities that would need to be evaluated to gain meaningful understanding. The third would lead to perhaps unrealistic assumptions about what assessment can actually deliver, as breaking through the forest of nuances would require a “systems approach” to assessment and a “systems picture” to explain outcomes.

And if the above structural challenges were not enough, there is also the reality that good assessment practices are not equally distributed across institutions of higher learning. Instead, they tend to be concentrated in select programs or schools, often those that are driven by program accreditation. Accessing such practices and expanding them throughout an institution can be a difficult endeavor, even as it is an important one.

However, the effort is not futile. If one looks across the broad landscape of organizations and activities for which planning takes place, assessment is integrated with that planning. Assessment is integral to the sciences and to the advancement of scientific research. Assessment is critical in both the for-profit and non-profit worlds, regardless of industry or the nature of the business.

In all of those activities, assessment is guided by what can be known, what should be known, and how one should respond to the data/information that is gathered.

Those elementary questions might represent a useful point for reframing contemporary assessment in Higher Education. Legitimate concerns should be taken seriously. The practice, as with all professions, has the opportunity to become better at what it does. At the same time, those who would simply do away with assessment bear the burden of advancing a viable, well-developed, and superior alternative to it.

That is a most daunting task. After all, without the rigors of the scientific method, science would never have achieved the breakthroughs that have created new knowledge, improved lives, and transformed the world at dizzying speed. A similar narrative may well hold true for Higher Education, even if the full promise of assessment has not yet been realized.