WHY I CALL IT ASSESSING THE OBVIOUS. Minding the Campus points to a statement from the Intercollegiate Studies Institute about the assessment fad.
Ongoing assessment diverts teachers from teaching. Instead of preparing their courses, meeting with students, or grading papers—in short, executing their teaching duties—they must spend a substantial amount of time worrying about how to assess what they teach. Moreover, academic deans, instead of overseeing assessment activities, might be better engaged in useful activities such as developing young faculty or securing grants. No one, to my knowledge, has done a serious cost-benefit analysis of whether the innumerable hours faculty and administrators expend on assessment could be better used on activities that directly benefit students. No one knows what opportunities have been lost to the demands of devising and implementing assessment instruments.
If the professors are not doing the assessing, the essayist suggests, the hired help is.

Universities, where much of the actual teaching is done by inexperienced graduate students, do not expect their research-oriented faculty to perform assessment. Indeed, many prestigious university professors have no idea what assessment is. At the most highly rated universities, assessment is carried out by staff hired expressly for that task. For example, the University of Virginia has a Department of Institutional Assessment and Studies that reports directly to the State Council of Higher Education. It is unlikely that the state would not accredit its own state-sponsored, tax-supported university.

The real scandal of outcomes assessment, the one nobody talks about, is that the methods used to assess usually produce very little worthwhile data. Departments and programs create assessment tools through a process that (1) sets goals for student learning, (2) gathers evidence of whether students have learned what is expected, (3) interprets the information gathered, and (4) adapts teaching methods in light of the evidence. Every social scientist knows that the only valid way to measure human phenomena is with double-blind experiments in which neither those who actually administer the tests nor those who take them know what’s being tested.

Of course, this is impossible when assessing college programs. Students know exactly why they are being assessed. Even worse, faculty and administrators whose programs are being assessed not only are the people administering the assessment instruments, but they are often the people charged with devising them. Such a system is easily abused, since no one wants to look bad. Measurement tools are constructed that simply validate what teachers and administrators are already doing.

Put briefly, the essay identifies two problems. First, if there are assessment professionals, they are likely graduates of the colleges of deaducation, otherwise known as the home office of academic mediocrity. Second, if the assessment is supposed to produce constructive self-criticism, there has to be a Holy Office somewhere to identify the sinners (or is it running dogs of capitalism? I get my zealots confused) and to specify the appropriate penance. I disagree, however, with the claim that double-blind experiments are the only valid measurement tool.

Apparently I am not alone in characterizing the activity as assessing the obvious.
The dirty secret is that teachers pay almost no attention to assessment outcomes. They learn little from the exercise—considering it only another (usually uncompensated) onerous administrative duty—and they often dismiss the findings because of the way accrediting agencies structure the activity. Since assessors cannot be experts in every academic field, they require that every department and program aggregate information. The people doing the assessing are not capable of judging the merits of syllabi, tests, and papers from outside their field of study. Thus, they make all departments homogenize the “outcomes” into a form comprehensible to a generic reader. The problem is that students learn chemistry differently than they do a Dostoyevsky novel, and assessment measurements that attempt to aggregate information across disciplines may miss this important difference.
As if the graduates of the college of deaducation could distinguish Dostoevsky from Lobachevski in the first place, but I digress.

The post concludes with a reminder of the duty of the successful professor.
Teachers assess all the time. They read student papers and exams to discover if students have learned. They ask questions in class and engage students in discussion. They look over student evaluations to see if the way they are associating with students is being well received. They are always trying to find better ways to help students grasp the material. Why do they need to spend time in another elaborate and meaningless type of assessment? They don’t—and it’s time to say so.
