That's so staggeringly obvious that people don't understand it. (Sorry, the late Yogi Berra could probably put that more colorfully.)
Thus, it's noteworthy when the Pope Center, generally standing in opposition to business as usual, defends research and publication, and Phi Beta Cons, set up to call out the academy's follies, picks it up.
Making faculty engage in research and produce a finished, publishable product is good professional discipline. It shows that someone who engages in college teaching is more than a glorified primary or secondary school teacher. He or she is taking another step upward on the road to becoming a bearer of higher education.

It's not so much that the emphasis on research and publication is "excessive" as that, in the absence of solid metrics for the quality of the research being done, professors think in terms of the "minimum publishable unit," and the cynic advises the tenure-trackers that deans can't read, but they can count. Thus emerge the vanity presses, archival journals, and electronic publications, and the best efforts of faculties to rank journals or to calculate "impact factors" can be for naught. Or they can lead ambitious professors to cut corners.
Such discipline requires concentration and in this case excludes the possibility of college instructors getting tenure or promotion simply because they receive good “evals” from the kids and because they arrive at faculty meetings on time.
Most importantly, I can see no reasonable alternative to what has been mocked as "publish or perish" as a component in a tenure decision. The fixation I have observed on being an "effective" teacher is a slippery slope leading nowhere but to a bigger popularity contest among instructors.
Student evaluations tell us nothing more than whether instructors come to class, speak audibly, and are generally coherent. Since the notably expansive evaluations typically come from angry students, it may be necessary to make allowance for this mood factor, as well as for the limited knowledge of the person rendering the judgment.
We also now have “scientific” approaches to evaluating teaching performance that take into account lists of class assignments and lesson plans. By the time I retired four years ago, young faculty were also obliged to attend seminars on how to make their teaching “delivery system” more student-friendly. Those seminars were almost always arranged by older faculty and administrators who had never done any research, other than what they euphemistically called “teaching research,” which meant preparing their classroom presentations.
Although the onetime emphasis in some universities on research and publication may have been excessive, the refusal to weigh these factors as significant criteria for academic advancement has engendered even worse results.
Researchers stated that there was strong pressure on them to publish in a limited number of top journals, "resulting in important research not being published, disincentives for multidisciplinary research, authorship issues, and a lack of recognition for non-article research outputs". Even worse was that the need to get into these top journals led to "scientists feeling tempted or under pressure to compromise on research integrity and standards".

Incentives matter. Reward quantity, get quantity. Reward trendy approaches, get trendy approaches. Reward originality, even if it takes longer, get originality. Reward reasonable attempts at replication and improvement on method ...
The authors of the lament about corner cutting recognize as much.
In the end, science is a human endeavour. And like humans everywhere, those who work in it will do what they are rewarded for, for better or for worse. So we need to make sure those reward structures are encouraging good quality research, not the opposite.

So staggeringly obvious that people won't understand it.