MEASURING INPUTS IS HARD. Measuring outputs is harder. Signs of Chaos links to a disturbing essay by two St. Louis Fed economists looking for simple-minded ways to improve the academy's productivity. The economists recognize this is not as easy as it looks.
While the economist’s general definition of productivity, namely outputs relative to inputs, is straightforward, the definition is too simple to guide management strategies aimed at increasing productivity. A more thorough definition of productivity recognizes that productivity can be divided into two parts: efficiency and effectiveness. Efficiency refers to the level and quality of service that can be obtained given an organization’s fixed resources. Thus, an organization is considered more efficient if it can increase the level or quality of service without increasing the amount of inputs used. Effectiveness, on the other hand, refers to how well an organization meets the demands of its customers. The customers in higher education are students, parents, employers and state legislatures. Customer demands may include such outcomes as a specialization of knowledge in a specific area, career assistance and job placement and, probably most important, the graduation of well-educated and productive students.
Let's begin at the beginning. The composition of outputs and inputs is harder to measure than the report suggests. Here is a productivity primer. This is Recommended Reading. I am still working on that promised longer post on academic productivity and the defect rate. It's job-talk week. Business before pleasure. For now, understand this.
The traditional measure of total factor productivity growth is defined by the path-independent Divisia index.
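For concreteness, the Divisia TFP index mentioned in that sentence takes the familiar growth-accounting form (standard notation, not drawn from the quoted paper):

```latex
\frac{\dot{A}}{A} \;=\; \frac{\dot{Y}}{Y} \;-\; \sum_{i} s_i \,\frac{\dot{X}_i}{X_i},
\qquad
s_i = \frac{w_i X_i}{\sum_j w_j X_j},
```

where $Y$ is output, the $X_i$ are inputs, and $s_i$ is each input's cost share: productivity growth is output growth net of share-weighted input growth.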
Read on. (It will appear again in that longer post.) Keep in mind that everything else in the paper follows from the premise of a single output Y. (That is true of primitive measures such as student credit hours per faculty member.) But even in the single-output case, the true measure of productivity growth compares quality-adjusted output to quality-adjusted inputs. The primitive measure misstates both parts of the comparison. A university, moreover, might be producing multiple outputs, such as quantitative knowledge, aesthetic sensibility, awareness of the oppression of others, or not running with scissors. Productivity growth is more difficult to measure under such circumstances. None of which stops the primitives.
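To see how the primitive measure can mislead, here is a toy calculation. The numbers are illustrative assumptions, not data: credit hours per faculty member rises 10 percent, but once output and inputs are quality-adjusted, Divisia-style TFP growth is far smaller.

```python
import math

def primitive_growth(sch0, sch1, fac0, fac1):
    """Growth in the crude measure: student credit hours per faculty member."""
    return (sch1 / fac1) / (sch0 / fac0) - 1

def divisia_tfp_growth(dlnY, dlnX, shares):
    """Divisia TFP growth: output growth minus cost-share-weighted input growth."""
    return dlnY - sum(s * dx for s, dx in zip(shares, dlnX))

# Credit hours rise 10% while faculty headcount is flat ...
crude = primitive_growth(100_000, 110_000, 500, 500)

# ... but suppose quality-adjusted output rose only 4% (larger sections,
# thinner grading), while quality-adjusted faculty input rose 2% (more
# contact hours per person) and capital input rose 3%.  All hypothetical.
dlnY = math.log(1.04)
dlnX = [math.log(1.02), math.log(1.03)]   # faculty, capital
shares = [0.7, 0.3]                       # assumed cost shares
adjusted = divisia_tfp_growth(dlnY, dlnX, shares)

print(f"primitive measure:     {crude:+.1%}")
print(f"quality-adjusted TFP:  {adjusted:+.1%}")
```

The crude measure reports roughly +10 percent; the quality-adjusted measure under these assumptions comes out under +2 percent. Different quality adjustments could just as easily push the comparison the other way, which is the point: the primitive measure misstates both the numerator and the denominator.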

Signs of Chaos sees the signs, and they are not encouraging.
First, note that this discussion apparently assumes the faculty has no interest in, or control over, what programs an institution offers or what courses its departments offer—or, perhaps more accurately, that the faculty ought to have no such interest. Faculty are to be treated purely as an input, not as anything more central to the academic process.
Although faculty governance appears to be a survival from the Joshua Lawrence Chamberlain age, its value ought to be greater in an era of administrations dominated by failed scholars, political zealots, and wannabe captains of industry with sails full of MBA wind but neither a feel for the tiller nor an understanding of the compass. That, however, will require faculty to rethink the comparative-advantage argument by which strong researchers avoid the university committees.

A second point addresses the tradeoffs of tenure. The third point addresses scholarship.
Third, this proposal assumes that the only activity in which the university engages (ought to engage?) is teaching—the transmission of knowledge. For land-grant colleges, Congress explicitly identified a mission that includes generation of knowledge as well as transmission. And, at most research-oriented institutions, outside research funding generally more than covers research costs and can be an important contributor to the institution’s budget.
I like Steven Landsburg's metaphor for the research university. Would you rather be a fly on the wall at an animated conversation or a participant in it? Put another way, the University of Phoenix (motto: talk shop with competitors for three credits) must rely on the experimentation of others to put its learning modules together. Continuing the quote,
Fourth, this proposal would almost certainly lead to increased turnover in academic programs, which would, over time, almost certainly lead to instability and to a failure to develop substantive expertise.
Academic tenure exists as a safeguard against the excesses of legislators, trustees, and administrators. Would curricula be less faddish in its absence?
Fifth, this proposal assumes that the best arbiters of what academic programs ought to include are the students. Now, I’m broadly sympathetic with the notion that student concerns ought to be incorporated into the decision-making. But at my institution, taken to its limit, this would mean that business majors (for example) would know no statistics.
I'm skeptical about "taken to its limit" arguments in the absence of an epsilon and a delta. There are market tests. Presumably the university currently trades off some drop-outs against some hiring at the job fair in deciding to retain calculus in the business curriculum. (The essay being criticized proposes "well-educated and productive" students as two dimensions of output. That leaves room for "Statistics is tough. Deal with it.")
