As universities are pushed to offer valid and reliable accounts of their work and performance, a frenzy for measurement and evaluation has swept campuses worldwide. University administrators, eager to capture university productivity in some raw number, are driven by the quest for simplified metrics, algorithms, and statistics.
The controversial Purdue president Mitch Daniels, who as Indiana governor came under fire for performance-driven school reforms that many academics argue broke the backbone of K-12 education in the state, is now set to introduce metrics for measuring critical thinking.
Faced with a faculty uncertain about the meaningfulness and effectiveness of any research design that would capture critical thinking, Daniels responds: "How about we get going on the meat and potatoes of critical learning and not put that off another 12 months? … There could be a little learn-by-doing involved, too."
For Daniels, a little learn-by-doing is the solution to the fundamental uncertainty about the meaningfulness of measuring critical thinking. The problem with this line of thought lies in treating critical thinking as "meat and potatoes" that can somehow be weighed on a simple scale and matched to a dollar amount. In this worldview, we can calculate an ROI (return on investment) for critical thinking.
This race to measurement is of course not unique to Purdue but is an increasingly global phenomenon.
Long lines of administrators, drawn from the swelling ranks of Associate Deans, Vice Deans, Assistant Deans, and sub-Assistant Deans, need to justify their existence, their own lack of productivity, and their mediocre performance by inventing new sets of metrics that create new domains of busywork for them. Presidents and Provosts, with their exorbitant salaries, need to demonstrate that they are doing something to be accountable.
You see, the problem in all of this lies in the very intent and objectives behind these exercises. At the rate at which these metrics come and go, it becomes evident that every administrator must have some new campus-wide exercise to leave her or his mark on the university. The administrator's concern, then, is to show that some busywork is taking place, that some new paradigm of managing universities is being invented.
It does not matter how good the work is, how good the design is, or whether the design is informed by good science.
The race to measurement, then, is also a race to new initiatives, new processes, and new campus-wide exercises, often detracting from the fundamental commitments to research, teaching, and meaningful engagement that ought to define the life of a professor. Let's not forget the resources and money that go into these new measurement and accountability exercises. Where is the data to demonstrate that these new processes and initiatives actually worked?
We as faculty are often made to grudgingly fill out yet another round of paperwork, evaluative tools, and performance metrics to satisfy a new administrator's fanciful obsession with the "meat and potatoes" of some new entity.
We find our days filled with filing paperwork, completing some new e-process, writing up new sets of objectives, and then randomly coming up with new metrics to evaluate against those objectives. I say randomly because, more often than not, there simply isn't a robust set of systematic indicators to shape these processes.
Increasingly, the long hours at the computer filling out forms also mean fewer and fewer hours with our students, less and less time spent understanding them, guiding them, and nurturing them. The busyness of paperwork and e-forms takes up so much of our time and energy that we start forgetting the fundamental mission of why we are here: to serve our students.
All these efforts would perhaps make sense if we knew that the measures and measurement processes were themselves accountable, if we knew in transparent ways the science behind the metrics, evaluative exercises, and new processes, and if these decisions were grounded in robust research. But all of this would require remaking how universities make decisions. The opaque decisions made by trustees and the short-sighted decisions made by administrators must be rendered visible to the faculty, for the faculty to debate and decide on as a collective, through deliberation. New initiatives must be ratified by an elected faculty senate or some similar decision-making body grounded in faculty participation and faculty evaluation of data.
Critical thinking, President Daniels, cannot be reduced to "meat and potatoes," and we cannot run a "fly by the seat of our pants" operation to measure critical thought. There exist fundamental philosophical differences on measurement and on the meaning of measurement. I suggest you begin by reading the literature that lays out the key philosophical, theoretical, and empirical debates. Once you do, you will perhaps have a greater sense of the uncertainty faculty feel about such measurement operations and about questions of research design, face validity, construct validity, reliability, and so on.