The Maclean's Rankings:

Students as Pawns in Annual Game

by Dr. Kenneth Cramer and Dr. Stewart Page

No doubt, educators and students will again ponder the release and marketing of this year’s Maclean’s university rankings. The widespread attention and financial success of this issue have strengthened the prominence and apparent credibility of such statistical exercises. Students and concerned parents increasingly turn to such rankings as reliable guides to “what every student needs to know” when it comes to selecting which Canadian university to attend.

. . . the average rank on most indices is typically not significantly different in comparisons between the top and bottom 50 percent of schools within each classification.

Last year, the rankings again crowned winners and losers, and multiple references were made to where the best and brightest students may or may not be found. The naiveté of magazine readers, and their inexperience regarding matters of measurement error and expected sampling differences, were exploited once again. For example, it was once reported that 85 per cent of the students at one particular school indicated that they liked the quality of their education, while at other major schools this figure was only 82 per cent. But is this difference in any way meaningful?

What is measured?

Every year Maclean’s ranks the 47 universities in Canada according to their scores on several indices that supposedly reflect student ability and characteristics, class size, faculty qualifications, and other parameters, including finance, library resources, and reputation. Schools are further classified in terms of basic program orientation, that is, as Medical/Doctoral, Comprehensive, or Undergraduate.

In previous publications (summarized in Cramer & Page, 2005) we have reported statistically based analyses of the Maclean’s ranking data for each year from 1992 to 2005. In every analysis so far we have found that the indices are not strongly intercorrelated, either conceptually or empirically, nor are they strong predictors of a university’s overall final rank. And within each university classification, whether Medical/Doctoral, Comprehensive, or Undergraduate, the average rank on most indices is typically not significantly different in comparisons between the top and bottom 50 percent of schools. For example, among universities in the Comprehensive category in the 2005 rankings, only 3 of 23 indices differed significantly in these comparisons.
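For readers who wish to see the form such a comparison takes, the sketch below (in Python, with invented scores rather than our actual data) applies a rank-appropriate test, the Mann-Whitney U, to the top and bottom halves of a hypothetical 12-school category.

```python
# A minimal sketch (hypothetical data) of the top-half versus
# bottom-half comparisons described above. Since the data are
# ordinal, a rank-based test such as the Mann-Whitney U is suitable.
from scipy.stats import mannwhitneyu

# Invented scores on a single index for a 12-school category,
# listed in order of final overall rank (best school first).
index_scores = [72, 69, 74, 68, 71, 70, 73, 67, 70, 69, 72, 66]

top_half = index_scores[:6]      # schools ranked 1-6 overall
bottom_half = index_scores[6:]   # schools ranked 7-12 overall

stat, p = mannwhitneyu(top_half, bottom_half, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
# Here, as with most real indices each year, the large p-value means
# the two halves are statistically indistinguishable on this index.
```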

Moreover, the measurement limitations of rank-based data do not allow for assessment of how “good” or “bad” the top and bottom universities are, or of how much they might differ from one another. Using cluster analysis as a method for identifying such similarity and dissimilarity between schools on the various component indices underlying the rankings, we have repeatedly found that many schools that differ considerably in final ranking, classification, and many other aspects are actually highly similar in their pattern of rank scores on these indices, and vice versa.
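The clustering idea can be sketched as follows (again in Python, with wholly invented rank profiles): schools are grouped by the similarity of their entire pattern of ranks across indices, irrespective of their final published positions.

```python
# A minimal sketch (invented data) of clustering schools by the
# similarity of their rank profiles across several indices.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows are schools; columns are ranks on four indices (1 = best).
rank_profiles = np.array([
    [1, 3, 2, 4],    # school A
    [2, 4, 1, 3],    # school B: profile very similar to A
    [9, 7, 10, 8],   # school C
    [10, 8, 9, 7],   # school D: profile very similar to C
])

# Agglomerative clustering on distances between whole profiles.
tree = linkage(rank_profiles, method="average")
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)  # e.g., [1 1 2 2]: A and B group together, as do C and D
# Two schools far apart in final rank can share a near-identical
# profile, while near neighbours in final rank can differ widely.
```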

Furthermore, using various sets of student satisfaction ratings, including those published by the Toronto Globe and Mail as well as those published last November by Maclean’s, we have found that student satisfaction seldom correlates well with the overall ranking results or with the individual ranking indices. Several prominent schools that usually fare poorly on measures of satisfaction do well in Maclean’s final ranks, while several lower-ranking schools score highly on satisfaction measures. Similarly, the satisfaction indices have correlated weakly with Maclean’s supplementary measures of “Most Innovative,” “Leaders of Tomorrow,” and so on. In the 2004 data, the top and bottom halves of the schools (in all university categories) in most cases showed no statistically meaningful differences in the satisfaction scores of recent graduates. These same data showed further that every school was rated as either “good” or “very good” by 90 per cent of the 12,400 graduates surveyed, seemingly supporting the view of essential comparability, and an overall perception of high quality, across schools.
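Since one of the two variables is itself a rank, Spearman’s rho is the natural correlation to compute; the sketch below uses invented numbers purely to show the method, not our results.

```python
# A minimal sketch (invented numbers) of correlating graduate
# satisfaction with final overall rank.
from scipy.stats import spearmanr

final_rank = [1, 2, 3, 4, 5, 6, 7, 8]             # published overall rank
pct_satisfied = [84, 88, 82, 90, 85, 91, 83, 89]  # per cent "good"/"very good"

rho, p = spearmanr(final_rank, pct_satisfied)
print(f"rho = {rho:.2f}, p = {p:.3f}")
# A weak, non-significant rho mirrors the finding that satisfaction
# seldom tracks the published rankings.
```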

The Pitfalls of Rank-Based Data

. . . two schools may spend $30.00 and $29.99 on library holdings per student, yet be placed a full rank apart.

Maclean’s elevates and transforms small, sometimes minute, statistical differences to the status of discrete differences in rank. Vital information is frequently lost. For instance, three runners in a marathon could finish in 120 minutes, 121 minutes, and 180 minutes, thus ranking 1st, 2nd, and 3rd. Yet the first two runners are but a minute apart, and both an hour ahead of the next one. Similarly, two schools may spend $30.00 and $29.99 on library holdings per student, yet be placed a full rank apart. Indeed, one school could improve its rank by spending one or two extra cents per student. This same situation applies to every measure and index used to construct the Maclean’s rankings. Rank-based data (third, fifth, etc.) allow only limited quantitative comparisons between the ranked items, and no reliable interpretation of the size or nature of the differences between them.
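The loss of information is easy to demonstrate; the few lines of Python below use the marathon and library figures from the paragraph above.

```python
# A minimal sketch of the point above: converting raw values to
# ranks throws away the size of the differences between them.
from scipy.stats import rankdata

finish_minutes = [120, 121, 180]   # two runners a minute apart,
print(rankdata(finish_minutes))    # [1. 2. 3.] -- the third an hour back

library_dollars = [30.00, 29.99]   # spending per student, one cent apart
print(rankdata([-d for d in library_dollars]))  # [1. 2.] -- a full rank apart
```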

University rankings, as currently conceived, function in a dynamic system, something like the constantly changing fortunes of players in a golf tournament. A given school cannot simply improve by “doing better,” since its final rank depends simultaneously on the pattern of improvements or declines shown concurrently by other schools, whether one considers a single evaluative index or many. Arguments about apparent or sudden “improvements” in a given school must also be considered in light of the basic invalidity and unreliability of the overall system itself, defined as it is by an arbitrary and unreliable set of indices.

Rankings Reflect Resources

Careful consideration of the ranking indices shows that the Maclean’s rankings essentially reflect a school’s resources and its budget allocations. In every analysis of ranking data to date, we have found high, significant correlations between final ranks and the size of the operating budget. A school’s claim to be “much improved” or “number one” in one aspect or another must therefore be evaluated both in terms of concurrent changes at other schools and in terms of the absolute or proportionate number of dollars expended on the aspect in question. Furthermore, it is difficult to achieve consensus on how rankings or their underlying indices will be perceived. For instance, a colleague at the University of Toronto perceives his school as being “horribly underfunded.”

. . . most undergraduates chose their university for a variety . . . of reasons, unlike the type of indices put forward by Maclean’s.

What university indices or parameters are most important to students? In one recent study, we found that most undergraduates chose their university for a variety of personal, practical, geographical, logistical, and financial reasons, quite unlike the type of indices put forward by Maclean’s. In fact, most students do not simply “choose” a university in the usual sense of making a rational, statistically-based comparative choice among several alternatives as one might do in choosing from a used car lot. When University of Windsor students were asked to evaluate the Maclean’s indices alone, we found that what mattered most to them were factors such as the proportion of students who graduate, awards and scholarships, reputation, class sizes, and the proportion of classes taught by PhDs. Less important factors included the entry of students from out of province, finance and library issues, proportion of both graduate and undergraduate international students, and faculty research grants.

In some ways, ranking exercises have become more important to institutions than to the students who attend them, and in rather unprincipled fashion have come to affect managerial and administrative priorities. The administration at one school recently encouraged professors to report class sizes smaller than they actually were. This practice is potentially dangerous because courses with “small classes” (perceived as pedagogically superior by Maclean’s) are often the first candidates for cancellation, since they appear to have insufficient enrollment or “student interest.” In another example of “teaching for the ranking,” a colleague at a major school reported to us that he had recommended a highly capable Master’s-level student to teach an undergraduate class. The administration denied the request, pointing out that such an action would lower the percentage of tenured faculty offering courses to students, thus endangering the school’s standing on the underlying Maclean’s index. A more formally qualified but less appropriate instructor was thus deemed preferable to a less formally qualified but more appropriate one.

The Implications for Student Welfare

The implications of ranking exercises for student welfare, amid much advertising and hype about ranking results and “where the bright students are,” remain unrecognized. Moreover, such a state of affairs is worsened by the idea that there exists a single (and annually decided, like the World Series) “best” or “worst” school. Clearly, students attending some schools of supposedly lower quality will come to feel less confident and less academically secure, and their academic performance may be compromised in the process. We know of or have met many such students.

Thus the stage is set for yet another form of educational self-fulfilling prophecy, in which students attending less prominent schools come to perform below their potential, while likely perceiving themselves as disadvantaged. A large number of studies in social science have shown that teachers (even at the postsecondary level) may alter the style, amount, interpersonal climate, and content of their teaching depending on expectations or assumptions held about students’ ability. For that matter, placing schools in a linear rank order, in the manner of Consumer Reports, with supposedly “brighter” students toward the top, reinforces the increasingly discredited assumption that intelligence and academic ability are conventionally measurable quantities, that is, linear and static skills rather than collections of different and nonstatic skills.

A moral question thus remains: should Canada’s schools be ranked, merely because we can, as one might do with a toaster or television?

We also know from studies of stereotyping that students do not think, learn, or generally perform as well when they are stigmatized or put down in some way, through the operation of variables such as social class or other demoralizing forces such as nationally publicized “evaluations” of the school they attend. Of course, in the opposite direction, other students will participate in positive rather than negative self-fulfilling mechanisms, coming to achieve well by following the cues and positive expectations communicated to them by their social environments. Ranking exercises may thus begin to affect students in ways similar to the effects of social class: a set of social forces which colour much of the learner’s perception of ability, potential, and self-worth, and perhaps success in seeking a job.

A moral question thus remains: should Canada’s schools be ranked, merely because we can, as one might do with a toaster or television? In this era of “top ten” lists, we tend to believe that ranking can and should be carried out in any domain, without regard to whether significant and meaningful differences exist. Simply because it may be reasonable to rank toasters (since we can use sensible and interpretable indices to do so), does it follow that it is reasonable to rank communities, schools, or, by direct implication, the students who attend them? Every school has a special mission and local colouration which cannot be reflected in typical rank-based data, or in the assumption of a single “best” school suitable to all students’ needs and all styles of learning.

Higher education in Canada faces an escalating problem, based in effect on the emergence of a new type of self-fulfilling mechanism. The future will show whether Canadian university authorities will choose to be sufficiently wise, apolitical, and morally principled to recognize and respond to the issue of rankings, including their implications for student welfare and academic performance. A telling feature of the Maclean’s rankings is that, while the overall response rate in its 2005 reputational survey was 11 per cent, by far the largest return from any subgroup of respondents (41 per cent) was that of “university officials.” It seems a plausible hypothesis that these unidentified individuals, comfortable in completing anonymous forms, tended to support their present school or a school they had attended.

We recall that when Maclean’s published its first (1990) set of annual rankings, several university presidents, academics, and other authorities were openly critical, on several counts, of the overall exercise and of the underlying ranking criteria. Those criteria remain largely unchanged today. Yet universities now feel they cannot protest too much, nor fail to supply the raw data that help Maclean’s generate each year’s rankings. Now, with the financially lucrative ranking exercise recognized as a highly publicized annual event, those earlier voices, especially the ones from more prominent schools, have gone piously quiet.

Reference

Cramer, K. M., & Page, S. (2005). Ranking universities: A moral issue with harmful effects. Academic Matters (OCUFA), Fall 2005.

Kenneth M. Cramer, Ph.D., is an Associate Professor of Psychology at the University of Windsor.
Stewart Page, Ph.D., is a Professor of Psychology at the University of Windsor.

© 2006, Ken Cramer & Stewart Page. All rights reserved.

Reader Comments

2006-Mar-09 at 12:38
Tim Brunet
The real loss for students is that the rankings don't give students specific departmental information, which is a better measure of prospects. Students are also missing out on great scholarships at some of the lower-ranking schools. I've met students who won't even consider Windsor because of the ranking. Yet if they did their research they would know that we would have their best scholarship offer, the highest opportunity for mentoring, and a great all-around education. Ask Debbie Loach from the K/W area, who chose Windsor for Operational Research. A 98% average got her over 30,000, mentoring, co-op, and one of the nicest res rooms in Ontario. A last-place ranking steers these students away from their best offers.
Tim.

