U.S. News & World Report's 2005 Law School Rankings

By JOANNA GROSSMAN (lawjlg@hofstra.edu), Tuesday, Apr. 06, 2004
Last Friday, U.S. News & World Report ("U.S. News"), a national news magazine, released its annual rankings of law schools. Law school deans, faculties, students, alumni, and, most importantly, prospective students monitored the Internet for the inevitable few-days-early leak of the rankings, for they knew important decisions would hinge on the results.
This "rite of spring," as law professor David Yamada has termed it, has become part of the law school culture. It is alternatively criticized and exploited--often by the very same people and institutions.
This column will cover both the history of law school rankings and the current controversy over them.
Some Rankings History: From Jack Gourman to U.S. News & World Report
The originator of law school rankings was Jack Gourman, a retired professor of political science, who self-published rankings of a wide range of graduate and undergraduate programs, beginning as early as 1967. The latest edition of "The Gourman Report" to include law school rankings was published by The Princeton Review, a nationally known test preparation company, in 1997. (The Princeton Review continues to rank aspects of law school programs, like student satisfaction, but has ceased producing a ranking based on overall quality.)
Gourman's rankings were always controversial, primarily because his reports assigned schools scores to the nearest hundredth--suggesting a fairly high level of precision--and yet he refused to release his methodology publicly. Adding further fuel for those suspicious of Gourman's methodology was the fact that his rankings sometimes included odd results--ranking a highly acclaimed school near the bottom, or an obscure, regional school among elite national ones.
In 1990, U.S. News began ranking law schools, and it fast became the leader in the field. It sells hundreds of thousands of copies of its "rankings" issues (one for undergraduate colleges and universities, one for graduate schools). Since 2004, its law school rankings designate the "Top 100" law schools--in rank order--followed by an unranked "third tier" and "fourth tier" that together include all remaining law schools.
The rankings purport to provide an "overall" ranking of law school quality, and many who look at the rankings treat them as if they do just that. In fact, however, the rankings are derived from a series of discrete factors that may or may not, when added together, actually give even an approximate estimation of a law school's quality -- either standing alone, or relative to other law schools.
The Methodology Behind U.S. News's Rankings
U.S. News's methodology gives the greatest weight to a school's reputation, and to the LSAT scores of its incoming students.
Twenty-five percent of a law school's overall ranking is derived from its reputation among academics, and an additional fifteen percent from its reputation among practitioners and judges.
How is "reputation" quantified? Four faculty members at each ABA-accredited law school (the dean, the academic dean, the head of the faculty hiring committee, and the most recently tenured faculty member) are asked to rank every law school on a scale of 1 to 5, and to leave unscored any school about which they do not have enough information. They are told nothing about each school, but are instructed to take into account a wide variety of factors that may bear on academic reputation.
Twenty-five percent of each school's score is based on "student selectivity," which includes the median LSAT scores of first-year students, their undergraduate grade point average, and the school's rejection rate.
The remainder of the score is based on factors such as expenditures per student, library size, employment rates for recent graduates, and bar passage rates.
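To make the arithmetic concrete, here is a minimal sketch, in Python, of how a weighted composite of this kind might be computed. The twenty-five, fifteen, and twenty-five percent weights come from the description above; the split of the remaining thirty-five percent, and all of the factor scores, are hypothetical assumptions for illustration only.

```python
# Illustrative weighted composite in the spirit of the methodology
# described above. The 25/15/25 weights are from the article; the
# split of the remaining 35% across "placement" and "resources" is
# a hypothetical assumption, as are all the factor scores below.

WEIGHTS = {
    "academic_reputation": 0.25,      # survey of academics
    "lawyer_judge_reputation": 0.15,  # survey of practitioners and judges
    "selectivity": 0.25,              # LSAT, UGPA, rejection rate
    "placement": 0.20,                # assumed split of the remainder
    "resources": 0.15,                # expenditures, library, etc. (assumed)
}

def overall_score(factors: dict) -> float:
    """Weighted sum of factor scores, each normalized to 0-1."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

# A hypothetical school with a strong reputation but middling resources.
example = {
    "academic_reputation": 0.90,
    "lawyer_judge_reputation": 0.85,
    "selectivity": 0.80,
    "placement": 0.75,
    "resources": 0.60,
}
print(f"overall: {overall_score(example):.2f}")  # roughly 0.79
```

The point of the sketch is simply that the composite is a weighted sum: whoever sets the weights decides the ranking.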
Common (and Justifiable) Critiques of U.S. News's Law School Rankings
At first glance, this list of factors may seem sensible. And indeed these factors represent important components of legal education. But consider two things: what's missing, and how the relative importance of each factor is determined.
First, let's look at what's missing. A 1998 study of the validity of U.S. News's Rankings, commissioned by the Association of American Law Schools (AALS), noted with serious concern the lack of any measure of either faculty quality or the "educational benefits of attending a certain school." Yet one would think these criteria ought to be absolutely central.
The AALS study also raised concerns about the seeming arbitrariness of the weight assigned particular factors. As Indiana law professor Jeff Stake has demonstrated with "The Ranking Game," changing the weights of various criteria can cause significant changes in the overall rankings. At his site, a viewer can assign different weights to various criteria and re-rank the schools accordingly.
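The toy code below, a sketch in the spirit of Stake's Ranking Game, makes the point concrete: the same three hypothetical schools, with the same factor scores, come out in different orders under different weightings. The schools and both weight vectors are invented.

```python
# Toy demonstration of how re-weighting criteria reshuffles a ranking,
# in the spirit of Jeff Stake's "Ranking Game." All data are hypothetical.

schools = {
    # (reputation, selectivity, resources), each normalized to 0-1
    "School A": (0.95, 0.70, 0.60),
    "School B": (0.70, 0.95, 0.80),
    "School C": (0.80, 0.80, 0.90),
}

def rank(weights):
    """Order schools by the weighted sum of their factor scores."""
    scores = {
        name: sum(w * f for w, f in zip(weights, factors))
        for name, factors in schools.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

print(rank((0.6, 0.3, 0.1)))  # reputation-heavy: School A comes out first
print(rank((0.1, 0.3, 0.6)))  # resource-heavy:   School C comes out first
```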
Finally, the AALS study reported on the significant risk of "strategic rating" by academics. The problem is this: Suppose a faculty member at a school in a competitive law school market (for instance, New York, which boasts an incredible density of law schools and therefore stiff competition for students) is asked to complete the reputation survey. That person has an incentive to assign unfairly low scores to competitor schools, in the hopes of improving his own school's ranking, relative to the competition. This kind of strategic rating almost certainly occurs in law school rankings, as it does with other sets of rankings.
Opportunities to Manipulate the Rankings: Perverse Incentives Hurt Students and Schools
Strategic rating is hardly the only way in which these rankings might be manipulated. Schools can take a variety of cosmetic steps to improve their score in particular categories, without changing the overall quality of the underlying education they provide. As a result, the U.S. News system incentivizes schools to value these cosmetic changes over far more meaningful ones.
For example, a school might restrict the size of its first-year class in order to push its median LSAT and UGPA higher. Then, to make up the lost revenue, it might admit an inordinate number of second-year transfers, whose scores do not count toward the school's medians.
This kind of step virtually guarantees lots of student displacement and unhappiness -- some students will have to transfer after they've settled in elsewhere, while those who started there in the first place will lose the cohesiveness of a class that goes through all three years together. And it delivers no educational benefit in return.
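A toy calculation with invented LSAT numbers shows why the maneuver is tempting: trimming admits from the bottom moves the reported median up, and later transfers never enter the statistic.

```python
from statistics import median

# Hypothetical LSAT scores for an entering class of ten.
first_years = [170, 168, 166, 165, 164, 163, 162, 160, 158, 155]
print(median(first_years))     # 163.5

# Shrink the class by cutting the bottom four admits...
smaller_class = first_years[:6]
print(median(smaller_class))   # 165.5 -- the reported median rises

# ...then backfill the lost revenue with transfers, whose scores
# never figure into the reported median.
transfers = [159, 157, 156, 154]
```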
Or a school might raise tuition, but give back the difference in scholarship money to students--thereby keeping the bottom line the same, but improving in the "expenditures per student category." (This is one of the specific problems noted in the AALS study.) One would hope some of the time spent on this creative bookkeeping could instead be spent on attracting more scholarship money for students who need it.
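The bookkeeping is easiest to see with invented numbers. In the sketch below, tuition rises, the same amount flows back as scholarships, the bottom line is untouched, and yet reported spending per student climbs; every figure is hypothetical.

```python
# Hypothetical round-trip: raise tuition, return the increase as aid.
students = 300
old_tuition, bump = 30_000, 5_000

tuition_revenue = students * (old_tuition + bump)  # looks bigger...
scholarship_outlay = students * bump               # ...but flows right back

net_revenue = tuition_revenue - scholarship_outlay
print(net_revenue == students * old_tuition)       # True: bottom line unchanged

# Yet reported "expenditures per student" rise by the full bump,
# because scholarship outlays count as spending.
print(scholarship_outlay / students)               # 5000.0
```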
To improve its "rejection rate"--part of the selectivity rating--a school can both reject good applicants who are certain not to come, and encourage applications from students who are certain not to be admitted because they are not good enough. Again, misery is ensured -- students get recruited, and then rejected, for cynical reasons that have nothing to do with what the school thinks of their potential merit. And top students might be left wondering why they got into better schools, while being rejected from worse ones.
The list of perverse incentives goes on and on. Law libraries have an incentive to keep books they don't need, such as out-of-date editions of casebooks, in order to report a higher number of volumes in their collection.
And perhaps the most pervasive attempt to "manipulate" the ratings is the vast proliferation of brochures, pamphlets, and other glossy, hyperbolic mailings law schools send around the country. These materials tout the schools' new hires, recently tenured hires, recent publications, distinguished visitors, or any other "category" they can devise. Because U.S. News places so much emphasis on reputation -- as opposed to, say, actual quality of education -- the inevitable result has been the schools' own wasteful and unseemly emphasis on public relations.
Dubbed "law porn" by one law professor, according to an article by Professor Brian Leiter, these mailings are advertising, pure and simple, designed to increase the likelihood that a law professor, judge, or practitioner will know something about a particular school and therefore give it a higher rating. There is no reason to believe that better brochures reflect higher quality law schools, and yet they may improve a school's ranking nonetheless.
This sort of strategic response to rankings is of course not unique to law schools. According to a February 14, 2004 article in the New York Times, writers were exposed earlier this year for posting ostensibly anonymous reviews on Amazon.com praising friends' (or their own!) books, and dismissing those of competitors. (A software glitch temporarily destroyed anonymity, so that real reviewer names were briefly posted on the Canadian site.)
Likewise, authors have been "caught" ordering massive numbers of their own books to boost their own sales numbers. Since "bestsellers" are determined by the number of books sold, and sites like Amazon.com rank authors, as often as daily, based on the number of sales, such big orders (which the author may later try to return) can often have at least a temporary positive effect.
It is thus hard to imagine any system of rankings that would not trigger some sort of unintended response. But U.S. News's system is perhaps especially open to "gaming" -- with its stress on reputations and rejections, as opposed to quality of faculty and teaching.
"Inherent" Flaws in the U.S. News Rankings
And even without intentional manipulation, many of the U.S. News factors are vulnerable to criticism.
The academic reputation survey, for example, asks people to assign numerical scores to 178 law schools, some of which they have never heard of. While some respondents might decline to rank such schools, others will not. John Sexton, dean emeritus of New York University Law School, suggested to the New York Times that "Princeton Law School" (which does not exist) would almost certainly end up in the top 20 if added to the list.
The importance of student scores to a school's overall rating also makes it more likely that a school will pursue a student based on numbers alone, rather than based on a broader assessment of the student's abilities, or the likelihood that she will contribute meaningfully to the educational environment. Educational diversity is threatened by the over-reliance on objective scores and rankings.
Even factors that are arguably very important to prospective law students, such as post-graduation employment rates, are not presented in a meaningful way by U.S. News. Granted, the survey reports on the percentage of graduates employed at graduation, and nine months later. But "employment" is not synonymous with "legal employment," and a job at McDonald's counts the same as a job at a prestigious law firm.
This is presumably not what a prospective student wants to know about the school he might attend. Also, what about partnership chances later? A lot of lawyers find that to be the most important hurdle of all.
Law school deans--collectively--have been citing these and other critiques of the U.S. News rankings for years, and have asked the magazine to cease publishing them (to no avail, obviously).
And law school applicants receive a standard letter signed by almost every dean cautioning that all available ranking systems are "inherently flawed" and serve as an "unreliable guide to the differences among law schools that should be important to you." (The subtext, of course, is: U.S. News, this means you.) Finally, individual deans often preface any mention of the rankings with a standard disclaimer about their invalidity.
But often those disclaimers are followed by a boast about the school's high overall rank, or its reputation for a particular specialty. Almost no school totally shuns the rankings, or lives up to the principles the deans' objections might dictate. It's just too tempting to cite a high rating -- or a "promotion" from a lower to a higher "tier" in the rankings.
One Answer to U.S. News's Flaws, but Is It Any Better?
The only real competitor to U.S. News's law school rankings today is Brian Leiter's Educational Quality Rankings (EQR), a relatively new web-published report issued every two years.
Leiter made an arguably successful launch of his EQR site by purporting to offer a more objective measure of law school quality than U.S. News. His first rankings were received more favorably by law school deans, who perceived fewer opportunities for manipulation than with the U.S. News rankings.
In its first iteration, the EQR site departed from U.S. News in two significant ways. First, seventy percent of a school's overall ranking was derived from "faculty quality" (a much more significant percentage than in the U.S. News rankings).
Second, EQR gave equal weight to subjective measures of faculty quality and objective ones. (U.S. News has no objective measure of faculty quality.) The subjective measures were taken from the academic reputation scores in the U.S. News survey. The objective measures of faculty quality were based on the faculty's frequency of citation and per capita rate of publication, since faculty judge each other primarily by their publishing records.
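As a sketch of what a per capita, citation-based measure of this kind might look like, the code below averages per-faculty citation and publication rates. The faculty figures, the normalizing constants, and the equal weighting are all assumptions for illustration; nothing here reproduces EQR's actual formula.

```python
# Sketch of an objective faculty-quality score built from citation and
# publication counts, normalized per capita. All figures, the normalizing
# constants, and the 50/50 weighting are hypothetical.

def per_capita_score(total_citations, total_publications, faculty_size,
                     cite_norm=100.0, pub_norm=3.0):
    """Average of per-capita citation and publication rates, each scaled
    by an assumed normalizing constant so the result lands near 0-1."""
    cites = (total_citations / faculty_size) / cite_norm
    pubs = (total_publications / faculty_size) / pub_norm
    return 0.5 * cites + 0.5 * pubs

# A hypothetical 40-person faculty.
print(round(per_capita_score(3200, 90, 40), 3))  # roughly 0.775
```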
Beginning with the 2003-04 rankings, the EQR changed the subjective measure of faculty quality, replacing the U.S. News reputation scores with its own survey of "leading" junior and senior scholars in law schools.
This subjective ranking methodology differs from U.S. News's academic reputation survey in a few ways: First, EQR surveys only "active and distinguished" scholars (selected by Leiter), rather than surveying designated people at every school.
Second, EQR provides respondents with a list of faculty at each school and lists each school by number rather than name in order to avoid undue influence of preconceived notions about the quality of a particular school. (U.S. News's rankings have also been rightly faulted for being self-perpetuating -- certainly the "reputation" U.S. News measures is powerfully affected by the U.S. News rankings themselves.)
Third, EQR seeks responses from participants with differing levels of seniority and diverse academic specialties.
For now, the objective and subjective measures of faculty quality in EQR are presented separately "for students to weigh as they deem appropriate."
U.S. News and Leiter's EQR do reach different results, at least at the margins. The top 10-15 schools are almost identical, with only minor variations in the order among them. But for some schools, the differences can be quite stark. For example, the University of San Diego is ranked 20th in the EQR, but only 67th by the latest U.S. News rankings, while Notre Dame is ranked 20th by U.S. News, but only makes EQR's list as a "runner-up" to the top 40.
For now, Leiter's rankings are not a co-equal competitor with U.S. News. Fewer students rely on them, particularly since they rank only the top 40 schools, which a majority of law students in the United States do not attend. But EQR's influence is likely to continue to grow, as students scrutinize their options, relying on as much information as possible.
EQR's rankings--both the overall rankings and more targeted ones like Most Cited Law Faculty, Supreme Court Clerkship Placement, Best Teaching Faculties, and Where Tenure-Track Faculty Went to Law School--are offered as an alternative to U.S. News's rankings. But, in the end, the audiences for the two do not entirely overlap.
The Impact of U.S. News's Rankings on Law Schools
U.S. News's law school rankings, as I have noted above, drive decisions made by schools (which students to recruit, admit, and, most importantly, entice with scholarship money) and by applicants (where to apply, where to matriculate, and whether to transfer). It turns out that they also drive decisions that are more central to the academic enterprise -- decisions about resource allocation, faculty hiring, curriculum, and so on. U.S. News, in many cases, is the impetus for concrete decisions, regardless of any pedagogical purpose or effect.
The result of such a system--both arbitrary to begin with, and subject to manipulation--is that a school could make a meteoric rise in the rankings without actually improving its quality, or take a dive without actually declining. The latter can be devastating, and given the reliance by law school applicants on the rankings, major downward shifts in rankings, in particular, often become self-fulfilling prophecies.
Once a school falls in the rankings, students with higher numbers opt to go elsewhere, and, within a year or two, the student numbers match the school's new lower ranking. All this might happen even though the school is no worse than it was to begin with.
And students, the "consumer" for whom these rankings are ostensibly designed, fare no better. They may choose to attend a school that is not right for them, simply because it is ranked more highly than another one.
The exaggeration of differences in law school quality and the flaws inherent in techniques to measure them should make all of us in law school communities shy away from rankings. While law students are cautioned by many "authorities" to make decisions about where to matriculate based on self-knowledge and independent investigation of law schools, U.S. News remains, for many, the critical factor.
Legal education thus remains hostage to the U.S. News rankings. It is a loss to us all that standings in the rankings exert such an important influence on law schools, since money, time, and energy would be much better spent improving the quality of legal education we provide.
In the end, students might be better served by less ostensibly "objective" (but actually quite subjective) information about law schools, and more subjective assessment of the fit between a student's needs and a school's offerings.