
The U.S. News Law School Rankings: A Guide for the Perplexed

Brian Leiter
May 2003

The U.S. News methodology for ranking law schools is confusing, comprising 12 different factors in 5 different categories.  Two things are crucial to recognize at the start, however.  First, the relative weight of the factors varies dramatically:  some have a significant effect on the results (reputation, median numerical credentials, expenditures), while others matter hardly at all (size of the library, acceptance rates, bar passage rates).  Second, the factors vary quite a bit in their susceptibility to artificial manipulation by law schools.  Because some of the factors are highly manipulable, the overall ranking results are meaningless, though, alas, no less influential for that.

Note also that U.S. News has actually held its "methodology" for ranking schools constant since 1999, after making changes every year prior to that.  In 1999, as a consequence of having hired someone with expert knowledge of statistics, U.S. News made perhaps the single most dramatic change in its methodology:  it started adjusting per capita expenditures (item 6, below) to reflect differences in cost of living.  The results were so dramatic (Alabama turned up in the top 50; Fordham and Boston College dropped out of the top 25, though BC has since returned) that in 1999 U.S. News stopped printing the "Faculty Resources" rank (as it calls this category):  it would have been too obvious how this irrelevant expenditures category had skewed the rankings.  (Of course, it still skews them, in favor of small schools like Yale and against large schools like Harvard, but more on that shortly.)
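
U.S. News has never published the exact form of its cost-of-living adjustment.  A minimal sketch of the idea in Python, assuming a simple division of per-student spending by a regional cost index; the index values and dollar figures below are invented for illustration, not U.S. News data:

    # Sketch of a cost-of-living adjustment to per capita expenditures.
    # Division by a regional cost index is an assumed form; all numbers
    # here are hypothetical.
    col_index = {
        "high-cost city": 1.20,
        "low-cost city": 0.85,
    }

    def adjusted_per_capita(total_spending, enrollment, city):
        """Per-student spending, deflated by the local cost-of-living index."""
        return (total_spending / enrollment) / col_index[city]

    # Two hypothetical schools with identical nominal per-student spending:
    print(adjusted_per_capita(30_000_000, 600, "high-cost city"))  # ~41,667
    print(adjusted_per_capita(30_000_000, 600, "low-cost city"))   # ~58,824

After the adjustment, the school in the cheaper city looks roughly 40% better funded per student, which is the kind of swing that moved Alabama up and Fordham and Boston College down.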

Here are the factors U.S. News employs to rank schools, in descending order of importance.  Each factor is also described as either "Highly Manipulable," meaning schools can exercise, through deceit or otherwise, a lot of control over the criterion, or "Not Manipulable," meaning the criterion is basically beyond a school's control.

  1. Academic reputation (25% of the overall score).  Not Manipulable.  25% of the overall score is a function of academic reputation, as measured by a survey, done in mid-fall, of some 700 law school deans and faculty (about two-thirds fill out the surveys).  Since U.S. News switched to the new ranking scale in 1998 (1-5, where 1 is marginal and 5 is distinguished), there have been far fewer ties than there were early in the 1990s.  The scores have also stabilized into predictable clusters:  Yale, Harvard, Stanford; then Columbia and Chicago; then Michigan, NYU, Berkeley; then Virginia; then Penn; then Cornell, Duke, Georgetown, Northwestern, and Texas; then UCLA; then Vanderbilt; then Southern California, Wisconsin, Minnesota, and Iowa; and so on.  These results aren't ridiculous, but, at least with regard to faculty quality, they are also dated:  NYU, for example, is clearly better than Michigan, and certainly on a par with Columbia; Chicago is at least on a par with Stanford; Yale is better than Harvard; and so on.  This is unsurprising given that evaluators are presented only with a list of 180 school names, and nothing more.  For a snapshot of what leading legal scholars think about faculty quality when actually presented with faculty lists, see http://www.utexas.edu/law/faculty/bleiter/rankings/rankings03.html.  Schools have little control over the results of the academic reputation survey:  even improving the quality of a school (its faculty, its student body) does not necessarily result in any increase in its academic reputation score.  (Case in point:  NYU has seen its academic reputation score decline during a period when both its faculty and student body improved.)

  2. 15% of the overall score is based on reputation among lawyers and judges.  Not Manipulable.  These results reflect a survey of lawyers at large firms and of federal and state judges.  The response rate is low:  less than one-third complete the surveys.  Because U.S. News surveys only large firms, the survey is also dramatically skewed towards the Northeast (especially New York City):  schools that have large alumni contingents in New York City perform, shall we say, suspiciously well in this survey by comparison to schools that are otherwise comparable.  Schools have little control over the results of this survey.

  3. 12.5% of the overall score is based on the median LSAT score.  Highly Manipulable.  This criterion is one of many that favor small schools.  Consider:  a school that enrolls 180 students each year needs to recruit only 90 with an LSAT of, say, at least 164 in order to report a strong median LSAT.  A school that enrolls 450 each year, by contrast, will need to recruit 225 students (two and a half times as many) with that LSAT to report the very same median; the sketch following item (6), below, makes this arithmetic concrete.  Note also that U.S. News has no way of verifying the data reported by private schools, since the American Bar Association does not collect median LSAT data, only data about the 25th and 75th percentiles.  So this factor is highly manipulable by the schools.

  4. 12% of the overall score is the employment rate 9 months after graduation.  Highly Manipulable.  This data is entirely self-reported by the schools and should be treated as essentially fiction:  it may have elements of truth, but basically it is a work of the imagination.  Schools report it, and U.S. News has no way of checking.  In addition, we know nothing about the nature of the employment:  it could simply be a job as a research assistant, which is how Northwestern employed its unemployed graduates a few years ago.

  5. 10% of the overall score is based on the median GPA of the entering class.  Highly Manipulable.  See the discussion in (3), above.  Note, too, that the feeder schools for a particular law school will have a significant effect on this criterion.  Example:  schools that draw on the "grade-inflated" Ivy League have it easier than those that draw on universities with less rampant grade inflation.

  6. 9.75% of the overall score is the average per capita expenditure, for the current and prior year, on instruction, library, and supporting services.  Highly Manipulable.  This is the figure that is adjusted for differences in cost of living.  Once again, schools self-report the data.  This criterion, along with (3) and (5), gives a huge boost to small schools, since per capita measures penalize economies of scale.  This explains how, in many years (including 2003), Harvard can have higher reputation scores than Yale, yet Yale will come out 1st and Harvard 3rd:  Harvard is three times the size, and that makes all the difference.  (The sketch following item (12), below, works through a stylized version of this effect.)
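
The recruiting arithmetic behind the small-school advantage in (3) and (5) can be made concrete.  A minimal sketch in Python, assuming the reported median reaches a target score once half the class (rounded up) is at or above it:

    import math

    def recruits_needed(class_size):
        """Minimum number of matriculants at or above a target LSAT for
        the reported median to reach that target: half the class."""
        return math.ceil(class_size / 2)

    for size in (180, 450):
        print(f"class of {size}: must land {recruits_needed(size)} students "
              f"at or above the target score")
    # class of 180: must land 90 students at or above the target score
    # class of 450: must land 225 students at or above the target score

The larger school must win two and a half times as many recruiting battles just to report the same median.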

    The preceding are the six major factors making up a school's overall score in U.S. News.  Together they account for 84.25% of the overall score.  Four of these factors are highly manipulable, and three favor small schools.  The remaining six factors (which account for just 15.75% of the overall score) are as follows:

  7. 6% of the overall score is the employment rate at graduation.  Highly Manipulable.  See the discussion in (4), above.

  8. 3% of the overall score is the student-teacher ratio.  The ABA collects data on this, and so does U.S. News, but there are often discrepancies, which U.S. News appears to let slide.  So the manipulability of this category is unclear, but it seems high:  much depends on how schools "count" their faculty.

  9. 2.5% of the overall score is the acceptance rate for students.  Highly Manipulable.  As with (3) and (5), U.S. News has no way of verifying the data reported by private schools.  In addition, many schools inflate their "selectivity" by giving fee waivers to applicants who have no chance of getting in.  NYU is reported to have pioneered this practice, but many others have followed suit.

  10. 2% of the overall score is the bar passage rate, adjusted to reflect the average pass rate in the major jurisdiction where the school's students take the exam.  Not Manipulable.

  11. 1.5% of the overall score is the average per capita expenditure, for the current and prior year, on everything OTHER than instruction, library, and supporting services; this includes utilities, financial aid, and the like.  Highly Manipulable.  As with (6), this criterion favors small schools.  Stories abound about schools that, via little accounting changes here and there, boost their rank in this category astronomically.

  12. 0.75% of the overall score is the total number of volumes in the library.  Not Manipulable.  Schools report this data to the ABA, which means it is checkable.  (Schools that might lie to U.S. News are unlikely to lie to the ABA.)
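
To see how the per capita categories can let a smaller school overtake one with higher reputation scores, as in the Yale/Harvard example under (6), here is a stripped-down composite built from just two of the twelve factors.  The normalization, the dollar scale cap, and every input figure are assumptions for illustration; U.S. News publishes neither its raw inputs nor its exact standardization:

    # Stripped-down composite: reputation (25%) and per capita
    # expenditures (9.75%) only, reweighted to sum to 1.  All inputs
    # are hypothetical, not actual school data.
    W_REP = 0.25 / 0.3475     # relative weight of reputation
    W_PPC = 0.0975 / 0.3475   # relative weight of per capita expenditures

    schools = {
        # comparable budgets, very different enrollments
        "Big School":   {"reputation": 4.9, "spending": 90_000_000, "students": 1650},
        "Small School": {"reputation": 4.8, "spending": 60_000_000, "students": 600},
    }

    for name, s in schools.items():
        rep_norm = (s["reputation"] - 1) / 4   # map the 1-5 survey scale to 0-1
        ppc = s["spending"] / s["students"]    # per capita expenditures
        ppc_norm = min(ppc / 120_000, 1.0)     # assumed dollar scale cap
        score = W_REP * rep_norm + W_PPC * ppc_norm
        print(f"{name}: per capita ${ppc:,.0f}, composite {score:.3f}")
    # Big School: per capita $54,545, composite 0.829
    # Small School: per capita $100,000, composite 0.917

Despite the lower reputation score, the small school wins the composite on per-student spending alone, which is exactly the Yale-over-Harvard pattern described above.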

    Even putting aside that this formula, with its various weightings, is impossible to rationalize in any principled way, the really striking facts about the U.S. News methodology are surely these:

    1. More than half of the final score (over 54%) is based on criteria that the schools themselves can manipulate, either through outright (and undetectable) deceit or through other devices (giving fee waivers to hopeless applicants, employing graduates in temp jobs to boost employment stats, etc.); see the tally below.

    2. More than one-third of the final score (33.75%) is based on criteria that favor small schools and penalize large schools.
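
Both totals can be checked directly against the twelve weights listed above.  A quick tally in Python (factor names abbreviate the descriptions above):

    weights = {
        "academic reputation": 25.0, "lawyer/judge reputation": 15.0,
        "median LSAT": 12.5, "employment at 9 months": 12.0,
        "median GPA": 10.0, "per capita expenditures": 9.75,
        "employment at graduation": 6.0, "student-teacher ratio": 3.0,
        "acceptance rate": 2.5, "bar passage": 2.0,
        "other expenditures": 1.5, "library volumes": 0.75,
    }
    manipulable = ["median LSAT", "employment at 9 months", "median GPA",
                   "per capita expenditures", "employment at graduation",
                   "acceptance rate", "other expenditures"]
    small_school = ["median LSAT", "median GPA",
                    "per capita expenditures", "other expenditures"]

    assert sum(weights.values()) == 100.0
    print(sum(weights[k] for k in manipulable))    # 54.25
    print(sum(weights[k] for k in small_school))   # 33.75

(The student-teacher ratio, whose manipulability is unclear, is left out of the manipulable tally; counting it would push the figure to 57.25%.)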

Reread the U.S. News rankings with these two pertinent facts in mind, and a lot that looks academically indefensible about the results (Chicago behind Columbia, Penn ahead of Berkeley, Duke ahead of Georgetown and Texas) may begin to make sense.
