Brian Leiter's Law School Rankings: The Criteria, 2000-02

The Criteria

The final rank of a law school is based on its performance in three categories:

Faculty Quality (70% of final rank): the rank in this category is based on three criteria: scholarly productivity, scholarly impact of faculty work, and reputation.  More precisely, 25% of the rank is based on the per capita rate of publication, for the period 1998 through summer 2000, of:

  1. articles in the ten most frequently cited student-edited law reviews (Yale Law Journal, Harvard Law Review, Stanford Law Review, University of Chicago Law Review, Columbia Law Review, Michigan Law Review, California Law Review, University of Pennsylvania Law Review, and Texas Law Review, plus New York University Law Review, which is less often cited but benefits in prestige from being affiliated with a top law school);

  2. articles in ten leading peer-edited law journals (Administrative Law Review, American Journal of Comparative Law, Constitutional Commentary, Environmental Law, Journal of Legal Studies, Law & Contemporary Problems, Law & Social Inquiry, Legal Theory, and Tax Law Review);

  3. books from the three leading law publishers (Aspen, Foundation, West); and

  4. books from the six leading academic presses in law (Cambridge, Chicago, Harvard, Oxford, Princeton, Yale).

As in the study that appeared in the Journal of Legal Studies, point values for articles were halved in the case of in-house publications.
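
To make the publication measure concrete, here is a minimal sketch (in Python) of how per capita publication points might be tallied. The point values, and the reading of "in-house" as an article placed in the author's own school's journal, are illustrative assumptions; the essay does not specify the exact point scale.

    # A minimal sketch of the per capita publication measure described above.
    # The point values, and the reading of "in-house" as an article in the
    # author's own school's journal, are illustrative assumptions.

    ARTICLE_POINTS = 1.0   # hypothetical value per qualifying article
    BOOK_POINTS = 2.0      # hypothetical value per qualifying book

    def publication_points(faculty_records, faculty_size):
        """faculty_records: list of dicts like {"kind": "article" or "book", "in_house": bool}."""
        total = 0.0
        for rec in faculty_records:
            if rec["kind"] == "article":
                pts = ARTICLE_POINTS
                if rec.get("in_house"):
                    pts /= 2               # in-house article points are halved
            else:
                pts = BOOK_POINTS
            total += pts
        return total / faculty_size        # per capita rate

    # Example: 25 outside articles, 5 in-house articles, 4 books, 40-person faculty.
    records = ([{"kind": "article", "in_house": False}] * 25
               + [{"kind": "article", "in_house": True}] * 5
               + [{"kind": "book", "in_house": False}] * 4)
    print(publication_points(records, 40))  # -> 0.8875 points per faculty member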

Another 25% of the faculty quality rank is based on the per capita rate of scholarly impact for the top quarter of each faculty, measured by citations to faculty work in the Westlaw JLR database as of August 2000.  Finally, 50% of the faculty quality rank is based on the school's subjective academic reputation, as measured by a fall 1999 survey of academics conducted by U.S. News & World Report.
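
Putting the three components together, the faculty quality score is a 25/25/50 weighted combination. The essay states the percentages but not whether the components are combined as normalized scores or as sub-ranks, so the sketch below assumes each component has first been put on a common 0-to-1 scale.

    # The 25/25/50 weights come from the text; normalizing each component to a
    # 0-to-1 scale before combining them is an assumption made for illustration.
    def faculty_quality(publication_rate, citation_impact, reputation):
        return (0.25 * publication_rate    # per capita publications, 1998-2000
                + 0.25 * citation_impact   # per capita citations, top quarter of faculty
                + 0.50 * reputation)       # U.S. News academic reputation survey

    print(faculty_quality(0.80, 0.70, 0.90))  # -> roughly 0.825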

Each measure of faculty quality has advantages and limitations, but together they promise to present an informative picture.  The rationale for the particular weightings, and the details of the study methodology, can be found in "Measuring the Academic Distinction of Law Faculties," op. cit.

Student Quality (30% of final rank): the rank in this category is based on data collected by the American Bar Association on student credentials for 1999 for the 75th and 25th percentiles of the entering class.  The EQR (Educational Quality Rankings) employs the U.S. News formula, except that it gives more weight to the LSAT:  60% of the score is based on the 75th/25th percentile LSAT, and 40% on the 75th/25th percentile GPA.  This runs the risk of penalizing state schools and schools with aggressive alternative admissions procedures, but student comments over the years convinced me that looking at the 75th and 25th percentiles presents a more realistic portrait of the student body as a whole.
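
The 60/40 weighting can be written as a short formula. In the sketch below, averaging the 75th and 25th percentile figures, the normalization ranges, and the example numbers are all illustrative assumptions; the essay specifies only the 60% LSAT / 40% GPA split.

    # Sketch of the 60% LSAT / 40% GPA student quality weighting described above.
    # Averaging the 75th and 25th percentile figures, and the normalization
    # ranges (LSAT 120-180, GPA 0-4.0), are assumptions for illustration.
    def student_quality(lsat_75, lsat_25, gpa_75, gpa_25):
        lsat_score = ((lsat_75 + lsat_25) / 2 - 120) / 60   # map to 0-1
        gpa_score = ((gpa_75 + gpa_25) / 2) / 4.0           # map to 0-1
        return 0.60 * lsat_score + 0.40 * gpa_score

    print(round(student_quality(165, 160, 3.8, 3.4), 3))  # -> roughly 0.785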

Even the data on the 75th and 25th percentiles, however, skews the picture in some respects (though I've no idea how to get better data or solve this problem).  For example, at Texas the 75th percentile LSAT is 165, yet 17% of the class (those at or above the 83rd percentile) scored 168 or higher on the LSAT, an unusually large gap between the 75th percentile and the 83rd.  Because Texas is so much larger than most peer schools (Harvard and Georgetown excepted), this means there are roughly 82 students in the class with scores of 168 or higher.  Contrast this with smaller schools that report higher 75th percentile LSATs but have actually attracted fewer students with the highest scores:  Northwestern and Duke have only about 52 students with scores of 168 or higher, while Cornell and Vanderbilt have fewer than 50 students with scores of 166 and 165 or higher, respectively.  For obvious reasons, it is easier to boost the 75th percentile numbers at smaller schools than at larger ones, but those numbers may not present the most accurate portrait of the quality of the student body (contrast, for example, the ranking of schools by placement of graduates as clerks on the U.S. Supreme Court, below).
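
The head-count point is simple arithmetic: class size times the fraction of the class at or above the cutoff. A quick check, using a Texas class size of roughly 480 (inferred from the figures above) and an approximate class size for the smaller schools:

    # Rough check of the head-count comparison above: a large class can contain
    # more top scorers than a smaller class with a higher 75th percentile LSAT.
    # The class sizes are approximations used only for illustration.
    def students_at_or_above(class_size, fraction_at_or_above):
        return round(class_size * fraction_at_or_above)

    print(students_at_or_above(480, 0.17))   # Texas: roughly 82 students at 168+
    print(students_at_or_above(208, 0.25))   # a smaller school: roughly 52 at its cutoff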

Teaching Quality is based on several years' worth of Princeton Review Surveys of Student Satisfaction with Teaching; although the category is important, the available data is crude, and so it is used only to give "extra credit" to strong teaching faculties.  Nine schools have quite consistently gotten high marks for teaching quality in these surveys year after year:  Boston University, University of Chicago, University of Notre Dame, University of Texas, Cornell University, College of William & Mary, University of Virginia, Vanderbilt University, and Washington & Lee University.  These schools were, accordingly, pushed ahead one rank based on teaching quality.
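
In computational terms, the "extra credit" is just a one-rank improvement for the schools on the list. A minimal sketch, with the school names shortened for readability:

    # The teaching quality "extra credit": listed schools move up one rank.
    TEACHING_BONUS_SCHOOLS = {
        "Boston University", "Chicago", "Notre Dame", "Texas", "Cornell",
        "William & Mary", "Virginia", "Vanderbilt", "Washington & Lee",
    }

    def apply_teaching_bonus(school, rank):
        # A rank of 1 is best, so being "pushed ahead one rank" subtracts 1.
        return max(1, rank - 1) if school in TEACHING_BONUS_SCHOOLS else rank

    print(apply_teaching_bonus("Vanderbilt", 17))  # -> 16
    print(apply_teaching_bonus("Harvard", 2))      # -> 2 (no bonus)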

Faculty quality is given more weight than student quality because (1) it is the traditional measure of the academic caliber of an institution, (2) it correlates more reliably with reputation and prestige than any other factor, (3) it is less likely to produce a ranking that becomes a self-fulfilling prophecy, and (4) some schools have notorious reputations for boosting the numerical credentials of the student body artificially (e.g., by admitting the Phys Ed majors with 4.0 GPAs and the like, or by making LSAT-driven admissions decisions).  The faculty quality measure is more sensitive to actual faculty quality and actual changes in faculty quality.  Student quality largely tracks perceived prestige--hence the self-fulfilling prophecy aspect if rankings weight it heavily.  Schools historically favored by U.S. News because of that magazine's use of criteria that reward small, private institutions typically have far stronger student bodies than measures of faculty quality would predict (see, for example, the results for Duke and Washington & Lee, below).

It would be useful to be able to include data on reputation among practitioners.  Unfortunately, no remotely reliable data exists.  Because practitioner reputation is much more regional than academic reputation, any reputational survey that is not geographically balanced in very careful ways will produce meaningless results.  (The U.S. News editors have admitted to me in discussion that their reputational surveys of practitioners are not geographically balanced.)

The final rank is based on the sum of each school's rank in the categories above, weighted in proportion to the percentages given (70% faculty quality, 30% student quality); the school with the lowest sum is ranked first, and so on.  Statistically insignificant differences are treated as ties.
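
Finally, a minimal sketch of the aggregation step: combine each school's category ranks in a 70/30 weighted sum, order schools from lowest sum to highest, and let schools whose sums are nearly identical share a rank (the teaching quality bump would be applied as in the earlier sketch). The tie tolerance and the example data are hypothetical, since the essay does not quantify "statistically insignificant."

    # Sketch of the final aggregation: a 70/30 weighted sum of each school's
    # faculty quality and student quality ranks, lowest sum ranked first, with
    # nearly identical sums treated as ties. Tolerance and data are hypothetical.
    TIE_TOLERANCE = 0.15

    def final_ranking(category_ranks):
        """category_ranks: {school: (faculty_quality_rank, student_quality_rank)}."""
        sums = {s: 0.70 * f + 0.30 * st for s, (f, st) in category_ranks.items()}
        ordered = sorted(sums, key=sums.get)
        ranks, last_sum, last_rank = {}, None, 0
        for i, school in enumerate(ordered, start=1):
            if last_sum is not None and sums[school] - last_sum < TIE_TOLERANCE:
                ranks[school] = last_rank      # insignificant gap: treat as a tie
            else:
                ranks[school] = last_rank = i
                last_sum = sums[school]
        return ranks

    print(final_ranking({"A": (1, 4), "B": (2, 2), "C": (3, 1)}))
    # -> {'A': 1, 'B': 1, 'C': 3}  (A and B tie; the next school is ranked third)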
