Brian Leiter's Law School Rankings
July 2005

This is a ranking of the top 30 law faculties based on a standard "objective" measure of scholarly impact: per capita citations to faculty scholarship. We looked only at the top quarter of each faculty, largely for logistical reasons (it made the study more manageable), but partly because the scholarly standing of a school depends more on its best faculty than on its average faculty.

Impact was measured using Westlaw's JRL database rather than TP-ALL, since the latter includes on-line versions of treatises (for example, Wright & Miller on Federal Practice & Procedure) and thus would artificially inflate the counts for schools at which these treatise authors teach. Names were searched as "Brian /2 Leiter", except where multiple middle initials or similar factors made a wider search necessary. To guard against false positives with common names, ten to twenty of the "hits" were reviewed; the percentage that were false positives was then multiplied by the total number of hits returned, and that amount was subtracted from the citation total. To fix the top quarter it was, of course, necessary to search one-third to one-half of the faculty. In response to feedback on the 2003 study, citation counts were halved for part-time faculty, so that this year's study is a more accurate measure of the full-time faculty's scholarly impact. The citation counts were completed over the course of several days in early July 2005, roughly two years since the last study; because all the counts were collected within the same short window, there was no need to adjust them for changes in the size of the database.

Impact as measured by citations has important limitations as a proxy for scholarly reputation, and it is worth noting those in detail before going further. Why might the correlation between impact and actual academic quality break down? My colleague Richard Markovits aptly summarizes some of the problems:
Although Professor Markovits leaps too quickly to his conclusion, he has certainly identified a genuine worry about the use of citations. Indeed, we might identify six kinds of phenomena at work here which skew the correlation between citation and quality.

First, there is the industrious drudge: the competent but uninspired scholar who simply churns out huge amounts of writing in his or her field. Citation practices of law reviews being what they are, the drudge quickly reaches the threshold level of visibility at which one is obliged to cite his or her work in the obligatory early footnotes of any article in that field. The work is neither particularly good, nor especially creative or groundbreaking, but it is there and everyone knows it is there and it must be duly acknowledged.

Second, there is the treatise writer, whose treatise is standardly cited because, like the output of the drudge, it is a recognized reference point in the literature. Unlike the drudge, the authors of leading treatises are generally very accomplished scholars, but with the devaluation of doctrinal work over the past twenty years, an outstanding treatise writer (with a few exceptions) is not necessarily highly regarded as a legal scholar.

Third, there is the "academic surfer," who surfs the wave of the latest fad to sweep the legal academy, and thus piles up citations because law reviews, being creatures of fashion, give the fad extensive exposure. Any study counting citations, depending on when it is conducted, runs the risk of registering the "impact" of the fad in disproportion to its scholarly merit or long-term value or interest.

Fourth, there is work that is cited because it constitutes "the classic mistake": some work is so wrong, or so bad, that everyone acknowledges it for that reason. The citation and organizational preferences of student-edited law reviews exacerbate this problem. Since the typical law-review article must first reinvent the wheel, by surveying what has come before, the classic mistake will earn an obligatory citation in article after article in a particular field, even though the point of the article may be to show how wrong the classic mistake is. True, some authors of classic mistakes may have excellent reputations; but who among us aspires to be best remembered for a "grand" mistake?

Fifth, citation tallies are skewed towards more senior faculty, so that faculties with lots of "bright young things" (as the Dean of one famous law school likes to call top young scholars) won't fare as well, while faculties with once-productive dinosaurs will.

Sixth, citation studies are highly field-sensitive. Law reviews publish lots on constitutional law, and very little on tax. Scholars in the public law fields or who work in critical theory get lots of cites; scholars who work on trusts, comparative law, and legal philosophy do not.

So for all these reasons, one would expect scholarly impact to be an imperfect measure of scholarly quality. But an imperfect measure may still be an adequate measure, and that is almost certainly true of citation rates as a proxy for impact, and of impact as a proxy for reputation or quality.

The overall ranking was based not only on the mean per capita impact for the top quarter of each faculty, but also on the median per capita impact.
This was to guard against the distorting effect of having one or two faculty with enormously high citation counts on an otherwise low-cited faculty. (The data on mean and median per capita impact follow the overall ranking.) The final score, which is the basis for the "overall" ranking that follows, is the sum of the normalized scores for mean and median per capita impact, divided by two (a short illustrative sketch of this arithmetic appears below, after the interpretive comments).

Interpretive Comments on the Results:

The University of Chicago Law School continues to lead everyone else in per capita citations, despite the fact that only half the citation totals for Frank Easterbrook and Richard Posner were counted (since they are both part-time at the Law School). Indeed, Chicago would lead everyone in per capita citations even without counting Judges Easterbrook and Posner at all. Chicago not only has two of the three most frequently cited full-time law professors in America (Richard Epstein and Cass Sunstein; the third is Laurence Tribe at Harvard), it also has a host of faculty who weren't in the top quarter, but are still in the elite group of faculty cited in more than 1,000 articles (for example, Albert Alschuler, Douglas Baird, Saul Levmore, Martha Nussbaum, Eric Posner, and David Strauss).[1]

The other noticeable changes since 2003 all have tangible explanations: Duke added Erwin Chemerinsky from Southern California and Curtis Bradley from Virginia; UC Hastings added Geoffrey Hazard from Penn and Joan Williams from American; Penn, in addition to losing Hazard (who retired in December 2005) to Hastings, lost another of its most-cited faculty members, Edward Rubin, to the Deanship at Vanderbilt (Penn is also hurt by the fact that many of its "top" faculty are mid-career folks, whose citation counts are necessarily lower; only two faculty, C. Edwin Baker and Paul Robinson, have garnered citations in more than 1,000 articles, for example); Cornell lost its most-cited faculty member, Jonathan Macey, to Yale; Texas added Bernard Black from Stanford; Michigan saw Yale Kamisar retire; Miami lost John Hart Ely; Illinois added two new faculty who are now in the top quarter for citations, David Hyman and Larry Solum; and so on.

Again, do bear in mind that citations are only one measure of faculty quality, with some well-known limitations noted above. I very much doubt that in a new reputational survey of leading legal scholars, like the one we conducted in 2003, Duke would turn up in the top ten (though it would surely, and deservedly, place better than its 17th-place showing in 2003), or Berkeley in the top five, or Penn outside the top 20 (or top 15, for that matter). However, if one compares the 2003 reputation data (the last column, below) to the 2003 per capita citation results, one can see that there is some reasonably good correlation between the two, with some notable exceptions. So a significant improvement, or decline, in per capita citation rank between 2003 and 2005 is likely to be reflected in some change in reputation rank as well.

The Corrected Version (April 2006) adjusted primarily for a handful of errors of omission (at Pittsburgh, George Washington, William & Mary, and BU), that is, faculty who were in the top quarter of a particular school's citation counts but had not been properly credited in the July 2005 version. In one additional case, Vanderbilt, the top quarter was wrongly calculated (13 instead of 11).
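To make the scoring arithmetic concrete, here is a minimal sketch in Python of the procedure as described above: discount raw hit counts by the sampled false-positive rate, halve the counts of part-time faculty, take the mean and median over the most-cited quarter of each faculty, normalize each measure, and average the two. Everything in it (the function names, the faculty sizes, the citation figures, and the choice to normalize so that the leading school scores 100) is an illustrative assumption, not the study's actual data or its method of normalization.

    # Illustrative sketch only: invented numbers, not the study's data or software.
    from statistics import mean, median

    def adjusted_count(raw_hits, false_positive_rate, part_time=False):
        """A citation count after the false-positive discount and part-time halving."""
        count = raw_hits - false_positive_rate * raw_hits
        return count / 2 if part_time else count

    def top_quarter(counts):
        """The most-cited quarter of a faculty (at least one person)."""
        k = max(1, len(counts) // 4)
        return sorted(counts, reverse=True)[:k]

    def overall_scores(school_counts):
        """Average of mean and median per capita impact, each normalized to the leader (=100)."""
        means = {s: mean(top_quarter(c)) for s, c in school_counts.items()}
        medians = {s: median(top_quarter(c)) for s, c in school_counts.items()}
        best_mean, best_median = max(means.values()), max(medians.values())
        return {s: round((100 * means[s] / best_mean
                          + 100 * medians[s] / best_median) / 2, 1)
                for s in school_counts}

    # 800 raw hits with a 10% sampled false-positive rate count as 720; a part-time
    # judge's 2,000 hits count as 900 after the same discount and halving.
    print(adjusted_count(800, 0.10), adjusted_count(2000, 0.10, part_time=True))

    # Two hypothetical 12-person faculties (top quarter = 3 people). School B's single
    # superstar inflates its mean, but the median keeps its overall score down.
    example = {
        "School A": [2400, 1900, 1100, 900, 700, 650, 500, 450, 300, 250, 200, 150],
        "School B": [3500, 800, 700, 600, 500, 450, 400, 300, 250, 200, 150, 100],
    }
    print(overall_scores(example))  # School A: 100.0, School B: roughly 67
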
Because it has become clear that too many readers take the "top quarter" in citations to be equivalent to the "top quarter of the faculty," I am no longer printing the list of faculty in the top quarter in this version. I may print it in future versions, so that there are opportunities for corrections, as there were on this occasion.

OVERALL TOP 30 BASED ON MEAN AND MEDIAN PER CAPITA IMPACT JULY 2005
TOP 30 BASED ON MEAN PER CAPITA IMPACT
TOP 30 BASED ON MEDIAN PER CAPITA IMPACT
Remember, as noted, that citations are field-sensitive: constitutional law, Critical Race Theory, feminist legal theory, international law, intellectual property, and law and economics, among others, are all high-citation fields; schools strong in those areas will fare better than those whose strengths lie elsewhere. By contrast, tax, wills & estates, property, admiralty, legal philosophy, labor law, and comparative law are much lower-citation fields; schools with substantial strengths in those areas will not, in virtue of those strengths, fare well in a study like this. Citation counts are also seniority-sensitive: it is hard for younger scholars to break into the top quarter of their faculty in citations, and easier for someone who has been publishing for 20 years.

In the earlier version I had listed the top quarter of most-cited faculty; this was helpful in terms of eliciting corrections, but pernicious insofar as these lists were interpreted as identifying the "top faculty" at the respective schools, which they manifestly did not, because of the problems with citation counts noted above. In lieu of listing individual faculty, let me note that every faculty member in the top quarter at Chicago, Yale, Harvard, Stanford, and NYU was cited in more than 1,000 articles (as of July 2005). Here are the numbers of faculty at other schools cited in at least 1,000 articles (as of July 2005): Columbia (13); Georgetown (12); Berkeley (9); Duke (4); Michigan (4); Texas (4); UCLA (4); Virginia (4); Emory (3); George Washington (3); Northwestern (3); Brooklyn (2); BU (2); Chicago-Kent (2); Cornell (2); Hastings (2); Illinois (2); Minnesota (2); Ohio State (2); Penn (2); San Diego (2); Vanderbilt (2); Arizona (1); Arizona State (1); Colorado (1); George Mason (1); Iowa (1); North Carolina (1); Pittsburgh (1); Southern California (1); William & Mary (1).

[1] After the study was completed, we learned that Professor Alschuler will take early retirement at Chicago and move to Northwestern for 2006. This would not affect the results for Chicago, since Professor Alschuler was not in the top quarter, but would boost Northwestern a notch, tying or perhaps passing Cornell (obviously this result might be affected by other changes between now and 2006).