One Pill Makes You Larger: Flaws in Sisk’s Westlaw Methodology Illustrated with Leiter’s Citations



J.B. Heaton


The Sisk-Leiter rankings of scholarly impact use a Westlaw search to determine a scholar’s citation count. However, the search does not review the hits to determine whether they are actually citations to the scholar’s work rather than something else: blog posts (whether authored by the scholar or by another on the scholar’s eponymous blog), citations to works by others in books the scholar edited, citations to the work of other scholars who merely mention the scholar under study (such as a cite to a review of the scholar’s book rather than to the book itself), media mentions, or author acknowledgments for comments not eliminated by the search term, such as those appearing in footnotes or the body of the article. I use citations to Brian Leiter’s work to show that the Sisk-Leiter Westlaw citation count is overstated by about 40% in Leiter’s case, with only 398 of 557 attributed hits being citations to Leiter’s academic work. While Leiter’s count may be more upwardly biased than others’ because of his popular industry blog, the fact remains that media mentions, citations to the works of others in edited volumes, and citations to works that discuss the scholar under study without citing that scholar’s work directly introduce a bias of unknown size that cannot easily be assumed away. Moreover, by ignoring cites to scholarly work in judicial opinions, the method as applied by Sisk undervalues the impact of scholars in fields of more practical importance and, therefore, likely understates the scholarly impact and ranking of faculties with strong scholars in antitrust, bankruptcy, and corporate and securities law. A combination of Westlaw (with judicial citations) and Google Scholar would provide more reliable results.


Scholars are fascinated by citation counts. A high citation count to a single work is strong evidence that the work has been found thought-provoking and worthy of recognition or response, in either case suggesting its ideas cannot easily be ignored. A high citation count to a scholar’s larger body of work is strong evidence that the scholar has interesting things to say again and again, even if no one work is an unquestionable hit. Matters get more tenuous, of course, when we attempt to ascribe the works of scholars to their employment collective, like a department or a law-school faculty. Faculties don’t write articles and books; people do. Nevertheless, the human need for group belonging and the ever-present “us versus them” mentality encourages efforts to identify boundaries – easy to do with a school or a department – and then generate arguments to set that group apart in some way that generates pride and group identity versus others. These efforts often center around a star individual – the Patriots have Tom Brady, the Republicans have Trump, the physicists have Einstein, and so on – but the idea of forming group identity around one or a handful of stars is, in a sense, to average the achievements of the group’s best members across its less distinguished ones and then claim some form of worthy collective distinction over other groups.

This all-too-human tendency can show up in the strangest of places, and a law faculty is one of them. In a fascinating first study, Brian Leiter, then of the University of Texas Law School, ranked law faculties by “academic distinction.” 1 By measuring the output of the top quarter of scholars at a school, and the citations to that output – not, in this sense, even pretending that the rest mattered as much – Leiter generated a ranking of law schools that placed Chicago at the top (this was as of the late 1990s, and as an attendee at that time, I can’t disagree) and his own Texas tied for 11th. 2

Here, I offer a modest critique of this exercise as it has developed. As a former practitioner, I have had (much too) much experience with two things: legal research on Westlaw, and electronic discovery – in particular, the negotiation and renegotiation of search terms as part of the civil-discovery process. When I first came across the Leiter study and the more recent efforts of Gregory Sisk and colleagues, 3 I was struck by the descriptions they gave of their search process. They did little more, it seemed, than conduct Westlaw searches and then do some work to ensure that the “John Smith” law professor they were researching was the “John Smith” showing up in the Law Reviews & Journals portion of the Westlaw Secondary Sources database. 4 This was likely to bias the citation count upward, and likely to do so in ways that could not be dismissed as having no influence on the outcome. To test this conjecture, I decided to use Brian Leiter as a test case and explore the implications of my results. 5 This short essay is the report of that effort.

I. Leiter’s Westlaw Citations

The latest version of the Sisk study became available in August 2018. 6 Professor Leiter announced it on his popular blog. 7 Chicago still looks good in the study, though now only third instead of first. I repeated the Sisk search on August 15, 2018. 8 Because a few months had passed since the Sisk search, and new journals or articles may have been added in the interim, I got four additional hits, for a total of 557. I then went through each of the 557 entries to find the Brian Leiter “hit.” Here are the results.

In each case, the citing article is ascribed to the highest-value category, so the total is 557. That is, an article that cites a Leiter book directly but also cites a blog post is treated as a cite to a Leiter book, not as a cite to a Leiter blog post.
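For readers who prefer the assignment rule stated precisely, a minimal sketch follows. The category labels and their priority ordering are my own illustration of the rule described above, not Sisk’s terminology:

```python
# Single-count rule: each citing article is assigned to the one
# highest-value category it matches, so the category totals sum to 557.
# Category names and priority order are illustrative labels, not Sisk's.
PRIORITY = [
    "academic work",     # a book or article authored by Leiter
    "edited volume",     # work by another author in a Leiter-edited book
    "indirect mention",  # cite to scholar B, who in turn cites Leiter
    "blog post",         # Leiter's own blog posts
    "guest blog post",   # others' posts on the eponymous blog
    "media mention",
]

def classify(matched_categories):
    """Return the highest-value category among those an article matches."""
    for category in PRIORITY:
        if category in matched_categories:
            return category
    raise ValueError("no recognized category")

# An article citing both a Leiter book and a Leiter blog post counts
# once, as a cite to the book:
print(classify({"blog post", "academic work"}))  # academic work
```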

As conjectured, because the Sisk-Leiter methodology does not review the citations its search returns, a large number of hits are not cites to the scholar’s scholarly work. Leiter is one of the most prominent bloggers on the law-school business, with entertaining pieces such as “David Segal’s Hatchet Job on Law Schools,” 9 “Four Changes to the Status Quo in Legal Education That Might Be Worth Something,” 10 and “On So-Called ‘Empirical Legal Studies’ and Its Problems,” 11 among many, many others. Because these posts are gossipy and entertaining and address issues in legal education, they make their way into the work of others: 97 of the 557 hits, by my count, or 17%, are Leiter blog posts. As Leiter is one of the most active bloggers in his industry (certainly the most fun to read), this is a pure upward bias that shifts his count up relative to his peers in a way not directly connected to his academic work.

But matters are more problematic still because Leiter’s blog is an eponymous one, and other people like to write for it. In fact, 6 of Leiter’s Westlaw citations are to blog posts by other people that show up because Leiter’s blog is called “Brian Leiter’s Law School Report.” 12 This, again, is a pure shift of citations relative to peers without eponymous blogs, but it is pretty small potatoes.

Then come false hits that are harder to judge relative to other scholars. Leiter has media mentions (for his blog topics) [3 of them] and blog posts on sites other than his own, including the Huffington Post and Legal Affairs [8 of them]. The toughest categories are cites to works in books that Leiter edited or co-edited, but not to Leiter’s own work in the book [29 of the 557], and citations to the work of other scholars that merely mention Leiter – usually because citing scholar A cites the book review or article of scholar B, who cites Leiter, while scholar A does not cite Leiter directly [11 of them]. The last category is especially problematic because a citation study will give credit both to the cited scholar (if she makes her way into the Sisk sample) and to Leiter, though nothing Leiter wrote is being cited directly. A book review of a Leiter book (such as the provocative Why Tolerate Religion?, which I ordered from Amazon after reading so much about it in this research) 13 certainly counts as a cite for Leiter. But when another scholar then cites that book review and not Leiter directly, relying on the observations of the reviewer, Leiter gets a freebie in his citation count. This problem cannot be waved away as likely to sort itself out in the wash. A popular chapter in an edited book, or a book review that is better than the book, may inflate the citation count of an editor-scholar or book-author.

In sum, we can say that Leiter’s Sisk-Leiter count is overstated by about 40%, with only 398 of 557 attributed hits being citations to Leiter’s academic work. This is a low signal-to-noise ratio, and it is completely curable by more careful study by Sisk et al. Is Leiter’s bias higher than others’, so as to bias the rankings? Probably. But although we can be fairly confident that the blogging effect is worse for Leiter, we cannot be sure what benefits others derive, because the methodology is simply too sloppy to be reliable. As a first exploration, I performed the same search for Omri Ben-Shahar, Professor Leiter’s colleague and a highly cited contracts scholar. I examined the top 100 “most cited” hits as a rough randomization, since there is no reason to believe an article is more highly cited simply because Omri Ben-Shahar’s name appears in it. Only 4 of the 100 Ben-Shahar hits were false, in each case a cite to a book he edited where the work cited was not his own. The false-hit rate for Eric Posner was lower still: 2 of 100 sampled, in both cases cites to an article he authored.
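The overstatement arithmetic is easy to verify directly. A quick sketch, using only the counts reported above (note that “overstated by 40%” measures inflation relative to the true count of 398, not to the raw total of 557):

```python
# Verify the overstatement figure from the counts reported in the text.
total_hits = 557   # Westlaw hits for Leiter, August 2018 search
true_cites = 398   # hits that are actually cites to Leiter's academic work

false_hits = total_hits - true_cites       # 159 false hits
overstatement = false_hits / true_cites    # inflation relative to true count

print(f"{false_hits} false hits; count overstated by {overstatement:.0%}")
# For comparison, the sampled false-hit rates were 4/100 (Ben-Shahar)
# and 2/100 (Posner).
```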

II. A Note on the Omission of Citations in Judicial Opinions

Others have argued that the Sisk-Leiter method is problematic because it is too labor-intensive and omits citations in judicial opinions. 14 That is a bit off the mark: as we have seen, the Sisk-Leiter method is not labor-intensive enough. That said, the omission of cites in judicial opinions to a scholar’s work is a real issue. Even Leiter – whose field of law and philosophy is not especially amenable to judicial citation – has three citations in judicial opinions, though outside the 2013–17 time frame at issue in the latest Sisk study. 15 For many, the omission of citations to scholarly work in judicial opinions is a serious flaw in the methodology. 16 Compared to almost all other academic disciplines with connections to a profession, law fares quite well in the practical usefulness of some of its scholarship. The best evidence of this is citation in court opinions. We can have little confidence that the latest academic work of business-school faculty, say, is attended to by many in the real world of business, whether in accounting, microeconomics, finance, macroeconomics, marketing, or strategy. But in law, citations to academic writing in court opinions are solid evidence that scholarship can be useful, and courts routinely use some of the scholarship that legal scholars generate. 17

We cannot push the comparison to other fields very hard, of course, because litigation lends itself rather uniquely to a written record of scholarship found useful by a court. But court citation is evidence of usefulness, and it is probably better evidence than citation by other scholars, which has too many confounding variables. 18 I do not claim that this view is widely held; it is not. 19 But it should be. Many scholars have been drawn to the analysis of courts’ citation of legal scholarship. 20 Perhaps surprisingly, courts appear to rely more on periodicals than on treatises and the like. 21 As with citation in law reviews and journals, citation in a judicial opinion does not imply that the cited scholarship is important or even that it contributed meaningfully to the court’s decision. One recent study of the U.S. Supreme Court’s citation of legal scholarship in trademark cases finds that most citations are of “the low quality, perfunctory variety.” 22 But another study finds that courts are more attentive than citing scholars to the substance of the articles they cite. 23 This suggests that citations in court opinions may not be a great measure of usefulness but may still be more meaningful than citations by other scholars.

In any event, citations in judicial opinions to academic articles are surely more important as a measure of scholarly impact than citations to blog posts and media mentions. The relative strengths of highly cited corporate and securities academics and other judicial superstars might easily, if their judicial citations were included, move Chicago from third place to below fifth, especially as Chicago has lost its former strength in these areas and in antitrust.

III. Google (Scholar) “Brian Leiter”

So what is the answer? The Sisk-Leiter scores, done carefully, would be more informative, but they are limited to the Law Reviews and Journals in Westlaw’s database, which includes many journals of low reputation – some barely above newsletter status – while omitting much of the impact the studied scholars are having. Expanding to judicial citations is a must. In addition, it is time for the legal academy to catch up to the rest of the world and embrace Google Scholar.

Do a search, something like [Google Scholar Brian Leiter], and be prepared for a surprise. 24 Not only has Brian Leiter set up his Google Scholar page (good work), but the full impact of his prolific work ethic and thought-provoking body of work hits you in the face. 557? Pshaw. He has, at this writing, 5,579 citations, 25 a remarkable total. One complaint about Google Scholar is that it “includes citations that some administrators might prefer to exclude, e.g., citations in unpublished working papers.” 26 But that is hardly objectionable. A citation in a good working paper can be worth at least as much as a citation in a low-tier journal or law review, and a widely circulated and cited working paper that cites a scholar’s work may be much better evidence of “impact” than a published article that is itself rarely if ever cited. As Brian Leiter’s Google Scholar page shows, the influence of his work is an order of magnitude greater than his Sisk-Leiter score implies. 27 Yes, there are problems with Google Scholar. For example, about 20% of Eric Posner’s also-prodigious cite count on Google Scholar (28,310!) comes from over 5,000 cites to a book edited by Jon Elster that does not appear even to include a chapter by Eric Posner. 28 But these problems are fixable. Time to join the 2010s, law faculty.
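Both the order-of-magnitude claim and the Elster-book artifact can be checked against the figures quoted above. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the Google Scholar figures quoted above.
leiter_westlaw = 557      # Sisk-Leiter Westlaw count
leiter_gscholar = 5_579   # Leiter's Google Scholar count at this writing

print(f"Google Scholar / Westlaw ratio: {leiter_gscholar / leiter_westlaw:.1f}x")

posner_gscholar = 28_310
elster_book_cites = 5_000  # "over 5,000" cites to the Elster-edited book
share = elster_book_cites / posner_gscholar
# share is just under 18%, consistent with "about 20%" in the text.
print(f"Share of Posner's count from the Elster book: at least {share:.0%}")
```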


Citation counts may be useful, though probably less so than we want to believe. A scholar’s influence is ultimately a matter of the subjective opinions of many; it is not a “thing” we can hope to agree on. The best we might hope for is an “incompletely theorized agreement” that, for example, Brian Leiter is a very important, productive, and influential legal philosopher. 29 The Sisk-Leiter methodology is not implemented well enough for confident inference, and even if it were, it should include judicial citations and additional information from Google Scholar. That would better measure the impact of individual scholars.

* J.B. Heaton, P.C. Thanks to Brian Leiter for conversations about law school rankings and helpful comments on a preliminary draft.



  1.  Brian Leiter, Measuring the Academic Distinction of Law Faculties, 29 J. Legal Stud. 451 (2000).
  2.  Id. at 457–59.
  3.  See Gregory Sisk et al., Scholarly Impact of Law School Faculties in 2012: Applying Leiter Scores to Rank the Top Third, 9 U. St. Thomas L.J. 838 (2012) [hereinafter The 2012 Study]; Gregory Sisk et al., Scholarly Impact of Law School Faculties in 2015: Updating the Leiter Score Ranking for the Top Third, 12 U. St. Thomas L.J. 100, 111 (2015) [hereinafter The 2015 Study].
  4.  The 2012 Study, supra note 3, at 850–51; The 2015 Study, supra note 3, at 118.
  5. See Leiter, supra note 1, at 468 (“In brief, I used Westlaw’s JLR database rather than TP-ALL, since the latter includes on-line versions of treatises (for example, Wright and Miller on Federal Practice and Procedure) and thus would artificially inflate the counts for schools at which these scholars teach. Names were searched as, for example, ‘Brian/2 Leiter,’ except where multiple middle initials or similar factors made necessary a wider scope. To guard against false positives with common names, 10–20 of the hits were reviewed; the percentage that was false positives was then multiplied against the total number of hits returned, and that amount was subtracted from the citation total.”).
  6.  Gregory Sisk et al., Scholarly Impact of Law School Faculties in 2018: Updating the Leiter Score Ranking for the Top Third (2018) (U. St. Thomas Legal Stud., Research Paper No. 18-15), available at
  7. Brian Leiter, Top 50 Law Schools Based on Scholarly Impact, 2018, Brian Leiter’s L. Sch. Rep. (Aug. 13, 2018), [].
  8.  The search is: adv: TE(brian /2 Leiter) and DATE(aft 2012) and DATE(bef 2018).
  9. Brian Leiter, David Segal’s Hatchet Job on Law Schools, Brian Leiter’s L. Sch. Rep. (Nov. 20, 2011), [].
  10. Brian Leiter, Four Changes to the Status Quo in Legal Education that Might Be Worth Something, Brian Leiter’s L. Sch. Rep. (Mar. 15, 2012), [].
  11.  Brian Leiter, On So-Called “Empirical Legal Studies” and Its Problems, Brian Leiter’s L. Sch. Rep. (July 6, 2010), [].
  12.  See, e.g., Anita Bernstein, Comment to Tamanaha’s Proposals on Reforming Legal Education Financing and Regulation, Brian Leiter’s L. Sch. Rep. (June 1, 2012, 11:08 AM), [] (cited in Paul Horwitz, What Ails the Law Schools?, 111 Mich. L. Rev. 955, 976 (2013), which comes up as a Brian Leiter Westlaw “hit.”).
  13.  And found fascinating; highly recommend!
  14.  See Gary M. Lucas, Jr., Measuring Scholarly Impact: A Guide for Law School Administrators and Legal Scholars, 165 U. Pa. L. Rev. Online 165, 170–71 (2017).
  15.  See Anderson v. Griffin, 397 F.3d 515, 521 (7th Cir. 2005) (citing Ronald J. Allen & Brian Leiter, Naturalized Epistemology and the Law of Evidence, 87 Va. L. Rev. 1491, 1527–29 (2001)); Libas, Ltd. v. United States, 193 F.3d 1361, 1368 (Fed. Cir. 1999) (citing Brian S. Leiter, The Epistemology of Admissibility, 1997 B.Y.U. L. Rev. 803, 818–19 (1997)); Galvin v. Eli Lilly & Co., 488 F.3d 1026, 1034 (D.C. Cir. 2007) (citing Allen & Leiter, supra, at 1524–25).
  16. See, e.g., Michael D. McClintock, The Declining Use of Legal Scholarship by Courts: An Empirical Study, 51 Okla. L. Rev. 659, 659–60 (1998).
  17.  See Whit D. Pierce & Anne E. Reuben, The Law Review Is Dead; Long Live the Law Review: A Closer Look at the Declining Judicial Citation of Legal Scholarship, 45 Wake Forest L. Rev. 1185, 1208–1211 (2010) (finding a robust and on-going use of secondary scholarship in judicial opinions). Earlier research to the contrary that has not held up to later scrutiny includes the McClintock study. Id. at 1190–1211.
  18.  See Fred R. Shapiro, The Most-Cited Law Review Articles, 73 Calif. L. Rev. 1540, 1543 (1985) (“Citation counts do, of course, have limitations. Some problems stem from ambiguous motivations of scholars in choosing to include particular references. Citations may be made for many reasons . . . such as paying homage to pioneers, giving credit for related work, providing background reading, and substantiating claims. The most problematic of these reasons for citation are self-citations and negative citations. Self-citations may inflate an author’s citation total; negative citations—citations for the purpose of criticism—might result in a high total for a shoddy piece of scholarship.”). An entertaining read on the citation game is J. M. Balkin & Sanford Levinson, How to Win Cites and Influence People, 71 Chi.-Kent L. Rev. 843 (1996).
  19. See, e.g., Arthur Austin, The Law Academy and the Public Intellectual, 8 Roger Williams U. L. Rev. 243 (2003).
  20.  See Louis J. Sirico, Jr., The Citing of Law Reviews by the Supreme Court:1971–1999, 75 Ind. L.J. 1009, 1010 (2000) (study of citations of law reviews by the U.S. Supreme Court); Lee Petherbridge & David L. Schwartz, An Empirical Assessment of the Supreme Court’s Use of Legal Scholarship, 106 Nw. U. L. Rev. 995, 995 (2012) (analyzing the U.S. Supreme Court’s use of legal scholarship in its opinions); David L. Schwartz & Lee Petherbridge, The Use of Legal Scholarship by the Federal Courts of Appeals: An Empirical Study, 96 Cornell L. Rev. 1345, 1352 (2011) (analyzing the use of legal scholarship in the opinions of the United States Court of Appeals for the Federal Circuit); A. Michael Beaird, Citations to Authority by the Arkansas Appellate Courts, 1950–2000, 25 U. Ark. Little Rock L. Rev. 301 (2003); Richard G. Kopf, Do Judges Read the Review? A Citation-Counting Study of the Nebraska Law Review and the Nebraska Supreme Court, 1972–1996, 76 Neb. L. Rev. 708 (1997); Michelle M. Harner & Jason A. Cantone, Is Legal Scholarship Out of Touch? An Empirical Analysis of the Use of Scholarship in Business Law Cases, 19 U. Miami Bus. L. Rev. 1, 1 (2011) (analyzing the use of legal scholarship by Delaware state courts from 1997 to 2007); William H. Manz, The Citation Practices of the New York Court of Appeals: A Millennium Update, 49 Buff. L. Rev. 1273, 1273–74 (2001) (study of New York’s highest court’s citations, including citations of legal periodicals); Brent E. Newton, Law Review Scholarship in the Eyes of the Twenty-First-Century Supreme Court Justices: An Empirical Analysis, 4 Drexel L. Rev. 399 (2012).
  21. See, e.g., William H. Manz, Citations in Supreme Court Opinions and Briefs: A Comparative Study, 94 Law Libr. J. 267, 294–95 (2002) (finding that legal periodicals are cited more than other secondary materials like treatises, and a very low rate of citation to treatises, Restatements, annotations, legal encyclopedias, and legal dictionaries).
  22.  Derek Simpson & Lee Petherbridge, An Empirical Study of the Use of Legal Scholarship in Supreme Court Trademark Jurisprudence, 35 Cardozo L. Rev. 931, 953 (2014).
  23.  Jeffrey L. Harrison & Amy R. Mashburn, Citations, Justifications, and the Troubled State of Legal Scholarship: An Empirical Study, 3 Tex. A&M L. Rev. 45, 45 (2015) (finding that judicial cites are slightly more likely to address “the ideas, reasoning, methodology, or conclusions found in the cited work[,]” something that is rare in most citations of either type).
  24. Brian Leiter, Google Scholar [].
  25.  Id.
  26.  Lucas, supra note 14, at 172.
  27.  Leiter, supra note 24.
  28.  Eric Posner, Google Scholar, [].
  29.  See Cass R. Sunstein, Incompletely Theorized Agreements, 108 Harv. L. Rev. 1733, 1735–36 (1995) (“Participants in legal controversies try to produce incompletely theorized agreements on particular outcomes. They agree on the result and on relatively narrow or low-level explanations for it. They need not agree on fundamental principle.”).