An Experiment on Law Review Placement

Our faculty had a lunch discussion of non-traditional legal scholarship and publishing today. As someone who sees growing convergence between scholarship in law and scholarship in other disciplines and who wonders how long the pre-Internet model of legal publishing is likely to persist, I started thinking about the classifying function of traditional legal scholarship.

The argument is sometimes made that traditional scholarship with publications in law reviews is useful because the selection process, even if it is done by student editors and even if it has a number of idiosyncratic features, conveys useful information about the quality of a piece. Thus, even if the editorial process adds little to the value of an article and even if access to the article is restricted by the Google-shielding proclivities and payment demands of many traditional publishers, still the method might have value if it produced useful information on the quality of articles. The torrent of information now available makes good filtration systems all the more valuable.

I only had an hour or so to spend on it, so I developed a very simplified idealization of the placement process, summarized here. Suppose the “actual merit,” whatever that may mean, is distributed in a bell-shaped fashion on the interval 0 to 1. I imagined 2,000 submissions, and supposed that each of 200 journals scores all of the articles, with each score depending on the actual merit plus some parameterizable noise. Obviously not all articles are submitted to all journals, but I was trying to make the best case for the argument that placement conveys useful information. Each journal then ranks the submissions by score. The journals then go through a “feeding order” in which the “best” journals (the Harvards, the Yales, the Stanfords) pick first and lesser journals take the progressively less appetizing scraps. The zebra of legal scholarship is thus progressively devoured. Again, this feeding algorithm is one highly likely to preserve information; if the process is more haphazard (which it is), the amount of information conveyed is likely less.

I then want to look at the statistics of the articles published in each journal: the average actual merit of its publications and the standard deviation of its publications. This gives us some hint as to the likelihood that an article published in a journal ranked 63 is in fact any “better” than an article published in a journal ranked 116. The box-and-whiskers plot shown below summarizes one result. The solid area shows the mean true merit as well as the 25th and 75th percentiles of merit. You can also see the outliers.
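The model can be sketched in a few lines. The original was written in Mathematica (see the note at the end); what follows is a rough Python reimplementation of the idea, and the Beta(5, 5) merit distribution, the 0.1 noise level, and the ten-slots-per-journal assumption are all my own placeholders, not the post’s exact parameters.

```python
import random
from statistics import mean, stdev

random.seed(0)

N_ARTICLES = 2000
N_JOURNALS = 200
NOISE_SD = 0.1  # hypothetical accuracy of student scoring

# "True merit" on [0, 1], bell-shaped: a Beta(5, 5) draw is one simple
# choice (an assumption -- the post does not specify the distribution).
merit = [random.betavariate(5, 5) for _ in range(N_ARTICLES)]

# Each journal scores every article: true merit plus independent noise.
scores = [[m + random.gauss(0, NOISE_SD) for m in merit]
          for _ in range(N_JOURNALS)]

# Feeding order: journal 1 takes its top-scoring 10 articles (2000/200),
# journal 2 picks from what remains, and so on down the rankings.
per_journal = N_ARTICLES // N_JOURNALS
available = set(range(N_ARTICLES))
placements = []  # placements[j] = true merits of the articles journal j got
for j in range(N_JOURNALS):
    ranking = sorted(available, key=lambda i: scores[j][i], reverse=True)
    picks = ranking[:per_journal]
    available -= set(picks)
    placements.append([merit[i] for i in picks])

# Per-journal statistics: mean true merit and its spread.
means = [mean(p) for p in placements]
sds = [stdev(p) for p in placements]
```

Plotting `placements` journal by journal gives the box-and-whiskers picture described above.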

We can also compute, for each pair of journals, the likelihood that an article in the higher-ranked journal has a higher “true merit” than an article in the lower-ranked journal.
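That pairwise likelihood is just the fraction of cross pairs the higher-ranked journal wins. A minimal sketch, with made-up merit values standing in for the simulated placements:

```python
def prob_higher_merit(merits_a, merits_b):
    """Fraction of (a, b) pairs where the article from journal a
    has strictly higher true merit than the article from journal b."""
    wins = sum(1 for a in merits_a for b in merits_b if a > b)
    return wins / (len(merits_a) * len(merits_b))

# Toy illustration (invented numbers, not simulation output):
higher = [0.62, 0.55, 0.70, 0.48]  # higher-ranked journal's articles
lower = [0.50, 0.58, 0.45, 0.52]   # lower-ranked journal's articles
p = prob_higher_merit(higher, lower)  # 12 of 16 pairs -> 0.75
```

A value near 0.5 would mean placement tells you essentially nothing about which of the two articles is better.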

I think this tells us that, even in this best case and with parameters that are not entirely implausible, journal placement does convey some information, but not a huge amount.

One can also produce similar box-and-whisker plots for various levels of “accuracy” in students’ perceptions of the true merit of law review articles.

A lunch colleague asked me what all the graphs mean. I said that it meant that the amount of information conveyed by placement depended on the ability of students to “correctly” perceive the merits of articles. I suppose that statement has the virtues and deficiencies of being unsurprising. But what continues to bother me is that we don’t have much empirical information on student accuracy (do we?) and that, if one fears it is not outstanding, we are placing weight on placement that it cannot bear. If that is correct, the case for traditional legal publishing seems even weaker. It bothers me when we don’t know and seldom seek to figure out if the empirical propositions needed to support our conclusions are actually true.

Note: I believe strongly that all empirical articles, published on the Internet or in traditional law reviews, should be open code, open data. For these images, I have employed Mathematica, which is the language that I use 99% of the time. I encourage you to contact me directly for more detailed information on the code I have used here.