VOLUME 162, ISSUE 1 December 2013


Impersonal default rules, chosen by private or public institutions, establish settings and starting points for countless goods and activities—cell phones, rental car agreements, computers, savings plans, health insurance, websites, privacy, and much more. Some of these rules do a great deal of good, but others might be poorly chosen, perhaps because the choice architects who select them are insufficiently informed, perhaps because they are self-interested, perhaps because one size does not fit all. The existence of heterogeneity argues against impersonal default rules. The obvious alternative to impersonal default rules, of particular interest when individual situations are diverse, is active choosing, by which people are asked or required to make decisions on their own. The choice between impersonal default rules and active choosing depends largely on the costs of decisions and the costs of errors. If active choosing were required in all contexts, people would quickly be overwhelmed; default rules save a great deal of time, making it possible to make other choices and in that sense promoting autonomy. Especially in complex and unfamiliar areas, impersonal default rules have significant advantages. But where people prefer to choose, and where learning is both feasible and important, active choosing may be best, especially if people’s situations are relevantly dissimilar.

At the same time, it is increasingly possible for private and public institutions to produce highly personalized default rules, which reduce the problems with one-size-fits-all defaults. In principle, personalized default rules could be designed for every individual in the relevant population. Collection of the information that would allow accurate personalization might be burdensome and expensive, and might also raise serious questions about privacy. But at least when choice architects can be trusted, personalized default rules offer most (not all) of the advantages of active choosing without the disadvantages.

Lawyers and judges speak to each other in a language of precedents – decisions from cases that have come before. The most persuasive precedent to cite, of course, is an on-point decision of the U.S. Supreme Court. But Supreme Court opinions are changing. They contain more factual claims about the world than ever before, and those claims are now rich with empirical data. This Supreme Court factfinding is also highly accessible; fast digital research leads directly to factual language in old cases that is perfect for arguments in new ones. An unacknowledged consequence of all this is the rise of what I call “factual precedents”: the tendency of lower courts to cite Supreme Court cases as authorities on factual subjects, as evidence that the factual claims are indeed true. Rather than citing, for example, evidence from the record to establish that carpal tunnel syndrome regularly resolves without surgery, lower courts instead cite language from a Supreme Court opinion for that point.

This Article carefully describes how lower courts are using Supreme Court facts today and then argues that these factual precedents are unwise. The Supreme Court is not a factfinding institution. Facts change over time. And, unlike legal precedents, factual statements from the Supreme Court cannot be assumed to have been carefully deliberated or to carry the force of law. I argue that Supreme Court statements of fact should not receive any authoritative force separate from the force that attaches to whatever legal conclusions they contributed to originally. If a fact is so central to the legal holding that the two meld together, then the Supreme Court is free to so state and thus insulate the factual conclusion from future challenges by making it part of the legal rule. But the presumption, I suggest, should be no precedential value for generalized factual claims – even if they are facts found in the U.S. Reports.

Scholars engaged in empirical legal research have long struggled to balance the methodological demands of social science with the normative aspirations of legal scholarship. In recent years, empirical legal scholarship has increased dramatically in methodological sophistication, but in the process has lost some of its relevance to the normative goals that animate legal scholarship. In many empirical studies, the phenomena that are readily measured have a complex relationship with the values that are relevant to legal reform, yet empirical scholars often neglect to explain how their positive findings relate to normative claims. Although some empirical studies offer prescriptions, they often rely on normative premises that are clearly untenable or simply fail to explain how they purport to derive an ‘ought’ from an ‘is.’ Other empirical studies avoid prescription altogether, reporting results without clarifying how they are relevant to meaningful questions about law or legal institutions.

Using as examples three types of measures commonly used to evaluate judges and institutions—citation counts, reversal rates, and inter-judge disparities—this Article describes widespread flaws in efforts to connect the ‘is’ and the ‘ought’ in empirical legal scholarship. The Article argues that normative implications should not be an afterthought in empirical research, but rather should inform research design. Empirical scholars should focus on quantities that can guide policy, and not merely on phenomena that are conveniently measured. They should be explicit about how they propose to measure the goodness of outcomes, disclose what assumptions are necessary to justify their proposed metrics, and explain how these metrics relate to the observable data. When values are difficult to quantify, legal empiricists will need to develop theoretical frameworks and empirical methods that can credibly connect empirical findings to policy-relevant conclusions.


Over the last fifteen years, unpaid internships have become a part of our generation's psyche. You try to get into the best college; then you try to get the best unpaid internship; and finally you try to get the best full-time job. This pattern, however, has raised four primary problems. First, it disadvantages students from the middle and lower classes because they cannot afford to take unpaid internships, which increases and perpetuates socioeconomic and often racial inequality. Second, when interns are not paid, various federal sexual harassment and discrimination legal protections do not apply, since courts have held that such interns are not classified as employees. Third, the emphasis on having work experience in today's employment market necessitates that already debt-burdened students take unpaid internships, putting themselves into further financial trouble. And fourth, employers are firing full-time employees and replacing them with teams of unpaid interns.

Despite these concerns, unpaid internships persist, and as of today, there is little to no case law about them—mainly because interns fear the whistle-blower stigma that would arise from bringing a lawsuit. Recently, however, two class action suits were launched against prominent media and entertainment companies. My Comment seeks to shape the law for these cases of first impression. Using a Supreme Court case from the 1940s, the Fair Labor Standards Act, and the Department of Labor’s Fact Sheet #71, I propose a simple test to determine the legality of unpaid internships: if a for-profit employer, ex ante, expects to derive a benefit from the internship, then the intern is an "employee" (not a "volunteer") who deserves at least the minimum wage and also protection from sexual harassment and discrimination.

Existing project financing structures utilizing the Investment Tax Credit (ITC) and depreciation benefits have helped spur growth in the solar industry but are insufficient on their own to enable the residential solar sector to scale up and become a mainstream energy source. In the span of only a few years, the solar market has grown from a fledgling niche industry to an important global player. Solar installations in the United States grew at an annual rate of 70% between 2005 and 2012. Federal tax incentives and state-level subsidies have largely driven this growth. However, for reasons I explore in this Comment, these tax incentives and subsidies will be unable to sustain such rapid growth in the coming years, especially in the residential sector. If the solar industry is to continue to grow and become competitive with other energy sources, innovative private financing mechanisms are needed to allow residential solar developers to tap into capital markets and access new classes of investors (e.g., mutual funds, pension funds, and other institutional investors).

The securitization of solar leases presents a promising solution to this problem, but a variety of barriers currently prevent solar companies from securitizing these assets successfully. This Comment identifies and assesses these barriers and recommends strategies to promote low-cost securitization of residential solar leases while minimizing the potential risks that such securitization poses.

In Part I, I introduce the solar market, emphasizing in particular the current mechanisms to finance solar systems, the existing policies promoting solar energy, and the residential solar leasing model. In Part II, I present an overview of the asset-backed securitization process, outline how it might apply to solar leasing, and assess the risks and benefits of solar lease securitization. Finally, in Part III, I recommend strategies to reduce the risks posed by solar lease securitization and offer some predictions for the sector going forward. This Comment focuses primarily on residential solar systems but will also address some concepts common to commercial and utility-scale solar systems. Ultimately, I argue that while securitization is not a quick fix, it is a valid option for increasing liquidity and attracting new sources of capital to the solar leasing market.
