VOLUME 161, ISSUE 6 May 2013

Articles

One of the central questions in free speech jurisprudence is what activities the First Amendment encompasses. This Article considers that question in the context of an area of increasing importance—algorithm-based decisions. I begin by looking to broadly accepted legal sources, which for the First Amendment means primarily Supreme Court jurisprudence. That jurisprudence provides for very broad First Amendment coverage, and the Court has reinforced that breadth in recent cases. Under the Court’s jurisprudence the First Amendment (and the heightened scrutiny it entails) would apply to many algorithm-based decisions, specifically those entailing substantive communications. We could of course adopt a limiting conception of the First Amendment, but any nonarbitrary exclusion of algorithm-based decisions would require major changes in the Court’s jurisprudence. I believe that First Amendment coverage of algorithm-based decisions is too small a step to justify such changes. But insofar as we are concerned about the expansiveness of First Amendment coverage, we may want to limit it in two areas of genuine uncertainty: editorial decisions that are neither obvious nor communicated to the reader, and laws that single out speakers but do not regulate their speech. Even with those limitations, however, an enormous and growing amount of activity will be subject to heightened scrutiny absent a fundamental reorientation of First Amendment jurisprudence.

One of the most astounding and largely underappreciated developments accompanying the recent proliferation of mass-market computer technology has been the rise of video gaming. From arcade to console and computer desktop to interactive multiplayer network, the explosion in computer video games has been spurred by Internet accessibility, whether for downloading and updating software, tendering payment, or finding and interacting with other players. The result has been a flourishing new entertainment sector, with revenues that now consistently rival or exceed those of the established music and movie industries.

In this Article, I consider a fundamental set of legal issues, integral to e-sports, that concern the ownership and control of rights in player performances. The nature of such competitions presents a new and fairly complex practical configuration for legal analysis. Analogous questions regarding the ownership of physical performances have certainly arisen in the past, but the nature of e-sports generates certain novelties in the analysis. Unlike physical sports, where player activity is observed and recorded directly for broadcast and similar dissemination, e-sports competitions are by definition mediated by computer game software that is itself the subject of various intellectual property rights. This characteristic of e-sports adds a further layer of complexity to the legal discussion, implicating the interests of rights-holding entities not found in negotiations over competitive performances in physical sports.

This Article considers one of the challenges of this evolution: intermediaries’ liability for the harm they cause to users. All online interactions are conducted through intermediaries—the routers, servers, applications, services, and switches that make up the Internet’s “core.” In the era of the trust-based Internet, intermediaries were largely passive participants in the technological ecosystem. This limited both the harm they could cause and the basis for liability against them. In today’s Internet, intermediaries are increasingly active; they make real-time decisions about how to handle user data, and they have the ability to store or share that data for private purposes. In the post-trust Internet, intermediaries can cause real harm. Without trust, it remains unclear which institutions, if any, safeguard users from such harm.

This Article proceeds in three parts. Part I considers the role of trust in the early Internet, how the evolving Internet is moving away from this trust-based model, and how the loss of trust affects and limits online institutions. Part II looks to how other institutions function absent trust. Part III considers the limitations and lessons from these standard approaches and synthesizes them into a set of principles for establishing intermediaries’ liability.

Antitrust agencies around the world are increasingly focusing on digital industries. Critics have justifiably questioned the ability of competition agencies to make beneficial enforcement decisions given the complexity and rapid pace of change in online markets. This Article discusses those criticisms and addresses the argument that, because the error costs of overenforcement of antitrust laws in digital markets would be much higher than the error costs of underenforcement, courts and antitrust agencies should presume against antitrust intervention in digital industries. While acknowledging that there is often good reason for such modesty in enforcement, this Article discusses several ways in which competition policy can adjust to better account for potential costs and benefits of enforcement in digital platform markets. It argues that nonprice effects related to information and innovation are particularly important to the performance of online platforms, and may hold the key to a better understanding of the costs of antitrust underenforcement and the assessment of the competitive effects of conduct and transactions in digital industries.

Cloud computing locates computing resources on the Internet in a fashion that makes them highly dynamic and scalable. This kind of distributed computing environment can quickly expand to handle a greater system load or take on new tasks. Cloud computing thereby permits dramatic flexibility in processing decisions—on a global basis. The rise of the cloud has also significantly challenged established legal paradigms. This Article analyzes current shortcomings of information privacy law in the context of the cloud. It also develops normative proposals to allow the cloud to become a central part of the evolving Internet. These proposals rest on strong and effective protections for information privacy that are also sensitive to technological changes.
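The elasticity described above is easy to see in miniature. The sketch below is invented for illustration and is not drawn from the Article; the function name and thresholds are assumptions, standing in for the kind of rule a cloud scheduler might apply when deciding to add or shed capacity as load changes.

```python
# Hypothetical sketch of cloud elasticity: a scheduler that scales the
# number of running instances out or in based on observed load.
# Thresholds and names are invented for illustration.

def rescale(instances: int, load_per_instance: float) -> int:
    """Return a new instance count for the observed per-instance load."""
    if load_per_instance > 0.8:                    # overloaded: scale out
        return instances + 1
    if load_per_instance < 0.3 and instances > 1:  # underused: scale in
        return instances - 1
    return instances                               # within band: hold steady

print(rescale(4, 0.9))  # -> 5
print(rescale(4, 0.1))  # -> 3
```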

This Article examines three areas of change in personal data processing due to the cloud. In doing so, it draws on an empirical study in which I analyzed the data processing of six major international companies. The first area of change concerns the nature of information processing at companies. A second legal issue concerns the multidirectional nature of modern data flows, which occur today as a networked series of processes made to deliver a business result. A final change relates to the shift toward a process-oriented management approach. Users no longer need to own technology, whether software or hardware, that is placed in the cloud. Rather, different parties in the cloud can contribute inputs and outputs and execute other kinds of actions. This Article’s focus is thus a comparative one: it explores significant changes in data processing due to the cloud and the resulting tension with contemporary information privacy law.

Computers are making an increasing number of important decisions in our lives. They fly airplanes, navigate traffic, and even recommend books. In the process, computers reason through automated algorithms and constantly send and receive information, sometimes in ways that mimic human expression. When can such communications, called here “algorithmic outputs,” claim First Amendment protection?
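To make “algorithmic output” concrete, consider a toy recommender, invented here rather than taken from the Article: given a reader’s past ratings, it emits a book recommendation. The data and the scoring rule are assumptions for illustration only; the question the Article poses is whether the string such a program returns counts as protected speech.

```python
# A toy "algorithmic output": a program that recommends a book based on
# other readers' ratings. Data and scoring rule are invented.

ratings = {
    "Alice": {"Ulysses": 5, "Dubliners": 4},
    "Bob":   {"Ulysses": 5, "Hamlet": 3},
}

def recommend(user: str) -> str:
    """Recommend the best-rated title the user has not yet read."""
    seen = set(ratings[user])
    scores = {}  # title -> ratings collected from other users
    for other, books in ratings.items():
        if other == user:
            continue
        for title, score in books.items():
            if title not in seen:
                scores.setdefault(title, []).append(score)
    # The string returned here is the machine-generated communication
    # whose First Amendment status is in question.
    return max(scores, key=lambda t: sum(scores[t]) / len(scores[t]))

print(recommend("Alice"))  # -> Hamlet
```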

An architectural principle known as protocol layering is widely recognized as one of the foundations of the Internet’s success. In addition, some scholars and industry participants have urged using the layers model as a central organizing principle for regulatory policy. Despite its importance as a concept, a comprehensive analysis of protocol layering and its implications for Internet policy has yet to appear in the literature. This Article attempts to correct this omission. It begins with a detailed description of the way the five-layer model developed, introducing protocol layering’s central features, such as the division of functions across layers, information hiding, peer communication, and encapsulation. It then discusses the model’s implications for whether particular functions are performed at the edge or in the core of the network, contrasts the model with the way that layering has been depicted in the legal commentary, and analyzes attempts to use layering as a basis for competition policy. Next the Article identifies certain emerging features of the Internet that are placing pressure on the layered model, including WiFi routers, network-based security, modern routing protocols, and wireless broadband. These developments illustrate how every architecture inevitably limits functionality as well as the architecture’s ability to evolve over time in response to changes in the technological and economic environment. Together these considerations support adopting a more dynamic perspective on layering and caution against using layers as a basis for a regulatory mandate for fear of cementing the existing technology into place in a way that prevents the network from innovating and evolving in response to shifts in the underlying technology and consumer demand.
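A minimal sketch may help fix the vocabulary the abstract uses. The code below is a textbook illustration of encapsulation, not anything from the Article: an application payload is wrapped in transport, network, and link headers on the way down and the headers are peeled off in order on the way up. Each layer touches only its own header, which is the “information hiding” the abstract describes. The header strings are placeholders, not real protocol encodings.

```python
# Illustrative encapsulation across three of the five layers. The header
# strings are stand-ins, not real TCP/IP/Ethernet formats.

def encapsulate(payload: bytes) -> bytes:
    """Wrap an application payload in headers, one layer at a time."""
    segment = b"TCP|" + payload   # transport layer: ports, sequencing
    packet  = b"IP|"  + segment   # network layer: addressing, routing
    frame   = b"ETH|" + packet    # link layer: delivery on the local link
    return frame

def decapsulate(frame: bytes) -> bytes:
    """Remove each header in order; no layer reads another layer's header."""
    for header in (b"ETH|", b"IP|", b"TCP|"):
        assert frame.startswith(header), "unexpected header"
        frame = frame[len(header):]
    return frame

wire = encapsulate(b"GET /index.html")
assert decapsulate(wire) == b"GET /index.html"
print(wire)  # b'ETH|IP|TCP|GET /index.html'
```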
