Current Print Issue

Vol. 166, Issue 2

January 2018


Featured Article

Pandora’s Digital Box: The Promise and Perils of Digital Wallets

By
Adam J. Levitin
166 U. Pa. L. Rev. 305 (2018)

Digital wallets, such as Apple Pay and Google Pay, are smart payment devices that can integrate payments with two-way, real-time communications of any type of data. Integration of payments with real-time communications holds out tremendous promise for consumers and merchants alike: the combination, in a single, convenient platform, of search functions, advertising, payment, shipping, customer service, and loyalty programs. Such an integrated retail platform offers consumers a faster and easier way to transact, and offers brick-and-mortar retailers an eCommerce-type ability to identify, attract, and retain customers. At the same time, however, digital wallets present materially different risks for both consumers and merchants than traditional plastic card payments precisely because of their smart nature.

For consumers, digital wallets can trigger an unfavorable shift in the applicable legal regime governing the transactions, increase fraud risk, create confusion regarding error resolution, expose consumers to non–FDIC‐insured accounts, and substantially erode transactional privacy. These risks are often not salient to consumers, who thus cannot distinguish between different digital wallets on the basis of risk. Consumers’ inability to protect against these risks points to a need for regulatory intervention by the Consumer Financial Protection Bureau to ensure minimum standards for digital wallets.

For merchants, digital wallets can divest valuable customer information used for antifraud, advertising, loyalty, and customer service purposes. Digital wallets can also facilitate poaching of customers by competitors, impair merchants’ customer relationship management, deprive merchants of influence over consumers’ payment choice and routing, increase fraud risk, subject merchants to patent infringement liability, and ultimately increase the costs of accepting payments. Merchants are constrained in their ability to refuse or condition payments from digital wallets based on the risks presented because of merchant rules promulgated by credit card networks. These rules raise antitrust concerns because they foreclose entry to those digital wallets that offer merchants the most attractive value proposition: wallets that do not use the credit card networks for payments.


Featured Comment

FDA Regulation of 3D‐Printed Organs and Associated Ethical Challenges

By
Elizabeth Kelly
166 U. Pa. L. Rev. 515 (2018)

The implications of pervasive implementation of 3D printing with biological material, also known as “bioprinting,” are vast. They present never-before-seen hurdles, which are particularly complicated due to the vulnerability of the patients involved, who often need new organs to survive. In this Comment, I limit the scope of this inquiry to the most immediate challenges of embracing 3D-printed organs in our health care market: potential statutory roadblocks, regulatory concerns over manufactured organs, and ethical challenges of which we must remain aware. I submit one path by which 3D-printed organs can fit in our current legal and regulatory framework. I also define who should be charged with regulating them and propose how future regulators should do so. Finally, I raise additional concerns about 3D-printed organs that will require deeper analysis as more information becomes available, including the myriad ethical challenges presented by this new technology.

The U.S. Food and Drug Administration (FDA) is the appropriate body to regulate 3D-printed organs because a manufactured organ must be treated differently than a human organ, which can be transplanted as “simply” part of the practice of medicine. It remains to be seen how the FDA will gather sufficient data to satisfy premarket approval requirements, determine who gets access and when, and govern the marketing of 3D-printed organs, because, while the output is individualized, the process by which the organs are created can be scaled dramatically. In doing so, those in charge must also confront unique, multifaceted ethical challenges.


Online Exclusives


Essay

The Unicorn Governance Trap

By
Renee M. Jones
166 U. Pa. L. Rev. Online 165 (2017)

This Essay highlights emerging governance problems presented by persistent Unicorns. It argues that recent market trends and deregulatory reforms have weakened or eliminated the principal mechanisms that imposed discipline on start‐up company founders. Recent scandals at prominent Unicorns suggest that investors have erred in placing blind faith in the honesty and capabilities of start‐up founders. Policymakers should learn from these disasters and close regulatory loopholes that allow Unicorns to persist in limbo between private and public status for extended periods of time.

Part I provides an overview of how the IPO has shifted from the preferred exit strategy in the eyes of entrepreneurs to a regulatory morass to be shunned. It traces developments in the market for start‐up company shares, and regulatory reforms that facilitated the proliferation of Unicorns. Part II highlights unique governance risks posed by Unicorns, addressing both societal and investor protection concerns. Part III offers suggestions on how to address Unicorn risks, and raises fundamental questions about the future of Unicorns in our economy.


Response

Auditing Algorithms for Discrimination

By
Pauline T. Kim
166 U. Pa. L. Rev. Online 189 (2017)

As reliance on algorithmic decisionmaking expands, concerns are growing about the potential for arbitrary, unfair, or discriminatory outcomes in areas such as employment, credit markets, and criminal justice. Legal scholars have lamented the lack of accountability of these automated decision processes and called for greater transparency. They argue that the way to avoid unfair or discriminatory algorithms is to demand greater disclosure of how they operate. Accountable Algorithms resists this call for transparency, calling it “a naive solution.” Instead, it argues that technology offers tools—“a new technological toolkit”—that can better assure accountability.

One of the examples that Kroll et al. rely on to illustrate their argument is the goal of ensuring that algorithms do not discriminate. Many commentators have pointed out the risk that automated decision processes may produce biased outcomes, and in prior work, I have argued that serious policy concerns are raised when these algorithms exacerbate historic inequality or disadvantage along the lines of race, sex, or other protected characteristics—what I’ve referred to as “classification bias.” Recognizing that the precise meaning of discrimination is uncertain and contested, Kroll et al. do not try to resolve debates over the meaning of discrimination. Instead, without choosing among the competing definitions, they simply survey the available technical tools, suggesting that these tools will be more effective at ensuring nondiscrimination than calls for transparency.

Transparency involves outside scrutiny of a decision process, for example, by allowing third parties to examine the computer code or the decision criteria it implements. Auditing is another method for promoting transparency. When the goal is nondiscrimination, auditing could involve techniques to ensure that an algorithm follows a specified rule—for example, sorting must not occur based on race or sex. Alternatively, auditing for discrimination could take the form of examining inputs and outputs to detect when a decision process systematically disadvantages particular groups. The latter form of auditing does not involve direct examination of the decision process, but is useful in detecting patterns. This type of auditing, in the form of field experiments, is well established in the social science literature as a technique for testing for discrimination in decisions such as employment and consumer transactions. Auditing the effects of decisionmaking algorithms similarly offers a method of detecting when they may be biased against particular groups. Kroll et al., however, express skepticism about auditing as a strategy, arguing that it is not only technically limited, but also likely restricted by law. More specifically, they suggest that when an algorithm is found to have a disparate impact, the Supreme Court’s decision in Ricci v. DeStefano may prevent correcting for that bias.
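To make the outcome-based form of auditing concrete, the following minimal sketch (not drawn from the Essay; the data and names are hypothetical) shows how an auditor with access only to an algorithm’s inputs and outputs might compare selection rates across groups and apply the EEOC’s four-fifths rule of thumb, under which a ratio of the lowest group’s selection rate to the highest below 0.8 is treated as preliminary evidence of adverse impact.

    from collections import defaultdict

    # Hypothetical audit records: (group, decision) pairs, where decision is
    # True if the algorithm selected the applicant. The data is synthetic.
    records = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    def selection_rates(records):
        """Compute the fraction of positive decisions for each group."""
        selected = defaultdict(int)
        total = defaultdict(int)
        for group, decision in records:
            total[group] += 1
            if decision:
                selected[group] += 1
        return {g: selected[g] / total[g] for g in total}

    def disparate_impact_ratio(rates):
        """Ratio of the lowest group selection rate to the highest.

        The EEOC's four-fifths rule of thumb treats a ratio below 0.8 as
        preliminary evidence of adverse impact.
        """
        high = max(rates.values())
        return min(rates.values()) / high if high else 1.0

    rates = selection_rates(records)
    ratio = disparate_impact_ratio(rates)
    print(rates)  # here: {'group_a': 0.75, 'group_b': 0.25}
    print(f"disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("flag for review: possible adverse impact")

Note that this style of audit examines only effects, not code: it would flag the same pattern whether the disparity arose from an explicit rule, a biased training set, or a proxy variable, which is precisely why it complements, rather than replaces, the technical tools Kroll et al. describe.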

This Essay responds to Kroll et al., arguing that, despite its limitations, auditing for discrimination should remain an important part of the strategy for detecting and responding to biased algorithms. Technical tools alone cannot reliably prevent discriminatory outcomes because the causes of bias often lie not in the code, but in broader social processes. Therefore, implementing the best available technical tools can never guarantee that algorithms are unbiased. Avoiding discriminatory outcomes will require awareness of the actual impact of automated decision processes, namely, through auditing.

Fortunately, the law permits the use of auditing to detect and correct for discriminatory bias. To the extent that Kroll et al. suggest otherwise, their conclusion rests on a misreading of the Supreme Court’s decision in Ricci. That case narrowly addressed a situation in which an employer took an adverse action against identifiable individuals based on race, while still permitting the revision of algorithms prospectively to remove bias. Such an approach is entirely consistent with the law’s clear preference for voluntary efforts to comply with nondiscrimination goals.


Case Note

Of Laundering and Legal Fees: The Implications of United States v. Blair for Criminal Defense Attorneys Who Accept Potentially Tainted Funds

By
Philip J. Griffin
164 U. Pa. L. Rev. Online 179 (2016)

“In the common understanding, money laundering occurs when money derived from criminal activity is placed into a legitimate business in an effort to cleanse the money of criminal taint.” 18 U.S.C. § 1957, however, prohibits a much broader range of conduct. Any person who “knowingly engages” in a monetary transaction involving over $10,000 of “criminally derived property” can be charged with money laundering under § 1957.

Because § 1957 eliminates the requirement found in other money laundering statutes that the government prove an attempt to commit a crime or to conceal the proceeds of a crime, § 1957 “applies to the most open, above-board transaction,” such as a criminal defense attorney receiving payment for representation. In response to pressure from commentators, Congress passed an amendment two years after § 1957’s enactment defining the term “monetary transaction” so as to exclude “any transaction necessary to preserve a person’s right to representation as guaranteed by the sixth amendment to the Constitution.”

The statutory safe harbor found in § 1957(f)(1) has successfully immunized defense attorneys from money laundering prosecutions. However, United States v. Blair raised concerns among the criminal defense bar because of its holding that an attorney-defendant was not entitled to protection under § 1957(f)(1). In Blair, an attorney-defendant was convicted of violating § 1957 for using $20,000 in drug proceeds to purchase two $10,000 bank checks to retain attorneys for associates of his client. Noting that Sixth Amendment rights are personal to the accused and that Blair used “someone else’s money” to hire counsel for others, the Fourth Circuit held that his actions fell “far beyond the scope of the Sixth Amendment” and were not protected by the safe harbor. In his strongly worded dissent, Chief Judge Traxler criticized the court for “nullif[ying] the § 1957(f)(1) exemption and creat[ing] a circuit split.”

This Case Note discusses the implications of Blair for the criminal defense attorney who accepts potentially tainted funds and proposes a solution to ameliorate its unintended consequences. First, Part I provides relevant background information by discussing the money laundering statutory framework, the criticisms leveled at the framework as it was written, the Congressional response to that criticism, and § 1957(f)(1)’s application up until Blair. Next, Part II describes the Blair decision in detail and examines its implications. Part III then proposes a novel solution to the problems it created. Finally, the Case Note concludes with a brief word of practical advice for the criminal defense bar.