I thought I would start my journey of AI discovery with Prof. Lobel’s call to policy makers to focus on the potential good of Artificial Intelligence (“AI”) when thinking about regulation.1 I agree that too much of today’s conversation about AI is driven by Terminator-infused screeds imagining that Skynet is about to be unleashed and that only the wise regulation of Washington bureaucrats will save us from self-destruction.
In The Law of AI for Good, Prof. Lobel states the general problem with how we talk about AI:
The issue is not whether we should be concerned with tech wrongs, tech risks, or tech fails: the answer is clearly yes. The issue is whether the concerns are unpacked, nuanced, concrete, and balanced — or whether they are bundled, blunt, abstract, at times overstated, and shaping the conversation in distorted and counter-productive ways. (p. 13)
In framing up the need for a new perspective, Prof. Lobel begins by identifying some of the flawed thinking or assumptions in the current “techlash”2 that is fueling the “AI is Bad” sentiment.
The Double Standard: the first problem is people’s proclivity to demand that AI systems “behave” perfectly, even when humans performing the same activity often make mistakes. The most obvious example is autonomous driving; i.e., many people appear to expect that cars driven by AI won’t get into accidents.
Once a Problem, Always a Problem: the second problem is people’s tendency to assume once a problem with AI has been identified, that problem is permanent. The infamous story of a single facial recognition system that initially misidentified women of color turns into “AI can’t identify faces of color.”3
Scarcity: “The current dialogue surrounding AI underappreciates the potential of AI to help alleviate social ills where humans are short-staffed.” (p. 18). I find this one particularly acute in the context of applying AI in the Global South. For example, the Global North might decide not to deploy an AI that only performs as well as a typical radiologist in reading mammograms (because of an abundance of radiologists), but that same AI may be game-changing in countries or communities where there are no radiologists.
Risk Aversion: one of many insights from psychologists Daniel Kahneman and Amos Tversky was that people place more value on losses than they do on gains. Said differently, people will go to greater lengths to avoid a loss than to obtain a gain. Prof. Lobel notes that, for all of the discussion about banning risky AI, there is surprisingly little discussion about mandating the use of AI that works better than existing systems. Requiring the use of “superior” AI is a recurring theme in this article and one that I admit to not having thought much about.
Binary Thinking or “adopt or ban”: this is the tendency to discuss choices around AI as either adopting whatever the existing standard is or permanently banning it.
Assuming Distributional Impact: this is a variant of the Once a Problem problem; i.e., many people assume that facial recognition, credit scoring algorithms, or resume screening AI will always disadvantage certain groups of people (e.g., people of color). While there may be a heightened risk that AI will perpetuate broader, existing social biases, it is a mistake to conclude that this will always be the case. That is, if everyone assumes AI will lead to bad outcomes for certain groups, then AI that could actually benefit those groups may never be created.
Next, Prof. Lobel looks at some areas where AI has the potential to vastly improve lives; i.e., “AI for Good.” While I happen to be a huge proponent of the potential for AI to help solve some of the world’s most persistent development problems, I did find some of Prof. Lobel’s examples pretty underwhelming.
Environmental / Climate Applications. Given Prof. Lobel’s expansive definition of AI, its application to climate is already ubiquitous. All climate models use vast amounts of data to model how future changes in inputs (e.g., CO2) may affect temperature, rainfall, or some other environmental outcome. Prof. Lobel highlights The Ocean Cleanup, a non-profit that has developed AI that tracks plastic pollution and directs technologies to remove plastics from the ocean. (p. 29). Ironically, The Ocean Cleanup has come under fire for exaggerating its ability to remove plastics,4 while others have claimed that its removal methods cause more harm to the environment than the plastics themselves.5
Food Scarcity / Poverty Alleviation. Prof. Lobel refers to Stanford University research on the use of satellite imagery to estimate poverty levels across communities in Africa. (p. 68). I checked out the research cited6 and can only imagine that these Stanford researchers have never been to Africa and/or don’t know anything about development work. The idea that we need AI to tell us where poor people are is laughable. On the other hand, I agree that AI applied to agriculture (so-called “AgTech”) holds a lot of promise, though nearly all of the AgTech solutions I’ve seen proposed in Africa are too Western-centric to actually work.
Health & Medicine. I completely agree that AI has the potential to revolutionize the delivery of health care, particularly in low-resource countries. The biggest (and most obvious) challenges are (1) the lack of local training data and (2) the lack of a for-profit economic rationale for investing in such AI solutions. I believe that if the data existed (in pre-annotated sets), local technology entrepreneurs could develop workable solutions—i.e., it is the lack of data more than the lack of profit opportunity that is holding back MedTech in Africa.
Accessibility & Accommodation. I have to say I was pleasantly surprised to see disability inclusion featured in Prof. Lobel’s list of “AI for Good.” Disability inclusion is so often ignored that it was heartening to see it called out alongside the environment and medical care. And AI is already contributing to improving accessibility for Persons with Disabilities, including speech-to-text and text-to-speech tools, image recognition applications, and translation apps. Again, in the African context the crucial limiting factor is the lack of local data. For example, a computer vision program designed to narrate the environment only works when the images on which the model was trained are representative of the environment in which the person using the tool lives. A model trained on images from New York won’t perform very well in Nairobi.
Education. The ability of AI to deliver personalized education and training at scale is one of its most alluring potentials. A digital tutor with all the answers that is always available and never discouraging would be amazing. Again, the challenge in most of Africa is the lack of annotated data sets that could be used to build such AI applications.
Agency Compliance & Law Enforcement. While one may be tempted to be skeptical of AI in law enforcement in the Global South, I actually think this is an area of tremendous promise. As Prof. Lobel notes, “Automated decision-making is often fairer, more efficient, less expensive, and more consistent than human decision-making.” Importantly, we haven’t developed a way to bribe AI yet! AI could make the application of the law much more consistent and less susceptible to the kinds of bribery that cause people to lose faith in the judicial process.
Having laid out the faulty assumptions underlying the “AI is Bad” narrative and listed areas in which AI holds the promise of improving outcomes, Prof. Lobel enumerates several positive rights regarding the application of AI to which we should be entitled if we are going to maximize AI’s positive impact.
A Right to Automated Decision-Making
Human Out of the Loop. According to Prof. Lobel, many policy makers view human intervention as a necessary “safety valve” to prevent AI systems from running amok. She notes that the FTC’s report to Congress on Combatting Online Harms through Innovation,7 the European Union’s draft AI Act, and new provisions in California’s Privacy Rights Act all contain language that requires a human to be involved in any automated decision-making process and/or enables individuals to opt out of such automated processes altogether. (p. 44). But since we already know how bad people are at making decisions, Prof. Lobel argues that—when AI is proven superior in reducing bias and error—“there should be a prohibition on humans entering the loop when such entrance would diminish the benefits of automation and bring error and bias.” (p. 45).
I generally don’t disagree with the idea that human intervention will often make otherwise automated decision-making worse. I think, however, that so-called “explainable AI” is a prerequisite before people will even consider giving up the (admittedly fallacious) comfort of having a human in the loop. As I will explore in other posts, it is clear that even the computer scientists who build AI models don’t understand how the AI is making its decisions (if that is even the appropriate metaphor for what is happening in the computer code). It is hard for me to imagine people voluntarily giving up on their (admittedly illusory) belief that having a human involved will protect them from arbitrary automated decision-making if they don’t have at least a vague sense of how the AI is making decisions. I say all of this knowing a bit about the psychological literature demonstrating that people very often don’t know how or why they made a particular decision; the split-brain experiments show just how unexplainable our own decision-making is. But the fact that we don’t know how we make decisions isn’t going to make it any easier for us to relinquish decision-making control to machines.
Data is Desirable to Detect Discrimination. While public policy discussions of AI generally revolve around data privacy and shielding people from intrusive data gathering, Prof. Lobel argues that this default may create precisely the discriminatory AI we are trying to avoid. “When biases stem from partial, unrepresentative, and tainted data, the solution may be the collection of more, rather than less, data.” (p. 50).
Machines Are Major. This one is a bit in the legal weeds because it involves the so-called “major questions” doctrine.8 Under Supreme Court precedent, when a regulatory agency seeks to adopt rules that are “transformational to the economy” the Court must find that Congress specifically delegated that regulatory authority to the agency. Because mandating the use of automated decision-making via AI would, in many instances, transform the economy, Prof. Lobel is worried that federal agencies either won’t enact those rules or that such rules will be challenged in court. On this one, I have to disagree. If we are going to get people to willingly adopt and adhere to automated decision-making, then the people should feel as if they had some real input into what AI is being proposed. Letting the proverbial nameless, faceless, unaccountable bureaucrat promulgate rules about when AI will be implemented is a recipe for precisely the backlash that Prof. Lobel seeks to avoid and overcome.
A Right to Data Collection
Against Privacy’s Privilege. Reiterating her point about the importance of data collection, Prof. Lobel laments the current focus on data privacy as a potential hindrance to developing “AI for Good” models. Citing works such as Shoshana Zuboff’s Surveillance Capitalism, Prof. Lobel argues that we have so elevated “the right to be left alone” that we risk not collecting the kind of data that could ultimately help “vulnerable people and communities who have not had equal access to shaping our knowledge pools.” (p. 54). While I generally agree with the sentiment that exclusion from data collection could exacerbate the negative consequences of AI models built on such data, I find Prof. Lobel’s example of the lack of women and minorities in health and clinical trials orthogonal to her concern. Zuboff is worried that Google and Meta are sucking up everyone’s online behavior and using that information to flood us with targeted ads. Clinicians’ decisions about whom to include in a clinical trial are a completely different issue—and one that is pretty easy to solve!
Data Maximization. On the one hand, I completely agree with Prof. Lobel’s concern:
Data collection is not neutral. When certain groups are underrepresented in the data used to train an algorithmic model, then predictions about these groups will be inaccurate. By its very definition, a majority population has more data to be studied. A right to inclusive data collection is needed. (p. 58)
However, I find Prof. Lobel’s proposed “solution” particularly problematic:
Privacy is an individual right that stands against the collective’s goals. In a social democracy, we can envision subverting the script, from surveillance capitalism to guardianship liberalism, imagining how under the conditions of democratic trust, millions of surveillance cameras can become “a friendly eye in the sky, not Big Brother but a kindly and watchful uncle or aunt.” (p. 57)
Respectfully, while Prof. Lobel might be able to envision such a script-flipping scenario, I most certainly can’t. The number of benevolent tech companies or government bureaucrats that I would ever trust to forgo self-interest or political partisanship in service to some “guardianship liberalism” is precisely zero. And weirdly, Prof. Lobel comes back to the clinical trials issue as a justification, when the solution to that particular problem is simple and straightforward: mandate more representative participation in clinical trials (not the implementation of mass surveillance)! (p. 60)
I agree with Prof. Lobel that a lot more research needs to be done to understand our attitudes towards AI. Why do we trust autopilot on a plane but not in a Tesla? Why do we prefer a human physician to give us a diagnosis, even in situations where AI is proven to do a better job? A better understanding of human psychology could help produce better AI public policy and identify types of AI that could be easier to implement.
1. Prof. Lobel defines “AI” as “automated systems, techniques, and algorithms that perform functions—cognition, action, or emotion—traditionally performed by humans.” (p. 12)
2. The “techlash” is the “growing animus toward large technology companies (a.k.a. ‘Big Tech’) and to a more generalized opposition to modern technology itself, particularly innovations driven by information technologies.” Robert D. Atkinson et al., A Policymaker’s Guide to the “Techlash”—What It Is and Why It’s a Threat to Growth and Progress, ITIF (2019), https://www2.itif.org/2019-policymakers-guide-techlash.pdf [https://perma.cc/9KRW-UNWE].
3. Prof. Lobel repeatedly notes the importance of training data as both the potential source of AI’s problems and the potential solution to same; e.g., “Training data should be representative and inclusive.” (p. 16).
4. https://www.vox.com/down-to-earth/22949475/ocean-plastic-pollution-cleanup.
5. https://nautil.us/a-dubious-cure-for-ocean-plastics-444088/.
6. https://news.stanford.edu/2020/05/22/using-satellites-ai-help-fight-poverty-africa/.
7. https://www.ftc.gov/reports/combatting-online-harms-through-innovation.
8. See, e.g., West Virginia v. EPA, 142 S. Ct. 2587 (2022).