
CHAPTER 4 - Toward A Philosophy of AI Design and Governance

In an attempt to move beyond critique to problem-solving, this conference hosted a larger conversation about two issues that may be critical to the future of AI: 1) developing more astute, coherent philosophical approaches for assessing the design and the social and political impacts of AI; and 2) devising new evaluation metrics and governance systems to ensure that AI systems will be accountable and deliver positive outcomes.

Every major shift in economic history has been accompanied by step-changes in the philosophical and economic frameworks for understanding the world, one participant pointed out. Therefore, the questions we need to face are not simply a matter of “How can AI solve a given problem?” but rather, “How can we develop a richer, larger and more appropriate understanding of the new situations that AI technologies engender?” Addressing this question forces us to consider what sorts of new social and political institutions, working metrics and policy architectures may be needed. AI is not just a technical domain, after all, but a set of tools with the ambition and capacity to design new worlds. Can we therefore develop a philosophy, or at least better working understandings, commensurate with the powers of the technology? This necessarily requires us to revisit first principles and fundamental ethical, and perhaps religious, notions of what a human being is and should be.

The “Executive Compass” as a Tool for Building a Good Society
As a first step in grappling with the clash of values that invariably arises when making decisions, the conference considered an ethical tool known as the Executive Compass, which was developed by business school professor James O’Toole in the 1990s for the Aspen Executive Seminar, a values-based leadership project started by philosopher Mortimer Adler. The Compass is an attempt to distill some of the complexities of modern philosophy into a simpler, practical tool for thinking about the tensions among values and how to resolve them.i The Compass consists of four primary poles, with two sets of values in fundamental tension with each other. One axis counterpoises the value of liberty with that of equality, while the other axis sees the value of community in tension with efficiency. Ultimately, O’Toole sees each of the values potentially in tension with each of the others.

O’Toole regards these four polar forces as “tugging at all modern polities.” Indeed, he writes, “The tensions among those values have provided the drama to political life in the West since the time of Hobbes. In particular, the choice between liberty and equality is said to be the most fundamental, and inescapable, of all the trade-offs facing society.” To illustrate his point, O’Toole invokes the conflicting values of Alexander Hamilton in favoring “economic growth and technological advance,” as opposed to the priorities of Thomas Jefferson whose primary concern was “communitarian values such as the quality of life.”

A key goal of the Executive Compass is to make certain tensions among values more explicit, thereby triggering deeper discussion about what values are really important in a given situation. It is also meant to help identify and guide acceptable tradeoffs that an organization or society might make, so that a workable blend of value priorities can be achieved. At the very least, however, the Compass seeks to make a productive discussion possible among people who disagree, helping them understand one another’s points of view.

In terms of its application to AI and its future, the Compass could help situate various value commitments within a larger framework and identify fundamental tensions that must be addressed. For example, the manifesto of Theodore Kaczynski, the Unabomber—“Industrial Society and Its Future,” a reading for the conference—makes quite clear that Kaczynski prioritized liberty over everything else—to the extreme. The concerns for equality explored in the 2013 Aspen report on the “power-curve society” are juxtaposed against business interests in efficiency.

While the Compass may be useful as a point of entry to discussion, participants found it a limited tool. Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC), said that “real world interactions are more complicated than economic ‘indifference curves,’” and that tensions in the political economy are dynamic, not static. Follow-on actions in complex systems are likely to be complicated and unpredictable. Rotenberg also questioned whether the two axes of values necessarily apply to a given situation. In some societies such as China, AI appears to be creating a world in which there is neither liberty nor equality; that axis seems irrelevant.

Meredith Whittaker stressed that any consideration of values must address distributional questions: “Whose liberty? Whose efficiency? These contextual, historical frameworks matter,” she said. Others agreed that it is important to bring other stakeholders into any value analysis. This discussion underscores the importance of trust as another core, independent value beyond the primary four. One party’s liberty may enable it to amass great power and wealth, causing social distrust that unfettered markets are not likely to address. Even though Facebook has been plagued by many high-profile scandals involving user trust, for example, its stock prices have not suffered, noted one participant.

While the Compass aspires to help government or business leadership come to better value-based decisions, some participants questioned the apparent premise that centralized sources of power can be effective nowadays. “To me, the Compass assumes that government both understands and has power over technology,” said Terah Lyons, Executive Director of the Partnership on AI. “But in our version of democracy, government isn’t necessarily the arbiter of these conditions any longer.” One statistic from 2016 makes this alarmingly clear, she said: U.S. government investment in AI in unclassified settings currently amounts to one-eighth of the amount invested by the top five companies operating in the AI ecosystem. “Public policy is not really in the driver’s seat,” said Lyons—a fact that is underscored by the relatively slow pace of government, law and policymaking. Michael Chui, Partner at the McKinsey Global Institute, agreed: the more meaningful driver of AI’s future is the political economy, involving the companies that are developing AI, not just government.

The Relevance of Philosophy for AI
One takeaway from this discussion was the need for more serious philosophical reflection and debate about the design and deployment of AI technologies. “What I find really interesting,” said Anita LaFrance Allen of the University of Pennsylvania, “is how few big visions are being created by intellectuals for the kind of world we’d like to see exist. Philosophers have been pretty silent about AI and the digital world. Because of that, there is something missing in our discussions. To me, that’s a sad loss. We’ve got Mill in the utilitarian tradition, standing for voice, and Montesquieu in the Aristotelian tradition, and Kant standing for the Enlightenment. But these traditions don’t take us far enough. We’ve got to go deeper, and force ourselves and our colleagues in relevant disciplines—political science, philosophy, law—to help us mine more deeply. We shouldn’t abandon the canonical ideas, but take them forward.”

There is a singular lack of “Big Think” about the social and human implications of AI, said Vilas Dhar, a Trustee for the Patrick J. McGovern Foundation:

Humanity has lost sight of a vision of what’s possible. The people in the forefront of AI are not just defining the iterative technological process, they are in control of the massive social changes that come with it. There is almost no time spent on the fundamental questions of how you design the system. How do you break it apart and build it back up? What are the philosophical and political thoughts behind it? Our lawmakers are no longer equipped to ask these questions because of the increasing technical and moral complexity of these topics. So, the mantle falls to people in rooms like this one, and let’s hope they don’t fall victim to the master-of-the-universe syndrome.

Many participants agreed with this general sentiment. Neil Jacobstein of Singularity University said the historical divide between the sciences and humanities is causing friction and limiting our imaginations, adding that “humanity is estranged from its authentic possibilities.” He said interdisciplinary thinking is one reason why DARPA, the Defense Advanced Research Projects Agency, is funding “world models” and other programs that seek to integrate causal modeling with deep learning algorithms.ii An interdisciplinary approach to AI development could help engineers deal with technical and design challenges, and it could also illuminate social and political implications and enable us to deal with them proactively.

At the design level, for instance, machine learning needs to be able to “read context,” said John Seely Brown. “I sometimes wonder if we realize that we’re going to have to invent a new kind of literacy. For the past hundred years or so, we’ve focused on content, but as we move into a world in which context matters—culture, history, economics, politics—we don’t have very good ways to honor context. Yet so much sense of agency has to do with being able to read the context.”

Seen from this perspective, designing AI involves some profound “epistemic challenges,” said Brown. We are currently locked into notions of “optimization, which usually implies a form of reductionism. But most problems have externalities; they can’t be separated from their context. So we need new tools to unpack some of those externalities, each of which is entangled with others in very powerful ways. In some sense, our real challenge is how to disentangle a profoundly entangled system.” Brown pointed out that public policy tends to put problems “in a box,” as if core issues can be dealt with in isolation and through optimization strategies. But problems have a tendency to leak out of those boxes so fast, said Brown. “We kid ourselves into thinking we’ve solved a problem, when in fact we are making a bigger mess of the world.”

Meredith Whittaker ascribed this problem to teams of narrowly focused tech experts making decisions, rather than cross-disciplinary teams. “We have tech people making quantified, reductionist determinations for domains without drawing upon the expertise of people in those very fields,” she said. One example is Epic electronic medical records, which may capture not the full medical picture of a patient that a nurse identifies, but rather the crude taxonomy of billing codes. A similar reductionist logic can be seen in IBM’s Watson for Oncology supercomputer, said Whittaker. The AI system was aggressively marketed as a superior tool for cancer diagnosis and treatment, but its actual capabilities were quite limited, according to the medical publication STAT. These types of stories point to the need for stronger interdisciplinary work on AI, said Whittaker, and for greater sensitivity to context and the philosophical assumptions behind AI design.

Neil Jacobstein noted that this same sort of thinking—solving specific and narrow problems without regard for context—prevailed among agricultural/biotech companies in the 1960s. They did not really think much about the second- and third-order consequences of pesticides on ecosystems. He said that a useful corrective to this kind of thinking can be found in a seminal 1971 essay by systems scientist Jay Forrester on the counterintuitive behaviors of social systems. Jacobstein suggested that “we could better understand context and second- and third-order consequences if we combined pattern recognition, modeling and simulation.”
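As a minimal illustration of the kind of simulation Jacobstein describes (a toy model offered here for concreteness, not one presented at the conference), a simple predator-prey simulation shows how such second-order effects can emerge: an indiscriminate pesticide that kills pests directly can nonetheless raise the average pest population over time, because it also suppresses the predators that kept those pests in check.

# Toy Lotka-Volterra simulation: spraying a pesticide that removes both pests
# and their predators ends up raising the long-run average pest population, a
# counterintuitive second-order consequence visible only at the system level.

def average_pests(pesticide_rate=0.0, steps=60_000, dt=0.001):
    pests, predators = 8.0, 2.0   # arbitrary illustrative starting populations
    total = 0.0
    for _ in range(steps):
        # Classic predator-prey dynamics plus an indiscriminate pesticide term
        # that removes pests and predators at the same per-capita rate.
        d_pests = (1.1 * pests - 0.4 * pests * predators
                   - pesticide_rate * pests) * dt
        d_predators = (0.1 * pests * predators - 0.4 * predators
                       - pesticide_rate * predators) * dt
        pests += d_pests
        predators += d_predators
        total += pests * dt
    return total / (steps * dt)   # time-averaged pest population

print(f"average pests, no spraying:   {average_pests(0.0):.1f}")   # roughly 4
print(f"average pests, with spraying: {average_pests(0.3):.1f}")   # roughly 7

The particular numbers matter less than the method: even a few lines of simulation can surface system-level consequences that a narrow, first-order analysis would miss.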

Structural Imperatives Driving AI Development
The discussion about the importance of context spurred a broad conversation about structural and institutional imperatives driving AI design and deployment. Some observers worry that AI’s enormous efficiencies, capacity for continuous learning, and reliance on centralized repositories of data make it a perfect tool for autocrats and authoritarians. This theme was previewed in a reading, “Why Technology Favors Tyranny,” in The Atlantic, in which author Yuval Noah Harari explains how AI has the potential to empower dictatorships:

We tend to think about the conflict between democracy and dictatorship as a conflict between two different ethical systems, but it is actually a conflict between two different data-processing systems. Democracy distributes the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given 20th century technology, it was inefficient to concentrate too much information and power in one place…. However, artificial intelligence may soon swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. In fact, it might make centralized systems far more efficient than diffuse systems, because machine learning works better when the machine has more information to analyze.

Tim Hwang, Director of the Ethics and Governance of AI Initiative, a joint project of the MIT Media Lab and the Harvard Berkman Klein Center, agreed with this general analysis. One could easily make a “strong techno-determinist argument” that AI favors autocrats, said Hwang, because it takes a lot of money and institutional power to build large compute centers and acquire the massive quantities of data needed. Only a few large tech companies such as Apple, Google and Facebook—and the Chinese government—control sufficiently large quantities of personal data. Centralized players, whether autocrats or big companies, have strong motivations to build AI systems, for both surveillance and marketing purposes, and to exploit the psycho-social dynamics of online information. In recent years, there have been powerful efforts to manipulate voters through phony information sent to precise demographic groups. The Russian Internet Research Agency (IRA), for example, targeted African Americans through online platforms in an effort to influence the 2016 U.S. elections. As Harari notes, we may soon have to deal with “hordes of bots that know how to press our emotional buttons better than our mother does, and that use this uncanny ability, at the behest of a human elite, to try to sell us something—be it a car, a politician, or an entire ideology.”

Hwang thinks that the control of chokepoints in AI infrastructure will be a key factor in whether centralized or decentralized control will prevail. To illustrate his point, he cited a historical comparison, commercial grain shipping in 19th-century Chicago, as described in the book Nature’s Metropolis.iii A major economic and political transition occurred when the shipping of grain shifted from boats to railroads. “It turns out that once you shift to a railroad-based form of transportation, a relatively small number of people have a large amount of control over these markets,” said Hwang. “I sometimes wonder, What is the ‘railroad for AI?’ And at what point do you implement certain types of infrastructure to assure that people have access to the technology?”

A related question, said Hwang, is, “Could you pull off the development of strong machine learning systems with a lot less data? If you could do that, suddenly the barriers to entry change quite a bit, which could shape the potential for competition.” Hwang believes this is the debate we need: “Can we actually achieve this goal in practice? The answer could influence whether or not there will be technology lock-in in the future.” This issue is important, said Vilas Dhar of the Patrick J. McGovern Foundation, because “AI may operate outside the boundaries of self-correcting behavior. The first-mover advantage may allow the aggregation of serious financial and technological resources, creating a threshold that prevents other people from being able to access the technology.”

A number of participants expressed concern about the AI-driven concentrations of power to persuade and control. David Ferrucci, CEO of Elemental Cognition, noted, “It seems unfair when pockets of power have greater access to a given channel of persuasion than others, especially if that channel, powered by AI, is far more efficient in directing messages and persuading people than the conventional channels.” Marc Rotenberg of EPIC put it more bluntly: “These systems do tend toward centralization and monopoly control.”

What is so interesting, he added, is that computing in the Sixties and Seventies was largely centralized in large companies—and then the PC Revolution in the Eighties decentralized that power by pushing computing, applications and data out to individual consumers and businesses. Now we could be undergoing a “counterrevolution” that is re-aggregating computing power, he speculated. In any case, Rotenberg raised a tantalizing question: “Is there a current model under which AI authority could be genuinely distributed in the way that the early personal computer Revolution was? Is an alternative architecture viable?”

It was suggested that perhaps offerings such as Amazon Web Services and the open source TensorFlow framework represent a model for democratizing access to AI technology, despite being owned by giant companies. But Meredith Whittaker rejected that idea, pointing out that users of Amazon Web Services do not own the AI software or devices, nor are the systems easy to use.

However, for some business cases, the decision to rent rather than own resources on a cloud service may help offset costs for computing hardware, machine learning models and data. This model supports the idea of a “federated architecture,” which allows for interoperability via a set of shared standards without a single central authority. Other cloud service providers in the same market, such as Microsoft Azure, Google Cloud Platform, IBM Cloud and Salesforce Cloud, may even offer data sources for free or at low cost.

Amazon Web Services or federated technology is not necessarily helpful, said Patrick McGovern, Trustee of the Patrick J. McGovern Foundation, because of the sheer volume of data that you need—and control of data is only going to continue to get more consolidated. One outstanding question is whether the companies that control this data will be good stewards of it.

For some participants, the future of AI development and decision-making will hinge upon whether we alter current structures of capitalism or not. Meredith Whittaker pointed out, “AI is controlled by a few large companies with the resources to build it. The technology is under the auspices of capitalist decision-making. If we are interested in applying AI to ends that would not be profitable, this is a political and deeply structural question. And so we would have to ask: What would be the incentives and mechanisms to drive that approach, and how would we do that in an ecology governed by the shareholder-value model?”

While this poses a formidable challenge, Whittaker thinks that now is a ripe moment to make a broader re-evaluation. She suggests that the rise of driverless cars should provoke this sort of questioning: “Are we going to take the individual car ownership society that was essentially architected by Henry Ford and city planners, and just automate it? I love the idea of using this moment to think about how we might actually change structures, and not simply automate or make more efficient the structures we already have.”

This is a particularly vexing challenge, however, because AI investments are driving AI development and thus the scenarios for its use, said Tim Hwang: “The problems that AI will solve are going to be defined by what the AI toolkit is good at doing. And this reflects the particular types of investments being made.” If there is relatively little interest in trying to make AI systems take account of context and causal inference (to harken back to the earlier discussion), that is because there are “much more profitable ways of developing the field,” said Hwang. “The actual scope of AI technologies is therefore quite narrow, in ways that I think are counterproductive.” Whittaker agreed with this assessment, adding that profit-making generally favors goals that are short-term and easier to measure; qualitative goals that pay off over the longer term and benefit broader constituencies are less likely to be attractive to businesses.

“One could argue that AI, on balance, has not been so great for society so far because a lot of it is just about ad placement and manipulating eyeballs,” said Gary Marcus, the New York University professor. Marcus said that Google and Facebook are not likely to invest in causal inference modeling, for example, unless there were a short-term commercial advantage in doing so. And therefore, “It may just be that we won’t get to paradise unless there is some other means for funding research for long-term priorities.”


i James O’Toole, The Executive’s Compass (New York, NY: Oxford University Press, 1993), Chapter 4.
ii DARPA describes its World Modelers program as seeking “to develop technology that integrates qualitative causal analyses with quantitative models and relevant data to provide a comprehensive understanding of complicated, dynamic national security questions. The goal is to develop approaches that can accommodate and integrate dozens of contributing models connected by thousands of pathways—orders of magnitude beyond what is possible today.”
iii William Cronon, Nature’s Metropolis: Chicago and the Great West (New York, NY: W.W. Norton & Co., 1992).
 
 