
CHAPTER V - AI Governance and Its Future

As it becomes clear that AI could “change every facet of life,” as Wendell Wallach put it, it becomes equally clear that focused, intelligent forms of governance are needed to address the disruptive economic, social and political impacts. This is a daunting challenge not only because the questions are so large and complex, but because the vehicles for political choice and thoughtful policymaking are so fragmented and inadequate.

“If we had to write down the five rules that we want some government agency to impose on, say, Internet of Things networks,” said Reed Hundt, the former FCC Chairman, “we wouldn’t even know what to say right now.” There is a void in government when it comes to AI governance, whether the subject is automation, robotics, medical diagnostics or consumer marketing. As AI technologies commoditize human judgment and intelligence across dozens of occupations, including white-collar professions, Hundt concluded that “there is no future for work as we know it today; only a future for a different kind of work. But there is basically no transition plan in the world of politics at all.”

The void in AI governance may have many explanations: the sheer speed of AI innovation and the uncertain pathways it will take; the disruptive and complicated ramifications that elected officials might rather avoid; the chronic difficulty in coordinating diverse laws and federal agencies; and general industry resistance to the very idea of government regulation.

“The political structure of the United States is not well-suited to the social and technological challenges posed by AI,” said Hundt dryly. Wendell Wallach added, “Industry doesn’t want to be regulated because it feels government doesn’t know how to regulate it. Yet it wants the public to see it as being responsible. Ultimately, there is a need to demand that industry take some responsibility here.” The government, for its part, has a keen interest in developing some sort of AI governance, if only to track the international implications of AI for national security, cyberwarfare and progress toward Sustainable Development Goals.

A primary task, many conference participants agreed, is to figure out the right institutional structures and terms of governance for AI. What sort of institutions are needed, and which could be effective? How can the benefits of AI be maximized while its social harms are mitigated or prevented?

The three basic challenges that any governance regime must meet, said Wallach, are to evaluate AI innovations according to ethical criteria; to determine whether those ethical standards remain relevant; and to use governance to “nudge AI toward a better path” that promotes the benefits and mitigates the risks. Wallach added that conversations about these topics will inescapably raise deep philosophical questions about individual autonomy, collective action, and our vision of humanity.

Participants identified a number of other questions that must be asked: What unit of governance is appropriate? Should policy be driven by regulations, best practices or consensus norms? Should the authority to intervene be based on existing laws or are new laws needed? Should the focus be on individual AI sectors or on certain types of machines and capabilities?

Kate Crawford of Microsoft Research believes that, given the way that AI generally works, governance should be focused on specific industry sectors — workplaces, healthcare, retail, etc. — which would allow policy to get very specific. She also believes that there are many laws on the books that could be appropriately extended to cover AI systems. Part of the challenge would be to harmonize the different statutory regimes. However, Joi Ito of the MIT Media Lab believes that “there’s something fundamentally different about AI that requires new laws.” He based this judgment on his belated recognition that new types of cyberlaw should have been enacted in the 1990s to take account of the special character of the World Wide Web.

Based on his experience in developing effective technology policies, Marc Rotenberg of EPIC urged that regulation be focused “on data, not devices.” This helps keep any regulations technology-neutral and thus more innovation-friendly. Rotenberg noted, too, that “the application of rights and responsibilities are necessarily asymmetrical” — that is, the parties who are most able to reduce risks should shoulder greater responsibilities, and those who are more vulnerable — usually, the unorganized public, consumers or workers — should have greater rights.

At a more refined level, participants raised questions about which instrumentalities might be best for governing AI and Big Data. Should they use “hard” statutory law and regulation, or “soft law” that attempts to promote certain best practices and norms in industry? Perhaps government structures could have a looser framework than strict regulation, much as the United Nations’ Sustainable Development Goals have sought to spur new types of business investment and practices.

Because the speed of change in AI is so great, there is always a question whether governance can act in a timely fashion, or even in proactive, anticipatory ways. For many participants, this problem suggests that “we will need AI to control AI.” Governance that “builds in” design features into AI itself is more likely to be timely, focused and effective.

Existing AI Governance Initiatives
It turns out that there are quite a few initiatives already underway to study the future of AI, its ethical and social implications, and potential governance approaches. But there is little coordination among these projects, or even much mutual awareness of the landscape of players.

In terms of the federal government, potential authority over AI is, as noted, diffuse or at least uncertain. However, two science policy bodies, the National Science and Technology Council within the White House and the American Association for the Advancement of Science, could play important roles. There is also the White House Office of American Innovation. In Congress, Senator Maria Cantwell of Washington proposed in July 2017 the creation of an AI committee within the US Department of Commerce, with a special focus on how automation will affect the workforce.

At the international level, the Organisation for Economic Co-operation and Development (OECD) in 2017 launched a major two-year project called “Going Digital” that will attempt to sort out the impacts of the coming technological revolution. The project, which spans ten OECD directorates, focuses on “jobs and skills, privacy, security, and how to ensure that technological changes benefit society as a whole, among others.” Concerned that some countries and sectors of society may be left behind, the Going Digital project is also addressing how to “build a coherent and comprehensive policy approach” to help assure “stronger and more inclusive growth.”

Douglas Frantz, the Deputy Secretary-General of the OECD, said that the 35 OECD member countries will rely on this project to help guide their own policies going forward. Marc Rotenberg of EPIC said he has high hopes that the OECD will indeed produce a new consensus framework for AI accountability, because the OECD did just that in the 1980s when it developed a “light-touch” policy framework for privacy that is still used today.

Meanwhile, there are several independent academic and industry-sponsored projects exploring various ways to manage AI technologies. These range in focus from industry best practices and technical standards, to state-based policy principles and standards, to open-ended research to make sense of the many AI developments now unfolding. Some of the more notable research efforts include:

  • The MIT Media Lab and Berkman Klein Center for Internet and Society at Harvard University in January 2017 embarked upon a new $27 million initiative (Ethics and Governance of Artificial Intelligence Fund) to “bridge the gap between the humanities, the social sciences, and computing by addressing the global challenges of artificial intelligence from a multi-disciplinary perspective.”
  • The Institute of Electrical and Electronics Engineers (IEEE) currently has eleven standards working groups dealing with various aspects of AI. They draw upon hundreds of engineering experts to develop consensus technical standards that are interoperable and practical. “These are the first suite of standards directly addressing AI ethical issues,” said John C. Havens. “In effect, they serve as a kind of ‘soft governance,’ even if not all of them say ‘AI and ethics’ explicitly,” he said.
  • The Partnership on AI was established by Google, Facebook, Microsoft, Apple, IBM and DeepMind, among others, “to study and formulate best practices on AI technologies” and promote public discussion and understanding of AI.
  • The International Telecommunication Union has a standing AI group. It also hosted an “AI for Good Global Summit” in Geneva in June 2017 to explore the issues.
  • The United Nations Economic and Social Council hosted an event in October 2017 on how AI could help “achieve economic growth and reduce inequalities.”
  • Through a three-year project, “Control and Responsible Innovation in the Development of Autonomous Machines,” law professor Gary E. Marchant and ethics scholar Wendell Wallach explored governance options for AI. They have proposed a Governance Coordination Committee (GCC) to try “to harmonize and integrate the various governance approaches that have been implemented or proposed.”

It is unclear how these various projects will evolve or what impact they will have, noted Tim Hwang, Director of the Ethics and Governance of Artificial Intelligence Fund and former Global Public Policy Lead for AI/ML at Google, because each participant “tends to lean very heavily in one direction or another,” based on their particular perspectives. What is significant is that “everybody is kind of putting their chip down on the table.”

AI Governance by Design?
A presentation by FTC Commissioner Terrell McSweeny explained the shifts in regulatory approaches at the FTC in recent decades, which could inform future regulatory approaches to AI. (Again, McSweeny was speaking for herself, and not necessarily for the FTC or any other commissioners.) Based on its general authority to regulate unfair and deceptive trade practices, the FTC has long relied on “rational choice theory” in its regulatory interventions. “The idea is that if individual consumers can acquire accurate information, they will make rational choices in the marketplace that will prod acceptable balances between individual and commercial interests,” said McSweeny. Of course, this framework assumes that consumers have accurate information in a transparent context.

Another approach that the FTC has relied upon, especially in the 1990s with respect to online privacy, is a “notice and choice” framework. The Commission has seen itself as a “norms entrepreneur,” prodding websites to post their privacy policies online so that consumers can click “I agree” or decline. The weakness of this approach has been acknowledged, however, spurring the FTC to develop a “context model” that tries to get companies to offer an appropriate set of choices and information about privacy settings, at a suitable time and in the context of use. Any data disclosure must be appropriate to reasonable consumer expectations regarding the usage context, so that, for example, geolocation data are not collected for, say, a flashlight app without a clear and timely opt-in choice being provided to the user.

About seven years ago, the FTC began advocating a “privacy by design” approach that attempts to prod industry players to build privacy and security features into their products at the outset. McSweeny suggested that governance by design might likewise be an effective approach to the regulation of AI technologies. This approach would attempt to create governance frameworks to assure accountability for AI performance and decision-making. In creating a usable framework, stakeholders and policymakers would need to ask such questions as:

  • Where within organizations does accountability lie for performance of, or decisions made by, AI?
  • Should humans and/or their organizations be held accountable for actions taken by their AI?
  • Are there decisions that should remain human?
  • What is the appropriate role for government regulation vs. self-regulation?
  • What are the key components for AI governance? What would reasonable governance by design include?
  • Is the concept of “compliance” sufficient?
  • What cultural norms, governance models and laws can we draw on to inform governance frameworks? What’s different and requires specific response?
  • Do we have the right knowledge-base to draw on for governance frameworks?
  • Are there sufficient incentives in the marketplace for adoption of governance frameworks, or is a stronger government response needed?
  • Even if we come up with the right frameworks, can they keep up with AI?

For McSweeny, the goal is to come up with the “Goldilocks zone” in which the intensity and scope of governance is “just right” – a “habitable zone of AI governance.”

Participants noted specific challenges in devising effective forms of governance for AI. There is, first of all, a cross-disciplinary challenge in drawing insight from a wide array of scientific and academic disciplines. There is an analogous coordination problem in orchestrating many different government officials and agencies, not just at the federal level, but at the state and local level. What should be the role for governors and mayors, for example, and what role for existing federal agencies?

At each level, there are likely to be gaps in technical expertise that are not easily filled, if only because so much AI research and development is at the cutting edge, leaving only a limited pool of expertise. In addition, as Alberto Ibargüen, President and CEO of the Knight Foundation noted, “The speed of innovation makes it mind-bogglingly difficult to deal with governance when we have institutions that are basically backward-looking. How to create new structures or systems for changes that happen tomorrow is another discussion entirely.”

Another problem might be called the “ontological mismatch” between law and AI: most existing laws are based on human intention, but the behaviors and impacts of machine-learning systems are likely to be unpredictable or unknowable as they evolve.

For all of these reasons, transparency in the process is key, said Marc Rotenberg. Following a series of privacy complaints that his group EPIC had brought against various search engines and websites, he concluded that “we cannot rely on companies’ representations about what they have done.” Rotenberg suggested that ultimately government must have the clear authority and willingness to “pull the plug” on AI projects that are incompatible with core societal values. This is not unthinkable, he said, citing Facebook’s own cancellation of an AI project in which machines reportedly had developed a language that its overseers did not understand. Kate Crawford reported that regulators at the AI Now symposium at MIT in 2017 specifically discussed the possible need for moratoria on “certain domains of algorithmic determinations until they can be shown to be far more fair than they are right now.” The unresolved issue in such cases is who shall shoulder the burden of proof to show harm — those questioning its fairness or the owners of the AI technology?

A Future Research Agenda for AI
Based on the remarkable potential of new AI systems, it is clear that much more research needs to be done — about the technology itself and its human interfaces, but also about the economic, social, civic and political implications. This constitutes a rather sizeable frontier.

France A. Córdova, Director of the National Science Foundation, noted that in a review of ten big ideas for future investments, machine learning was important in each area — so there really is a very broad agenda for AI in the future. Several participants pointed to quantum computing as a field likely to yield the next fundamental advances in AI. Another field of interest is “artificial general intelligence” (AGI), which focuses on how devices can think about what other devices are thinking.

An important focus for research is learning the limits of AI and how it blends with existing social institutions and dynamics. It is not always clear, for example, when the technology is reliable enough to replace humans. “Studies on deep neural nets in medical contexts raise serious concerns, as we’ve seen from Rich Caruana’s research,” said Kate Crawford. “We are not at a stage where we can know for sure why a model produced a particular result.” Similar concerns seem to apply to autonomous vehicles and other deep neural nets in open social contexts.

Crawford warned that we need to be mindful of the errors retroactively discovered in technologies once thought to be utterly reliable, such as MRI scans (which had a software error). “We have a lot more work to do on the socio-technical, legal and fairness frameworks” before AI can take over, she said. We do not really have any metrics for understanding the social impacts of AI or how biases come to be embedded in AI, socially, contextually and technically. More thought is needed about how to implement accountability mechanisms, and who should oversee them. Jean-François Gagné of Element AI generally agreed: “The level of maturity of the technology and our understanding of it are still very, very, very low.”

But Gagné added that these uncertainties give us the time and the opportunity to deal with fairness issues, which is a positive thing. If AI is developed as augmented intelligence rather than a replacement for people, that also gives humans the chance to intervene in AI processes. Such an approach may be the better part of wisdom in any case, because our understanding of user/AI interfaces remains rudimentary, if not naive.

Ruchir Puri, Chief Architect of IBM Watson, pointed out that many AI systems remain fairly rudimentary and fragile, “failing to recognize things that are obvious to human eyes. Just changing a couple of pixels in a photo of a school bus in the ImageNet database, for example, can make it register as a pizza. AI technology relies heavily on massive amounts of ‘labeled data,’” he said. “This makes it prone to cyberattacks.” Another major problem, Puri said, is the enormous increase in power needed by AI systems if they are to emulate the capabilities of the human brain (which runs on a meager 20 watts of electricity).
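
The fragility Puri describes is typically demonstrated with adversarial perturbations of a classifier’s input. The sketch below is illustrative only: it applies a fast-gradient-sign (FGSM-style) nudge across the whole image, a broader variant of the few-pixel change Puri mentions. The model choice, the random stand-in image, the class index and the perturbation size are assumptions made for the example, not details from the roundtable, and the code assumes a recent PyTorch/torchvision installation.

```python
# Illustrative FGSM-style perturbation of an image classifier (assumptions noted above).
import torch
import torch.nn.functional as F
from torchvision import models

# A standard ImageNet classifier stands in for the kinds of systems Puri describes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Random noise stands in for a preprocessed photo; a real demonstration would load
# and normalize an actual image. The class index below is purely hypothetical.
image = torch.rand(1, 3, 224, 224)
label = torch.tensor([779])

# Compute the gradient of the classification loss with respect to the input pixels.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("original prediction: ", model(image).argmax(dim=1).item())
    print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
```

Perturbations of this size can be imperceptible to a human viewer yet still flip a model’s top prediction, which is one reason Puri links the reliance on massive labeled datasets to vulnerability: both the inputs and the training data are attack surfaces.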

The challenges are not just technical, however, but ontological. “We are dealing with a poverty of understanding about human cognition and how effectively humans and machines will interface with each other,” said Wendell Wallach. “AI systems will need something like emotions and emotional intelligence, a theory of mind, consciousness and other supra-rational faculties such as empathy in order to make appropriate decisions in morally significant situations. Furthermore, we humans are dynamically embodied and embedded in the sociotechnical environments in which we dynamically interact with other humans and other agents. The complex adaptive behaviors represented by these qualities and capabilities aren’t fully captured by reason alone,” said Wallach. “The idea that we are going to be living in a constantly interfaced world comes out of a rather simplistic notion of human cognition.” He said that it would be useful for AI to develop a “theory of mind and emotions” that takes account of supra-rational faculties and morality, and recognizes the dynamics of agents being embedded in complex adaptive systems. “These are things that aren’t fully captured by reasoning,” said Wallach.

There was broad agreement that future AI research must be interdisciplinary, precisely because the ramifications of the technology reach into so many different corners of life. “AI cannot be solely a technical field,” said Crawford. “It has to be a ‘sociotechnical field.’” This suggests the need to train graduates to speak across disciplines; to convene more diverse sets of researchers; and to establish the right norms for legitimizing what sets of problems shall be studied, and how.

It seems that academics are likely to take some different approaches to research than corporate researchers, many participants agreed. But this may be less a matter of research priorities than “the power and capacity to instrumentalize research,” said Kate Crawford. “Universities commonly have less data, less infrastructure, and far less capacity compared to the private sector.” She told of a researcher who left a large company to work in academia, and realized that he could no longer ask the same questions. Not surprisingly, academics often lag behind some of the questions that corporate researchers are addressing.

But Ruchir Puri of IBM said that academics may not be as handicapped as they might think, because they tend to be very resourceful and come up with perspectives that do not emerge in corporate settings. While academics may have fewer resources, said Naveen Rao of Intel, “I would argue that academic research is much broader and more open. Academics have access to philosophers and social studies, and a cross-pollination of ideas, which you don’t have as much of in a corporate setting.”

The AI Now Institute at New York University, now in its third year, is a new effort to bridge many topics related to AI. It is focused on four major areas of study: bias and inclusion, specifically in machine learning; labor and automation, including work issues and personal autonomy; basic rights and liberties; and critical infrastructures such as power grids, hospitals and education. Another focal point for AI research is an annual conference known as FAT ML — Fairness, Accountability, and Transparency in Machine Learning — which is now entering its fifth year.

Participants noted that activists and journalists are a rich source of research and new ideas because they are often closer to problems on the ground than professional researchers. J. Nathan Matias recalled that the founders of the medical journal The Lancet were doctors concerned about food safety who went out to do their own firsthand research on the streets of London. It seems likely that many insights into the problems of AI will emerge from such practices, said Natalie Bruss, who focuses on Special Projects for tronc, Inc., the media company. “Many issues only manifest and get talked about when they are used in narrative storytelling.”
