
CHAPTER V - AI Bound or Unbound

In the conference’s brief survey of how artificial intelligence is affecting automobile transport, healthcare and journalism, some general themes emerged about how society should think about AI and its great potential and dangers.

Reid Hoffman, Co-founder and Executive Chairman of LinkedIn, opened the concluding session by citing a classic essay, “Artificial Intelligence Meets Natural Stupidity,” by Drew McDermott. McDermott laments: “Unfortunately, the necessity for speculation [within AI] has combined with the culture of the hacker in computer science to cripple our self-discipline.” This is perhaps an unavoidable problem within a young discipline such as AI, McDermott argues, because “artificial intelligence has always been on the border of respectability, and therefore on the border of crack-pottery.”

Hoffman said that we can approach AI through the lens of utopia or dystopia. AI could help us treat diseases, improve longevity and address climate change, for example, or it could usher in a dystopian future that forecloses life and healthy possibilities. AI can point us toward utopian work scenarios as embodied in, say, Star Trek, where people can overcome basic needs and pursue their passions, or toward a neo-feudalism that monopolizes AI to manage a large class of serfs.

Within the next ten years, we are going to see inflection points that could take us in either utopian or dystopian directions, said Hoffman. Because of the high uncertainties about the implications of AI, there is a tendency to move toward greater certainties, if only to block any dystopian possibilities. But Hoffman believes there is a sounder way to move forward, to “get to the good solutions faster as a way to avoid dystopia.”

“This leads to great questions around what are good outcomes and who gets to make the decisions balancing risks and hoped-for outcomes,” said Hoffman. While he agrees that there ought to be forums to explore these questions, he is cautious about government regulation “because that increases the possibility of dystopian outcomes.” The fundamental design challenge, he said, is “figuring out how we actually achieve the good outcomes…. How can we have utopian solutions nudge aside the dystopian ones? How to make the benefits of AI more inclusive and minimize the disruptive impacts on jobs? How to deal with the cyber-security issues? Can we have a broader range of training for people, and help teach empathy in better ways?”

The answers to these questions will hinge upon who controls AI, he said, and this could be a problem because the control is not likely to be perfectly democratic. Nonetheless, Hoffman said, “I think AI can be very positive for humanity.” Indeed, he thinks it is not just about developing new tools and services, but also about “how we evolve positively as a species.”

Astro Teller agreed with Hoffman’s “getting there faster” scenario because of the dangers of slowing down development of AI. The bad actors who exploit AI technologies for anti-social purposes are not going to slow down, he noted. “Surely we would prefer for the best and most responsible people with the most thoughtful versions of the future, to be the ones that get there first.”

But Stuart Russell, the UC Berkeley computer scientist, cited a famous 1960 article by mathematician and philosopher Norbert Wiener that pointed out that the problem is not just “bad guys doing bad AI, but good guys accidentally doing bad” — a scenario exemplified by the “sorcerer’s apprentice” tale and King Midas. Wiener writes: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively…we had better be quite sure that the purpose put into the machine is the purpose which we really desire.”

The metaphor of “good AI crowding out bad AI” is misplaced, said Russell, because the field should not define itself as maximizing one set of objectives and then castigating any constraints on those objectives. He noted that much of the work of nuclear fusion researchers is in fact focused on containment; the field incorporated within its own research norms a deep ethical concern for preventing harmful social outcomes.

Marc Rotenberg of the Electronic Privacy Information Center proposed a third vision for AI technology that is neither utopian nor dystopian, but rather a “dystopia that appears to be a utopia.” He cited the 1997 science-fiction film Gattaca, in which people’s lives are predetermined by their genetic code — a utopian enactment of technological perfection. “What do you do when technology provides so much opportunity that it raises very deep questions about our roles as individuals and humans in a very technologically advanced world?” Or as the tagline for the movie puts it, “There is no gene for the human spirit.”

What Vision for Responsible Control and Social Trust?
Expanding upon the earlier discussion about regulation of AI-enabled cars, the remainder of the conference focused on key social and governance issues that AI technologies raise. Since the world is still at a rather early stage in the development of commercial AI systems, this topic remains something of a frontier issue. However, there was broad consensus that society must consider what control structures would be appropriate for managing the development of AI systems. If there is going to be social trust and acceptance of AI, there must be systems for open debate and effective control and accountability. Participants were also concerned about ensuring democratic access to and use of AI technologies, and fair distribution of their benefits.

“Who controls AI” is a central question because it will largely determine the types of public oversight that are possible, as well as the character of the labor market. The question is how such oversight and accountability might be established.

Astro Teller suggested that the constant churn of competition that has characterized most major industries — autos, computer chips, personal computers, e-commerce — will provide an important check on AI firms. “There has been plenty of competition in these industries, and who stays on top from one decade to the next is not clear,” said Teller. He noted that it was also likely that a relatively small group of companies would become the technological leaders because of their more intensive investments and expertise.

Antonio Gracias, CEO of Valor Equity Partners, suggested that competitive markets may be the wrong framework for thinking about AI accountability. A more apt analogy might be to the nuclear power and weapons industries than to consumer-facing industries, he said. Gracias thinks we are living in a time similar to when the atom was first split: we are starting to realize that the technology has enormous military and geopolitical implications. “The real issue here is state and non-state actors,” said Gracias, because AI could enable state or non-state actors to interfere in ways that are “basically undetectable.” This is why “we should worry about power structures that control AI,” he said.

Joi Ito of MIT Media Lab said that he “slightly disagrees” in the sense that “dozens of breakthroughs in AI could happen,” especially ones in which “computational capabilities could be made more accessible to a wider range of people.” As a historical comparison, Ito cited the release of Lotus 1-2-3, the spreadsheet software, which enabled small businesses and ordinary people to perform accounting tasks that had once been available only from large accounting firms. “What if some sophisticated user interface were to be developed that could democratize access to AI?” asked Ito. At the same time, Ito conceded that such interfaces may not materialize (look at Linux) and agreed that “we should worry about what happens to the power structure that is built around this.”

Marc Rotenberg is convinced that “algorithmic transparency” is needed to ensure the accountability of AI systems. This is important, in part, to ensure that we can determine who is legally responsible for an AI system’s performance. He invoked a recent ruling by the Wisconsin Supreme Court involving a proprietary algorithm used in criminal sentencing proceedings to predict an individual’s likelihood of recidivism. The court ruled that while the algorithm could be considered in making a recommended sentence, “there has to be a human agency in the loop,” as Rotenberg paraphrased the ruling.

AI and Livelihoods: An Inescapable Challenge
One of the biggest issues surrounding AI technologies is how they will affect people’s livelihoods and jobs. Will the various innovations enabled by AI be widely and equitably shared? That is likely to affect public acceptance of AI and, perhaps indirectly, how the technologies will be allowed to develop. “It’s not clear that the benefits of AI technologies, left to their own devices, will be evenly distributed,” said James Manyika, Director of the McKinsey Global Institute. There are estimates that between 5 and 9 percent of full-time jobs may be automated out of existence over the next ten years, he said. “These rates of automation will be in the 15-20 percent range for middle skill jobs,” he added. “We also find that 30 percent of activities in 60 percent of jobs will be automated, which means many more jobs will be changed rather than automated.”i

Even partial automation — i.e., technology augmenting human skills — tends to have negative impacts, said Manyika. The effects of new technologies on employment tend to follow two tracks: well-educated, privileged people are enabled to do amazing new types of work, while the jobs of many other workers are deskilled, leaving them with fewer responsibilities and lower wages. “Easier tasks can be paid less and probably need less certification, so employers get a bigger supply pool for such jobs,”ii said Manyika. Jobs are often structured so that “smart systems” can operate in the background, reducing the skills needed by on-the-ground technicians. So even when you have partial automation or augmentation of human work, “it often has a depressive impact on wages,” he explained.

“So we come back again to the wage and income question,” said Manyika. “That’s a much more complicated conversation that we are going to have to have at some point.” In the meantime, he said, these concerns are fueling a lot of social unrest and political populism. This is primarily because close to two-thirds of households in the advanced economies have seen stagnant or falling incomes over the last decade or so. He noted, “While the recession has a lot to do with that, technology, along with other factors, is also to blame.”iii

The resentment is particularly acute, said Mustafa Suleyman, because people are realizing “who gets to direct which tech applications are built, how they will be deployed, and who’s on the receiving end of the decisions being made, perhaps by three or four people in this room and their teams. There’s not very much oversight, accountability or transparency.”

This argues for paying close attention to this problem now, argued Wendell Wallach, the Yale bioethicist, because “there is a principle called the Collingridge dilemma, which states that ‘by the time undesirable consequences [of technology] are discovered. . . the technology is often so much a part of the whole economic and social fabric that its control is extremely difficult.’ The Collingridge dilemma has stymied technology policy for decades,” said Wallach. “Those like me, who advocate for more anticipatory governance, reject the dilemma’s simple binary logic. I argue that there is an ‘inflection point,’ a window of opportunity, in which we can act once the problem comes into view but before the technology is fully entrenched. That window can be short or long. This is all fully discussed in my book, A Dangerous Master.”

Wallach added that modulating the pace of developing a technology — for example, by speeding up investment or slowing down with regulation — is a separate matter. While Wallach advocates modulating the pace of technology development, it is not necessarily an outgrowth of the Collingridge dilemma. However, he said, “When we do slow down the rate of development we can stretch out the inflection point and perhaps have a better opportunity to act.”

“Very few of us would say we should stop greater efficiencies just because they are taking jobs away,” he said. “So it becomes a political problem: How do you distribute goods and resources if jobs are no longer providing enough income to people?” There is great resistance to guaranteeing basic incomes or other forms of meeting people’s needs, he said, although this topic has gained greater currency over the past year.

Concerns about a structural loss of good-paying jobs are usually rebuffed by reassurances that new technologies will, over time, create enough new jobs to offset short-term losses. That, after all, has been the long historical record. But judging from the disruptive impact of the Internet, which has indeed created many new jobs, the new jobs being created “aren’t paying people enough income to support themselves.”

This topic was extensively discussed at a 2013 Aspen Institute conference, “Power-Curve Society: The Future of Innovation, Opportunity and Social Equity in the Emerging Networked Economy,” said Charlie Firestone of the Aspen Institute Communications and Society Program. The resulting report probes how inequality seems to be structurally related to today’s networked economy:

Wealth and income distribution no longer resemble a familiar “bell curve” in which the bulk of the wealth accrues to a large middle class. Instead, the networked economy seems to be producing a “power-curve” distribution, sometimes known as a “winner-take-all” economy. A relative few players tend to excel and reap disproportionate benefits while the great mass of the population scrambles for lower-paid, lower-skilled jobs, if they can be found at all. Economic and social insecurity is widespread.

Reid Hoffman of LinkedIn said, “I’m generally of the belief that the problem will sort itself out in the long-term. But the problem is that the long term is long term — and it allows for a lot of pain and suffering in the meantime, and potentially very volatile circumstances.” He added that “if you think we’re experiencing exponential problems, then we need to think in terms of exponential solutions.”

What’s missing from discussions of this issue, said Mustafa Suleyman of DeepMind, is a vision. It is hard to devise plans for managing transitional disruptions if we do not have a vision for how things could work instead. For Suleyman, this means going beyond debates about regulation to broader questions of governance. He also urged that AI firms “create porous and stable trust-based boundaries within our own organizations through the use of independent oversight boards,” as previously mentioned. “When we were acquired, we made it a condition of our acquisition that we set up an ethics and safety board with independent representation to steward our technology in the public interest,” he said. “It is just an early experiment in governance, but it demonstrates our intent and takes the first steps.”

Cynthia Breazeal of MIT Media Lab added that who innovates matters, too. Despite its efforts to broaden the race, gender, ethnicity and socio-economic backgrounds of its leaders and employees, the tech industry still faces major challenges in this respect.

ENDNOTES
i Michael Chui, James Manyika, and Mehdi Miremadi, “Where Machines Could Replace Humans — and Where They Can’t (Yet),” McKinsey Quarterly, July 2016.
ii Michael Chui, James Manyika, and Mehdi Miremadi, “Four Fundamentals of Workplace Automation,” McKinsey Quarterly, November 2015.
iii McKinsey Global Institute, “Poorer Than Their Parents? Flat or Falling Incomes in Advanced Economies,” McKinsey & Company, July 2016.
