CHAPTER III - Co-Evolution of AI and Humanity

Any consideration of AI technologies can benefit from reflecting on historical critiques of technology. Marc Rotenberg, President and Executive Director of the Electronic Privacy Information Center (EPIC), asked the group to consider the ideas of four big thinkers. He started with Garry Kasparov, the chess champion and activist who was a central figure in high-profile human/machine chess competitions from 1996 to 2006. Kasparov had agreed to play the IBM Deep Blue computer program in a six-game match in 1996, and lost the first game but won the match. The next year, he lost to Deep Blue. This “signaled a turning point” in human/machine chess, Rotenberg said, “because the world chess champion just stopped playing machines.”

Kasparov was very upset about the triumph of Deep Blue, but twenty years later, he wrote Deep Thinking, a more measured, optimistic book about the experience. In that book, Kasparov wrote that while the disruptions caused by increasingly “intelligent” machines may be upsetting to humans (like himself) in the short term, “no matter how many people are worried about jobs, or the social structure, or killer machines, we can never go back. It’s against human progress and against human nature. Once tasks can be done better (cheaper, faster, safer) by machines, humans will only ever do them again for recreation or during power outages.”i

Rotenberg observed that Kasparov is basically optimistic about the future of AI, and personally supports the idea of augmented intelligence as the best path forward. Interestingly, in a later online chess tournament that mixed grandmasters, computer-assisted players known as “centaurs,” and chess-playing computers, all four teams left in the quarter-finals were centaurs. The winner was a lower-ranked chess player, a data scientist who understood how to work with computers.

Another landmark figure in the attempt to understand AI was the MIT computer scientist Joseph Weizenbaum, author of the 1976 book Computer Power and Human Reason. “Weizenbaum was trying to get people to think about what it is that separates man from machine,” said Rotenberg. One key human attribute that computers simply don’t have, said Weizenbaum, is autonomy — the capacity for passion, wisdom and independent desire, including the desire to preserve autonomy from intrusions by machines.

The French philosopher Jacques Ellul was less optimistic than Kasparov or Weizenbaum. Ellul published a hugely influential book in 1954 (published in English in 1964) called The Technological Society, which “described with tremendous insight what happens as we give over more human activities to technique, a term he used to refer to any complex of standardized means for attaining a predetermined result,” said Rotenberg. For Ellul, technique is “the totality of methods, rationally arrived at and having absolute efficiency (at a given stage of development) in every field of human activity. Modern technology has become a total phenomenon for civilization, the defining force of a new social order in which efficiency is no longer an option but a necessity imposed on all human activity.”

Ellul foresaw the pervasive use of technique to optimize the outcomes of all social functions, including elections, and, in time, to control people’s destiny. This is precisely what computerized systems increasingly do — gather and process vast amounts of information about individuals in order to “make decisions” that affect their lives. To counter this risk, Ellul, influenced by many European thinkers, argued for a new regime of accountability, transparency and fairness, principles that have in fact become foundational to modern privacy law. But given the power of technique, Ellul was ultimately “not very optimistic about our prospects in finding solutions,” said Rotenberg.

A fourth major thinker with compelling insights is the German sociologist Max Weber, who is famous for his studies of modern bureaucracy, the rise of capitalism, and how social relations have changed as a result. Bureaucracy and capitalism elevated the “rational-legal” construct as the essence of modern relationships, especially as played out in organizations. In contrast to premodern societies that were organized around traditional authority or charismatic leaders, modern societies seek the rationalization and routinization of human activity, said Weber.

This idea reaches its logical culmination with such programs as FAST [Future Attribute Screening Technology], introduced by the US Department of Homeland Security in 2011. By compiling sufficient personal attributes about an individual, the FAST system purported to predict the likelihood that someone would commit a crime. In this, FAST echoes the plotline of the dystopian sci-fi film Minority Report, a Tom Cruise thriller about “a special police unit that is able to arrest murderers before they commit their crime.”

Rotenberg believes that these four thinkers help us frame the primary questions we must ask: “How do we preserve autonomy (Weizenbaum) in a world of pervasive technique (Ellul) and continuing rationalization (Weber)? What distinguishes human beings from machines (Kasparov)? What makes us human? What does it mean to move to a point where machine intelligence, however we define it, exceeds human intelligence?” A memorable cautionary tale about such questions is the famous scene in the sci-fi film 2001: A Space Odyssey. When the astronaut Dave, trying to get back aboard the ship so that he can deactivate the AI system, commands the spaceship’s computer, “Open the pod bay doors, HAL,” the computer replies: “I’m sorry, Dave. I’m afraid I can’t do that.”

Does AI Enhance or Diminish Human Beings?
A previous Aspen Institute report outlined the many benefits that AI is likely to bring in developing autonomous vehicles, improving healthcare, and introducing new reporting and analytic techniques to journalism. In addition to offering new capabilities, AI is seen as automating work that is repetitive, arduous or dangerous while improving efficiencies and customizing goods and services. While acknowledging these many benefits, conference participants challenged some of these conventional ambitions and suggested that there are larger, deeper questions that need to be asked. As one computer scientist put it, “The real societal question is ‘What’s the end game? Is the ultimate goal consciousness in a machine?’”

The current utility function of AI systems, replied Jean-François Gagné of Element AI, “is to maximize the efficiency of a specific task — to maximize profit/efficiency. But replacing humans having full perspectives with tools that are narrow, but so much more efficient, is creating distortions and introducing tons of fragilities. Given the power of these tools, we now need to question what exactly it is we are shooting for. It cannot be as simple as just profits or efficiency.” Father E. Salobir, a Roman Catholic priest and Founder and President of OPTIC, a network that promotes research and innovation in the digital humanities, believes the motivations behind AI systems are critical: “That’s my question: Who is designing and training the machine, and are those things in accordance with our values?”
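
Gagné’s worry can be made concrete with a small sketch (an illustration of the idea, not something presented at the conference; the names and numbers below are hypothetical). An optimizer that scores outcomes on a single metric is blind, by construction, to every cost the metric does not encode:

```python
# Illustrative sketch: a toy "narrow utility" objective of the kind Gagné
# describes. The optimizer scores outcomes on a single metric (profit), so
# every cost not encoded in that metric is invisible to it by construction.
# All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Outcome:
    profit: float        # the one dimension the objective can "see"
    worker_hours: float  # real cost, but invisible to the objective
    error_rate: float    # real cost, but invisible to the objective

def narrow_utility(outcome: Outcome) -> float:
    """Score an outcome on profit alone; everything else is an externality."""
    return outcome.profit

candidates = [
    Outcome(profit=100.0, worker_hours=40, error_rate=0.01),
    Outcome(profit=120.0, worker_hours=80, error_rate=0.09),
]

# The optimizer picks the higher-profit outcome regardless of hidden costs.
best = max(candidates, key=narrow_utility)
print(best)  # Outcome(profit=120.0, worker_hours=80, error_rate=0.09)
```

The “fragilities” Gagné describes enter exactly here: the tool is highly efficient on the dimension it sees and indifferent to everything it does not.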

The pitting of machine intelligence against human capabilities sets up an invidious comparison that some find troubling. John C. Havens of the Institute of Electrical and Electronics Engineers (IEEE) believes that AI as currently cast “begins this larger narrative that positions humans as ‘worse’ than machines, or somehow deficient. There is a real risk that AI will cast humans as ‘flawed’ and ‘in need of improvement,’ which then puts discussions about humanity into a whole new paradigm,” he said. “In a sense, humans have already ‘lost’ because we’re saying that ‘machines will be better than us.’” Havens argued that it would be better to see AI as complementing humans, and to avoid conceiving AI design “as if human beings are broken.”

Joi Ito, Director of the MIT Media Lab, agreed with the earlier suggestion that perhaps AI systems really are disrupting our Enlightenment faith in the individual and rationality: “Most of our problems today are problems where having ‘more’ doesn’t make them better. Throwing more resources at problems, as in rebuilding Europe and Japan after World War II, or improving productivity, is not necessarily the solution any more. We have run the course of ‘more is better.’”

Although many computer scientists regard enhancing autonomy as an ideal goal for AI, Ito argued that autonomy is really illusory: “We are constantly involved in relationships with each other and the Earth, and machines mediate and rearrange those relationships. From a systems dynamics perspective, there is no such thing as ‘autonomy.’ We are embedded in a world that is a complex, self-adaptive system; we are not ‘autonomous.’” Terrence Southern, Founder of Illuminate STEM and Global Lead Robotics and Automation Engineer for GE Global Research, expanded on the point: “The technologies we’re using are mimicking social connectivism and simultaneously making us feel more isolated from the world. We are reducing our social and emotional interdependence, ultimately reducing how we value each other.”

For Ito, “This makes the basic question of ‘What is good?’ really interesting. What does ‘flourishing in nature’ mean in this context? What does it mean to be happy?” Instead of focusing on traditional notions of “liberty,” “autonomy,” “control” and “growth,” Ito suggested that “these kinds of Western paradigms are kind of outdated.”

Given the scope and power of emerging AI innovations, there was broad agreement that AI will likely change our ideas about what it means to be human. But are current trends in AI development encouraging or alarming?

Naveen Rao of Intel’s AI products group has little doubt that “the notion of what it means to be human will change and evolve” as a result of new AI systems. He suggested that the changes wrought by smartphones today will simply be extended by neuroprosthetics and other AI systems in the future. “Are these changes all that different?” he asked. The brain has always filtered out “noise” from our environment; AI will simply augment that function in the future, Rao predicted, to the extent that it may “literally be merged into our conscious experience at some point.”

Rao quickly added, “I don’t see that as a horrible thing. This process has been going on for a long time. I’m a neuroscientist, and I can tell you with a very high degree of certainty that your brain is not the same as it was ten years ago. We are different human beings today than we were one hundred years ago, because of technology. Is it really such a horrible moral problem that machines change what it means to be human?”

Paul Blase, Managing Partner of AI & Data Solutions at tronc, Inc., sees AI as a valuable tool to sift through large, diverse datasets to identify problems and model improvements to our collective social and economic systems. He cited a PricewaterhouseCoopers (PwC) study that used advanced analytics to assess barriers preventing the advancement of women in India. By drawing on data about domestic violence, family structures, education, women in the workplace, and so on, the model yielded new, more holistic insights into what might be done at the policy level to deliver greater benefits than discrete interventions could. Blase concluded, “AI can be a force for good by helping us assemble and model data that provides a more accurate representation of the way the world works to help solve complex problems like this.”

The Hidden Biases Embedded in AI
Other participants raised yellow flags about the hidden biases sometimes embedded in AI, most notably the quest for greater efficiency and uniformity among human beings. “The pressure that will drive co-evolution of humans and AI as complex adaptive systems is efficiency,” said Louis Rosenberg, Founder and CEO of Unanimous A.I. “If you’re an AI system,” he said, “the more uniform that humans are in the system, the better. AI would love to get rid of outliers and have us all be uniform. That is not a prescription for autonomy, or for what’s best for humanity.” Rosenberg also noted that AI routinely makes pivotal, undisclosed decisions about what is “noise” in datasets — meaning information that we can safely ignore. “Well, that noise might be important to us humans,” said Rosenberg. “Yet AI will decide what is the ‘thought drudgery’ that we don’t have to concern ourselves with.”
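
Rosenberg’s point about undisclosed “noise” decisions is easy to see in a routine preprocessing step. In the sketch below (illustrative only; the readings and the two-standard-deviation cutoff are hypothetical), an outlier is silently dropped before any model, or any human, ever sees it:

```python
# Illustrative sketch: a routine preprocessing step that silently decides
# what counts as "noise." Readings more than two standard deviations from
# the mean are dropped before any model (or human) ever sees them.
# The data and the cutoff are hypothetical.

import statistics

readings = [9.8, 10.1, 10.0, 9.9, 10.2, 17.5]  # is 17.5 noise, or signal?

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

kept = [x for x in readings if abs(x - mean) <= 2 * stdev]
dropped = [x for x in readings if abs(x - mean) > 2 * stdev]

print("kept:   ", kept)     # the "uniform" majority
print("dropped:", dropped)  # the outlier the pipeline never reports: [17.5]
```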

Rosenberg argued that human societies, like other biological systems, thrive on a swarm intelligence based on a wide diversity of opinion. “If a group is too monolithic,” he said, “then it loses something. It gets dumber. And so we have this tension, which is that the human part of the system benefits from diversity, but AI systems are intensifying uniformity and efficiency. As AI becomes smarter, it could make people dumber and humanity more uniform.”
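
Rosenberg’s claim that a monolithic group “gets dumber” has a formal counterpart in Scott Page’s diversity prediction theorem, which is not cited in the report but fits the argument: a crowd’s squared error equals the average individual squared error minus the variance of the estimates. The sketch below checks that identity on two hypothetical crowds; the tightly clustered one has almost no diversity term left to subtract:

```python
# Illustrative sketch of the diversity prediction theorem:
#   (collective error)^2 = mean individual squared error - estimate variance
# A tightly clustered crowd loses the "diversity" term, so its collective
# estimate gets worse even when individual accuracy is comparable.
# All numbers are hypothetical.

truth = 100.0

def crowd_stats(estimates):
    n = len(estimates)
    mean = sum(estimates) / n
    collective = (mean - truth) ** 2                           # crowd's squared error
    individual = sum((e - truth) ** 2 for e in estimates) / n  # avg squared error
    diversity = sum((e - mean) ** 2 for e in estimates) / n    # variance of estimates
    return collective, individual, diversity

diverse = [70, 90, 105, 130, 110]    # noisy but varied guesses
uniform = [112, 113, 111, 112, 112]  # tightly clustered guesses

for label, crowd in (("diverse", diverse), ("uniform", uniform)):
    c, i, d = crowd_stats(crowd)
    print(f"{label}: collective {c:.1f} = individual {i:.1f} - diversity {d:.1f}")
# diverse: collective 1.0 = individual 405.0 - diversity 404.0
# uniform: collective 144.0 = individual 144.4 - diversity 0.4
```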

There may be a certain hubris in thinking that AI evaluations of data are more insightful and reliable than analog methods, noted J. Nathan Matias, a postdoctoral researcher at Princeton University and Aspen Institute Guest Scholar. Just as the psychologist Daniel Kahneman has shown that humans are not as “rational” as economists like to think they are, so AI “may be over-optimistic about its ability to influence and change behavior,” said Matias.

He cited the case of Instagram adjusting its search algorithms in an attempt to reduce users’ interest in self-harm. “Instagram had this great idea that if they made self-harm information harder to search for, maybe it would produce better mental health outcomes for Instagram users. But it turns out that people who support and organize around self-harm are part of a distinct culture, and they found ways to circumvent the search barriers,” he said. When researchers returned four years later, they discovered that Instagram’s algorithm changes had actually made self-harm material more popular on the platform. In other words, AI is no magic bullet; the law of unintended consequences still applies.

Applications of AI that presume cause-and-effect relationships reflect a simple-minded notion of what human beings are, said Wendell Wallach, the Yale University policy expert and ethicist: “We have a scientific model of what humans are right now that misses the point. I don’t think we even have the science to begin to talk about who we are collectively.” Wallach faults, among other things, the mind/body dualism inaugurated by René Descartes that persists to this day, and mechanistic worldviews about how the world works. While AI machines are making tremendous advances in calculative rationality and efficiency, they are less capable of recognizing and respecting some core aspects of human consciousness and behavior.

Joi Ito countered that “machines don’t necessarily have to optimize for efficiency.” He argued that AI systems can model complex, self-adaptive systems, and in so doing, build systems that are resilient, adaptive and capable of healing themselves. Ito pointed out that neural cognitive science, which increasingly informs the design of many AI projects, “is not about the most firepower. It’s about how do we create things that are more interestingly complex.” While AI in the short run may indulge its “efficiency addiction” and drive for “economic growth,” it is fully capable of moving in more positive directions, said Ito.

But this will require a greater respect for human agency and more appropriate AI/human interfaces, a.k.a. “augmented intelligence” strategies, instead of AI infrastructures designed to minimize meaningful human agency. This raises the discomfiting question: Can AI be designed, open-source style, to accommodate human self-determination and diverse interests? Kate Crawford, Co-Founder of the AI Now Institute, Distinguished Research Professor at NYU, and Principal Researcher at Microsoft Research, is doubtful. “One of the more pernicious myths that we need to explore is the idea that autonomy [in AI design] is evenly shared,” she said. “There are very few people who can really create AI at scale. We also have very strong empirical evidence that these tools may be accelerating inequality. So when we use the term ‘we,’ I want us to remember that there isn’t a shared ‘we’ here; instead, let’s consider who is being included and who is not included.”

Douglas Frantz, who was Deputy Secretary-General of the Organisation for Economic Co-operation and Development (OECD) at the time of this conference, agreed: “One of the greatest risks of machine learning and automation is that we will leave huge segments of the population behind.” Frantz worries that the largest tech companies and nation-states, especially China and the US, could “turn everyone else into client states” because of their vastly disproportionate control over resources, talent, data, and machine-learning technology. Michael Chui, Partner with the McKinsey Global Institute, reported that the latest McKinsey research shows that “of all experimental investment in AI, 66% is in the United States, 13% is in China, with the remainder split among many countries.”

“At the moment,” said Wendell Wallach, “it looks like there will be an AI oligopoly that will be much more powerful than any oil company ever was. We are moving into a universe where multinationals may have unbelievable amounts of power, all of it entwined with how technology is deployed, the control of data, and the capture of profits from productivity gains. In previous administrations, the State Department actually had an ‘ambassador to Silicon Valley’ because the Government’s relationships with companies there are more complex than its relationships with many states.”

i Garry Kasparov, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (PublicAffairs, 2017), p. 255.
