
CHAPTER 3 - The Perils of Artificial Intelligence

While much of the conference explored the transformational potential of AI, the group was mindful of its serious risks and limitations as well. Two significant perils have already been mentioned: the disruptive impact of AI on various industries and work life, and the structural intensification of social inequalities.

AI and Inequality
A number of prominent civil rights groups and tech and business leaders have raised sharp and even dire warnings about the threats that AI poses to societies. Entrepreneur Elon Musk has famously called superintelligent AI systems “the biggest risk we face as a civilization,” comparing their creation to “summoning the devil.” Even respected business commentators worry that AI could trigger social and economic upheavals. For example, Kevin Roose of The New York Times has called AI a tool by which the world’s business executives will “transform their businesses into lean, digitalized, highly automated operations,” creating new private concentrations of wealth at the expense of workers and the public.

A 2013 report by the Aspen Institute Communications and Society Program explored this theme in depth, focusing on the rise of a “power-curve distribution” of wealth and income associated with network platforms. The term refers to a power-law distribution in which a small number of participants reap a disproportionate share of the benefits while the bulk receive very modest gains. This so-called winner-take-most dynamic, or 80/20 rule (in which 20 percent of participants reap 80 percent of the gains), appears to be a structural feature of network-based activity because well-positioned business players are able to capture most of the productivity gains that materialize as economic “frictions” are radically reduced. This dynamic is displacing middle-class jobs at an accelerating rate, leaving people reeling from the pace of change and governments scrambling to address market disruptions with archaic policy architectures.
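
To make the 80/20 intuition concrete, the short Python sketch below (an illustration added here, not drawn from the Aspen report; the Pareto shape parameter of roughly 1.16 is simply the textbook value that yields an 80/20 split) simulates participant gains from a power-law distribution and measures the share captured by the top fifth.

```python
# Illustrative sketch of the "80/20" power-curve intuition (hypothetical numbers).
# With a Pareto shape parameter of about 1.16, the top 20 percent of participants
# capture roughly 80 percent of total gains.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.16                                  # assumed shape parameter
gains = rng.pareto(alpha, size=100_000) + 1   # simulated per-participant gains

gains.sort()                                  # ascending order
top_fifth = gains[int(0.8 * len(gains)):]     # the best-positioned 20 percent
print(f"Share of total gains captured by the top 20%: {top_fifth.sum() / gains.sum():.0%}")
```

The exact figure varies from run to run because the distribution is heavy-tailed, but it hovers near 80 percent, in contrast to a bell-curve world in which the top fifth would capture only a modestly larger share than any other fifth.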

In the report, Kim Taipale, Founder and Executive Director of the Stilwell Center for Advanced Studies, said that the paradoxical result of network effects is that “freedom results in inequality. That is, the more freedom there is in a system, the more unequal the outcomes.” This stems in part from the self-reinforcing benefits that accrue to the “super-nodes” of a network, a phenomenon sometimes called “preferential attachment.” Players that function as super-nodes capture a disproportionately large share of rewards relative to their effort, while hard-working smaller players and individuals find it very difficult (for structural reasons) to increase their share of benefits. Because of this dynamic, said Taipale, “The era of bell curve distributions that supported a bulging social middle class is over, and we are headed for the power-law distribution of economic opportunities. Education per se is not going to make up the difference.”
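
Preferential attachment is easy to see in simulation. The sketch below (a hypothetical illustration, not taken from the report) grows a network in which each newcomer links to an existing node with probability proportional to that node’s current number of connections; a handful of early, well-connected “super-nodes” end up holding a share of links far out of proportion to their numbers.

```python
# Minimal preferential-attachment ("rich get richer") simulation, illustrative only.
import random

random.seed(42)
degrees = [1, 1]      # start with two nodes joined by one link
endpoints = [0, 1]    # every link endpoint; sampling from this list picks a node
                      # with probability proportional to its current degree

for new_node in range(2, 10_000):
    target = random.choice(endpoints)   # preferential attachment step
    degrees.append(1)                   # the newcomer arrives with one link
    degrees[target] += 1                # ...which also boosts the chosen node
    endpoints.extend([new_node, target])

top10 = sorted(degrees, reverse=True)[:10]
print(f"Top 10 of {len(degrees):,} nodes hold {sum(top10) / sum(degrees):.1%} of all link endpoints")
```

Even though the ten biggest hubs amount to a tiny fraction of one percent of the network, they attract many times their per-capita share of connections, which is the structural dynamic Taipale describes.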

Skeptics of this analysis respond that these outcomes are not inevitable. In their view, we are living through a period of historic economic transition that will eventually result in greater, widely distributed prosperity if the economy is allowed to run its course.

Embedded Biases in AI
Inequality is not just a result of network effects or the winner-take-most dynamic outlined above. It is sometimes embedded in the very algorithms and data that are used to drive AI. Deciding what information shall be collected in the first place amounts to a bias, one that might be amplified by biases in the sampling methods used. “As someone with a background in large-scale systems measurement, I have learned that data can tell you a lot about the world, but there is no unbiased data,” said Meredith Whittaker of the AI Now Institute. “At some point you’re making a methodological decision that says this data means this, and not that; that we’re going to measure something this way and not that, based on whoever is in the room; and that this is the particular way we are going to represent it quantitatively.”

Steve Chien of the Jet Propulsion Laboratory suggests that a similar problem arises when you build an autonomous system to track natural phenomena. In monitoring the weather, for example, “Instead of tracking the entire world, or all of the storms in the world, you want to be more selective and have smart measurement of that subset that best allows you to predict and model—such as a critical storm front—at a higher resolution. However, the challenges are (a) to figure out what are these key parts; and (b) the algorithms used to control any sensors are introducing bias into the data because you’re not collecting worldwide data on a 24/7 basis, you’re only collecting a tiny subset based on what the algorithms tell you are the most important data.” Scientific models may therefore be limited from the start by the implicit biases of these AI-driven selective data acquisition algorithms and the data they produce. Weather data, for example, may implicitly adopt as a baseline certain types of storms (e.g., ones with more ice and less wind) characteristic of particular regions of the world, and not others, if those storms are used to design the sensing (data acquisition) algorithms.
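
The following sketch (hypothetical data and a made-up trigger rule, not anything used at JPL) illustrates Chien’s point: a sensor controller that records only the observations its trigger flags as important hands downstream models a sample that no longer looks like the world it came from.

```python
# Illustrative sketch of acquisition bias: the trigger rule, tuned for icy storms,
# decides what gets recorded, so the collected data over-represent icy storms.
import random

random.seed(7)

def simulate_storm():
    """A storm with random ice and wind intensity (arbitrary 0-1 units)."""
    return {"ice": random.random(), "wind": random.random()}

def trigger(storm):
    # Hypothetical acquisition rule designed around icy storms:
    # record the observation only if ice content looks high.
    return storm["ice"] > 0.7

def mean_ice(storms):
    return sum(s["ice"] for s in storms) / len(storms)

world = [simulate_storm() for _ in range(10_000)]   # everything that happened
collected = [s for s in world if trigger(s)]        # what the sensor kept

print(f"Mean ice intensity, all storms:       {mean_ice(world):.2f}")
print(f"Mean ice intensity, collected subset: {mean_ice(collected):.2f}")
print(f"Fraction of storms ever recorded:     {len(collected) / len(world):.0%}")
```

A model trained only on the collected subset would conclude that the typical storm is far icier than it really is, which is exactly the kind of baseline bias described above.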

Expanding upon this idea, scientist Gary Marcus of New York University stressed that AI currently is a very data-driven paradigm, one that may or may not reflect causal relationships in reality. “In machine learning, we tend to have algorithms that at some level try to mimic the data that they’ve seen before without having a deep understanding of the causal laws that generate those data. But that’s a design choice that happens to be the easiest way to go right now; it’s not the only way that we can build algorithms.” Indeed, he continued, “It might be possible to build deeper models of human interaction that reflect the underlying psychology that causes the behavior.” But that’s not possible now, which is why AI designers must be mindful of the causal models implicit in their algorithms and data. Deeper models for AI might emulate the process of language acquisition in children, said Marcus: “Children are not direct slaves to the data. When they learn language, for example, they’re learning an abstract grammar that they can use in a lot of different circumstances, so they don’t have to exactly mimic everything that their parents say. But the algorithms that we have right now are very much blind mimics, and that accounts for some of the bias.”

Meredith Whittaker suggested that we ought to see AI as a mirror that reflects our own limited perceptions and social and political biases. Instead of using AI systems uncritically as simple “solutions” to problems, we ought to see AI as a set of “diagnostic technologies” that can help reveal the embedded biases in processes, much as AI-driven recommendations for criminal sentencing help reveal racial and political biases in our judicial system.

It became clear in discussion that many of the conversations about AI focus on how it currently functions, while what we really need is a larger conversation about what AI could be. “We focus a tremendous amount on learning from data, but do almost nothing about how to build better causal inferencing,” said John Seely Brown, Independent Co-Chairman of the Deloitte Center for the Edge. “Causal inferencing is complicated and can take a lot of time to disentangle the underlying contexts and to explore recursive trails of ‘whys,’ and we don’t yet have good enough techniques,” he said, “but I think we’re beginning to see a real shift in some fields toward building stronger causal models into our systems.” Brown suggested that we need to take abduction (reasoning to the most plausible explanation of what we observe) more seriously, rather than limiting ourselves to deduction alone. We also need to find ways to mimic the process of learning, which sometimes requires constructing new types of causal models.

Alix Lacoste of BenevolentAI agreed that AI must find better ways to “marry the data-driven with a hypothesis-driven way of finding novel insights,” and to navigate the gap between bias and expertise. One useful strategy would be to use AI-driven “attention mechanisms” to make certain patterns in the data more salient to machine learning algorithms, and then ensure that human expertise is called upon to render more refined, subtle judgments. In this fashion, attention mechanisms can serve two objectives: one is to help bring the most relevant information forward; the other is to help with interpretability by having experts review what the machine learning models decide to pay attention to.

Peter Norvig, Director of Research at Google, noted that there is a famous paper in the computer science literature, “Attention Is All You Need,” which argues that attention mechanisms are an effective way to elicit insights from complicated deep learning networks. The term “attention mechanism” does not necessarily relate to the ways that humans direct their attention, but rather to how computing systems can take account of relational factors in a given context (e.g., other words or phenomena) to provide guidance to humans. Computer scientists continue to debate which attention mechanism or process may be the best way to achieve good results.
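
For readers who want to see the mechanism itself, below is a minimal numpy sketch of the scaled dot-product attention at the heart of “Attention Is All You Need” (the shapes and data here are hypothetical). The weight matrix it produces, in which each row shows how strongly a query position draws on each input, is the kind of artifact an expert reviewer could inspect, in the spirit of Lacoste’s suggestion above.

```python
# Minimal scaled dot-product attention: output = softmax(Q K^T / sqrt(d_k)) V.
# The returned weight matrix is what one would inspect for interpretability.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # relevance of every key to every query
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V, weights                     # weighted values, plus the weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, embedding dimension 8
K = rng.normal(size=(6, 8))   # 6 key positions
V = rng.normal(size=(6, 8))   # 6 value vectors, one per key

output, weights = scaled_dot_product_attention(Q, K, V)
print(weights.round(2))       # each row sums to 1: where each query "looked"
```

Each row of the printed matrix sums to one and records how much that query position drew on each input, which is the sense in which attention weights offer a window into the model’s focus.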

One problem in this entire debate about causality may be that we have narrowed our understanding of AI to machine-learning, said David Ferrucci, Founder and CEO of Elemental Cognition. “I don’t think that’s right or, in fact, good. To be provocative, I would say that human intelligence is not data-driven. It’s rational, collaborative and communicative—and none of these is data-driven. Perhaps human stupidity and savantism are data-driven, but human intelligence? Decidedly not,” he said.

“So when we think about what we want out of AI,” Ferrucci continued, “we have to think about how AI ultimately can be communicative and collaborative with human beings, and engage our thought processes and the ways that we think and communicate. AI should ultimately help us understand stuff, but we have to realize that understanding is hard. We cannot understand causal relations between real-world phenomena by statistically analyzing data that may only weakly reflect the underlying phenomena. It’s one thing for Google to bring us gazillions of pages; it’s another thing to understand what they say and how to synthesize that knowledge.”

One apparent general solution to this challenge is to make sure that we “keep humans in the loop” for any process involving AI, as one participant noted. But Meredith Whittaker hastened to point out that the idea of “autonomous learning” is a bit misleading in the first place. Deep learning systems generally rely on vast quantities of labeled data, she said, “which requires very low-paid people to label and classify data. So we need to keep in mind the full stack on which AI is being built, and keep in mind which humans AI is serving. The term ‘autonomous’ or ‘automation’ often hides a system’s levels of dependency on precarious labor, and writes out the experiences of a lot of humanity that is enmeshed in these systems.”

Educational Barriers to AI Careers
The “hidden labor” that sometimes plays a part in AI points to a related problem: the lack of diversity among AI designers and business people, and the limited educational opportunities for women, people of color and disadvantaged students. These issues matter not just as a matter of social equity, but as factors that shape the very design and deployment of AI. As seen in recent controversies about the racial biases of facial recognition systems, the identities of AI business strategists, engineers and researchers can greatly affect the performance and social character of AI.

“Fewer than 23 percent of people working on AI globally these days are women, and the numbers for other underrepresented groups are abysmal. It’s truly a crisis,” said Tess Posner, CEO of AI4ALL. “I think the real AI moonshot is inclusion. We need to try to bring underrepresented populations into AI.” This is why Posner is working with high school students to try to interest women, minorities and low-income kids in computer science and AI. Posner stressed that improving diversity in these fields requires holistic interventions to address access, hiring, retention and growth of diverse teams, and that education needs to begin at the high school and middle-school levels, and even in elementary schools, well before post-secondary education.

“I feel as if the target age is really middle-school, and sometimes even earlier,” said De’Aira Bryant, a second-year doctoral student at the School of Interactive Computing at the Georgia Institute of Technology. “That’s when kids are figuring out what their favorite subjects are, and thinking about potential careers. By ninth and tenth grades, students are already in the preparatory classes for specific disciplines, so it might be a bit late by then.”

Anita LaFrance Allen, the Vice Provost for Faculty and Professor of Law at the University of Pennsylvania, echoed these concerns as a university administrator. She noted that, “At the moment, we just have a very, very non-diverse set of technical trainees and people producing technical trainees. About 33 percent of our tenure-track faculty are women, and in the engineering school, only 17 percent. We have one black engineering professor and one black math professor.”

Bryant noted the distinct lack of access to computer science in the small town in which she grew up, Estill, South Carolina. “My high school still doesn’t offer computer science and very few of the schools in a twenty- or thirty-mile radius in the Low Country do. That’s why I’ve been working with the South Carolina legislature to try to get CS [computer science] in every school in South Carolina, at least at the high school level. That’s just CS, not even AI.”

Across the state, there are very few teachers capable of teaching AP classes in computer science, in part because of the scarce resources and funding needed to retain qualified teachers. “There is a huge lack of women in AP CS programs across the state,” said Bryant, “even in top schools in Greenville, Columbia and Charleston. And at the University of South Carolina, the AI course is an elective, and at Georgia Tech, the AI courses on campus are always full.” Bryant believes that engaging and reforming the educational system will be crucial to addressing the diversity problems that Posner mentioned.

Several participants agreed that a “revolution” is needed to improve access to CS and AI programs and, in turn, increase the diversity of trained AI graduates and engineers. Reforms must extend from middle-school outreach programs up through graduate programs, said Alix Lacoste of BenevolentAI. Lacoste continued, “We need to fully integrate diversity and inclusion into our recruitment process, and educate the general public and media.”

To counter the stigma that is sometimes associated with computer science and AI, Bryant suggested “using ‘gamification’ and other techniques that have been successful in keeping kids interested.” John Seely Brown agreed, noting how the hands-on, participatory approach introduced by remix culture “really opened kids’ minds about how to use media in new ways.” If that sensibility could be brought to AI, it could greatly expand interest and engagement among young people. “You would be shocked at what kids can figure out,” said Brown. “If you can create a set of tools in the right kind of ‘club,’ it’s amazing the kind of learning that can result.” A prime example is the Hacker Dojo in Santa Clara, California, an open working space for software projects.

Tess Posner described a new effort by AI4ALL to improve access to basic AI education through the AI4ALL Open Learning Program: “We’ve found that online education doesn’t work, especially if you’re trying to be inclusive for all types of learners and people who may not already be involved in tech,” she said. “You need a peer learning community and adult follow-on support. A static, online program only works if you already have other resources built into the system, such as caring adults and peer groups and a sense of inclusive belonging.” Posner said that AI4ALL’s program can fit into existing STEM programs, high schools and community-based organizations because it works as a kind of “guide on the side” teaching model that provides enough structure to facilitate learning in peer-group settings.i

But when a suggestion was made to provide federal credits or incentives to take AI-related online courses, Anita LaFrance Allen was skeptical, suggesting that giving universities more incentives to create online education would simply cater to their desire to make money and result in many low-quality courses. This is not to say that the federal government should play no role, Allen hastened to add: “The federal government has a symbiotic relationship with higher education, with funding coming from the National Institutes of Health, the Department of Defense, the Department of Energy, and so forth.” This relationship needs to be leveraged to produce greater learning about AI and greater diversity of students and graduates, she said. At least one participant reacted by saying that “the hair on the back of my neck goes up when we talk about federal solutions, particularly at this moment. I think we could get better and quicker results from cities and states.” Another participant suggested that philanthropy could play a particularly useful role in improving CS and AI education.

Meredith Whittaker reminded the conference that unequal opportunity is not confined to education; it is a problem within the tech industry itself: “We need to acknowledge that the cultures into which we are sending these folks have their own problems in terms of pay, transparency, opportunity and social equity. Black women have the highest attrition rate as employees at Google, according to its public diversity report. People interested in AI careers are pushed out at every stage of a leaky educational pipeline, and then once they reach the premier academic labs and companies, they are pushed out in different ways. This is something we need to look at closely.”

Public Understanding about AI
There was a general consensus that changing some of the embedded biases in AI and computer science education will require changes in public awareness. While many technophiles are thrilled at the coming applications of AI, other people are confused or fearful. Others, meanwhile, feel abandoned or victimized by “the system,” said Father Eric Salobir, a Roman Catholic priest and President of OPTIC, a network that explores the field of digital humanities. He said that such people may feel wary about new technologies and whether they will personally benefit from them, or be exploited by them.

Measuring and changing public opinion on AI is likely to be difficult, however, because the technology is not a retail product used by ordinary people; it is usually a centrally managed technology invisibly embedded in other systems. Not surprisingly, the public is often ill-informed about the potential uses and risks of AI technologies.

Mainstream media coverage does not help this situation. Much of the news and commentary about AI tends to be either sensationalist horror stories or, in effect, press releases from industry itself. In a survey of UK media coverage of AI by the Reuters Institute for the Study of Journalism, nearly 60 percent of stories were indexed to industry-driven news about products, announcements and research. Coverage of AI as an emerging public issue was relatively modest, and non-industry voices such as academics, activists, politicians and civil servants were heard less often than industry sources.

Raina Kumra Gardiner, Director of the Omidyar Network, believes that we need to pay more attention to the “cultural narrative” about AI. “We have Terminator and Black Mirror,” she said, pointing to the dystopian portrayals that dominate popular depictions of the technology. Participants also suggested that foundations could play a constructive role in supporting the development of positive cultural narratives.


i Other participants suggested several web resources that could be valuable points of engagement for AI education: the Udacity and Coursera websites (www.udacity.com and www.coursera.org), and SARA (Socially Aware Robot Assistant), developed by Carnegie Mellon University. Such resources are potentially valuable because so much AI education requires self-learning and peer learning in some form or another.
 
 