
CHAPTER I - Introduction

In recent years, the development of new artificial intelligence technologies has been surging at unprecedented speeds. The innovations are introducing countless new disruptions and uncertainties into everyday life, commerce and public policy as well as new efficiencies, practical solutions and markets. The reverberations are already being felt acutely in certain fields where AI technologies are quite advanced, especially self-driving motor vehicles, healthcare and the news media.

While the advances are exhilarating to many, they also pose significant challenges in terms of their social, economic, legal, ethical and even political dimensions. The technologies make possible some incredible new feats in transportation, urban life, medical research, healthcare and journalism. They have the potential to create new markets, spur new efficiencies and extend human capabilities in remarkable ways. But AI systems may also eliminate millions of jobs, cause social disruptions and require significant updates to existing systems of law and regulatory oversight.

In an effort to take stock of some vanguard sectors of AI innovation, the Aspen Institute Communications and Society Program convened the first annual Roundtable on Artificial Intelligence on August 1 and 2, 2016. A special emphasis was put on addressing the values that should animate development of AI technologies and how to develop appropriate policy responses. The conference, held in Aspen, Colorado, brought together twenty-two leading AI technologists, computer industry executives, venture capitalists, and academics who study technology (see Appendix for a list of participants).

The dialogues were moderated by Charles M. Firestone, Executive Director of the Communications and Society Program. The report that follows, written by rapporteur David Bollier, is an interpretive synthesis that distills the key themes, insights and points of consensus and disagreement that emerged from the conference.

Putting Artificial Intelligence Into Perspective
To put current debates about AI into an historical context, Walter Isaacson, President and Chief Executive Officer of the Aspen Institute, and the author of books on Steve Jobs and the history of computing,i pointed to a recurrent public debate about technologies that has been going on for nearly 200 years: Will machines replace human beings with superior performance, rendering them irrelevant, or will machines assist and augment human intelligence, surpassing what machines can do on their own? Will technology create more jobs and prosperity, or will it lead to a net loss of jobs, economic decline and social unrest?

This debate goes back to the 1840s, said Isaacson, when a woman who arguably pioneered computer programming — Ada Lovelace, the daughter of Lord Byron — wrote a seminal scientific article about the nature of computing. As Isaacson wrote, “It raised what is still the most fascinating metaphysical topic involving computers, that of artificial intelligence. Can machines think?” Lovelace emphatically rejected this proposition with an argument that has come to be known as “Lady Lovelace’s Objection: The Analytical Engine [a computer] can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.”ii

Computer scientist Joseph Weizenbaum famously illustrated the limitations of AI in the 1960s with the development of the Eliza program. The program extracted key phrases and mimicked human dialogue in the manner of non-directive psychotherapy. The user might enter “I do not feel well today,” to which the program would respond “Why do you not feel well today?” Weizenbaum later argued in “Computer Power and Human Reason” that computers would likely gain enormous computational power but should not replace people because they lack such human qualities as compassion and wisdom.

At the dawn of personal computing, in the 1970s, the two poles of this debate were personified by Steve Jobs and Bill Gates, tech rivals with conflicting views about AI, according to Isaacson. Gates saw machines as capable of mimicking exactly what humans do, and surpassing us, ultimately rendering humans unnecessary. Jobs disagreed — along with computer visionary Doug Engelbart — believing that the humanities and technology will always work together, and that human/machine collaboration is a more fruitful avenue for the development of AI. This viewpoint is reflected in the work of the noted computer scientists Marvin Minsky and Seymour Papert, Isaacson added, noting that this school of thought is carried on by the MIT Media Lab.

Confidence in the prospects of artificial intelligence has ebbed and flowed as federal and corporate funding was slashed in response to a consensus in tech circles that AI would not work, at least in the ambitious ways previously imagined.iii So-called “AI winters” ensued in the late 1960s and early 1970s and again in the late 1980s, bringing deep declines in expectations, investment and new research approaches.iv In the intervening years, interest in AI picked up again, occasionally punctuated by setbacks, with a strong surge of interest in AI technologies in recent years as practical commercial applications became more feasible.

Once again, the question is arising: Will AI assist and augment human intelligence or replace it? The answer implicates a related question: Will AI benefit society or harm it? Isaacson confessed, “I don’t know how the story ends. That’s partly what this conference is about. But I’m a little more on the Doug Engelbart/Steve Jobs side of this debate” – i.e., that AI will work in tandem with humans, augmenting their capacities, and not supplanting them.

AI thinkers have themselves become more nuanced in their thinking, as reflected in Bill Gates’ retreat from his former position. Gates, according to Isaacson, now concedes that the great mistake in AI was its faith that fully digital, binary, algorithmic functions embedded on silicon chips would somehow replicate human intelligence or consciousness. Gates now believes that AI scientists should attempt to reverse-engineer the way nature does things, and perhaps rely on some analog, carbon-based, “wetware” systems (the human brain linked to AI) as well.

There is another difference in today’s debates about AI, said Isaacson. This time, there are some serious economists, led by Harvard economist Larry Summers, who say that this time things may be different.v It is quite possible that technology will not create a net gain of jobs over the long term. This time, many people may be put out of work permanently, with no net per capita gains in jobs, productivity or economic growth.vi Of course, others remain convinced that AI systems will boost overall productivity and jobs, as other technological advances have in the past.

This debate takes on a different flavor today because we live in a different historical moment and computing technology has a significantly different character. So we revisit the old questions, said Isaacson: “What skills and intelligence are distinctly human? What do we bring to the party? And what do machines bring?”

ENDNOTES
i Walter Isaacson. Steve Jobs. Simon & Schuster (2011); The Innovators: How a Group of Inventors, Hackers, Geniuses and Geeks Created the Digital Revolution. Simon & Schuster (2014).
ii Isaacson, The Innovators, p. 29.
iii Enthusiasm for AI emerged in the early 1980s along with concerns about Japanese “Fifth Generation computing” (McCorduck 1983), but support diminished later in the decade (Unger 1987).
iv There were two “winters”: one in the late 1960s and early 1970s, after the report of ALPAC (the Automatic Language Processing Advisory Committee, 1966) and the work of Marvin Minsky and Seymour Papert (1969), and another in the late 1980s, around 1987, after the “failure” of rule-based expert systems. The second was called an “AI Winter,” which had been predicted by Levesque; the term was modeled on nuclear winter, a phrase coined in 1983 by Turco.
v See, e.g., Larry Summers. “Robots Are Hurting Middle Class Workers and Education Won’t Solve the Problem.” The Washington Post. (March 3, 2015); or his review of Robert Gordon’s book, The Rise and Fall of American Growth, in The American Prospect. (February 2, 2016). Summers writes: “…it is hard to shake the sense that something unusual and important and job threatening is going on. The share of men between the age of 25 and 54 who are out of work has risen from about 5 per cent in the 1960s to about 15 per cent today as it appears that many who lose jobs in recessions never come back. Perhaps, as [Robert] Gordon seems to suggest, this is a sociological phenomenon or a reflection of increasing problems in education. But I suspect technology along with trade have played an important role.”
vi Erik Brynjolfsson and Andrew McAfee. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W.W. Norton & Company (2014).