CHAPTER 1 - Introduction

The development of Artificial Intelligence, or AI, is surging ahead at remarkable speeds, promising to bring dazzling new breakthroughs and efficiencies to medical treatment, public infrastructure, workplaces, education, households and everyday life. Yet at the same time, respected observers point with alarm to the grave disruptions and risks that such powerful technologies may entail. Various AI systems are likely to undercut some fundamental premises and infrastructures of modern life. This includes potentially jeopardizing the physical safety of people, the integrity of democratic governance and culture, and expectations of economic opportunity, privacy and fairness, among other social values.

Given the power of AI technologies and uncertainties about their impact, perspectives on the future tend to be polarized or at least ambivalent. There is excitement at the enormous benefits that AI systems might yield—“the prize is very big”—and also legitimate fears about the unintended, far-reaching consequences of technologies that are still in early stages of development.

Addressing the many questions about AI is empirically difficult, however. The field of research and development is sprawling and diverse, and much knowledge is regarded as proprietary or subject to state secrecy. Compounding these problems, existing philosophical frameworks have trouble assessing the ethical, social and political ramifications of many AI systems, in part because their impact is likely to be so transformational. In a piece for The New York Times, Taiwanese technologist and venture capitalist Kai-Fu Lee predicts that the coming AI revolution "will disrupt the structure of our economic and political systems," and will provoke "an AI-driven crisis of jobs, inequality and meaning."

Questions abound. Will AI systems largely extend the historic dynamics of modern capitalism to more people through economic growth and innovation? Or will they disrupt the economy and societal systems in dangerous, destabilizing ways—for example, by overriding traditional structures that assure individual freedom, privacy and democratic sovereignty? Do AI systems tend to strengthen authoritarian, centralized control, as can be seen in the surveillance and control of citizens in China? Or is this simply one manifestation of a broader spectrum of possibilities? Setting aside the larger geopolitical and economic questions, there remain many questions about how AI will affect American society, especially government, politics and culture.

To address the many concerns raised by AI, the Aspen Institute Communications and Society Program convened twenty-four leading entrepreneurs, academics, technologists, philanthropists, educators, law scholars and other AI thinkers for a conference in Santa Barbara, California, on February 11-13, 2019. The gathering sought to bring some focused intelligence and expertise from diverse perspectives to consider the promise and perils of AI, especially over the next decade.

Special attention was paid to the transformative “moonshot” possibilities that AI could enable, and to general scenarios in which AI could remake healthcare and employment. Discussion also focused on helpful changes that could be made in education and public understanding to facilitate the development of AI. A major portion of the conference dealt with the need for a more coherent philosophical approach to developing AI and for devising effective new systems of measurement, governance and public accountability.

The two days of discussion were moderated by Charles M. Firestone, Executive Director of the Communications and Society Program. This report, an interpretive synthesis of the most salient themes discussed, was written by rapporteur David Bollier.
