Foreword

The suite of computing techniques commonly referred to as artificial intelligence has had, and will continue to have, profound effects on our society. AI’s benefits include not only increased efficiencies across societal sectors but also transformational changes in knowledge generation, communication and personalized experiences. At the same time, these advances can be offset by harmful uses, unintended consequences, or control by bad actors. These risks include the potential to disrupt fundamental societal values and norms as well as exacerbate existing systemic issues such as inequality and inequity. Balancing AI innovation against these potential harms is both critical and necessary for the future of human progress.

In February 2019, the Aspen Institute Communications and Society Program convened twenty-four leaders across industry, academia and civil society to begin chipping away at this vexing challenge. Participants of the Roundtable on Artificial Intelligence engaged in extensive and meaningful dialogue around the need for a coherent philosophical approach towards AI, for both quantitative and qualitative metrics, and for new systems of governance and accountability.

The following report, “Artificial Intelligence and the Good Society: The Search for New Metrics, Governance and Philosophical Perspective,” authored by David Bollier, reflects on these discussions and debates. It highlights the promises and perils of AI systems and captures a robust debate regarding the proper methods for measuring progress or setbacks.

The report is divided into four sections. The first, “Moonshot Visions of AI,” highlights the enormous power, speed and scale of AI systems and their potential to improve our daily lives, specifically in healthcare and employment. “The Perils of Artificial Intelligence” then lays out numerous serious risks and limitations of these systems, ranging from embedded bias to a lack of public understanding of AI.

The second half of the report shifts focus to two cornerstone issues for the future of AI. “Toward A Philosophy of AI Design and Governance” articulates the need for a cohesive, values-driven philosophical approach to AI in order to better assess its impact on society. The final section, “Envisioning New Metrics, Governance and Accountability for AI,” then takes up the task of devising appropriate evaluation metrics and governance mechanisms to steer AI in socially constructive directions.

In the end, whether the need is to enlist community review boards to provide oversight or to adopt certain metrics for public accountability, it is clear that there will be no single, universal solution. Instead, just as the technology itself is a suite of multiple computing techniques, there are multiple approaches to steering AI uses toward the good society. Ideally, this volume will give readers both the compass and the maps to chart our way forward.

Acknowledgments
On behalf of the Aspen Institute Communications and Society Program, I want to thank the Patrick J. McGovern Foundation for its support and leadership in developing this roundtable. Thanks, also, to David Bollier, our rapporteur, for capturing the various dialogues, debates and nuanced viewpoints of participants. As is typical of our roundtables, this report is the rapporteur’s distillation of the dialogue. It does not necessarily reflect the opinion of each participant in the meeting. Finally, I want to thank Sarah Eppehimer, Project Director; Dr. Kristine Gloria, Senior Project Manager; and Tricia Kelly, Managing Director, for their work on the conference and bringing this report to fruition.

Charles M. Firestone
Executive Director
Communications and Society Program
The Aspen Institute
June 2019
