CHAPTER 6 - Conclusion

Harnessing the immense power of artificial intelligence while controlling its potentially destabilizing consequences is indeed a wicked challenge. There are highly attractive breakthroughs that AI could deliver to humankind in terms of healthcare, scientific research and discovery, productivity, business innovation and wealth creation. But there are also likely to be many complicated negative impacts—on employment, social inequality, democratic processes and possibly national security. There will be no universal solution—AI itself is too diverse and rapidly evolving—but clearly new modes of anticipating and controlling the unintended and/or catastrophic dimensions of AI are needed.

For a set of technologies that are still embryonic and evolving, and not necessarily even discussed with a common vocabulary within the U.S. government, this is a tall order. However, this Aspen Institute conference was encouraging in its own way because it surfaced some of the key vectors of engagement that must be joined: more cross-sectoral discussions, deeper philosophical inquiry, greater reflection on the structural forces directing AI development and, most of all, inquiry into how to prod AI development in the right directions, and what, indeed, those “right directions” are. These lines of exploration could be greatly aided by adopting new consensus metrics to assess AI and by establishing new governance mechanisms that can provide a greater measure of public accountability over the design and uses of the technologies. The challenge amounts to something of a koan, however: Can a technology that is inherently disruptive be made socially responsive, too?