
CHAPTER 2 - Moonshot Visions of AI

In his opening presentation, Reid Hoffman, Co-founder of LinkedIn and Partner at Greylock Partners, described himself as a “techno-optimist, not a techno-utopian,” who sees AI as providing “an amplifier effect [in today’s economy] equivalent to the Industrial Revolution.” This is mostly a matter of exponential increases in productivity, he said. Hoffman envisions AI bringing new technologies to scale very rapidly, improving the practice of medicine through cheaper and better diagnoses and treatments, and rejuvenating rural economies where manufacturing has left.

“Amazon’s robotized facilities actually employ more humans than they did before the introduction of robots,” he said. “There is a higher output of goods per number of humans, so there are still productivity gains, but as more things have been automated, more humans are brought in to do a lot of other things.” Hoffman sees this dynamic extending well beyond the workplace: “If we start thinking about unreachable areas of the world, from ocean floors to the moon and Mars, robots are going to be an important part of that, whether it’s manufacturing new materials or developing new places to live.”

Of course, this vision of the future will be accompanied by all sorts of dangers and bad actors, Hoffman noted. “Humans are a fractious lot,” and so there will be plenty of competitive and inhumane behaviors associated with AI. But in the end, AI may be “the only answer” to such problems and dangers—bioterrorism or fake news on social media, for example—because of the technology’s unparalleled speed in diagnosing, measuring and reacting.

These technical capabilities cannot help us with many social and political questions, however, such as “Who has the power to make decisions, how are rewards distributed, whether control should be centralized or decentralized, and questions of human freedom, autonomy, and privacy.” These are inescapable questions. But in thinking about AI, said Hoffman, it is important to remember that “the prize is large.” Hoffman cited one of his favorite books, Nonzero, by Robert Wright, which explores the concept of “non-zero sum” in game theory and the idea of developing new, more complex social systems that move us beyond “zero-sum” tradeoff scenarios. “How do we take these technologies and construct non-zero games of the future?” Hoffman asked.

An early observation in the conference highlighted a point about AI that needs to be kept foremost in mind: AI is many things, not a single thing. Gary Marcus, Professor of Psychology and Neural Science at New York University, said, “People talk about AI in generic terms, but it’s not one magical technique. Our hopes and fears all rest on what kind of AI we’re talking about.” While one type of AI in a given context may pose serious risks—deep learning systems in driverless cars that misidentify objects, for example—particular AI risks are likely to be very situational and technology-specific. This is an important caveat to keep in mind when having more general discussions. Deep learning is a subset of machine learning, and both are forms of AI—but there are many forms of AI that have nothing to do with machine learning of any sort. Differentiating types of AI matters.

Another meta-issue that will affect our perception of AI technologies going forward is the role of language and culture in making things visible—or invisible. As the technology gets faster, comes online, and becomes pervasive, the term “AI” may become obsolete, much as the “cell” in “cell phone” has been largely abandoned. AI will become more ubiquitous, and many see this moment as the opportunity to set the right principles, concepts and criteria for talking about it.

In general, AI technologies have such enormous power and versatility that they offer the possibility of enormous benefits to humankind. The real game-changer is the capacity of machines themselves to learn and incorporate new insight into their functionality. Thus, instead of relying on the fallible and limited human brain to detect patterns within large repositories of data, AI algorithms can sift through mountains of data and emerge with potentially actionable insights. This represents a great leap forward over historic modes of computing that were more or less static and repetitious.

So, what are some of the “moonshots” that AI might tackle? There are many. Gary Marcus, Professor and author of Rebooting AI, believes that brain neuroscience should be a prime target. “The human brain has 80 billion neurons, connections between all of them, and a thousand proteins at every synapse,” said Marcus. “The sheer mass of data is something that no human can understand.” But, “We could use AI to understand the brain better,” he said, “and that would help us build better AI.”

Such applications of AI could open up new revolutions in medical science more generally, said Marcus, where discoveries in certain fields have been stalled for decades. “We don’t have any good new drugs for depression, for example, and we still don’t have really great leads on Alzheimer’s Disease,” he said. There are so many proteins floating around and interacting in such complicated ways that we, as human individuals, cannot understand them. If we did AI well, however, it could help us come up with answers by “combining the causal reasoning of human beings with the sheer computation of computers. That would be revolutionary.”

Neil Jacobstein, Chair of Artificial Intelligence & Robotics at Singularity University, believes that AI technologies have the capacity to “create much more wealth around the world than we used to have,” and “vastly improve digital manufacturing with solar-driven, low environmental impact, open source processes.” The surge of new wealth could allow societies to “grow bigger pies of material wealth rather than just squabble over how we divide up a fixed pie. However, solving the social problem of wealth distribution is as important as solving wealth generation. Otherwise, we may continue to see hyper-concentration of wealth.” To hasten this process, he suggested several ideas: 1) Start a new competitive challenge for the utilization of AI, sponsored by different federal agencies; 2) Institute a system of high-quality, AI-powered educational courses available for free to anyone; and 3) Use AI to build better infrastructure for “smart cities.”

The ability of AI systems to learn and evolve will open up entirely new frontiers of science, said Steve Chien, Senior Research Scientist and Head of the Artificial Intelligence Group at the California Institute of Technology’s Jet Propulsion Laboratory. In the areas in which Chien works, studying the environment, he sees AI helping to build more focused, precise, environmental modeling, such as in addressing natural hazards. “Flooding is the world’s most dangerous natural hazard, affecting tens of millions of people and causing billions of USD damage every year,” he said. AI systems that rely on larger datasets and machine learning could help improve the forecasting, mitigation and response processes. These benefits could apply to agriculture as well.

AI’s capacity to analyze large datasets also has enormous potential benefits for evaluating decision-making processes. Terah Lyons, Executive Director of the Partnership on AI, a multi-stakeholder nonprofit dedicated to machine intelligence, envisions AI providing for measurement and continuous improvement of decision-making, especially in domains where audits of systemic decisions are rare, such as healthcare and criminal justice. AI could be used to monitor “confidence estimates” for predictions, decision-making and accountability structures on a continuous basis. The goal would be more probing, reliable evaluations of outcomes than are generally possible today. This could also bring more public accountability to decisions, or patterns of decisions, that are otherwise challenging to understand but may suffer from bias, injustice or negative structural dependencies. In this respect, AI takes on an enabling role that is both more subtle and more context-embedded than the simple automation of human processes. AI can be used as a tool of inquiry to clarify the character of problems, direct our attention to the most salient facts, and improve critical judgments. For example, if AI systems were used to analyze millions of data points about people’s behaviors—say, eating habits or disease patterns—they could yield better public policy strategies.

Technology moonshots have historically been the province of governments, the only entities with sufficient institutional resources to organize and fund super-ambitious projects. Today, that has changed, noted Reid Hoffman. “We are now at this interesting place where businesses have the scale and capital needed for moonshots—hopefully to serve a social good.” He said that tech businesses may have a better “moonshot capability” than governments, at least within the AI context, despite the Trump Administration’s announcement that it is going to do an AI moonshot as part of its “America First” policies.

Given these realities, Hoffman believes that our best, most practical option is “to try to potentially shape any [corporate] moonshots in ways that have positive, inclusive impacts on the majority of humanity.” He pointed to mechanisms such as new market incentives for AI developers, new accountability structures to influence corporate decisions, and leveraging trending public concerns to jawbone AI design and practices.

Such considerations are obviously moot when it comes to authoritarian governments like China, which is aggressively developing AI to compete economically and rule over its population socially and politically.i “There is an AI tech race underway,” said Hoffman, which is not just an economic competition but one for geopolitical dominance. A relevant question, he continued, is “which political value system will be most embedded in AI systems and how they shape the future?”

AI and Healthcare
In the course of the conference, breakout groups were asked to come up with general scenarios for how AI could plausibly improve life in three areas—healthcare, employment and governance. The scenarios were intended to build upon current trends and identify both positive and negative signposts as they might evolve over the next five to ten years.

The spokesperson for the healthcare group—Alix Lacoste, Vice President of Data Science at Benevolent AI, a firm that uses data and machine learning to advance biomedical discoveries—noted the need for more informed patients. “Patients are not really well-educated [about their healthcare decisions] and do not necessarily participate in their care. So we would like to democratize expertise for both patients and providers. They don’t seem to talk to each other that much about best care.” Yet the group also wants to ensure that patients’ values and risk preferences are taken into account, so that specific treatments, resuscitation instructions, and the like are available. “It’s really about empowering patients to make better decisions,” said Lacoste.

To achieve this, the healthcare group concluded that AI in this field should focus mostly on enhanced decision-making as it applies to diagnosis and treatment planning. The decision-making should reflect “explainable causal reasoning,” said the group, by which it meant that “AI should be a continuous learning system about how to provide best care. It should synthesize information, keep it up to date, and make it available, based on appropriate rights to information.”

Meanwhile, Amazon, J.P. Morgan Chase and Berkshire Hathaway have come together to rethink the entire healthcare system for more than one million employees, on a nonprofit basis. This alone could have a catalyzing effect throughout American healthcare. Tom Gruber, an AI product designer, said, “AI is giving us a new toolkit to think about playing a different kind of game. There are some new options on the table that may break the holds that we currently have.”

The group identified a number of metrics to assess the positive impacts that AI could have on healthcare:

  • increased discovery of new medicines
  • increased access to public data for discovering medical treatments
  • lower healthcare costs (via simulations of incentives in the healthcare system)
  • healthier populations
  • earlier diagnoses of diseases
  • shifts toward preventative care
  • increased efficiencies in hospitals while optimizing patient care, and
  • decreased latency of information needed for care.

How might AI tools be used to improve the healthcare system as assessed by these metrics? The group made a number of suggestions.

  • Internet of Things systems could continuously monitor and analyze data to diagnose conditions and propose preventative treatments. They could also sample data more frequently.
  • An omnipresent AI system could be used to aggregate all of a person’s health data and serve as a decision-support system. This function could be augmented by bringing medical literature into the analysis, providing new forms of patient and provider education.
  • An AI agent could serve as a “health companion and assistant” by dispensing health advice and patient education.
  • AI could also help with disaster relief by connecting affected hospitals and matching patients with similar conditions.
  • Compliance with medical instructions could be improved through greater use of specialized robots or even games.
  • A more ambitious goal might be to use AI systems to render hospitals less necessary except in extreme cases. After all, hospitals are expensive and dangerous places for patients. Could AI systems be used to keep ourselves sufficiently healthy that there would be less need for hospitals?

The healthcare group also identified a number of negative outcomes that should be tracked. It is possible that insurers and employers with extensive access to employees’ health data could use AI to identify greater risks for individuals, prompting them to raise insurance rates or refuse employment; hence the necessity of strong privacy protections. Neil Jacobstein of Singularity University noted that machine learning can now use photographs of the retinal fundus of the eye to determine a person’s age, sex, cardiovascular risk and whether they smoke. This is not just accomplishing something better, faster or cheaper than doctors; it is performing a task that ophthalmologists did not know was possible.

New York State recently passed a law allowing health insurers to use social media to set premiums. This could have significant negative effects on who gets coverage and who doesn’t, said Meredith Whittaker, Co-founder and Co-director of New York University’s AI Now Institute. “I really worry about the ‘datafication’ of insurance and the very clear direction in which incentives are going,” she said. In extreme cases, if machine learning were used to make highly granular calculations of individual risk, the very basis for health insurance—the pooling of risk—would be seriously undermined. This is one reason, among others, for assuring strong privacy protections.

Without global privacy safeguards, AI systems could easily be used to rifle through patients’ medical data for all sorts of unauthorized commercial purposes. Or they could be used to detect mental health issues and order preventative measures without patient knowledge or consent. These possibilities point up the need for strong legal protections to assure that patient data is used only to enhance an individual’s own or other people’s care, with their informed consent.

Another negative signpost for AI development is the use of population statistics to make decisions about individual patients without evidence or causal reasoning. For example, a neural network algorithm looking at correlations in healthcare treatments once concluded that asthmatic pneumonia patients are actually at a lower risk of dying than other patients.ii The algorithm based its judgment purely on correlations and not on evidence-based causality, and therefore it did not understand that such patients are immediately put into intensive care units, which is the real reason for their higher survival rate.
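The confounding at work in the pneumonia example can be illustrated with a toy simulation. All numbers below are invented for illustration and are not from the cited study: we assume asthmatic pneumonia patients have a higher underlying mortality risk, but hospital policy routes them straight to intensive care, which sharply cuts that risk. A model trained only on the observed outcomes would learn the inverted, non-causal association.

```python
import random

random.seed(0)

def simulate_patient(asthmatic: bool) -> bool:
    """Return True if the simulated patient dies (illustrative parameters)."""
    base_risk = 0.25 if asthmatic else 0.10   # assumed true underlying risk
    in_icu = asthmatic                        # assumed policy: asthmatics go to the ICU
    risk = base_risk * (0.2 if in_icu else 1.0)  # assume ICU care cuts risk by 80%
    return random.random() < risk

# Simulate observational hospital records for both groups.
n = 100_000
deaths = {True: 0, False: 0}
for asthmatic in (True, False):
    for _ in range(n):
        if simulate_patient(asthmatic):
            deaths[asthmatic] += 1

rate_asthma = deaths[True] / n    # observed mortality, asthmatic patients (~5%)
rate_other = deaths[False] / n    # observed mortality, other patients (~10%)

# A purely correlational model fit to these records would conclude that
# asthma LOWERS pneumonia mortality -- the opposite of the causal truth,
# because the ICU intervention (the confounder) is hidden from it.
print(f"observed mortality, asthmatic: {rate_asthma:.3f}")
print(f"observed mortality, other:     {rate_other:.3f}")
```

The observed rates invert the true risk ordering, which is exactly why a correlation-only algorithm, blind to the treatment pathway, reaches the wrong conclusion.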

AI systems pose the risk of discriminatory, unequal treatment of people that might otherwise go undetected. Facebook has been accused of racial and gender profiling in its targeting of advertising for housing, for example. If AI were to make judgments about people based on their genetic makeup, or even help identify what genetic changes to implement in order to achieve desired traits, it could facilitate discrimination that would otherwise be illegal.

In some cases, unequal treatment could be a secondary effect caused by unequal access to healthcare data and bias in the data itself, either for individual patients or national populations. This could leave some segments of society without access to important medical information.

AI and Employment
A second breakout group offered a scenario that imagined how AI might affect employment in the coming decade and what might be done in response. The group made a number of assumptions about the future—that the primary societal goal is efficiency and equal opportunity, not equal results. The group also assumed that 100% employment is not the goal, and indeed, that a net increase in jobs over the next ten to fifteen years is unlikely to materialize. “AI will create new job opportunities, but it may well also drive underemployment and unemployment. The ratio of new jobs to jobs displaced is key. The real issue is not just whether jobs are lost, but rather the pressure on middle-class incomes,” said Neil Jacobstein, the spokesman for the group, citing a Fortune magazine special report on the shrinking middle class. “We didn’t assume that jobs are always the goal,” he said, “because jobs are just one way to provide people with material well-being and a sense of purpose and self-esteem…. We asked what are the conditions that have to be met to help people flourish?”

What does “flourishing” mean with respect to AI? The group concluded that it means the ability of people to experience a sense of purpose and agency, whether through work, religion, art, sports or games, and to experience a sense of belonging to communities. Flourishing means access to material goods, knowledge, energy and healthcare, all of which could be made more efficient and affordable via AI (e.g., clean, inexpensive manufacturing; high-efficiency products; renewable energy; responsive healthcare systems).

As AI makes production more efficient, a big question is how to assure that the distribution of benefits can be fair and adequate to sustain households. If the ratio of jobs created to jobs displaced is unfavorable, the disappearance of millions of jobs could cause significant social unrest and a “lot of angry young people,” said Jacobstein.

Other tech thinkers, such as Kai-Fu Lee, have called for a bold, aggressive government role in dealing with AI and job displacement. “We can’t know the precise shape and speed of AI’s impact on jobs, but the broader picture is clear. This will not be the normal churn of capitalism’s creative destruction, a process that inevitably arrives at a new equilibrium of more jobs, higher wages and better quality of life for all. Many of the free market’s self-correction mechanisms will break down in an AI economy,” he writes, warning of the risk of “a new caste system, split into a plutocratic AI elite and the powerless struggling masses.” As a corrective, Lee proposed what he calls a “Social Investment Stipend,” in effect a government salary for those who provide care work, community service or educational instruction. “There are a lot of New Deal-style possibilities that could be explored and put in place fairly quickly. But it is also prudent to start experimenting with various forms of Universal Basic Income (UBI), possibly at the city level rather than the national level,” he said. Lee envisions a UBI that is not simply a handout, but one that sets some sort of means test for eligibility and requires people to do something in exchange for their money.

The group envisioned opportunities in part-time or occasional jobs whose wages could be matched by the government. There could also be many new jobs available in sports and entertainment, and public service jobs that provide childcare, eldercare and environmental improvements.

“People would also have access to free, high-quality, Web-based education in almost any area they choose, which is now a very real possibility,” said Jacobstein. It would also be important to have retraining and reskilling programs ramp up in a very short time-frame—something that AI tutors could assist with.

How could such a system be financed? The Employment group posited “an abundance economy scenario in which the potential for a ten-fold increase in wealth generation is possible through the growth of ‘open source almost everything.’” In such a world of super-robust wealth-generation, taxation would not pinch so much, especially for high-flying industries, and in general would be eminently affordable. Even if raising taxes is not seen as desirable, it would be far cheaper than dealing with the social chaos or crime that could otherwise result.

Since the development of AI is accelerating, the time-coefficient for developing a variety of effective responses is absolutely critical, said Jacobstein. It makes sense to focus on cities and best practices at the local level as a way to increase resilience. Mayors are more likely to be innovative, accountable agents of change than national governments.


i Besides its well-known censorship of the Internet, the Chinese government now pressures tens of millions of its citizens to use its propaganda/indoctrination app, “Study the Great Nation.” See Javier C. Hernández, “The Hottest App in China Teaches Citizens About Their Leader—and Yes, There’s a Test,” The New York Times, April 7, 2019, at https://www.nytimes.com/2019/04/07/world/asia/china-xi-jinping-study-the-great-nation-app.html. The Chinese government is also using a secret system of advanced facial recognition technology to track and control the Uighurs, a largely Muslim minority. Paul Mozur, “One Month, 500,000 Face Scans: How China is Using A.I. to Profile a Minority,” The New York Times, April 14, 2019, at https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html.
ii Cabitza F, Rasoini R, Gensini GF. Unintended Consequences of Machine Learning in Medicine. JAMA. 2017;318(6):517-518. doi:10.1001/jama.2017.7797.