CHAPTER III - AI and Healthcare

A second session of the AI Roundtable focused on the role that artificial intelligence is playing in healthcare. In diverse contexts, emerging AI systems are transforming the character of medical research, patient diagnoses and treatment options. AI systems are also changing the economics of certain types of medical care and broadening access to specialized knowledge — shifts that could dramatically reconfigure medical treatment norms and healthcare markets. As tech firms such as IBM, Dell, Hewlett-Packard, Apple and Hitachi develop AI plans for healthcare, it is expected that AI’s use in medicine will increase tenfold within the next five years.

In trying to take stock of these changes, it helps in the first instance to distinguish the different healthcare spheres that AI is affecting. Perhaps the largest, most consequential realm of AI applications, at least in the near term, involves Big Data. AI systems can be tremendously effective in searching and analyzing large pools of patient data to identify unusual patterns of physiological factors and symptoms. This knowledge, in turn, can help improve diagnosis and accelerate and refine new treatments.
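As a concrete illustration of this kind of pattern-finding, the sketch below flags statistically unusual combinations of physiological measurements in a pool of patient records. It is a minimal, hypothetical example: the data is synthetic, the feature names are invented stand-ins, and a real system would draw on far richer clinical data.

```python
# Minimal sketch of pattern-finding over a pool of patient records:
# flag records whose combination of measurements looks unusual.
# All data here is synthetic; the features are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic "patient pool": columns might be resting heart rate,
# systolic blood pressure, and fasting glucose (hypothetical stand-ins).
normal = rng.normal(loc=[70, 120, 95], scale=[8, 10, 10], size=(1000, 3))
unusual = rng.normal(loc=[95, 160, 180], scale=[5, 8, 15], size=(10, 3))
patients = np.vstack([normal, unusual])

# Isolation Forest isolates records that look statistically unusual
# relative to the rest of the pool.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(patients)  # -1 marks an outlier

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(patients)} records for review")
```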

AI as a research tool is primarily relevant to large institutions that administer or finance healthcare, such as government, insurers, hospitals, medical researchers and the like. Another important tier of AI applications focuses on individual patients, often in their home environments. Here the goal is to develop more insightful personalized medical assessments, diagnoses and treatment plans. AI can also help in providing “smart patient monitoring and alerts,” such as tracking prescription drug regimens and flagging symptoms that require intervention.

Yet another field for deploying AI tools is in augmenting the intelligence and skills of physicians in the course of their work. By having quick, searchable access to vast quantities of information, AI can help physicians make more discerning choices while providing patients with greater knowledge and statistical predictions about treatment outcomes. AI can also help doctors make more precise, personalized prescriptions of medicine and, through robots, perform automated surgery. At some point, AI is likely to improve healthcare management by improving efficiencies, accuracy and cost-effectiveness of medical practice.

Summarizing AI’s potential contributions to medicine, one commentator writes: “AI can help diagnose illness, offer novel treatment options, eliminate human error, and take care of all the repetitive tasks that clog up the system. These time saving measures mean more efficiency and reduced costs.”

The following sections review these different uses of artificial intelligence in healthcare before turning to the structural challenges and policy complications that often stand in the way.

AI as a Tool for “Deep Learning” in Medical Research
Jeff Huber, CEO of Grail, made a presentation about the applications of AI and machine learning to improve the diagnosis and treatment of cancer. This is a significant healthcare issue, he said, because each year about fourteen million new cases of cancer are diagnosed and eight million people die of the disease.

“The premise behind Grail is actually a very simple one,” said Huber. “Cancer that is detected in its early stages today — Stage I or Stage II — can be cured in 80 to 90 percent of cases. Their lives can be saved. Cancer detected in late stages — Stage III or Stage IV — is the inverse, a negative outcome 80 to 90 percent of the time, where people die. So instead of detecting cancer late, when the outcomes are usually bad, we want to detect it early, when people can be cured.” Huber believes that early-stage detection could raise the rate of positive outcomes, that is, cures, to 95 or even 99 percent of cases.

The catch, of course, is how to detect cancer in its earliest stages, when it is often invisible to conventional medical tests. For Grail, the tool for improving early diagnoses is known as “ultra-deep genome sequencing,” a system that uses immense amounts of data and AI to detect fragmentary RNA and DNA, nucleic acids circulating in a person’s blood. Those fragments are shed by a cancer from its very earliest stages, so identifying them through a blood test could help detect and treat cancer far earlier than is now possible.

The Grail test has four functions: detecting whether a person has cancer; identifying how aggressively it is growing; pinpointing its location in the body; and helping doctors select the most appropriate therapies. Since medical scientists know the molecular and mutational drivers of various cancers, the knowledge revealed by the test can inform which therapeutic options should be considered — chemotherapy, immunotherapies, surgery, etc.

Huber said that Grail’s AI system amounts to a tool for looking for needles in a haystack: “We’re finding the needles at almost the limits of physics — a handful of those molecules in a tube of blood.” This sequencing tool goes “an order of magnitude broader and two or three orders of magnitude deeper than anyone else is doing,” he said. At the moment, each test using ultra-deep genome sequencing generates about a terabyte of data (10¹², or one trillion, bytes). Grail combines this test data with data from clinical trials and phenotypic data related to a patient’s other diseases, co-morbidities, drugs he or she is taking, family medical histories, etc.

Grail’s AI system pores through all this data looking for patterns that may reveal something about the test’s four functions. The process requires the creation of powerful machine-learning algorithms designed to penetrate to deeper levels of biological knowledge about cancer.
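To make the shape of that data flow concrete, the sketch below joins assay-derived features with phenotypic data and fits a simple model for one of the test’s functions (cancer present or not). This is emphatically not Grail’s actual pipeline: the feature names, the synthetic data and the logistic model are all hypothetical stand-ins.

```python
# Loose schematic of combining assay features with phenotypic data
# and fitting a model for one test function. Not Grail's pipeline;
# everything here is a hypothetical, synthetic illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500

# Hypothetical features: a few summary statistics from the blood assay
# plus coarse phenotypic variables (age, comorbidity count).
assay = rng.normal(size=(n, 3))
phenotype = np.column_stack([rng.integers(30, 85, n), rng.integers(0, 4, n)])
X = np.hstack([assay, phenotype])

# Synthetic label loosely tied to the first assay feature.
y = (assay[:, 0] + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

A production system would of course work from raw sequencing reads and vastly richer clinical variables; the point here is only the general shape of the data flow.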

It is here that the dynamic forces shaping AI as a field become more significant — not just in this instance for Grail, but for AI systems more generally. Huber cited an essay by Beau Cronin, an expert in computational neuroscience, who identifies four basic ingredients for making AI systems work effectively. They are “data, compute resources (i.e., hardware), algorithms (i.e., software), and the talent to put it all together.”

While most people today assume that data is the most important element in successful AI applications — why else are Google and Facebook so successful? — Cronin argues that different scenarios could lead to the other factors becoming more influential. New hardware architectures could accelerate the development of better learning algorithms, for example. Or the wealthiest tech companies could attract the most talented programmers. Or access to good data could improve (or diminish) if privacy policies, security concerns or public opinion change.

It is also quite possible that the character of AI systems could be significantly affected by network dynamics. A successful AI firm could attract the most users and evolve into the biggest network, propelling a self-reinforcing “winner-takes-most” dynamic. Cronin quotes tech analyst Kevin Kelly, who predicts: “Our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.”

For now, Grail is trying to build a dataset that has not existed before, while bringing together the compute resources, algorithms and talent to integrate computer science with the life sciences.

As the Grail project demonstrates, machine learning with sufficiently large aggregations of data can open up vast new fields for medical research. As one small example, Huber cited a recently published study that analyzed electronic medical records. It discovered that a subset of diabetes patients had a far lower incidence of cancer — on the order of one-third that of the general population. This was a counterintuitive finding, said Huber, because one would expect that patients with diabetes, an inflammatory disease, would have higher rates of cell mutations at the margin and thus higher cancer rates. After looking more closely, researchers discovered that these diabetes patients were taking metformin, an inexpensive drug for managing glucose levels, which was apparently helping to fight cancer (further studies are seeking to confirm this suspicion).
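At its core, the metformin anecdote is a cohort comparison over aggregated records. The sketch below shows the shape of such an analysis; the DataFrame, its column names and the toy values are hypothetical illustrations, not a real EMR schema, and a real study would also need to control for confounders.

```python
# Hedged sketch of a cohort comparison over electronic medical records:
# compare cancer incidence among diabetes patients on and off a drug.
# Columns and values are hypothetical illustrations, not a real schema.
import pandas as pd

# Toy records: 1 = condition present / drug prescribed.
records = pd.DataFrame({
    "has_diabetes":     [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
    "on_metformin":     [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    "developed_cancer": [0, 0, 1, 1, 1, 0, 0, 1, 0, 0],
})

diabetics = records[records["has_diabetes"] == 1]

# Cancer incidence by treatment group among diabetes patients.
incidence = diabetics.groupby("on_metformin")["developed_cancer"].mean()
print(incidence)
# A markedly lower rate in the on_metformin group would be the sort of
# counterintuitive signal that prompts a closer epidemiological look.
```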

“That’s a relatively trivial case of machine learning using a sufficiently large aggregation of data to make important findings,” said Huber. The problem is that most data is “incredibly siloed,” he said. “Electronic medical records are in tiny different pools all over; there aren’t any good aggregations.” There are also many privacy, security and business-model factors preventing the aggregation of medical data, a topic taken up below.

AI as Augmented Intelligence for Conventional Medical Care
Beyond medical research, AI systems can have important applications in the everyday practice of medicine, especially in helping physicians gain access to a wider body of knowledge and make better judgments. Given the explosion of the medical literature, physicians understandably may not be aware of new or unusual findings or treatments for a given medical condition. One conference participant said that his wife had to visit five doctors before getting the correct diagnosis for a health problem. Others noted that doctors may not provide a balanced perspective about the available treatment options. Rao Kambhampati of the Association for the Advancement of Artificial Intelligence envisions a future in which patients will consult doctors, but also ask, “What does the AI system say?”

Even before formal medical diagnoses, AI could be used to provide early informal assessments to patients. The largest diagnostic expert system in the world — heavily used and totally unregulated — is Google. Even many doctors turn to Google when certain configurations of symptoms puzzle them. Of course, Google can function in this capacity only because it is an unofficial, non-authoritative source of medical information, and therefore it cannot be held liable for the information it provides.

AI systems could provide highly refined and targeted assistance to doctors, if only as a second opinion drawing upon a vast pool of digitized knowledge. They could also provide some measure of authoritative confirmation for diagnostic and treatment choices.

Once again, the issue of liability arises: What if a doctor relying on the AI system makes an incorrect or unwise judgment? While an AI system might predict that a patient has only a 1 percent chance of surviving a given disease, should the doctor and patient take that data-driven judgment as conclusive — “AI systems as death panels?” as one participant joked. There is enough that we do not know about health and disease that it may be dangerous to conflate data-driven analysis with the mysteries of the soma. The human factor matters. The will to live may triumph over the statistical predictions.

A related problem is a general lack of numeracy. Doctors are sometimes poor communicators, especially about statistical probabilities, and patients themselves may not be equipped to make good judgments based on numbers. Indeed, doctors themselves, when faced with terminal diagnoses, disproportionately forgo aggressive treatment in order to avoid dying in hospitals.

AI as a Tool to Empower Individuals
AI systems offer a wealth of new ways that patients can take better care of their health directly. There are consumer-facing apps that can monitor vital signs (see the “quantified self” movement); make preliminary diagnoses of illness and disease; manage prescriptions for patients; and oversee their adherence to drug regimens. When combined with AI systems used by physicians, individual patients are beginning to have a dizzying array of choices.
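As one small illustration of the monitoring-and-alerts idea mentioned above, the sketch below checks a patient’s dose log against a once-daily regimen and flags gaps that might warrant follow-up. The regimen, the log and the grace period are all hypothetical assumptions.

```python
# Minimal sketch of "smart patient monitoring and alerts": compare a
# dose log against an assumed once-daily regimen and flag long gaps.
# The regimen, log, and thresholds are hypothetical assumptions.
from datetime import datetime, timedelta

DOSE_INTERVAL = timedelta(hours=24)  # assumed once-daily regimen
GRACE_PERIOD = timedelta(hours=6)    # assumed tolerance before alerting

doses_taken = [
    datetime(2024, 1, 1, 8, 0),
    datetime(2024, 1, 2, 8, 30),
    # Jan 3 dose missing
    datetime(2024, 1, 4, 9, 0),
]

alerts = []
for prev, curr in zip(doses_taken, doses_taken[1:]):
    if curr - prev > DOSE_INTERVAL + GRACE_PERIOD:
        alerts.append(f"Gap of {curr - prev} after dose at {prev:%Y-%m-%d %H:%M}")

for alert in alerts:
    print("ALERT:", alert)  # e.g., route to a caregiver or clinician
```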

Huber believes that AI could begin to consolidate more of the medical information inputs and synthesize them, relieving both patients and doctors of that impossible task. Right now, he said, patients get routed through a series of specialists, but in effect each patient must act as his or her own general contractor because no single doctor can know everything. “So let AI be the general contractor,” he urged. AI could potentially understand all of the symptoms and outcomes, beyond what any single specialist can.

It is too early to draw any general conclusions about better treatment outcomes and reduced medical costs from such an approach, he conceded. But many anecdotes suggest that prevention and early diagnosis and treatment could save considerable money. A $500 blood test for cancer and $20,000 for early surgical intervention, for example, could save $2.7 million in futile treatments of late-stage cancer, Huber said.

One subtle but potentially huge impact of consumer-oriented AI systems is the disintermediation of conventional medical institutions. Just as individuals have used open networks to seize greater autonomy and choice from large, centralized institutions — newspapers, broadcasters, record labels, government — so new consumer AI systems could change how medicine is practiced, and where it is practiced. Services like WebMD, IBM’s Watson, Grail, 23andMe, and even Google search are already changing the economics of healthcare by making it more affordable to shift services to more accessible and even remote locations. Local drug stores now offer flu shots, a variety of wellness services, and nurse practitioners who diagnose illnesses, injuries and skin conditions. Google search is the first medical advisor that many people turn to.

These trends suggest that new innovations in medical AI may first take root and flourish in unregulated corners of the world. It is in these spaces — websites, adjacent retail sectors, foreign nations — that people are likely to have greater personal agency in opting into new types of healthcare delivery that leverage AI in creative ways. Unfortunately, these new disintermediated opportunities for AI-assisted healthcare will also be prime targets for bad actors with dubious medical expertise. Some sort of rapprochement between social responsibility and medical innovation will have to be negotiated and refined.

Structural Barriers to Expanding AI in Healthcare
If the vision for AI-driven change in healthcare is often compelling, the forces of resistance are deeply entrenched. “There is no shortage of heartwarming stories about what we could do with big data, and what people are beginning to do,” said Wendell A. Wallach, the Yale bioethicist. “But at some point in these discussions, we always come upon these macro, structural problems.” There are many players in healthcare policy debates who have their own reasons for opposing the use of AI and Big Data in medical contexts. There is, of course, the problem of technical coordination among the owners of different pools of data, but that is arguably remediable.

Michael Ferro, Chairman and CEO of Merrick Ventures and tronc, Inc., added that many governments have their own reasons for not pursuing AI-based innovations in healthcare: “Very powerful people in governments are really worried about all these healthcare innovations because it creates a whole new issue for them. If everyone lives longer, they [politicians] don’t know how to pay for it.”

Another major impediment to many AI-based approaches to healthcare is privacy. If healthcare data becomes available to employers or insurers, it could lead to discrimination in hiring, firing and insurance applications. And yet there are potentially important public and individual benefits from using artificial intelligence to detect illnesses and disease. Jeff Huber said that it is technically feasible for AI agents on one’s smartphone or computer to detect signals of potential mental disorders in users. “Early treatment would save lives and be a societal good — but where does privacy start and stop in a situation like that?”

Stuart Russell, the computer scientist, said that he has an adjunct position in neurosurgery at University of California San Francisco, where “it took three years to get legal permissions to use our own data from our own ICU [Intensive Care Unit] for research purposes.” A far bigger problem, Russell added, is that “medical equipment manufacturers won’t allow researchers to access data being collected by their physiological measurement devices. They want to have a monopoly over the data.” Russell said that his colleagues have struggled with this issue for twenty-five years, and a nationwide consortium of researchers on which he sat has tried and failed to find solutions for five years.

Marc Rotenberg of EPIC noted that there are techniques for de-identification and anonymization of data that could provide some solutions by allowing data to be used in research with minimal privacy risks. “Of course, privacy experts tend to be a bit skeptical of this scenario,” he conceded, “and want to know how it is really going to work.” Rotenberg nonetheless believes that privacy is not a “zero-sum problem” and that win-win solutions are possible.

Then, of course, there are liability concerns. “Who is going to be responsible for these decisions — the recommendations and action based on AI?” asked Jeff Huber. Google says that it is willing to underwrite liability for Level 4 autonomous cars, but so far no one in the healthcare industry is willing to take on liability for AI systems and data-driven decision-making. This may be a case in which the state is the only player with sufficient incentive and means to address the problem; any private insurer or tech company is ill-equipped to handle the magnitude or complexity of the liability.

Perhaps the trickiest issue is whether AI-enabled healthcare would reduce or raise costs. To date, said Jeff Huber, “There is no evidence that all the health improvements we’ve made with technology decade after decade have actually lowered healthcare costs or improved outcomes. We’re dealing with a healthcare economy that is $3.05 trillion, the seventh largest economy by itself in the world — a half trillion dollars less than the powerhouse economy of Germany. And yet there is no end in sight for how we are going to control costs.”

Several participants agreed that a paradigm shift in healthcare is needed. It would benefit consumers, who want personalized medicine and better medical outcomes; physicians, who could improve diagnoses and treatment; and researchers, who could gain access to large bodies of aggregated data to improve their understanding of disease. But achieving such a paradigm shift, and developing a new systemic infrastructure to host AI systems and Big Data, remains elusive.

A few participants argued that this discussion is not just about biology, data and medical knowledge, but equally about patient agency and social trust. Mustafa Suleyman of DeepMind said: “An individual must have the personal agency to develop trust. He or she must be able to say ‘I approve this legitimate research use of my data,’ or ‘I withdraw consent from this particular use.’ We should be creating a verifiable digital structure around that.” Conversely, others argued that government agencies such as the Veterans Administration, which has one of the biggest repositories of healthcare data, could and should assert ownership of the data, and put it to use for public benefit.

While some argued that the debate is essentially a choice between government paternalism and individual empowerment, others replied that this is too simplistic. There are systemic differences among the world’s healthcare systems, such as the single-payer system in the UK and the market-driven system in the U.S. These differences are more influential in the deployment of AI data systems than a paternalism-versus-empowerment framing, said Suleyman. Jeff Huber agreed, saying that the U.S. healthcare system is simply unable to facilitate the kind of data collection and analysis that Grail is currently undertaking. In many respects the U.S. system favors late-stage cancer treatments because they are more profitable than prevention. By contrast, the UK healthcare system is more structurally aligned with advancing long-term outcomes, he said, which is why Grail is doing its data trials in the UK.
