
CHAPTER II - AI and Self-Driving Cars

Although there have been numerous experiments with autonomous vehicles over the decades, especially in the 1980s, the demonstration of autonomous cars on public roads in the past several years — especially the Alphabet self-driving car and the Tesla Motors Autopilot — has quickened interest in the technology. The Google prototype, released in May 2014, is fully autonomous, and notably does not have a steering wheel, gas pedal or brake pedal. The Tesla Autopilot, released in October 2015, can function autonomously on limited-access highways, but still requires drivers to be prepared to take control as necessary, because the vehicle cannot reliably detect lane markings, pedestrians or cyclists, and cannot shut itself off.

The arrival of these self-driving prototype cars has triggered a spirited debate about the implications of the technology and how its development should proceed responsibly. To give an overview of the current state of debate about AI cars, Dr. Astro Teller opened the first session of the conference with a brief presentation. Teller currently oversees the company X, which is Alphabet’s “moonshot factory for building magical, audacious ideas that through science and technology can be brought to reality.” (Alphabet is the parent company of Google.) For Teller, the public debate about autonomous cars is important because “self-driving cars amount to a microcosm of the whole AI debate, which means that there is more at stake than just self-driving cars.”

What distinguishes self-driving cars may be their very public character, said Teller. While there is a lot of AI work going on today, very little of it is as public or as focused as the work on self-driving cars. One reason for public interest may be the obvious safety implications. Unlike other fields of AI research, “it’s not hard to make the argument that someone could get hurt if it goes wrong,” said Teller, “and there is no field where it’s as easy to make a mistake.”

Teller noted that self-driving cars would be able to help many people. “There are more than a million people worldwide who die each year in car accidents,” he said. He also noted the economic impacts: traffic in the U.S. alone wastes hundreds of billions of dollars a year. (Automobile crashes kill roughly 35,000 people in the U.S. each year.) A paper by two University of Texas researchers found that if 10 percent of the vehicles on the road were self-driving cars, there would be benefits of more than $37 billion from lives saved, reduced fuel use, reduced travel time, and so on. The estimated benefits would exceed $447 billion if 90 percent of the vehicles on the road were self-driving.

Self-driving cars could greatly reduce the burdens of commuting and fuel consumption. The cars will enable the elderly and disabled to get around more easily. Drunk driving could cease to exist. Self-driving cars could also hasten a shift to electric vehicles. “So the upsides of self-driving cars are as self-evident as the downsides,” said Teller.

Teller worries, however, that concern about the risks of self-driving vehicles could needlessly derail the technology and its widespread use. While some people find the whole concept of self-driving cars alarming, he said, we need to remember that “airplanes already fly themselves.” Regulatory and technological systems for managing large-scale, autonomous transportation — aviation — already exist.

Nonetheless, Teller agrees that the unique challenges posed by self-driving cars require careful attention: “We must make sure that the process for regulating self-driving cars goes well, because a lot of other robotics and secondary fields will follow the good or bad path that this one goes down.” Teller hopes that automakers can demonstrate best practices and cooperation among each other, for example, in developing performance standards, and that a rigorous and flexible regulatory process can be established as soon as possible.

Teller and others in the room pointed out that the struggle of new technologies to gain acceptance is a familiar story. “Autonomous cars are going to happen, and they are the right thing,” said Michael W. Ferro, Chairman of tronc, inc., and an investor and philanthropist. “But everyone fights the pioneers.”

Conference participants identified a number of concerns about autonomous cars that must be addressed. The first set of issues (discussed below) involves the unsolved technical design challenges, most of which concern safety. The resolution of these technical issues, in turn, is likely to affect how regulatory oversight and legal liability regimes are crafted.

Self-driving cars also raise a variety of secondary, indirect issues beyond the functioning of the car itself. Chief among them is the likely economic impact of eliminating jobs for drivers. Autonomous cars would also affect energy use, traffic patterns, urban design and real estate markets. Finally, AI-driven cars raise privacy and security questions: Who shall control the data generated by self-driving cars? And can AI systems successfully prevent malicious hacker attacks that could surreptitiously seize control of a car?

The next sections look first at the technical issues that must be surmounted in making self-driving cars safe enough for universal use. Then, the report moves to consider the various ethical, social and legal issues that need to be addressed, and the challenge of devising appropriate regulatory oversight.

Technical Challenges of Self-Driving Cars
In discussions of self-driving vehicles, it is not widely appreciated that there are different levels of autonomy. People accustomed to science-fiction or Hollywood depictions of self-driving cars imagine vehicles that automatically whisk a person to a desired destination with no involvement by people at all. In fact, autonomous design comes in gradations.

The Society of Automotive Engineers published a system in 2014 to classify just how autonomous a vehicle is. The National Highway Traffic Safety Administration formally adopted the classification criteria in September 2016. Cars at Levels 1 and 2 (“Driver Assistance” and “Partial Automation,” respectively) have modest automation systems such as adaptive cruise control, lane-keeping assistance and automated acceleration, braking and steering. Drivers in Level 1 or 2 cars cannot sit back and relax, however; they must be ready to take over from the automated system at any moment to avoid hitting objects or to deal with real-world events.

At Level 3 (“Conditional Automation”), automated vehicles have a more significant degree of autonomous control. They can function in known, limited environments such as highways, allowing drivers to turn their attention away from driving. However, an occupant still must monitor the vehicle’s operation and may be required to take charge of it. A car with Level 4 (“High Automation”) features is almost completely autonomous. It can function in most driving environments except severe weather, and the driver need not pay any attention to the operation of the vehicle. A car with Level 5 capabilities (“Full Automation”) governs all aspects of dynamic driving tasks all the time, under all roadway and environmental conditions, without any driver role.
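To make the gradations concrete, the taxonomy can be sketched as a small data structure. This is purely an illustration in Python, not part of the SAE standard or of any vehicle’s software:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, paraphrasing the descriptions above."""
    DRIVER_ASSISTANCE = 1       # e.g., adaptive cruise control
    PARTIAL_AUTOMATION = 2      # automated steering plus speed; driver supervises
    CONDITIONAL_AUTOMATION = 3  # drives itself in limited environments; human fallback
    HIGH_AUTOMATION = 4         # most environments; no driver attention needed
    FULL_AUTOMATION = 5         # all conditions, all the time, no driver role

def human_must_supervise(level: SAELevel) -> bool:
    # At Levels 1-2 the driver must watch constantly; at Level 3 the driver
    # is an on-call fallback; at Levels 4-5 no human oversight is required.
    return level <= SAELevel.PARTIAL_AUTOMATION
```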

The difference between Level 3 and Level 4 — and perceptions of what each should be capable of doing — is a source of some controversy. Designers of self-driving vehicles realize that there is a big difference between Level 3 and Level 4 vehicles in terms of their capabilities, how drivers must interact (or not) with the car, and what sorts of regulation may be needed. But the general public may not necessarily understand these differences in automation.

Stuart Russell, professor of computer science at the University of California Berkeley, believes that Tesla itself may have contributed to confusion about the extent of its vehicles’ capacities by saying its prototype has an “auto-pilot” mode. Russell also cited a public comment by Tesla Motors’ co-founder and CEO Elon Musk in 2015 that autonomous cars, “would be like an elevator. They used to have elevator operators, and then we developed some simple circuitry to have elevators just automatically come to the floor that you're at...the car is going to be just like that.”

Russell criticized the implied claims that the Tesla vehicle is a “fully autonomous vehicle” when in fact “that problem has not been solved yet….I worked on automated vehicles from 1993 to 1997, and it was clear that there was no [intermediate] place between ‘smart cruise control’ and full autonomy. So here it is twenty years later, and we keep going through this.” Another participant believes that Tesla made an error in calling its car “auto-pilot” instead of “driver-assist,” and that Musk’s “elevator” comment needlessly caused confusion. At the time of this report’s release, Tesla announced its intention to produce and make available fully-autonomous vehicles by Fall 2017.

Highlighting an engineering design disagreement with Tesla, Dr. Teller explained why, four years earlier, Google turned away from developing a Level 3, “driver-assisted” vehicle similar to Tesla’s. Google engineers discovered that they could not reliably assure that on-board testers would constantly monitor the car’s operation. They concluded that humans could not serve as reliable backups to the AI system, and so they re-focused their efforts on building a fully autonomous, Level 4 vehicle. “We [the company X] built a car that didn’t have a steering wheel in it,” said Teller, “because that was the only way we could teach our engineers not to trust humans as a backup system.”

Humans as the “Failure Point” in the Technology. The engineering design choices for Level 3 versus Level 4 cars raise a host of issues about the ways in which humans may need to interact with self-driving technologies. One troubling conclusion is that human beings — whether as drivers, pedestrians or cyclists — may be the key “failure point” in the technology.

For Teller, the key question is, “How safe should a self-driving car be to make it a legitimate use case?” It is reassuring that more than two million miles of real-world test data show significant safety improvements, he said, but distressing that many users of Level 3 vehicles are not prepared to take the steering wheel if necessary. Some test users of Tesla vehicles have actually climbed into the backseat, which Teller regards as reckless and perhaps criminal.

Such behaviors suggest that user education may be an important priority going forward, said Professor Rao Kambhampati, President of the Association for the Advancement of Artificial Intelligence and a computer scientist at Arizona State University. Kambhampati argued that “some driver-assist technologies can actually increase the cognitive role for drivers” by engaging them. But for a small minority of drivers, including himself, driver-assist features such as cruise control are more annoying than welcome. He suggested that designers of autonomous cars develop ways to deal with such variable human responses.

Can auto-pilot technologies actually diminish a driver’s skills and alertness? That is a concern raised by Stuart Frankel, Chief Executive Officer of Narrative Science, a tech firm that generates natural language from data in enterprise settings. “The more that people use their semi-autonomous cars in autonomous mode, the more that their skills are going to atrophy,” said Frankel. “If you look at airline pilots who are under 40 years old or so, their ability to effectively deal with an emergency is significantly lower than that of older pilots.” The point is underscored by Maria Konnikova in her article “The Hazards of Going on Autopilot” in The New Yorker:

As pilots were becoming freed of responsibilities, they were becoming increasingly susceptible to boredom and complacency — problems that were all the more insidious for being difficult to identify and assess. As one pilot…put it, “I know I’m not in the loop, but I’m not exactly out of the loop. It’s more like I’m flying alongside the loop.”

Marc Rotenberg, President and Executive Director of the Electronic Privacy Information Center, raised a similar point, noting that the U.S. Naval Academy is now requiring young cadets to learn celestial navigation for the first time in twenty years. “They are anticipating failures of GPS,” he said, referring to the Global Positioning System, the navigation satellite technology that provides location and time information in all weather situations. “When you’re at sea, on a boat, entirely dependent on GPS, what do you do if GPS fails?” asked Rotenberg. “I’m sure that a lot of judgment went into this decision [to bring back the teaching of celestial navigation].” Should this line of thinking be applied to self-driving cars as well?

For Joi Ito, Director of the MIT Media Lab, there is no evading the fact that humans will have to co-evolve with new technologies, and over time, reach a stable rapprochement and familiarity with them. “I have a Tesla X,” said Ito, “and when driving I know exactly when [the driver-assist function] should be on and when it should be off. The training of humans comes from using a technology, in an iterative process. Each community of users is going to be different. You will have a co-evolution of humans and technology as people become accustomed to knowing more about the limits of the machine, and then they will start to trust it.”

Can AI Engage with Tacit and Dynamic Social Factors? While technology and humans will surely have to co-evolve, a deeper question may haunt the future of AI: Can it accommodate irregular driving practices and social norms, many of which are tacit, subtle, idiosyncratic and dynamic?

Conference participants pointed out that driving is a social act and tradition that varies immensely from one culture to another. “I think it would be easier to teach a car to drive in Japan, where people tend to follow the law, versus a country like, say, India, where you don’t expect drivers to follow the law,” said Joi Ito of the MIT Media Lab. Ito cited a design proposal about how a self-driving car trained in England “would have to be put into quarantine before being allowed to drive in another country, because driving on the street is really about figuring out how people are going to react, and not about following the law.”

AI scientists are not unaware of these tacit, cultural dimensions of driving. The technology to “generate that [social] handshake is here today,” said Ken Denman, an entrepreneur and former Chief Executive Officer of Emotient, a tech startup that uses computer vision and behavioral and cognitive science to predict emotions. Denman said that computers, data systems and cameras can be used today to locate faces and interpret the meanings and emotions that are being expressed. The camera can make a prediction as to “Is that person looking at me? Are they engaged?” That data is available in real time today. The question is, “Is there some need for the car to signal the pedestrian?”
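As a loose illustration of the kind of perception Denman describes, the sketch below uses the open-source opencv-python package and a stock face detector, a crude stand-in for Emotient’s far richer emotion models; the function name and the engagement heuristic are hypothetical inventions for this sketch:

```python
import cv2

# Illustrative only: a stock Haar-cascade face detector, a rough proxy for
# the gaze/engagement estimation described above. Real perception stacks
# use far more sophisticated models.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def pedestrian_may_be_engaged(frame) -> bool:
    """Return True if a roughly camera-facing face appears in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # A frontal-face detection is a crude proxy for "looking toward the car."
    return len(faces) > 0
```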

Astro Teller said that the Google car is addressing such issues as well. “We spend much of our total engineering time modeling issues such as a bicyclist waggling his hand.” Teller thinks that these are “temporary problems. We mostly don’t actually get signaled by the drivers of other cars. We just think we have a good model of what they’re like. And we don’t yet have a model for what self-driving cars themselves will be like or what they will do. Once they’ve been out for twenty years, it will be fine.”

Beyond the cultural quirks of driving, the deeper question may be, “What is the ability of AI to understand human foolishness?” said Father Eric Salobir. Salobir is a member of the Order of Preachers (known as Dominicans) and President of OPTIC, a network that promotes the digital humanities. He elaborated: “We should not be assuming some future world where everything is easy because everything is rational. We should assume the high level of irrationality that currently exists.” This insight may be especially important when we think about “rational” autonomous cars sharing the road with unpredictable human drivers, he said.

AI engineers are well aware of the tension between a world governed by formal rules and the messy realities of “real life,” said Astro Teller: “We [at X] have discovered that people are dangerous around our cars because our cars follow traffic laws. But people are so bad at following laws that they don’t expect that a car on the road next to them will actually do what the law says it should do. This puts us in this weird quandary.”

To move beyond the formal rules of an AI system, even one that is capable of learning and evolving, requires moving beyond what Wendell A. Wallach calls “bounded morality.” Wallach, an author and scholar at the Interdisciplinary Center for Bioethics at Yale University, notes that while many traffic rules are clear and self-evident — you stop at a stop sign, you brake when you see a child’s ball bouncing near the road, i.e., examples of bounded morality — other rules are highly situational and open-ended. In other words, social practice contrasts with purely automated programming. “Driving is actually a social practice,” said Wallach. “A classic example is when four cars come to a four-way stop at the same time. Which one should go first? People give each other social cues such as looking at each other, nodding, or nudging their car forward to establish who should go first. We don’t know how to program an understanding of these social practices into driverless vehicles.”
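Wallach’s distinction can be made concrete in a few lines of code. In this hedged sketch, with hypothetical percept names, the bounded-morality rules reduce to crisp conditionals, while the four-way-stop negotiation has no agreed encoding at all:

```python
def plan_action(percepts: dict) -> str:
    """Toy planner illustrating 'bounded morality': only crisply
    triggered rules can be written down this way."""
    if percepts.get("stop_sign_ahead"):
        return "stop"                      # clear, self-evident rule
    if percepts.get("ball_bouncing_near_road"):
        return "brake"                     # a child may chase the ball
    if percepts.get("simultaneous_four_way_stop"):
        # The open-ended, situational case: right-of-way is negotiated with
        # glances, nods and nudges, social cues nobody yet knows how to
        # specify as a program.
        raise NotImplementedError("social negotiation of right-of-way")
    return "proceed"
```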

Part of the problem may be in conceiving of cars as “autonomous,” said Cynthia Breazeal, Associate Professor at the MIT Media Lab and Founder of JIBO, Inc. “Human driving is a very collaborative social process,” she said. “There is a lot of signaling of intent, and not just a following of rules.” It may be better to view the challenges of autonomous driving as a “collaborative teamwork problem, where the car is part of a team of humans who are in the car, pedestrians walking on the side of the road, and drivers in other cars,” said Breazeal. Such a framing of the challenge can help us “think about the interfaces we need and how to design the signaling of intentions, which are fundamental to the intuitive ways that people drive.”

For example, eye contact between a driver and a pedestrian can be a way of signaling intent, helping decide which car at an intersection is going to proceed first. Breazeal said that the problem with autonomous cars is that people cannot signal their intent to them, and the cars cannot read or anticipate what actual human beings will do. “I can’t signal to it” and thereby establish some measure of trust, she said, “and I can’t understand what the car may be signaling to me. These questions are really worth thinking through.”

Humans are not the only unpredictable factor. As cybernetics pioneer Norbert Wiener said many years ago: “As machines learn they may develop unforeseen strategies at rates that baffle their programmers…. By the very slowness of our human actions, our effective control of machines may be nullified. By the time we are able to react to information conveyed to our senses and stop the car we are driving, it may already have run head on into a wall…. Therefore,” Wiener advised, “we must always exert the full strength of our imagination to examine where the full use of our new modalities may lead us.”

The Ethical Design of Autonomous Vehicles
What do these realities mean for the ethical design of autonomous vehicles? An algorithm’s choices are explicit, codified in advance and open to inspection; by contrast, the real-life choices and behaviors of human drivers are arguably more unpredictable, improvisational and perhaps even unknowable because of the welter of situational factors and cultural predispositions at play.

An often-invoked ethical scenario for self-driving cars is whether a car should “choose” to hit a baby carriage careening into the road or instead swerve into a trolley filled with nuns. The algorithmic design of the car supposedly makes such ethical choices inescapable. While there are indeed ethical choices to take seriously, Astro Teller considers such scenarios removed from everyday reality. “When you give a human a driver’s test, you don’t ask them, right before you hand them the driver’s license, ‘Are you going to hit the nun or are you going to hit the baby?’ People say, ‘Jeez, I’m going to drive really safely.’”

The comparisons may be moot, suggested Father Eric Salobir, because drivers may or may not actually exercise moral judgment in such situations. Split-second driving decisions are not necessarily moral choices in any conventional sense, he said. “When something happens on the road, you react instinctively. It’s not morality. It’s just survival — an instinct.”

By this logic, replied Astro Teller, “It’s immoral for humans to be driving at this point, if they don’t really have time to choose.” Teller suggested that if alternatives to current driving practices could save lives and function more safely, then the proper “moral choice” is to use the alternatives: “Imagine if it turned out that robots could do some surgical operation with half the mortality rate of human surgeons. Would we let surgeons continue to do it? No, it would be immoral.” Teller suggested that the same reasoning might be applied to self-driving cars versus conventional cars. What matters is “having that conversation in a functional way with regulators, and getting away from this ‘nun versus baby’ nonsense, which is not useful because that’s not how AI works,” he said.

For Wendell A. Wallach, a bioethicist, “Programming the self-driving car to save the most lives in an accident (short-term utilitarian calculation) even if that meant killing the car’s passengers, could lead to more deaths in the long run (long-term utilitarian calculation) if that meant that people would not buy such a car. In other words, to minimize the harm from a once-in-a-trillion mile accident, we could lose many more lives because people won’t buy a car that might kill them.” And without consumer acceptance, there might never be a market for self-driving cars that could save tens of thousands of lives.
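Wallach’s two utilitarian horizons show up clearly in a back-of-the-envelope expected-value calculation. All of the figures below are invented for illustration and do not appear in the report; the sketch simply assumes autonomous miles are ten times safer per mile:

```python
# Hypothetical figures, invented purely to illustrate Wallach's two
# utilitarian horizons; none of these numbers come from the report.
MILES_PER_YEAR = 3e12                             # rough U.S. vehicle-miles traveled
HUMAN_DEATHS_PER_MILE = 35_000 / MILES_PER_YEAR
AV_DEATHS_PER_MILE = HUMAN_DEATHS_PER_MILE / 10   # assume AVs are 10x safer

def annual_road_deaths(av_share: float) -> float:
    """Expected deaths if a fraction av_share of miles are driven autonomously."""
    return MILES_PER_YEAR * (av_share * AV_DEATHS_PER_MILE
                             + (1 - av_share) * HUMAN_DEATHS_PER_MILE)

print(round(annual_road_deaths(0.8)))   # broad adoption: ~9,800 deaths
print(round(annual_road_deaths(0.1)))   # adoption chilled by distrust: ~31,850
```

On these invented numbers, a crash policy that chills adoption from 80 percent to 10 percent costs more than 20,000 additional lives a year, even though it “optimizes” each individual accident.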

The critical question for Wallach is what sort of philosophical or conceptual framework will be used in making necessary ethical and policy choices. “These are social problems,” Wallach insisted. “These are problems of social practice. We need to be establishing social norms. So how do we go about doing that, and who can you trust to do that? People don’t trust government to do that. So who will be the good-faith brokers who can create a framework within public policy to establish some norms and trustworthy outcomes? Who?”

Other Policy Concerns Raised by Autonomous Cars
Beyond safety, self-driving cars raise a raft of other social, economic and policy concerns. These include:

Liability. Who’s responsible for any harm that the cars may cause? According to Teller, test data clearly show that cars with auto-pilot technology engaged are safer than conventional cars. This would be a huge improvement, but liability issues would remain, particularly for the potentially fraught real-world interactions between autonomous cars and human-driven cars.

Wendell Wallach believes that “autonomous technologies threaten to undermine the foundational principle that there is an agent, either human or corporate, that is responsible and potentially culpable and liable for what can go wrong.” Teller agrees that liability issues are complicated, at least for Level 3, driver-assisted cars. However, perhaps the issue is more straightforward for Level 4 cars.

An ideal scenario for implementing such complete liability would be a city that converts entirely to autonomous cars, thus avoiding the messy, unpredictable encounters among Level 3 cars, Level 4 cars and conventional cars. Under some scenarios, the city of the future may wish to ban all conventional privately owned cars, converting automobile transport in cities into a service.

The cybersecurity of cars. An abiding problem for self-driving cars is their cybersecurity. “We have to consider third parties who may be intent on causing significant harm by hacking into systems and disabling them,” said Marc Rotenberg of the Electronic Privacy Information Center, calling the problem a “huge soft target.” In a remarkable demonstration of this vulnerability, a group of “white hat” hackers in September 2016 took control of a Tesla Model S from twelve miles away, unlocking the car and activating the brakes, bringing it to a stop. Malicious hacks are likely to be an ongoing risk for self-driving cars.

Data privacy and due process. Because self-driving cars will generate and store vast quantities of data about driving behavior, control over this data will become a major issue, especially from a driver’s perspective. Following a crash or criminal allegation, for example, will the data belong to the manufacturer, to be used as forensic evidence to defend itself, or will the driver have full and exclusive access to the data? Marc Rotenberg suggested that the issue is not simply about privacy, but also about due process rights and fairness. Increasingly, states are declaring that people should have the right to know what data is being collected about them, and to be informed about how information may be legally used in the future.i

To help clarify what this process should entail, Rotenberg proposed two additions to Isaac Asimov’s famous “Three Laws of Robotics,” the fictional set of fundamental ethical rules that all robots must obey: “The machine must always provide the basis for its decisions,” and “A machine must always reveal its identity.” These rules are likely to become more important as autonomous devices (cars, drones and others) proliferate across the social and geographic landscape.

Urban design and planning. Self-driving cars will have multiple transformational effects on the life of cities. It is unclear whether “robot taxis” and other autonomous cars will decrease traffic by reducing the fleet of cars in a city, or whether they will encourage more people simply to keep their cars circulating because parking spaces are so expensive. It is possible that a city may want to develop more dedicated bus lanes, segregated bike lanes and pedestrian paths if autonomous cars come to dominate city streets.

Real estate values may well shift as transportation patterns shift. Urban planners already know that public transportation boosts real estate values in the areas it reaches, and reduces values in less accessible areas. Uber cars are already having an effect on real estate markets, said Teller; autonomous cars will likely intensify this impact. For example, storefronts without adequate street parking — which normally would prevent their use as restaurants — could suddenly be feasible, which could raise the value of such properties by two or three times. Uber recently persuaded the City of Pittsburgh to give it free rein to experiment with autonomous vehicles within the city, which may yield some insights into this issue.

Economic disruption and jobs. One of the biggest unresolved policy questions is how to deal with the economic disruption that would affect millions of taxi drivers, chauffeurs and truck drivers whose jobs could potentially be eliminated by self-driving vehicles. The first self-driving truck began testing in the deserts of Nevada in May 2015, and firms such as Daimler and Otto (a startup launched by two former Google engineers) are now attempting to perfect the technology. Morgan Stanley predicts completely autonomous capability by 2022 and massive market penetration by 2026.

While autonomous trucks could surely help reduce the 330,000 large-truck crashes that killed nearly 4,000 people in 2012, they could also eliminate the jobs of 3.5 million truckers and an additional 5.2 million non-drivers employed within the trucking industry. The technology could also threaten the jobs of millions of people who work in restaurants, motels and truck stops that service truck drivers, with community ripple effects flowing from those job losses. The appropriate policy responses to such developments — re-training? a basic income? — are relatively unexplored.

The cultural appeal of autonomous cars. It remains to be seen whether the American people will embrace autonomous cars. For a country raised on the ethic of “hitting the road” and the association of cars with personal freedom, it could be hard for many Americans to “move from a world of personal autonomy to one of device autonomy,” said Marc Rotenberg.

What Type of Regulatory Oversight Is Needed?
There was a general consensus among conference participants that self-driving cars would create new regulatory challenges. But what specific sort of regulatory oversight is needed, and what structural principles and procedures should guide it?

One significant answer to these questions arrived a month after this conference, in September 2016, when the National Highway Traffic Safety Administration (NHTSA) announced voluntary federal guidelines for self-driving cars. Automakers will be allowed to self-certify the safety of autonomous vehicles based on a fifteen-point checklist for safety design and development. While the guidelines are not mandatory or enforceable, federal regulators expect compliance.

According to news reports,ii some consumer advocates object to the voluntary guidelines, preferring a formal rule-making process that would have given the public greater opportunity to register its views. But such a process would likely take years, say the makers of autonomous vehicles, who are eager to move forward rapidly; many critics believe that NHTSA’s authority will eventually be extended. For the time being, the federal guidelines will likely deter states from enacting their own laws for autonomous vehicles, except for liability standards, which have long been a state concern.

While many makers of autonomous vehicles welcomed the NHTSA policies, the character of regulatory oversight is sure to evolve in the years ahead. The guidelines provide a general framework and assert a federal role, but many still-emerging issues will need to be addressed as self-driving cars move towards commercial sale and actual use at large scales. It is therefore worth reviewing some of the conference discussion about regulation, despite its occurring prior to the NHTSA announcement.

In thinking about how autonomous transportation should be regulated, an obvious analogy comes to mind: aviation regulation. Commercial aircraft have many autonomous and semi-autonomous technologies that have public safety ramifications. Is that history instructive in thinking about the regulation of autonomous vehicles?

One could argue that autonomous cars present an “easier” challenge than aviation, and one being handled more responsibly, said Jeff Huber, Chief Executive Officer of Grail, a firm that seeks to use AI to detect early cancers in asymptomatic individuals through a blood screen. “In my view airline regulation was spectacularly irresponsible. It was effectively ‘trialed-and-errored’ with people in planes without any feedback loop other than whether the plane crashed or not.” By contrast, the Google, Tesla and other autonomous cars have driven more than two million miles, and data from these ongoing tests in real-world circumstances are being fed back into the system in real time. The AI systems are learning, and the rate of learning is dramatically faster [than that which occurred in aviation].

However, “Autonomous driving on the roads is a much more complicated problem than commercial aviation,” said Rao Kambhampati, President of the Association for the Advancement of Artificial Intelligence. He noted that both aviation and self-driving automobile designs need to do a better job of taking human factors into account — but autonomous cars, in particular, need designs that can address human interventions necessary in the zone between 100 percent autonomous and 0 percent autonomous. “It’s not enough for the car to simply buzz and say ‘Pay Attention!’” he said, adding that more user education and training are needed.

Stuart Russell, professor of computer science at the University of California Berkeley, believes that regulation of commercial aviation has a very different character than what may be needed for autonomous cars. “It took forty years of step-by-step experimentation for the Federal Aviation Administration to approve things like automated landings,” he said. “It helped that Boeing essentially had a monopoly, so there were no competitive pressures to deploy new innovations before the other guy.” At every single stage in the evolution of new technologies, said Russell, the FAA spent years to see if something worked before moving on to the next step. “Expecting that [regulation of autonomous cars] can jump all the way to Level 4, and that we can just stick cars out there and hope for the best — and appeal to the history of aviation as our precedent — is not reasonable,” he said. “The history of aviation regulation shows that a great deal of care was taken at every step along the way.”

However, the copious amounts of real-time data on actual performance of autonomous vehicles make a big difference in evaluating the technology, said Astro Teller of X. On the other hand, AI systems are likely to produce lots of unanticipated emergent behaviors, said James Manyika, Director of the McKinsey Global Institute, especially in moving from Level 1 to Level 4. The complexity of assessing subtle, interactive algorithms is likely to elude even many experts and possibly regulators, he said. Indeed, a big part of the research agenda for AI is to get better at understanding and modeling these emergent properties, along with verification and control.iii

There were a number of suggestions for improving any regulatory process for self-driving cars and refining the technology itself.

David Kenny, General Manager of IBM Watson, recommended the use of AI-based audits as a far better oversight tool than human auditors; such audits would also be faster than many forms of conventional regulation. Kenny suggested rotating working AI experts in and out of regulatory agencies so that the agencies could make more informed, sophisticated decisions.

Finally, Kenny suggested a global competition among smaller countries and city-states like Singapore as test beds for proving autonomous car technologies in systemic ways. Joi Ito of the MIT Media Lab cautioned that a global competition among allies may not be the best way to proceed. He did note that, in terms of thoughtful regulation, “Socialist countries have a much better alignment of incentives for taking a long-term view of what’s going to prevent harm and have the best impact.”

If tech improvement, transparency and social trust all matter, “Why not ‘open source’ the oversight?” asked Mustafa Suleyman, Co-founder of DeepMind, an AI company based in London. “Why not be much more transparent about our models, our processes, our development frameworks and test frameworks? I think there are lots of really smart, technically savvy people who are willing to be part of a collective process of governance and oversight if we, as developers and companies, are prepared to provide a framework and be much more open and willing to engage.”

Suleyman described an experimental model used in healthcare that convenes a panel of non-contractual, unpaid, independent reviewers of his firm’s work. “Their mandate is essentially to audit us in the public interest. There is a terms of reference and scope document, and the panel members meet periodically. They can interview people on my team. They will publish a report. They have a budget. I think it’s a first step towards trying to build public trust by proactively providing access. It provides some reassurance that we’re behaving responsibly and that we’re prepared to hold ourselves proactively responsible,” he explained.

Suleyman acknowledged that there are some proprietary issues that would need to be addressed in such a scheme, but added, “You can solve for those sorts of things.” He added that transparency and oversight also help improve the technology as outsiders identify bugs.

ENDNOTES
i The Driver Privacy Act of 2015 establishes baseline privacy standards for Event Data Recorders. See Rotenberg, Privacy Law Sourcebook (EPIC 2016).
iii Stuart Russell, Daniel Dewey and Max Tegmark, “Research Priorities for Robust and Beneficial Artificial Intelligence” (2016).
