
CHAPTER 2 - Design Decisions

The old design adage, “form (ever) follows function,” suggests that the style of an object, whether building or artifact, should reflect its purpose. The purpose of a coffee mug is to safely transport (presumably) hot liquid; hence its cylindrical body, wide mouth, and ceramic thickness. But what happens when form becomes dislocated from function? Or, in the case of digital technologies, when multiple functions manifest within a single form factor? To what degree, then, is form really a factor?

Making a guest appearance at this year’s roundtable was PARO, an extremely adorable, fluffy baby robotic harp seal that squeaks and tilts its head in response to human interaction. First introduced in 1993 by Takanori Shibata, it took several years and many iterations for PARO to gain acceptance. Shibata, Chief Senior Research Scientist at the Human Informatics Research Institute of the National Institute of Advanced Industrial Science and Technology (AIST), recalled investigating multiple robot prototypes and various companionship models. He settled on the human-animal bond, citing research from animal therapy that had been successfully applied in various situations and across demographics. In addition, Shibata recognized the need for an alternative therapeutic intervention in situations where real-life animals would be problematic, such as in ICUs or with patients with allergies. PARO is categorized as a medical device by the FDA and is said to reduce depression, anxiety, and pain and to increase group participation. Commenting on the choice of a baby seal, Shibata stated, “in the case of a human-type robot, people expect too much from the robots because people associate the same functions or role of human beings. But, it’s very hard to realize similar functions.” For the record, PARO can learn names and remember faces.

Tom Gruber, Impact Advisor for Humanistic AI and former CTO, co-founder, and head of design of Siri, also shared his experience. “Now, is it true that we were influenced by the Knowledge Navigator video of 1987? Absolutely. But, from a design point of view, we rejected the face from the beginning,” explained Gruber. For Gruber and his team, adding a face would have been a distraction and would have risked engaging the “uncanny valley,” the subjective phenomenon of feeling unsettled when faced with an almost-but-not-quite-human robot.

This deliberate omission was augmented by other cues to help with public adoption of a faceless conversational agent. For example, the voice is gender-neutral and can be changed via a setting. Siri is also species-neutral. “It’s not human . . . it comes from a different place and maybe even a different number of dimensions. And, that was also by design; so that we could have a stance of ignorance but still have legitimacy,” said Gruber. Today, the integration and adoption of (faceless) intelligent virtual assistants continue to climb. In 2018, over 43 million Americans owned a virtual assistant such as Amazon’s Alexa, Apple’s Siri, or Microsoft’s Cortana. Market reports project this segment to reach approximately USD 19.6 billion globally by 2025.

Both PARO and Siri serve as examples that form and function are (actually) one. A product’s purpose can be shaped by form and/or function, and with success. The key, according to some roundtable participants, is a deliberate, inclusive, and thoughtful design process. Specifically, Michael Chui, Partner at McKinsey Global Institute, suggested the need for affirmative design guidelines, such as incorporating regulation for identification along the lines of “this [machine] isn’t really understanding you, but feel free to play with it.” Yet, as machines become more sophisticated, is a mandate for better design truly enough?

For some in the room, design is useful but not an omnibus solution. To illustrate why, MIT Professor Sherry Turkle shared the following story:
So, I’m watching an older woman who’s recently had a child die, talk to the Paro, and the Paro does very little. But, the woman understood that the Paro was sad for her, and the woman began to comfort the Paro. . . this was a really extraordinary moment because it wasn’t just that the Paro had convinced this woman that the Paro was sad, but the woman was now responding to the Paro as though the Paro needed comforting and care.
This is technology pushing our “Darwinian buttons,” as Turkle puts it. In other words, this machine that offers pretend-empathy, regardless of design or form, exploits a deep psychological human vulnerability: the tendency to feel attachment to sociable artifacts. Turkle’s concern is what happens to us in our vulnerability to pretend-empathy: “For it is our nature to project humanity onto these humanoid objects that have none, but which nevertheless push our Darwinian buttons to relate to them as human.”

Good-old-fashioned empathy is a two-way street.

“Human relationships are messy,” Turkle re-emphasized. “What matters here is not so much these robots or screens, but what it’s doing to our relationships, and to take great care that we don’t ever substitute the kinds of things that we can do with screens for what we need from each other.” And if humans begin to offload our mental and emotional labor onto machines, what is the potential ripple effect? And to what extent is that problematic?
