
It’s a chilly Friday in Tokyo, and Shibuya Station bustles with people. In a few minutes, I will meet Arisa Ema, a leading researcher and academic who has spent the past decade studying the impact of artificial intelligence on society.
Arisa-san knows that I am not familiar with Tokyo’s topography, and a few days before our meeting she had warned me: “Shibuya is one of the busiest places in the world: it’s very easy to get lost.” She suggested we meet at Hachikō Square, one of the city’s most popular rendezvous points, right next to the statue of the faithful dog Hachikō (忠犬ハチ公). To make herself easier to spot in the crowd, she said, she would be wearing a yellow knitted beanie. As I wait for Arisa in the drizzle, I chuckle thinking that, even in über high-tech Japan, statues and colorful headwear are still the most efficient reference points. In the face of the most advanced GPS technologies, the beanie recognition system does the trick, and Arisa and I eventually manage to find each other. She would like us to go to the (in)famous Henn na Cafe, a place where customers are served by a robotic barista. Given Arisa’s field of expertise, this suggestion doesn’t come as a surprise.

As we walk toward our destination, the conversation begins. How come there is so much confusion around AI? Faced with the double challenge of a notion as inscrutable as intelligence, applied here to the non-human, it is extremely difficult to develop a well-rounded opinion. Much more common is to fall into the trap of simplistic and prejudiced assessments: usually some kind of robo-apocalyptic scenario or escapist techno-utopia. Succumbing to such generalizations is not just ill-informed but also dangerous. “When we lump together facts with fiction,” Arisa maintains, “we compromise our capacity to distinguish well-grounded preoccupations from unedifying anxieties. This, in turn, prevents us from adequately responding to the fast-paced changes that AI is inducing across domains such as labor, health, social inequality, and ethics.”

The breadth of AI’s impact on society brings us to the second question: How deeply is artificial intelligence changing the way we conceive of reality? There is little doubt that AI’s infiltration into everyday life, through machine translation, crowdsourced traffic data, and ride-sharing apps, is dramatically reshaping our habits. Yet arguing that we are facing an epistemological (if not ontological) shift is a whole different kettle of fish.
First, our understanding of artificial intelligence is affected by the limits of our personal experience, which is inevitably bound to the present. This historical bias makes us wonder whether AI is qualitatively different from anything humans have experienced before, or whether we are simply reproducing the same questions and preoccupations that emerged in the past when, say, the locomotive, the telegraph, and other groundbreaking technologies came to light. Second, the notion of impact is quite broad and can be approached from different angles. It might take decades before a technology becomes available (i.e., affordable and simple to use) to the broader public. This is true of AI as much as of other technologies. For instance, computer technologies that already existed in the 1990s and 2000s have only recently become an integral part of our ordinary lives. And this does not even account for the more than half of the world’s population that does not have access to the internet. This is to say that when we try to assess the social impact of AI, considering the highest technology available is not enough.
There is a fundamental difference between the highest and the most appropriate technology: the former is confined within the perimeter of a laboratory; the latter is accessible to ordinary citizens. When the gap between the two is bridged, there is always a very deliberate political will behind it. Arisa refers to this as “the network of infrastructures”; namely, the set of incentives, services, structures, and facilities that turn abstract technologies into available, user-friendly, and convenient tools.
And, crucially, a certain amount of (re)education is required as well. When it comes to AI, this is not about understanding the nitty-gritty of its mechanics and technical operations. Rather, it is about learning how to interact with artificial intelligence in an active and critical manner. First and foremost, this means being wary of how, in the wrong hands, AI can become a dangerously powerful tool of social engineering and mass control. It means being aware of the biases embedded in the data sets AI feeds on, which in turn produce distorted images of reality, reaffirming prejudices and injustices. Finally, it means demanding that these distortions be promptly corrected.

And what about educating artificial intelligence? Some of the most promising applications of AI involve realms of activity that, until now, have been the prerogative of human-to-human interaction, such as healthcare and education. Now things are changing. One among numerous examples is PARO, the robotic baby harp seal developed by Takanori Shibata in 1993, which has been introduced in hospitals and nursing homes to provide long-term care for elderly people and patients with cognitive disorders. Designed as the ultimate kawaii object, PARO has been found to have a calming effect and to elicit emotional responses in patients.
When nonhuman entities infiltrate the spheres of intimacy and trust, we must ask ourselves: Do we need artificial intelligence to have morals? If so, who should be responsible for educating it? Is it a task we should entrust to philosophers, to roboticists, or even to machines themselves? Perhaps most importantly, what kind of moral standards should AI comply with? Can we, as the masterminds behind AI, provide the objects of our creation with a universally accepted system of values, to be applied in all circumstances and in every corner of the planet? The answer is “no,” because in the two hundred thousand years or so that the human species has inhabited the planet, we haven’t been able to come up with anything of the sort. This, perhaps, teaches us the most important lesson: that studying AI is, ultimately, a way of thinking about human nature.

In the meantime, Arisa and I have reached Jinnan, in the center of Shibuya, and we make our way into the robot coffee shop. The venue, which opened in February 2018, is the newest addition to the HIS Henn na family, along with the world’s first hotel staffed by robots. In Japanese, henn translates as change, but also as weird or strange; and in fact, the experience of entering a cafe and facing an avatar’s face on a screen is uncanny, to say the least. Surrounded by department stores, karaoke, and entertainment centers, the bar currently appeals more to robot enthusiasts than to espresso aficionados. But prices are competitive with human-staffed cafes (a cup of drip coffee sells for less than $3), and the ambition is to turn it into a viable business model.
As Poursteady (that is the name of the robot barista) grinds and brews the beans, we have the impression we are catching a glimpse of much larger changes, whose scale is hard to comprehend fully. How will businesses like this affect the job market and our conception of labor? How long will it take before our favorite bar down the road is replaced by its robotic alter ego? Or maybe there will be no replacement, and the robotic and the human cafes will coexist in a beautiful symbiosis?
Our coffees are ready, Poursteady announces. The range of actions that the robot can execute is limited (lifting, lowering, pushing, and pulling), and this was taken into account when the 108-square-foot (10-square-meter) bar was designed. For example, the designers had to come up with a little gizmo so that the conical pour-over coffee filters could be grasped by Poursteady’s robotic claws. This is design that takes machines, rather than humans, as its main reference point. A little spooky, but also fascinating.
I sip my Americano, which is warm and delicious. “Hello, would you like delicious coffee?” the robot greets the next customer.
***
Arisa Ema is Assistant Professor at the University of Tokyo and Visiting Researcher at the RIKEN Center for Advanced Intelligence Project in Japan. She is a co-founder of the Acceptable Intelligence with Responsibility Study Group (AIR), established in 2014, which seeks to address emerging issues in the relationship between artificial intelligence and society.