One of the obvious giveaways that you’re interacting with a robot is its blank, dead-eyed stare. The eyes don’t connect with yours the way they would if the thing were, you know, human. A research team at Disney is trying to fix that using subtle head motions and eye movements that make the robot seem more lifelike—despite it lacking skin and looking like pure, unfiltered nightmare material.
The robot, which mostly consists of a static torso (wearing a stylish dress shirt) supporting a highly animated and articulated head, was developed by engineers at Disney’s research division, Walt Disney Imagineering, along with robotics researchers from the University of Illinois Urbana-Champaign and the California Institute of Technology. That seems like a lot of people for an animatronic that only barely resembles a human being, but despite the lack of muscles and skin, it represents an impressive leap forward in making a human-like robot that could potentially fool a real person.
There are subtle movements you don’t really think about when engaging with another person. Even when you’re just staring into someone’s eyes, your head slowly makes small adjustments (including a subtle up-and-down motion as you breathe in and out), and your eyes make constant corrections, since human vision can only focus sharply on an area that makes up about 2% of a person’s complete field of view.
Robots and animatronic characters designed to look humanoid and interact with real people can usually turn toward a person and focus the direction of their eyes on a human face, but they tend to just freeze in place at that point, which is the complete opposite of what real living beings do.
In a paper titled “Realistic and Interactive Robot Gaze,” the researchers describe a better approach they’ve developed, and it sounds like a layer cake of behaviors and interactions that add up to create a genuine illusion of life. Using a chest-mounted sensor, the robot can identify when a person is trying to engage with it directly and turn to face them, but this behavior is then enhanced with a series of other smaller motions layered on top. These include attention habituation, where an external stimulus, like a sudden sound in the distance, can cause the robot to momentarily shift its gaze to determine the source before eventually returning to focus on a person’s face. Saccades, which are quick darting movements of the eye as it examines the entirety of a subject’s face, head movements that result from simulated breathing, and even simple, realistic blinking motions are all possible.
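To make the “layer cake” idea concrete, here’s a minimal sketch of how such layered gaze behaviors could be combined in code: a base “face the person” target, with habituation, saccades, breathing, and blinking summed on top. This is purely illustrative—the class, gains, and timings are assumptions for this sketch, not values or code from the Disney paper.

```python
import math
import random

class GazeController:
    """Illustrative layered gaze controller (angles in radians)."""

    def __init__(self):
        self.distraction = 0.0   # yaw offset toward a sudden stimulus
        self.blink_timer = 0.0   # seconds since start, for blink cycle

    def step(self, t, person_yaw, stimulus_yaw=None, dt=0.02):
        """Return (head_yaw, head_pitch, eyelids_open) for time t."""
        # Base behavior: orient toward the engaged person's face.
        yaw = person_yaw

        # Attention habituation: glance partway toward a new stimulus,
        # then decay back to the person as the robot "loses interest".
        if stimulus_yaw is not None:
            self.distraction = 0.5 * (stimulus_yaw - person_yaw)
        self.distraction *= math.exp(-dt / 1.5)  # ~1.5 s habituation
        yaw += self.distraction

        # Saccades: small, quick random offsets as the eyes scan a face.
        yaw += random.uniform(-0.02, 0.02)

        # Simulated breathing: slow sinusoidal pitch motion (~0.25 Hz).
        pitch = 0.03 * math.sin(2 * math.pi * 0.25 * t)

        # Blinking: brief eyelid closure every few seconds.
        self.blink_timer += dt
        eyelids_open = not (self.blink_timer % 4.0 < 0.15)

        return yaw, pitch, eyelids_open

# Example: the robot is watching a person while a noise occurs off to one side.
ctrl = GazeController()
yaw, pitch, eyes_open = ctrl.step(t=1.0, person_yaw=0.3, stimulus_yaw=1.2)
```

The key design point is that each behavior is an independent, small perturbation added to the base target, so layers can be enabled, tuned, or removed without rewriting the others.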
By giving the robot the ability to perceive its overall environment—not just a person right in front of it—and the ability to react to other stimuli (imagine a kid screaming in delight at discovering a robotic Olaf character at Disneyland, for example), it accurately recreates the subconscious behaviors we rarely catch ourselves doing, but notice when they’re not happening in others. If you’re wondering why this is so important, watch any videos of people interacting with Sophia the robot, which lacks these behaviors, and you’ll suddenly notice them missing. But before these upgraded bots end up in places like amusement parks, Disney better slap some fake silicone skin on these heads. No one wants to see that nightmare in person.