From the day we’re born, we orientate towards faces and look for human-like qualities in everything around us. We’re attracted to lifelike objects, and develop emotional attachments to dolls and cuddly toys. We also have a tendency to anthropomorphise even very unlifelike objects, like boats and cars, especially if those objects move.
Given our tendency to relate to human-like objects, it would seem logical that making robots as lifelike as possible would encourage humans to trust them and work alongside them. However, researchers in the 1970s found that, while giving a robot human features did increase people’s sense of familiarity with the robot, once a certain point of human-likeness was reached, the familiarity turned to revulsion.
The point at which familiarity turns to revulsion is known as the ‘uncanny valley’, a term coined by robotics professor Masahiro Mori to describe the sudden dip seen in a graph depicting how familiar people feel with various lifelike objects.
If an object is obviously a machine, with no lifelike qualities beyond movement, people feel very little emotion towards it. Once humanlike features are added, such as arms, legs and a face, people start to feel more familiar with the object and the general attitude towards it tends to be positive. However, once the object seems almost human, but not quite, people then start to feel a sense of unease and even fear and disgust. Once that ‘almost human’ phase has passed, and the object becomes indistinguishable from a human, the unease disappears again.
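The shape of Mori’s graph described above — affinity rising with human-likeness, plunging sharply in the ‘almost human’ region, then recovering — can be sketched with a toy function. This is purely illustrative: the functional form below is invented for the sketch (Mori’s original graph was qualitative, not an equation), and the dip location of 0.8 is an arbitrary choice.

```python
import math

def familiarity(likeness):
    """Toy affinity score for a human-likeness value in [0, 1].

    Rises with likeness, dips sharply in the 'almost human' region
    (placed arbitrarily around 0.8), then recovers towards 1.0.
    """
    baseline = likeness  # affinity grows with human-likeness
    # A narrow Gaussian dip models the uncanny valley near likeness = 0.8.
    dip = 1.5 * math.exp(-((likeness - 0.8) ** 2) / 0.005)
    return baseline - dip

if __name__ == "__main__":
    for x in [0.0, 0.3, 0.6, 0.8, 1.0]:
        print(f"likeness={x:.1f}  familiarity={familiarity(x):+.2f}")
```

Run as written, the scores climb steadily, turn sharply negative near the ‘almost human’ point, and recover at full human-likeness, mirroring the pattern the passage describes.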
Psychologist Stephanie Lay has been studying the phenomenon since 2006. She came across it while completing her master's degree and discovered that no one had attempted to analyse it from a cognitive psychology perspective and that, in fact, solid theories about why the phenomenon existed were thin on the ground.
Stephanie describes how she approached her research, and what she has discovered so far:
“My earliest questions were around whether the effect could be reliably elicited and demonstrated at all. It was just a hypothetical construct at the time, and as fear and disgust are highly subjective emotions, I did think that the subtler emotion of uncanniness might vanish entirely when subjected to empirical scrutiny, or be so related to individual perceptions and preferences that no reliable patterns would be found. I was surprised and pleased to see that an uncanny valley-type pattern of emotional response appeared in my first study. People found the near-human agents to be consistently more aversive than either the human or artificial agents they saw. Heartened by this result, I then carried out a qualitative phase where I asked people to describe how they felt when looking at different near-human entities, and also to try to describe them well enough for other people to find in a crowd.
There was a striking finding that people consistently referred to particular features a lot more when describing the near-human agents, which then prompted me to start exploring the perceptual mechanisms underpinning the effect.
Perhaps the most surprising finding was that few people appeared to be ‘immune’ to the effect: even when they didn’t report an overt sense of being afraid, their scale-based ratings still found a distinct pattern of aversion to the near-human entities.
At the start of my research, I had a lofty ambition to know everything about the uncanny valley. Did the sense of disquiet that seemed to typify the effect count as an emotion in its own right, rather than a combination of other emotions such as fear, disgust and aversion, and, if so, how could it be reliably elicited in people?
As soon as I started to look into the phenomenon empirically, I found that answering those questions first meant resolving a key circular argument: things are classified as examples of the uncanny valley because they seem to be uncanny, and things are called uncanny because they’re almost but not quite human. To break this cycle, I’ve found that it’s necessary to establish independent baselines for “is this thing near-human?” and “is this thing uncanny?” and only then is it possible to analyse the effect of changing human likeness on how people feel.
My research has taken this systematic approach using several methods. I’ve made gradual variations in human likeness by morphing pictures of robots to make them slowly more human, and I’ve also gone beyond just looking at robots to examine other non-human agents that can be humanised.
I’ve created ‘chimeric’ faces by blending images of different emotions together to create novel faces that would be very difficult to encounter in real life, such as that shown right.
At each stage, I’ve measured how humanlike each of the variations appears so I can be sure that the manipulations are effective. To measure how the different images affect people, I’ve used qualitative and quantitative methods to assess emotional responses ranging from free text descriptions of near-human agents through to collecting precise ratings of how angry, afraid or disgusted different types of face make people feel.
Using this approach, I was able to narrow down a very broad topic to a specific question of whether near-human faces are perceived differently to either human faces or artificial faces. My key findings have been that they are, and that we process near-human faces in a manner closer to how we process objects.
Near-human faces do not seem to elicit some of the processing mechanisms that human and artificial faces elicit. For example, I found that ‘eeriness’ could be reliably elicited when viewing particular types of chimeric faces where different emotional expressions were combined, and that this effect was consistent even when the faces were presented in conditions where you would not expect an emotion to be reliably perceived. This finding is a novel contribution both to the field of the uncanny valley and also to face perception research in general.
While studying the uncanny valley, I’ve come to think that we can’t help reacting emotionally when we encounter something that has humanlike properties and so, as we experience technologies that are able to interact with us on that level, our ability to engage with them will be a key component of whether they are accepted. To an extent, this has always been the case even when that engagement is one way and unintentional.
For example, there is evidence that people’s car buying preferences can be influenced by the front aspect created by the headlights, grille and bumper. Traditionally, cars that gave the appearance of a friendly ‘face’ were preferred over those with a non-smiling or frowning face, but this changed in the early 2000s as more aggressive faces became preferred.
Regardless of the direction, emotional resonance with an inanimate object does seem to have a significant effect on buying behaviour, so if even car purchases can be influenced by that type of connection, the effect would be all the stronger for an embodied companion robot.
My expectation would be that near-human agents that fall into the uncanny valley will be harder to accept due to those barriers in establishing a sense of empathy and credibility. My research suggests that these barriers would be particularly problematic if the agent is unable to reliably present a believable emotional expression, especially if there is an incongruity between the expression that is shown in the eyes and the rest of the face. These were the faces that tended to be the most aversive when tested in static images and I would expect this would transfer to embodied agents.
There are two schools of thought when it comes to how the impact of uncanny agents will change over time. One approach suggests that as we become familiar with near-humans, we will be reassured that they aren’t threatening and there will no longer be an unsettling association with their almost but not quite human traits. In this scenario, the sense of disquiet will diminish, near-humans will no longer seem as uncanny and, over time, familiarity will extinguish the uncanny valley experience.
The other approach suggests the opposite: that our mechanisms for detecting the discrepancies between the near-human and the human are highly specialised and sensitive, and as near-humans get closer to human, those mechanisms will increase in sensitivity and we will always be aware that the near-human is close but not quite acceptable. The uncanny valley may narrow, but we will always be able to tell reality from artifice.
There is evidence supporting both predictions but at the moment I think it’s too early to be sure. While encounters with the near-human are becoming more common they’re not yet an everyday occurrence for most people.
I think it’s a perfect time for research on the uncanny valley to move beyond studies using videos and pictures of near-human agents to conduct more research with embodied agents in real-world settings.
There have been several case studies to date, but as people become more likely to encounter near-human agents in their day-to-day lives, systematically researching those real-life experiences of interacting with them would be a rich and fascinating source of material.
While it’s not really a general direction for future research, I regret that I was unable to take up the offer of a study working with reborn dolls, dolls designed to look like newborns, to examine how people respond to their extremely lifelike appearance. Anecdotally, they do seem to evoke strong reactions in a lot of people, from intense affection to an almost overwhelming sense of disquiet. It would have been fascinating to discover whether individual factors such as experience with real newborns would systematically predict those reactions.
I’d really like to explore the cross-cultural aspect of perceptions of the uncanny valley, to examine whether cultures that have greater experience with near-human agents experience the valley differently.
For example, one of the solutions to the problem of an ageing population in Japan has been to advance the development of companion robots to the stage where these are now commercially available. This gives a unique opportunity to explore the question of whether the uncanny valley effect does indeed extinguish with increased real world experience.
Exposure to such embodied near-human agents may increase or decrease sensitivity to those aspects that mark them out as not quite human. I would be interested in re-running my studies measuring emotional responses to chimeric faces with a cohort of people who have experience of living with companion robots, to see whether they also find certain expression blends unsettling.”
Frances Brown is editor of The Ergonomist. This article first appeared in The Ergonomist, January 2015.
Stephanie Lay is a postgraduate psychology researcher at the Open University. She can be contacted at email@example.com. Full details of her work can be seen at uncanny-valley.co.uk.