The Uncanny Valley of Generative AI

Maren Hogan
6 min read · Mar 1, 2024

Picture this: a social media feed filled with posts, articles, and comments, each crafted with precision by algorithms designed to emulate human language. At first glance, it’s seamless, indistinguishable from the prose of flesh-and-blood authors. But delve deeper, and the cracks begin to show. There’s something missing, an essence that only genuine human expression can capture.

These days, every click and scroll unveils a cascade of content, and the rise of generative AI has 10x’d both creation and consumption. But despite the thrill of reading and consuming as much of it as I possibly can, I feel a subtle dissonance, like I’m standing at the edge of the uncanny valley, where the lines blur between what is human and what is artificial.

Understanding the Uncanny Valley

You know that weird feeling you get when you see a robot that looks almost human, but not quite? That’s what we call the ‘uncanny valley’. It’s this idea in human-robot interaction that if a humanoid object doesn’t quite nail the human look, it can really give you the creeps. (Also, read this paper, it’s bonkers.)

The term was coined by Japanese roboticist Masahiro Mori in 1970 (yes, something older than me). Mori posited that as robots become more human-like in appearance and movement, people’s emotional responses to them grow increasingly positive and empathetic, up to a point. When the resemblance is almost, but not perfectly, lifelike, the slight imperfections cause discomfort and eeriness, a dip in the emotional-response graph that Mori termed “the uncanny valley.”

So, as an object’s humanness climbs toward about 70%, we like it more and more. But once it hits about 80%, it’s like we hit a wall, and our liking drops big time. Then, oddly enough, when it gets very close to 100% human, we start digging it again. Plotted on a graph, it looks like a valley.

Now, things that don’t look human at all, like industrial robots, don’t really do much for us. Give us something that looks a bit human, like toys or cartoons, and we’re all over it. The weird part comes when something looks almost human but not quite, around 80 to 90% of the way there: that’s when it starts to freak us out a bit.
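If you want to actually see that valley, here’s a quick, purely illustrative sketch in Python (using numpy and matplotlib, with made-up numbers rather than Mori’s data) that draws a stylized version of the curve: affinity rising with human-likeness, dipping hard in the almost-human zone, then recovering near 100%.

```python
# Illustrative only: a stylized "uncanny valley" curve, not real data.
# Affinity rises as an object gets more human-like, dips sharply in the
# almost-but-not-quite-human range (~70-90%), then recovers near 100%.
import numpy as np
import matplotlib.pyplot as plt

humanness = np.linspace(0, 1, 500)   # 0 = not human at all, 1 = fully human
baseline = humanness                 # liking grows with human-likeness...
valley = 1.3 * np.exp(-((humanness - 0.82) ** 2) / (2 * 0.04 ** 2))  # ...except here
affinity = baseline - valley

plt.plot(humanness * 100, affinity)
plt.axvspan(70, 90, alpha=0.1, label="the 'valley'")
plt.xlabel("Human-likeness (%)")
plt.ylabel("Affinity (arbitrary units)")
plt.title("A stylized uncanny valley curve")
plt.legend()
plt.show()
```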

Even though people have been studying this uncanny valley thing for a long time, they still can’t agree on why it happens. Some think it’s because these almost-human robots make us think of illness or remind us of death. Others believe it’s because our brains get confused when we can’t decide if something is a robot or a human.

The “Stay Away from Danger” Idea (Threat Avoidance Hypothesis)

Way back when, humans had to be on the lookout for things that could make them sick or even be the end of them. So when we see robots or figures that look almost human but are a bit off, our ancient “uh-oh” alarm bells go off. If something looks human but not quite right, like it might be sick and infect our whole tribe or whatever, we get the creeps because our brains think, “Danger! That could make me sick, too.”

The “Looks Matter” Theory (Evolutionary Aesthetics Hypothesis)

This one’s about how good-looking robots or figures don’t freak us out as much. Over time, humans have developed a liking for certain looks, like faces that are symmetrical (but not too much) or skin that looks smooth (never too smooth; I’m looking at you, ‘glass skin’ trend). So if a robot looks really good by those standards, we’re more likely to be cool with it. It’s like our brains go, “Hey, this one’s alright.”

The “Do Robots Have Feelings?” Question (Mind Perception Hypothesis)

This idea is all about how weirded out we get when robots seem too real, like they could have feelings. We’re okay with robots doing stuff like talking or moving around, but the moment we start wondering whether they can feel sad or happy, it gets too weird for us. It’s like, “Hold on, robots aren’t actually supposed to feel anything, right?”

The “This Isn’t What I Expected” Reaction (Violation of Expectation Hypothesis)

We all have ideas about how robots should act. We think they should move smoothly or talk just like us. But when they don’t, when they’re jerky or sound robotic, it throws us off. It’s like expecting to bite into an apple and tasting an onion instead — our brain goes, “Wait, that’s not right,” and we feel uneasy.

Pre-AI Examples of the Uncanny Valley

Wax Figures: Lifelike wax figures, such as those found in Madame Tussaud’s museums, offer an early example of the uncanny valley. While admired for their artistry, certain figures can creep you out due to their almost-but-not-quite-human appearance, especially when viewed up close.

Animated Characters: In animation, when movies like “The Polar Express” (2004) try to make human characters super-realistic, they sometimes stumble into the uncanny valley. The filmmakers used motion capture to create the characters, but it kind of backfired: plenty of viewers found them a bit off, saying they looked eerily lifeless.

Prosthetics: Advanced prosthetic limbs, especially those with realistic skin textures and movements, can also evoke uncanny valley sensations. Though they’re impressive technical achievements, their look and movement can sometimes seem eerily not-quite-human to both the user and those watching.

The Uncanny Valley in the Age of AI

Social Robots: You know those social robots being designed for care settings, customer service, or just to keep us company? Their makers are trying to cross this weird gap known as the uncanny valley. Take Sophia, a robot developed by Hanson Robotics: she’s got pretty lifelike facial expressions and can carry a conversation, but her almost-human look and behavior can leave folks feeling a mix of fascination and unease.

Deepfakes and Virtual Influencers: The proliferation of AI has led to the creation of deepfake videos and virtual influencers on social media. Deepfakes use AI algorithms to create convincing video forgeries, while virtual influencers, like Lil Miquela, are entirely computer-generated characters with realistic human appearances. Both technologies demonstrate the uncanny valley effect by blurring the lines between reality and artificiality, leading to both amazement and unease as viewers discern their non-human origins.

Writing and Communication: In writing, the uncanny valley shows up as that sense that something is off or unsettling, even if the reader can’t pinpoint exactly why. It’s like encountering a realistic-looking robot that moves and talks almost like a human but falls short in subtle ways that evoke discomfort.

Imagine reading a blog post that seems well-written and informative at first glance. But on closer inspection, you notice a lack of personal voice or authenticity. The language feels too polished, almost robotic, devoid of the quirks and nuances that typically characterize human communication. My bet is you’ll skim right through it, just as we did with boring textbooks in school. There is nothing about it that connects.

You’re interacting with a facade rather than a genuine expression of someone’s thoughts and emotions. (Okay, but also check out how uncanny literature was already a thing, and in some cases the desired effect, CREEPY AF. Also, you can fight me if you think that technical writing, copywriting, etc. do not have anything to do with being a good WRITER. If you think they are different skills, fine. But if you think they have nothing to DO with one another, then you know nothing, Jon Snow.)

Verbal Communication: The uncanny effect can manifest when someone speaks with unnaturally perfect diction or intonation, resembling a scripted performance rather than spontaneous dialogue. It creates a sense of artificiality that undermines the authenticity of the interaction, leaving the listener feeling uneasy or distrustful.

The uncanny valley effect has profound implications in the era of AI, influencing how technologies are designed, implemented, and received by the public. It underscores how important it is to be authentic and sincere when we write or chat. It just goes to show: nothing beats real, genuine human expression when it comes to connecting with others.
