🧠 Has AI Become Too Human?

PLUS: How Social Psychology Explains Our Reaction to Human-Like Robots

Welcome back AI prodigies!

In today’s Sunday Special:

  • 🤖What Is It?

  • ⚙️Does It Exist?

  • 💭Does It Matter?

  • 🔑Key Takeaway

Read Time: 7 minutes

🎓Key Terms

  • Combinatorial Explosion: the increasing complexity of a mathematical problem due to the rapid growth of inputs and constraints (e.g., Sudoku).

  • Medial Prefrontal Cortex (mPFC): a brain region in the frontal lobe that regulates higher cognitive skills like language, reasoning, planning, and social interactions.

🩺 PULSE CHECK

Do “almost-human” robots make you feel uncomfortable?

Vote Below to View Live Results


🤖WHAT IS IT?

Since the inception of OpenAI’s ChatGPT, AI applications have transformed from a niche resource for hobbyists, programmers, and industry insiders to an emerging technology carving out new industries.

AI-enabled content is everywhere, whether through distribution (e.g., the recommendation engine powering Instagram’s feed) or generation (e.g., the AI-generated images on The AI Pulse’s Daily Reports).

But AI, like any breakthrough technology, has introduced unique challenges, including concerns over security, data privacy, and job automation. However, these challenges assume adoption.

⛄️The “AI Winter” Hits?

In 1973, the UK’s Science Research Council asked British applied mathematician Sir Michael James Lighthill to evaluate the state of AI research. In the resulting Lighthill Report, he criticized the utter failure of AI to achieve its “grandiose objectives.” To him, the combinatorial explosion problem prevented AI algorithms from tackling real-world problems; instead, he believed they were confined to narrowly defined numerical challenges in controlled environments with limited variables. The report helped usher in the first “AI Winter,” a period of reduced funding and interest in AI research that set in by 1974.
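Lighthill’s combinatorial-explosion worry is easy to demonstrate with a puzzle like Sudoku: a brute-force solver that tries every value for every open cell faces a search space that grows exponentially. Here’s a minimal Python sketch (the function name and cell counts are illustrative, not from the report):

```python
# Combinatorial explosion: the naive search space of a Sudoku-style
# puzzle grows exponentially with the number of open cells.
def naive_search_space(open_cells: int, choices_per_cell: int = 9) -> int:
    """Candidate grids a brute-force solver would have to enumerate."""
    return choices_per_cell ** open_cells

# A puzzle with 30 open cells already has ~4 x 10^28 candidate grids;
# a blank 81-cell grid has 9^81 (roughly 2 x 10^77).
for cells in (10, 30, 81):
    print(cells, naive_search_space(cells))
```

No amount of raw speed closes a gap like that, which is why Lighthill argued real-world problems were out of reach for the search-based AI of his era.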

That skepticism dovetailed with a hypothetical phenomenon known as the Uncanny Valley: the tendency to find objects that are almost, but not quite, human unsettling or repulsive.

🏔The Uncanny Valley?

In 1970, Japanese robotics professor Masahiro Mori of the Tokyo Institute of Technology (“Tokyo Tech”) proposed the Uncanny Valley. He envisioned people’s reactions to robots that looked and acted almost human. In particular, he hypothesized that a person’s response to a humanlike robot would abruptly shift from empathy to revulsion as the robot approached, but failed to attain, a lifelike appearance.

For decades, the Uncanny Valley served as the wellspring of ideas for science fiction (“sci-fi”). Now, interest in the hypothetical phenomenon is intensifying as conversational chatbots like OpenAI’s ChatGPT and humanoid robots like Tesla’s Optimus Gen 2 continue to evolve.

Will backlash to increasingly humanoid AI creations be widespread, and how might it manifest? Before determining the Uncanny Valley’s relevance, we must assess its legitimacy.

⚙️DOES IT EXIST?

Some neuroscience research supports the Uncanny Valley. For instance, monitoring humans with functional Magnetic Resonance Imaging (fMRI) shows that the brain behaves differently when confronted with near-humanness.

German scientists published a research article titled “Neural Mechanism for Accepting and Rejecting Artificial Social Partners in the Uncanny Valley,” which supported the hypothesis by pinpointing the brain regions activated when a subject rejects an artificial social partner.

Participants were shown images of humans, artificial humans, and humanoid robots. They were required to rate the likability and human likeness of each image.

Next, the participants were asked to select the pictured agent they would most trust to choose a personal gift a human would like.

Here, the German scientists discovered that participants favored humans or the more human-like artificial humans but avoided the images closest to the human/non-human boundary.

By measuring participants’ brain activity during these selections, the scientists traced this sense of discomfort to the Medial Prefrontal Cortex (mPFC), which processes higher cognitive skills like language, reasoning, planning, and social interactions.

Two distinct parts of the mPFC drove the effect: one converted the human-likeness signal into a “human detection” signal, and the other integrated that “human detection” signal with a likability evaluation to produce an activity pattern that closely matched the Uncanny Valley response.

However, associating brain activity with human-like stimuli doesn’t constitute conclusive scientific evidence. A recent review of Uncanny Valley research found little evidence to support the hypothetical phenomenon. Critics argue that the Uncanny Valley conflates other psychological effects, or that it simply reflects our far greater exposure to humans than to robots. As is often the case in social psychology, the variety of potential causes and the scarcity of substantive research make conclusive judgments difficult.

💭DOES IT MATTER?

Setting the scientific literature aside, we’ve all felt suspicious of realistic AI-generated content. For example, deepfakes are hyperrealistic media manipulations: computer-generated images or videos that depict events, statements, or actions that never happened. Deepfakes are leveraged to create funny memes or weaponized to spread misinformation. Either way, when we see one, we instantly try to detect it.

MIT Media Lab recently launched Detect Fakes, a research initiative that tests our ability to detect synthetic media.

Blurring the lines between reality and fantasy raises our suspicions and stokes the worry that we soon won’t be able to tell the difference. This anecdotal evidence alone warrants further investigation.

Analysts also point to AI Artifacts: the flawed outputs AI models generate because of issues with their training datasets. For example, Generative AI (GenAI) tends to leave physically inaccurate or impossible details in its creations. You’ve likely seen AI-generated images of oddly shaped human hands with extra fingers. Perhaps those extra fingers trigger the Uncanny Valley effect.

With video generators like OpenAI’s Sora, I was amazed by high-resolution, richly colored videos of utterly made-up visual scenes, especially given a seemingly endless set of possible prompts. At the same time, I was struck by the poor physics: a blatant disregard for Newton’s laws of motion. This tension between utility and futility shows that GenAI is a complement to human creators, not a substitute. Viewed this way, imperfections aren’t a limitation, as the Uncanny Valley proposes; they remind us to fine-tune outputs to human standards.

🔑KEY TAKEAWAY

While the Uncanny Valley suggests that people may experience discomfort or even revulsion toward AI creations that are nearly, but not quite, human, there is limited scientific evidence to support its existence definitively. Still, anecdotal evidence, such as widespread suspicion of deepfakes, indicates that humans have reservations about AI-generated content that blurs the lines between reality and fantasy. As AI systems continue to improve, it is crucial to address these concerns so that AI benefits society while minimizing potential harms.

📒FINAL NOTE

FEEDBACK

How would you rate today’s email?

It helps us improve the content for you!


❤️TAIP Review of The Week

“Another Sunday, another Sunday Special!😁”

-Jack (1️⃣ 👍Nailed it!)

REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving ⚙️Ultimate Prompt Engineering Guide.

Refer 3 friends to learn how to 👷‍♀️Build Custom Versions of OpenAI’s ChatGPT.
