🧠 AI and Emotion: Aligned or Aloof?

PLUS: HireVue’s Unethical Practices Exposed

Welcome back AI prodigies!

In today’s Sunday Special:

  • 💥Present State

  • 📊Does It Work?

  • 💰Cost > Benefit?

  • 🔑Key Takeaway

Read Time: 6 minutes

🎓Key Terms

  • Artificial General Intelligence (AGI): AI models that can perform tasks as well as humans and exhibit human traits such as critical reasoning, intuition, consciousness, sentience, and emotional awareness.

  • Affective Computing: the study and development of technologies that can recognize, interpret, process, and simulate human emotions.

🩺 PULSE CHECK

Do you trust AI to recognize human emotions?


💥PRESENT STATE

While everyone has been up in arms about OpenAI’s ChatGPT, Artificial General Intelligence (AGI), and the prospect of robots replacing traditional jobs (e.g., assembly line roles, data entry, or customer service), regulators have been ramping up warnings about a quieter AI development: emotion recognition. Emotion recognition identifies a person’s feelings or state of mind by applying AI analysis to video, facial images, or audio recordings. AI models that recognize emotions may seem easy to conceive. For example, an AI model may see an open mouth, squinted eyes, and contracted cheeks with a thrown-back head, register it as a laugh, and conclude that the subject is happy. In practice, however, this is incredibly complex, and some argue it is a dangerous and invasive example of the pseudoscience that AI often produces. We frequently laugh to defuse awkwardness or hide pain; interpreting facial expressions requires context.
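To see why critics are skeptical, consider a toy sketch of the naive mapping described above: facial "action units" in, emotion label out. Everything here is hypothetical and deliberately simplistic; no real system is this crude, but the underlying assumption (expression equals feeling) is the same one researchers dispute.

```python
# Toy, purely illustrative sketch of naive emotion recognition.
# The action-unit names and rules are hypothetical, not from any real system.

def classify_expression(action_units: set) -> str:
    """Map a set of observed facial action units to a coarse emotion label."""
    # Naive rule: open mouth + squinted eyes + raised cheeks => "happy".
    if {"open_mouth", "squinted_eyes", "raised_cheeks"} <= action_units:
        return "happy"
    # Naive rule: lowered brows + pressed lips => "angry".
    if {"lowered_brows", "pressed_lips"} <= action_units:
        return "angry"
    return "unknown"

# The core criticism: the same surface expression can arise from very
# different internal states (a laugh may mask pain or defuse awkwardness),
# so rules like these conflate appearance with feeling.
print(classify_expression({"open_mouth", "squinted_eyes", "raised_cheeks"}))  # happy
```

A laughing face produced by nervousness and one produced by joy look identical to this classifier, which is precisely the context problem critics point to.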

In response to these concerns, various privacy and human rights advocates, including European Digital Rights (EDRi) and Access Now, have called for a blanket ban on emotion recognition. The European Union (EU) has already taken a significant step in this direction by approving a ban that prohibits emotion recognition in policing, border management, workplaces, and schools. In the United States (US), several legislators have also voiced their concerns about emotion recognition, which will likely be a key focus of future AI regulation. Senator Ron Wyden of Oregon, a leading lawmaker on the issue, praised the EU for tackling emotion recognition and dismissed the technology as “little more than digital palm reading.”

📊DOES IT WORK?

Despite the breadth of its deployment, substantial evidence indicates that emotion recognition models are simply not accurate. The Association for Psychological Science (APS) brought together a group of scientists who spent two years reviewing more than 1,000 papers on emotion detection. They focused on research into how people move their faces when they feel certain emotions and how people infer others’ emotional states from their faces. The group concluded that facial expressions alone are a poor basis for accurately judging how someone is feeling.

People smile when they’re happy and frown when they’re sad, but the correlation is weak, explained study coauthor Lisa Feldman Barrett, a neuroscientist at Northeastern University. People do plenty of other things when they’re happy or sad, too, and a smile can be sarcastic or ironic. Expressive behaviors vary widely across cultures and situations, and context plays a significant role in interpretation. For example, in studies where researchers placed a picture of a cheerful face on the body of someone in a negative situation, viewers experienced the face as more negative.

In short, the expressions we’ve learned to associate with emotions are stereotypes, and technology built on those stereotypes doesn’t provide reliable information. Getting emotion recognition right is expensive and requires more precise data, Barrett says, than anyone currently has.

💰COST > BENEFIT?

As with any technology, there are valuable applications, such as helping people who are blind or have low vision better understand the emotions of those around them. For example, Affectiva, a software development company that specializes in humanizing technology, has been exploring how AI models that analyze facial expressions might determine whether a driver is tired or evaluate how audiences react to a movie trailer.

Others, like HireVue, the video-based job screening tool, have used emotion recognition to screen for the most promising candidates, analyzing video interviews to assign each candidate an “employability score” based on facial expressions, body posture, word usage, and pronouns. This practice has been met with heavy criticism. In 2019, the Electronic Privacy Information Center (EPIC) asked the Federal Trade Commission (FTC) to investigate HireVue for “unfair and deceptive trade practices” related to its use of AI, particularly “secret, unproven algorithms,” in the interview process. At first, HireVue dismissed the complaint as without merit, claiming, “We uphold the highest levels of rigor and ethics daily to increase fairness and objectivity in the hiring process.” After intense public backlash, however, HireVue reversed course, conceding that the technology “wasn’t worth the concern.” Yet HireVue continues to analyze biometric data from job applicants, including speech, intonation, and behavior, all of which present similar privacy and discrimination risks.

Other applications are even more alarming. Several companies sell lie-detection software to law enforcement. A pilot project called iBorderCtrl, ironically sponsored by the EU, offers a version of emotion recognition as part of the technology stack it uses to manage border crossings. Its proprietary Automatic Deception Detection System (ADDS) calculates the probability of deceit in interviews by analyzing non-verbal micro-gestures (e.g., a raised eyebrow, a tilted head, or shifts in eye contact). But the most high-profile use, or abuse, of emotion recognition technology comes from China, where the Uyghurs live under extreme surveillance in Xinjiang. As one human rights advocate warned the British Broadcasting Corporation (BBC), emotion recognition there was meant to identify a nervous or anxious “state of mind,” like a lie detector. Beyond oppressed Muslim groups, some Chinese schools have used the technology to measure students’ comprehension and academic performance.

🔑KEY TAKEAWAY

Will science ever fully decode emotions? We might see advances in affective computing as the underlying science of emotion progresses, or we might not. As AI expert Meredith Broussard has asked, can everything be distilled into a math problem? Perhaps emotion recognition will require us to understand the brain fully; the last attempt took $1 billion and fell short. Time will tell.

📒FINAL NOTE

If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.

How was today’s newsletter?

❤️TAIP Review of the Day

“I like the Pulse Check polls feature.”

-Jason (⭐️⭐️⭐️⭐️⭐️Nailed it!)
REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving ⚙️Ultimate Prompt Engineering Guide.

Refer 9 friends to enter 🎰May’s $200 Gift Card Giveaway.
