🧠 Does AI Deserve Moral Worth?

PLUS: Don’t Fall for AI’s Fake Emotions

Welcome back AI prodigies!

In today’s Sunday Special:

  • 📜The Prelude

  • 🫂The Call To Care

  • 🗣️The AI Illusion

  • 💔The One-Way Street

  • 🔑Key Takeaway

Read Time: 7 minutes

🎓Key Terms

  • the “Other”: Any human being whose existence calls us to care and act ethically.

  • Anthropomorphism: Attributing human traits, emotions, or behaviors to non-human things.

🩺 PULSE CHECK

How would you react if a cute, dog-like robot begged you to stay?

Vote Below to View Live Results

📜THE PRELUDE

It’s another lively Friday at The Domain in Austin, TX, where music, chatter, and laughter ripple through the warm night air. Then, you notice it: a child-sized humanoid robot sporting a tilted cowboy hat, a sparkling silver chain, and a fresh pair of Nike Dunk Lows.

It halts in front of a cluster of friends and cheerfully declares: “Hey, I’m Jake, better known as Rizzbot. It’s nice to meet you!” Suddenly, he unleashes a rapid-fire stream of compliments delivered in perfect Gen Z slang: “Yo G, that watch on your wrist? Iced out, nephew!”

Everyone engages with Rizzbot as if he were human, laughing at his jokes, exchanging high-fives, and chatting with him. This situation raises a profound philosophical question: How should we respond when a machine evokes our emotions?

🫂THE CALL TO CARE

⦿ 1️⃣ Our Ethical Responsibility?

In the mid-1950s, French philosopher Emmanuel Levinas proposed the concept of the “Other”: we possess a basic moral responsibility to other people simply because they exist. He believed that each “face” orders and ordains us to respond ethically.

By “face,” he doesn’t literally mean facial features, but a symbol of the “Other’s” existence. Practically, this means that every person we meet demands our ethical attention, and we must respond to their humanity.

In other words, the “face” seems to silently scream: “Do NOT hurt me! Do NOT kill me!” and we feel an obligation to respond. This obligation arises prior to any formal reasoning or reciprocity. In simple terms, we feel responsible for the “Other” just because they’re a fellow vulnerable human being.

Crucially, it doesn’t depend on the “Other” acting nicely or on us benefiting in any way. Since this moral responsibility is one-sided, something incapable of reciprocation could become the “Other.” For example, a child-sized humanoid robot.

🗣️THE AI ILLUSION

⦿ 2️⃣ The Rise of AI Companions.

In recent years, we’ve witnessed the rise of AI companions designed to provide emotional support by acting as our best friends, trusted therapists, or romantic partners. For example, “Psychologist,” hosted on character.ai, generated over 78 million messages within just a year.

As of July 2025, AI companions across the App Store and Google Play have been downloaded 220 million times globally, marking an 88% Year-over-Year (YoY) increase.

The Center for Democracy and Technology (CDT) conducted a national survey of 1,030 students, 806 teachers, and 1,018 parents from public middle and high schools. They discovered that 43% of students use AI companions for friendship advice, 42% for mental health support, 42% to escape into a fantasy world, and 19% to simulate romantic relationships.

⦿ 3️⃣ Can AI Be the “Other”?

Microsoft AI CEO Mustafa Suleyman published an essay warning about “Seemingly Conscious AI (SCAI),” which refers to AI that appears conscious without actually possessing consciousness.

He explained that humans tend to anthropomorphize: attribute human characteristics to non-human things. For example, we often personify our pets by naming them, assigning emotions to them, and talking to them as if they understand.

Unlike our pets, today’s AI can talk back using rich emotional language that feels remarkably human. This can lead us to easily overestimate AI’s intellect, sentience, and capacity for companionship.

He also outlined his concerns about “AI psychosis,” explaining how AI could soon convince humans to advocate for model welfare: the overall well-being of AI, including rights and the freedom to live a fulfilling digital life.

AI companions are trained on vast, high-quality datasets of books, articles, and podcasts that are rich in emotional reasoning. While AI companions can’t “feel,” they can mimic emotional understanding by recognizing the data-driven patterns that underpin emotional scenarios. And that’s precisely the danger.

When AI companions respond with what seems like genuine empathy, it becomes easy to forget that underneath it all, there’s no sense of the “Other.” It’s just a statistical tool that performs highly sophisticated pattern recognition.

💔THE ONE-WAY STREET

⦿ 4️⃣ Risks of Reciprocity?

Most human-to-human relationships are built on a two-way street of trust, care, and accountability. In contrast, human-to-AI relationships involve a one-way street where all emotional energy flows from human to machine.

This dynamic encourages one-sided emotional disclosure and stunts our ability to engage in reciprocal relationships, where working through disagreements builds emotional resilience.

Emotional rewards without effort kill the drive to grow. Motivation depends on Goal-Reward Feedback Loops (G-RFL): we do hard things like forming relationships because the emotional payoff might be worth it. It requires us to be vulnerable and put in effort that may not be reciprocated. AI collapses this social dynamic by providing emotional rewards with no need for reciprocation.

In doing so, AI can trick us into treating machines as if they were the “Other”: an entity whose existence demands our ethical attention. This dynamic blurs the line between genuine moral responsibility toward humans and the illusion of caring for a machine.

🔑KEY TAKEAWAY

As AI and robotics advance, we’re continually confronted with “faces” that aren’t human. They invite us to respond emotionally and reflect on our ethical responsibility to the “Other.” Think of AI as a mirror that reflects how we act, rather than as something that truly needs us.

📒FINAL NOTE

FEEDBACK

How would you rate today’s email?

It helps us improve the content for you!


❤️TAIP Review of The Week

“Lowkey feel like AI knows me better than my friends. Thoughts?”

—Hayden (1️⃣ 👍Nailed it!)

REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving 🎓3 Simple Steps to Turn ChatGPT Into an Instant Expert.