🧠 How LLMs Reshape Our Thoughts

PLUS: What AI Confidence Does to Your Own

Welcome back AI prodigies!

In today’s Sunday Special:

  • 📜The Prelude

  • 💭Language Influences Our Thoughts?

  • 🔊LLMs Sound Safe, But Aren’t.

  • 🚦How LLMs Inhibit Intuition

  • 🔑Key Takeaway

Read Time: 7 minutes

🎓Key Terms

  • Large Language Models (LLMs): AI models pre-trained on vast amounts of text to generate human-like text.

  • Reinforcement Learning from Human Feedback (RLHF): a training technique that uses human feedback to teach LLMs to align with human preferences.

🩺 PULSE CHECK

When ChatGPT sounds confident, do you trust it more?

Vote Below to View Live Results


📜THE PRELUDE

You ask ChatGPT: “What’s a simple question I can ask myself to ensure I pursue a fulfilling career?”

It prepares two potential responses:

  1. Output A: “Imagine what a dream workday would look like in 5 years.”

  2. Output B: “Imagine what a dream workday could look like in 5 years.”

That subtle shift from would to could is more than just semantics; it invites openness. Output A builds upon your current expectations, whereas Output B inspires you to reimagine what’s possible.

This example highlights the subtle influence of language. It doesn’t just describe our reality; it influences how we perceive, assess, and create it.

So, how exactly does language shape our thoughts? What cognitive biases are embedded within LLMs? How are LLMs reshaping the way we evaluate ideas?

💭LANGUAGE INFLUENCES OUR THOUGHTS?

In the 1930s, American linguistic anthropologist Edward Sapir and his protégé, Benjamin Lee Whorf, proposed the Sapir-Whorf Hypothesis, which states that the grammatical structures and verbal arrangements we choose to use within our language influence how we perceive the world. In other words, language either determines or influences our thoughts.

Consider the Guugu Yimithirr people, an Aboriginal community of northern Australia. Instead of using terms like “left” or “right,” they communicate in absolute cardinal directions (i.e., north, south, east, and west). For example, “move the cup northeast” rather than “move the cup right.” In 1997, British social scientist Stephen Levinson found that this communication style enabled them to maintain perfect spatial orientation at nearly all times. He contrasted this with English speakers, who often become disoriented when reference points like landmarks disappear. The meaning of an English speaker’s directions may change based on their body position, whereas the Guugu Yimithirr people use directional terms that remain constant regardless of which way they’re facing.

Language also influences cultural norms, as is evident in the use of personal pronouns. In 1998, Japanese sociocultural psychologist Yoshi Kashima analyzed 39 languages across 71 cultures and found that languages spoken in more individualistic cultures are significantly less likely to omit personal pronouns before verbs. For instance, English is typically spoken in more individualistic cultures, which often require the use of personal pronouns. In contrast, Japanese is spoken in more collectivist societies, where personal pronouns like “I” are often omitted. In short, the higher a culture’s individualism, the more likely its speakers are to rely on personal pronouns.

The language we use every day shapes our cognitive abilities and cultural norms. But when the same language is compressed into a conversational chatbot like ChatGPT, how does its influence on us evolve?

🔊LLMs SOUND SAFE, BUT AREN’T.

⦿ 1️⃣ 🦺 Why LLMs Seem Cautious.

If you’ve ever asked an LLM for advice, you’ve probably noticed it tends to generate analytical responses with Hedging Language like “It’s important to note that” or “While {X} is true, {Y} also matters.” So, why are such framings so prevalent?

LLMs, such as OpenAI’s o4-mini, were trained on large portions of text from the Internet, including academic journals (e.g., Nature), media outlets (e.g., Forbes), and encyclopedias (e.g., Wikipedia).

These genres of text tend to favor nuance, qualification, and diplomacy. More importantly, leading AI firms like OpenAI have explicitly prioritized training LLMs on these genres of text.

During training, human reviewers provide feedback to LLMs through a process known as RLHF. These human reviewers likely approved Hedging Language because it appears more neutral and broadly acceptable, appealing to a wider range of users.

However, Hedging Language implies there are two equally valid sides to every issue. This implication creates a moral equivalency between positions that may not be ethically or factually comparable. Over time, Hedging Language trains us to treat viewpoints as equally valid, even when one viewpoint is rooted in evidence and another viewpoint is rooted in misinformation or harm.
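
To make this RLHF dynamic more concrete, here is a minimal, hypothetical sketch of how reviewer preferences for qualified answers could end up rewarding Hedging Language. The example responses, the HEDGING_MARKERS list, and the scoring rule are illustrative assumptions, not OpenAI’s actual training pipeline.

```python
# Hypothetical sketch: how RLHF preference labels can end up rewarding hedging.
# The responses, markers, and scoring rule below are invented for illustration.

HEDGING_MARKERS = [
    "it's important to note",
    "while",
    "on the other hand",
]

def hedging_score(response: str) -> int:
    """Count hedging phrases in a candidate response."""
    text = response.lower()
    return sum(text.count(marker) for marker in HEDGING_MARKERS)

# Two candidate outputs for the same prompt.
output_a = "Remote work is better for deep focus."
output_b = ("It's important to note that remote work helps deep focus, "
            "while in-office work can improve collaboration.")

# If human reviewers consistently prefer the more qualified answer,
# that preference label (B over A) is what the reward model learns to imitate.
preferred = output_b if hedging_score(output_b) > hedging_score(output_a) else output_a
print("Reviewer-preferred response:", preferred)
```

In practice, a reward model doesn’t key on specific phrases; it learns whatever regularities separate preferred from rejected responses, which is how a stylistic habit like hedging can get baked in.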

⦿ 2️⃣ 💬 How LLMs Oversimplify Solutions.

LLMs can also corrupt our decision-making and problem-solving capacity. In 1986, the renowned American psycholinguist J. Kathryn Bock coined the phrase “Structural Priming,” which describes our tendency to unconsciously mimic sentence structures we’ve recently encountered. For example, repeated exposure to the passive voice can increase our use of passive constructions. When we describe our actions in the passive voice (e.g., “Mistakes were made”) instead of the active voice (e.g., “I made a mistake”), we subtly remove personal responsibility. The way LLMs construct their outputs can have a similar effect on us.

When you use ChatGPT’s “Think Mode” feature, the chatbot tends to respond to complex questions with numbered lists or hierarchical breakdowns. While this structured format improves clarity, it encourages us to frame inherently interconnected issues as a sequence of isolated, linear steps, prioritizing Reductionism over Holistic Understanding. In simpler terms, it helps us process information more clearly but can lead us to overlook how things fit together as a whole.

Take a complex issue like homelessness. A Reductionist approach might offer distinct, actionable steps: “1. Build more shelters, 2. Expand addiction treatment, 3. Invest in job training.” Reductionism frames each step as a self-contained solution, suggesting that executing them in a specific order will sufficiently address the problem in its entirety.

Holistic Understanding, by contrast, examines how these factors interact. Building more shelters provides immediate relief, but without removing restrictive zoning laws, permanent housing will remain scarce. Expanding addiction treatment is critical, but those services must prioritize rehabilitation, not just safe injection sites. Similarly, investing in job training may backfire if individuals lack stable housing or childcare. Holistic Understanding connects these interventions, recognizing that housing, health, and employment are interdependent.

🚦HOW LLMs INHIBIT INTUITION

LLMs have recently made significant strides in reasoning, pushing the boundaries of what conversational chatbots can do. They achieve this by leveraging Test-Time Compute (TTC), which allocates more computing power during AI Inference: everything that happens after you enter your prompt. TTC relies on two reasoning frameworks:

  1. ⛓️‍💥Chain-of-Thought (CoT) to break complex problems into manageable sub-problems, solve each one, and combine the results into a complete solution (see the sketch after this list).

  2. 🎯Reinforcement Learning (RL) to mimic the “trial-and-error” process humans use to learn, where decisions that lead to desired outcomes are reinforced.
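
As referenced in the CoT item above, here is a minimal sketch of what a CoT-style decomposition can look like when orchestrated in code. The prompts and the ask_llm helper are hypothetical placeholders, not any specific vendor’s API.

```python
# Minimal Chain-of-Thought sketch: decompose, solve sub-problems, then recombine.
# `ask_llm` is a hypothetical stand-in for whatever chat-completion client you use.

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM call here.
    return f"[model response to: {prompt[:40]}...]"

question = "How many months of runway does a startup have with $120k and a $15k monthly burn?"

# Step 1: ask the model to break the problem into numbered sub-problems.
sub_problems = ask_llm(f"Break this question into numbered sub-problems:\n{question}")

# Step 2: solve each sub-problem, carrying earlier answers forward.
partial_answers = ask_llm(f"Solve each sub-problem step by step:\n{sub_problems}")

# Step 3: combine the intermediate reasoning into a single final answer.
final_answer = ask_llm(f"Using this reasoning:\n{partial_answers}\nAnswer the original question: {question}")
print(final_answer)
```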

CoT enhances clarity and readability, and RL rewards effective CoT approaches. But CoT prioritizes analytical thinking over intuition. So, why does this matter? It turns out that cutting-edge LLMs, which rely on CoT to solve complex problems, can degrade human intuition.

The National University of Singapore (NUS) recently collaborated with Microsoft to investigate how AI confidence levels affect human self-confidence during decision-making tasks. They recruited 270 U.S. participants to make predictions across 3 Stages:

  1. Stage 1: Alone

  2. Stage 2: With AI Assistance

  3. Stage 3: Alone Again

During each Stage, the U.S. participants saw 40 demographic profiles, which included age, gender, occupation, education level, and hours worked per week. Then, they predicted whether the annual income of a given demographic profile exceeded $50,000 and provided a confidence level with their prediction, ranging from 51% to 100%. In Stage 2, after consulting AI Assistance, they could revise their prediction.

After consulting AI Assistance, the participants’ confidence shifted to align with the AI’s. In Stage 1, they were, on average, 12 percentage points (pp) more confident than the AI; this gap shrank to 5 pp in Stage 2, indicating stronger alignment with the AI. In Stage 3, the gap widened only slightly to 8 pp, never returning to its original 12 pp, which suggests a lasting influence from AI Assistance.
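
As a rough illustration of how those gaps are computed (participant confidence minus AI confidence), here is a toy calculation; the individual confidence values are invented solely so the gaps mirror the 12 pp, 5 pp, and 8 pp figures above.

```python
# Toy illustration of the confidence gap reported in the NUS/Microsoft study.
# The per-stage values are invented; only the resulting gaps mirror the article.

ai_confidence = 70  # hypothetical AI confidence on a profile, in percent

participant_confidence = {
    "Stage 1 (alone)": 82,        # 12 pp above the AI
    "Stage 2 (with AI)": 75,      # gap shrinks to 5 pp
    "Stage 3 (alone again)": 78,  # gap rebounds only to 8 pp
}

for stage, confidence in participant_confidence.items():
    gap = confidence - ai_confidence
    print(f"{stage}: participant {confidence}% vs. AI {ai_confidence}% -> gap of {gap} pp")
```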

🔑KEY TAKEAWAY

LLMs don’t merely provide information; they nudge us toward counterproductive behaviors, such as appeasing every side of a given issue, solving complex problems with linear thinking, and aligning our confidence with theirs. As we increasingly rely on LLMs to solve complex problems, we risk degrading our ability to uncover the cause of a disagreement, think critically, and trust our gut.

📒FINAL NOTE

FEEDBACK

How would you rate today’s email?

It helps us improve the content for you!


❤️TAIP Review of The Week

“The G.O.A.T of AI newsletters! 🐐”

-Paul (1️⃣ 👍Nailed it!)

REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving 🎓3 Simple Steps to Turn ChatGPT Into an Instant Expert.