🧠 How AI Makes Words Weightless

PLUS: What Happens When Language Speaks Without a Speaker

Welcome back, AI prodigies!

In today’s Sunday Special:

  • 📜The Prelude

  • 📚How LLMs Remove Context

  • ⚓️What Do We Lose?

  • ⚔️LLMs vs. Truth

  • 🔑Key Takeaway

Read Time: 7 minutes

🎓Key Terms

  • Tokens: A word or part of a word. For example, “yesterday” might be broken into three tokens: “yes,” “ter,” and “day.”

  • Large Language Models (LLMs): AI models pre-trained on vast, high-quality datasets to generate human-like text.
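The token splitting described above can be sketched with a toy greedy longest-match tokenizer. Real tokenizers (such as byte-pair encoding) are learned from data and more sophisticated; the vocabulary below is made up purely for illustration:

```python
def tokenize(text, vocab):
    """Toy greedy longest-match subword tokenizer (illustration only)."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest vocabulary piece that matches at position i.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: fall back to a single char
            i += 1
    return tokens

vocab = {"yes", "ter", "day", "the", "cat"}
print(tokenize("yesterday", vocab))  # → ['yes', 'ter', 'day']
```

The word is carved into the largest known pieces, left to right, which is why "yesterday" becomes three tokens rather than one.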

🩺 PULSE CHECK

Would you trust advice more if you knew who gave it?

Vote Below to View Live Results

Login or Subscribe to participate in polls.

📜THE PRELUDE

Imagine walking into a courtroom: the judge is absent, the jury is missing, and the defendant is gone. Still, a gavel strikes, and the verdict is read aloud. Justice is rendered, but no one’s there to bear witness to it.

That’s what AI-generated text often feels like: words that mimic meaning but lack the human presence that once gave them weight.

When we type, our words carry context, connection, and conviction. In other words, our words possess an underlying consciousness. Each sentence reflects a lived experience crafted from our unique perspective.

LLMs provide us with an infinite amount of AI-generated text on demand, but they’ve also changed the nature of text itself. We used to assume that when we read something:

  1. Someone meant to say something.

  2. Someone stood behind the words.

How exactly are LLMs engineered to produce text without the human presence that usually stands behind it? And why can’t LLMs reliably tell what’s true?

📚HOW LLMs REMOVE CONTEXT

⦿ 1️⃣ ⚙️How LLMs Are Built.

To understand what we lose, we must first examine how LLMs are constructed.

LLMs are essentially statistical tools designed to predict the probability of a sequence of words. You can view them as sophisticated autocomplete machines trained on the entire internet.

For example, when given: “The cat chased the {BLANK}!” LLMs ask themselves: given the words so far, what’s the most likely next word?

Each time LLMs guess wrong, they adjust billions of Weights: numerical parameters that control how tens of thousands of words relate to one another. These weights form the Neural Network (NN): a network of interconnected nodes that processes words using two methods:

  1. 📍Attention Mechanisms calculate how much attention each word within a sentence should pay to every other word. Consider the following sentence: “The cat chased the mouse!” In this case, “cat” and “mouse” would pay more attention to each other because the past-tense transitive verb “chased” connects them, indicating that the “cat” is actively pursuing the “mouse.”

  2. 📌Transformer Layers further refine the meaning of each word within a sentence, enabling LLMs to develop a deeper understanding of the context. Consider the same sentence: “The cat chased the mouse!” A layer examines the word “chased” and determines that “cat” matters because it’s doing the chasing, and that “mouse” matters because it’s being chased.
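The attention idea in step 1 can be sketched in a few lines. This uses made-up 2-dimensional word vectors; real models use learned embeddings with hundreds or thousands of dimensions:

```python
import math

def softmax(xs):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query word (toy sketch)."""
    d = len(query)
    # Score each key by its similarity to the query, scaled by sqrt(dimension).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    # Blend the value vectors, weighted by how much attention each word receives.
    blended = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]
    return blended, weights

# Hypothetical embeddings for "cat", "chased", "mouse" (numbers are made up).
keys = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
values = keys
query = [1.0, 0.0]  # "cat" asking: which other words relate to me?
out, weights = attention(query, keys, values)
print(weights)  # the "mouse"-like vector earns more weight than the unrelated one
```

Words whose vectors point in similar directions earn higher weights, which is the mechanism behind "cat" and "mouse" attending to each other.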

⦿ 2️⃣ Bias Toward the Expected?

LLMs absorb words statistically, not experientially. In other words, they map how words tend to appear together, rather than whether the sentences are grounded in reality. This gives them extraordinary fluency but a bias toward the expected. Cognitive scientists describe this pull as Regression Toward the Mean (RTM): a drift back toward the average, typical outcome rather than the extreme one. LLMs regress to the safest choice when predicting the next word in a sentence because it’s statistically the most probable.

In one of the most rigorous examinations of how LLMs impact idea generation, scientists at the University of Michigan (UofM) conducted a global experiment with over 800 participants across 40 countries. The participants were presented with creative ideas on a specific topic generated by LLMs. They were then asked to come up with their own original ideas on the same topic.

This global experiment revealed two critical patterns:

  1. 💭Exposure to creative ideas generated by LLMs increased the overall number of original ideas the participants produced.

  2. 💡Yet those original ideas became semantically similar across participants, clustering around common themes.

This combination is what makes LLMs feel simultaneously abundant and strangely uniform: it’s easier to generate original ideas, but those original ideas orbit around a statistical center rather than exploring frontiers of possibility. This global experiment identified the first clue that AI-generated text is optimized for probability, not situated truth.

⚓️WHAT DO WE LOSE?

⦿ 3️⃣ Who’s Speaking?

In speech, words hold meaning, and that meaning often translates to action. In the 1950s, prominent British philosopher J. L. Austin developed Speech Act Theory (SAT): words don’t merely describe the world; they do things. This concept was later expanded by renowned American philosopher John Searle, who classified speech into five categories:

1. Directives {Requests}: “Please close the window.”

2. Expressives {Apologies}: “Sorry for the confusion earlier.”

3. Commissives {Promises}: “I promise to call you later today.”

4. Assertives {Stating Facts}: “The capital of France is Paris!”

5. Declarations {Decrees Altering Reality}: “You’re officially fired.”

These forms of speech have force because they’re backed by context, authority, and sincerity. For example, a judge saying: “I sentence you!” carries legal weight; a casual passerby saying the same thing doesn’t.

When LLMs provide us with AI-generated text, this chain is broken. “I promise” is no longer a commitment; it’s merely a string of Tokens that mimic commitment. The performative dimension collapses, leaving words that appear fluent yet remain hollow. In other words, AI-generated text can feel persuasive yet strangely weightless: it simulates the form of action without the responsibility that lends those actions their force.

⦿ 4️⃣ When and Where?

In the 1970s, influential American philosopher David Kaplan developed the formal semantics of Indexicals and Demonstratives: words whose meaning depends entirely on context. Indexicals are words like “I,” “you,” “here,” and “now.” Demonstratives are words like “this,” “that,” “these,” and “those.”

His findings were simple yet profound: to interpret a sentence containing Indexicals or Demonstratives, you must know who, when, and where. Without those contextual anchors, the sentence is underspecified or meaningless. “I’ll call you tonight” only communicates something actionable if the listener knows who “I” is, who “you” is, and what counts as “tonight.”

LLMs rarely supply these situational anchors. By design, they’re placeless and timeless. They produce AI-generated text that appears coherent but detaches from the concrete “here-and-now” that provides critical context.

⚔️LLMS VS. TRUTH

⦿ 5️⃣ Prediction Without Grounding.

The loss of context would be less worrying if LLMs could at least guarantee correctness. But their architecture makes this impossible.

LLMs work by Next-Token Prediction (NTP): given a sequence of words, choose the next word with the highest probability based on past co-occurrences. NTP is a purely statistical operation. It has no internal representation of whether a statement matches reality. If the phrase “The Eiffel Tower is located in…” is usually followed by “Paris,” LLMs will output “Paris.” That happens to be true, but only because the data distribution reflected reality, not because LLMs verified it.
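A toy illustration of why NTP can’t separate truth from frequency. The counts below are hypothetical; the point is that the same mechanism produces a true answer and a false one with equal confidence:

```python
from collections import Counter

def next_token(counts):
    """Greedy next-token prediction: emit whatever followed most often in training."""
    return counts.most_common(1)[0][0]

# Hypothetical counts of what followed "The Eiffel Tower is located in"
# in two different training corpora.
truthful = Counter({"Paris": 950, "France": 40, "Rome": 10})
corrupted = Counter({"Rome": 950, "Paris": 50})

print(next_token(truthful))   # 'Paris' — true, but only because the data said so
print(next_token(corrupted))  # 'Rome' — same mechanism, confidently wrong
```

Nothing in the prediction step checks the world; only the statistics of the corpus change the answer.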

🔑KEY TAKEAWAY

LLMs replace context with something thin, statistical, and strangely uniform. They give us infinite AI-generated text but strip words of the human presence that gave them weight. What we’re left with are sentences optimized for plausibility, not truth.

📒FINAL NOTE

FEEDBACK

How would you rate today’s email?

It helps us improve the content for you!

Login or Subscribe to participate in polls.

❤️TAIP Review of The Week

“A must-read for anyone curious about AI, I always learn something new.”

-Cam (1️⃣ 👍Nailed it!)

REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving 🎓3 Simple Steps to Turn ChatGPT Into an Instant Expert.