🧠 Is ChatGPT Flattening Language?

PLUS: What Hip-Hop Slang Can Teach Us About ChatGPT’s Effect on Language

Welcome back, AI prodigies!

In today’s Sunday Special:

  • 📜The Prelude

  • 🗑️Language Is a Living Mess

  • ⚔️Traditional Language vs. LLMs

  • 🧽The Smoothing Effect

  • 🔑Key Takeaway

Read Time: 7 minutes

🎓Key Terms

  • Large Language Models (LLMs): AI Models pre-trained on vast amounts of data to generate human-like text.

  • The Smoothing Effect: The tendency of AI Models to favor commonly used words, phrases, and expressions.

  • Creative Risk-Taking: The willingness to experiment with breaking grammatical rules by inventing new words and modifying meanings.

🩺 PULSE CHECK

Do most of ChatGPT’s responses sound the same?

Vote Below to View Live Results


📜THE PRELUDE

A poet asks ChatGPT to refine her poetry:

The original version:

The grief-drunk sky belches its copper pennies

while underneath, the Earth’s mouth

chews on tomorrow’s bones.

The result:

The sorrowful sky releases golden coins

while below, the Earth awaits

The promise of a new day.

āš™ļøOutput Source: OpenAI/ChatGPT/GPT-4o (ā€œoā€ for ā€œomniā€)/ā€œPlease edit this poem for clarity, rhythm, and impact.ā€/Snapshot

ChatGPT replaces the personification of the sky as "grief-drunk" with the flatter "sorrowful." So, why does this matter? The edit eliminates ambiguity, narrowing the reader's room for interpretation, and that openness to interpretation is what makes poetry powerful.

LLMs promise to enhance our expression. However, they systematically erase the very quality that makes language alive: its ability to change over time.

So, how exactly does language evolve? How do LLMs affect the evolution of language? And what do we lose in a society dominated by LLMs?

šŸ—‘ļøLANGUAGE IS A LIVING MESS

Language isn’t a fixed system; every word we use today was once new, unheard of, or unseen. Every grammatical rule we use today was once someone’s violation of traditional convention. The greatest literary pioneers helped shape the evolving landscape of language by creating new words, phrases, and expressions.

⦿ 1ļøāƒ£ Language of Shakespeare.

Perhaps no single person has had as significant an impact on language as English playwright, poet, and actor William Shakespeare. Historians credit him with inventing over 1,700 words that we still use today, including "lonely," "swagger," and "addiction."

In his world-renowned play "Romeo and Juliet," two young Italians try to find love while their families are feuding. After Juliet's apparent death, her father, Lord Capulet, mourns the loss on what was meant to be her wedding day. He describes it as an "uncomfortable time." By adding the prefix "un-" to "comfortable," Shakespeare captures the emotional strain and social dissonance of the moment.

In "Coriolanus," Shakespeare tells the story of a proud Roman general who's ultimately exiled by the very people he fought to protect. As Coriolanus grapples with his exile, Shakespeare uses the word "lonely" to describe his psychological state. The word captures the profound emotional isolation of a man alienated not only from his society but from his own identity.

During Shakespeare’s time, plays dominated popular culture. Thus, playwrights had a large platform to challenge established linguistic norms. In recent times, Hip-Hop artists have defined American culture, replacing old terms with new slang.

⦿ 2ļøāƒ£ Language in Hip-Hop.

In 1998, American rapper Lil Wayne coined the term "bling" to represent the flashy, sparkling quality of expensive jewelry. In 1999, fellow American rapper Baby Gangsta ("B.G.") released "Bling Bling."

Cultural historians refer to "Bling Bling" as a pivotal milestone in the "Bling Era" of Hip-Hop, a period when displaying wealth became a prominent theme. By 2002, the phrase "bling-bling" had become so popular within American culture that it earned a spot in the Oxford English Dictionary (OED).

In 2011, Canadian rapper, singer, and actor Drake released "The Motto," a hit single that popularized the acronym "YOLO," short for "You Only Live Once." He chose the phrase to capture his carefree, impulsive attitude toward life. Shortly thereafter, shirts featuring "YOLO" appeared in Macy's, Target, and Walgreens. Today, Millennials and Gen Z continue to use "YOLO" before taking spontaneous risks like quitting a job to change careers.

āš”ļøTRADITIONAL LANGUAGE VS. LLMs

⦿ 3ļøāƒ£ The Risk Profile of Language.

So, what unites the linguistic creativity of Shakespeare and Hip-Hop? Risk. Developing new words or repurposing old phrases requires a willingness to sound strange and violate public expectations. This risk economy has always been essential to language evolution.

Nearly a century ago, African American Vernacular English (AAVE) speakers began to use "bad" to mean "good" (i.e., "this is bad" means "this is awesome"). In 1927, a reviewer for Variety wrote the following about renowned jazz musician Duke Ellington: "Ellington's jazzique is just too bad." At the time, the paradox didn't make sense. How could someone so good at jazz be considered "bad"? However, with time, the colloquialism seeped beyond jazz into mainstream culture. Today, Gen Z and Millennials often use "bad" to mean "good."

Linguistic experiments often fail. But when they succeed, they give us more ways to express ourselves. LLMs, on the other hand, tend to reduce linguistic diversity.

⦿ 4ļøāƒ£ Linguistic Risks in LLMs.

LLMs like OpenAI's GPT-4o ("o" for "omni") and Anthropic's Claude Sonnet 4 operate on a fundamentally conservative principle: they predict the most statistically likely next word based on patterns learned from massive text datasets. As a result, they learn to associate "good" writing with high-probability word sequences, which means they gravitate toward language that sounds familiar, safe, and broadly acceptable.

This approach produces remarkably fluent, coherent, and human-like text. When asked to rewrite the poem, ChatGPT produced verses that sound poetic. However, it removed key stylistic elements, such as personification and ambiguity. This prioritization of linguistic conformity over diversity creates The Smoothing Effect.
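The next-word principle above can be sketched in a few lines of Python. This is a toy illustration, not any vendor's actual decoding code; the candidate words and their scores are invented for the example.

```python
import math

# Toy logits an LLM might assign to candidate next words after
# "The sorrowful sky ..." (words and scores invented for illustration).
logits = {"releases": 4.0, "weeps": 2.0, "darkens": 1.0, "belches": -3.0}

def softmax(scores):
    """Turn raw scores into a probability distribution over words."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(logits)

# Greedy decoding picks the single most probable word every time,
# so a strange, inventive choice like "belches" is effectively never emitted.
choice = max(probs, key=probs.get)
```

Sampling with a temperature can restore some variety, but the distribution itself is learned from what most writers say most often, which is exactly why the output drifts toward the familiar.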

🧽THE SMOOTHING EFFECT

⦿ 5ļøāƒ£ What’s The Smoothing Effect?

In 2024, Georgetown University (GU) explored the homogenizing effect of LLMs on creativity. In simpler terms, they explored how LLMs tend to make writing sound more similar and less unique, reducing creativity.

They had two groups of participants write college admissions essays:

  1. Group A used AI assistance.

  2. Group B didn’t use AI assistance.

Group A's college admissions essays were stylistically more uniform, with cleaner grammar and more sophisticated vocabulary. However, they lacked the personal flair, voice, and originality that make writing memorable. Simply put, what Group A gained in technical quality, they lost in originality.

This outcome isn’t surprising, since LLMs are optimized to generate the most common words, phrases, and expressions. GU determined that The Smoothing Effect can dilute what makes human expression unique.

⦿ 6ļøāƒ£ Cultural Conformity.

In 2023, the University of Amsterdam (UvA) reported that 97% of existing AI-based language technologies support only 3% of the world’s most widely spoken languages.

For example, ChatGPT currently supports 95 languages, covering about 80% of the world's population. But it still excludes more than 7,000 other languages. As a result, minority languages and regional dialects receive little to no support.

Even within English-based LLMs, words from dialects like AAVE are often flagged as "incorrect" and rewritten in Standard American English (SAE), reinforcing dominant norms and devaluing non-standard speech patterns.

⦿ 7ļøāƒ£ Model Collapse.

A more technical risk emerges from the growing trend of Recursive Training: training LLMs on text generated by earlier LLMs. A recursively trained model learns primarily from other models' outputs, which already favor common words, safe phrasings, and consensus ideas, so each generation's linguistic range narrows further.

In other words, rather than reward experimentation, recursive training favors the most statistically probable language: the sanitized, the typical, the expected. By rewarding conformity and penalizing niche language, it discourages the very risk-taking that drives language evolution.
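A minimal simulation makes the narrowing concrete. The vocabulary, word counts, and number of generations below are invented for illustration; each "generation" retrains only on the previous generation's output, so once a rare word fails to be sampled, it is gone for good.

```python
import random
from collections import Counter

random.seed(42)

# Toy training corpus: a few common words plus a long tail of rare,
# inventive ones (all words and counts invented for illustration).
corpus = (["safe"] * 60 + ["typical"] * 30 + ["bling"] * 5
          + ["yolo"] * 3 + ["grief-drunk"] + ["jazzique"])

def next_generation(texts, size=50):
    """One round of recursive training: the next 'model' can only emit
    words it saw, weighted by how often the previous model produced them."""
    counts = Counter(texts)
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=size)

history = [len(set(corpus))]  # distinct words per generation
for _ in range(10):
    corpus = next_generation(corpus)
    history.append(len(set(corpus)))

# Each generation samples only from the previous one's vocabulary,
# so the distinct-word count can only stay flat or shrink.
```

In this toy setup, diversity never recovers: a dropped word has zero probability in every later generation, which is the essence of model collapse.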

🔑KEY TAKEAWAY

Language can only evolve through creative risk: Shakespeare’s novel terms and Hip-Hop’s creative slang both emerged by defying convention. But LLMs prioritize safe, familiar phrasing. As LLMs become increasingly integrated into our personal lives and professional workflows, they threaten the rich, chaotic texture that makes language truly human.

📒FINAL NOTE

FEEDBACK

How would you rate today’s email?

It helps us improve the content for you!


ā¤ļøTAIP Review of The Week

"Very interesting and direct. Thanks!"

-Rodrigo (1️⃣ 👍Nailed it!)

REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving 🎓3 Simple Steps to Turn ChatGPT Into an Instant Expert.