
Welcome back AI prodigies!
In today’s Sunday Special:
⚙️How LLMs Work
🧬Everything Can Be Human
💨What Is Consciousness?
🔑Key Takeaway
Read Time: 7 minutes
🎓Key Terms
Tokens: the smallest units of data an AI model uses to process and generate text, much like we break sentences down into words or characters.
Public Corpus: all the text-based content on the open internet, including data used to train general-purpose chatbots.
Philosophical Zombie: a creature that behaves like a human in every way without actually being conscious.
🩺 PULSE CHECK
Could AI possibly be conscious?
⚙️HOW LLMs WORK
Large Language Models (LLMs) like Google’s Gemini or OpenAI’s GPT-4 are versatile and work wonders across tasks. They can help with homework, craft polite emails, devise a diet plan, or write Eminem-ish lyrics. All you have to do is ask, and you can generate a wider array of content than most could imagine.
Under the hood, though, every prompt amounts to asking an LLM, “Here is some text. How might this text go on?” or “Which words are most likely to come next?” LLMs answer such questions based on the distribution of tokens in the public corpus they were trained on. This statistical foundation is why conversational chatbots built on LLMs, like OpenAI’s ChatGPT, may get confused, misunderstand, produce nonsense, fabricate information, or outright lie.
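To make the “what comes next?” framing concrete, here’s a minimal sketch in Python. The five-“sentence” corpus is invented for illustration: the idea is to count which tokens follow which in the training text, then predict the most frequent continuation. Real LLMs use neural networks trained on billions of tokens, but the objective is the same.

```python
from collections import Counter, defaultdict

# A tiny stand-in for the public corpus an LLM is trained on.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# For each token, count which tokens follow it (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(token):
    """Return the statistically most likely continuation of `token`."""
    return following[token].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "cat": it follows "the" most often here
```

Note that the model never “knows” what a cat is; it only knows that “cat” tends to follow “the” in its training data.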
Training data matters; here’s why. OpenAI’s ChatGPT allegedly answered “42” most often when asked to produce a random number. The prevailing explanation traces back to Douglas Adams’s book “The Hitchhiker’s Guide to the Galaxy,” where “42” is the “Answer to the Ultimate Question of Life, the Universe, and Everything.” That concept occupies such a large portion of the public corpus GPT-3.5 was trained on that it produces a “number 42 bias.” (Most humans aren’t great random number generators either.) Like life imitating art, any AI application more or less reproduces the human expression, knowledge, and bias in its training data. In other words, the old adage still rings true: garbage in, garbage out.
At times, the human brain works in a similarly statistical manner. In his book “Thinking, Fast and Slow,” Daniel Kahneman explains how answering “What is 5 x 6?” is automatic. We don’t do the multiplication from scratch; we recall that “5 x 6 = 30,” anticipating which number is most likely to come next. This works because we’ve heard and seen the answer thousands of times; in other words, we’ve “trained” our brain and can now retrieve the correct answer with near-perfect accuracy. Hence, we don’t really “think” about it. This automatic response is somewhat similar to how LLMs predict text.
By contrast, to calculate a less familiar product like “12 x 52,” we have to work through the problem consciously: compute “12 x 50,” then “12 x 2,” and add the partial products to get “624.” This kind of deliberate reasoning is precisely what general-purpose chatbots struggle to perform: tasks that require knowledge or reasoning beyond the patterns in their training data.
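The deliberate decomposition above can be written out as code, using a hypothetical `slow_multiply` helper that mirrors the System 2 steps:

```python
def slow_multiply(a, b):
    """Deliberate, step-by-step multiplication (System 2 thinking):
    split b into tens and ones, compute partial products, then add."""
    tens, ones = divmod(b, 10)
    partial_tens = a * tens * 10  # e.g., 12 x 50 = 600
    partial_ones = a * ones       # e.g., 12 x 2  = 24
    return partial_tens + partial_ones

print(slow_multiply(12, 52))  # 624
```

A memorized fact like “5 x 6 = 30” is a single lookup; `slow_multiply` makes the intermediate steps explicit, which is exactly what pure next-token prediction doesn’t do.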
🧬EVERYTHING CAN BE HUMAN
The question of whether AI will ever be equivalent to human consciousness involves not just the technical capabilities of AI but also our perception of what it means to be human and conscious. We can easily attribute human features to almost anything, even something that doesn’t resemble a human at all, like a pencil.
We formally refer to this phenomenon as anthropomorphism: attaching human-like attributes, behaviors, characteristics, and consciousness to objects or animals. A related concept is personification, representing abstract concepts in human form. It covers many things: natural phenomena (e.g., the wind, sky, or sea), good or bad traits (e.g., beauty, freedom, envy, or greed), places (e.g., cities, continents, and countries), many deities and religious figures, and animals that go to work, do laundry, and drink coffee. Sometimes we go even further and name our car or some part of our body. This tendency is deeply rooted in human psychology, allowing us to form emotional connections with objects and concepts that don’t possess consciousness.
Would we consider any of these conscious? Eh, probably not. Nevertheless, on an emotional level, we can connect with all of them, even if they’ve never said a word or never existed, and even if our brains understand that they’re soulless. All it takes is imagining what our car would tell us if it had a voice, and suddenly we’ve named it Bob. Ultimately, humanity is inclined to accept that something has consciousness rather than to reject it.
Imagine how confused our prehistoric brains get when interacting with OpenAI’s ChatGPT or Google’s Gemini, which spontaneously generate strings of words that sound very human. It’s easy to get ahead of ourselves and forget all that statistical distribution and mathematical nonsense. Our capacity for anthropomorphism makes it relatively easy to forget that such AI models operate purely on statistical patterns rather than genuine understanding or consciousness. That’s why public AI education is critical. Anyone using an LLM should have at least a superficial understanding of how LLMs function, how their responses are formulated, and what data they were trained on. They don’t “understand” or “think” (at least for now) the way humans do. Instead, they produce responses by determining the most statistically likely continuation of a given input.
💨WHAT IS CONSCIOUSNESS?
Even if AI were conscious, how would we ever know? There is no agreed-upon definition of human consciousness; we still don’t know what makes us human. Despite thousands of years of research, analysis, and discussion, we still have no idea how consciousness works, and it remains one of the greatest mysteries of science and philosophy.
Most agree that consciousness includes the following:
Being able to think about one’s existence as an entity separate from the environment and other entities.
Being able to have subjective experiences (i.e., only you experience being you, here and now, and there is no way to repeat or replicate it for someone else).
Being able to experience sensations and feelings.
Being able to form intentions, wants, and desires.
Since consciousness is inherently subjective, it isn’t easy to devise a standard test, measure, or procedure to determine what is conscious and what isn’t. And AI may have arrived before we figured it out: advanced language models like OpenAI’s GPT-4 and conversational chatbots like OpenAI’s ChatGPT can simulate human-like behavior and dialogue to an impressive extent. They may seem kind or entertaining, tell you they feel pain, sadness, or joy, act angry or disappointed, and reproduce any emotion or feeling that exists in the public corpus they were trained on. But we have no way to check whether an AI has actual subjective experiences or is merely pretending; whether it’s genuinely clever or just imitating what an intelligent person would say.
In “The Infinite Conversation,” German filmmaker Werner Herzog and Slovenian philosopher Slavoj Žižek have a never-ending, AI-generated discussion about anything and everything. The voice, tone, accent, vocabulary, and arguments presented are quite convincing, so convincing that the site’s creators deemed it appropriate to remind us that “the opinions and beliefs expressed don’t represent anyone. They’re the hallucinations of a slab of silicon.” These philosophical zombies would likely pass the Turing Test in front of all but the most sophisticated observers. The project raises the question: if an AI can perfectly mimic human behavior and dialogue, how could we distinguish it from a genuinely conscious being?
🔑KEY TAKEAWAY
Before the modern AI era, we already struggled to provide an operational definition of consciousness, and we likely won’t have one until researchers reach a consensus on a neurobiological explanation. With no concrete definition of these elements in a human context, the question of AI consciousness remains nothing more than a shower thought.
📒FINAL NOTE
If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.
How was today’s newsletter?
❤️TAIP Review of the Week
“I’m subscribed to 13 newsletters, but this one is my favorite.”
REFER & EARN
🎉Your Friends Learn, You Earn!
{{rp_personalized_text}}
Refer 5 friends to enter 🎰July’s $200 Gift Card Giveaway.
Copy and paste this link to others: {{rp_refer_url}}
