The AI Pulse

🧠 If AI Was Conscious, How Would We Know?

PLUS: What an Infinite Conversation Tells Us About Consciousness

Welcome back AI prodigies!

In today’s Sunday Special:

  • ⚙️How LLMs Work

  • 🧬Everything Can Be Human

  • 💨What Is Consciousness?

  • 🔑Key Takeaway

Read Time: 7 minutes

🎓Key Terms

  • Tokens: the smallest units of data an AI model uses to process and generate text, much as we break sentences into words or characters.

  • Public Corpus: all the text-based content on the open internet, including data used to train general-purpose chatbots.

  • Philosophical Zombie: a creature that behaves like a human in every way without actually being conscious.

🩺 PULSE CHECK

Could AI possibly be conscious?


āš™ļøHOW LLMs WORK

Large Language Models (LLMs) like Google’s Gemini or OpenAI’s GPT-4 are versatile and work wonders across tasks. They can do homework, craft polite emails, devise a diet plan, or write Eminem-ish lyrics. All you have to do is ask, and you can generate a wider array of content than most could imagine.

But when we prompt an LLM, we’re really asking, “Here is some text. How might this text go on?” or “Which words are most likely to come next?” LLMs answer such questions based on the distribution of tokens in the public corpus they were trained on. This statistical foundation is why conversational chatbots built on LLMs, like OpenAI’s ChatGPT, may get confused, misunderstand, produce nonsense, fabricate information, or outright lie.
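To make the “which words are most likely to come next?” framing concrete, here’s a toy sketch that predicts the next word purely from counts. This is not how production LLMs work (they use neural networks trained on billions of tokens), and the tiny corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

# A tiny invented stand-in for the "public corpus."
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each token follows each preceding token (a bigram model,
# a drastically simplified analogue of an LLM's learned distribution).
next_token_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_token_counts[prev][nxt] += 1

def most_likely_next(token):
    """Answer 'which word is most likely to come next?' from the counts."""
    return next_token_counts[token].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

The model never “understands” the sentence; it only reports which continuation was most frequent in its training data, which is the core of the point above.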

Training data matters; here’s why. OpenAI’s ChatGPT allegedly answered “42” most often when asked to produce a random number. A prevalent explanation points to Douglas Adams’s book “The Hitchhiker’s Guide to the Galaxy,” where “42” is the “Answer to the Ultimate Question of Life, the Universe, and Everything.” References to this concept make up such a large share of the public corpus that OpenAI’s GPT-3.5 was trained on that they produce a “number 42 bias.” Most humans aren’t great random number generators either. Like life imitating art, any AI application more or less reproduces the human expression, knowledge, and bias in its training data. In other words, the adage “garbage in, garbage out” still rings true.
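Here’s a toy illustration of how an over-represented answer in training data becomes a biased output. The miniature “corpus” of numbers below is invented, and the real GPT-3.5 training mix is vastly more complex; the sketch only shows the mechanism:

```python
from collections import Counter
import random

# Hypothetical miniature "training data": numbers people wrote online when
# asked for a random number, with "42" over-represented (e.g., due to
# "The Hitchhiker's Guide to the Galaxy" references).
observed_numbers = [42] * 50 + [7] * 10 + [13] * 8 + list(range(1, 33))

def generate_number(rng):
    """'Generate a random number' by sampling in proportion to how often
    each number appeared in the training data."""
    return rng.choice(observed_numbers)

rng = random.Random(0)
samples = Counter(generate_number(rng) for _ in range(1000))
print(samples.most_common(1)[0][0])  # 42 dominates, mirroring the bias
```

Garbage in, garbage out: the sampler faithfully reproduces whatever skew its data contains.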

At times, the human brain works in a similar statistical manner. In his book “Thinking, Fast and Slow,” Daniel Kahneman explains how answering “What is 5 x 6?” is automatic. We don’t do the multiplication from scratch; we recall that “5 x 6 = 30,” anticipating which number will most likely come next. This works because we’ve heard and seen the answer thousands of times; in other words, we’ve “trained” our brain and can now retrieve the correct answer with 100% accuracy. We don’t really “think” about it. This automatic recall is somewhat similar to how LLMs predict text.

By contrast, to calculate a less familiar product like “12 x 52,” we have to work through the problem deliberately: “12 x 50,” then “12 x 2,” then adding the partial products to get “624.” This kind of deliberate reasoning is essentially what general-purpose chatbots cannot perform: tasks that require knowledge or reasoning beyond the patterns in their training data.
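The two modes of arithmetic above can be sketched as a lookup table for memorized facts plus an explicit decomposition for everything else. This is a loose analogy, not a model of either brains or LLMs:

```python
# Memorized facts we retrieve instantly, like "5 x 6 = 30"
# (a stand-in for the times table drilled into us at school).
times_table = {(a, b): a * b for a in range(1, 11) for b in range(1, 11)}

def multiply(a, b):
    if (a, b) in times_table:      # automatic recall: no calculation needed
        return times_table[(a, b)]
    # Deliberate decomposition: 12 x 52 = (12 x 50) + (12 x 2)
    tens, ones = divmod(b, 10)
    return (a * tens) * 10 + a * ones

print(multiply(5, 6))    # 30, straight from the "memorized" table
print(multiply(12, 52))  # 624, worked out step by step
```

The fast path is pure retrieval, like an LLM emitting a high-probability continuation; the slow path requires stepwise reasoning the retrieval mechanism alone cannot supply.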

🧬EVERYTHING CAN BE HUMAN

The question of whether AI will ever be equivalent to human consciousness involves not just the technical capabilities of AI but also our perceptions of what it means to be human and conscious. We can easily attribute human features to almost anything, even objects that don’t resemble a human at all, like a pencil.

We formally refer to this phenomenon as anthropomorphism: attaching human-like attributes, features, behaviors, characteristics, and consciousness to objects or animals. A related concept is personification, representing abstract concepts in human form. We personify natural phenomena (e.g., the wind, sky, or sea), good or bad traits (e.g., beauty, freedom, envy, or greed), places (e.g., cities, continents, and countries), countless divine and religious figures, and animals that go to work, do laundry, and drink coffee. Sometimes, we go even further and name our car or a part of our body. This tendency is deeply rooted in human psychology, allowing us to form emotional connections with objects and concepts that don’t possess consciousness.

Would we consider any of these conscious? Eh, probably not. Nevertheless, on an emotional level, we can connect with all these things, even if they never said a word or never existed, and even when our brains understand that they’re soulless. All it takes is imagining what our car would tell us if it had a voice, and soon we’ve named it Bob. Ultimately, humanity is inclined to accept that something has consciousness rather than reject it.

Imagine how confused our prehistoric brains get when OpenAI’s ChatGPT or Google’s Gemini spontaneously generates strings of words that sound very human. It’s easy to get ahead of ourselves and forget all that statistical distribution and mathematical machinery. Our capacity for anthropomorphism makes it easy to forget that such AI models operate purely on statistical patterns rather than genuine understanding. That’s why public AI education is critical. Anyone using an LLM should have at least a superficial understanding of how LLMs function, how their responses are formulated, and what data they were trained on. They don’t “understand” or “think” (at least for now) the way humans do. Instead, they produce responses by determining the most statistically likely continuation of a given input.

💨WHAT IS CONSCIOUSNESS?

Even if AI were conscious, how would we ever know? There is no agreed-upon definition of human consciousness; we still don’t know what makes us human. Despite thousands of years of research, analysis, and discussion, we still have no idea how consciousness works, and it remains one of the greatest mysteries of science and philosophy.

Most agree that consciousness includes the following:

  1. Being able to think about one’s existence as an entity separate from the environment and other entities.

  2. Being able to have subjective experiences (i.e., the experience of being you, here and now, which cannot be repeated or replicated for someone else).

  3. Being able to experience sensations and feelings.

  4. Being able to form intentions, wants, and desires.

Since consciousness is inherently subjective, it isn’t easy to devise a standard test, measure, or procedure to define what is conscious and what isn’t. And AI may have arrived early: advanced language models like OpenAI’s GPT-4 and conversational chatbots like OpenAI’s ChatGPT can simulate human-like behavior and dialogue to an impressive extent. They may seem kind or entertaining, even tell you they feel pain, sadness, or joy, or act angry or disappointed, producing any emotion or feeling that exists in the public corpus they were trained on. But we have no way to check whether an AI has actual subjective experiences or is merely pretending; whether it’s clever or simply imitates what an intelligent person would say.

In “The Infinite Conversation,” German filmmaker Werner Herzog and Slovenian philosopher Slavoj Žižek have a never-ending, AI-generated discussion about anything and everything. The voice, tone, accent, vocabulary, and arguments presented are so convincing that the site’s creators deemed it appropriate to remind us that “the opinions and beliefs expressed don’t represent anyone. They’re the hallucinations of a slab of silicon.” These philosophical zombies would likely pass the Turing Test in front of all but the most sophisticated observers. The project raises the question: if an AI perfectly mimics human behavior and dialogue, can we distinguish it from a genuinely conscious being?

🔑KEY TAKEAWAY

Before the modern AI era, we already struggled to provide an operational definition of consciousness, and we likely won’t have one until researchers reach a consensus on a neurobiological explanation. With no concrete definition of consciousness even in a human context, the question of AI consciousness remains nothing more than a shower thought.

📒FINAL NOTE

If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.

How was today’s newsletter?

ā¤ļøTAIP Review of the Week

“I’m subscribed to 13 newsletters, but this one is my favorite.”

-Dhruvi (⭐️⭐️⭐️⭐️⭐️Nailed it!)
REFER & EARN

🎉Your Friends Learn, You Earn!

You currently have 0 referrals, only 1 away from receiving ⚙️Ultimate Prompt Engineering Guide.

Refer 5 friends to enter 🎰July’s $200 Gift Card Giveaway.
