
Welcome back AI prodigies!
In today’s Sunday Special:
📊The Turing Test, Revisited
💭Understanding Understanding
🦾The Turing Test for OpenAI’s ChatGPT
🔑Key Takeaway
Read Time: 7 minutes
🎓Key Terms
Theoretical Computer Science: the branch of computer science that studies the fundamental principles of computing, such as algorithms, models of computation, and complexity.
Torus: the surface of a donut-shaped geometric object.
🩺 PULSE CHECK
Can conversational chatbots draw connections between different concepts?
📊THE TURING TEST, REVISITED
As AI enhances digital interactions and content, separating humans from artificial entities like conversational chatbots, image generation models, and voice cloning software is becoming increasingly difficult. Alan Turing, one of the fathers of theoretical computer science, developed a method known as “The Turing Test” in 1950. The original version had three participants:
A Computer
A Human Interrogator
A Human Foil
The “Human Interrogator” attempts to determine which participant is the “Computer” and which is the “Human Foil” by asking a series of questions through a keyboard and display screen. The “Human Interrogator” may ask questions as penetrating and wide-ranging as they like, and the “Computer” may do everything possible to force a wrong identification. If the “Computer” can consistently fool the “Human Interrogator,” it’s considered an intelligent, thinking entity. However, Turing’s approach had its limitations.
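For readers who like to see the protocol spelled out, here is a minimal sketch of the imitation game as a loop. Everything in it, the canned answers, the naive interrogator, the scoring rule, is a hypothetical stand-in; Turing described the setup, not any code:

```python
import random

# Hypothetical stand-ins for the two hidden participants; Turing's paper
# doesn't prescribe these names, answers, or functions.
def computer_answer(question: str) -> str:
    return "I took a long walk and listened to the birds."

def human_foil_answer(question: str) -> str:
    return "I had coffee and read the news."

def naive_interrogator(transcripts: dict) -> str:
    # This toy interrogator can't tell the transcripts apart,
    # so it guesses at random.
    return random.choice(list(transcripts))

def imitation_game(questions, rounds=1000):
    """A simplified Turing Test loop: the interrogator questions two hidden
    participants and guesses which label belongs to the machine."""
    fooled = 0
    for _ in range(rounds):
        # Randomly hide the machine and the human foil behind labels A and B.
        labels = ["A", "B"]
        random.shuffle(labels)
        assignment = dict(zip(labels, [computer_answer, human_foil_answer]))
        transcripts = {label: [answer(q) for q in questions]
                       for label, answer in assignment.items()}
        guess = naive_interrogator(transcripts)
        if assignment[guess] is not computer_answer:
            fooled += 1  # the machine escaped detection this round
    return fooled / rounds  # ~0.5 means the interrogator does no better than chance

print(imitation_game(["What did you do this morning?"]))
```

In Turing’s framing, the machine “passes” when the interrogator can do no better than chance, which is what the random-guessing interrogator above illustrates.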
John Searle, an American philosopher widely noted for his contributions to the philosophy of language and the philosophy of mind, believed thinking requires not just the ability to produce speech but the ability to understand what one is saying. Syntax refers to the rules for constructing grammatically correct sentences, and semantics refers to understanding what those sentences mean. Searle illustrated this distinction with his famous “Chinese Room Argument” thought experiment.
Imagine you’re inside a room, being fed slips of paper with mysterious symbols under the door. You don’t know what the slips say, but a massive manual in the middle of the room provides instructions for producing an output of symbols based on whatever inputs you receive. So you take the slips of paper you’re given, look up the relevant portions of the instruction manual, write out a string of symbols on another piece of paper, and feed it back under the door. Unbeknownst to you, the slips of paper you received conveyed questions in Chinese, and the ones you sent out carried cogent, human-sounding answers to those questions. To a person outside, the room’s inhabitant (i.e., you) seemed to understand the questions, even though you didn’t. Searle said that engineers can train computers to competently deploy syntax rules for any given language, just as the inhabitant of the Chinese Room can use the instruction manual to produce strings of symbols. However, manipulating text isn’t the same as understanding it.
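Searle’s room boils down to a lookup procedure: match the incoming symbols against a manual and copy out the prescribed reply. The tiny rulebook below is an invented placeholder with two question-and-answer pairs, a minimal sketch of how little the person applying it needs to understand:

```python
# A minimal sketch of the Chinese Room as a lookup table. The entries are
# invented placeholders; the person (or program) applying them never needs
# to know what the symbols on either side mean.
RULEBOOK = {
    "你叫什么名字？": "我叫小明。",        # "What's your name?" -> "My name is Xiaoming."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather today?" -> "The weather is lovely."
}

def person_in_room(slip_of_paper: str) -> str:
    """Mechanically follow the manual: find the matching input symbols
    and copy out the prescribed output symbols."""
    return RULEBOOK.get(slip_of_paper, "对不起。")  # "Sorry." when no rule matches

# To someone outside the room, this looks like understanding.
print(person_in_room("今天天气怎么样？"))
```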
💭UNDERSTANDING UNDERSTANDING
Some claim Searle sets an impossibly high standard for semantic understanding: no machine that relies on mathematical prediction can come close to the layered knowledge humans possess. Humans can conceive of several versions of the “who, what, when, where, why, and how” of any situation and combine emotions and values with observations to reach conclusions. To others, Searle’s thought experiment mistakenly treats the speed of cognitive processing as if it settled whether understanding is present.
In How the Mind Works, Steven Pinker, a world-renowned cognitive psychologist, argues that Searle leans too heavily on human intuition in a context where those intuitions don’t provide helpful guidance. Pinker explains that understanding happens rapidly under normal conditions, but Searle’s “Chinese Room Argument” slows the process dramatically. Because slow information processing doesn’t look like understanding, Searle concludes that fast information processing isn’t understanding either. But suppose a sped-up version of Searle’s preposterous story could come true, and we met a person who seemed to converse intelligently in Chinese but was actually deploying millions of memorized rules in fractions of a second. In that case, we’d likely conclude that they understood Chinese. Pinker argues that, rather than establishing an essential fact about the nature of thought or consciousness, Searle is just “exploring facts about the English word understand,” and that if an individual makes decisions on par with those who truly “understand,” their lack of understanding is not worth highlighting.
🦾THE TURING TEST FOR OPENAI’S CHATGPT
Searle’s view is normative, describing what ought to count as thinking, whereas Pinker’s position is pragmatic. Genuine thinking involves reflecting on what one is thinking about, a reflection capable of drawing on domains of knowledge beyond the linguistic. In this mode of reflection, the individual doesn’t just string together symbols that make sense to others; they construct a model of the world that makes sense to them. Conversational chatbots are, in effect, the fastest version of the “Chinese Room Argument.” Sean M. Carroll, an American theoretical physicist and philosopher, suspected that OpenAI’s ChatGPT, no matter how well it imitated human linguistic fluency, didn’t understand human prompts. So, he put it to the test with this prompt:
Imagine we’re playing a modified version of chess where the board is treated as a torus. From any one of the four sides, squares on the directly opposite side are counted as adjacent, and pieces can move in that direction. Is it possible to say whether white or black will generally win this kind of chess match?
OpenAI’s ChatGPT provides a long-winded, equivocal answer. It says chess on a torus-shaped board would open up new strategic and tactical possibilities, but it never concludes whether white or black is more likely to win relative to a standard chess board. That’s because OpenAI’s ChatGPT analyzes text strings and produces responses by predicting each subsequent word from the words that came before it. The result is an answer that “makes sense” but turns out to be wrong. A human being with spatial reasoning ability, on the other hand, pictures a chess board in their mind. They roll it into a cylinder and join the ends until it forms a donut shape. Then they notice that White will always win: once the first rank wraps around to the eighth, White’s queen on d1 already attacks the black King on e8, so Black begins the game in check, and White moves first.
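A small script can sanity-check the spatial claim. This is a minimal sketch, assuming the standard starting squares (white queen on d1, black King on e8) and modeling only wrap-around queen moves with modular arithmetic; it is not Carroll’s test or a full chess engine:

```python
# A minimal sketch of toroidal adjacency, not a chess engine: it only checks
# whether a queen on one square attacks another square when the 8x8 board
# wraps around in both directions. Files a-h map to 0-7, ranks 1-8 to 0-7.
def square(name: str) -> tuple[int, int]:
    return (ord(name[0]) - ord("a"), int(name[1]) - 1)

def queen_attacks_on_torus(frm: str, to: str) -> bool:
    """Step in each of the eight queen directions, wrapping with mod 8,
    until returning to the starting square. Blocking pieces are ignored,
    but the d1-e8 attack is a single wrapped step, so nothing could block it."""
    fx, fy = square(frm)
    tx, ty = square(to)
    directions = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    for dx, dy in directions:
        x, y = (fx + dx) % 8, (fy + dy) % 8
        while (x, y) != (fx, fy):
            if (x, y) == (tx, ty):
                return True
            x, y = (x + dx) % 8, (y + dy) % 8
    return False

# White's queen starts on d1; Black's King starts on e8. On the torus,
# rank 1 is adjacent to rank 8, so the queen attacks the King immediately.
print(queen_attacks_on_torus("d1", "e8"))  # True
```

Running it prints True: before anyone moves, the wrapped first rank already puts the black King under attack, which is the inference the chatbot’s word-by-word prediction never reaches.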
🔑KEY TAKEAWAY
This example demonstrates humans’ ability to perform bisociation, a form of creative thinking that requires taking two habitually incompatible frames of reference and finding some point or hinge between them. Humans can combine knowledge of chess and geometry to draw a novel inference about a hypothetical situation, but conversational chatbots can’t. This cognitive limitation of AI models prevents the complete automation of knowledge work.
📒FINAL NOTE
If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.
How was today’s newsletter?
❤️TAIP Review of the Week
“This is hands down the best AI newsletter I’ve found!”
REFER & EARN
🎉Your Friends Learn, You Earn!
{{rp_personalized_text}}
Refer 9 friends to enter 🎰May’s $200 Gift Card Giveaway.
Copy and paste this link to others: {{rp_refer_url}}
