AI Is Dumb: The Myth of AGI
PLUS: What Glass and Duct Tape Can Teach Us About AI's Limitations
Welcome back, AI prodigies!
In today's Sunday Special:
What AI Is and Isn't
Glass Is Like AI
Man > Machine
Key Takeaway
Read Time: 6 minutes
Key Terms
Blockchain: a decentralized, immutable, public ledger that records transactions to prevent tampering.
Web 3.0: a decentralized version of the internet where users control their data.
Caustics: the patterns of concentrated light formed when light reflects off or refracts through a curved surface.
PyTorch: an open-source framework used to build and train Deep Neural Networks (DNNs), the models behind most modern AI.
Generative AI (GenAI): AI that creates new things like text, images, video, audio, and code.
PULSE CHECK
Do AI's quick responses to questions make it intelligent? Vote below to view live results.
WHAT AI IS AND ISN'T
AI has become a buzzword, a far cry from its inception. AI is often lumped in with blockchain, Web 3.0, and a litany of unrelated technology trends. By conflating AI with technology in general, some casual observers have begun to glamorize AI, mistakenly believing that, at present, it is far more impactful than it actually is.
Just because Chief Executive Officers (CEOs) say their technology uses AI doesn't mean it does. During the first three months of 2024, Fortune 500 CEOs cited "AI" an average of 11 times on earnings calls, with Meta executives mentioning it 96 times, or nearly twice per minute. Even if a technology uses AI, the haphazard inclusion of AI doesn't automatically make it better. If duct tape suddenly became a trendy technology, startups would start slapping duct tape on their keyboards just so they could say, "Made with duct tape." Large corporations would put little pieces of duct tape on existing products or services to say, "Now made with duct tape." This executive strategy doesn't make duct tape any less useful, but it does make it harder to discern where duct tape genuinely adds value. AI is a tool; in many ways, it's the software equivalent of duct tape.
Duct tape is a versatile tool with applications ranging from aviation to race cars, but it's not a structural material that replaces long-term repairs. Similarly, AI excels in certain areas, shows surprising capabilities in others, and has limitations.
GLASS IS LIKE AI
Generative AI (GenAI) has revolutionized technology by enabling computers to generate human-quality content and mimic human creativity. GenAI can answer user queries in seconds or instantly generate images, videos, or music. Creators leverage Midjourney to generate video thumbnails and article visualizations. Writers use Anthropic's Claude to analyze human text. The list of things GenAI excels at is growing rapidly, and many people are adding it to their toolkit to get higher-quality work done faster.
If AI can do all these complicated human things and keeps taking on more of them, isn't it reasonable to think it'll keep improving? If it's already better than humans at many things, when will it be better at everything? Not in the foreseeable future. To understand why AI is dumb, we'll recall how AI works using something everyone understands: light.
Light hitting a curved, transparent surface can create beautiful patterns called caustics; think of those wavy rings of light on the bottom of a pool. Researchers at the Swiss Federal Institute of Technology Lausanne (EPFL) figured out how to manufacture glass whose shape controls these caustics, so artists could produce artistic works with it. Though this is novel, impressive, and useful, no one would call the piece of glass smart. Glass is dumb, but the people who made the glass are smart. Just because the glass produced a beautiful image doesn't mean the glass itself is smart.
In this way, AI is like multiple pieces of fancy glass, filtering some input to achieve some output. You change what goes into the AI model, and the output changes. Except instead of light, it's things like words, images, and videos. All the cutting-edge AI you use is a filter that turns your input into some output. For example, Grammarly recommended I capitalize the word "glass" consistently, even when some instances of "glass" began sentences. Outputs (e.g., Grammarly's recommendations) are only as good as the training behind the AI model, which involves humans teaching it how to "think."
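To make the filter analogy concrete, here's a minimal sketch in PyTorch, assuming a toy, untrained model and made-up inputs rather than any real product: the model is just a function, and changing what goes in changes what comes out.

```python
# A toy "piece of glass": an untrained model that filters an input into an output.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

input_a = torch.tensor([1.0, 0.0, 0.0, 0.0])  # one "beam" of input
input_b = torch.tensor([0.0, 1.0, 0.0, 0.0])  # change the input...

print(model(input_a))  # ...and the pattern that comes out changes too
print(model(input_b))
```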
MAN > MACHINE
Humans are incredibly complex organisms, shaped by billions of years of evolution and adapted to all kinds of environments. Using our brains, senses, and agile bodies, we rose to the top of a highly competitive food chain where life and death were at stake. We used that heightened position to secure food consistently and, over millennia, used the excess calories to evolve the most complex computational system the world has ever seen: the human brain.
We're born with a complex brain that evolves over our lives, constantly changing based on how we interact with the world, others, and ourselves. In this development, we seek new challenges to strengthen our minds and seek new ideas to test the limits of what's possible. Some of us find mates and create new humans who inherit a mixture of information from their parents to further the continuum of humanity into future eons.
To train an AI model, you import PyTorch into your codebase, load some examples of images labeled cats or dogs, and adjust the parameters within the AI model until it gets the answer right most of the time. Though this is a trivial example, the underlying principle holds: AI only appears smart because humans trained it to mimic human thought.
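Here's what that looks like as a minimal PyTorch sketch. Random tensors stand in for the labeled cat and dog photos (a real project would load an actual dataset), but the principle is the same: nudge the parameters until the answers are mostly right.

```python
# A minimal sketch of the cat-vs-dog training loop described above.
# Random tensors stand in for real labeled images.
import torch
import torch.nn as nn

images = torch.randn(64, 3, 32, 32)  # 64 fake 32x32 RGB "photos"
labels = torch.randint(0, 2, (64,))  # 0 = cat, 1 = dog

# A tiny classifier: flatten the pixels, then one linear layer -> 2 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Adjust the parameters until it gets the answer right most of the time.
for epoch in range(10):
    optimizer.zero_grad()
    logits = model(images)          # the model filters inputs into outputs
    loss = loss_fn(logits, labels)  # how wrong is it?
    loss.backward()                 # work out how to nudge each parameter
    optimizer.step()                # nudge the parameters
    accuracy = (logits.argmax(dim=1) == labels).float().mean().item()
    print(f"epoch {epoch}: loss={loss.item():.3f}, accuracy={accuracy:.2f}")
```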
AI only gets compared to humans because it's designed to do what humans can, and it automates narrowly defined tasks that don't require human instinct, ethical trade-offs, emotional decision-making, or a variety of other inherently human elements. Nevertheless, AI can produce content that, a few years ago, most people would have deemed "man-made." To some, this warrants some version of the "intelligent" label. To us, it warrants a closer look at AI's imitative nature. When answering a user query, OpenAI's ChatGPT could preface its response with this statement:
"Based on all of the things I've seen humans say, this is a reasonable response to the prompt I've been given."
AI models are trained on vast amounts of real-world data full of human examples, which the models use to generate impressive outputs. While these responses may seem like original thoughts, they're sophisticated pattern-matching responses based on the information the AI models have been exposed to. For example, OpenAI's ChatGPT can quickly suggest recipe ideas based on your fridge contents by recognizing similar patterns in its training datasets. Essentially, AI is highly skilled at imitating human thought processes, but it doesn't truly think independently. Some incorrectly assume the AI "thought" of the answer, when it's more accurate to say the AI recalled the answer from a complex combination of human examples.
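As a concrete sketch of that recipe example, here's how a request might look through OpenAI's official Python SDK; the model name and prompt are placeholder assumptions, and the reply that comes back is a pattern-matched remix of recipes the model saw in training, not an original thought.

```python
# A minimal sketch of the fridge-to-recipe example via OpenAI's Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": "I have eggs, spinach, and cheddar. Suggest three quick dinner ideas.",
        }
    ],
)

# The suggestions are pattern-matched from human-written recipes in the training data.
print(response.choices[0].message.content)
```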
Humans must carefully tailor these AI models to create high-quality outputs. Making an AI model requires massive computers, massive amounts of high-quality reference data, and vast amounts of time. And the models still get stupid things wrong. Sure, humans get things wrong, too, but AI gets them wrong consistently and needs to be corrected. AI needs humans to determine why the AI model is making mistakes; humans need to tweak the AI model to get better outputs. Sometimes it works, and sometimes it doesn't. The CEOs of big AI companies know this very well, but they want to keep the hype up to keep the money flowing.
KEY TAKEAWAY
AI is nowhere near as complex as humans. AI can only do things humans can do because it's designed to mimic people. Even with billions of dollars, it's still hard to make AI models that are smart and useful.
That doesn't mean AI isn't powerful; like duct tape, it has various applications. But just because duct tape is useful doesn't mean it's intelligent. Could AI be "generally intelligent" in the future? The potential is there.
FINAL NOTE
If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.
How was today's newsletter?
TAIP Review of the Week
"I work at a SaaS startup. This newsletter's content is up-to-date, accurate, and well-written. Great work!"
REFER & EARN
Your Friends Learn, You Earn!
You currently have 0 referrals, only 1 away from receiving the Ultimate Prompt Engineering Guide.
Refer 5 friends to enter July's $200 Gift Card Giveaway.
Copy and paste this link to others: https://theaipulse.beehiiv.com/subscribe?ref=PLACEHOLDER