🧠 Can AI Surpass Human Intelligence?

PLUS: The $1 Billion Neuroscience Experiment

Welcome back, AI prodigies!

In today’s Sunday Special:

  • 💬It May Be Possible

  • 📵Or Not

  • 🙋Now What

Read Time: 5 minutes

🎓Key Terms

  • Artificial General Intelligence (AGI): artificial intelligence that can perform any task as well as a human and exhibit human traits such as critical reasoning, intuition, consciousness, sentience, and emotional awareness.

  • High-Level Machine Intelligence (HLMI): machines that can perform economically relevant tasks better and more cheaply than human workers.

  • Anthropomorphism: attributing human traits, emotions, or tendencies to non-human entities.

💬IT MAY BE POSSIBLE

Most experts believe that AGI is inevitable. Most doomsday scenarios become unavoidable on a long enough timescale: asteroids large enough to extinguish life on Earth strike roughly every 30 million years, and our Sun will likely engulf the Earth in about 7.59 billion years. But unlike those events, the development of AI is still firmly within human control.

Experts' beliefs stem from the fact that human intelligence has remained relatively fixed for hundreds of years, while machine capability has grown exponentially over the last several decades, alongside the biennial doubling of computing power. Skeptics might argue that computational power and human reason are wholly independent, so machine developments fail to bring AGI closer. Nevertheless, most experts fall on the opposite side of the spectrum.
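The force of that "biennial doubling" claim is easy to underestimate. A minimal sketch makes the arithmetic concrete (the time horizons and the baseline of 1 are illustrative assumptions, not figures from any survey):

```python
def capability_multiplier(years: int, doubling_period: int = 2) -> int:
    """Relative machine capability after `years`, doubling every `doubling_period` years."""
    return 2 ** (years // doubling_period)

# Human intelligence stays roughly flat; doubling compounds dramatically.
for years in (10, 20, 50):
    print(years, capability_multiplier(years))
```

Fifty years of biennial doubling yields a 2^25 (about 33 million-fold) increase, which is why exponential machine growth against flat human cognition anchors the optimists' case.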

Predicting the timing of future events is almost always a fool’s errand, and experts are sometimes foolish. A 2022 survey of 738 experts produced estimates ranging from the 2020s to never; in aggregate, they expected HLMI to arrive by 2060. Plus, 75% thought the chance of advanced AI causing human extinction was greater than zero.

The Following Caveats Should Be Considered:

  • Nonresponse Bias: Surveyors contacted 4,271 researchers, but only 17% responded, so those with stronger opinions or greater optimism may have been more likely to respond.

  • Optimistic Experts: In 1965, Herbert A. Simon, an AI pioneer, famously predicted that within 20 years, “machines will be capable of doing any work a man can do.” In 1979, John McCarthy, one of the founding fathers of AI, predicted, “By 2000, we will have the means to simulate human intelligence.”

  • Faulty Assumptions: The 75% group assumes that AGI will, at some point, seek to harm us. Humans tend to anthropomorphize, assuming AI will take on the worst aspects of human nature.

These opinions, despite their well-credentialed sources, are somewhat irrelevant. Definitions of both AGI and human intelligence are fluid, so it will likely be impossible to pinpoint the moment AGI overtakes human cognitive ability. Only a clear turning point, one that exposes the indisputable inferiority of human cognition, would make a claim of AGI superiority possible.

📵OR NOT

Every so often, an AGI false alarm captures the media’s attention. Last year, Google researcher Blake Lemoine questioned whether a chatbot, LaMDA, was sentient after he had technical and philosophical conversations with it. Many neuroscientists quickly pushed back, noting disagreement about the definition of sentience. Three current definitions include the ability to:

  1. Feel through sensory mechanisms (e.g., sight).

  2. Have subjective experiences.

  3. Be aware of your consciousness, including the self, body, and external world.

Experts also identified the limits of consciousness analysis in humans, let alone machines. Some research, based on neural circuitry, has found primary consciousness in 35-week-old fetuses; other papers place the turning point around age one or even age three. Although consciousness is more basic than sentience, experts still disagree about its onset and definition. That lack of consensus points to a common neuroscientific argument against AGI: comprehensively understanding and modeling the human brain is a necessary precursor to its development.

It doesn’t take a computational neuroscientist to tell you that modern computation is no match for 6 million years of evolution. In 2013, researchers set out to prove otherwise. Led by Henry Markram, the Blue Brain Project (BBP), a 1.3 billion-euro initiative, sought to develop all the biological algorithms, scientific processes, and software needed to digitally reconstruct and simulate the brain, starting with a mouse. Despite building a digital replica of brain tissue and supporting structures like blood vessels, the BBP fell well short of its overly ambitious objective; the project formally concluded two months ago.

Beyond neuroscience-based limitations, other skeptics point to an inherent limitation of computers: they can compute correlations among thousands of variables yet fail to grasp the causal relationship between any two of them. This lack of intuition threatens the development of both HLMI and AGI.
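The correlation-versus-causation gap is easy to demonstrate. In this minimal sketch, two entirely synthetic series (the variable names are hypothetical, chosen only for illustration) share a common upward trend, so they correlate strongly even though neither causes the other:

```python
import random

random.seed(0)
trend = list(range(100))  # a shared confounder, e.g., time

# Two unrelated quantities that both happen to rise over time.
ice_cream_sales = [t + random.gauss(0, 5) for t in trend]
shark_attacks = [t + random.gauss(0, 5) for t in trend]

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Correlation comes out close to 1.0 despite no causal link whatsoever.
print(round(pearson(ice_cream_sales, shark_attacks), 2))
```

A statistical model sees only the near-perfect correlation; recognizing that a third variable drives both series is exactly the kind of causal reasoning the skeptics argue machines lack.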

🙋NOW WHAT

Like most long-shot technological developments, debates about AGI’s possibility, timing, and feasibility are more of an intellectual exercise than a pragmatic policy discussion. If nothing else, our progress on AGI, or lack thereof, is a testament to the astonishing capability of the human brain and a reminder not to squander our intelligence on regrettable actions.

📒FINAL NOTE

If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.

How was today’s newsletter?

❤️AI Pulse Review of The Week

“Workin overtime on Thanksgiving.😤”

-Jake (⭐️⭐️⭐️⭐️⭐️Nailed it!)

🎁NOTION TEMPLATES

🚨Subscribe to our newsletter for free and receive these powerful Notion templates:

  • ⚙️150 ChatGPT prompts for Copywriting

  • ⚙️325 ChatGPT prompts for Email Marketing

  • 📆Simple Project Management Board

  • ⏱Time Tracker
