🧠 The Terminator Comes to Life

PLUS: Should We Ban Lethal AI Weapon Systems?

Welcome back, AI prodigies!

In today’s Sunday Special:

  • 🤖Sci-Fi Is Reality

  • 👨‍⚖️LAWs Need Laws

  • ⚔️LAWs in Action

  • 🔑Key Takeaway

Read Time: 6 minutes

🎓Key Terms

  • Slaughterbots: microdrones that use AI and facial recognition software to assassinate political opponents based on preprogrammed criteria

  • Lethal Autonomous Weapons (LAWs): weapon systems that can select and engage targets without further intervention by a human operator

  • Alignment: the degree to which an AI system’s preferences, ethics, and objectives in a particular endeavor match those of humans

  • Enfeeblement: an overreliance on AI that makes existing human skills and capabilities less relevant or potentially obsolete

🤖SCI-FI IS REALITY

They only need a profile: age, sex, income, race, and ethnicity. Not for the national census form, but to direct an army of autonomous, palm-sized drones equipped with facial recognition to eliminate human targets. So-called slaughterbots, as depicted in this short film, were merely a sci-fi storyline in 2019. But in 2020, reports of Lethal Autonomous Weapons (LAWs) in combat surfaced. Turkey, not often considered the embodiment of ethical practices, deployed STM Kargu-2 drones against fleeing Libyan soldiers. According to the subsequent United Nations (UN) investigation, the microdrones were fully autonomous, requiring no human instruction before detonating on impact with their targets.

In response, the International Committee of the Red Cross (ICRC), a preeminent authority on the laws of war, expressed several concerns about LAWs: their use poses unprecedented harm to both combatants and civilians, risks violating international law, and raises fundamental ethical questions for humanity to answer. In practice, however, ICRC declarations are not legally binding.

👨‍⚖️LAWs NEED LAWS

In the years since Turkey’s Terminator-style attacks, international watchdogs allege that Israel, Russia, and South Korea have deployed weapons with autonomous capabilities. Meanwhile, the United States, China, Britain, and Australia have invested heavily in LAWs. To date, the only international consensus on their development and deployment is that some international agreement should exist. One persistent challenge is that leading countries define LAWs differently and employ vague regulatory language, proposing that the use of LAWs should follow the “applicability of general legal norms” (China) or reflect “appropriate levels of human judgment” (United States). These fundamental disagreements came to the forefront of policy discussions at the UN Convention on Certain Conventional Weapons (CCW) in late 2021. No consensus was reached.

Fortunately, a blueprint exists for drafting international laws governing a potentially extinction-level weapon. During the Cold War, geopolitical adversaries crafted and abided by international agreements on the development, transfer, and use of nuclear weapons. In that case, however, the principle of mutually assured destruction laid the foundation for both superpowers’ strategies and for the subsequent Treaty on the Non-Proliferation of Nuclear Weapons. Given the lack of consensus on the definition of LAWs, the nuances surrounding their deployment, and the arms race already underway, the ICRC will continue to push declarations into the international void.

⚔️LAWS IN ACTION

Legal debates aside, many of the risks associated with slaughterbots are no different from those inherent to other machine-learning-based predictive systems. Alignment and enfeeblement both complicate handing historically human jobs to machines. To elucidate these risks, consider the following scenario:

An autonomous vehicle armed with machine guns and guided missiles is tasked with rescuing stranded soldiers from behind enemy lines. Along its journey lie dozens of consequential decisions. Are the enemy soldiers firing from distant barracks worth engaging? If so, is the goal to distract, subdue, eliminate, or something else? In what order, if any, should the soldiers be rescued?
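To make that last question concrete, here is a minimal sketch (our illustration, not drawn from any real system) of how a rescue-priority rule might be encoded. Every field and weight below is a hypothetical stand-in for a human value judgment that the machine merely executes:

```python
from dataclasses import dataclass

@dataclass
class Soldier:
    """Hypothetical attributes an onboard system might estimate from sensor data."""
    injury_severity: float  # 0.0 (unhurt) to 1.0 (critical)
    exposure_risk: float    # 0.0 (sheltered) to 1.0 (under direct fire)
    distance_km: float      # distance from the vehicle's current position

def rescue_priority(s: Soldier) -> float:
    # These weights are assumptions, and that is the point: treat the badly
    # wounded first, then the most exposed, and discount far-away soldiers.
    # Change the weights and the machine's "ethics" change with them.
    return 0.6 * s.injury_severity + 0.3 * s.exposure_risk - 0.1 * (s.distance_km / 10)

stranded = [Soldier(0.9, 0.2, 4.0), Soldier(0.3, 0.9, 1.0)]
rescue_order = sorted(stranded, key=rescue_priority, reverse=True)
```

Whether those weights reflect what a commander, a medic, or the stranded soldiers themselves would choose is exactly the alignment question.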

Questions like these, tricky even for experienced combatants, raise the issue of human-AI alignment. Assuming remote operation, autonomous-vehicle specialists can interpret the vehicle’s data (high-resolution images, radar, lidar, etc.) to make decisions. If scenarios like these play out across military drills and engagements, the military may grow over-reliant on AI, marginalizing trained human tactics like the ambush. That over-reliance is enfeeblement. And even if human-machine alignment holds, most experts agree that active human involvement in LAW-based operations is non-negotiable. As a result, vehicle operators will need a machine-learning (ML) skill set to fine-tune predictive models as mission objectives and the decision rules built into those models evolve in real time.
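What might that human involvement look like in practice? Below is another purely illustrative sketch, assuming a mission policy whose rules an operator can revise mid-mission. The class, thresholds, and field names are all our inventions; the point is that the model proposes while human-set rules dispose:

```python
class MissionPolicy:
    """Hypothetical human-set decision rules that gate an autonomous system."""

    def __init__(self, min_confidence: float = 0.95, engage_allowed: bool = False):
        self.min_confidence = min_confidence  # model certainty required to act
        self.engage_allowed = engage_allowed  # hard safety gate set by humans

    def update(self, **changes: object) -> None:
        # An operator pushes a real-time rule change as objectives evolve.
        for rule, value in changes.items():
            setattr(self, rule, value)

    def permits_action(self, model_confidence: float) -> bool:
        # Even a confident model is overruled unless the human gate is open.
        return self.engage_allowed and model_confidence >= self.min_confidence

policy = MissionPolicy()
policy.update(min_confidence=0.99)                   # objectives tightened mid-mission
print(policy.permits_action(model_confidence=0.97))  # False: defers to humans
```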

🔑KEY TAKEAWAY

Beyond rescuing soldiers, autonomous systems of the future promise to disarm explosives, provide logistics support, conduct reconnaissance missions, predict (and perpetrate) cybersecurity attacks, and enhance training via augmented and virtual reality simulations. As the US and China spar over Taiwan and Russia and the West collide in Crimea, the ethical, legal, and technological questions surrounding LAWs remain at the forefront of international discourse.

📒FINAL NOTE

If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.

How was today’s newsletter?

❤️AI Pulse Review of The Week

“I discovered this newsletter in the Consensus Slack community. This content earned a subscribe.”

-Hana (⭐️⭐️⭐️⭐️⭐️Nailed it!)

🎁NOTION TEMPLATES

🚨Subscribe to our newsletter for free and receive these powerful Notion templates:

  • ⚙️150 ChatGPT prompts for Copywriting

  • ⚙️325 ChatGPT prompts for Email Marketing

  • 📆Simple Project Management Board

  • ⏱Time Tracker
