🧠 AI and War: An Inevitable Combination

PLUS: How the U.S. Is Trying and Failing to Govern AI Weapon Systems

Welcome back, AI prodigies!

In today’s Sunday Special:

  • ⚔️AI Weapons May Proliferate

  • 🤝Reaching a Consensus on LAWs

  • 🇺🇸The United States’ Laws on LAWs

  • 🔑Key Takeaway

Read Time: 6 minutes

🎓Key Terms

  • Lethal Autonomous Weapons (LAWs): autonomous weapon systems that identify and engage targets without intervention from a human operator.

  • Alignment: when autonomous weapon systems act as their human operators intend.

  • Enfeeblement: when human operators over-rely on an autonomous weapon system’s programmed decision-making.

⚔️AI WEAPONS MAY PROLIFERATE

In 2020, Turkish-made STM Kargu-2 drones were allegedly used to hunt down retreating soldiers in Libya. According to a 2021 United Nations (UN) report, the loitering munitions were fully autonomous, requiring no human instruction before exploding on contact with their targets. In response, the International Committee of the Red Cross (ICRC), a preeminent authority on the laws of war, raised several concerns about Lethal Autonomous Weapons (LAWs): their use poses unprecedented harm to combatants and civilians alike, fails to comply with international law, and raises fundamental ethical questions for humanity to answer. Though damning, ICRC declarations are not legally enforceable.

Nevertheless, the Geneva Conventions, which officially govern the laws of war, are slightly more enforceable. The four conventions, now universally ratified, protect wounded and sick soldiers, prisoners of war, and civilians, and their Additional Protocol I requires combatants to make context-dependent, evaluative legal judgments about targeting. Fully autonomous slaughterbots absolve combatants of making those judgments. Even so, the conventions rarely prevent legal and ethical breaches, whether by states that ratified the protocol or by holdouts like the United States and Israel, both accused of indiscriminately killing civilians in the Afghanistan and Gaza Wars, respectively.

🤝REACHING A CONSENSUS ON LAWs

The United States, China, Britain, and Australia have invested heavily in LAWs, but the only international consensus on their use is that some international agreement should exist. One persistent challenge is that leading countries define LAWs differently and employ vague regulatory language, proposing that the use of LAWs should follow the “applicability of general legal norms” (China) or “appropriate levels of human judgment” (the United States). These fundamental disagreements came to the forefront of policy discussions at the UN Convention on Certain Conventional Weapons (CCW) in late 2021, where participants failed to reach a consensus.

Perhaps the lack of consensus is unavoidable: a feature, not a bug. When each country follows its individual incentives, a prisoner’s dilemma ensues, because a major power that opts out of advanced capabilities won’t trust its counterparts to do the same. This dynamic fueled the U.S.-USSR nuclear arms race, even as the threat of mutually assured destruction deterred anyone from actually launching. LAWs deployed in isolation carry no equivalent deterrent. Despite the apparent futility of governance, the Pentagon has shared best practices for LAWs at the UN, though the declaration featured vague jargon. Frankly, its depth and specificity match a statement written by OpenAI’s ChatGPT.
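To make the incentive structure concrete, here’s a minimal prisoner’s-dilemma sketch in Python. The payoff numbers are illustrative assumptions, not figures from any arms-control analysis; the point is simply that “build” dominates “restrain” no matter what the rival does.

```python
# A toy prisoner's dilemma for the LAWs arms race.
# Payoff numbers are illustrative assumptions, not empirical data.

ACTIONS = ("restrain", "build")

# payoffs[(a, b)] = (score for country A, score for country B)
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: best joint outcome
    ("restrain", "build"):    (0, 5),  # A restrains, B gains a military edge
    ("build",    "restrain"): (5, 0),  # A gains the edge, B restrains
    ("build",    "build"):    (1, 1),  # arms race: worst joint outcome
}

def best_response(opponent_action: str) -> str:
    """Return the action that maximizes country A's payoff given B's choice."""
    return max(ACTIONS, key=lambda a: payoffs[(a, opponent_action)][0])

for b_action in ACTIONS:
    print(f"If the rival chooses {b_action!r}, best response: {best_response(b_action)!r}")
# Prints 'build' both times: building dominates, so unilateral restraint never holds.
```

Mutual restraint (3, 3) beats a mutual arms race (1, 1), yet neither side can rationally choose it alone. That, in miniature, is why voluntary LAW moratoriums struggle.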

🇺🇸THE UNITED STATES’ LAWS ON LAWs

Legal debates aside, the risks associated with slaughterbots are no different from those inherent to other machine-learning-based predictive systems. Deployment must ensure human-machine alignment and prevent overreliance on AI systems, otherwise known as enfeeblement.

The Department of Defense (DOD) requires that LAWs “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” It defines three categories of autonomy (illustrated in the sketch after the list):

  1. Human Out of the Loop: weapon systems can select and engage targets without human intervention (i.e., enfeeblement).

  2. Human on the Loop: operators can monitor and halt a weapon’s target engagement (i.e., balanced).

  3. Human in the Loop: weapon systems engage only individual targets within specific target groups that a human operator selects (i.e., alignment).
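
As a software analogy, here’s a minimal sketch of how these three oversight modes might gate an engagement decision. All names (the enum, the function, its parameters) are hypothetical, invented for illustration; no real weapon system’s interface is implied.

```python
# Hypothetical sketch of the DOD's three oversight categories as a software gate.
# Every identifier here is invented for illustration.
from enum import Enum, auto

class Oversight(Enum):
    OUT_OF_THE_LOOP = auto()  # system selects and engages targets on its own
    ON_THE_LOOP = auto()      # a human monitors and may halt engagement
    IN_THE_LOOP = auto()      # a human pre-selects permissible target groups

def may_engage(mode: Oversight, target_group: str,
               approved_groups: set[str], halt_ordered: bool) -> bool:
    """Decide whether an engagement may proceed under the given oversight mode."""
    if mode is Oversight.OUT_OF_THE_LOOP:
        return True  # no human check at all: the enfeeblement risk above
    if mode is Oversight.ON_THE_LOOP:
        return not halt_ordered  # proceeds unless an operator intervenes
    # IN_THE_LOOP: proceed only against groups an operator explicitly selected
    return target_group in approved_groups

# In-the-loop mode blocks any target group the operator never approved.
print(may_engage(Oversight.IN_THE_LOOP, "convoy", {"radar site"}, False))  # False
```

The useful intuition: the three categories differ only in where a human veto sits in the decision path, which is exactly the distinction the DOD’s “appropriate levels of human judgment” language is reaching for.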

Despite introducing more specificity than the Pentagon’s UN statement, the scope of this recent DOD directive, dated February 1, 2024, is limited. It exempts myriad military assets, including unarmed platforms, unguided munitions, munitions manually guided by the operator (e.g., laser- or wire-guided munitions), and mines. Further, these directives are not the law of the land; they bind only the department and merely inform legislative debate on appropriate statutes.

🔑KEY TAKEAWAY

Notwithstanding legal and ethical concerns, LAWs promise to disarm explosives, provide logistics support, conduct reconnaissance missions, and both predict and perpetrate cyberattacks. Policymakers must balance speed and precision when crafting guidelines: legislation must prevent LAW misuse in the near term while remaining specific enough to be enforceable. Like all AI-related legislation, it must adapt to the pace of innovation.

📒FINAL NOTE

If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.

How was today’s newsletter?

❤️AI Pulse Review of The Week

“I didn’t initially subscribe for the Sunday Specials, but I’m starting to like them.”

-John (⭐️⭐️⭐️⭐️⭐️Nailed it!)

🎁NOTION TEMPLATES

🚨Subscribe to our newsletter for free and receive these powerful Notion templates:

  • ⚙️150 ChatGPT prompts for Copywriting

  • ⚙️325 ChatGPT prompts for Email Marketing

  • 📆Simple Project Management Board

  • ⏱Time Tracker
