
🧠 Should High-Risk AI Systems Require Special Regulations?

PLUS: The U.S.’s First Comprehensive AI Law

Welcome back AI prodigies!

In today’s Sunday Special:

  • 📜The Prelude

  • 🇪🇺How Does the EU Regulate Different AI Systems?

  • 🤖Colorado’s EU-Inspired AI Bill, Explained.

  • 📊Limitations of SB24-205

  • 🔑Key Takeaway

Read Time: 8 minutes

🎓Key Terms

  • Gross Domestic Product (GDP): The total value of all final goods and services produced in a country over a given period.

  • Generative AI (GenAI): When AI models create new content such as text, images, audio, video, or code.

  • Foundation Models: AI models trained on massive amounts of general data so they can be applied to a wide range of use cases.

🩺 PULSE CHECK

Would you prefer AI regulations to be overly strict or too lenient?


📜THE PRELUDE

France recently hosted the Artificial Intelligence Action Summit in Paris to shift the focus from AI Safety to AI Action.

The U.S. refused to sign a declaration stressing the need for international alliances to ensure AI developments are “open, inclusive, transparent, ethical, safe, secure, and trustworthy.”

Vice President JD Vance delivered an emphatic speech defending the U.S.’s attitude toward AI. After promising to champion a pro-growth, pro-labor, and pro-innovation approach, he asserted that the U.S. won’t mirror the European Union’s (EU’s) rapid, sweeping, and premature approach to AI regulation.

“We need our European friends to look at this new frontier with optimism rather than trepidation…Our laws will keep Big Tech, Little Tech, and all other Developers on a level playing field. And when a massive incumbent comes to us asking us for safety regulations, we ought to ask whether that safety regulation is for the benefit of our people or whether it’s for the benefit of the incumbent.”

Despite his warnings against premature AI regulation, as many as a dozen U.S. states are advancing legislation that mirrors core components of the EU Artificial Intelligence Act. Although federal AI regulations would preempt state legislation, state laws often shape America’s future national policy.

So, what’s the EU Artificial Intelligence Act? How are states in the U.S. mirroring it? And is it the right approach to regulating AI developments?

🇪🇺HOW DOES THE EU REGULATE DIFFERENT AI SYSTEMS?

How Does the EU’s Legislative Process Work?

The EU is a political and economic union of 27 European countries, with Germany, France, and Italy comprising roughly half of the EU’s GDP.

To create legislation, the European Commission (i.e., a member from each country) drafts proposals. Then, the European Parliament (i.e., citizen-elected national representatives) and the Council of the EU (i.e., government-appointed national ministers) debate, scrutinize, and amend the proposals.

The EU and Foundation Models.

While deliberating over provisions in the early proposals of the EU Artificial Intelligence Act, the European Parliament disagreed with developers, researchers, and startups on how to regulate Foundation Models.

In particular, French startup Mistral AI was concerned that overly stringent AI regulations on Foundation Models could severely disadvantage them against international rivals, preventing European developers from building GenAI applications on homegrown AI models. “We think that the deployer should bear the risk, bear the responsibility,” said Mistral AI CEO Arthur Mensch. He also explained that regulating Foundation Models before understanding how they’ll be used is premature.

The European Commission agreed to exclude burdensome requirements on Foundation Models, only requiring developers, researchers, and startups to disclose the potential dangers, biases, or limitations that could occur when using their Foundation Models for specific use cases.

The EU and AI Systems.

The European Commission wasn’t so lenient when regulating so-called AI Systems, which the EU defines as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” It categorizes AI Systems into three subgroups:

  1. High-Risk AI Systems: Pose significant risks to people’s fundamental rights, safety, or health. (e.g., AI used in insurance, lending, resume reviewing, or exam scoring).

  2. Limited-Risk AI Systems: Pose specific transparency risks to people. (e.g., AI used to manipulate images, audio, or video to create Deepfakes).

  3. Minimal-Risk AI Systems: Pose very little risk to people’s rights, safety, or health. (e.g., AI used to create email spam filters).
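The tiering above can be sketched as a simple lookup. This is purely illustrative — the tier names come from the Act, but the use-case strings and the `classify_use_case` function are hypothetical, not anything the EU publishes:

```python
# Illustrative sketch of the EU AI Act's three risk tiers (not legal logic).
RISK_TIERS = {
    "high": {"insurance pricing", "lending", "resume screening", "exam scoring"},
    "limited": {"deepfake generation"},
    "minimal": {"spam filtering"},
}

def classify_use_case(use_case: str) -> str:
    """Return the risk tier for a known use case, defaulting to 'minimal'."""
    for tier, use_cases in RISK_TIERS.items():
        if use_case in use_cases:
            return tier
    return "minimal"

print(classify_use_case("lending"))         # high
print(classify_use_case("spam filtering"))  # minimal
```

The point of the sketch: the Act regulates by *use case*, not by model — the same underlying model could land in different tiers depending on how it’s deployed.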

🤖COLORADO’S EU-INSPIRED AI BILL, EXPLAINED.

What’s a High-Risk AI System in Colorado?

Colorado recently passed SB24-205, the first comprehensive AI legislation in the U.S. that governs the development, deployment, and use of High-Risk AI Systems. The bill uses three definitions to determine what’s considered a High-Risk AI System:

  1. Any AI System that is a Substantial Factor in making a Consequential Decision.

  2. Substantial Factors are “content, decisions, predictions, or recommendations concerning a consumer.”

  3. Consequential Decisions have a “material legal or similarly significant effect” on whether a consumer obtains any of the following: “education enrollment, employment, a loan, healthcare services, housing, insurance, an essential government service, or a legal service.”

Simply put, a High-Risk AI System is an AI System that significantly affects life outcomes. The key question is what qualifies as a “significant” effect.
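Read as a rule, the bill’s test is a conjunction: an AI System is High-Risk if it’s a Substantial Factor in a decision within one of the enumerated domains. A minimal sketch, where the domain list is quoted from the bill but the field and function names are hypothetical:

```python
# Hypothetical sketch of SB24-205's High-Risk test; names are illustrative.
from dataclasses import dataclass

# Consequential Decision domains enumerated in the bill.
CONSEQUENTIAL_DOMAINS = {
    "education enrollment", "employment", "a loan", "healthcare services",
    "housing", "insurance", "an essential government service", "a legal service",
}

@dataclass
class AISystemUse:
    domain: str               # what the decision concerns
    substantial_factor: bool  # does the AI output materially shape the decision?

def is_high_risk(use: AISystemUse) -> bool:
    """High-Risk = a Substantial Factor in a Consequential Decision."""
    return use.substantial_factor and use.domain in CONSEQUENTIAL_DOMAINS

resume_screener = AISystemUse(domain="employment", substantial_factor=True)
print(is_high_risk(resume_screener))  # True
```

The hard part in practice isn’t the conjunction — it’s that `substantial_factor` hides the very “significant effect” judgment the statute leaves open.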

High-Risk AI Systems in Employment.

So, how would SB24-205 impact employment decisions in Colorado? According to the U.S. Equal Employment Opportunity Commission (EEOC), 83% of all employers and 99% of Fortune 500 companies use “some form of automated tool to screen or rank candidates for hire.”

AI tools for resume screening would most likely be considered High-Risk AI Systems because they involve a Substantial Factor and influence a Consequential Decision. SB24-205 would require developers who create AI tools for resume screening in Colorado and deployers who use AI tools for resume screening in Colorado to implement a Risk Management Program to “specify and incorporate the principles, processes, and personnel they use to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination.”

📊LIMITATIONS OF SB24-205

Advocates of SB24-205 maintain that AI regulations are needed to prevent AI Systems from becoming biased, discriminatory, and unsafe.

However, another factor to consider is Compliance Cost. In 2021, the Center for European Policy Studies (CEPS) estimated that the EU Artificial Intelligence Act might add between 5% and 15% to a developer’s AI spending. Yet these estimates were calculated before OpenAI’s ChatGPT drove mass adoption of general-purpose AI models, which could push U.S. Compliance Costs even higher. Regardless of the actual figures, developers would pass costs onto deployers, who, in turn, would pass costs onto consumers.
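To make the CEPS range concrete, here’s a back-of-the-envelope calculation — the $10M budget is a made-up figure, not from the CEPS study:

```python
# Back-of-the-envelope Compliance Cost under the 2021 CEPS estimate (5-15%).
ai_budget = 10_000_000    # hypothetical annual AI development spend, in USD
low, high = 0.05, 0.15    # CEPS's estimated range of added compliance spending

print(f"Added cost: ${ai_budget * low:,.0f} to ${ai_budget * high:,.0f}")
# Added cost: $500,000 to $1,500,000
```

A $500K overhead is a rounding error for OpenAI but can be an existential line item for a ten-person startup — which is exactly the consolidation concern below.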

Beyond consumer impacts, complex AI regulations could burden smaller developers and deployers far more than larger ones. That could drive further consolidation in markets like Foundation Models, where large companies like OpenAI already dominate, reducing competition and limiting choice for developers and consumers alike.

SB24-205 doesn’t take effect for another year. However, as states in the U.S., such as California, Connecticut, Illinois, New York, and Texas, consider similar AI-based bills, America’s state-level AI regulations may continue to follow in the EU’s footsteps.

🔑KEY TAKEAWAY

While protecting consumers is a top priority, imposing strict AI regulations that result in high Compliance Costs could burden smaller developers, ultimately stifling innovation and hurting consumers. This balance between protecting consumers without hindering AI innovation will haunt policymakers for years to come.

📒FINAL NOTE

FEEDBACK

How would you rate today’s email?

It helps us improve the content for you!


❤️TAIP Review of The Week

“Can we get a deep dive on AI regulations?”

-Vijay (1️⃣ 👍Nailed it!)