🧠 AI Regulation: Transformative or Trivial?
PLUS: Biden's Bold Move Unveiled: The Good, The Bad, and The Ugly
Welcome back, AI prodigies!
In today's Sunday Special:
🧨 Ready or Not, Here It Comes
✅ 4 Pros
❌ 4 Cons
The Bottom Line
Read Time: 8 minutes
Key Terms
Use Case Inventories: a catalog of how AI systems are used within a federal agency.
Office of Management and Budget (OMB): oversees the implementation of the President's vision across the Executive Branch, including review of all significant Federal regulations and department-specific oversight.
Transformer: a neural network that learns context and derives meaning by tracking relationships in sequential data like the words in this sentence.
Graphics Processing Unit (GPU): a specialized semiconductor with enhanced mathematical computation ability and parallel processing (i.e., calculations performed simultaneously), making it ideal for Machine Learning (ML) and other AI applications.
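The "tracking relationships" a transformer performs is done with an attention mechanism: each token scores its relationship to every other token, then mixes their representations according to those scores. A minimal sketch in plain Python (toy vectors invented for illustration, not any production model):

```python
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores every key,
    softmaxes the scores, and returns a weighted mix of the values."""
    d_k = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Toy self-attention over 3 two-dimensional "tokens"
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(tokens, tokens, tokens)
print(len(mixed), len(mixed[0]))  # 3 2
```

Stacking many such attention layers (plus learned projections) is what lets a transformer derive meaning from word order and context.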
🧨 READY OR NOT, HERE IT COMES
Though it threatens catastrophic outcomes, the doomsday AI scenario is exceedingly unlikely. As with any black swan event, policymakers must mitigate the risk at an appropriate cost to society. Prosecuting people who maliciously create and disseminate deepfakes is a no-brainer, but should the companies that built the underlying models also be held accountable? Should the federal government vet codebases to prevent future deepfakes? To answer, we must first distinguish between process-based and outcome-based regulations.
In the highly regulated financial services industry, process-based and outcome-based standards work in tandem to protect consumers and businesses. Dual-factor authentication (i.e., process-based) is required to log in to a bank account; the mild inconvenience of punching in a verification code is worth the enhanced security. As we highlighted on Halloweekend, machine-learning algorithms detect hidden patterns in past transactions to identify fraud after the fact (i.e., outcome-based). Developers often fine-tune these algorithms to minimize false negatives (i.e., fraud occurred but wasn't detected) in favor of more false positives (i.e., fraud was flagged but didn't occur). False positives may lead to freezing compliant accounts and inconveniencing legitimate customers.
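The tuning tradeoff described above can be sketched with a toy decision threshold: lowering it catches more real fraud (fewer false negatives) at the cost of flagging more legitimate activity (more false positives). The scores and labels below are invented illustrative data, not any bank's actual model output.

```python
def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold.
    scores: model-assigned fraud probabilities; labels: 1 = actual fraud."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Hypothetical fraud probabilities and true outcomes for six transactions
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0]

for threshold in (0.9, 0.5, 0.2):
    fp, fn = confusion_counts(scores, labels, threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```

At the strict 0.9 threshold no legitimate customer is flagged but two frauds slip through; at the lenient 0.2 threshold every fraud is caught but two compliant transactions get flagged. Regulators face exactly this dial when choosing outcome-based standards.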
Undoubtedly, AI will require a precise combination of process-based and outcome-based standards. Government surveillance of codebases is process-based, regulating the systems and methods used to create user-facing applications. Prosecuting deepfake spreaders, on the other hand, punishes unethical users. How can regulators strike a balance between the two? Will process-based regulations spoil the critical ingredient of innovation (rapid iteration) without meaningful risk mitigation? In an attempt to answer such existential questions, the Biden Administration issued Executive Order (EO) 14110 to guide the development of safe, secure, and trustworthy AI. But the sweeping 111-page directive raised more questions than it answered. Nevertheless, we identified four promising elements of EO 14110.
✅ 4 PROS
Prioritizes End-User Protection: American consumers are king, and AI is ultimately no different. Biden empowered all federal agencies to "use available policy and technical tools, including privacy-enhancing technologies (PETs) where appropriate, to protect privacy and to combat the broader legal and societal risk—including the chilling of First Amendment rights—that result from the improper collection and use of people's data." Is a constitutional right to digital privacy on the horizon? Past presidential candidates have proposed something similar, but it has failed to gain traction in the halls of power.
Attracts AI Talent: Biden's EO also marks a significant push to draw much-needed technical talent to the U.S. by fortifying paths to recruit and retain foreign computer and data scientists. The order explicitly directs the Secretaries of State and Homeland Security to increase visa opportunities for "experts in AI or other critical and emerging technologies." It also directs the Secretary of Labor to publish a request for information soliciting public input on potential updates to the list of Schedule A occupations, a designation that hastens green card approval by allowing employers to bypass the labor certification process.
Delivers Holistic, Urgent Recommendations: The EO places some 150 requirements on federal agencies, calling out roughly 50 specific ones to develop guidance, conduct studies, issue recommendations, and implement policies. This whole-of-government approach to AI is virtually unprecedented in software regulation. Further, the EO sets ambitious deadlines for most requirements: roughly one-fifth fall within 90 days, and over 90% fall within a year.
Focuses on Policy Implementation: Federal agencies struggled to implement the prior two AI EOs, with 47% of agencies failing to file required AI use case inventories. EO 14110's careful definitions, including the OMB's clarification on which agencies are covered, reflect substantial consideration for implementation.
❌ 4 CONS
Given the EO's hasty deployment, limitations loom large. Critiques range from reactionary to revelatory. Fox Business's technology expert worries that it could cement wokeness into platforms. Prominent economists and AI investors fear it will stifle the growth engine of the American economy, technological innovation, without compensatory safety and security enhancements. Here are the most pertinent critiques:
Prematurely Stifles Innovation: During the PC revolution, many experts feared job loss, social unrest, and even the end of humanity, but that fear didn't lead to a robust regulatory regime for computers before the technology caused harm. If regulators in the '70s and '80s hadn't resisted the temptation to vet each Apple PC for safety and security, we may not have had widespread, affordable personal computers in our homes as quickly as we did. Premature, comprehensive regulation may also be foolish because of the staggering pace of AI innovation. A trip down memory lane tells us that a 2017 Google research paper seeded the large-language-model revolution by developing an AI architecture called a transformer, which generates new content, including text, images, and code, from patterns learned during training. In the timescale of innovation, six years is the blink of an eye. Current standards, such as stringent testing of models built on at least tens of billions of parameters, will presumably be irrelevant in a couple of years, when GPU advancements enable training trillion-parameter models. In practice, AI may evolve faster than regulations can be implemented, leaving federal regulators deploying resources to uphold irrelevant laws. Separately, geopolitical observers note that overly cautious regulation may give China the time to overtake the U.S. in the AI arms race.
Overreaches Without Authority: Critics maintain that EO 14110 assumes AI warrants regulation without a democratic process. In contrast to Congressional legislation, EOs are not accountable to the public: the back-and-forth between proponents and opponents is not publicly broadcast in venues like C-SPAN (the Cable-Satellite Public Affairs Network). Despite its authoritative bent, the EO is fundamentally feeble, as the following administration can revoke it with the stroke of a pen.
Lacks Specificity and Therefore Enforceability: Vague platitudes dot the order. The EO calls for "incorporating equity principles in AI-enabled technologies used in the health and human services sector," but it's not entirely clear which equity principles apply. Medical schools train doctors to "do no harm"; how might that translate into AI development? Standards for technical compliance are also not clearly defined. Firms must watermark "synthetic content." Although the EO defines synthetic content as "information, such as images, videos, audio clips, and text, that has been significantly modified or generated by algorithms, including by AI," many are left wondering what "significantly" means: a percentage of AI-generated pixels, or one of a litany of other potential standards?
It's Overly Idealistic: EO 14110 reads, "It is necessary to hold those developing and deploying AI accountable to standards that protect against unlawful discrimination and abuse, including in the justice system and the Federal Government. Only then can Americans trust AI to advance civil rights, civil liberties, equity, and justice for all." Given that public trust in technology companies sits at an all-time low, the notion that Americans will trust AI to uphold our most sacred values, however well it's "held accountable," is a bold assumption. If regulators hold AI models to basic discrimination laws, the models will be legally compliant, not necessarily trustworthy.
THE BOTTOM LINE
Though Executive Order 14110 is a step in the right direction, most technology leaders and academics agree that comprehensive Congressional legislation is required to balance fair competition, user privacy and safety, regulatory oversight, and consumers' and businesses' unquenchable thirst for innovative products. In practice, the executive order empowered federal agencies to lay the groundwork for action, not to enforce novel regulations. Tangentially, it appeased the signatories of a March 2023 open letter calling for a pause on AI experiments more powerful than GPT-4. What do you think the most important goal of AI regulation should be? Doomsday prevention? User privacy protection? Enabling safe yet bold innovation? Or something else? Let us know in the comments.
FINAL NOTE
If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.
How was today's newsletter?
❤️ AI Pulse Review of The Week
"Great information. Keep 'em coming!"
NOTION TEMPLATES
Subscribe to our newsletter for free and receive these powerful Notion templates:
✍️ 150 ChatGPT prompts for Copywriting
✍️ 325 ChatGPT prompts for Email Marketing
Simple Project Management Board
⏱ Time Tracker