AI regulation: While Congress fiddles, California gets it done

In the US, artificial intelligence (AI) regulation is a hot mess.

There are about 650 proposed state bills in 47 states and more than 100 federal congressional proposals related to AI, according to Multistate.ai. New York alone is home to 98 bills and California has 55. Then there are the executive orders from President Joseph R. Biden Jr. that have spun off many working groups and galvanized several government regulatory agencies.

When regulations are codified in so many ways by so many sources in so many places, the chance for conflicting directives is high — and the result could stifle business and leave loopholes in protections.

AI’s complexity adds to the confusion, as do the numerous aspects of AI that warrant regulation. The list is lengthy, including job protection, consumer privacy, bias and discrimination, deepfakes, disinformation, election fraud, intellectual property, copyright, housing, biometrics, healthcare, financial services, and national security risks.

So far, the federal government has dragged its feet on AI regulation, seemingly more focused on party politics and infighting than on crafting useful measures. As a result, Congress has not been an effective vehicle for setting regulatory policy.

The time for congressional action on AI regulation was two or three years ago. But with little being done federally, the states, particularly California, are attempting to fill the breach.

California jumps out front

California is out in front on consumer protections for AI. In 2018 — even before the public arrival of generative AI (genAI) in late 2022 — the state passed a transparency law that requires disclosure when genAI tools are used for deceptive communications to incentivize a purchase or sale of goods or services in a commercial transaction, or to influence a vote in an election. California has also passed laws addressing bias in AI-based pre-trial criminal justice tools and deepfake use in electoral campaigns, and has banned the use of facial recognition to analyze images captured by police body cams. (The state may soon finalize additional consumer protections introduced in draft form late last year.)

Among other bills, California is formulating a model-level approach to AI regulation, known as CA SB-1047. The legislation sets its sights on frontier models and the big tech companies developing them.

OpenAI defines frontier models as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.” SB-1047 would establish a new California agency to regulate large AI models and verify compliance. To be certified, developers would have to implement safeguards, security protections, and measures to prevent critical harms, and build in a mechanism that would enable a complete shutdown of the model.

This bill is the one being most closely watched by the tech industry.

Already, AI bills introduced by California and other states “are having a ripple effect globally,” according to The New York Times, quoting Victoria Espinel, CEO of the Business Software Alliance, a lobbying group representing big software companies. Causing its own ripple effect, the European Union adopted the comprehensive AI Act in March; it will be rolled out in stages beginning in 2025.

Follow the EU’s lead

Why is the US unable to formulate and legislate a unified set of AI regulations as the EU has, and to do so in a timely manner? Senate Majority Leader Chuck Schumer, D-NY, has been working on AI regulation with industry leaders, but the effort doesn’t seem to be going anywhere quickly.

We’re well past the point of debating whether regulation is needed, yet many pundits are still arguing the point as if its necessity were in doubt. Those in the US in a position to foster comprehensive regulatory policies for AI should come together, roll up their sleeves, and craft policy.

California has done a great job, but its policies are not binding outside its borders. The US is more freewheeling and supportive of business innovation than many other nations. That can be one of this country’s strengths. But genAI, and AI in general, has the potential to be as destructive as it is constructive. We ignore that risk at our peril.

The next 12 to 18 months will see significant AI legislation play out around the globe. The US is in danger of missing that timeframe. It’s time to catch up.
