Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks

In a new report, a California-based policy group co-led by Fei-Fei Li, an AI pioneer, suggests that lawmakers should consider AI risks that “have not yet been observed in the world” when crafting AI regulatory policies.

The 41-page interim report released on Tuesday comes from the Joint California Policy Working Group on Frontier AI Models, an effort organized by Governor Gavin Newsom following his veto of California’s controversial AI safety bill, SB 1047. While Newsom found that SB 1047 missed the mark, he acknowledged last year the need for a more extensive assessment of AI risks to inform legislators.

In the report, Li, along with co-authors UC Berkeley College of Computing Dean Jennifer Chayes and Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar, argues in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates like Turing Award winner Yoshua Bengio as well as those who argued against SB 1047, such as Databricks co-founder Ion Stoica.

According to the report, the novel risks posed by AI systems may necessitate laws that would force AI model developers to publicly report their safety tests, data acquisition practices, and security measures. The report also advocates for increased standards around third-party evaluations of these metrics and corporate policies, in addition to expanded whistleblower protections for AI company employees and contractors.

Li et al. write there’s an “inconclusive level of evidence” for AI’s potential to help carry out cyberattacks, create biological weapons, or bring about other “extreme” threats. They also argue, however, that AI policy should not only address current risks, but anticipate future consequences that might occur without sufficient safeguards.

“For example, we do not need to observe a nuclear weapon [exploding] to predict reliably that it could and would cause extensive harm,” the report states. “If those who speculate about the most extreme risks are right — and we are uncertain if they will be — then the stakes and costs for inaction on frontier AI at this current moment are extremely high.”

The report recommends a two-pronged strategy to boost AI model development transparency: trust but verify. AI model developers and their employees should be given avenues to report on areas of public concern, such as internal safety testing, the report says, while also being required to submit their testing claims for third-party verification.

While the report, the final version of which is due out in June 2025, endorses no specific legislation, it’s been well received by experts on both sides of the AI policymaking debate.

Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California’s AI safety regulation. It’s also a win for AI safety advocates, according to California State Senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on “urgent conversations around AI governance we began in the legislature [in 2024].”

The report appears to align with several components of SB 1047 and Wiener’s follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Taking a broader view, it seems to be a much-needed win for AI safety folks, whose agenda has lost ground in the last year.