Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, which passed both the California Senate and Assembly earlier this month. The law requires greater transparency from so-called frontier AI developers, which must publish on their websites exactly how they've incorporated national and international standards and industry-consensus best practices into their frontier AI frameworks. It also creates a new mechanism for reporting potential safety issues to the state and protects whistleblowers who disclose risks posed by the AI models, according to the bill's text.
Notably, S.B. 53 creates a civil penalty for noncompliance that is enforceable by the state attorney general's office. It also instructs the California Department of Technology to recommend annual updates to the law and establishes "a new consortium within the Government Operations Agency to develop a framework for creating a public computing cluster," Newsom's office said in the announcement. The consortium will be tasked with supporting the development of "safe, ethical, equitable and sustainable" AI, it said.
In a message to California state senators Monday, Newsom said the bill will "strengthen California's ability to monitor, evaluate and respond to critical safety incidents associated with these advanced systems, empowering the state to act quickly to protect public safety, cybersecurity and national security."
Newsom noted that the law was introduced after the state commissioned a report on AI prepared by a working group of experts and academics. That report, issued earlier this year, included a series of recommendations for enhancing online security and helping to build public trust "while also continuing to spur innovation in these new technologies," the governor's office said. California is home to 32 of the 50 top AI companies in the world, including Google LLC, Apple Inc. and Nvidia Corp., it said.
Among those involved in preparing the report were Mariano-Florentino Cuéllar, a former California Supreme Court justice; Fei-Fei Li, co-director of Stanford University's Institute for Human-Centered Artificial Intelligence; and Jennifer Tour Chayes, dean of the College of Computing, Data Science and Society at the University of California, Berkeley. In a joint statement Monday, the trio said S.B. 53 "moves us towards the transparency and 'trust but verify' policy principles outlined in our report."
"As artificial intelligence continues its long journey of development, more frontier breakthroughs will occur," they said. "AI policy should continue emphasizing thoughtful scientific review and keeping America at the forefront of technology."
Newsom said in his own statement that California "has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance."
He added, "AI is the new frontier in innovation, and California is not only here for it — but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves."
Last month, the parents of a California teenager who died by suicide filed a wrongful death suit claiming that OpenAI's artificial intelligence tool ChatGPT encouraged self-harm and suicidal ideation and then helped the 16-year-old plan his death. The Orange County, California, teen started using ChatGPT for help with his homework, but within months, the tool was isolating him from his family, encouraging self-harm and providing detailed suicide instructions, his parents claim in their complaint, filed in San Francisco County Superior Court.
Lawmakers in California also recently approved legislation that would require companies to apply human oversight and notify workers when using artificial intelligence tools to make employment decisions. The bill, known officially as S.B. 7 and titled the No Robo Bosses Act, bars employers in the Golden State from making certain types of employment decisions, like firing or disciplining an employee, by relying solely on an automated system.
--Additional reporting by Vin Gurrieri. Editing by Kristen Becker.