Prominent AI Scholars Back Private Governance Model in California
Press Releases
Apr 21, 2025
Academics and Researchers Support California Senate Bill 813, A Third-Way Approach to Governing Advanced Artificial Intelligence in California
SAN FRANCISCO, April 21, 2025 /PRNewswire/ — Fathom, a nonprofit building the solutions society needs to thrive in an AI-driven world, today published an open letter to California legislators signed by scholars, researchers, and thought leaders expressing support for Senate Bill 813. The group brings together a broad set of experts — from renowned computer scientists including Geoffrey Hinton and Yoshua Bengio, to AI thought leaders like Sam Hammond and Kay Firth-Butterfield.
The signers, who have dedicated their careers to research on artificial intelligence, technology policy, and governance, represent different perspectives and have historically held varying views on AI. Yet they all “agree that SB 813 stands out as the most responsive, well-designed model yet, able to adapt and evolve over time with the underlying technology.”
SB 813 authorizes the creation of Multi-stakeholder Regulatory Organizations (MROs)—private, standards-setting bodies designed to provide legal and regulatory clarity, accountability, and agility in an era of exponential AI progress. Fathom has been working on the MRO concept extensively, and was proud to support Senator McNerney’s introduction of this important, novel legislation.
The letter was signed by the following experts:
Geoffrey Hinton, University of Toronto
Yoshua Bengio, Université de Montréal; Mila-Québec
Kay Firth-Butterfield, Good Tech Advisory LLC
Samuel Hammond, The Foundation for American Innovation (FAI)
Stuart Russell, UC Berkeley
Steve Omohundro, Beneficial AI Research
Gillian Hadfield, Johns Hopkins University; University of Toronto; Vector Institute for Artificial Intelligence
Lawrence Lessig, Harvard Law School
Peter Railton, University of Michigan
Ajay Agrawal, University of Toronto
Anthony Aguirre, Future of Life Institute; UC Santa Cruz
Gregory Allen, Center for Strategic and International Studies (CSIS)
David Danks, UC San Diego
Dylan Hadfield-Menell, Massachusetts Institute of Technology
Andrew Hickl, Allen Institute
Kartik Hosanagar, The Wharton School, University of Pennsylvania
Mark Nitzberg, Center for Human-Compatible AI, UC Berkeley
Alex Pentland, Stanford University
Adam Russell, Information Sciences Institute, USC
Jacob N. Shapiro, Princeton University
S. Craig Watkins, University of Texas at Austin
The full text of the letter is copied below.
April 21, 2025
Dear California lawmakers:
As scholars, researchers, and thought leaders who have dedicated their lives to research on artificial intelligence, technology policy, and governance, we are writing to express our support for Senate Bill 813. Drawing on decades of collective research and experience across related fields, we believe that the framework outlined in this legislation is a novel and promising approach to ensuring that advanced AI systems can be an engine for innovation while enabling California to mitigate the risk of real-world harm to persons and property.
Advanced AI technology is ever-changing, which makes it incredibly difficult to envision the full range of the technology’s future capabilities, or to forecast exactly when those capabilities will come online. This dynamic complicates traditional government agencies’ ability to regulate this important technology. However, the pace of innovation does not obviate the need for sensible guardrails – to the contrary, it underscores that our society needs creative approaches to governance that allow the technology to flourish and ensure widespread adoption based on trust and legal and regulatory clarity.
SB 813 is a first-of-its-kind AI governance framework that is both nimble and built upon proven regulatory models, one that will continue to spur innovation and incentivize AI platforms to comply with state-of-the-art requirements to identify, monitor, and mitigate known, foreseeable risks. Under this “third-way” governing model, independent experts will be able to devise strong safety standards that also promote innovation while still being accountable to government leaders. This legislation harnesses the benefits of AI while also curbing its potential excesses.
SB 813 would create third-party assessments and safety standards developed by independent multi-stakeholder regulatory organizations, or MROs. The MROs would be made up of independent subject matter experts, civil society leaders, and industry leaders accredited by the California Attorney General’s Office to identify, develop, and evolve best practices for AI development. The system is entirely voluntary, and developers who choose to opt in receive tangible benefits, including certain legal protections under state law. This model has proven effective before, when other groundbreaking technologies began to change the American landscape. It will also bring more transparency to the AI industry, as the safety standards developed would be made public. This proposed structure is critically important and provides incentives for developers who opt in, allowing everyone to be part of the solution.
Many of us would agree that more has to be done to prepare for a world shaped by AI, though we may disagree on which policies are needed. However, we all agree that SB 813 stands out as the most responsive, well-designed model yet, able to adapt and evolve over time with the underlying technology.
SB 813 is a pragmatic approach to AI governance that advances critical safety standards, while preserving innovation for current and future developers. The evidence-based framework outlined will help keep residents safe and help California maintain its position as a global leader in AI development.
Yours sincerely,
Geoffrey Hinton, University of Toronto
Yoshua Bengio, Université de Montréal; Mila-Québec
Kay Firth-Butterfield, Good Tech Advisory LLC
Samuel Hammond, The Foundation for American Innovation (FAI)
Stuart Russell, UC Berkeley
Steve Omohundro, Beneficial AI Research
Gillian Hadfield, Johns Hopkins University; University of Toronto; Vector Institute for Artificial Intelligence
Lawrence Lessig, Harvard Law School
Peter Railton, University of Michigan
Ajay Agrawal, University of Toronto
Anthony Aguirre, Future of Life Institute; UC Santa Cruz
Gregory Allen, Center for Strategic and International Studies (CSIS)
David Danks, UC San Diego
Dylan Hadfield-Menell, Massachusetts Institute of Technology
Andrew Hickl, Allen Institute
Kartik Hosanagar, The Wharton School, University of Pennsylvania
Mark Nitzberg, Center for Human-Compatible AI, UC Berkeley
Alex Pentland, Stanford University
Adam Russell, Information Sciences Institute, USC
Jacob N. Shapiro, Princeton University
S. Craig Watkins, University of Texas at Austin
About Fathom:
Fathom is an independent 501(c)(3) nonprofit that finds, builds, and scales the solutions needed for our transition to a world with AI. We believe this transition must be guided by the voices and values of all people, not just technologists. Through deep public engagement, policy innovation, and cross-sector collaboration, Fathom is pioneering private governance models that match the pace of technological change while upholding democratic values. Learn more at http://fathom.org.
View original content to download multimedia: https://www.prnewswire.com/news-releases/prominent-ai-scholars-back-private-governance-model-in-california-302433352.html
SOURCE Fathom AI Inc.