As a fellow AI researcher, I have enormous respect for Dr. Fei-Fei Li’s scientific contributions to our field. However, I disagree with her recently published stance on California’s SB 1047. I believe this bill represents a crucial, light-touch, and measured first step toward ensuring the safe development of frontier AI systems and protecting the public.
Many experts in the field, including myself, agree that SB 1047 outlines a bare minimum for effective regulation of frontier AI models against foreseeable risks, and that its compliance requirements are intentionally light rather than prescriptive: instead of dictating methods, it relies on model developers to self-assess risk and to implement basic safety testing. It also applies only to the largest AI models (those costing over $100 million to train), which ensures it will not hamper innovation among startups or smaller companies. Its requirements align closely with voluntary commitments many leading AI companies have already made, notably to the White House and at the Seoul AI Summit.
We cannot let corporations grade their own homework and simply put out nice-sounding assurances. We don’t accept self-policing in other high-stakes sectors such as pharmaceuticals, aerospace, and food safety. Why should AI be treated differently? It is important to move from voluntary to legally binding commitments to level the competitive playing field among companies. I expect this bill to bolster public confidence in AI development at a time when many are questioning whether companies are acting responsibly.
Critics of SB 1047 have asserted that the bill will punish developers in a manner that stifles innovation. This claim does not hold up to scrutiny. It is common sense for any sector building potentially dangerous products to be subject to regulation that ensures safety; we already do this in everyday life, for everything from automobiles to electrical appliances to home construction. Although hearing perspectives from industry is important, the solution cannot be to abandon a bill as targeted and measured as SB 1047. Instead, I am hopeful that, with additional key amendments, the main concerns from industry can be addressed while staying true to the spirit of the bill: protecting both innovation and citizens.
Another topic of particular concern for critics has been the potential impact of SB 1047 on the open-source development of cutting-edge AI. I have been a lifelong supporter of open source, but I don’t view it as an end in itself that is good in all circumstances. Consider, for instance, the recent case of an open-source model being used at massive scale to generate child pornography. This illegal activity violates the developer’s terms of use, but now that the model has been released, it can never be recalled. With far more capable models in development, we cannot wait for their open release before acting. For open-source models much more advanced than those that exist today, complying with SB 1047 will require more than a trivial box-checking exercise such as declaring “illegal activity” off limits in the terms of service.
I also welcome the fact that the bill requires developers to retain the ability to quickly shut down their AI models, but only those that remain under their control. This exception was explicitly designed to make compliance possible for open-source developers, who cannot shut down copies running outside their control. Overall, finding policy solutions for highly capable open-source AI is a complex issue, but the acceptable balance of risks and benefits should be decided through a democratic process, not by the whims of whichever AI company is most reckless or overconfident.
Dr. Li calls for a “moonshot mentality” in AI development, and I agree deeply. But I also believe this AI moonshot requires rigorous safety protocols: we cannot simply hope that companies will prioritize safety when the incentives to prioritize profits are so immense. Like Dr. Li, I would prefer to see robust AI safety regulation at the federal level. But Congress is gridlocked and federal agencies are constrained, which makes state action indispensable. California has led the way before, on green energy and consumer privacy, and it has a tremendous opportunity to lead again on AI. The choices we make about this technology now will have profound consequences for current and future generations.
SB 1047 is a positive and reasonable step toward advancing both safety and long-term innovation in the AI ecosystem, not least by incentivizing research and development in AI safety. This technically sound legislation, developed with leading AI and legal experts, is urgently needed, and I hope Governor Gavin Newsom and the California legislature will support it.