The AI Alliance, a community of technology creators, developers, and adopters collaborating to advance safe, responsible AI rooted in open innovation, respectfully opposes SB 1047, the proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill would slow innovation, thwart advancements in safety and security, and undermine California’s economic growth. Its technically infeasible requirements would chill innovation in the field of AI and reduce access to the field’s cutting edge, directly contradicting the bill’s stated purpose “to ensure that artificial intelligence innovation . . . is accessible to academic researchers and startups, in addition to large companies.” Sec. 2(d).
The AI Alliance is a community of researchers, developers, and organizational leaders committed to supporting and enhancing open innovation across the AI technology landscape. Many members of the AI Alliance are based in California or have a nexus to the state or its residents, and we are deeply concerned about the potential establishment of an anti-open-source precedent. The Alliance enables developers and researchers to accelerate responsible open innovation in AI while ensuring scientific rigor, trust, safety, security, diversity, and economic competitiveness. By bringing together this community, the AI Alliance pools resources, knowledge, and best practices to address safety concerns and provides a platform for sharing and developing solutions that benefit the whole AI ecosystem.
Examples of AI Alliance contributions. AI Alliance members contribute to and rely on open AI models, with uses demonstrating the ways open models enhance innovation, economic productivity, safety, and security, while broadening access to AI. For example:
- IBM, in partnership with NASA, has openly released the largest geospatial AI foundation model on Hugging Face.
- Stability AI is developing innovative, resource-efficient, open-access AI models.
- Snowflake is empowering businesses to build powerful AI and machine learning applications from their enterprise data.
- SeedAI is crafting frameworks for co-creation, evaluation, and decision-making regarding AI technology and policies.
- Meta launched Llama 3, which has already been downloaded over 1 million times; over 600 derivative models have been uploaded to Hugging Face.
- Databricks is democratizing AI within the enterprise space by helping thousands of organizations customize and deploy DBRX, Llama 3, and other open-source AI models for enterprise applications.
- ServiceNow, in collaboration with Hugging Face and the BigCode community of more than 1,200 members from 60+ countries, released its StarCoder family of models with the aim of introducing more responsibility and transparency into the development of large language models for code.
- AI Alliance member universities around the world are leveraging open-source AI for scientific discovery, research, and innovation.
Open-source approach is safe and secure. If the bill is to promote “Safe and Secure Innovation for Frontier Artificial Intelligence Models,” it should facilitate an AI ecosystem that accommodates the full range of release options. Historically, open source has been a more secure approach to technology development, and this remains true. The availability of model weights, for example, enables the identification and examination of model vulnerabilities, driving security enhancements for both closed and open models. Open models facilitate cross-examination and the detection of potential improvements or bugs, fostering a collaborative development environment. This perspective is shared by twenty-five leading AI experts from industry, academia, and civil society, and by over 1,200 individuals across the AI ecosystem who, in a Joint Statement on AI Safety and Openness, declared that openness and transparency are fundamental to achieving safety, security, and accountability in AI.
The bill severely penalizes responsible open-source development. While SB 1047 does not target open-source development specifically, it would affect the open-source community dramatically. The bill requires developers of AI models trained with 10^26 or more floating-point operations, or of similar performance as determined by undefined benchmarks, to implement a full shutdown control that would halt operation of the model and all derivative models. But once a model is open sourced and downloaded by a third party, the developer, by design, no longer has control over it. Before such a “shutdown switch” provision is enacted, we need to understand how it could work in an open-source context; the bill does not answer that question. No models at the 10^26 threshold are openly available today, but the technology is advancing rapidly and the open ecosystem could evolve alongside it. As written, however, the legislation would effectively freeze open-source AI development at its 2024 level.
Current exemptions are not practicable. The bill does attempt to provide some reasonable requirements for models subject to a “limited duty exemption,” including an exception to the shutdown requirement. However, few, if any, models would ever qualify for this exemption because it includes technically impracticable obligations, such as requiring developers to determine whether the exemption applies before training begins. Further, the exemption is available only if the model does not enable a “hazardous capability,” but a model is considered to have a hazardous capability “even if the hazardous capability [at issue] would not manifest but for fine tuning and post training modifications performed by third-party experts intending to demonstrate those abilities.” Because fine-tuning and post-training modifications could introduce almost anything to a model, this requirement would be nearly impossible to meet.
The bill needs a number of changes to make it effective and practicable. The bill would also affect smaller companies, which often rely on open-source models created by larger companies. Compliance costs may discourage open sourcing, depriving smaller companies of the models they need to build their businesses. We therefore recommend refocusing regulatory attention from model development to model applications (i.e., how the model is used), particularly applications that pose high risks to public safety and security.
However, if the bill retains its current structure, we recommend the following changes:
- Amending the requirements for the limited duty exemption to make it feasible for responsible open-source developers;
- Where the limited duty exemption does not apply, amending developers’ obligations so that they are limited to circumstances within the developer’s control. Without this limitation, developers would be liable for all manner of downstream uses, making open-sourcing models unviable;
- Incorporating discrete open-source exceptions throughout the bill to ensure that responsible developers can open-source models with adequate mitigations.