Don’t let the RAISE Act drive New York’s innovators, researchers, and businesses away. Keep opportunity alive and protect New York’s open source community.
If the RAISE Act is signed into law, New York would become the first state to impose an AI regulatory framework far more onerous than any established framework globally, one that would threaten open source in the state and put New York’s AI community at risk of falling behind.
The RAISE Act would limit the availability of and access to US-developed AI models, discourage AI research and collaborations, undermine AI adoption, and slow the growth of AI jobs and investment. Learn the facts about the RAISE Act.
What is the RAISE Act?
- On June 12, the New York legislature passed the Responsible AI Safety and Education (RAISE) Act.
- The Act establishes unprecedented requirements for broad risk assessments, downstream reporting, and extensive disclosures by large developers of frontier models.
- The Act’s obligations fall on developers who spend more than $100 million in compute costs to train a frontier model, or more than $5 million to train a model distilled from a frontier model.
- Governor Hochul is currently assessing whether to sign or veto the Act, or send it back to the legislature for chapter amendments.
- If signed into law, the Act would take effect within 90 days.
What does the RAISE Act say?
- It regulates “Large Developers” of “Frontier Models.”
- [Link to footnote: “The RAISE Act defines a “Frontier Model” as a model trained using more than 10^26 FLOPs with more than $100 million in aggregate compute costs, or a model distilled from a frontier model with more than $5 million in aggregate compute costs.”]
- It applies to any frontier model, or distilled model, developed, deployed, or operating in New York State.
- The NY Attorney General can issue steep civil penalties:
- Up to $10 million for a first violation;
- Up to $30 million for each subsequent violation.
How would the RAISE Act impact AI jobs and investment in New York?
- The RAISE Act would create a chilling effect on the US open-source ecosystem, affecting both model developers and the downstream beneficiaries of open innovation, including startups and researchers.
- If signed into law, the Act would leave New York with a regulatory AI framework that puts its developer community at risk of falling behind as it loses the opportunity to build on the latest, most capable AI models.
- New York City alone is home to over 40,000 AI or AI-adjacent jobs, and AI is projected to generate over $320 billion in economic value for the city by 2038. Since 2018, the city has attracted more than $21 billion in VC investment, producing 35 AI unicorns, and 71% of its AI startups work in the AI application space, versus 66% globally. [Link to footnote: https://edc.nyc/ai-nyc].
Is there an exception for research and development, or internal use?
- There is no exception for research and development, or internal use.
- However, colleges and universities are exempt when engaging in academic research. This carve-out does not address the safety concerns that academically produced models might pose.
- While university-driven AI research is indispensable to advancing the field, the vast majority of the advances behind the development and adoption of the AI powering New York’s applied AI ecosystem came from industry research, often in close collaboration with academic institutions.
- In-house research labs may need to relocate R&D activity outside of the State, or cease research collaborations with New York academic partners, to avoid the uneven regulatory burden and legal liability of conducting AI activities within New York.
How does the RAISE Act impact open source AI developers?
- It creates an unworkable regulatory regime for covered open source developers:
- Open sourcing a model means making it available for wide use. Open source developers cannot possibly foresee and prevent every potential misuse of their models.
- Covered developers are expected to anticipate, evaluate, and potentially mitigate a wide range of downstream harms stemming from any use of their frontier or distilled models, under a subjective “unreasonable risk” standard.
- Because open source developers cannot control downstream uses, the Act’s subjective “unreasonable risk” standard creates extreme legal uncertainty: developers are exposed to a high degree of liability for the actions of others, which they can neither predict nor control.
- Developers are prohibited from deploying a frontier model if it would create an “unreasonable risk” of causing a critical harm.
- Developers are expected to implement downstream monitoring and to report any “safety incident,” including the unauthorized access to or use of released model weights, within 72 hours. This requirement is infeasible in the open source context, where model weights are publicly available by design.
What is at risk for New York’s open-source AI community?
- Availability of and Access to US-developed AI Models
- Development techniques like model distillation would be put at risk: a covered open-source developer would have to assess and mitigate every potential risk of any smaller model distilled from the teacher model. This is clearly impossible, and it illustrates how fundamentally the Act is at odds with both the technology and the open source ecosystem.
- Closing off New York’s open-source ecosystem would narrow the market to just a few proprietary models, reducing choice for developers and consumers.
Would the RAISE Act advance safety?
- The RAISE Act’s premise that it is needed to advance AI safety is a red herring. By limiting its scope to large developers and ignoring model capabilities entirely, the Act would not meaningfully advance safety. Instead, it would penalize well-resourced US AI developers who are investing in New York while exempting smaller foreign developers, not on the basis of model performance but solely on the basis of compute expenditure.
How would the RAISE Act impact foreign-based developers?
- Foreign-based developers falling outside the Act’s scope would remain free to develop and deploy their AI models in the US, despite the lack of any transparency into how those models were developed.