Don’t let the RAISE Act drive New York’s innovators, researchers, and businesses away.
Keep opportunity alive and protect New York’s open source community.
If the RAISE Act is signed into law, New York would become the first state to impose a regulatory framework on frontier AI development far more onerous than any established framework in the world. This approach would cause New York’s AI community to fall behind and deter open source development in the state.
The AI Alliance Community is a 501(c)(3)¹ charitable non-profit composed of a diverse community of more than 190 organizations: large and small companies and academic and non-profit institutions representing developers, researchers, and business leaders focused on accelerating and disseminating open innovation across the AI technology landscape. We aim to improve foundational capabilities, safety, security, and trust in AI, and to responsibly maximize benefits to people and society everywhere. The AI Alliance Community brings together a critical mass of compute, data, tools, and talent to accelerate open innovation in AI.
What is the RAISE Act?
The New York Responsible AI Safety and Education (RAISE) Act would limit the availability of and access to US-developed AI models, discourage AI research and collaborations, undermine AI adoption, and slow the growth of AI jobs and investment.
The RAISE Act is a bill that would impose steep liability on “Large Developers” who release open source AI “Frontier Models” if others use or modify those models in ways that cause harm, incentivizing large developers to stop releasing open AI models.²
In an attempt to prevent unknown and hypothetical “critical harms” from potentially occurring in the future, the Act establishes unprecedented obligations for developers of frontier AI models: risk assessment requirements that are unworkable, especially for open source developers; infeasible downstream reporting; and burdensome disclosure requirements.³
This would create a chilling effect on the US open source ecosystem, affecting both model developers and the downstream beneficiaries of open innovation, including startups and researchers.
- Developers who spend more than $100 million in aggregate compute costs on a frontier model, or more than $5 million on a model distilled from a frontier model, would be regulated under the Act.
- Liability and requirements apply to any frontier model, or distilled model, developed, deployed, or operating in New York State.
- The NY Attorney General can seek steep civil penalties: up to $10 million for a first violation and up to $30 million for any subsequent violation. (For a concrete illustration of these thresholds and penalty caps, see the sketch below.)
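To make the coverage math concrete, here is a minimal sketch in Python, assuming the thresholds in footnote 2 and the penalty caps above. The function and constant names are ours, not the Act’s; the statutory text controls.

```python
# Illustrative sketch only; names are ours, not statutory terms.
FRONTIER_FLOPS = 1e26            # training compute threshold (footnote 2)
FRONTIER_COST_USD = 100_000_000  # aggregate compute cost, frontier model
DISTILLED_COST_USD = 5_000_000   # aggregate compute cost, distilled model


def is_frontier_model(flops: float, compute_cost_usd: float,
                      distilled_from_frontier: bool) -> bool:
    """True if a model meets the Act's 'frontier model' definition."""
    if flops > FRONTIER_FLOPS and compute_cost_usd > FRONTIER_COST_USD:
        return True
    return distilled_from_frontier and compute_cost_usd > DISTILLED_COST_USD


def penalty_cap_usd(violation_number: int) -> int:
    """Maximum civil penalty the NY Attorney General can seek."""
    return 10_000_000 if violation_number == 1 else 30_000_000


# Example: a $6 million distillation of a frontier model is itself covered.
assert is_frontier_model(1e24, 6_000_000, distilled_from_frontier=True)
```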
What is the status of the RAISE Act?
The RAISE Act passed the New York state legislature on June 12, 2025 and now awaits Governor Hochul’s signature, veto, or return to the legislature for chapter amendments. If signed into law, the Act will take effect in 90 days.
How would the RAISE Act impact jobs and investment in New York?
New York State is third in the nation for high-tech employment, with 400,000 New Yorkers working in high-tech positions.⁴
This chilling effect on the US open source ecosystem would reach both model developers and the downstream beneficiaries of open innovation, the startups and researchers who power the state’s high-tech workforce.
If signed into law, the Act would make New York home to a regulatory framework that puts the state’s developer community at risk of falling behind as it loses the opportunity to build on the latest, most capable AI models.
- New York City alone is home to over 40,000 AI or AI-adjacent jobs, and the projected economic value of AI to the city exceeds $320 billion by 2038. Since 2018, AI startups in the state have attracted more than $21 billion in VC investment, producing 35 AI unicorns; 71% of the state’s AI startups work in the AI application space, versus 66% globally.⁵
Is there an exception for research and development, or internal use?
There is no exception for research and development or for internal use.
However, colleges and universities are exempt when engaging in academic research, a carve-out that does not address the safety concerns that academically developed models might pose.
While university-driven AI research is indispensable to advancing the field, the vast majority of the advances that power New York’s applied AI ecosystem came from industry research, often conducted in close collaboration with academic institutions.
In-house research labs may need to relocate R&D activity outside the state, or end research collaborations with New York academic partners, to avoid the uneven regulatory burden and legal liability of conducting AI activities within New York.
How does the RAISE Act impact open source AI developers?
It creates an unworkable regulatory regime for covered open source developers:
- Open sourcing a model means making it available for wide use. Open source developers cannot possibly foresee and prevent every potential misuse of their models.
- Covered developers are expected to anticipate, evaluate, and potentially mitigate a wide range of potential downstream harms stemming from any use of their frontier models, or distilled models, under a subjective “unreasonable risk” standard.
- Because open source developers cannot control downstream uses of their models, the Act’s subjective “unreasonable risk” standard creates extreme legal uncertainty: developers face substantial liability for the actions of others, which they can neither predict nor control.
What is at risk for NY’s open source AI community?
Availability of and Access to US-developed AI Models
- Development techniques like model distillation would be put at risk: a covered open source developer would have to assess and mitigate an indefinite range of potential downstream risks, including those posed by any smaller model distilled from the teacher model (see the sketch after this list). Forecasting and bearing risks taken by other actors is impossible, which illustrates how the Act is fundamentally at odds with both the technology and the open source ecosystem.
- Closing off New York’s open source ecosystem will narrow the market to just a few proprietary models, reducing choice for developers and consumers.
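For readers unfamiliar with distillation, the toy sketch below (Python with PyTorch; the models, sizes, and data are hypothetical, standing in for no particular released model) shows how a third party trains a smaller “student” model to imitate a released “teacher” model on the third party’s own data. The original developer never sees this training run, yet under the Act would have to anticipate and bear its risks.

```python
# Toy illustration of knowledge distillation; all models and data are
# hypothetical stand-ins, not any real released model.
import torch
import torch.nn.functional as F

# Stand-in for a released "teacher" (frontier) model.
teacher = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 8))
# A smaller "student" model built by a downstream third party.
student = torch.nn.Linear(16, 8)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # softmax temperature used in distillation

for _ in range(100):
    x = torch.randn(32, 16)  # the third party's own data, never seen upstream
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=-1)
    student_log_probs = F.log_softmax(student(x) / T, dim=-1)
    # Train the student to match the teacher's output distribution.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```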
Would the RAISE Act advance safety?
The RAISE Act’s premise, that the Act is needed to advance AI safety, is inconsistent with its own requirements: the Act does not account for the capabilities of AI systems, and it carves out some smaller developers regardless of the risks their models may pose. By limiting its scope to “large” developers and ignoring model capabilities entirely, the RAISE Act will not meaningfully advance safety. Instead, it would penalize U.S. AI developers who are investing in New York while excluding smaller foreign developers, based not on model performance but solely on compute expenditure.
How would the RAISE Act impact foreign-based developers?
Foreign-based developers that fall outside the Act’s scope would remain free to develop and deploy their AI models in the US, despite the lack of any transparency into how their models were developed, giving them a competitive edge over New York developers.
¹ The AI Alliance Community is in the process of applying for, but has not yet received, an IRS determination of 501(c)(3) charitable status.
² The RAISE Act defines a “Frontier Model” as a model trained using more than 10^26 FLOPs at an aggregate compute cost of more than $100 million, or a model distilled from a frontier model at an aggregate compute cost of more than $5 million.
³ “Critical harm” under the bill means “the death or serious injury of one hundred or more people or at least one billion dollars of damages to rights in money or property caused or materially enabled by a large developer’s creation, use, storage, or release of a frontier model…”
⁴ Source: https://esd.ny.gov/industries/tech-and-electronics
⁵ Source: https://edc.nyc/ai-nyc