Responsible AI Safety and Education Act

New York State Legislature
Full name: Responsible AI Safety and Education Act
Introduced: March 5, 2025
Sponsor(s): Andrew Gounardes, Alex Bores
Governor: Kathy Hochul
Bill: S6953-B / A6453-B
Website: Chapter amendment text (A9449)
Status: Current legislation

The Responsible AI Safety and Education Act (RAISE Act) is a New York State law that imposes transparency, safety, and reporting requirements on developers of large frontier artificial intelligence models. The law was signed by Governor Kathy Hochul on December 19, 2025.[1] It was sponsored by State Senator Andrew Gounardes and Assemblymember Alex Bores.[2]

The RAISE Act is the second U.S. state law to regulate frontier AI model developers, following California's Transparency in Frontier Artificial Intelligence Act (TFAIA), which was signed in September 2025.[3] Hochul signed the bill on the condition that the legislature would pass chapter amendments to bring the law closer to the California model. The amending bills (A9449/S8828) were introduced in January 2026; as of February 2026 they remain in committee, though the Governor's office and legal commentators treat the agreed-upon amendments as representing the final form of the law.[4][5][6][7]

Provisions

The following describes the RAISE Act as it is expected to operate after the agreed-upon chapter amendments take effect. The law's effective date is January 1, 2027.[5][7]

Scope

The law applies to "large frontier developers," defined as companies with annual revenues exceeding $500 million that develop "frontier models," which are foundation models trained using more than 10²⁶ floating-point operations (FLOPs).[8][5] The version passed by the legislature in June 2025 had instead defined large developers based on having spent over $100 million in aggregate compute costs, and also included a provision prohibiting deployment of frontier models posing "unreasonable risk of critical harm"; both were removed as part of the negotiations between Hochul and the legislature.[8][9] Accredited colleges and universities engaged in academic research are exempt, as is the state's Empire AI consortium.[5]

Safety and transparency framework

Large frontier developers must write, implement, and publish a "frontier AI framework" describing how they assess and mitigate catastrophic risks, secure unreleased model weights against unauthorized access, use third-party evaluators, govern internal use of frontier models, and respond to safety incidents. The framework must describe these measures "in detail," a requirement that goes beyond the California TFAIA's requirement to describe a developer's "approach."[5][9] The framework must be reviewed at least annually, and material modifications must be published with justification within 30 days.[1][5]

Before or concurrently with deploying a new or substantially modified frontier model, developers must publish a transparency report including the model's release date, supported languages and output modalities, intended uses, and any restrictions on use. Large frontier developers must additionally include summaries of catastrophic risk assessments and the extent of third-party involvement.[5]

Catastrophic risk and incident reporting

The law defines "catastrophic risk" as a foreseeable and material risk that a frontier model will contribute to the death of, or serious injury to, more than 50 people, or to more than $1 billion in property damage, through one of three pathways: providing expert-level assistance in creating chemical, biological, radiological, or nuclear weapons; engaging in cyberattacks, or in conduct equivalent to crimes such as murder, assault, or theft, without meaningful human oversight; or evading the control of its developer or user.[5] Loss of equity value is explicitly excluded from the definition of property damage.[5]

"Critical safety incidents" include unauthorized access to model weights resulting in death or injury, materialization of a catastrophic risk, loss of control of a frontier model causing death or injury, and a model using deceptive techniques to subvert developer controls outside of an evaluation context in a manner that increases catastrophic risk.[5]

Frontier developers must report critical safety incidents within 72 hours, or within 24 hours if the incident poses an imminent risk of death or serious physical injury.[5]

Enforcement

The chapter amendments establish a new office within the New York State Department of Financial Services to oversee compliance, receive incident reports, and publish annual reports on AI safety beginning in 2028. Large frontier developers must file disclosure statements with this office and pay pro rata assessments to fund its operations.[5] The New York Attorney General may bring civil actions, with penalties of up to $1 million for a first violation and $3 million for subsequent violations.[5][8] The version passed by the legislature in June 2025 had set penalties at up to $10 million and $30 million respectively.[7] The law does not create a private right of action.[5]

Legislative history

The bill was introduced in the Assembly on March 5, 2025, by Assemblymember Alex Bores, and in the Senate on March 27, 2025, by Senator Andrew Gounardes.[2] After a series of amendments, the legislature passed the bill in June 2025.[3][10]

Governor Hochul did not immediately sign the bill, using nearly all the time available under New York law before acting; had she not signed by the end of 2025, the bill would have been pocket vetoed.[10] The tech industry lobbied against the bill during this period, and Hochul initially proposed a near-complete rewrite modeled on California's TFAIA.[3][9] Legislators resisted the extent of the changes, and the two sides ultimately agreed on a version that used the California law as a base but preserved several provisions that went beyond it, including the 72-hour incident reporting timeline and the creation of a dedicated enforcement office.[9]

Hochul signed the original bill (S6953-B/A6453-B) on December 19, 2025, with the legislature committing to pass chapter amendments formalizing the agreed changes in the January 2026 session.[3][8] The amending bills (A9449 in the Assembly, S8828 in the Senate) were introduced on January 6 and January 8, 2026.[5][6]

OpenAI and Anthropic expressed support for the law. Anthropic's head of external affairs Sarah Heck said the two state laws "should inspire Congress to build on them."[3] The super PAC network Leading the Future, backed by Andreessen Horowitz and OpenAI president Greg Brockman, subsequently announced plans to challenge Bores in a future election.[11][3]

Federal preemption debate

Hochul signed the RAISE Act eight days after President Donald Trump issued an executive order on December 11, 2025, directing the Department of Justice to challenge state AI laws deemed to conflict with a "minimally burdensome" national AI policy.[12] On January 9, 2026, the Department of Justice announced the establishment of an AI Litigation Task Force as called for by the executive order.[12] The executive order also threatened states with loss of certain federal broadband funding if their AI laws were found to be onerous.[12]

Legal commentators have noted several potential avenues for federal challenge, including arguments that the law constitutes compelled speech, violates the dormant Commerce Clause by creating a patchwork of state regulations, or is preempted by federal AI policy.[4]

Comparison with California's TFAIA

The RAISE Act was designed to align with California's Transparency in Frontier Artificial Intelligence Act, signed on September 29, 2025. Both laws use the same 10²⁶ FLOP threshold to define frontier models and the same $500 million revenue threshold to define large developers. Both require public safety frameworks, transparency reports, and incident reporting.[8]

The RAISE Act's 72-hour incident reporting window is stricter than the TFAIA's 15-day window, though both require faster reporting for incidents posing imminent physical risk (24 hours under the RAISE Act, immediate under the TFAIA).[8][12] The RAISE Act establishes a dedicated enforcement office within the Department of Financial Services, whereas California routes reports through the Office of Emergency Services.[8] The RAISE Act requires developers to describe their safety measures "in detail" and how they "handle" various risks, whereas the TFAIA requires developers to describe their "approach."[9]

References

  1. "Governor Hochul Signs Nation-Leading Legislation to Require AI Frameworks for AI Frontier Models" (Press release). Office of the Governor of New York. December 19, 2025. Retrieved February 27, 2026.
  2. "Landmark AI Safety Bill Signed Into Law" (Press release). New York State Senate. December 19, 2025. Retrieved February 27, 2026.
  3. Ha, Anthony (December 20, 2025). "New York Governor Kathy Hochul signs RAISE Act to regulate AI safety". TechCrunch. Retrieved February 27, 2026.
  4. "New York Enacts RAISE Act for AI Transparency Amid Federal Preemption Debate". Davis Wright Tremaine. December 2025. Retrieved February 27, 2026.
  5. "A09449 Summary". New York State Assembly. Retrieved February 27, 2026.
  6. "Senate Bill S8828". New York State Senate. Retrieved February 27, 2026.
  7. Loring, Jason M. (January 2, 2026). "New York's RAISE Act: What Frontier Model Developers Need to Know". Jones Walker. Retrieved February 27, 2026.
  8. Tobey, Danny; Carr, Ashley; Atleson, Michael (December 22, 2025). "The RAISE Act: New York joins California in requiring developer transparency for large AI models". DLA Piper. Retrieved February 27, 2026.
  9. Ngo, Mara (December 20, 2025). "Hochul signs watered down AI regs, but lawmakers still got some wins". City & State New York. Retrieved February 27, 2026.
  10. "Hochul enacts New York's AI safety and transparency bill". IAPP. December 22, 2025. Retrieved February 27, 2026.
  11. Breuninger, Kevin; Wellons, Mary Catherine (November 17, 2025). "AI industry-backed super PAC targets New York Democrat in opening shot of midterms". CNBC. Retrieved February 28, 2026.
  12. Peretti, Kim; Everett, Jennifer; Hilsen, Scott; Simmons, Dorian; Villar, Santi (January 21, 2026). "New York Regulates Large Artificial Intelligence Models". Alston & Bird. Retrieved February 27, 2026.