New York RAISE Act Forces AI Firms to Share Safety Data

New York just dropped a bombshell on the AI industry. Governor Kathy Hochul signed the RAISE Act into law last week, making New York the first state to force major AI companies to open their safety books to public scrutiny.

The law targets large AI developers and requires them to publish detailed safety testing results and risk assessments, a move that could reshape how tech giants approach AI development and set a precedent for other states and countries to follow.

What the RAISE Act Actually Requires

The Responsible AI Safety and Education Act focuses on companies developing the most powerful AI systems. Here’s what it demands:

  • Public disclosure of safety testing protocols and results
  • Detailed risk assessments for potential harm
  • Documentation of mitigation strategies for identified risks
  • Regular updates as systems evolve or new risks emerge

Companies must file these reports with New York’s Department of State. Failure to comply carries hefty penalties of up to $10,000 per day, escalating for repeat offenses or particularly egregious violations.

Who Gets Hit by the New Rules?

The law specifically targets “high-impact AI systems,” defined as models trained using massive computing power. The threshold sits at more than 10^26 floating-point operations during training, a bar that captures only the most resource-intensive AI models currently in development.

This threshold catches major players like OpenAI, Google, Anthropic, and Meta, whose latest frontier models sit at or above the limit. For context, OpenAI’s GPT-4 reportedly required approximately 2.15 × 10^25 FLOPs, while newer frontier models likely surpass the 10^26 threshold. Smaller companies and startups remain exempt for now, though the law includes provisions for lowering the threshold as computational costs decrease and more training runs reach this scale.
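To make the threshold concrete, here’s a rough back-of-the-envelope check using the common ~6 × parameters × tokens approximation for dense transformer training compute. The heuristic and the example model sizes are illustrative assumptions, not anything defined in the law itself.

```python
# Back-of-the-envelope check against the 10^26 FLOP threshold.
# The 6 * parameters * tokens rule of thumb for dense transformer training
# compute is an assumption for illustration, not part of the law's text.

THRESHOLD_FLOPS = 1e26  # threshold cited for "high-impact AI systems"

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate training compute for a dense transformer (~6 * N * D)."""
    return 6.0 * n_parameters * n_tokens

def crosses_threshold(n_parameters: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_parameters, n_tokens) >= THRESHOLD_FLOPS

# Hypothetical example: a 1-trillion-parameter model trained on 20 trillion tokens
flops = estimated_training_flops(1e12, 2e13)
print(f"{flops:.2e} FLOPs -> covered: {crosses_threshold(1e12, 2e13)}")
# ~1.20e+26 FLOPs -> covered: True
```

The point isn’t the exact numbers, which are hypothetical, but that only a handful of training runs worldwide currently operate at this scale.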

The legislation also applies to companies that fine-tune or significantly modify existing high-impact systems, ensuring that safety disclosures cover not just initial training but also subsequent iterations and specialized versions of powerful AI models.

Why This Matters Beyond New York

New York’s move creates a template other states might copy. California and Texas are already considering similar measures, with lawmakers in both states citing New York’s approach as a model for their own legislation. The patchwork of state laws could force companies to adopt uniform transparency practices nationwide, creating a de facto federal standard through state-level action.

Tech companies hate dealing with conflicting state regulations. It’s often easier to comply with the strictest standard everywhere than to maintain different practices by state, a phenomenon known as the “California effect” in regulatory circles. This dynamic has played out in everything from data privacy to automotive emissions standards.

The law’s impact extends beyond U.S. borders. International companies developing AI systems must comply if they want to operate or sell their services in New York, creating extraterritorial effects similar to GDPR’s global influence on data privacy practices.

The Timing Is No Accident

The law arrives as concerns about AI risks reach fever pitch. Recent incidents involving AI-generated deepfakes disrupting elections, biased decision-making systems denying healthcare or financial services, and autonomous systems making critical errors have lawmakers scrambling for solutions. The 2024 election cycle saw numerous examples of AI-generated misinformation, while healthcare algorithms have repeatedly demonstrated racial and gender biases.

Congress remains stalled on federal AI legislation, with partisan divisions and industry lobbying creating gridlock. States are filling the void with their own approaches, creating a laboratory of democracy where different regulatory models compete. New York’s transparency-focused model offers one path forward, while other states explore different approaches like usage restrictions or liability frameworks.

The Biden administration’s executive order on AI, while comprehensive, lacks the enforcement mechanisms and public disclosure requirements that New York’s law provides. This regulatory gap has motivated state-level action, with governors and state legislators arguing that they’re simply protecting their constituents from emerging technological risks.

Industry Pushback Already Building

Major tech companies aren’t taking this lying down. Industry groups argue the law could expose trade secrets and harm competitiveness, claiming that detailed safety disclosures would reveal proprietary information about model architectures, training techniques, and competitive advantages.

The Software Alliance, representing Microsoft, IBM, and others, claims the requirements are “overly broad” and “technically infeasible.” They warn it could drive AI development overseas, arguing that companies might relocate research operations to jurisdictions with lighter regulatory burdens. Similar arguments were made during debates over data localization laws and privacy regulations, though empirical evidence for such “regulatory flight” remains mixed.

Trade associations have begun coordinating opposition, with the Chamber of Commerce and TechNet both issuing statements criticizing the law. They argue that safety disclosure requirements should be developed through industry consensus rather than legislative mandate, preferring self-regulatory frameworks that offer more flexibility.

Privacy advocates counter that public safety outweighs corporate concerns. They point to similar disclosure requirements in pharmaceuticals and automotive industries, where safety testing results must be made public before products reach consumers. The comparison resonates with lawmakers who see parallels between AI systems and other potentially dangerous technologies.

Legal Challenges Expected

Constitutional challenges are likely. Companies may argue the law violates interstate commerce protections, claiming that New York is improperly regulating activity that occurs primarily in other states. The Dormant Commerce Clause doctrine could provide grounds for such challenges, though courts have generally upheld state regulations that affect interstate commerce when there’s a legitimate local purpose.

First Amendment arguments are also expected, with companies potentially claiming that forced disclosure of safety information constitutes compelled speech. However, commercial speech receives less protection than political speech, and courts have upheld disclosure requirements in other regulated industries.

Previous state tech regulations have faced legal challenges with mixed results. California’s net neutrality law survived court battles after the Trump administration tried to block it, but other state laws, such as certain privacy regulations, have been struck down or modified following industry litigation.

What Companies Must Do Now

Affected companies have 180 days to file their initial reports. The clock started ticking when Hochul signed the bill, creating a tight timeline for compliance. This deadline applies retroactively to any high-impact AI systems already in development or deployment, meaning companies must document safety practices for existing systems, not just future ones.

Here’s what compliance looks like:

  1. Inventory all AI systems meeting the computing threshold, including those in development, testing, or deployment
  2. Document existing safety testing procedures, including red-teaming exercises, alignment research, and evaluation protocols
  3. Identify potential risks across multiple categories: bias, safety, security, misuse potential, and societal impacts
  4. Prepare detailed reports for public disclosure, including technical appendices and executive summaries accessible to non-technical audiences
  5. Establish processes for ongoing updates, including quarterly reviews and immediate disclosure triggers for critical discoveries
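The law doesn’t prescribe a filing format, but as a rough sketch of how a compliance team might structure the inventory and reporting steps above, something like the following Python data model could serve as a starting point. All names and fields here are hypothetical.

```python
# Minimal sketch of a disclosure record matching the compliance steps above.
# Field names and structure are hypothetical; the actual filing format is
# not described in the law's text as summarized here.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskFinding:
    category: str        # e.g. "bias", "security", "misuse potential"
    description: str
    mitigation: str

@dataclass
class SafetyDisclosure:
    system_name: str
    training_flops: float            # step 1: inventory against the compute threshold
    testing_procedures: list[str]    # step 2: red-teaming, evaluations, protocols
    risk_findings: list[RiskFinding] = field(default_factory=list)  # steps 3-4
    last_updated: date = field(default_factory=date.today)          # step 5

    def covered(self, threshold: float = 1e26) -> bool:
        return self.training_flops >= threshold

report = SafetyDisclosure(
    system_name="example-frontier-model",   # hypothetical
    training_flops=1.2e26,
    testing_procedures=["red-team exercise", "capability evaluation"],
    risk_findings=[RiskFinding("misuse potential",
                               "assists with harmful code generation",
                               "refusal training and output filtering")],
)
print(report.covered())  # True -> would need a public filing under this sketch
```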

The Documentation Burden

Companies must provide more than basic summaries. The law requires technical details about testing methodologies, failure rates across different demographic groups, and specific risk scenarios evaluated. This includes providing datasets used in safety evaluations, specific prompts or test cases that revealed problems, and quantitative metrics for measuring alignment and safety.
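For a sense of what “failure rates across different demographic groups” means in practice, here is a minimal sketch of the kind of per-group metric a disclosure might include. The data and grouping scheme are invented for illustration.

```python
# Sketch: computing per-group failure rates from evaluation results, the kind
# of quantitative metric the disclosure requirements describe. The data and
# grouping scheme below are hypothetical.
from collections import defaultdict

def failure_rates_by_group(results):
    """results: iterable of (group_label, passed: bool) pairs."""
    totals, failures = defaultdict(int), defaultdict(int)
    for group, passed in results:
        totals[group] += 1
        if not passed:
            failures[group] += 1
    return {g: failures[g] / totals[g] for g in totals}

eval_results = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]
print(failure_rates_by_group(eval_results))
# {'group_a': 0.333..., 'group_b': 0.666...}
```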

This goes far beyond current industry practices. Most companies keep safety information internal, sharing only high-level summaries through research papers or blog posts. The detailed disclosure requirements will force companies to develop new documentation processes and potentially reveal embarrassing failures or inadequacies in current safety measures.

The reporting requirements also extend to third-party safety evaluations. Companies must disclose results from external audits, red-team exercises, and academic evaluations, even when these reveal significant vulnerabilities or alignment problems.

Global Implications

New York’s law could influence international AI governance discussions. The EU’s AI Act imposes similar transparency requirements, and having a major U.S. state require public disclosure strengthens the EU’s position in negotiations with American tech companies. It demonstrates that transparency requirements aren’t simply a European preference but reflect growing global consensus.

The EU AI Act takes a risk-based approach, requiring more oversight for high-risk applications regardless of computational requirements. New York focuses on system capability regardless of use case, creating potential conflicts for companies trying to comply with both frameworks.
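A simplified sketch makes the divergence clear: the two tests key on different attributes, so a system can be covered by one framework and not the other. The EU’s risk categories are collapsed into a single flag here for illustration, and only the 10^26 figure comes from the law as described above.

```python
# Simplified, hypothetical comparison of the two coverage tests described above.
RAISE_FLOP_THRESHOLD = 1e26

def covered_by_raise(training_flops: float) -> bool:
    # New York: capability-based, regardless of use case
    return training_flops >= RAISE_FLOP_THRESHOLD

def covered_by_eu_ai_act(high_risk_use: bool) -> bool:
    # EU: risk-based, regardless of training compute (greatly simplified)
    return high_risk_use

# A small medical-triage model: high-risk use in the EU, below New York's threshold
print(covered_by_raise(5e23), covered_by_eu_ai_act(True))    # False True
# A giant general-purpose chatbot: above the threshold, low-risk use case
print(covered_by_raise(2e26), covered_by_eu_ai_act(False))   # True False
```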

Other countries watching these developments include the UK, Canada, and Singapore, all of which have proposed or implemented AI governance frameworks. New York’s approach provides a model for sub-national AI regulation that could be replicated in other federal systems or by local governments in unitary states.

The law also affects international AI research collaborations. Universities and research institutions partnering with covered companies must ensure their contributions are properly documented and disclosed, potentially complicating international research partnerships.

What Happens Next

Watch for other states to introduce copycat legislation. New York’s law becomes effective January 1, 2025, giving lawmakers elsewhere time to craft their versions. Lawmakers in California, Texas, and Illinois are already preparing similar bills, with some proposing even stricter requirements.

Industry groups will likely lobby for federal preemption, preferring one federal standard over 50 different state requirements. This strategy has worked in other tech policy areas, such as data breach notification laws where industry successfully pushed for federal standards to supersede state variations.

Conclusion

The RAISE Act makes New York the first state to require frontier AI developers to publish their safety testing and risk assessments. Legal challenges and a push for federal preemption are almost certain, but the 180-day clock is already running, and covered companies need to start documenting their safety practices now. Whatever happens in court or in Congress, the baseline for AI transparency has shifted.
