4 Biggest Employer Takeaways From California’s New AI Policy Report
Insights
3.20.25
A high-profile AI policy report commissioned by California Governor Gavin Newsom has just set the stage for potential new AI regulation that could soon impact your hiring processes, workplace surveillance, and AI-fueled decision-making. While the March 18 draft report is open for feedback until April 8 and could be revised before finalization, its recommendations are already shaping legislative discussions, including a proposed AI safety bill (SB 53) that could introduce new AI-related compliance and disclosure obligations. AI regulation is coming soon to California, and employers must prepare for this new wave. What are the four biggest takeaways from this report, and what should you do about them?
4 Biggest AI Policy Proposals That Could Affect Employers
You can read the entire 41-page report from the Joint California Policy Working Group on AI Frontier Models here – but we’ve made it easy by pulling out the four biggest policy proposals that impact the workplace below.
Mandatory AI Risk Assessments and Third-Party Audits
The report strongly emphasizes independent, third-party AI safety assessments to prevent potential harms.
What This Means for Employers:
- Businesses using AI for hiring, promotions, performance reviews, and terminations may soon be required to conduct formal risk assessments.
- Companies may need to engage third-party auditors to verify that AI tools are not introducing bias, privacy risks, or unfair employment practices.
- AI-powered systems must demonstrate compliance with risk mitigation protocols to avoid liability.
Action Steps: ✔ Conduct an internal AI audit to assess bias risks and compliance with existing laws (like FEHA, Title VII, ADA, etc.).
Transparency Requirements for AI Development and Deployment
California policymakers are increasingly focused on forcing AI companies and employers to disclose how AI models function, what data they use, and how decisions are made.
What This Means for Employers:
- Your HR and compliance teams may soon be required to explain how AI-driven hiring and workplace decisions are made.
- AI developers and deployers may be required to disclose the data sources behind AI models, ensuring they do not rely on biased or unlawfully obtained information.
- AI-powered workplace tools may need to include explainability features that clarify how they reach decisions.
Action Steps: ✔ Ensure AI vendors provide documentation on training data sources, bias mitigation, and model accuracy.
AI Whistleblower Protections and Compliance Oversight
The report advocates for stronger legal protections for employees who expose AI-related risks. This means businesses could face new liabilities if they retaliate against workers who report AI-related issues.
What This Means for Employers:
- AI whistleblowers may be protected under expanded labor laws, similar to those covering workplace safety violations.
- Employers could face penalties for failing to investigate AI-related complaints.
- Internal compliance teams will need to update whistleblower policies to incorporate AI concerns.
Action Steps: ✔ Consider updating your whistleblower and compliance policies to explicitly cover AI-related concerns.
Adverse Event Reporting and AI Incident Disclosure
The report calls for mandatory reporting systems that require companies to disclose AI-related failures, discrimination, or harm.
What This Means for Employers:
- If AI causes harm (like biased hiring decisions or data breaches), employers may soon be required to report incidents.
- Companies using AI in workforce management could face stricter documentation and reporting requirements.
- Regulators could impose penalties for failing to disclose known AI risks.
Action Steps: ✔ You might want to develop incident response protocols for AI-related risks and harm. ✔ Keep detailed records of AI-driven decisions, including hiring and performance evaluations. ✔ Assign an AI compliance officer or designate a legal team member to oversee AI reporting obligations.
What’s Next?
Again, this draft report was prepared to seek feedback from stakeholders, including employers. Your organization can submit comments through an online form by April 8. The Joint California Policy Working Group on AI Frontier Models will review all comments and incorporate them into a final report, expected by June 2025.
What Else is Brewing?
Meanwhile, several pieces of AI-related legislation are working their way through the state’s legislative process, including:
- Assembly Bill 1018 seeks to regulate AI decision-making tools in employment and other key areas, imposing strict oversight on automated decision systems (ADS) in an attempt to prevent discrimination in the workplace and elsewhere. You can read about that bill here.
- The “No Robo Bosses” Act (Senate Bill 7) also seeks to regulate the use of ADS in employment, hoping to strictly limit AI-driven tools when hiring, promoting, disciplining, and terminating workers. You can read all about that legislation here.
- Senate Bill 53 builds on failed legislative efforts (which you can read about here) by introducing whistleblower protections for AI workers, increasing transparency requirements for AI models, and potentially mandating independent risk assessments to ensure AI systems do not pose significant societal or workplace harms.
What Should You Do?
With over half of the world’s top AI companies headquartered in California, state regulations are likely to influence other states’ laws. Even businesses operating outside California should monitor these developments, as similar laws may soon emerge in your area. Make sure you engage your Legal and HR teams to consider the recommendations listed above.
If you want assistance formulating comments in response to this report, consider reaching out to our FP Advocacy Team to help shape your remarks and have your voice heard.
Conclusion
We will continue to monitor developments and provide updates as warranted, so make sure you subscribe to Fisher Phillips’ Insight System to gather the most up-to-date information on AI and the workplace. Should you have any questions on the implications of these developments and how they may impact your operations, contact your Fisher Phillips attorney, the author of this Insight, any attorney in any of our California offices, or any attorney in our AI, Data, and Analytics Practice Group.
Related People
- Benjamin M. Ebbink, Partner
- Richard R. Meneghello, Chief Content Officer