Newsom vs. Privacy Watchdog? Why the Battle Over California’s Proposed AI Rules Could Reshape the Future for Employers
4.28.25
California’s privacy regulator intends to advance sweeping new rules that would govern AI tools used for automated decision-making purposes – but Governor Newsom just stepped in and signaled concern that these rules could stifle innovation and drive AI companies out of the state. The outcome of the debate between the Governor and the California Privacy Protection Agency (CPPA) will have nationwide implications for businesses using automated decision-making technologies (ADMTs), so you should get familiar with these proposed rules and Newsom’s April 23 opposition letter whether you operate in California or not.
What’s Happening? California Regulators Propose Sweeping AI Rules
The CPPA has proposed rules that would impose strict requirements on businesses using ADMTs for:
- Making a “significant decision” concerning a consumer (e.g., access to or denial of employment or independent contracting opportunities, or compensation)
- Extensive profiling of a consumer
- Training uses of ADMT “which are processing consumers’ personal information to train the [ADMT] that is capable of being used for any of the following: (A) for a significant decision concerning a consumer; (B) to establish individual identity; (C) for physical or biological identification or profiling; or (D) for the generation of a deepfake.”
Key requirements under the draft include:
- Pre-use notices to employees and job applicants explaining how AI tools are used
- Opt-out rights in certain situations
- Detailed risk assessments
- Access and explanation rights for individuals impacted by automated decisions
You can read a full summary of this proposal here.
These proposals would go beyond anything currently required in the United States, pulling from international models like the EU’s GDPR but layering on California-specific standards. If adopted anywhere close to their current form, the rules would drastically increase the regulatory burdens on businesses relying on automation to streamline processes.
Newsom’s Intervention: A Rare Warning Shot
In an unusual move, Governor Gavin Newsom sent an April 23 letter to the CPPA urging caution. He acknowledged the importance of protecting Californians but warned that overly broad or restrictive rules could have “unintended consequences” for innovation and economic growth.
As Newsom put it: “While regulation is necessary, it must be crafted thoughtfully to avoid chilling innovation or imposing onerous burdens that could stifle California’s leadership in emerging technologies, including artificial intelligence.”
The letter further emphasized that regulations should be “clear, reasonable, and focused” to “promote responsible innovation while safeguarding individual rights.”
Newsom’s letter reflects broader concerns that overregulating AI tools at this early stage could:
- Push companies to relocate development outside California
- Create confusing compliance obligations across industries
- Hamstring economic competitiveness in the national and global AI race
Business and Labor Are Flooding the Debate
Newsom is not the only one offering a cautious opinion about the draft regulations. Media reports indicated that the CPPA received more than 600 public comments, including from some major tech companies and business associations – many warning that the rules could drive AI development out of the state.
On the other side, labor and advocacy groups like the ACLU of Northern California, the California Labor Federation, and the California Nurses Association weighed in to support the regulations. They contend that these protections are essential to prevent discrimination, surveillance abuses, and unchecked corporate power over workers and consumers.
What’s Next? Uncertainty, Litigation – and a National Ripple Effect
The CPPA is still in the process of revising its draft regulations. We may see significant changes before any final adoption — but employers should not expect the issue to disappear altogether.
- If California finalizes these aggressive rules, other states could follow.
- At the same time, if California scales back, we may see a patchwork of local, state, and industry-specific standards emerge across the country.
Litigation over the proposed rules is all but certain. Expect companies and business associations to challenge the regulations on constitutional or procedural grounds, or to argue that they conflict with existing federal law. The proposal also still has many administrative hoops to jump through before any potential adoption.
What Employers Should Be Doing Right Now
Regardless of whether the rules are finalized in their current form or scaled back, employers should treat this moment as a call to action to tighten their practices around automated decision-making. Here are three steps to consider now.
- First, assess all AI tools used within your organization and do a deep dive into each system itself – whether proprietary or third-party supported. Any investigation into a system should include, among other things, confirming its intended function or use; identifying what data is (or was) used to fuel and train the tool (including whether your own data will be used); evaluating the quality of the training data; understanding the intended output; reviewing the processes for identifying and mitigating potential bias; confirming the cadence for testing and analyzing results; and determining any audit rights customers may have.
- Second, establish an AI governance policy outlining a framework for the responsible and ethical use of AI within your organization. The policy should cover areas such as risk management, bias and fairness, transparency, oversight, and training. In addition to an AI governance policy, consider implementing other relevant AI policies such as a Gen AI Acceptable Use Policy or a vendor management policy and checklist. A good place to start? Our 10-step AI governance plan.
- Third, establish guidelines for managing relationships with the vendors that develop, supply, and/or support the AI technology used within your organization. Consider maintaining a vendor questionnaire to help guide a risk assessment before AI tools are deployed. If you are an AI developer, consider internal discussion and analysis of your potential exposure given the new definition of “agent” under the regulations, and anticipate an influx of questions from customers seeking information and clarity about your systems. Here are some key questions you should consider asking your AI vendors when establishing a new relationship.
Want to Learn More About AI? Join Fisher Phillips for its third annual AI Conference for business professionals this July 23-25 in Washington, D.C. Learn more and register here.
Conclusion
We will continue to monitor new developments and provide updates, so make sure you subscribe to Fisher Phillips’ Insight System to gather the most up-to-date information on AI and the workplace. Should you have any questions about the implications of these developments and how they impact your operations, contact your Fisher Phillips attorney, the authors of this Insight, any attorney in any of our California offices, or any attorney in our AI, Data, and Analytics Practice Group.
Related People
- Anne Yarovoy Khan, Of Counsel
- Benjamin M. Ebbink, Partner