Texas Two-Step: State Lawmakers Grapple With AI Regulation
Insights
3.31.25
A surprising series of recent events in Austin revealed the struggle that Texas lawmakers are having in deciding how – and whether – to regulate the growing use of artificial intelligence in the workplace and elsewhere. While it appears that we may not see legislation pass this year that would restrict or control the use of AI by employers, the stage has now been set for a battle that could play out over the next few years.
Step One
The legislative session started with a bang when Rep. Giovanni Capriglione (R-District 98) introduced the Texas Responsible AI Governance Act (TRAIGA) – a potentially groundbreaking bill that aimed to force Texas employers to comply with standards that would be on par with the nation’s most comprehensive state-level AI standards. Officially styled as HB 1709, which you can read here, it sought to take a risk-based approach to AI regulation similar to European-style regulatory schemes, with significant implications for employers across industries.
- It classified AI systems by their potential impact on “consequential decisions,” defined as those that materially affect an individual’s access to or conditions of essential services, including hiring or other employment matters.
- If an AI system can substantially influence such decisions, it would be considered “high-risk” under the proposed law – and therefore employers using such tools would have to conduct semiannual impact assessments, document how these systems are trained and tested, and report steps taken to prevent algorithmic discrimination.
- That construct essentially mirrors the EU AI Act, one of the strictest regulatory systems on the globe. The proposed law would have also prohibited AI uses deemed to present “unacceptable risks,” including social scoring, developing inferences based on sensitive personal attributes (e.g., race, color, religion, disability, sex, national origin, age, etc.), capturing biometric identifiers using AI, and AI designed to manipulate human behavior.
You can read all about TRAIGA and its implications here.
Given Texas’s pro-business and low-regulation tradition, TRAIGA marked a surprising departure from what one would normally expect to see here. AI regulations of this magnitude are more commonly seen in progressive states like Colorado and Illinois, and in proposed legislation from California and New York. This led to a debate:
- Could this bill suggest that concerns about algorithmic bias and AI governance might cross ideological lines and become bipartisan issues?
- Or is this an aberrant bill doomed to fail in a red state that doesn’t want to put the brakes on innovation and business opportunities?
Step Two
We got our answer on March 14. Rep. Capriglione proposed another bill – HB 149 – that walks back TRAIGA and offers a soft-touch regulatory approach. Under the new proposal, AI developers and users wouldn’t commit “unlawful discrimination” unless someone can prove that the AI system in question was developed with “intent” to discriminate against protected classes such as race, color, religion, disability, sex, national origin, age, etc. In fact, the bill specifically states that disparate impact alone – the claim that an action might not have been intentional but nonetheless ends up resulting in a discriminatory outcome – is not sufficient to prove a claim of bias under the proposed law. This is consistent with the Department of Justice’s recent directive to narrow the use of disparate impact theories.
While it did retain some of the same prohibitions on certain socially unacceptable practices – such as manipulation of human behavior, social scoring, and capturing of biometric identifiers – it also added a new prohibition on political viewpoint discrimination.
The bill stands as a marked departure from Rep. Capriglione’s TRAIGA proposal, significantly scaling back workplace regulation and allowing much more freedom for employers and AI developers.
What’s Next?
At this point, it appears that the concept of AI regulation in Texas during this legislative session may be dead.
- TRAIGA languished for several months after it was first introduced, and then was referred to the Delivery of Government Efficiency Committee in mid-March – which happens to be Rep. Capriglione’s own committee. It must advance out of committee by May 12 in order to survive, and the introduction of the follow-up bill means that there is little to no chance of that happening with TRAIGA.
- Meanwhile, HB 149 was also sent to the same committee and had a public hearing on March 26. However, it was left pending in the committee after that hearing, which means that the committee took no final action on the bill after its initial consideration. This is a common fate for many bills in the Texas Legislature and often effectively kills the legislation, since it prevents the bill from reaching the chamber floor for debate and a vote. While the committee could revive and reconsider the bill if there is sufficient interest or pressure, that is a rare occurrence and doesn’t seem likely here.
Other States Will Lead – and Texas Will Watch
Meanwhile, a group of other states led by California and New York are barreling towards AI regulation of the workplace, eager to catch up to Colorado and Illinois. We will no doubt see restrictions emerge in these and other states in the course of the next year, and Texas will sit back and watch the impact these new laws have on innovation and growth.
What Should Employers Do Now?
While Texas lawmakers hit pause, employers shouldn’t. Even without new state mandates, AI tools used in hiring, evaluation, and management can still trigger legal risk – especially under existing anti-discrimination laws. Here’s how forward-thinking employers in Texas can prepare for the regulatory future while minimizing current liability:
- Audit your AI tools. Identify where you’re using AI in employment decisions and understand how those systems work – including data inputs, training models, and decision-making processes.
- Build documentation now. Even if Texas isn’t requiring impact assessments yet, regulators in other jurisdictions might. Start tracking how your AI tools are tested, validated, and monitored for fairness.
- Review vendor agreements. Ensure third-party AI vendors commit explicitly to bias-free and accessible solutions. Here is a list of questions you should consider asking your AI vendors before deploying new technology in your workplace.
- Train your teams. Ensure HR, legal, and IT teams understand how AI is used in your organization and are prepared to address potential concerns from employees, applicants, and regulators. Check out our step-by-step guide to developing an AI governance program here.
- Monitor national trends. With states like New York, California, and Colorado racing ahead, multistate employers must stay compliant with a patchwork of laws. Don’t wait for Texas to set the standard.
The Bottom Line
Texas may be tapping the brakes on AI regulation. But with the rise of high-stakes decisions made by algorithms, the pressure to act will only grow. The time to build responsible AI practices is now. If lawmakers don’t lead, the courts – and the public – just might.
We’ll continue to monitor developments in this ever-changing area and provide the most up-to-date information directly to your inbox, so make sure you are subscribed to Fisher Phillips’ Insight System. If you have questions, contact your Fisher Phillips attorney, the authors of this Insight, any attorney in our Texas offices, or any attorney in our AI, Data, and Analytics Practice Group.
Related People
- Amanda E. Brown, Partner
- Adam F. Sloustcher, Regional Managing Partner