Federal Agencies Say Employer Use of AI and Hiring Algorithms May Lead to Disability Bias: 5 Key Takeaways
Insights
5.16.22
Employers can benefit from using software programs to streamline their hiring process, but federal agencies just sent a stern warning that relying on algorithms and artificial intelligence (AI) to make staffing decisions might unintentionally lead to discriminatory employment practices, including disability bias. If you incorporate AI tools into your selection process, a pair of guidance documents just issued by the Department of Justice (DOJ) and Equal Employment Opportunity Commission (EEOC) on May 12 suggest that you regularly monitor for bias and provide reasonable accommodations as needed. How can you utilize the latest technology without running afoul of the Americans with Disabilities Act (ADA)? Here are five key takeaways from the latest guidance.
- Identify AI and Algorithms You Use in the Hiring Process
In a tight labor market with record-high job openings, many employers are turning to technology to help ease staffing burdens. It’s estimated that over 80% of all employers use some form of AI or algorithms to assist with all types of HR functions. So, the first step in ensuring compliance is to inventory all AI or algorithm-based tools that your company uses for HR functions. This is vital, especially in larger companies, where the use of these tools may not be well-known.
Software programs may use algorithms – rule sets that computers follow, often based on patterns – and artificial intelligence – which means a computer is performing tasks that are typically done by a person.
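To make that distinction concrete, here is a minimal sketch in Python. The first function is an "algorithm" in the agencies' sense: a fixed rule set a human wrote down. The second delegates the same screening decision to a model trained on historical hiring data, which is where patterns no one explicitly programmed – including biased ones – can creep in. All function and field names here are hypothetical, not drawn from the guidance or any particular product.

```python
from sklearn.linear_model import LogisticRegression

# A fixed, human-written rule set: an "algorithm" in the guidance's sense.
def rule_based_screen(applicant: dict) -> bool:
    """Advance applicants with 2+ years of experience and a required certification."""
    return applicant["years_experience"] >= 2 and applicant["has_certification"]

# An AI-style screen: the decision logic is *learned* from past hiring outcomes,
# so it can absorb patterns (and biases) that no human explicitly wrote down.
def train_ai_screen(past_features, past_outcomes):
    """past_features: rows of applicant attributes; past_outcomes: 1 = hired, 0 = not."""
    model = LogisticRegression().fit(past_features, past_outcomes)
    return lambda features: bool(model.predict([features])[0])
```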
According to the DOJ’s guidance, employers might use technology for the following reasons:
- To show job advertisements to targeted groups;
- To decide if an applicant meets job qualifications;
- To interview applicants online;
- To administer computer-based tests that measure an applicant’s skills or abilities; and
- To score applicant resumes.
In the hiring process, algorithmic decision-making might include the use of:
- Chatbots and virtual assistants that ask questions to screen job candidates;
- Scanners that rank resumes by keywords (see the sketch after this list);
- Software that monitors and rates employees based on keystrokes;
- Technology that evaluates candidates based on facial expressions and speech patterns; and
- Testing software that scores applicants based on personality traits or skills.
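As a concrete illustration of the keyword-based resume scanner mentioned above, here is a minimal, hypothetical sketch. The keywords and weights are invented for illustration; commercial tools are far more sophisticated, but the basic failure mode is the same: a qualified candidate who describes their experience in different words scores lower through no fault of their own.

```python
# Minimal keyword-based resume ranker (hypothetical; for illustration only).
KEYWORDS = {"python": 3, "sql": 2, "leadership": 1}  # invented weights

def score_resume(text: str) -> int:
    """Score a resume by weighted keyword hits -- crude, and easy to bias."""
    words = text.lower().split()
    return sum(weight * words.count(kw) for kw, weight in KEYWORDS.items())

resumes = {
    "candidate_a": "built python and sql pipelines for reporting",
    "candidate_b": "led analytics teams and designed data systems",
}
ranked = sorted(resumes, key=lambda name: score_resume(resumes[name]), reverse=True)
print(ranked)  # candidate_b may be equally qualified but ranks last
```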
Employers can use technology to “save time and effort, increase objectivity, or decrease bias,” but “the use of these tools may disadvantage job applicants and employees with disabilities,” the EEOC noted in its technical assistance document.
Further, state legislatures are already addressing these issues. For example, Illinois recently passed the Artificial Intelligence Video Interview Act, which imposes strict obligations on employers who use software to analyze video interviews, including specific notice requirements, limits on information sharing, and reporting duties. The state of Washington is considering a similar law. Expect many other states and cities to follow suit.
- Provide Reasonable Accommodations
The ADA requires most employers to provide reasonable accommodations to job applicants and employees with disabilities unless doing so would cause an undue hardship.
You can set qualification standards that are job-related and consistent with business necessity, according to the DOJ and EEOC, but you need to explore reasonable accommodations that will enable applicants and employees with disabilities to meet those standards – especially if you are using AI or algorithms as part of the hiring process.
Reasonable accommodations are changes that you can make to help an applicant with a disability apply for a job. According to the guidance, you may tell applicants or employees what steps an evaluation process includes and ask them whether they will need reasonable accommodations to complete it. For example, you may offer specialized equipment or alternative testing formats. But you don’t have to lower production or performance standards, eliminate an essential job function, or provide an accommodation that would create an undue hardship.
The DOJ noted that existing technical standards, such as the Web Content Accessibility Guidelines, provide helpful guidance on how to ensure website features are accessible to people with disabilities, including those who are blind.
Additionally, the EEOC provided a list of accessibility questions for employers to ask software vendors. For example, “Are the materials presented to job applicants or employees in alternative formats? If so, which formats?”
- Regularly Review Programs for Potential Bias and Unintentional “Screening Out”
Be sure to review your hiring tools for possible bias before you use them and periodically thereafter, since even technology is not immune from bias. An algorithm may not be designed to screen out candidates based on a protected category, but if it is modeled on the qualities of your top-performing employees, it may unintentionally screen out a disproportionate number of qualified candidates in a protected category. Many vendors claim their tools are “bias-free,” but you should look closely at which biases the technology claims to eliminate. For example, a tool may be focused on eliminating race, sex, national origin, color, or religious bias, but not disability bias. Keep in mind that there are many types of disabilities, and hiring technologies may impact each in a different way, the DOJ noted.
As an example, the EEOC said employers that administer pre-employment tests measuring personality, cognitive, or neurocognitive traits may want to consult neurocognitive psychologists to help identify and correct ways the tests might unintentionally screen out people with autism or cognitive, intellectual, or mental health-related disabilities.
In addition to monitoring AI tools for potential disability bias, you should ensure your selection process doesn’t adversely impact job candidates based on other protected characteristics. For example, if your system automatically rejects candidates who live more than 20 miles from the worksite, you may be unintentionally limiting the ethnic and racial diversity of the candidates you consider, depending on the demographics of the area. Similarly, if the AI tool only considers applicants from certain schools or with minimum education credentials, it may unintentionally screen out diverse candidates with atypical work experience who would otherwise be a fit for the role.
Consider obtaining (or demanding) an independent bias audit of all AI and algorithm-based tools. A recently passed New York City law requires employers to obtain a “bias audit” of all automated employment decision tools: an impartial evaluation by an independent auditor that tests, at minimum, the tool’s disparate impact on individuals based on their race, ethnicity, and sex. The law also contains strict notice and opt-out requirements and goes into effect on January 1, 2023. Anticipate that many states and other large cities will adopt similar requirements for bias audits.
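The New York City law does not prescribe a particular statistical test, but a common starting point for any disparate impact review is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a group whose selection rate falls below 80% of the highest group’s rate is generally treated as evidence of possible adverse impact. Here is a minimal sketch with invented counts:

```python
# Four-fifths rule selection-rate check; all counts are invented for illustration.
applicants = {"group_a": 200, "group_b": 150}   # applicants per demographic group
selected   = {"group_a": 60,  "group_b": 24}    # candidates the tool advanced

rates = {g: selected[g] / applicants[g] for g in applicants}
top_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top_rate
    flag = "POSSIBLE ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

Keep in mind that a clean statistical audit is not a safe harbor under the ADA: the agencies’ guidance focuses on whether any individual qualified candidate with a disability is screened out, which aggregate rates can miss.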
- Use Gamification Software Cautiously
Do you use “games” as part of the hiring process? Games may be used to evaluate personality traits and job-related skills while making the hiring process more engaging and fun for applicants.
The DOJ and EEOC, however, warned that employers must ensure these games evaluate only job-related skills and abilities rather than an applicant’s possible sensory or manual impairment or speaking skills.
For example, an applicant who is blind shouldn’t be automatically screened out because they can’t play a particular online game that measures memory. If they can perform the essential functions of the job, an alternative test should be administered.
“If a test or technology eliminates someone because of disability when that person can actually do the job, an employer must instead use an accessible test that measures the applicant’s job skills, not their disability, or make other adjustments to the hiring process so that a qualified person is not eliminated because of a disability,” the DOJ said.
- Expect More Guidance and Regulations
EEOC Chair Charlotte Burrows has said the agency “is committed to helping employers understand how to benefit from these new technologies while also complying with employment laws.” In October 2021, the agency launched an initiative on artificial intelligence and algorithmic fairness, addressing how the use of such technologies in the workplace may contribute to systemic employment discrimination. Since this is an EEOC priority, you can expect more guidance on this topic from the agency.
Additionally, states and cities are starting to regulate employer use of AI tools. As discussed above, New York City employers that use AI technology will face significant compliance obligations starting in 2023. Illinois already regulates video interviews. And California’s Fair Employment and Housing Council is also reviewing potential AI regulations. Further, many other states are considering laws that prohibit the use of discriminatory algorithms in areas like insurance and banking, and it would be easy to add employer-focused prohibitions to these laws.
Conclusion
We will continue to monitor developments related to algorithms and AI technology in the workplace, so make sure you are subscribed to Fisher Phillips’ Insight System to get the most up-to-date information directly to your inbox. If you have further questions, contact your Fisher Phillips attorney, the authors of this Insight, or any member of our Employee Leaves and Accommodations Practice Group.
Related People
- Myra K. Creighton, Partner
- Hannah Sweiss, Partner