Three Takeaways from the EEOC’s Guidance on the Use of AI in Making Employment Decisions
Artificial intelligence (AI), algorithmic programs, and software tools continue to have far-reaching (and sometimes unexpected) effects, including in our professional lives. A wide variety of AI and algorithmic tools are now available to assist employers in recruitment and employee management.
However, these tools come with risks, including unintentional discriminatory effects. Title VII and the ADA prohibit employers from using selection procedures that disproportionately exclude employees or applicants based on race, color, national origin, religion, disability, or sex, unless the procedures are job-related and consistent with business necessity. Even if a procedure is job-related, employers must consider whether less discriminatory alternatives are available.
The Equal Employment Opportunity Commission (EEOC) recently issued a technical assistance document on assessing adverse impacts when using AI. This guidance builds on previous guidance from the EEOC on AI and the Americans with Disabilities Act.
Below are three key takeaways for employers who use, or are interested in using, AI or other algorithmic tools to assist in hiring, performance management, or other employment decisions:
- Employers cannot assume that AI tools are non-discriminatory.
While AI tools may appear neutral, they can produce unintentional discriminatory effects. For example, an AI tool may automatically screen out applicants with large gaps in employment. This could have a disparate impact on the basis of sex, as gaps in employment may be the result of pregnancy or leaving the workforce to raise children. The same tool may also screen out individuals with disabilities who have gaps in employment due to disability-related treatment.
The EEOC encourages employers to proactively analyze their employment practices, including software and AI tools, on an ongoing basis to ensure there are no such adverse impacts.
- Employers should consider available alternatives if AI tools cause an adverse impact.
Employers can assess whether a selection procedure has an adverse impact on a particular protected group by checking whether use of the procedure causes a selection rate for individuals in that group that is "substantially" less than the selection rate for individuals in another group. The general rule of thumb is that one selection rate is substantially different from another if the ratio between them is less than four-fifths (80%); however, courts often look to statistical significance when assessing adverse impact.
If the selection procedure has an adverse impact, the employer should consider whether the use of the tool is job-related and consistent with business necessity and whether there are alternatives that may have less of a disparate impact.
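The four-fifths comparison described above is simple arithmetic. The sketch below illustrates it with entirely hypothetical numbers; it is not an EEOC-provided formula or tool, and (as the guidance notes) the rule of thumb is not a substitute for a statistical-significance analysis.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applicants

def four_fifths_check(rate_a: float, rate_b: float) -> tuple[float, bool]:
    """Compare two groups' selection rates.

    Returns the ratio of the lower rate to the higher rate and whether
    that ratio falls below the four-fifths (80%) threshold, which would
    flag the procedure for closer review.
    """
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio < 0.8

# Hypothetical data: Group A, 48 of 80 applicants selected;
# Group B, 12 of 40 applicants selected.
rate_a = selection_rate(48, 80)  # 0.60
rate_b = selection_rate(12, 40)  # 0.30
ratio, flagged = four_fifths_check(rate_a, rate_b)
print(f"ratio = {ratio:.2f}, potential adverse impact: {flagged}")
# Here the ratio is 0.30 / 0.60 = 0.50, well below 0.80, so the tool
# would warrant further review (e.g., a statistical analysis and a
# search for less discriminatory alternatives).
```

A ratio at or above 0.8 does not end the inquiry, and a ratio below it does not automatically establish liability; it simply identifies procedures that merit the job-relatedness and alternatives analysis described above.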
- Employers can be responsible for AI tools designed or administered by a vendor.
The guidance makes clear that an employer can be responsible for adverse impacts in its selection procedures even if the tool was developed by an outside vendor or administered by an agent acting on the employer's behalf.
The EEOC suggests that employers ask the vendor, at a minimum, whether it has assessed if the tool causes a substantially lower selection rate for individuals within a protected group. Additionally, employers can build language into vendor contracts requiring the use of non-discriminatory selection tools.
We continue to monitor developments in the rapidly changing world of AI and will provide updates as necessary.