Seyfarth Synopsis: On May 18, 2023, the Equal Employment Opportunity Commission (EEOC) released Technical Assistance on the use of advanced technologies in the workplace titled Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (“TA”). The EEOC did not unveil new policies in the TA but reiterated that its long-standing policies and practices continue to apply to the technologies (such as artificial intelligence and machine learning tools) that are grabbing the public’s attention today. The TA broadly defines the types of automated systems that may be subject to employment laws and poses seven questions and answers designed to help employers avoid discriminatory employment decisions, regardless of whether those decisions are made by humans or machines. The publication is an important read for employers.
EEOC Emphasizes That Long-Standing Title VII Principles Apply Even To New Technologies
In its new publication, the EEOC acknowledges that it is not announcing new policies. Rather, the publication “applies principles already established in the Title VII statutory provisions as well as previously issued guidance” to advanced technologies used in the workplace.
In line with that approach, the EEOC makes clear that it takes a broad view of the types of technology it has the authority to cover. Specifically, any software, algorithm, AI, or other automated tool that is used to make “selection decisions,” such as hiring, promotions, and terminations, must be used in a manner consistent with EEO statutes. Expanding existing legal theories to emerging issues like AI and other technology tools fits squarely within the EEOC’s strategic enforcement priorities. A detailed examination of the EEOC’s strategic goals can be found here.
Focus On Disparate Impact Discrimination
The publication focuses on theories of “disparate impact” discrimination under Title VII of the Civil Rights Act of 1964. Disparate impact (sometimes called “adverse impact”) refers to the use of a facially neutral test or selection procedure that has the effect of disproportionately excluding members of a protected group, if the tests or selection procedures are not “job related for the position in question and consistent with business necessity.” Disparate impact theories are powerful tools for the EEOC, as they necessarily implicate broad swaths of potential “victims” in a single enforcement action, maximizing EEOC’s “bang for the buck.”
EEOC Expects Employers To Assess Algorithmic Decision-Making Tools For Adverse Impact In Accordance With The Uniform Guidelines on Employee Selection Procedures
Since 1978, the EEOC has directed employers to follow its Uniform Guidelines on Employee Selection Procedures (Guidelines) to determine whether tests and selection procedures are lawful under Title VII. In its TA, the EEOC reiterated that the Guidelines continue to apply to new technologies “when they are used to make or inform decisions about whether to hire, promote, terminate, or take similar actions toward applicants or current employees.”
As a result, the EEOC’s expectation is that employers will assess whether a selection procedure has an adverse impact on a particular protected group. The assessment requires comparing the selection rate for individuals in a protected group to the rate for those not in the protected group. Significant differences between the two groups must be remedied, unless the employer can show that the selection procedure is job related and consistent with business necessity.
Employers Can Be Responsible For Tools Designed And Administered By Others, Including Vendors
In the TA, the EEOC made it clear that employers may be held liable for the tools created by third parties, including software vendors, and cannot simply wash their hands of responsibility for the outcomes that flow from using tools developed by others.
The EEOC suggests that employers must, at a minimum, ask vendors whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for those in protected groups. However, the EEOC also makes clear that an employer cannot simply rely on its vendors’ representations. If the vendor says its assessment does not result in different selection rates, but disparate impact nonetheless results, the employer may still be on the hook for any adverse results. As a best practice, employers should vet any tools provided by third parties before putting the tools into use, and also implement audit procedures designed to monitor the results of using those tools to guard against any adverse impact.
With regard to third-party developers of tools, EEOC Commissioners have separately suggested that vendors themselves could be targeted by the Commission if their input into employment decisions is enough to bring them into the orbit of EEO laws. While not specifically addressed in the TA, this is an important issue that we will continue to track.
The Four-Fifths (80%) Rule Alone Does Not Provide A Safe Harbor For Measuring Allowable Differences In Selection Rates
One clarification likely to grab the attention of employers (and vendors) is the EEOC’s position that the well-known “four-fifths rule” may not be used as a sole measure to assess bias in a selection tool. As described in the Guidelines, the four-fifths rule is one measure used to assess whether selection rates of two groups are “substantially” different. More specifically, if one group’s selection rate is less than 80% of that of the comparison group, the rates are considered substantially different.
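The four-fifths comparison described above is straightforward arithmetic. A minimal sketch, using hypothetical applicant and selection counts (not data from any actual case or audit), might look like:

```python
def selection_rate(selected, applicants):
    """Selection rate = number selected / number of applicants."""
    return selected / applicants

def four_fifths_check(rate_group, rate_comparison):
    """Apply the four-fifths rule of thumb: rates are considered
    'substantially different' if one group's selection rate is less
    than 80% of the comparison group's rate."""
    impact_ratio = rate_group / rate_comparison
    return impact_ratio, impact_ratio < 0.8

# Hypothetical numbers: 48 of 80 applicants selected in the
# comparison group, 12 of 40 in the protected group.
rate_comparison = selection_rate(48, 80)   # 0.60
rate_group = selection_rate(12, 40)        # 0.30
ratio, flagged = four_fifths_check(rate_group, rate_comparison)
print(f"impact ratio = {ratio:.2f}, flagged = {flagged}")
# impact ratio = 0.50, flagged = True
```

Here the protected group's rate is only 50% of the comparison group's rate, well under the 80% threshold, so the tool would be flagged under this rule of thumb.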
In the TA, the EEOC emphasized that the four-fifths rule is only a “general rule of thumb” that is “practical and easy-to-administer,” and courts have found that it is not a reasonable substitute for statistical tests. Curiously, the EEOC has historically championed the use of the four-fifths rule to assess significance in other contexts. (See prior EEOC Guidance here for an example.)
The EEOC’s position is important as many vendors and employers have used the four-fifths rule articulated in the EEOC’s 1978 Guidance as a threshold analysis in bias audits. The general thinking was that since the four-fifths rule was a well-established benchmark articulated by EEOC and long applied to other testing and assessment tools, it was a good threshold indicator of potential bias in the absence of other guidance. Indeed, the FAQs in the Guidelines provide that to assess adverse impact, “federal enforcement agencies normally will use only the 80% (4/5ths) rule of thumb, except where large numbers of selections are made.” (See FAQ Guidelines Q18).
In keeping with the clarifying comments in the EEOC’s TA, as well as issues related to the appropriate audit method depending on the sizes of the pools being analyzed, employers and vendors that are studying the effects of selection tools (or asking vendors about the tools they provide) may need to reassess their audit strategies. This may include adopting audit standards that evaluate both statistical significance and practical significance, the latter via the four-fifths test or similar methodologies.
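One common way to pair a statistical test with the practical four-fifths check is a two-proportion z-test on the selection rates. The sketch below uses hypothetical counts; it illustrates the general concept only, and real audit methodology should be designed with appropriate statistical and legal expertise.

```python
import math

def two_proportion_z(sel_a, n_a, sel_b, n_b):
    """Two-sided two-proportion z-test for a difference in selection
    rates, using the pooled normal approximation."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical audit counts: 48 of 80 selected vs. 12 of 40 selected
z, p = two_proportion_z(48, 80, 12, 40)
impact_ratio = (12 / 40) / (48 / 80)
print(f"z = {z:.2f}, p = {p:.4f}, impact ratio = {impact_ratio:.2f}")
```

With these hypothetical numbers, the difference is flagged under both standards: the impact ratio (0.50) falls below the four-fifths threshold, and the p-value falls below the conventional 0.05 significance level. With much smaller pools, the same impact ratio might not reach statistical significance, which is one reason the TA's emphasis on pool size matters.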
Employers Should Act Upon Discovering That An Algorithmic Decision-Making Tool Results In A Disparate Impact
The EEOC encourages employers to conduct self-analyses before implementing any new tool, and periodically thereafter to ensure that the tool is operating free of bias. If an employer discovers that a tool would have an adverse impact, the EEOC’s expectation is that the employer will either take steps to remedy the impact or select a different tool to use going forward.
EEOC’s Initiatives And Guidance Related To Automated Systems and AI
The TA is part of the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative, first announced in October 2021, which was designed to ensure that AI and other emerging tools used in hiring and employment decisions comply with the federal civil rights laws that the agency enforces. Since that time, the EEOC has continued to beat its drum on the topic:
- The EEOC has published guidance discussing how existing requirements under the Americans with Disabilities Act (ADA) may apply to the use of AI, software applications, and algorithms in employment-related decision-making, and offering practical tips to help employers maintain ADA compliance when using such tools.
- The EEOC published a proposed Strategic Enforcement Plan (SEP) that announced its intention to focus on recruitment and hiring practices and policies that might give rise to discrimination against members of protected groups, including where employers use AI to aid decision-making.
- The EEOC has held roundtable events and hearings to gather information and discuss the civil rights implications of the use of automated technology systems, including artificial intelligence, when hiring workers.
- The EEOC has joined other federal agencies to release a joint statement to emphasize that the use of advanced technologies, including artificial intelligence, must be consistent with federal laws.
Employers should expect the EEOC to continue to focus on this topic.
Implications For Employers
The EEOC’s Technical Assistance document does not impose any new rules on employers. Rather, it is yet another reminder to employers that existing law applies to new and advanced technologies, and employers are responsible for employment decisions that impact applicants and employees, whether made by people or with the assistance of machines. Employers should dust off the 1978 Guidelines and supporting materials and take a fresh look at them as they consider the various technologies that may be used to support employment decisions.
The publication also represents more foreshadowing of the EEOC’s enforcement priorities, showing once again that the EEOC will scrutinize the technological tools that employers increasingly rely on to make hiring and employment decisions. Employers are well served to track EEOC charges filed against them that include allegations concerning these technologies, as well as any charges that may prompt the EEOC to issue requests for information about these tools.