Employers may be compelled to rethink how they use AI tools to make workplace decisions after new EEOC guidance clarified that they can be held liable for biased effects of the software’s use.
The Equal Employment Opportunity Commission technical assistance document, released last week, addressed how artificial intelligence can violate Title VII of the 1964 Civil Rights Act’s protections against bias for job applicants and employees. It also tackled the role employers have in preventing this type of discrimination.
Companies are adopting AI software for hiring at a faster rate than before, with nearly one in four organizations using these tools to support HR-related activities in 2022, according to a study from the Society for Human Resource Management. But legal and AI experts say they now expect companies to reevaluate their use of AI for employment decisions and perhaps ditch the technology entirely until they have a body of case law to lean on.
Title VII prohibits discrimination in employment, including tests or selection procedures that disproportionately exclude people based on race, color, national origin, sex, or religion. The new guidance states that employers may be held responsible for the disparate impact of any algorithmic decision-making tool used in hiring, promotion, or termination decisions, even if the tool was designed or administered by a vendor.
Joseph Schmitt, shareholder at Nilan Johnson Lewis, said the guidelines stretch further than many employers may have anticipated, sweeping in any software that advertises jobs or evaluates applications.
“Every time an employer puts an advertisement on any type of social media platform, there is a risk because these platforms use algorithmic decision making to determine who is going to see that,” he said. “So this guidance is extremely broad and I think a lot of employers and third party vendors are going to be swept within the scope of the regulation, potentially without realizing it.”
The EEOC has been studying AI issues in employment through an agency initiative launched in 2021. The commission issued guidance last year alongside the Department of Justice on how to prevent violations of the Americans with Disabilities Act when using AI tools and signaled its focus on AI in its new strategic enforcement plan released in January. Senate Democrats also have indicated their interest in regulating the issue.
Litigation and enforcement from the EEOC around AI’s role in employment discrimination has been minimal. The EEOC sued tutoring company iTutorGroup in May 2022 under federal age discrimination law. HR company Workday was hit with a class action lawsuit earlier this year over claims that its software disqualifies applicants who are Black, disabled, or over the age of 40 at a disproportionate rate.
Some employers may want to sideline the use of AI while they wait to see how the EEOC and other federal agencies enforce new guidance and regulations, said Gerald Maatman, partner at Duane Morris LLP.
“This is going to be a new frontier of law,” he said of the EEOC’s potential litigation. “Some employers will have to think hard about this, because they don’t want to be the test case. Other employers, maybe some of the bigger ones, will decide to keep using it even with these risks from the EEOC because AI saves them so much money.”
Companies also have an obligation to conduct ongoing impact analyses of their software, even after it has been deployed, according to the guidance. If an employer discovers that a program has a disproportionate effect on protected groups, it then has a duty to find alternatives that don’t cause such an impact.
The best way for employers to protect themselves will be to have people on their HR teams with technical knowledge of AI systems, said Adam Klein, managing partner at Outten & Golden LLP.
“There needs to be a certain level of technical competency and understanding of how this stuff works,” Klein said. “Someone at the company needs to understand what the heck is going on and how to make sure it’s operating in a nondiscriminatory way.”
The EEOC guidance also said employers can’t guarantee Title VII compliance by merely satisfying the “four-fifths rule,” a 1970s standard that flags a hiring test when one group’s selection rate falls below 80% of the rate for the most-selected group. The rule isn’t always an appropriate way to determine whether large groups are being significantly impacted by employment practices, the commission said.
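The four-fifths comparison itself is simple arithmetic. A minimal sketch, using hypothetical applicant counts that are purely illustrative and not drawn from any case in this article:

```python
# Hypothetical selection data -- illustrative numbers only.
men_selected, men_applied = 48, 80
women_selected, women_applied = 12, 40

rate_men = men_selected / men_applied        # 0.60
rate_women = women_selected / women_applied  # 0.30

# Four-fifths rule: compare the lower selection rate to the higher one.
impact_ratio = min(rate_men, rate_women) / max(rate_men, rate_women)

# A ratio below 0.8 flags potential adverse impact under the 1970s standard.
flagged = impact_ratio < 0.8  # True here, since 0.30 / 0.60 = 0.50
```

As the guidance notes, passing this screen alone doesn’t establish compliance; it is only a rough first-pass heuristic.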
Schmitt said the four-fifths rule was developed at a time when many employers didn’t have access to computers capable of running statistical models, but as technology has advanced, there are better ways to determine if a procedure is disproportionately impacting protected groups.
“The EEOC is absolutely right, employers should be applying more sophisticated statistical tools to calculate whether something is a matter of chance or a matter of discrimination,” he said.
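One such “more sophisticated statistical tool” is a significance test on the difference in selection rates, which asks whether a gap could plausibly be chance. A hedged sketch using a standard two-proportion z-test (the helper name and the applicant counts are hypothetical, not from the guidance):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two selection rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled rate under the null hypothesis that both groups are selected equally.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal tail, via the
    # complementary error function.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical example: 48 of 80 vs. 12 of 40 applicants selected.
z, p = two_proportion_z(48, 80, 12, 40)
```

A small p-value (conventionally below 0.05) suggests the disparity is unlikely to be chance, which is the distinction Schmitt describes; with very large applicant pools, even tiny rate differences become statistically significant, which is one reason the fixed 80% threshold can mislead.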
Frida Polli, chief data science officer at HR company Harver, said the guidance leaves companies in a tricky spot because the commission didn’t provide another benchmark to use.
“We’re being pushed into a realm where specific technical standards would be really helpful,” she said. “If I was a car manufacturer and I couldn’t get a definition of what the right emission level was, it would be pretty problematic because I’m trying to build a car to certain specifications.”