Busting Companies for AI Bias in Hiring Is Tough Task for EEOC

Aug. 14, 2023, 9:15 AM UTC

The EEOC’s high-profile effort to target biased hiring decisions made by companies’ artificial intelligence software is poised to face big challenges, as the opaque nature of these tools often prevents applicants from knowing they’re the victims of discrimination at all.

The US Equal Employment Opportunity Commission settled its first-ever AI-based hiring discrimination case on Thursday. A group of job seekers received $365,000 in the deal, resolving the commission’s suit against iTutorGroup Inc., a company that allegedly programmed its recruitment software to automatically reject older applicants.

Employers can use AI tools to screen applicants and their resumes for desired qualities and experience, but those screens can intentionally or unintentionally discriminate by rejecting candidates based on race, sex, age, or other characteristics protected under federal law. Many of these tools, which are becoming increasingly popular in HR departments, are trained on historical data that carries built-in biases.
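As a purely hypothetical illustration of how such a screen can encode age bias even when it never asks for an applicant's age, consider a rule that filters on graduation year. The short sketch below is invented for this story's purposes; the field names and cutoff are not drawn from any real product.

```python
from datetime import date

# Hypothetical resume-screening rule, invented purely for illustration.
# Filtering on graduation year acts as a proxy for age, so the rule can
# disproportionately reject older applicants even though "age" never appears.
def passes_screen(resume: dict) -> bool:
    grad_year = resume.get("graduation_year", 0)
    return grad_year >= date.today().year - 15  # arbitrary cutoff for this sketch

applicants = [
    {"name": "Applicant 1", "graduation_year": 2015},
    {"name": "Applicant 2", "graduation_year": 1995},
]
for a in applicants:
    print(a["name"], "advances" if passes_screen(a) else "rejected")
```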

The EEOC has repeatedly emphasized its focus on addressing AI discrimination, launching a major initiative and issuing guidance. Chair Charlotte Burrows earlier this month called the technology the “new civil rights frontier.”

The commission also signaled its interest in tackling AI-based hiring bias through its draft strategic enforcement plan in January.

But employment attorneys say the agency is still fighting an uphill battle, as it typically relies on job seekers and employees to flag potential bias by filing charges with the commission. The secrecy inherent in AI tools makes that difficult for most job applicants.

“The odds are not in their favor,” Dan Kuang, a researcher and fellow at Biddle Consulting Group, said of the EEOC’s position. “They’re not set up to be more effective.”

EEOC’s Work

Despite the EEOC’s emphasis on addressing AI bias, the iTutorGroup settlement remains the commission’s only known litigation on the issue so far.

AI in HR technology is relatively new, and the EEOC may only be in the early stages of receiving related charges, but the challenges ahead are already apparent.

“There’s a real black box situation where if you were turned down for a job by an algorithm or you don’t receive information about a job opportunity because of your age or your gender, you don’t know you missed out on it because you didn’t get it,” said Peter Romer-Friedman, founder of Peter Romer-Friedman Law PLLC.

In the case of iTutorGroup, for instance, an applicant only realized bias might be at play when they submitted a second application with a more recent birth date listed, according to court documents. On that second attempt, the applicant was automatically offered an interview instead of being rejected, the filings said.

Even if a lawsuit alleging AI bias reaches the discovery stage, the EEOC and plaintiffs may still have a hard time gaining access to audits of the software that could reveal bias. If an employer conducts an audit at the direction of counsel, it can be considered privileged, according to management-side attorney Adam Forman of Epstein Becker Green P.C.

The EEOC has been studying AI issues in employment through an initiative launched in 2021, and the commission issued guidance last year alongside the Department of Justice on how to prevent violations of the Americans with Disabilities Act when using AI tools. In May of this year, the commission released a technical assistance document addressing AI and Title VII of the 1964 Civil Rights Act.

The EEOC has also requested that all staff attend a training to identify discrimination caused by automated systems.

“We continue to try to raise awareness among workers through our technical assistance documents and public outreach,” EEOC Director of Communications Victor Chen told Bloomberg Law. “If an employee or applicant suspects that they may have been discriminated against based on AI, they can reach out to EEOC who can help the worker decide the next steps.”

Chen also said the agency has an internal working group focused on AI issues.

Still, the EEOC doesn’t have “clear statutory authority to just require companies to disclose their practices” or detailed information about who they hire, said Matt Scherer, senior policy counsel for workers’ rights and technology for the Center for Democracy and Technology.

Absent that disclosure, he said the “EEOC is kind of in a rock and a hard place when it comes to efforts to regulate these tools.”

The commission does collect EEO-1 demographic data from companies, but those forms provide only limited information on current employees and say nothing about hiring numbers or the use of AI.

Ways to Help

One tool the EEOC does have at its disposal is the ability for each of the five commissioners to file individual “commissioner charges” to confidentially investigate possible discrimination at targeted employers.

There were 29 commissioner charges filed in 2022, up from just three in the previous year. That still pales in comparison to the number of charges initiated by workers and job seekers.

“So what that means is that the EEOC is spending most of their time on matters that are brought to them,” said Eric Reicin, president and CEO of BBB National Programs.

To better equip the agency to tackle AI discrimination, “the most efficient way would certainly be the passage of any law at the federal level” requiring disclosure, which would give the EEOC and candidates enough public information to determine that a company is using a tool in a potentially discriminatory way, Scherer said.

Catherine Massey, an attorney at Charles River Associates, said a push from state legislatures for companies to be open about their use of AI in hiring is an important step in making it easier for applicants and the EEOC to identify instances of bias.

Washington, D.C., California, and New Jersey have all recently proposed legislation to police automated hiring tools for discrimination.

A New York City law implemented last month requires that job candidates living in the city be notified if a prospective employer uses AI in its hiring process.

New York City employers are also barred from using automated tools to screen candidates unless the software has undergone an independent bias audit, and they must publish a summary of their most recent audit on their websites.
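Those audits center on comparing selection rates across demographic categories. As a rough, hypothetical sketch of the kind of arithmetic involved (the groups and counts below are invented, not any employer’s real data), an impact ratio divides each group’s selection rate by the rate of the most-selected group:

```python
# Illustrative sketch only: the group labels and counts are hypothetical.
# A bias audit of this kind compares selection rates across groups; the
# "impact ratio" is each group's rate divided by the highest group's rate.

selected = {"Group A": 120, "Group B": 45}      # hypothetical candidates advanced by the tool
applicants = {"Group A": 300, "Group B": 200}   # hypothetical total applicants per group

rates = {g: selected[g] / applicants[g] for g in applicants}
top_rate = max(rates.values())

for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / top_rate:.2f}")
```

Under the EEOC’s longstanding four-fifths guideline, a ratio below roughly 0.8 can be a red flag for adverse impact, though the New York law requires publishing the ratios rather than meeting a particular threshold.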

“I think until we see laws like that more prevalent across the US, it will be difficult for people to know whether or not AI was used at all,” Massey said.

To contact the reporters on this story: Riddhi Setty in Washington at rsetty@bloombergindustry.com; Annelise Gilbert at agilbert1@bloombergindustry.com

To contact the editors responsible for this story: Rebekah Mintzer at rmintzer@bloombergindustry.com; Jay-Anne B. Casuga at jcasuga@bloomberglaw.com
