Vendors, auditors, employers, and regulators are each scrambling to establish standards for bias-proofing the AI tools that companies increasingly depend on to make employment decisions.
A February 2022 survey from the Society for Human Resource Management showed that 79% of employers use AI for recruitment and hiring, and now increased compliance requirements are on the horizon in places like New York City and the European Union.
“Right now, it’s game time, so to speak, with these laws coming along,” said Shea Brown, CEO of AI consulting firm BABL AI Inc.
Regulations seeking to prevent bias in AI generally call for the technology to be audited in some form. But the AI auditing industry is nascent, with industry standards and best practices still developing.
“There is no standard for AI audits currently,” said Mona Sloane, a senior research scientist at the New York University Center for Responsible AI. “There is a race going on for setting the standard by way of doing it rather than waiting for a government agency to say, ‘This is what an audit of hiring AI should look like.’”
Though it has limited authority to increase requirements on employers or vendors, the US Equal Employment Opportunity Commission has indicated that it will turn its enforcement focus to AI-related bias.
Republican EEOC Commissioners Keith Sonderling and Andrea Lucas noted during a Jan. 31 hearing on employer use of AI tools that no vendors were asked to attend the event, which in part discussed auditing strategies for AI technologies.
The exchange highlights a disconnect between the industry that builds and operates algorithmic tools for hiring and government regulators, said Chad Sowash, a former human resources consultant who now hosts a popular HR and recruitment podcast.
“I’m not convinced that the EEOC is prepared for this quickly evolving and fluid topic,” he said.
Conducting the Audits
Tech watchdogs have warned for years that AI has the potential to perpetuate bias. The data used to build a particular tool can reflect existing disparities in society unless those disparities are mitigated, according to Jiahao Chen, owner of Responsible Artificial Intelligence LLC, an AI auditing firm.
“There’s a complex historical web of how women and minorities have been excluded from employment,” Chen said. “The history of that is still present in the data that we have.”
Meanwhile, lawsuits alleging bias in workplace AI systems are emerging, including the EEOC’s first such suit, filed in May against English-language tutoring services company iTutorGroup for allegedly programming its online recruitment software to automatically reject older applicants.
Last month a man accused Workday Inc. of programming its artificial intelligence systems and screening tools to disproportionately disqualify applicants who are Black, disabled, or over the age of 40. The case, filed in the US District Court for the Northern District of California, is rare in that it’s an AI bias lawsuit filed against a hiring platform rather than an employer.
New York City will soon become the first US jurisdiction to add notice and audit requirements for automated employment decision-making tools, which include AI. A more comprehensive AI law has been proposed in the European Union.
There are generally two audit frameworks: outcome audits and process audits. An outcome audit, as required by the New York City law, measures bias in final hiring decisions. A process audit, as contemplated by the proposed EU AI Act, examines how the algorithm arrives at its suggestions.
Employers and vendors often refer to the EEOC’s uniform guidelines on employee selection procedures, published in 1978. Among other things, those guidelines establish a “four-fifths rule,” which flags potential adverse impact when a hiring test’s selection rate for a protected group is less than 80% of the rate for the group with the highest selection rate.
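In practice, the rule reduces to a simple ratio check. The sketch below (Python, with invented counts for illustration only) compares each group’s selection rate against the highest-rate group and flags any ratio below 0.8:

```python
# A minimal sketch of the EEOC "four-fifths rule" check.
# All counts here are hypothetical, for illustration only.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return hired / applicants

def four_fifths_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.
    Under the rule of thumb, ratios below 0.8 suggest adverse impact."""
    benchmark = max(rates.values())
    return {group: rate / benchmark for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(hired=48, applicants=100),  # 0.48
    "group_b": selection_rate(hired=30, applicants=100),  # 0.30
}
for group, ratio in four_fifths_ratios(rates).items():
    flag = "below four-fifths" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# group_b's ratio is 0.30 / 0.48 = 0.62, under the 0.8 threshold
```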
But the bias an audit should detect in an AI tool lies in the suggestions it makes to a hiring manager, since the tool itself doesn’t get the final say, according to Brown at BABL AI.
“The actual disparate impact comes when the hiring manager actually makes the decision, but we want to worry one level up at what kind of impact does the tool have in influencing how these people make decisions,” he said.
After some initial confusion, New York City has indicated that the onus will be on employers to provide proof of an audit. But multiple employers may use the same audit so long as it includes their data. Some vendors, like Harver and HireVue, decided to take on the task of auditing for compliance with that law.
That flexibility was a welcome development for some vendors and auditors, who say it should be a shared responsibility.
“If I change something in my algorithm it’s going to impact all my clients,” said Frida Polli, chief data scientist at HR tech company Harver. “I think this vendor-level audit makes a lot of sense.”
EEOC’s ‘Rule of Thumb’
EEOC Chair Charlotte Burrows said in the January hearing that the four-fifths rule is simply a “rule of thumb,” and not a guarantee that the tool isn’t biased. Courts have generally relied on more sophisticated statistical analysis to determine disparate impact as well.
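Those analyses typically ask not just whether a gap in selection rates exists, but whether it is statistically significant. Below is a minimal sketch of one such test, a two-proportion z-test, on invented counts; courts have often treated a gap of roughly two to three standard deviations as significant:

```python
# Hedged sketch: two-proportion z-test for a gap in selection rates.
# All counts are made up for illustration.
from math import sqrt

def two_proportion_z(hired_a: int, n_a: int, hired_b: int, n_b: int) -> float:
    """Z-statistic for the difference between two groups' selection rates."""
    p_a, p_b = hired_a / n_a, hired_b / n_b
    pooled = (hired_a + hired_b) / (n_a + n_b)  # pooled selection rate
    std_err = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / std_err

# Same rates as in the earlier sketch, but with larger samples
z = two_proportion_z(hired_a=480, n_a=1000, hired_b=300, n_b=1000)
print(f"z = {z:.2f}")  # roughly 8 standard deviations
```

On a small sample, the same rates could fail the four-fifths rule while remaining statistically inconclusive, which is one reason courts look past the rule of thumb.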
The EEOC and the Department of Justice issued guidance in May indicating that employers must inspect artificial intelligence tools for disability bias and should have plans to provide reasonable accommodations. For example, if an assessment involves submitting timed typed responses, there should be an alternative way to complete it, such as recording verbal responses.
But auditors and vendors say testing how people with disabilities fare on a given tool can be difficult, as the Americans with Disabilities Act bars employers from asking a person to disclose a disability.
Suresh Venkatasubramanian, a professor at Brown University who coauthored the Biden administration’s Blueprint for an AI Bill of Rights, told the EEOC at the January hearing that vendors take their auditing cues directly from regulators. The federal government hasn’t issued guidance on preventing workplace AI bias based on characteristics other than disability status, such as age or gender.
“In my experience vendors look to guidelines like those provided by the EEOC and decide what to test and test only those things,” he said.
Transparency Efforts
Some vendors commissioned or conducted their own audits before any obligations were in place. As public scrutiny mounts, vendors and auditors say disclosing their methods has helped quell concerns.
But some have been met with skepticism as well. HireVue has been criticized for allegedly “audit-washing,” or mischaracterizing, the results of a 2021 audit of its products conducted by O’Neil Risk Consulting & Algorithmic Auditing.
The company was also criticized for its facial analysis tools used in hiring, which it discontinued in January 2021, shortly after the Electronic Privacy Information Center filed a complaint with the Federal Trade Commission arguing that the technology was biased and invaded privacy.
Lindsey Zuloaga, chief data scientist at HireVue, said transparency is the answer. Though not required to, HireVue and other major HR technology companies post ethics statements and other documents on their websites explaining how their systems work.
“We’ve been scrutinized quite a bit because we did something new and different and there were a lot of assumptions around how our technology worked,” Zuloaga said. “In my time at HireVue, I’ve seen us take a lot of steps toward realizing if we just open up and talk about exactly what we do, it’s really helped us dispel a lot of myths or concerns that people have had.”