Companies and their service providers are grappling with how to comply with New York City’s mandate to audit artificial intelligence systems used for hiring.
A New York City law that takes effect in January will require companies to conduct audits assessing bias, including along racial and gender lines, in the AI systems they use in hiring. Under the law, the company doing the hiring is ultimately responsible for violations and can be fined.
But the requirement has given rise to some compliance challenges. Unlike familiar financial audits, refined through decades of accounting experience, the AI audit process is new and without clearly defined guidelines.
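For a sense of what such an audit measures: the city's published rules center on "impact ratios," each group's selection rate divided by the selection rate of the most-selected group. The sketch below is purely illustrative, not the city's official methodology, and the group names and figures are hypothetical.

```python
# Illustrative sketch of an impact-ratio calculation, one metric used in
# hiring bias audits. Not the city's official methodology; the groups and
# numbers below are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (number selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical applicant data by demographic group.
data = {"group_a": (50, 100), "group_b": (30, 100)}
ratios = impact_ratios(data)
print(ratios)  # group_a: 1.0, group_b: 0.6
```

A ratio well below 1.0 for a group, as with the hypothetical "group_b" here, is the kind of disparity an audit would flag for further review; what threshold triggers action, and what happens next, is exactly the guidance companies say is missing.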
“There’s a big concern, which is that it’s not clear exactly what constitutes an AI audit,” said Andrew Burt, managing partner at AI-focused law firm BNH. “If you’re an organization that uses some type of these tools … it can be quite confusing.”
The city law will potentially affect a large number of employers. New York City had just under 200,000 businesses in 2021, according to the New York State Department of Labor.
A spokesman for New York City said its Department of Consumer and Worker Protection has been working on regulations to implement the law, but he did not have a timeline for when they might be published. He did not respond to inquiries about whether the city had a response to complaints about the alleged lack of guidance.
Beyond the immediate impact in New York City, employers are confident that audit requirements will soon spread to many more jurisdictions, said Kevin White, co-chair of the labor and employment team at law firm Hunton Andrews Kurth LLP.
AI has steadily crept into the HR departments of many companies. Nearly one in four uses automation, AI or both to support HR activities, according to research published by the Society for Human Resource Management earlier this year. The figure rises to 42% among companies with more than 5,000 employees.
Other studies have estimated even higher levels of use among businesses.
AI technology can help companies hire and deploy candidates faster amid a “war for talent,” said Emily Dickens, SHRM’s director of government affairs.
Boosters for the technology have argued that, used well, it could also potentially stop unfair biases from seeping into hiring decisions. A person can, for example, unconsciously side with a graduate who went to the same college or root for a certain team, whereas computers don’t have alma maters or favorite sports teams.
A human mind with its hidden motivations is “the ultimate black box,” unlike an algorithm whose responses to various inputs can be studied, said Lindsey Zuloaga, chief data scientist at HireVue Inc. HireVue, which counts Unilever PLC and Kraft Heinz Co. among its customers, offers software that can automate interviews.
But if companies aren’t careful, AI “can be biased at a very large scale, which is scary,” Ms. Zuloaga said, adding that she supports the scrutiny AI systems are starting to receive.
HireVue’s systems are regularly audited for bias, and the company wants to make sure customers feel comfortable with its tools, she said.
An audit of HireVue’s algorithms published in 2020, for example, found that minority candidates tended to be more likely to give short answers to interview questions, saying things like “I don’t know,” which resulted in their answers being flagged for human review. HireVue changed how its software handles short answers to fix the problem.
Companies are concerned about the “opacity and lack of standardization” around what is expected in AI auditing, said the U.S. Chamber of Commerce, which lobbies on behalf of companies.
Even more troubling is the possible impact on small businesses, said Jordan Crenshaw, vice president of the Chamber’s Technology Engagement Center.
Many companies have had to scramble to determine for themselves the extent to which they use AI systems in the hiring process, Hunton’s Mr. White said. Companies have not taken a uniform approach to which executive function “owns” AI: in some, human resources drives the process, while in others it is the chief privacy officer or the information-technology department, he said.
“They’re realizing pretty quickly that they need to put together a committee across the company to figure out where all the AI can sit,” he said.
Because New York has not offered clear guidelines, he expects a variety of approaches to the audits. But compliance difficulties aren’t driving companies back toward the processes of a pre-AI era, he said.
“It’s too useful to put back on the shelf,” he said.
Some critics have argued that the New York law does not go far enough. The Surveillance Technology Oversight Project, the New York Civil Liberties Union and other organizations noted the lack of standards for bias audits but pushed for tougher penalties in a letter sent before the law’s passage. Among other proposals, they argued that companies selling tools deemed biased should face penalties themselves.
Regulators aren’t necessarily looking for perfection in the early days.
“The good faith effort is really what regulators are looking for,” said Liz Grennan, co-head of digital trust at McKinsey & Co. “Honestly, regulators will learn as they go.”
Ms. Grennan said some companies are not waiting until the January effective date to act.
Companies are motivated as much by reputational risk as by the fear of a regulator intervening. For large companies with high-profile brands, concerns about social impact and environmental, social and governance issues may outweigh worries about being “beaten by a regulator,” said Anthony Habayeb, chief executive of AI-management software firm Monitaur Inc.
“If I’m a bigger company … I want to be able to demonstrate that I know AI can have problems,” Mr. Habayeb said. “And instead of waiting for someone to tell me what to do … I built controls around these applications because I know, like with any software, things can go wrong.”
Write to Richard Vanderford at email@example.com
Copyright ©2022 Dow Jones & Company, Inc. All rights reserved.