Artificial Intelligence

How artificial intelligence can make hiring bias worse


At first glance, artificial intelligence and job hiring seem like a match made in employment equity heaven.

There’s a compelling argument for AI’s ability to alleviate hiring discrimination: Algorithms can focus on skills and exclude identifiers that might trigger unconscious bias, such as name, gender, age and education. AI proponents say this type of blind evaluation would promote workplace diversity.

AI companies certainly make this case.

HireVue, the automated interviewing platform, boasts “fair and transparent hiring” in its offerings of automated text recruiting and AI analysis of video interviews. The company says humans are inconsistent in assessing candidates, but “machines, however, are consistent by design,” which, it says, means everyone is treated equally.

Paradox offers automated chat-driven applications as well as scheduling and tracking for applicants. The company pledges to only use technology that is “designed to exclude bias and limit scalability of existing biases in talent acquisition processes.”

Beamery recently launched TalentGPT, “the world’s first generative AI for HR technology,” and claims its AI is “bias-free.”

All three of these companies count some of the biggest name-brand corporations in the world as clients: HireVue works with General Mills, Kraft Heinz, Unilever, Mercedes-Benz and St. Jude Children’s Research Hospital; Paradox has Amazon, CVS, General Motors, Lowe’s, McDonald’s, Nestle and Unilever on its roster; and Beamery partners with Johnson & Johnson, McKinsey & Co., PNC, Uber, Verizon and Wells Fargo.

Read: Jobs in artificial intelligence: Workers are looking to ride the wave, and employers are hiring

AI brands and supporters tend to emphasize how the speed and efficiency of AI technology can aid the fairness of hiring decisions. An October 2019 article in the Harvard Business Review asserts that AI can assess far more candidates than its human counterparts — and the faster an AI program can move, the more diverse the candidate pool can become. The author — Frida Polli, CEO and co-founder of Pymetrics, a soft-skills AI platform used for hiring that was acquired in 2022 by the hiring platform Harver — also argues that AI can eliminate unconscious human bias and that any inherent flaws in AI recruiting tools can be addressed through design specifications.

These claims conjure up the rosiest of images: human resource departments and their robot buddies solving discrimination in workplace hiring. It seems plausible, in theory, that AI could root out unconscious bias, but a growing body of research shows the opposite may be more likely.

The problem is that AI can be so efficient that it overlooks nontraditional candidates — those whose attributes aren’t reflected in past hiring data. A resume falls by the wayside before a human who might see value in skills gained in another field can evaluate it. A facial expression in an interview is scored by AI, and the candidate is blackballed.

“There are two camps when it comes to AI as a selection tool,” says Alexander Alonso, chief knowledge officer at the Society for Human Resource Management (SHRM). “The first is that it is going to be less biased. But knowing full well that the algorithm that’s being used to make selection decisions will eventually learn and continue to learn, then the issue that will arise is eventually there will be biases based upon the decisions that you validate as an organization.”

In other words, AI algorithms can be unbiased only if their human counterparts consistently are, too.
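To make that learning loop concrete, here is a minimal sketch in Python using synthetic data. It is not any vendor’s actual system, and every number in it is invented: a model trained on past hiring decisions that penalized one group tends to reproduce that penalty in its own recommendations, even though the model itself is perfectly “consistent.”

```python
# Minimal, hypothetical sketch with synthetic data -- not any vendor's system.
# A model trained on biased past decisions reproduces the bias it learned.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(0, 1, n)    # an applicant's job-relevant score
group = rng.integers(0, 2, n)  # a binary demographic label, 0 or 1

# Past human decisions: driven by skill, with an invented penalty on group 1
# standing in for historical bias in the training labels.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
preds = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted selection rate {preds[group == g].mean():.2%}")
# The model's recommendations mirror the historical gap between the groups.
```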

Read: Help wanted: No over-50s need apply

How AI is used in hiring

Nearly four-fifths (79%) of employers that use AI to support HR activities say they use it for recruitment and hiring, according to a February 2022 survey from SHRM.

Companies’ use of AI didn’t come out of nowhere: For example, automated applicant tracking systems have been used in hiring for decades. That means if you’ve applied for a job, your resume and cover letter were likely scanned by an automated system. You probably heard from a chatbot at some point in the process. Your interview might have been automatically scheduled and later even assessed by AI.

Employers use a bevy of automated, algorithmic and artificial intelligence screening and decision-making tools in the hiring process. AI is a broad term, but in the context of hiring, typical AI systems include “machine learning, computer vision, natural language processing and understanding, intelligent decision support systems and autonomous systems,” according to the U.S. Equal Employment Opportunity Commission. In practice, the EEOC says this is how those systems might be used:

  • Resume and cover letter scanners that hunt for targeted keywords (a simple sketch of this approach appears after this list).

  • Conversational virtual assistants or chatbots that question candidates about qualifications and can screen out those who don’t meet requirements input by the employer.

  • Video interviewing software that evaluates candidates’ facial expressions and speech patterns.

  • Candidate testing software that scores applicants on personality, aptitude, skills metrics and even measures of culture fit.
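As a rough illustration of the first item above, a keyword scanner can be as blunt as counting required terms, which is one way candidates with equivalent but differently worded experience get screened out. This is a simplified sketch in Python; the keywords and threshold are hypothetical, not any vendor’s actual logic.

```python
# Hypothetical keyword screener -- a simplified sketch, not a real product.
import re

REQUIRED_KEYWORDS = {"python", "sql", "etl", "airflow"}  # invented job requirements
MIN_MATCHES = 3                                          # invented pass threshold

def screen_resume(text: str) -> bool:
    """Return True if the resume mentions enough of the required keywords."""
    words = set(re.findall(r"[a-z+#]+", text.lower()))
    return len(REQUIRED_KEYWORDS & words) >= MIN_MATCHES

# A career changer describing equivalent skills in different words is rejected:
resume = "Built data pipelines in R and automated reporting for a nonprofit."
print(screen_resume(resume))  # False: the skills are there, the keywords aren't
```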

Also see: Who’s most likely to lose their job to AI?

How AI could perpetuate workplace bias

AI has the potential to make workers more productive and facilitate innovation, but it also has the capacity to exacerbate inequality, according to a December 2022 study by the White House’s Council of Economic Advisers.

The CEA writes that among the firms spoken to for the report, “One of the primary concerns raised by nearly everyone interviewed is that greater adoption of AI driven algorithms could potentially introduce bias across nearly every stage of the hiring process.”

An October 2022 study by the University of Cambridge in the U.K. found that AI companies’ claims of objective, meritocratic assessment don’t hold up. It posits that anti-bias measures that strip out gender and race are ineffective because the notion of the ideal employee has historically been shaped by gender and race. “It overlooks the fact that historically the archetypal candidate has been perceived to be white and/or male and European,” according to the report.
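The Cambridge argument can be made concrete with another small synthetic sketch (again hypothetical, with invented data): removing the protected attribute from a model’s inputs doesn’t remove the bias if a correlated proxy feature remains in the data.

```python
# Hypothetical sketch: a "blind" model still learns bias through a proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# A proxy feature (think zip code or hobby) that tracks group 90% of the time.
proxy = np.where(rng.random(n) < 0.9, group, 1 - group)

# Biased historical labels, as before: an invented penalty on group 1.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

# The group column is excluded from training, but the proxy is not.
X_blind = np.column_stack([skill, proxy])
preds = LogisticRegression().fit(X_blind, hired).predict(X_blind)

for g in (0, 1):
    print(f"group {g}: predicted selection rate {preds[group == g].mean():.2%}")
# The gap persists: the model recovered the protected attribute via the proxy.
```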

One of the Cambridge study’s key points is that hiring technologies are not necessarily, by nature, racist, but that doesn’t make them neutral, either.

“These models were trained on data produced by humans, right? So like all of the things that make humans human — the good and the less good — those things are going to be in that data,” says Trey Causey, head of AI ethics at the job search site Indeed. “We need to think about what happens when we let AI make those decisions independently. There’s all kinds of biases coded in that the data might have.”

There have been several instances in which AI demonstrated bias when put into practice:

  • In October 2018, Amazon removed an automated candidate screening system that rated potential hires after finding that it filtered out women.

  • A December 2018 University of Maryland study found two facial recognition services — Face++ and Microsoft’s Face API — interpreted Black applicants as having more negative emotions than their white counterparts.

  • In May 2022, the EEOC sued an English-language tutoring services company called iTutorGroup for age discrimination, alleging its automated recruitment software filtered out older applicants.

Read more: Biden administration regulators warn AI, employee surveillance tools could ‘turbocharge’ fraud and discrimination

In one instance, a company had to make changes to its platform based on allegations of bias. In March 2020, HireVue discontinued its facial analysis screening — a feature that assessed a candidate’s abilities and aptitudes based on facial expressions — after a complaint was filed in 2019 with the Federal Trade Commission (FTC) by the Electronic Privacy Information Center.

When HR professionals are choosing which tools to use, it’s critical for them to consider what the data input is — and what potential there is for bias surfacing in those models, says Emily Dickens, chief of staff and head of government affairs at SHRM.

“You can’t use any of the tools without the human intelligence aspect,” she says. “Figure out where the risks are and where humans insert their human intelligence to make sure that these [tools] are being used in a way that’s nondiscriminatory and efficient while solving some of the problems we’ve been facing in the workplace about bringing in an untapped talent pool.”

Also see: Must be ‘fit and active’ or ‘digital native’: how ageist language keeps older workers out

Public opinion is generally mixed

What does the talent pool think about AI? Response is mixed. Those surveyed in an April 2023 report by Pew Research Center, a nonpartisan American think tank, seem to see AI’s potential for combating discrimination, but they don’t necessarily want to be put to the test themselves.

Among those surveyed, roughly half (47%) said they think AI would do better than humans at treating all job applicants the same way. Among those who see bias in hiring as a problem, a majority (53%) also said AI in the hiring process would improve outcomes.

But, somewhat paradoxically, when it comes to putting AI hiring tools into practice, more than 40% of respondents said they oppose AI reviewing job applications, and 71% said they oppose AI being responsible for final hiring decisions.

“People think a little differently about the way that emerging technologies will impact society versus themselves,” says Colleen McClain, a research associate at Pew.

The study also found 62% of respondents said AI in the workplace would have a major impact on workers over the next 20 years, but only 28% said it would have a major impact on them personally. “Whether you’re looking at workers or not, people are far more likely to say, ‘Is AI going to have a major impact in general? Yeah, but not on me personally,’” McClain says.

That’s all aside from the anxiety workers are feeling about the impact of AI on their jobs.

Government officials raise red flags

AI’s potential for perpetuating bias in the workplace has not gone unnoticed by government officials, but the next steps are hazy.

The first agency to officially take notice was the EEOC, which launched an initiative on AI and algorithmic fairness in employment decisions in October 2021 and held a series of listening sessions in 2022 to learn more. In May, the EEOC provided more specific guidance on how algorithmic decision-making software can violate the Americans with Disabilities Act, and in a separate assistance document for employers it said that without safeguards, these systems “run the risk of violating existing civil rights laws.”

The White House had its own approach, releasing its “Blueprint for an AI Bill of Rights,” which asserts, “Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination.” On May 4, the White House announced an independent commitment from some of the top leaders in AI — Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI — to have their AI systems publicly evaluated to determine their alignment with the AI Bill of Rights.

Even stronger language came out of a joint statement by the FTC, Department of Justice, Consumer Financial Protection Bureau and EEOC on April 25, in which the group reasserted its commitment to enforcing existing discrimination and bias laws. The agencies outlined some potential issues with automated systems, including:

  • Skewed or biased outcomes resulting from outdated or erroneous data that AI models might be trained on.

  • Developers, along with the businesses and individuals who use the systems, won’t necessarily know whether the systems are biased, because AI models are inherently difficult to understand.

  • AI systems could be operating on flawed assumptions or lack relevant context for real-world usage because developers don’t account for all potential ways their systems could be used.

From the archives (Aug. 2020): Most white people don’t believe racial discrimination exists at their workplace, but nearly half of Black employees disagree

AI in hiring is under-regulated

Law regulating AI is sparse. There are, of course, equal opportunity and anti-discrimination laws that can be applied to AI-based hiring practices. Otherwise, there are no specific federal laws regulating the use of AI in the workplace, nor any requirement that employers disclose their use of the technology.

For now, that leaves municipalities and states to shape the new regulatory landscape. Two states have passed laws related to consent in video interviews: Illinois has had a law in place since January 2020 requiring employers to inform applicants and obtain their consent before using AI to analyze video interviews. Since 2020, Maryland has banned employers from using facial recognition technology on prospective hires unless the applicant signs a waiver.

Thus far, there’s only one place in the U.S. that has passed a law specifically addressing bias in AI hiring tools: New York City. The law requires a bias audit of any automated employment decision tools. How this law will be executed remains unclear because companies don’t have guidance on how to choose reliable third-party auditors. The city’s Department of Consumer and Worker Protection will start enforcing the law July 5.
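The audits the law calls for center on comparing outcomes across demographic groups. Here is a minimal sketch of the impact-ratio arithmetic described in the city’s published rules, with invented counts; a real audit covers the specific demographic categories the rules enumerate.

```python
# Simplified impact-ratio calculation of the kind NYC's bias-audit rules
# describe. All counts below are invented for illustration.
selected = {"group_a": 120, "group_b": 45, "group_c": 60}    # applicants advanced
assessed = {"group_a": 400, "group_b": 250, "group_c": 180}  # applicants screened

rates = {g: selected[g] / assessed[g] for g in assessed}
top_rate = max(rates.values())

for g, rate in sorted(rates.items()):
    # Impact ratio: each group's selection rate versus the highest group's rate.
    print(f"{g}: selection rate {rate:.1%}, impact ratio {rate / top_rate:.2f}")
```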

Additional laws are likely to come. Washington, D.C., is considering a law that would hold employers accountable for preventing bias in automated decision-making algorithms. In California, two bills that aim to regulate AI in hiring were introduced this year. And in late December, a bill was introduced in New Jersey that would regulate the use of AI in hiring decisions to minimize discrimination.

At the state and local level, SHRM’s Dickens says, “They’re trying to figure out as well whether this is something that they need to regulate. And I think the most important thing is not to jump out with overregulation at the cost of innovation.”

Because AI innovation is moving so quickly, Dickens says, future legislation is likely to include “flexible and agile” language that would account for unknowns.

Plus: What skill sets are needed for workers in the AI era

How businesses will respond

Saira Jesani, deputy executive director of the Data & Trust Alliance, a nonprofit consortium that guides responsible applications of AI, describes human resources as a “high-risk application of AI,” especially because most companies using AI in hiring aren’t building the tools themselves — they’re buying them.

“Anyone that tells you that AI can be bias-free — at this moment in time, I don’t think that is right,” Jesani says. “I say that because I think we’re not bias-free. And we can’t expect AI to be bias-free.”

But what companies can do is try to mitigate bias and properly vet the AI companies they use, says Jesani, who leads the nonprofit’s initiative work, including the development of Algorithmic Bias Safeguards for Workforce. These safeguards are used to guide companies on how to evaluate AI vendors.

She emphasizes that vendors must show their systems can “detect, mitigate and monitor” bias in the likely event that the employer’s data isn’t entirely bias-free.

“That [employer] data is essentially going to help train the model on what the outputs are going to be,” says Jesani, who stresses that companies must look for vendors that take bias seriously in their design. “Bringing in a model that has not been using the employer’s data is not going to give you any clue as to what its biases are.”
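In code terms, that vetting step might look like scoring the employer’s own historical applicants with the vendor’s model and comparing outcomes across groups. This is a hedged sketch: the model object and its sklearn-style scoring method are assumptions for illustration, not any vendor’s real interface.

```python
# Hypothetical vendor-vetting check: run the vendor's model on the employer's
# own historical applicant data and compare selection rates across groups.
import numpy as np

def audit_vendor(vendor_model, features, groups, threshold=0.5):
    """Report the model's selection rate for each demographic group.

    `vendor_model` is assumed to expose an sklearn-style predict_proba().
    """
    scores = vendor_model.predict_proba(features)[:, 1]
    selected = scores >= threshold
    for g in np.unique(groups):
        rate = selected[groups == g].mean()
        print(f"group {g}: selection rate {rate:.1%}")
```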

More: Will AI cause mass unemployment? What history says about technology and jobs

So will the HR robots take over or not?

AI is evolving quickly — too fast for this article to keep up with. But it’s clear that despite all the trepidation about AI’s potential for bias and discrimination in the workplace, businesses that can afford it aren’t going to stop using it.

Public alarm about AI is what’s top of mind for Alonso at SHRM. On the fears dominating the discourse about AI’s place in hiring and beyond, he says:

“There’s fear-mongering around ‘We shouldn’t have AI,’ and then there’s fear-mongering around ‘AI is eventually going to learn biases that exist amongst their developers and then we’ll start to institute those things.’ Which is it? That we’re fear-mongering because it’s just going to amplify [bias] and make things more effective in terms of carrying on what we humans have developed and believe? Or is the fear that eventually AI is just going to take over the whole world?”

Alonso adds, “By the time you’ve finished answering or deciding which of those fear-mongering things or fears you fear the most, AI will have long passed us by.”


Anna Helhoski writes for NerdWallet. Email: anna@nerdwallet.com. Twitter: @AnnaHelhoski.


