Should AI decide who gets the job?

January 5, 2023

In late November 2022, Vox reported that Amazon had recently sought to replace human resources (HR) employees with artificial intelligence (AI), at least for some tasks.

According to Vox, an “Amazon confidential” document, dated October 2021, described a technology called Automated Applicant Evaluation, or AAE. The technology uses data on current employees to predict which applicants are most likely to succeed on the job. Without human involvement, AAE schedules interviews for the candidates it deems best.
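Vox’s description suggests a familiar supervised-learning shape: learn from data about people already in the role, then score and rank new applicants. The sketch below, in Python, illustrates that general pattern with invented data; every feature, value, and name in it is hypothetical, and it is not Amazon’s implementation.

    # Illustrative sketch only -- not Amazon's AAE. All data and feature
    # names here are invented for demonstration.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical records of current employees, labeled by on-the-job success.
    employees = pd.DataFrame({
        "years_experience": [1, 4, 6, 2, 8, 3],
        "assessment_score": [55, 80, 75, 60, 90, 70],
        "succeeded":        [0, 1, 1, 0, 1, 1],
    })

    # Train a model to predict success from pre-hire signals.
    features = ["years_experience", "assessment_score"]
    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(employees[features], employees["succeeded"])

    # Score new applicants and, "without human involvement," schedule
    # interviews for the top-ranked candidates.
    applicants = pd.DataFrame({
        "name": ["applicant_1", "applicant_2", "applicant_3"],
        "years_experience": [2, 7, 1],
        "assessment_score": [85, 70, 90],
    })
    applicants["p_success"] = model.predict_proba(applicants[features])[:, 1]
    for _, row in applicants.nlargest(2, "p_success").iterrows():
        print(f"Schedule interview: {row['name']} (p={row['p_success']:.2f})")

Note that everything downstream of the training data inherits whatever patterns, including biased ones, that data contains, a point the debate below turns on.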

While it is unclear whether Amazon currently uses—or ever used—AAE in its actual hiring practices, Vox’s report has rekindled debate over the ethics of delegating hiring decisions to AI.

Globally, companies are embracing AI in hiring. According to global survey data from Aptitude Research, a Boston-based research firm that specializes in HR technology, 55% of companies have invested in “recruitment automation.”

Companies that offer AI hiring products sometimes market them as necessary in a globalized economy, where remote workers can be sourced from anywhere. Eightfold, a company that sells AI solutions for HR, puts it this way: “Managing a planet’s worth of applications hasn’t been achievable for humans working alone. Humans working with AI, on the other hand, can seek top talent worldwide.”

Proponents of AI in hiring argue that it has two benefits. AI automates time-consuming tasks, such as searching for job candidates, screening them, and scheduling interviews. And AI could eliminate humans’ unconscious biases, thus reducing discrimination and equalizing job opportunities for applicants.

However, critics are concerned that AI—and algorithms more generally—may discriminate in familiar ways, baking in precisely the kinds of unconscious biases that some believe AI can avoid. Bias in AI hiring may be even more pernicious when it hides behind algorithmic opacity and a façade of neutrality.

Some commentators concede that AI can be biased but believe that, ultimately, AI can be designed to make hiring decisions less biased than those made by humans. However, designing an ethical and just AI may require better adherence to AI design standards and regular auditing—perhaps new laws too.

How is AI used in hiring?

According to a survey by the Society for Human Resource Management (SHRM), one in four organizations already use AI or automation for HR-related activities, including recruitment and hiring. The survey also found that 92% of companies that use AI or automation source their tools from vendors.

AI has been tested or used for nearly all hiring tasks. It has even been used—with mixed results—to analyze and score body language, tone, and facial expressions during interviews. But according to another SHRM survey, three uses are most common:

  • Identifying job candidates by searching publicly available information, like social media profiles
  • Screening and evaluating candidates
  • Communicating with job candidates through chatbots, for tasks like scheduling interviews and even conducting the interviews themselves

Similarly, Phenom, a company that sells AI hiring products, divides AI hiring capabilities into three categories: searching, screening, and scheduling.

AI is used in other, idiosyncratic ways too. For instance, HireVue, another company that sells AI hiring products, offers “game-based psychometric testing,” which uses automation to evaluate candidates’ performance on specially designed games. HireVue claims its testing can assess “competencies” such as “personality and work style, how they work with people and how they work with information.”

At each of these stages, AI affects which candidates are favored—or disfavored—and can exacerbate, reproduce, or reduce discriminatory biases.

Does AI create bias in hiring? Sometimes, yes.

While AI can create consistency in decision-making, various factors—like a skewed training data set or hyper-personalized job-ad targeting—can introduce biases that result in discrimination.

In 2014, Amazon began developing an AI tool for hiring, but the project was discontinued when the AI was discovered to be biased. The AI (different from the more recent AAE) had been trained on applications submitted over the previous ten years in male-dominated tech professions. As a result, the AI favored male applicants. It even favored résumés with verbs that men are more likely to use, like “executed” and “captured.”
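The mechanism behind this failure is easy to reproduce in miniature: when one group dominates the positive training examples, a text model learns that group’s vocabulary as a proxy for merit. The toy sketch below uses invented résumé snippets to show the effect; it reflects nothing about Amazon’s actual data or model.

    # Toy demonstration of training-data bias -- all data invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical resume snippets: the "hired" label is skewed so that
    # verbs like "executed" co-occur with success, regardless of skill.
    resumes = [
        "executed trading strategy",       # hired
        "executed migration plan",         # hired
        "captured market requirements",    # hired
        "led community outreach",          # not hired
        "organized volunteer program",     # not hired
        "coordinated support team",        # not hired
    ]
    hired = [1, 1, 1, 0, 0, 0]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(resumes)
    clf = LogisticRegression().fit(X, hired)

    # The model now rewards the words themselves, not underlying competence:
    # "executed" receives a strongly positive weight.
    weights = dict(zip(vectorizer.get_feature_names_out(), clf.coef_[0]))
    top = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    print(top[:3])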

“Many hope that algorithms will help human decision-makers avoid their own prejudices by adding consistency to the hiring process,” writes Miranda Bogen, Senior Policy Analyst at Upturn, a nonprofit focused on equity and justice in tech. “But algorithms introduce new risks of their own.”

For example, with colleagues at Northeastern University and the University of Southern California, Bogen researched whether targeted job ads on Facebook were biased. They found that 85% of people who saw ads for supermarket cashier jobs were women, and 75% of people who saw ads for taxi companies were Black.

“This is a quintessential case of an algorithm reproducing bias from the real world, without human intervention,” writes Bogen.

Advocates of AI suggest such biases might be mitigated with auditing and “AI literacy.” Companies would also need to ensure transparency by being able to explain to job candidates how their hiring AI makes decisions.

“To systematically mitigate bias in AI technologies, it is important to create internal processes based on how one’s organization defines fairness in algorithmic outcomes,” write Jessica Kim-Schmid and Roshni Raveendhran in Harvard Business Review. Companies, they add, also need to determine “how transparent and explainable AI decisions within the organization need to be.”
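One concrete way to operationalize a fairness definition is an adverse-impact check modeled on the “four-fifths rule” from U.S. employment guidelines, which compares selection rates across groups. The sketch below runs such a check on hypothetical screening outcomes; real bias audits are considerably more involved.

    # Minimal adverse-impact check (the "four-fifths rule") on hypothetical
    # screening outcomes. An impact ratio below 0.8 is a conventional red flag.
    from collections import Counter

    # (group, selected) pairs as an AI screener might produce them.
    outcomes = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    selected = Counter(group for group, ok in outcomes if ok)
    totals = Counter(group for group, _ in outcomes)
    rates = {group: selected[group] / totals[group] for group in totals}

    # Compare each group's selection rate to the highest group's rate.
    highest = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / highest
        status = "FLAG" if ratio < 0.8 else "ok"
        print(f"{group}: rate={rate:.2f} impact_ratio={ratio:.2f} [{status}]")

A company could run a check like this on its screener’s output at regular intervals, in the spirit of the auditing practices Kim-Schmid and Raveendhran describe.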

AI might be able to reduce bias too

While AI and algorithmic decision-making pose new risks, AI may be designed to reduce bias and discrimination while increasing efficiency.

For instance, one of the most attractive uses for AI is screening applicants for interviews, which has historically been time-consuming and labor-intensive. Moreover, during this process, humans are susceptible to unconscious bias, such as rating candidates with masculine names more favorably than candidates with feminine names, even when their résumés are otherwise identical.

Unconscious bias is known to be especially pronounced in a practice that Frida Polli, a cognitive neuroscientist and the co-founder and CEO of pymetrics, calls “shrinking”: reducing an applicant pool by sifting out all candidates who lack a desired trait, like a degree from an Ivy League college or an employee referral.

“Numerous studies have shown that this process leads to significant unconscious bias against women, minorities and older workers,” writes Polli in Harvard Business Review. Polli concludes, “Traditional hiring tools are already biased.”

In contrast, Polli writes, AI can reduce unconscious bias by, for example, evaluating all applicants, not just those who survived the shrinking stage.

“Recruiters limit their review of the applicant pool to the 10% to 20% they think will show most promise,” writes Polli. “But guess what? Top colleges and employee-referral programs are much less diverse than the broader pool of applicants submitting résumés.”

Polli argues that AI can overcome such biases, but it must meet ethical standards and be audited once it’s created. Polli suggests following principles “for making AI ethical and fair” created by AI organizations like OpenAI and the Future of Life Institute.

Projections for the future

While Amazon’s early attempt at using AI was discontinued due to bias, there is evidence that the company’s new model works better. “The model is achieving precision comparable to that of the manual process and is not evidencing adverse impacts,” says the internal Amazon document, according to Vox. If that claim is accurate, Amazon’s success may be a sign of what’s to come.

“If AI technology continues to develop, we could see more companies following Amazon’s lead in making more aggressive moves toward replacing recruiter tasks (and reducing recruiter staffing levels) with AI technology,” writes Steve Boese, columnist for Human Resource Executive.

There is also movement on the legal front. Recognizing the potential for bias and discrimination, state and federal governments in the U.S. have begun to regulate AI used in hiring processes.

In 2019, Illinois passed a law regulating AI in video interviews after AI companies had started selling products that autonomously evaluate videos of job candidates. More recently, according to Observer, New York City passed legislation, which goes into effect this January, requiring employers to perform a “bias audit” on any AI used in hiring. Observer also reports that similar legislation has been proposed in Washington, D.C., and California.

Even if AI is audited and regulated, it may face limitations. Writing in Forbes, Jack Kelly, a self-described “executive recruiter,” cautions: “If Amazon strictly hires people based on their similarities to current employees, the eCommerce giant runs the risk of creating a bubble of homogeneity, which will bar applicants who don’t fit exactly into the mold, but have a lot to offer.”
