AI Hiring Systems Face Scrutiny for Potential Bias Against Marginalized Groups

By Kevin Lee

New research by Dr. Natalie Sheard of the National Employment Law Project reveals dire implications of using emerging artificial intelligence (AI) technology in hiring. According to her findings, AI hiring systems may “enable, reinforce and amplify discrimination against historically marginalized groups.” The disclosure has reignited controversy over the ethical consequences of deploying these technologies in recruitment.

For her dissertation, Dr. Sheard interviewed 23 people around the country, including two newly appointed career coaches, a leading Australian AI expert, and two employees of the tech development firm Anthropic. She contends that without sensible regulation of these hiring tools, substantial bias could infiltrate hiring decisions, and that this lack of oversight undermines equitable hiring practices.

These issues demand timely and comprehensive attention. Last year, nearly 62 percent of Australian companies used AI “extensively or moderately” in their recruitment procedures. Dr. Sheard is clear in her conviction that the Australian government needs to review its anti-discrimination legislation: the laws must be robust enough to regulate the safe operation and use of AI hiring tools, including questions of liability, since no such laws definitively exist yet.

The Problem of Data Bias

AI hiring tools learn from the data they are given, and biased data leads to discriminatory results.[…]

Dr. Sheard’s work sheds light on an alarming problem: these systems can inadvertently penalize qualified candidates for differences in background, gender, or language fluency. For example, in 2014 Amazon developed an AI model trained on a decade of CVs submitted for software developer positions. The model learned to downgrade applications that included the word “women’s,” demonstrating the bias built into the system.

Dr. Sheard also highlights how language plays an equally important role in AI assessments. These systems judge how well a candidate can communicate by testing their mastery of standard English. Consequently, candidates who speak English as a second language or use non-standard English may be unfairly judged as lacking strong communication skills.

She points out that these systems frequently perpetuate existing societal prejudices. The training data for these models also tends to be male-biased, since women are often underrepresented in publicly available sources such as Wikipedia. “They’re trained on data that’s scraped off the internet,” she explains. The result is a male-dominated lens on attracting talent.

The Call for Regulation

Sheard strongly advocates regulating AI hiring systems. “I think there’s absolutely a need to regulate these AI screening systems,” she underscores. Her research shows that without careful oversight, these tools can have a disparate impact on marginalized communities in the labor market. The risk is particularly acute for women, job seekers with disabilities, and older candidates.

Civil rights and advocacy groups have called for an outright ban on AI hiring systems until meaningful legal protections are established. Dr. Sheard supports this position. “That claim is extremely compelling, especially in the absence of meaningful legal protections,” she argues.

This developing story is further complicated by new complaints in the United States about discriminatory practices in AI recruiting technology. One of the largest US-based tutoring groups has faced grave allegations that it programmed its system to automatically reject female applicants over 55 and male applicants over 60.

The Role of Human Oversight

Human intervention in the recruitment process remains indispensable. Dr. Sheard’s key point is that these AI systems are often adopted with no oversight. “I mean, that just really makes you wonder what sort of human oversight was given to the entire process,” she noted. Somewhere along the way, a human should have intervened and said, “This is completely contradictory to all we know about this individual.”

Dr. Sheard cautions against relying on AI alone for candidate evaluation. This tick-the-box approach can ignore critical nuances in an individual’s background and potential. Job seekers with depression, for example, may find positivity tests particularly challenging, yet they can nonetheless be highly effective at carrying out the key functions of the job.

The House Standing Committee on Employment, Education and Training has also uncovered major flaws in this use of AI in HR. Its blueprint for AI regulation calls for a ban on any AI system used to make final decisions without human supervision.
