Recent research commissioned by Diversity Council Australia finds that AI hiring systems can introduce new barriers for jobseekers of all demographics, sometimes in surprising ways. The report, released on May 14, paints a troubling picture of the bias older applicants and women experience, and details the challenges faced by applicants with limited English proficiency or disabilities during the application process. Catherine Hunter, CEO of Diversity Council Australia, emphasizes the need for organizations to consider the ethical implications of these rapidly evolving technologies.
The study indicates that as of 2024, 43 percent of organizations employ AI “moderately” in recruitment processes, while 19 percent utilize it “extensively.” With the technology’s increasing use come concerns about the fairness and accessibility of AI systems. According to Hunter, “With such high adoption rates and the rapid acceleration of the technology, we’re just concerned that people aren’t putting in place the proper considerations around ethical use.”
Dr Natalie Sheard from the Melbourne Law School at the University of Melbourne elaborates on how AI systems can disadvantage specific demographics, adding that people who speak English as a second language are particularly at risk when subjected to video interviews. Ifrin Fittock, CEO of Sisterworks, says that for the organization’s sisters, AI recruitment and interviews can be extremely daunting. One of the biggest obstacles is that English is not their first language: the challenge is not simply answering the questions, but understanding and navigating an unfamiliar process. Their level of digital literacy also makes a tremendous difference.
Sheard points out a major flaw in this approach: rather than evaluating what happens on video, AI systems typically score only the transcribed audio from video interviews. This can introduce bias when scoring candidates who have accents or who speak English as a second language. Transcription services, she explains, tend to perform poorly for people who do not speak English as a first language, and candidates with accents are disproportionately affected.
The study also shows that older applicants and women encounter further obstacles. Sheard identifies a crucial problem: gaps in employment history are frequent disqualifiers, a practice that disproportionately affects women who leave the workforce to care for families and unfairly discounts their potential and contributions. “In my research, I found CV screening systems that specifically look for gaps in employment history. These systems frequently disqualify candidates on this basis,” she says.
Sisterworks has incorporated video interview training into its job readiness course to better prepare candidates for these unexpected challenges. Fittock recalls a recent case in which a number of candidates were caught off guard by AI-led screening interviews and did not pass as a result. “We were flat out shocked at the end of last year. We would have sent about nine sisters to an interview, and they wouldn’t have gotten it,” she says. “When we asked, we discovered that they were indeed being interviewed by video and AI interviews, and because they’ve never encountered it previously, I think they just bombed because they didn’t know how to respond.”
The implications go beyond the personal. These stories highlight national concerns about bias in artificial intelligence systems. Research conducted in the U.S. indicates that women and non-Anglo candidates are more likely to apply for jobs if they are aware that AI tools are in use. Professor Andreas Leibbrandt notes that while these groups perceive AI bias, they believe it to be less pronounced than bias encountered with human recruiters.
“It’s not that women or ethnic minorities feel there’s no bias in the AI algorithm, but they feel that this bias is less so than when they’re faced with a human recruiter,” Leibbrandt asserts. He cautions that the data used to train AI algorithms is rife with bias, drawn from institutions with long histories of discrimination against marginalized communities. “Their training data can, in and of itself, be biased,” he adds.
The concerns surrounding AI hiring systems are compounded by their development overseas, where training datasets may not reflect the diversity of the Australian workforce. Sheard underscores that most of these systems are built outside Australia and may therefore have been trained on data from populations that do not resemble the Australian population. This can lead to inaccurate scoring when screening candidates from under-represented backgrounds, especially refugees, migrant women and First Nations Australians.

