Deloitte Embraces AI Despite Setback Over Hallucinated Information

By Kevin Lee

In its latest deal, Deloitte, one of the world's biggest consulting firms, is doubling down on artificial intelligence (AI) technology, including Anthropic's chatbot, Claude. The company says it has already made strides in tackling challenges like AI-generated inaccuracies. The new commitment comes just days after Deloitte was forced to refund part of its fee for a report filed with the Australian Department of Employment and Workplace Relations. The refund followed concerns over AI hallucinations throughout the report, including fabricated legal citations.

Earlier this year, the Australian Department of Employment and Workplace Relations commissioned Deloitte to conduct an independent assurance review. The resulting report, worth A$439,000, was found to contain AI-generated content, including errors. After the errors were identified, a corrected version of the report was uploaded to the department's website. In response to the discrepancies, the department announced that it would be issuing a refund.

Despite these hurdles, Deloitte remains all-in on its AI transformation, and the firm believes strongly in the promise of the technology. Ranjit Bawa, Deloitte's managing principal for Global Technology and Ecosystems & Alliances, highlighted the firm's commitment to responsible AI practices.

“Deloitte is making this significant investment in Anthropic’s AI platform because our approach to responsible AI is very aligned, and together we can reshape how enterprises operate over the next decade. Claude continues to be a leading choice for many clients and our own AI transformation,” said Ranjit Bawa.

Claude has quickly become a preferred AI solution for hundreds of clients, but its record is not spotless: the chatbot has faced scrutiny for generating hallucinated information, raising concerns about the trustworthiness of AI in high-stakes applications. Recent reports noted that a lawyer representing Anthropic had to apologize after Claude misrepresented a legal citation.

Unfortunately, the recent incident involving Deloitte is not an outlier in the AI landscape. The Chicago Sun-Times recently admitted that it unwittingly ran an AI-generated list of books as part of its annual summer reading guide. Though the authors listed were all real people, the titles themselves did not exist. Incidents like these have made institutions warier about adopting AI technologies.

At an event organized by TechCrunch in San Francisco on October 27–29, 2025, Deloitte demonstrated its growing reliance on AI. The firm pointed to its work with Anthropic as a key example of its larger strategy to leverage AI's power while ensuring safety and accountability.

Anthropic has characterized its partnership with Deloitte as an “alliance,” though the financial terms have not been publicly disclosed. The partnership focuses on expanding Claude's capabilities while managing the challenges that come with AI-generated content.
