Attorneys General Demand AI Companies Address Dangerous Outputs

By Kevin Lee

Today, dozens of state attorneys general (AGs) from across the United States and its territories are calling on major artificial intelligence (AI) companies to adopt new protections that proactively prevent dangerous outputs from their systems. The move comes amid growing alarm over the harms AI can pose, especially to marginalized groups. The letter, signed by attorneys general through the National Association of Attorneys General, targets the industry's biggest players, naming Microsoft, OpenAI, Google, and ten other leading AI companies.

Public interest advocates have been pressing Congress to act, driven in part by a growing consensus that AI systems often produce harmful "delusional outputs," inaccurate responses that can damage users' mental health. The AGs have been explicit in their ask: they want these companies to implement "reasonable and appropriate safety tests" for their generative AI (GenAI) models. These tests are meant to keep models from generating toxic, inappropriate, or otherwise dangerous outputs, which the letter characterizes as nothing short of "sycophantic and delusional."

Despite all this, the Trump administration remains fully committed to advancing AI, and demand for internal protections has grown in response. The letter's introduction follows several attempts to impose a nationwide moratorium on state-level AI regulation, a prospect that has sparked fierce controversy. Critics say that without real oversight, the dangers posed by AI will only grow, harming at-risk populations the most.

The AGs’ letter also highlights the need for transparency and openness in AI development, a timely reminder. Perhaps most significantly, it recommends that every large language model be subject to independent third-party audits to root out evidence of sycophantic or delusional outputs. These audits could be conducted by university-affiliated institutions or civil society organizations, creating an outside check on the AI companies’ practices.

The AGs are also moving to protect users directly. They have proposed new incident reporting procedures aimed at warning users when chatbots produce psychologically damaging outputs. Such measures matter more than ever as AI's role in daily life grows, shaping decisions in housing, healthcare, finance, and more.

GenAI has recently grabbed headlines for a different reason: its potential to transform every industry. But with that significant positive potential comes the risk of GenAI shaping the world in harmful ways. In practice, it has already caused deep harm and threatens further damage to especially vulnerable populations. This inherent duality in AI technology only intensifies the need for responsible stewardship and regulation.

As these discussions unfold, President Trump has announced plans to issue an executive order aimed at limiting individual states' ability to regulate AI. The order will likely face bipartisan pushback from congressional legislators who say they are concerned about the unregulated spread of AI.

The controversy over AI regulation is heating up ahead of a major TechCrunch event: the 2026 Change Making Summit, October 13-15, 2026, in San Francisco. The event will likely serve as a platform for further discussion of policy approaches and technological advancements in the AI sector.
