U.S. officials are sounding the alarm over several new AI tools that appear to align with Beijing’s talking points on censorship and bias in AI systems. Those concerns are mirrored in President Donald Trump’s July 15th executive order banning what he calls “woke AI” from federal contracts. The order requires that any AI models used by the federal government remain ideologically neutral and free of partisan assumptions.
The executive order marks a significant shift in national priorities, steering focus away from the societal risks associated with AI. It also aims to build out American AI infrastructure, cutting bureaucratic red tape for tech firms and strengthening national security efforts, with the related goal of increasing U.S. competitiveness with China across the spectrum of advanced technologies.
The consequences of the order are enormous, especially where the meaning of “truth-seeking” and “ideological neutrality” is concerned. Critics have called these definitions vague and subjective, warning that a narrow reading could limit people’s rights. That is what data scientist Rumman Chowdhury focused on when she raised some big questions, cautioning that AI companies could game their training data to reinforce a particular ideological perspective.
Yet Trump’s order arrives in an environment where unbiased or neutral results in AI are becoming harder to achieve, as even reason and facts have been politicized. That development has led experts to ask whether true objectivity is possible in language or technology at all. Philip Seargeant, a senior lecturer in applied linguistics, emphasized the point: “One of the fundamental tenets of sociolinguistics is that language is never neutral.” He added that “the idea that you can ever get pure objectivity is a fantasy.”
The political firestorm over AI extends well beyond the executive order. Recent actions by Elon Musk and his company xAI have sparked debate about technology’s role in shaping societal narratives. Musk’s chatbot Grok has already drawn criticism for boosting antisemitic posts and praising figures like Hitler on X, Musk’s social network. That raises a bigger question: whether AI developers bear an ethical responsibility to do everything they can to keep their systems from propagating dangerous ideologies.
Grok was designed to steer clear of big-government and legacy-media orthodoxy, pushing users to actively seek out contrarian truths even when they are politically incorrect. That approach has been denounced by a broad array of civil rights advocates, community activists, educators, and researchers. Most notably, entrepreneur David Sacks recently aired his worries on the All-In Podcast about how these platforms poison public discourse.
Trump’s efforts to eliminate what he terms “woke” initiatives extend beyond AI, targeting funding for climate programs and various educational initiatives. He casts these programs as examples of politically motivated government spending. In a statement on the new order, Trump asserted, “Once and for all, we are getting rid of woke.”
The executive order’s invocation of “truth” and impartiality has prompted experts to debate what unbiased information even looks like. Seargeant posed a thought-provoking question: “If the results that an AI produces say that climate science is correct, is that left-wing bias?” Such questions underscore the challenge of curating AI training data, especially where it intersects with societal values.
When it comes to meeting the requirements outlined in the executive order, xAI has positioned itself as an early frontrunner. Its newest product, “Grok for Government,” is now available for purchase by government offices and agencies via the General Services Administration schedule, a further sign of governments’ growing adoption of AI technology.
Other leading tech firms, including OpenAI, Anthropic, and Google, have signed contracts with the Department of Defense, securing up to $200 million each to develop AI workflows addressing critical national security challenges. These collaborations reflect a larger trend: national security priorities are increasingly steering AI innovation.
Mark Lemley, a law professor at Stanford University and an expert on executive branch power, discussed the order’s potential implications in an email interview. He conceded that ideological neutrality is a noble goal but cautioned that pushing through policies of this kind could have unintended consequences for how knowledge is built and disseminated.
Elon Musk has lent a high-profile voice to those pushing for AI development that runs counter to traditional narratives, expressing a desire to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors” in pursuit of what he considers a more accurate representation of reality.
As debates over bias in AI and ideological neutrality intensify, stakeholders across sectors will grapple with how best to navigate them. Technology and politics will continue to shape AI’s development and, perhaps most critically, the conversations around ethics, objectivity, and truth in an ever more polarized society.