Executive Orders and Control of the Narrative in the Age of AI
On July 23, 2025, President Trump issued three executive orders that, when viewed together, form a coordinated federal initiative to shape the infrastructure, content, and international reach of U.S.-developed artificial intelligence (AI). These orders follow the same theme and direction as earlier directives like EO 14190 (which curtailed K–12 education on systemic racism and unconscious bias) and EO 14253 (which ordered the Vice President to remove museum exhibits that “foster national shame” or depict the USA as being on the wrong side of history, including during colonial times).
The theme across all these orders is clear: the Federal Government seeks to control the narrative, whether through education, culture, or AI.
Let’s look briefly at each EO at issue here:
EO 14318 – Accelerating Federal Permitting of Data Center Infrastructure
Authorizes expedited permitting on federal lands for data centers and related energy infrastructure to support rapid AI development—effectively clearing environmental and regulatory hurdles in favor of speed and scale.
EO 14319 – Preventing Woke AI in the Federal Government
Requires that any AI system procured by the federal government meet a new standard of “ideological neutrality,” defined in part by the rejection of DEI as “one of the most pervasive and destructive of these ideologies.” This order prohibits AI providers from encoding “partisan or ideological judgments” into model outputs—yet simultaneously establishes anti-DEI and pro-nationalist baselines as the standard for neutrality.
EO 14320 – Promoting the Export of the American AI Technology Stack
Calls for the Department of Commerce and other agencies to coordinate U.S. technical, financial, and diplomatic resources to accelerate the global deployment of U.S.-made AI systems, especially in sectors like education, healthcare, and agriculture.
Taken together, what do these orders mean? First, the Federal Government is laying the groundwork to own the AI pipeline: building the infrastructure (14318), controlling the content (14319), and exporting the product with possible strings attached (14320).
Second, “ideological neutrality” is being redefined through a political lens, specifically one aligned with the Trump Administration’s worldview. Anti-DEI positions, hyper-patriotic historical narratives, and rejection of systemic-injustice frameworks are now baseline expectations for federally procured AI. Or perhaps even the Trump Administration’s view of the economy and economic “facts” will be embedded in federally procured or federally developed AI. It is hard to know at this early stage how far this will go.
And third, the U.S. is positioning itself not just as a tech exporter but as a values exporter, using economic and diplomatic levers to push allied and developing nations toward adopting U.S.-developed AI, with U.S. ideology embedded in those systems.
What does this mean for you, the user?
It’s unlikely (though not impossible) that AI companies will build entirely separate models for government and public users. More likely, they will develop a partitioned rules layer that enforces different norms based on user context (a rough sketch of that idea follows below). But that raises critical questions. Will private-sector users encounter filtered outputs that mirror federal priorities? Or will multiple “truths” begin to emerge – one for public/private use, and one for federal government use?
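To make the idea concrete, here is a minimal, purely hypothetical sketch of what a partitioned rules layer could look like: the same underlying model, with a different output policy selected by user context. Every name in it (OutputPolicy, FEDERAL_POLICY, select_policy) is an illustrative assumption, not any vendor’s actual design.

```python
# Hypothetical sketch of a "partitioned rules layer": the policy applied to
# model outputs depends on who is asking, not on what was asked.
from dataclasses import dataclass, field


@dataclass
class OutputPolicy:
    name: str
    # Instructions prepended to the model's system prompt for this context.
    system_preamble: str
    # Topics the provider suppresses or reframes for this context.
    restricted_topics: list[str] = field(default_factory=list)


# Illustrative policy for federal procurement under EO 14319 (assumed, not real).
FEDERAL_POLICY = OutputPolicy(
    name="federal",
    system_preamble="Follow the agency's 'ideological neutrality' baseline.",
    restricted_topics=["DEI framing", "systemic-injustice framing"],
)

# Illustrative default policy for commercial and consumer users.
PUBLIC_POLICY = OutputPolicy(
    name="public",
    system_preamble="Present multiple perspectives where sources disagree.",
)


def select_policy(user_context: str) -> OutputPolicy:
    """Pick the rules layer based on user context."""
    return FEDERAL_POLICY if user_context == "federal" else PUBLIC_POLICY


if __name__ == "__main__":
    for ctx in ("federal", "public"):
        policy = select_policy(ctx)
        print(f"{ctx}: preamble={policy.system_preamble!r}, "
              f"restricted={policy.restricted_topics}")
```

The point of the sketch is simply that a single model could serve two audiences with two different “neutrality” baselines, which is exactly what raises the “multiple truths” question above.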
If AI providers comply fully with EO 14319, then, yes, your AI experience could be shaped by the Trump Administration’s definition of what constitutes “neutral.” As a user, you may need to override assumptions in your prompts or deliberately cross-check sources. Neutrality in this context might not be neutrality at all, and it could produce outcomes that are out of sync with the rest of the world. This could be especially problematic if you work for a multinational organization with affiliates and colleagues around the world. That said, if you are a federal contractor, you will likely need to use the federally procured version of AI to ensure that your results comport with the expectations of the federal agencies you serve.
What does it mean for the rest of the world?
The international export of U.S.-developed AI could embed American ideologies into foreign infrastructure, especially if financing and diplomacy are conditioned on adoption of U.S.-developed AI. That could draw allies closer. Or it could produce an international backlash from countries that refuse to adopt the Trump Administration’s ideology on critical issues (e.g., diversity, world history, the economy, the environment).
Time will tell how the Trump Administration’s agenda unfolds in this area. In the meantime, be aware, and learn how to prompt your AI for the “neutrality” baseline that makes sense to you and your organization.
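One way to do that is to state your baseline explicitly rather than inherit whatever default the provider has configured. The snippet below is a minimal sketch, assuming the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the environment; the model name, the example question, and the wording of the baseline are illustrative assumptions, not recommendations.

```python
# Minimal sketch: supply your own "neutrality" baseline as a system prompt
# instead of relying on the provider's default framing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NEUTRALITY_BASELINE = (
    "When a question touches on contested historical, social, or economic "
    "topics, summarize the major competing viewpoints, identify the kinds "
    "of sources behind each, and flag where experts disagree rather than "
    "presenting a single framing as settled."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": NEUTRALITY_BASELINE},
        {"role": "user", "content": "How should I describe the causes of the 2008 financial crisis?"},
    ],
)
print(response.choices[0].message.content)
```

The same baseline can be reused across prompts, which makes it easier to compare outputs over time and notice if the defaults underneath you shift.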

I'm worried about what's happening with algorithms, and I'm not being dramatic; I'm looking at the actual policies. And I'm thinking about my granddaughter, who is learning tech and her history, our history.
Look, I know tech talk can be boring, but these executive orders aren't what they seem. They're reshaping what information we can access and how we understand our shared history. It's like someone's not just editing our photo albums but programming what pictures we can take tomorrow.
EO 14318? It's basically saying "forget the environment, we need more server farms!" (Think Starbucks, but for data—one on every corner.)
EO 14319 makes me laugh in that nervous way—suddenly, AI that acknowledges diverse perspectives is "biased," but AI that ignores them is "neutral." That's like calling vanilla "flavorless" and everything else "too spicy."
And EO 14320? It's about spreading this approach globally. It's like we're exporting a very specific American apple pie recipe and insisting it's the only way to bake.
I care about this because it affects all of us. When the systems running our schools, hospitals, and voting booths are trained on selective information, we all lose something precious, our ability to make truly informed choices.
We can do better! Start by asking questions about the tech you use. Where did it learn what it "knows"? Who decided what it should forget?
This matters because algorithms are becoming our shared memory. And just like I want my friends to call me out when I misremember something, I want our digital tools to reflect our full, complicated, beautiful reality.
Let's build technology that brings us together rather than divides us. The future is still ours to shape, and I believe we can create one where machines help us see more clearly, not less.
@Nadine Jones, GCSupport Thank you, Honeyeee – the world needs your voice in this conversation.