The new White House roadmap for US leadership in artificial intelligence is ambitious and sometimes admirably thoughtful. It’s also vague, contradictory, and at odds with reality in important ways.
The Trump administration released its long-anticipated AI action plan on July 23, pointing the way forward in AI innovation, security, and tech diplomacy. This effort has the potential gravity of a modern-day Manhattan Project for accelerating AI while grappling with AI-enabled threats and the technology’s inherent risks.
The plan acknowledges difficult truths—namely, the power and opacity of cutting-edge AI models—and the technology’s seismic impact on society. However, many parts of the blueprint are incomplete or unrealistic.
Five questions still loom, and the answers will ultimately decide whether the AI Action Plan becomes a milestone for American AI leadership or a cautionary tale of missed opportunity.
Are the AI export controls in name only? The plan calls for export controls on AI tools—a time-honored tactic for limiting the spread of sensitive technologies to adversarial powers. But Trump has equivocated on this commitment before.
In late July, he decided to allow US chipmaker Nvidia to sell its H20 chips to China. Those chips are built for AI inference, the stage at which systems apply the knowledge they have acquired. The action plan extols the virtues of leveraging the US tech stack (hardware, models, software, applications, and standards) as tools of statecraft, weaving adoption of US AI products into diplomatic relationships.
The action plan says all the right things about export controls. Whether execution will match the stated plans will be the real test. The Trump administration is off to a rocky start.
How will the US balance AI tech transparency with security? The action plan takes a strong stance in support of open-source and open-weight models, which the public can copy or modify, by expanding startups' access to large-scale computing power and incentivizing public-private partnerships.
This puts the White House in line with major private sector AI developers who see open-source as the best path forward. However, adversarial powers could easily repurpose highly capable open-source systems, and malicious actors could weaponize these tools for cyberwarfare, disinformation, deepfakes, and surveillance.
Given this known danger, and the unknown consequences it could unlock, what will the federal government do to make sure that open models are safe and not misused in the future?
Will the government guide risk assessment for frontier AI models? One of the most consequential parts of the plan is that it elevates the role of the US government from a standards-writer to a lead examiner. Federal officials will evaluate the most capable AI models for national security risks.
The central question is whether the evaluators will get prerelease access to frontier models—and, if they do, whether their findings will be baked into the systems before they go public.
If the Center for AI Standards and Innovation and the Department of Defense are limited to the assessment of post-release models, any suggested fixes would come far too late in the process, enabling adversaries to exploit hidden flaws in the interim.
Who will protect Americans from AI misinformation? The Trump administration advocates removing all guidance on the subject of "misinformation" from the National Institute of Standards and Technology's AI Risk Management Framework, which is used to classify risks that emanate from AI.
Misinformation, spread deliberately or not, is a major destabilizing force in the US that can have dangerous repercussions during a pandemic or election. Most importantly, misinformation is a signature tactic of foreign adversaries such as Iran, Russia, and China.
Deleting the misinformation guidance removes critical safeguards, such as provenance logs, that US agencies use to keep AI-generated falsehoods in check. Without those guardrails, a government-procured model tampered with by foreign actors could blast out false evacuation orders in the middle of a storm, leaving agencies with scant visibility and even less leverage to contain the disinformation.
The next time AI-enabled misinformation spreads, what playbook will the government use to combat it?
Who will do the work to implement the AI Action Plan? The federal government’s ability to meet the plan’s goals hinges on one blunt question: Who is going to do it?
The Department of Government Efficiency just finished a sweeping and ill-fated gutting of many of the very agencies the Trump administration now wants to lean on for AI expertise. DOGE targeted the Cybersecurity and Infrastructure Security Agency, hollowed out the National Science Foundation, and cut entire AI teams, including the AI Corps in the Department of Homeland Security.
Foreign AI competitors are tripping over themselves to hire US talent, and private US tech companies such as Meta are offering $250 million pay packages to lure OpenAI experts. These experts are the most sought after in the world, and they will never join the government if they feel undervalued or easily discarded.
Trump’s AI action plan could lead the country to a better phase of AI development, one in which AI performance is more predictable and safer, and therefore a better tool for the US military, intelligence agencies, and civil servants. But Trump also released an executive order, Preventing Woke AI, which bars language models deemed to advance “ideological bias” and requires ill-defined “truth-seeking.”
By those metrics, xAI’s Grok—which recently had an unprompted outburst of antisemitic rhetoric and discredited claims of “white genocide” in South Africa—would appear to fall short, yet the company’s $200 million Department of Defense contract seems secure.
It remains to be seen whether the administration will see the AI action plan through with adequate funding and resourcing, prioritizing security and safety challenges, or squander its momentum chasing performative “anti-woke” AI bans.
The Trump administration wants to go down in history as the greatest enabler of US innovation. It will succeed only if it takes a more comprehensive approach to supporting the entire AI ecosystem.
That means staffing agencies with technologists and funding long-term research so the world’s best developers can build AI in a democracy that values security and safety—something Beijing and Moscow can’t match.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Rear Admiral (Ret.) Mark Montgomery is the director of the Center on Cyber and Technology Innovation at the Foundation for Defense of Democracies.
Leah Siskind is director of impact and an AI research fellow at the Foundation for Defense of Democracies.