The Pentagon says it has reached an agreement with seven technology companies to bring artificial intelligence (AI) into classified computer systems, expanding AI use in military operations as Washington accelerates deployment of new technology amid intensifying strategic competition.
According to the Department of Defense, the partners—Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX—will provide resources and technology to enhance soldiers’ decision-making in complex combat environments.
The Defense Department said deploying AI in classified systems allows the military to use data analytics, surveillance, and forecasting tools at scale, shortening information-processing times and improving battlefield effectiveness.
A Brennan Center for Justice report from March suggested AI could help identify and engage targets faster, while also supporting maintenance management of weapons and logistics chains.
The Department of Defense said AI capabilities are currently deployed through the GenAI.mil platform, adding that “soldiers, civilian personnel, and contractors are putting these technologies into practical use, shortening processes from months to days.”
Some companies, including Amazon and Microsoft, already have long-standing relationships with the U.S. military in classified environments, though it is unclear whether the new deals will significantly change those relationships.
Nvidia and the startup Reflection are described as newcomers in this space; both develop open-source AI models. U.S. officials reportedly see open-source development as a priority for building an “American alternative” amid rapid AI progress in China, where many models are released openly for others to build on.
AI in military use raises concerns including potential privacy infringement for American citizens and the risk that machines could select targets autonomously on the battlefield.
According to a well-informed source, at least one Pentagon contract requires human oversight for tasks where the AI system operates autonomously or semi-autonomously. The Defense Department also said AI tools must be used in accordance with constitutional rights and civil liberties.
Debates have intensified since the Gaza and Lebanon conflicts, during which U.S. tech companies were accused of assisting Israel with targeting. Rising civilian casualties have heightened concerns that AI could contribute to unintended harm.
Helen Toner, acting director of the Center for Security and Emerging Technology at Georgetown University, said modern warfare increasingly depends on command hubs where humans must process large volumes of information quickly.
“AI can help summarize information or analyze surveillance data to identify potential targets,” she said. “But questions about the degree of human involvement, risk, and training remain unresolved.”
She also warned about “automation bias,” where people may place excessive trust in machines, potentially leading to wrong decisions if AI does not perform as expected.
In practice, AI is already used for tasks such as predicting maintenance timing for helicopters, optimizing troop and equipment transport, and analyzing drone data to distinguish civilian and military targets.
A notable aspect of the agreements is the absence of Anthropic. The company has clashed with the Trump administration over the use of AI in military applications, and has insisted on assurances that its technology will not be used for fully autonomous weapons or for surveillance of American citizens.
Defense Secretary Pete Hegseth said partners must accept any use cases the DoD deems lawful. Tensions rose as the Trump administration sought to block federal agencies from using Anthropic’s Claude, while OpenAI moved quickly by announcing a Pentagon agreement in March 2026 to bring ChatGPT into classified environments. OpenAI said it would ensure human supervision and civil-liberties compliance.
Emil Michael, the DoD’s Chief Technology Officer, said working with multiple firms is a strategic choice.
“It would be irresponsible to rely on a single supplier,” he said, while acknowledging disagreements with Anthropic. “When one partner does not wish to cooperate in the way we want, we ensure there are other providers.”
The Pentagon’s broad collaboration with major technology firms underscores how AI is becoming a cornerstone of U.S. defense strategy, while the challenge of balancing military effectiveness with risk control and ethical standards remains central to implementation.
