Gaining AI advantage: The need for trusted autonomy, transparency and control

The Department of Defense is racing to deploy artificial intelligence from central command to the tactical edge to ensure decision dominance in future conflicts. However, military leaders face a fundamental obstacle that threatens to undermine their progress: Deploying autonomous AI agents without a deeper foundation of trust and operational control poses significant risks of fragmentation, flawed outcomes, and mission failure, say AI experts and former military intelligence officials in a new report.
The stakes for managing AI effectively in the military are increasing as global adversaries accelerate their use of commercial AI, and leaders face the emerging threat of what one AI expert in the report called “algorithmic warfare.” Because a growing share of the commercial and customized AI acquired by the U.S. military operates inside so-called “black boxes,” experts warn that distrust of AI output will hinder the Pentagon’s progress, especially when commanders cannot verify the data behind a recommendation.

The report suggests that without a shift toward transparent, configurable, and explainable AI, the DoD risks mission failure and ceding the advantage to its rivals, even if it continues to invest billions in modernization.
The new report, titled “The AI control advantage: Trusted autonomy, on your terms,” produced by Scoop News Group on behalf of Seekr, argues that to achieve true decision dominance, defense leaders must move beyond acquiring fragmented, siloed AI tools. It lays out the case for taking a broader platform-based approach that provides a command-and-control layer for AI itself, ensuring that autonomous agents operate with explainable logic and in alignment with commander’s intent, from the enterprise cloud to the tactical edge.
The report, based on insights from former senior military and intelligence officials, highlights three major factors shaping the military’s approach to AI:
Confronting insight gaps and trust deficits
The DoD’s aging systems and dashboards generally fail to provide the insights needed to make quick decisions on the ground. This “insight gap” is exacerbated by a dangerous “trust deficit” in AI output, says Lisa Costa, former U.S. Space Force Chief Technology and Innovation Officer and now a senior advisor to Seekr, in the report. Many AI applications function as black boxes, obscuring how they arrive at a recommendation. That opacity makes it nearly impossible for commanders to verify the logic or trust the source of AI-generated recommendations, posing potentially fatal risks in high-stakes operational environments where humans have only seconds to make critical decisions.
This forces an untenable choice between speed and safety, says Costa. “Our adversaries are moving forward with commercial AI. Waiting isn’t an option. However, trust is not optional, even if commercial AI is used. How can a commander execute a mission based on an AI recommendation if they cannot verify its reasoning or trust its source?”
True autonomy requires orchestration from the enterprise to the edge
Additionally, the report says, effective military AI cannot be confined to a central cloud. It must be deployable as autonomous agents to the warfighter, operating in disconnected and denied environments. This requires an infrastructure that can create and manage these agents, pushing them from powerful, centralized resources out to a small form-factor device on the front lines, explains Derek Britton, SVP of Government at Seekr and a former U.S. Air Force intelligence officer.
“It’s all about creating the agentic processes at the various levels, using enterprise cloud capabilities… to develop human-centric AI agents, but then having the ability to push them out from the enterprise cloud to the tactical cloud node, then all the way out to the edge on a PC or a small form-factor device,” he says.
Fragmented solutions cannot keep pace with ‘algorithmic warfare’
The future of conflict will continue to evolve as adversaries directly target U.S. capabilities dynamically and at machine speeds, and vice versa, creating a mounting contest between algorithms. A defense strategy built on disparate point solutions, each with its own vulnerabilities and no common framework for updates, is dangerously fragile, warns John Chao, Seekr’s Director of Federal Products and a former U.S. Marine Corps Special Operations Command Intelligence Operator.
He argues that defense leaders need to look beyond isolated AI tools and consider adopting a unified platform approach capable of developing, deploying and orchestrating trustworthy AI agents that can be updated rapidly across the enterprise and out to the tactical edge to maintain a competitive advantage.
Key takeaways for defense leaders
The report maintains that to gain the AI advantage, the imperative is to act now. “Mission owners can start by solving discrete but critical and urgent problems using pre-built, out-of-the-box commercial AI solutions that are transparent and configurable for their needs, without compromising safety and trust,” says Britton.
The report highlights four “non-negotiable principles” for embracing this platform approach, including data and algorithmic transparency, radical explainability, correctability and continuous improvement, and training agility. It also emphasizes the need for speed, pointing to the success Seekr has achieved with its AI-Ready Data Engine, which automates data preparation, making it 2.5 times faster and 90% less expensive than traditional data preparation methods.
Listen to a “deep dive” podcast discussion highlighting the findings and recommendations of the report, created by Scoop News Group using NotebookLM.
This article and the full report were produced by Scoop News Group for DefenseScoop and sponsored by Seekr.