AI leadership begins — and fails — with the foundation no one talks about
Artificial intelligence is now the centerpiece of American competitiveness. The administration’s AI Action Plan put the world on notice when it declared that whoever builds the most capable AI ecosystem will set global standards and reap the economic and military advantages. The newly announced Genesis Mission takes that ambition a step further, directing the federal government to create an integrated AI platform that will accelerate innovation and discovery across national security, health, transportation, critical infrastructure, and many other areas — quickly, securely, and at scale.
But there is a gap between policy vision and operational reality. For all the conversation about models, chips, and data centers, we continue to underestimate the most fundamental ingredient for AI success: data infrastructure.
In every mission I’ve supported throughout my career, from national security programs to large-scale civilian modernization, one truth has remained constant: Nothing functions without the ability to move, store, secure, and retrieve data at the speed and scale the mission requires.
This is more true today than ever before. AI does not fail because of weak ambition or insufficient computing power. It fails because the underlying data infrastructure cannot keep up.
The action plan itself acknowledges this, calling for a national effort to build “vast AI infrastructure” and highlighting that AI workloads rely on data centers built for extreme scale. Yet the foundation beneath compute — the storage layer — often receives the least attention. Agencies cannot train or deploy AI models without fast, sovereign access to the right data, organized and available across hybrid environments. Latency slows decisions. Fragmented storage increases risk. Proprietary systems limit interoperability. And when agencies lose control over their data, they lose control over their AI.
Genesis puts this challenge in sharp relief. It asks agencies not simply to adopt AI piecemeal, but to operationalize it across the full size and scope of the federal government. That is a far higher bar. A pilot model can run on a convenient cloud configuration. A mission-critical AI system supporting warfighter readiness, biomedical research, benefits adjudication, or border security cannot. The scale is too large, the security requirements too strict, and the stakes too high.
If the United States is serious about competing globally and reaping the benefits of the AI age, the federal government must treat data infrastructure as a strategic national capability, not a commodity IT layer. The countries that lead in AI will be those that can mobilize data securely, rapidly, and with full sovereignty across mission environments.
This begins with an honest assessment of legacy architectures. Too many public systems remain anchored to proprietary storage platforms that restrict portability, throttle throughput, and drive unpredictable long-term costs. These systems were built for yesterday’s demands, not for AI models that ingest and generate terabytes of data in real time. Agencies increasingly find that their GPUs sit idle, not because they lack computing power, but because storage can’t deliver data fast enough to feed them. That is not a technical inconvenience. It is a strategic vulnerability.
The private sector confronted this reality years ago. Organizations running data-intensive workloads — financial institutions, biotech companies, research labs — recognized that the economics of proprietary storage and closed ecosystems were incompatible with long-term AI strategy. They pivoted to open, high-performance object storage to keep data agile, reduce vendor lock-in, and scale predictably as workloads exploded. Federal agencies must now follow suit, not as a cost-saving exercise but as a mission imperative.
The AI Action Plan leans heavily into openness, emphasizing the role of open-source and open-weight systems in driving innovation. That same principle applies to data infrastructure. Open, portable storage architectures ensure agencies can adapt to new models, new vendors, and new mission requirements without rebuilding the data layer every few years. They also support stronger oversight and governance by keeping data under agency control — a core tenet of safe, accountable AI.
Genesis elevates the urgency further. Its directives assume the creation of an architecture capable of supporting real-time decision systems, cross-domain data access, secure multilevel operations, and resilient performance across contested environments. That is not aspirational; it is operational guidance. And it is impossible to meet those objectives with brittle storage environments that cannot scale horizontally, manage exabyte-level volumes, or sustain mission tempo during crises.
This is where modern platforms — purpose-built for high-performance AI data operations — transform what is possible. But the technology alone is not the story. For the first time, federal agencies have clear direction, strong policy momentum, and rapidly maturing tools to build AI systems worthy of America’s aspirations. What remains is execution — starting with the data foundations that make AI a reality.
I have seen the federal government move mountains when the mission demands it. The Genesis Mission is a call to do exactly that again. If we want AI that strengthens national security, improves public services, and advances scientific discovery, we must invest in and build infrastructure equal to that responsibility.
AI is not a software investment. It is an infrastructure strategy. And the agencies that modernize their data foundations now will be the ones that set the pace of American innovation in the decades ahead.
Cameron Chehreh is president and general manager of government operations at AI software company MinIO.