The Wild West of AI? Why data governance matters more than ever
While the federal government has opted to restrict state mandates and regulations governing the use of AI tools, the importance of information governance — and the responsibility borne by those who steward public data — remains unchanged.
A recent executive order has curtailed the ability of individual states to regulate AI tools, creating an environment ripe for faster innovation and broader deployment. But make no mistake: deregulating the tools does not deregulate the data.
Public sector leaders are now navigating a regulatory paradox: a strong federal push for AI adoption and reduced friction to innovation, paired with limited guidance on safety, accountability, and long-term risk. In this environment, responsibility does not disappear — it shifts. The burden of risk increasingly falls on information owners responsible for how data is collected, governed, retained, and ultimately made available to automated systems.
That shift carries real consequences. Data privacy laws such as HIPAA and the Fair Credit Reporting Act, along with existing protections for personally identifiable information, remain fully enforceable. If an AI system ingests citizen data and mishandles it, the absence of AI-specific regulation will not shield agencies from the consequences of a breach or privacy violation.
The gatekeeper’s mandates
The regulatory gap created by the recent executive order does not eliminate accountability — it relocates it. As AI tools move faster into production environments, the quality, governance, and stewardship of the data feeding those systems becomes the primary line of defense against legal, ethical, and operational risk.
The 2025 AI Action Plan identifies high-quality data as a national strategic asset. That designation elevates records and data professionals from compliance stewards to central actors in responsible AI adoption. Decisions about what data is collected, how it is classified, how long it is retained, and who can access it now directly shape whether AI systems are explainable, defensible, and trustworthy — or opaque and legally vulnerable.
In the absence of a comprehensive federal AI framework, information governance has become the control layer agencies can act on today. Rather than waiting for new mandates, agencies looking to deploy AI responsibly should focus on four governance priorities that are well established in principle, but newly consequential in an AI-enabled environment.
1. Enforce data minimization
AI systems are designed to consume large volumes of data, but effective governance requires restraint. Only data strictly necessary for a defined, mission-specific purpose should be collected or ingested. When a model does not require personally identifiable information to function, that data should be excluded by design.
Minimization reduces attack surfaces, limits the blast radius of potential breaches, and simplifies compliance obligations. It also improves analytical performance by reducing noise and reinforcing relevance — an often overlooked benefit in public-sector AI deployments. Feeding models more data than necessary does not make them smarter; it makes them riskier.
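As a minimal sketch of what "excluded by design" can look like in practice, the snippet below filters each record down to an explicit allowlist of fields before it ever reaches an AI pipeline. The field names and the ALLOWED_FIELDS allowlist are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch: only fields on an explicit allowlist reach the AI pipeline.
# Field names are illustrative, not a real agency schema.
ALLOWED_FIELDS = {"case_id", "service_category", "request_date", "zip_code"}

def minimize_record(record: dict) -> dict:
    """Return a copy of the record containing only mission-necessary fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "case_id": "C-1042",
    "service_category": "benefits",
    "request_date": "2025-06-01",
    "zip_code": "20002",
    "ssn": "123-45-6789",         # PII the model does not need
    "full_name": "Jane Citizen",  # excluded by design
}

print(minimize_record(raw))
# {'case_id': 'C-1042', 'service_category': 'benefits',
#  'request_date': '2025-06-01', 'zip_code': '20002'}
```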
2. Implement “need-to-keep” retention policies
Data retention can no longer be treated as a passive archival function. In the age of AI, it must be active, intentional, and defensible. Clear retention periods should be established not only for records, but for AI training data, prompts, outputs, and user interactions.
When data no longer serves a verified legal, operational, or mission purpose, it should be defensibly destroyed. Retaining information “just in case” increases long-term liability without delivering proportional value. Strong retention discipline supports auditability, enables explainability, and reduces the risk that outdated or inappropriate data continues to influence automated decisions.
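The same discipline can be made mechanical. The sketch below assumes a hypothetical retention schedule keyed by record class; items that have outlived their retention period are flagged as eligible for defensible destruction, and unknown classes are escalated for review rather than auto-destroyed. Real retention periods would come from an agency's approved records disposition authority, not from code.

```python
from datetime import date, timedelta

# Hypothetical retention schedule, in days, per record class (illustrative only).
RETENTION_DAYS = {
    "ai_training_data": 365,
    "model_prompts": 90,
    "model_outputs": 180,
}

def past_retention(record_class: str, created: date, today: date | None = None) -> bool:
    """True if the item has outlived its retention period and is eligible for destruction."""
    today = today or date.today()
    limit = RETENTION_DAYS.get(record_class)
    if limit is None:
        return False  # unknown class: never auto-destroy, escalate for human review
    return today - created > timedelta(days=limit)

print(past_retention("model_prompts", date(2025, 1, 2), today=date(2025, 6, 1)))  # True
```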
3. Demand privacy-preserving techniques
Before approving AI tools, agencies should rigorously evaluate the privacy architecture behind them. Techniques such as anonymization (removing PII while preserving analytical utility) and differential privacy (introducing statistical noise to prevent re-identification) are no longer optional safeguards.
These approaches are especially critical as agencies explore secondary uses of data beyond its original collection context. The goal is to generate accurate, mission-relevant insights without exposing or compromising the individuals represented in the data. Privacy protection is not a barrier to AI value; it is a prerequisite for legitimacy and public trust.
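For readers unfamiliar with the mechanics, the sketch below shows the standard Laplace mechanism behind differential privacy applied to a simple count query: noise scaled to the query's sensitivity and a chosen privacy budget (epsilon) is added so that no single individual's presence can be confidently inferred from the published figure. The dataset and epsilon values are illustrative.

```python
import random

def dp_count(values: list[bool], epsilon: float = 1.0) -> float:
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one person changes the
    result by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(values)
    # Difference of two exponential draws with rate epsilon ~ Laplace(0, 1/epsilon)
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Illustrative query: how many applicants in a dataset reported a given condition?
applicants = [True, False, True, True, False, False, True]
print(round(dp_count(applicants, epsilon=0.5), 2))
```

Smaller epsilon values add more noise and stronger protection; larger values trade privacy for precision, which is exactly the budget decision governance teams should own.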
4. Mandate human-in-the-loop oversight
Algorithms are powerful, but they lack judgment, context, and accountability. Strong information governance extends beyond securing data to validating how AI-driven outputs are used. High-stakes decisions — particularly those affecting citizen services, benefits, enforcement actions, or legal standing — should never rely solely on automated systems.
AI functions best as a decision-support capability, not a decision-maker. Maintaining human oversight ensures accountability, enables contextual review, catches potential hallucinations, and mitigates the risk of embedded bias. In environments where trust is paramount, removing human judgment from the loop is not efficiency — it is exposure.
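One common way to operationalize decision support rather than decision-making is a routing rule: the model only ever recommends, and any high-stakes or low-confidence output is queued for a human reviewer. The categories and threshold below are illustrative assumptions, not a prescribed policy.

```python
# Hypothetical high-stakes categories that must always receive human review.
HIGH_STAKES = {"benefits_denial", "enforcement_action", "eligibility_determination"}

def route_decision(category: str, model_recommendation: str, confidence: float) -> dict:
    """Treat the model as decision support: high-stakes or low-confidence cases go to a human."""
    needs_review = category in HIGH_STAKES or confidence < 0.90
    return {
        "recommendation": model_recommendation,
        "status": "pending_human_review" if needs_review else "auto_processed",
        "reason": "high-stakes category or low confidence" if needs_review else "routine, high confidence",
    }

print(route_decision("benefits_denial", "deny", confidence=0.97))
# Always routed to a human reviewer, regardless of how confident the model is.
```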
The bottom line
The legal landscape may be shifting, but the ethical imperative remains constant. Agencies that prioritize strong information governance do more than reduce compliance risk. They create the conditions under which AI can be deployed responsibly, scaled sustainably, and trusted by the public it is meant to serve.
Melissa Carson is vice president and general manager at Iron Mountain Government Solutions.