New report highlights agency advantages of using smaller, open-source AI models

Why transparent, agency-trained AI models can deliver greater reliability and control over sensitive federal data than "bigger is better" large language models.

As federal agencies navigate the complex landscape of artificial intelligence, a new special report from Scoop News Group argues that the path to a smarter, more secure AI future lies not in large, proprietary systems but in smaller, transparent, open-source models.

The report, titled “Why open-source AI models offer a smarter future for agencies,” sponsored by Red Hat, addresses growing concerns over implementing AI within government agencies. While White House initiatives are accelerating AI adoption, many federal CIOs, CISOs, and program leaders are struggling with the enormous costs, deep-seated security risks, and lack of transparency inherent in many mainstream AI solutions.

Download the report.

According to the analysis, agencies often underestimate the total cost of AI implementation by focusing on the models themselves while overlooking significant expenses such as data preparation (20-30% of project costs), regulatory compliance (costs that can run up to 50% higher in regulated fields), and infrastructure upgrades (an additional 15-25%).

This financial strain is compounded by what the report refers to as the “black box dilemma.” Large language models (LLMs), often trained on the entire internet, operate as opaque systems. Agencies using them lack insight into the training data, algorithmic weighting, and potential for built-in biases.

“If someone shows up on your doorstep with an AI black box, and you don’t know what data went into creating that or how it was valued… then you can run into some really interesting problems,” warns Adam Clater, Chief Architect in Red Hat’s CTO organization, in the report. For agencies handling everything from classified intelligence to sensitive healthcare records, this lack of control is not a viable option.

The report advocates for a strategic pivot from “black box” to “glass box” AI, championing the open-source alternative. This approach enables federal agencies to inspect, understand, and trust the AI tools they deploy. The analysis directly aligns with recent White House directives that mandate agencies pursue transparent, adaptable, and open-weight AI models to protect privacy, avoid vendor lock-in, and ensure cost-effectiveness.

Smaller models can yield better results

Challenging the prevailing “bigger is better” myth, the report argues that smaller, specialized AI models trained on specific agency data are more efficient and effective. Clater offers a powerful analogy: memorizing a dictionary is more practical for specific tasks than memorizing an entire encyclopedia. “Tests have shown that smaller models can give you a high percentile completeness level without memorizing the entirety of the encyclopedia,” he states.

This specialized approach is already proving effective in highly regulated environments. The report explains how federal agencies can use fine-tuned AI models trained on internal case files and regulatory texts to identify sophisticated fraud more accurately than their larger, more general counterparts.

The report highlights another advantage of smaller, purpose-built models: the ability to bring AI to the data, rather than sending sensitive data to a third-party model. This ensures data sovereignty and enables powerful new applications at the tactical edge—from providing real-time intelligence to wildfire incident commanders to accelerating decision-making for military leaders on the battlefield. “Being able to take our AI directly to where the mission is happening in real time is going to bring tremendous value,” says Clater.

The report points to recent initiatives by Red Hat to help agencies leverage smaller, open-source AI technologies, including advancements in Red Hat OpenShift AI and its recent acquisition of Neural Magic, which enables agencies to run optimized AI models on existing hardware. These tools, combined with developer community projects like InstructLab, aim to empower agency domain experts—not just data scientists—to contribute to and refine AI models.

The report concludes that the future of AI will increasingly lean toward open-source solutions because they are more accessible, powerful, and secure.


This article was produced by Scoop News Group for FedScoop and sponsored by Red Hat.
