Agencies face big risks in 2026 with AI browsers
As the federal government continues to prioritize AI adoption, cybersecurity experts are grappling with new security threats posed by agentic AI and large language models.
In November, Anthropic reported that a Chinese state-backed group abused its AI coding tool to launch automated cyberattacks against roughly 30 global organizations and government agencies. This attack follows research highlighting how bad actors can easily jailbreak LLMs or exploit AI browsers for unauthorized operations.
With the Defense Department partnering with AI giants to accelerate the Pentagon’s adoption of the technology, these risks must be addressed quickly. Congress also recognizes this and is responding, with the recent 2026 NDAA directing the DOD “to include content related to the unique cybersecurity challenges posed by the use of artificial intelligence.”
But legislation alone isn’t enough. Agencies must understand the evolving AI landscape and adopt new security approaches before these risks threaten operations.
The new big risk: AI browsers
AI browsers have seen a surge in popularity in 2025. New platforms like Perplexity’s Comet and OpenAI’s Atlas have launched, and mainstream browsers like Chrome, Safari, and Edge are now offering AI features. These browsers have significant productivity potential, but also create an entirely new attack surface.
Unlike traditional browsers, AI browsers use agents to gather information and perform tasks for users. Since these agents are built on the same technology as popular chatbots, they also inherit the same vulnerabilities, including hallucinations, misaligned behavior and data leakage.
Multiple independent reviews have already flagged systemic issues in these browsers: agents executing malicious instructions (indirect prompt injection), falling for scams, and bypassing safeguards for sensitive sessions such as banking and email. Even non-browser-agent features like Windows Recall can prompt privacy apps to respond defensively, showing that traditional, always-on, endpoint-resident AI needs new governance.
The heart of these issues lies in areas of risk that traditional security cannot address, requiring new approaches that focus on previously neglected security dimensions.
Intent and identities are AI’s biggest threats
Agentic and GenAI’s most critical vulnerabilities stem from the misuse of intent or identities. AI browsers, for example, blur the line between human and agent intent, putting SSO sessions at risk.
These agents operate autonomously, assuming distinct identities and making decisions on behalf of users. Since AI can access legitimate tools, manipulate workflows, and execute actions, even when malicious intent isn’t apparent from the data itself, traditional defenses that protect data at rest or in transit are ineffective.
To address this issue, agencies must prioritize identity security — verifying the authenticity of agent personas — and intent security, ensuring AI tools align with organizational missions and policies.
By 2027, intent security will become the core discipline of AI risk management, replacing traditional data-centric security as the primary line of defense. Agencies will need AI-aware control frameworks, intent auditing, anomaly detection and incident response playbooks that focus on what the AI intends to do, rather than what data it accesses.
We’re already seeing early movement to address these two issues, including federal initiatives on virtual identities, but intent is still largely overlooked. Failing to monitor AI identities and align AI intent will result in operational and strategic risks at machine speed, far beyond what conventional cybersecurity can mitigate.
It’s clear that AI security in 2026 will need new solutions and approaches.
Purple-teaming is no longer negotiable
Addressing these new security challenges means being able to validate and monitor agent behavior to ensure every action aligns with the defined identity and intent, regardless of agent type or environment.
Some defense agencies are attempting to address this challenge through red-teaming to “assess AI-enabled battlefield systems.” However, traditional red-and-blue teaming methods are too restrictive for the current landscape. Manual assessments are slow, and purely defensive monitoring leaves blind spots.
By merging the two into a purple-teaming approach and automating the combined exercise, agencies create a continuous feedback loop where each simulated attack immediately informs and strengthens active defenses. Only this autonomous, agent-driven approach can keep up as agencies deploy AI agents at scale.
With intent security and automated purple-teaming, agencies are well positioned to meet NDAA requirements and address the unique cybersecurity challenges posed by AI.
Legislation and looking ahead to 2026
Other legislation, such as President Donald Trump’s executive order limiting states’ ability to regulate AI, also offers guidance for how state and federal agencies should approach the adoption of AI. Even more direction is expected in 2026, likely addressing AI browsers, agent intent and ensuring identity visibility.
Crafting AI guidance and legislation is challenging, as compliance frameworks must contend with not being able to see into the technology they’re regulating. Additionally, there is often inconsistent adoption, as many risk-management frameworks don’t fully apply to AI, and non-traditional KPIs make it hard to measure whether compliance is being met at all.
Despite these challenges, agencies can position themselves for success by prioritizing emerging trends like AI browsers and intent-based security. Centering these elements within AI strategies and integrating innovative approaches, such as autonomous purple-teaming, empowers agencies to develop a forward-thinking mindset and effectively accomplish their mission in 2026.
Elad Schulman is the CEO and co-founder of Lasso Security.