State Department names new chief data officer
The State Department hired Matthew Graviss as its first full-time chief data officer, the agency announced Monday.
As CDO, Graviss leads the Office of Management Strategy and Solutions’ Center for Analytics, which serves the department’s enterprise data needs. He was hired into the role last month.
The CDO role was previously held in an acting capacity by the office’s deputy director, Janice deGarmo, but now Graviss is spearheading department efforts to use data as a strategic asset. Graviss will report to deGarmo, who was recently promoted to director of the Office of Management Strategy and Solutions.
The Center for Analytics’ data products inform foreign policy and management decisions essential to U.S. diplomacy.
This is a step up for Graviss, who was previously CDO for U.S. Citizenship and Immigration Services within the Department of Homeland Security.
Biden transition team names White House tech officials
President-elect Joe Biden’s transition team announced two technology officials to serve in the incoming administration, both of whom served in the Obama White House.
David Recordon will be the director of technology in the White House’s Office of Management and Administration, and Austin Lin will serve as his deputy. Both came to the Biden transition team late last year from roles at the Chan Zuckerberg Initiative, and both previously worked at Facebook.
The Office of Management and Administration is typically an internal, behind-the-scenes White House office that oversees operations, and the tech functions within it tend to serve the needs of the Executive Office of the President. But it appears the incoming Biden administration may expand Recordon and Lin’s roles to be more governmentwide than in previous administrations.
“The technology leaders will play an important role in restoring faith across the federal government by encouraging collaboration to further secure American cyber interests,” says a release from the Biden-Harris transition.
During the Obama administration, Recordon worked with the U.S. Digital Service before serving as the first director of White House information technology, where he worked on IT modernization and cybersecurity issues, according to the release. He has served as the deputy chief technology officer for the Biden-Harris transition team.
Celebrating his new role on LinkedIn, Recordon wrote: “The pandemic and ongoing cyber security attacks present new challenges for the entire Executive Office of the President, but ones I know that these teams can conquer in a safe and secure manner together.”
Lin was deputy director of information technology and associate director for operations in the Obama White House.
“In addition to working with organizations and communities, these accomplished public servants sit at the forefront of collaboration across the administration,” Biden said in a statement. “They will lead initiatives ranging from developing policies and processes, to ensuring our cybersecurity needs are met with a whole of government response.”
How IT modernization has helped HUD better manage its finances
IT modernization proved critical to the Department of Housing and Urban Development issuing a clean, audited financial statement for the first time in eight years in December.
HUD began working with the General Services Administration’s IT Centers of Excellence (CoE) in the summer of 2018 on its data analytics, contact centers, cloud adoption and customer experience. And after recently completing Phase 1 of work with the CoEs, the department was able to audit 14 areas it couldn’t previously account for in its statement.
“The financial infrastructure and reporting in the IT systems at HUD were fairly weak,” Chief Financial Officer Irv Dennis told FedScoop. “Probably the weakest of all of the Cabinet-level agencies.”
Dennis joined HUD from the private sector in January 2018 and aggressively lobbied for the department to become the CoEs’ second agency partner, knowing that better IT could make a huge difference in the department’s accounting. HUD’s financials were in disarray, audits were a mess and IT systems were antiquated with “lots of paper processes” still around, he said.
Among the areas HUD couldn’t audit were a loan portfolio Ginnie Mae had taken over servicing, fixed assets, multiple liabilities, and some Community Policing Development grant approvals. HUD was also not in compliance with several financial reporting requirements in the DATA Act, GONE Act, and Improper Payments Elimination and Recovery Improvement Act (IPERIA).
Now HUD is compliant with all such regulations, having analyzed $1 trillion in spending data spanning the past 22 years for display in an internal dashboard launched three months ago. Data can be displayed at the state, congressional district or city level, or by grant program or grantee.
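For illustration only, the snippet below shows the kind of roll-up such a dashboard depends on, grouping hypothetical spending records by state and grant program with pandas; the column names and figures are invented and do not reflect HUD's actual data.

```python
import pandas as pd

# Hypothetical spending records; HUD's real schema and data differ.
spend = pd.DataFrame({
    "state": ["TX", "TX", "CA", "CA", "NY"],
    "congressional_district": ["TX-07", "TX-29", "CA-12", "CA-12", "NY-14"],
    "grant_program": ["CDBG", "HOME", "CDBG", "CDBG", "HOME"],
    "grantee": ["Houston", "Harris County", "San Francisco", "San Francisco", "New York City"],
    "obligated_dollars": [1_200_000, 800_000, 2_500_000, 400_000, 3_100_000],
})

# Roll the data up at whichever level the dashboard needs to display.
by_state = spend.groupby("state")["obligated_dollars"].sum()
by_state_and_program = spend.groupby(["state", "grant_program"])["obligated_dollars"].sum()

print(by_state)
print(by_state_and_program)
```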
Dennis wants to eventually make the dashboard publicly accessible, but first a process must be established to ensure it’s updated monthly or weekly. Whether the Biden administration continues HUD’s work with the CoEs, however, remains to be seen.
“We’re now starting Phase 2,” Dennis said. “And I hope that’s something that the next administration continues.”
Much of HUD’s data remains stored in hard-to-access ways, so Dennis’ team has begun the process of standing up a centralized data warehouse. Dennis hopes to set up a chief data officer shop either separately or within another office like that of the chief information officer.
A full-time CDO could lead an agencywide effort to use data and present it in dashboards, Dennis said.
Another major aspect of HUD’s CoE work has been streamlining its six different call centers with hundreds of phone lines. The Federal Housing Administration already has a “strong” call center worth replicating across agencies, Dennis said.
What’s more, the call centers capture data that HUD could use to predict where services are needed.
Unstructured data from outside HUD, such as on social media, could also be used to improve department services, but first resources must be provided to a customer experience officer, Dennis said.
“I’m a big fan of Centers of Excellence shared service centers,” Dennis said. “I would encourage each agency to have a very open mind, understand what their capabilities are and then reach out to them; go through your Phase 1.”
The centers helped HUD perform a gap analysis and identify acquisition needs, but putting together a team to work with them took time, he added.
Improved IT systems have benefitted HUD in other ways as well, such as helping the department quickly award the “lion’s share” of $12.5 billion in CARES Act funds, Dennis said.
While working with the CoEs, HUD has also launched a robotic process automation initiative.
The department started by using automation to bring one small, manual process down from six-and-a-half months to three-and-a-half weeks. Since then, the CFO’s Office has identified more than 60 other processes, representing a cumulative 70,000 hours of work, to automate.
Grant accrual is a big one, with emails sent to hundreds of public housing authorities (PHAs) for accounting purposes. Now those emails are automated, as is the process of moving the numbers they contain into a consolidating Excel spreadsheet.
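As a rough sketch of that kind of consolidation, assuming the PHA responses have already been exported to CSV files, the following Python snippet merges them into a single Excel workbook; the file paths and column names are hypothetical, not HUD's actual workflow.

```python
import glob
import pandas as pd

# Hypothetical: each PHA's emailed accrual figures have been exported to a CSV
# such as responses/pha_0001.csv with columns "pha_id" and "accrual_amount".
frames = [pd.read_csv(path) for path in glob.glob("responses/pha_*.csv")]
responses = pd.concat(frames, ignore_index=True)

# Consolidate into one summary row per PHA and write a single workbook.
summary = responses.groupby("pha_id", as_index=False)["accrual_amount"].sum()
summary.to_excel("grant_accrual_consolidated.xlsx", index=False)
```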
HUD is exploring other automation techniques like intelligent data extraction to review PDF files for high-risk areas and agencies.
All 24 CFO Act agencies, not just HUD, have struggled to meet federal requirements for implementing financial management systems, according to an August report from the Government Accountability Office. The CFO Act charged CFOs with approving and managing the upgrade of such systems, but they still lack a standardized set of responsibilities.
HUD was no exception.
“Programs would make business process changes, IT changes and accounting policy changes without any interaction, coordination or oversight from the CFO,” Dennis said. “All the programs were operating their own little businesses in silos.”
Army Reserve gets its first cyber general
The U.S. Army Reserve recently introduced its first general to oversee its cybersecurity.
Newly promoted Brig. Gen. Robert Powell will serve as a deputy commanding general of cyber for the 335th Signal Command, specializing in overseeing the unit’s cyber activities, according to a news release from the Army. Powell has a long history in the Army Signal Corps and cyber-related units, most recently commanding the U.S. Army Reserve Cyber Protection Brigade from 2016 to 2019.
The military has seen its reserve components’ cyber capabilities as one option to enhance cyber-readiness, hoping to lean on members of the military who have left full-time service but can still offer their technical expertise as reservists. The Marine Corps also recently announced plans for more reserve units focused on network security.
“[Powell] is the first United States Army Reserve General Officer to come from the cyber branch,” said Maj. Gen. Stephen Hager, who led Powell’s promotion ceremony in December. “That is significant since it demonstrates to our younger troops that there is a path to general officership.”
Many of the military’s highest-ranking generals come from combat roles — less often from IT or cyber. In a recent opinion piece, the former chief learning officer of the Navy criticized the over-representation of combat officers in the upper echelons of the military, saying it posed a threat to the effectiveness and cybersecurity of the armed forces.
In a speech during his promotion ceremony, Powell stressed the importance for general officers to have cyber experience as the Army puts greater emphasis on cybersecurity operations and information warfare.
“It was very evident in my time at Fort Meade that information warfare is growing in complexity, and we must continue to move in a direction to address these challenges,” Powell said during the ceremony.
Being a general in the Army Reserve is a major achievement in its own right, with only 130 currently serving at that level.
“The jump from colonel to flag officer is a very competitive endeavor,” Hager said.
Data analytics and security platforms play key role in cloud migration
Even though cloud migration has been a mantra for federal agencies over the past decade, the pandemic made it starkly clear that all agencies have not modernized at the same pace.
Third-party research compiled by Splunk indicates that cultural barriers among public sector leaders may be keeping government from taking full advantage of cloud benefits. However, the research also shows that agencies are looking more to FedRAMP-authorized cloud services to help them speed up digital services.
“Often we hear that leaders discuss modernization efforts, but [because of] a combination of legacy systems and poor insight into what they have, they are finding it difficult to actually start the modernization process,” said Ashok Sankar, director of marketing and strategy for public sector and education at Splunk, who commented on the research in a recent interview with FedScoop.
One study cited by Splunk found that among 156 public sector IT decision-makers, only 13 percent are confident in their ability to modernize current systems and applications, including cloud and hybrid migrations.
According to another study of over 600 IT and security practitioners, organizations face a number of key challenges migrating to the cloud, such as:
- The inability to achieve a strong security posture (65%)
- Complexity in migrating from on-prem to cloud (61%)
- A lack of visibility into resource utilization, metering and monitoring (60%)
“That is why data analytics and security analytics are so important,” he says. “If you harness your data, you’ll be able to gather real-time insights to solve challenges like minimizing security risks and migrating to the cloud while at the same time improving citizen experience and mission assurance.”
That’s why FedRAMP-authorized cloud services are so beneficial to agencies that need to securely accelerate their cloud migration and modernization efforts, according to Sankar.
“FedRAMP is the gold standard. It is an extensive evaluation process to ensure the highest level of cloud security to ensure a service is meeting legally mandated federal security measures. It saves the government 30-to-40% of their authorization costs,” he explains.
The value of FedRAMP in expediting modernization initiatives hasn’t been lost on agency leaders. The research gathered by Splunk cites a U.S. Government Accountability Office study which found that:
- From June 2017 to July 2019, the number of instances of agencies using FedRAMP authorizations increased from 390 to 926.
- 21 of 24 agencies reported that FedRAMP made their data in cloud environments more secure or about the same.
- 21 out of 24 agencies reported that FedRAMP authorization reduced costs in reviewing CSP assessment and authorization packages.
What leaders may not know is that Splunk’s cloud service is the first data analytics and security analytics service that is FedRAMP authorized, Sankar says.
“The data analytics capabilities that Splunk brings helps collect and correlate data from any source regardless of format and timescale, for full visibility and rapid insights to manage migration complexity, including real-time views into performance and availability.”
Sankar recommends that leaders look to data to drive any modernization or IT strategies they have.
“What leaders need to know is that the tools exist and are attainable that can give them the visibility they need to migrate systems properly while ensuring an acceptable risk posture and be confident that they can save time and money for their agency,” he says.
Learn more about using Splunk Cloud to drive confident decisions and actions for cloud migration initiatives.
This article was produced by FedScoop for, and sponsored by, Splunk.
2020 in review: AI and quantum see more White House attention
The Trump administration prioritized doubling federal spending on artificial intelligence and quantum information science (QIS) research and development in 2020, while also issuing a series of policies aimed at outcompeting China.
Trump’s fiscal 2021 budget proposal floated increasing AI R&D funding from $973 million to $2 billion by 2022 and quantum spending to $860 million within two years, even while reducing total federal R&D spending by $13.8 billion.
The White House Office of Science and Technology Policy called AI and quantum “industries of the future” and even joined the Global Partnership on AI with other G-7 countries — despite President Trump eschewing most international agreements — in opposition to China.
“AI is being twisted by authoritarian regimes to violate rights,” Michael Kratsios, U.S. chief technology officer, wrote in a Wall Street Journal op-ed in May. “The Chinese Communist Party is reportedly using AI to uncover and punish those who criticize the regime’s pandemic response and to institute a type of coronavirus social-credit score — assigning people color codes to determine who is free to go out and who will be forced into quarantine.”
The White House established seven AI research institutes under the National Science Foundation and five QIS research centers under the Department of Energy in August, with the QIS centers drawing on the $1.2 billion authorized in the National Quantum Initiative Act of 2018.
NSF ultimately announced eight AI Institutes the following month, with private industry providing $160 million toward their creation.
“Each one of these is a significant activity in its own right to catalyze education, to create a future AI-ready workforce and also to be nexus points for growing new partnerships,” said James Donlon, program director at NSF, in an interview.
NSF requested about $875 million for AI R&D in 2021, but a spokesperson said it hasn’t yet calculated actual funding totals because the omnibus spending package that funds government through September only just passed Sunday.
The U.S. entered into several bilateral AI research agreements with allies like the U.K., and in October the White House named 20 critical and emerging technologies agencies need to promote and protect. Prior to the strategy’s release, agencies had prioritized such technologies individually.
AI and QIS both made the list along with 5G, semiconductors and space technologies.
“It articulates the areas viewed as the most strategically important and provides a guiding approach for the rest of the federal government, the U.S. research and innovation community, and our allies around the world as we promote and protect our technology advantage,” said a senior administration official at the time.
The Government Accountability Office continues work on an AI oversight framework for continuously monitoring agencies’ progress adopting the technology. Explainability and transparency, bias and fairness, integrity and resilience, and data quality and lineage will all be taken into account.
“Practical” principles are needed beyond “do no harm,” said Taka Ariga, chief data scientist at GAO, in November.
To that end, the National Institute of Standards and Technology plans to issue a series of foundational documents on trustworthy AI. Guidance on AI vulnerabilities is coming in 2021, but first, the agency needs more time to understand the dangers of bias within data and algorithms and how to measure it.
“While we understand the urgency, we want to take time to make sure that we build the needed scientific foundations,” Elham Tabassi, chief of staff at NIST’s Information Technology Laboratory, said in November. “Otherwise developing standards too soon can hinder AI innovations that allow for evaluations and conformity assessment programs.”
Trump signed an executive order earlier in December for the first time providing civilian agencies with nine principles and a policy process for implementing trustworthy AI. The principles borrow heavily from those already established by the defense and intelligence communities.
The Office of Management and Budget issued an overdue memo in November directing agencies on how to regulate AI applications produced for the U.S. market without stifling innovation.
“Through this memorandum, the United States is taking the lead to set the regulatory rules of the road for artificial intelligence,” Kratsios said at the time. “The U.S. approach will strengthen the nation’s AI global leadership and promote trustworthy AI technologies that protect the privacy, security, and civil liberties of all Americans.”
2020 in review: Pentagon leads federal DevSecOps efforts
The Department of Defense accounted for most of the federal movement on DevSecOps in 2020, while civilian agencies generally were just getting started in using the development philosophy that is popular in private industry.
In a few well-publicized projects, DOD took the greatest strides attempting to integrate the work of developers, security experts and operations specialists — the Dev, the Sec and the Ops. Progress in using the DevSecOps philosophy was less noticeable, however, on the civilian side, experts say. Many federal coding teams and contractors have embraced DevOps practices but still tack the “Sec” onto the end of the software development lifecycle.
“Agencies are in various stages of maturity in DevSecOps, sometimes even within the same agency itself,” ATARC founder Tom Suder told FedScoop in October. “Most agencies have at least started the DevSecOps journey with the purchase of stand-alone tools.”
DevSecOps is so highly regarded because it “bakes” security into the software development process and allows developers to recognize and address vulnerabilities as they work. The most advanced versions of the philosophy allow not only for the security work to be integrated, but also for dynamic, technology-assisted assessments of the code. Google, Microsoft, Apple and Facebook all said this year that they had begun dynamic analysis of their code.
Although most federal agencies might not get to that level for a while, experts say advanced DevSecOps eventually could help them reduce their technical debt. The National Institute of Standards and Technology began exploring possible development of a DevSecOps framework in March that could help agencies close the gap with industry.
Traditional, static analysis of code uses continuous integration/continuous delivery (CI/CD) pipelines to perform automated status checks that tend to report more false positives than the advanced techniques now available, said Richard Bae, director of solutions at ForAllSecure, in an interview. The Pittsburgh-based software company specializes in fuzzing, a type of dynamic analysis that throws large volumes of inputs at target code — executing it hundreds or thousands of times per second — in search of bugs.
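As a toy illustration of the concept (not ForAllSecure's tooling), the Python sketch below throws random byte strings at a deliberately buggy parser and records the inputs that crash it; both the target function and the fuzz loop are invented for the example.

```python
import random

def parse_record(data: bytes) -> int:
    """Hypothetical target: a parser with a latent bug on short inputs."""
    # Raises IndexError when fed fewer than 4 bytes.
    return data[0] | (data[1] << 8) | (data[2] << 16) | (data[3] << 24)

def fuzz(target, iterations: int = 100_000, max_len: int = 16):
    """Throw random inputs at `target` and collect the ones that crash it."""
    crashing_inputs = []
    for _ in range(iterations):
        length = random.randint(0, max_len)
        data = bytes(random.getrandbits(8) for _ in range(length))
        try:
            target(data)
        except Exception:
            crashing_inputs.append(data)
    return crashing_inputs

if __name__ == "__main__":
    crashes = fuzz(parse_record)
    print(f"{len(crashes)} crashing inputs found; first few: {crashes[:3]}")
```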
Google’s Project Zero revamped its DevOps pipeline to incorporate fuzzing of Chrome, while Microsoft’s Project OneFuzz restructured its codebase to fuzz every endpoint. Other Silicon Valley companies are following suit, but federal agencies have ground to make up, Bae said.
“If you do good software development, most of our security problems will go away because all of the nagging vulnerabilities that we see in software — a lot of those are attributed to people not using secure coding techniques and things we should be doing,” said Ron Ross, a NIST fellow, in March.
Like many ongoing efforts in federal IT, progress on DevSecOps also could have been affected by the COVID-19 pandemic. Chief information officers and IT teams have had to focus limited resources on more urgent problems, like the shift to telework and remote access security architectures.
At least one agency — the Department of Veterans Affairs — put the accountability for DevSecOps practices in the hands of a specific leader. The department appointed Todd Simpson as its first head of DevSecOps in July.
And ATARC launched a source code repository on GitLab in October to help agencies begin using DevSecOps. A DevSecOps Project Team is creating an automated CI/CD pipeline allowing agency IT personnel to practice source code management.
DOD pockets of DevSecOps
The Pentagon meanwhile has more resources to put toward DevSecOps, with several service branches creating their own coding units and programs this year. But that doesn’t mean DevSecOps is part of DOD’s overall tech culture just yet, Bae said. So far, the progress has been within projects like Kessel Run and Platform One that have mandates to “hyper-modernize,” Bae said.
“Those are pretty isolated, junior,” he added. “There’s still a long way to go to have that be applied DOD-wide.”
The Platform One team, based out of the Air Force, consists of about 180 software developers using DevSecOps practices to develop military tools. The methodology helps limit the chance of adversaries probing their networks, especially during pandemic telework.
Meanwhile, the Army launched a software factory to bring “true DevSecOps” to the branch in July, and the National Geospatial-Intelligence Agency has a “relatively advanced” DevSecOps pipeline as well, Bae said.
Even the newly minted Space Force has a DevSecOps coding unit dubbed the Kobayashi Maru.
DOD officials further announced an Adaptive Acquisition Framework pathway for buying software and securely developing code between government and contractor teams using DevSecOps in October.
Federal watchdogs have shown an interest in monitoring the DOD’s progress on DevSecOps. The department has at least 22 weapons programs using agile software development — where iterative updates are pushed rapidly — but none of them used a DevSecOps methodology, according to an annual Government Accountability Office assessment released in June.
Such programs don’t have time to implement DevSecOps practices because their contract requirements only cover producing features. Instead, DOD has to create specific programs to work down the tech debt for analyzing weapons systems, Bae said. In those cases, vulnerability researchers often have limited knowledge of what they’re evaluating, he said.
The best example of this is Section 1647 of the 2016 National Defense Authorization Act, which provided DOD $200 million to give to weapons system developers to find bugs post-development.
2020 in review: COVID-19 accelerates congressional modernization
Philadelphia’s yellow fever outbreak in 1793 and the 1918 Spanish flu couldn’t do what the COVID-19 pandemic did for Congress this year: force it to reckon with its historical objection to institutional change.
The House of Representatives established the Select Committee on the Modernization of Congress in January 2019. And since then, most of the committee’s work has been treated as recommendations, rather than as a call for urgent and necessary action.
That is until the pandemic hit. Throughout 2020, COVID-19 has proven the importance of a congressional modernization committee and provided a springboard to largely bypass the typical staunch opposition to modernization efforts such as remote work and proxy voting.
Remote work options
The House quickly recognized the need to transition to remote work in the early days of the pandemic. Committee on House Administration Chair Zoe Lofgren, D-Calif., issued a Dear Colleague letter on March 4 informing members that unspent 2019 funds could be used to purchase the technology necessary for telework. She also urged staff to beef up their offices’ continuity of operations plans.
“We note that adopting such plans and purchasing suitable equipment to permit telework is a safeguard and investment that will protect offices in the future, including from situations where offices may need to be closed because of natural disasters,” she wrote.
On March 9, the Chief Administrative Officer also set up a House Telework Readiness Center to provide technical assistance.
Digital submissions
Congress is stubbornly analog and paper-based, but with fewer staff members around to shuttle bills and distribute Dear Colleague letters for signatures, both chambers needed a quick pivot to digital workflows.
Speaker Nancy Pelosi, D-Calif., announced a process for members and staff to digitally submit bills, co-sponsorships, and extension of remarks through an email system managed by the House Clerk on April 6. By May 20, members could also digitally submit committee reports.
Remote proceedings
The House Committee on Veterans’ Affairs held a first-ever live virtual forum on April 28 about the pandemic’s effect on homeless veterans, using Open Broadcast Software to personalize Zoom speaker displays. The Senate Homeland Security and Governmental Affairs Permanent Subcommittee on Investigations held its own virtual proceeding on April 30.
What followed was a relatively quick — for Congress — adoption of virtual and hybrid convening.
That new technology also presented new opportunities for gaffes, such as when Sen. Tom Carper, D-Del., swore during a live virtual hearing in late August.
The House altered its rules on May 15 to allow for both virtual committee hearings and, significantly, proxy voting for the duration of the pandemic. So far, 123 members have designated a proxy, or another member who can vote on their behalf.
“Convening Congress must not turn into a super-spreader event,” House Rules Chairman Jim McGovern, D-Mass., said at the time.
The introduction of proxy voting, though limited to just the current health crisis, is one of the largest updates to voting procedures since the elimination of “teller votes” in 1971 and the debut of the still-in-use electronic voting system in 1973.
The events in 2020 forced Congress to take significant strides not only in updating workflows and adopting new technologies but also in recognizing why a continuous modernization effort is important: Pelosi announced that the Select Committee to Modernize Congress will continue its work into the 117th Congress.
“Through bipartisan collaboration and a commitment to reform, I’m proud that this Committee has approved nearly 100 recommendations over the course of the last year and a half to make Congress work better for the American people,” Chairman Derek Kilmer, D-Wash., said in a statement announcing the committee’s renewal. “But our work is only getting started.”
2020 in review: Joint AI Center gets a ‘2.0’ and a clear path forward
This was the year the Department of Defense’s Joint Artificial Intelligence Center transformed from a scrappy startup working on low-risk products to a full-fledged battlefield AI incubator.
As 2020 draws to a close, the JAIC stands as a 200-plus person organization with a new leader and new direction dubbed “JAIC 2.0.” JAIC officials now see themselves as an enabling force, one designed to help the many other parts of the Pentagon reach AI-readiness rather than be a product-focused office just making models.
In 2020 the JAIC also adopted its own ethical principles, started building a development platform and launched new international outreach programs. The office is also on the cusp of gaining its own acquisition authority, pending a likely override vote on the Fiscal 2021 National Defense Authorization Act in the Senate.
“Every year has felt like, ‘OK, it couldn’t possibly be the case that next year would be a bigger deal than what we went through the previous year,'” Greg Allen, the JAIC’s head of strategy and communications, said during a recent C4ISR webinar. “Because we’ve grown from … really just an idea and a handful of people to a really large organization with an enormous mandate and enormous breadth of activities underway.”
When the JAIC was first launched, it was a small group of people working on a handful of projects with no direct connections to combat, like wildfire tracking and predictive maintenance. Now, the vast majority of the JAIC’s budget is spent on its Joint Warfighting work, with the center awarding an $800 million contract to Booz Allen Hamilton in May to be a prime integrator for battlefield AI.
In his final appearance as JAIC director in the same month, Lt. Gen. Jack Shanahan said it was time for DOD to take AI to the battlefield.
“People in the field have begun to taste what AI can be,” the now retired general said in May.
With its new director, Lt. Gen. Michael Groen, JAIC 2.0 was born. Its latest mission is to be the office that makes other AI offices work better.
“We will continue to do products … but we really want to create a tide that raises all boats,” Groen said in his first major appearance as JAIC director in November.
Defining ethics for battlefield tech
Before the introduction of warfighting tech, the JAIC adopted its own AI ethics principles that officials say will guide all of their decisions. The five principles — that AI use should be responsible, equitable, traceable, reliable and governable — are the words that projects will live or die by.
“I think about how do we have everyone think about ethics across the organization, and not think perhaps it is just the technologist’s job to address it,” Alka Patel, the JAIC AI ethics lead, said. “Frankly, it is all part of our jobs.”
The ethics principles were adopted in February from a set of recommendations drafted by the Defense Innovation Board. Patel and other ethics officials say they are implementing the principles in several ways, including new cross-cutting ethics committees, as well as testing and evaluation processes.
Tech for all five sides of the Pentagon
The JAIC says it is bringing AI to all through its Joint Common Foundation (JCF). The AI development platform is billed as the soon-to-be one-stop shop for databases, coding environments and AI models. The ultimate goal is to have anyone from a soldier deployed in Afghanistan to a civilian in a support agency be able to access and manipulate clean, verifiable data for analytics.
“At the core of JAIC’s success has got to be this JCF,” DOD Chief Information Officer Dana Deasy said during the FedTalks 2020 virtual event.
In August, the JAIC issued a $106 million contract to Deloitte to help on the JCF. (The JAIC actually needed the Defense Information Systems Agency to issue the contract). Many other technology pieces need to be put in place to reach the JCF’s goals, including cross-department cloud adoption.
The JAIC’s budget has consistently grown since its founding. But, until the fiscal 2021 National Defense Authorization Act is made into law by an expected veto-override vote in the Senate, the JAIC has no ability to buy its own tech and services. Having acquisition authority will mean that the JAIC will not need to go to other organizations within the government — like DISA or the General Services Administration — to do the actual acquisition, like it did on the JCF and Joint Warfighting contracts.
The changes to the JAIC, from new authorities to new monikers, all point to a singular focus in the years to come: using AI wherever the military confronts adversaries. DOD leaders have for years spoken about AI’s potential impact on the future of conflict, but in 2021 that future will be closer than ever.
2020 in review: Tech policy adjusts to new norm of telework in pandemic
The year 2020 will be remembered as one when federal policymakers swung from being telework-hesitant to all-in on allowing personnel to perform their jobs remotely to keep them safe.
Amid the Trump administration’s normal rhythm of IT and tech-related policy updates in 2020, the COVID-19 pandemic hit, forcing the federal government to issue rapid guidance to keep federal agencies and operations running smoothly during mandates for social distancing.
While some agencies had telework policies in place prior to the arrival of the coronavirus stateside, many didn’t — and almost none had put them into action at the scale they would during the pandemic. But in a matter of days, that would all change, as administration leaders issued new policies that guided how the government operated in a remote and virtual way for the remainder of 2020, and perhaps longer.
On March 15, as most of the nation was still sizing up the gravity of the crisis, the federal government took unprecedented action to offer maximum telework flexibilities to agencies across the capital region. While not an IT policy per se, Office of Management and Budget Memo M-20-15 set the government on the course to send employees home and test the limits of a full-scale federal remote workforce.
Within days, OMB issued another memo, M-20-16, with more specific actions agencies would have to take to slow the spread of COVID-19. “[T]he Government must immediately adjust operations and services to minimize face-to-face interactions, especially at those offices or sites where people may be gathering in close proximity or where highly vulnerable populations obtain services,” the memo said. OMB would look to put its money where its mouth was, too, by requesting billions of dollars for telework and IT needs to ensure continuity of operations during the most challenging early days of the pandemic.
Then, on March 22, OMB’s Margaret Weichert digitally signed M-20-19, guidance on “Harnessing Technology to Support Mission Continuity,” perhaps the most important IT guidance issued in 2020. While the memo didn’t necessarily introduce anything new, it directed agencies to “utilize technology to the greatest extent practicable to support mission continuity” and heavily emphasized existing capabilities like digital signatures, virtual private networks, identity and access management policies, and virtual collaboration tools.
“In response to the national emergency for COVID-19, agencies are directed to use the breadth of available technology capabilities to fulfill service gaps and deliver mission outcomes,” the memo reads. It also came with a set of frequently asked questions and resources for agencies to make decisions during the early days of their pandemic response.
The same day, Weichert signed another memo urging contracting officers to extend teleworking opportunities to government contractors. Prior to that, contractors had been left in limbo over whether they were still required to report to government buildings or could access their work remotely.
Though these guidance memos on how agencies should operate in a remote environment were introduced just days into the pandemic response, they are still just as relevant as 2020 comes to a close, with a large portion of the federal workforce continuing to work from home as COVID-19 rages on more than nine months later.
If anything, 2020 will serve as a starting point for discussions in 2021 on technology’s capacity to support remote work in the future — and the government’s inclination to break free of dated requirements that dictate in-office time for the workforce.
TIC 3.0 and AI
On top of the policy related to telework and COVID-19, the Trump administration issued new guidance related to the government’s evolving use of technology. In particular, 2020 saw new policy on Trusted Internet Connections (TIC) and the use of artificial intelligence both in and out of government.
Just last week, the Cybersecurity and Infrastructure Security Agency introduced network security guidance for remote work under TIC 3.0. The new use case further illuminates the cloud-enabled flexibilities agencies have under TIC 3.0 to support secure telework as personnel connect to agency networks externally.
Before this, CISA issued finalized versions of initial TIC 3.0 core guidance — the Program Guidebook, Reference Architecture: Volume 1 and Security Capabilities Catalog — in July. The first two documents will remain fairly static, while the Security Capabilities Catalog is a living document that adds capabilities and controls into use cases as they’re announced.
Now, the CISA team is working to issue finalized guidance under TIC 3.0, which will likely include a use case for zero-trust security.
Additionally, the Trump administration made a strong push in 2020 to set a foundation for AI in the government.
As early as last January, the White House began issuing guidance on how agencies should govern the development of artificial intelligence in the private sector. In a set of binding principles, the administration hoped to set a consistent global standard for how AI is developed that every country in the world can adopt.
The White House’s regulatory focus on AI continued in November with more guidance for agencies on how to regulate artificial intelligence applications produced for the U.S. market.
“While narrowly tailored and evidence-based regulations that address specific and identifiable risks could provide an enabling environment for U.S. companies to maintain global competitiveness, agencies must avoid a precautionary approach that holds AI systems to an impossibly high standard such that society cannot enjoy their benefits and that could undermine America’s position as the global leader in AI innovation,” the memo states.
But the most meaningful AI policy for agencies came in early December through an executive order with nine principles and a policy process for agencies to implement AI the public can trust. The order borrows heavily from principles already established by the defense and intelligence communities, extending them to civilian agencies outside the national security space.
“Artificial intelligence can be an important tool to help modernize government and ensure federal agencies are effectively and efficiently delivering on their missions on behalf of the American people,” said Michael Kratsios, U.S. chief technology officer, in the announcement. “This executive order will foster public trust in the technology, drive government modernization and further demonstrate America’s leadership in artificial intelligence.”
Other policy highlights
- OMB issued guidance for agencies to move to more secure Internet Protocol version 6 (IPv6) systems and services completely by 2023.
- In June, President Trump signed an executive order allowing federal agencies to prioritize skills-based hiring in addition to traditional educational-based requirements.
- The White House and the Pentagon agreed on a plan in August for the fastest transfer of federal spectrum for commercial 5G wireless use in history, without affecting military operations.
