Energy awards $28M to 5 supercomputing projects

The Department of Energy will give $28 million to five research projects developing software for its supercomputers, the Scientific Discovery Through Advanced Computing (SciDAC) program announced Friday.

The projects DOE selected will develop computational methods, algorithms and software benefitting research into quantum information science and chemical reactions with clean energy applications.

SciDAC brings together interdisciplinary groups of experts to make use of DOE’s high-performance computing resources, and the five teams will partner with one or both of its institutes, FASTMath and RAPIDS2, out of the Lawrence Berkeley and Argonne national laboratories.

“DOE’s national labs are home to some of the world’s fastest supercomputers, and with more advanced software programs we can fully harness the power of these supercomputers to make breakthrough discoveries and solve the world’s hardest-to-crack problems,” said Secretary of Energy Jennifer Granholm in an announcement. “These investments will help sustain U.S. leadership in science, accelerate basic research in energy, and advance solutions to the nation’s clean energy priorities.”

The five awardees are:

The projects were chosen through a competitive peer-review process under a DOE Funding Opportunity Announcement open to universities, national labs and other research organizations. DOE has yet to negotiate final project details with the awardees, but $7 million of the total funding has been allocated for fiscal 2021, contingent upon congressional appropriations.

Cyber defense strategies that focus on protecting people

Deborah Watson is the resident CISO at Proofpoint with over 20 years’ experience in security.

Cybercrime has become a profitable business model, as evidenced by recent ransomware payments where criminals continue to perfect low-investment, high-return campaigns. While the majority of attacks start in email, the techniques, tools and procedures cybercriminals use are quickly changing. This rapid evolution makes it increasingly difficult for organization leaders to adapt to changes in the threat landscape in a timely manner.


One of the techniques we see on the rise is social engineering, where malicious actors gather information about the people within an organization to trick users into making security mistakes. Cybercriminals approach these people-centric attacks with as much effort, time and resources as they devote to understanding vulnerabilities in enterprise networks. Some emails impersonate colleagues and suppliers, taking advantage of employees who strive to be supportive. Other emails leverage reconnaissance information to emulate standard user interfaces, resulting in credential theft.

In a threat environment where criminals are strategically targeting people, federal agency leaders may make many assumptions about who represents the most significant risks within the organization. But those assumptions can be wrong when leaders do not have the complete picture of who is vulnerable, privileged and targeted. And while their ecosystem of security tools monitors network activity, cloud environments and endpoint devices, they may be missing an agency’s greatest security and compliance risk — its people.

Human error is still the most significant risk factor

Phishing and credential theft are two primary techniques that attackers use to gain access to an organization. Verizon’s 2021 Data Breach Investigations Report found that 94% of breaches start with attacks targeting people via email, which is now the number one threat vector.

Complicating the situation, attackers’ emails are no longer blatantly fraudulent, which makes it less likely that an employee with limited time will scrutinize a message before opening an attachment or clicking on a URL. Poorly crafted emails still exist and are broadly distributed, but modern email security solutions generally catch those because of their widespread distribution. Today’s attacks are often narrowly targeted and deliberately crafted to subvert traditional email filters, since sending fewer emails reduces the probability of detection.

While traditional cyberattacks have followed a linear kill chain — where reconnaissance of system and software vulnerabilities leads to exploits that grant access to an organization’s assets — current attack patterns are anything but linear and have highlighted that our employees and those within our supply chain are softer targets.

Attackers do their homework, targeting people based on data readily available to them. Social networking accounts, for example, help them identify the types of content that people in specific roles and responsibilities are most likely to click on in an email. Once a cybercriminal gains access to a system through a compromised credential or the use of ransomware, they can take their time gathering information about the organization and navigate their way to a part of the architecture where they can launch their exploits.

People-centric approach to security

Many organizations may make qualitative assumptions about how they are being targeted and attacked. One strategy organizations frequently take involves wrapping added security layers around people in the organization — such as executives or high-level finance resources — based on what they believe is true in the absence of intelligence data. However, that strategy can overlook individuals in a wide range of lower-level job functions that frequently offer criminals an easy opening.

A people-centric approach gives agencies the ability to apply risk-based controls because the tools look at data in three key areas: who is vulnerable, who is privileged and who is targeted.

Instead of treating everybody in the organization the same way, agency security teams can create a more informed picture about their security risks and implement adaptive security controls based on current situational intelligence. Adaptive controls may include using zero-trust application access, browser isolation, step-up and risk-based authentication and targeted security training. Applying adaptive policies can also benefit user monitoring programs, support privacy requirements, minimize data collection and expedite investigations.
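
To make the idea concrete, the sketch below shows how a per-user risk profile built on those three signals might drive progressively stricter controls. It is an illustrative Python example only, not Proofpoint’s implementation; the risk factors, weights and thresholds are assumptions.

```python
# Illustrative sketch (not a vendor implementation): choosing adaptive
# controls from a per-user risk profile built on vulnerability, privilege
# and targeting signals.
from dataclasses import dataclass


@dataclass
class UserRisk:
    vulnerability: float  # e.g., phishing-simulation click rate, 0-1
    privilege: float      # e.g., breadth of access to sensitive systems, 0-1
    targeting: float      # e.g., volume/severity of observed attacks, 0-1

    @property
    def score(self) -> float:
        # Simple weighted blend; real tools would use much richer models.
        return 0.4 * self.targeting + 0.35 * self.privilege + 0.25 * self.vulnerability


def adaptive_controls(risk: UserRisk) -> list[str]:
    """Map a risk score to progressively stricter controls."""
    controls = ["baseline email filtering"]
    if risk.score >= 0.3:
        controls.append("targeted security-awareness training")
    if risk.score >= 0.5:
        controls += ["step-up authentication", "browser isolation"]
    if risk.score >= 0.7:
        controls.append("zero-trust application access only")
    return controls


if __name__ == "__main__":
    finance_clerk = UserRisk(vulnerability=0.6, privilege=0.5, targeting=0.8)
    print(adaptive_controls(finance_clerk))
```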

Using a platform approach to manage adaptive controls consolidates and correlates policies and intelligence, and simplifies reporting. The result of this approach is increased situational awareness without additional staffing. The workforce efficiency gains allow agency personnel to focus on additional initiatives like those highlighted by the recent White House Executive Order, such as continuous monitoring and compliance.

The growing risk of security threats

Cybercriminals are also getting more organized and functioning more like businesses. Malicious groups create shared infrastructure, share information and leverage credential dumps obtained from other security breaches to exploit known visibility gaps. Consequently, agencies need to increase information sharing, standardize controls and implement modern security solutions to reduce the risks from increasingly intense and targeted attacks.

We work with a global network of customers every day to detect and block advanced threats and compliance risks in more than 2.2 billion emails and 22 million cloud accounts. We see how organizations are getting attacked and which countermeasures are proving most effective. For instance, in the public sector, we can identify which agencies, departments and roles are more targeted than others.

Healthcare organizations, for example, have been increasingly targeted by ransomware attacks both during and following the COVID-19 pandemic response. The aim of those attacks is not so much to disrupt patient care as to extract payment. However, the far-reaching nature of these attacks suggests that criminals could prevent health organizations from delivering critical patient care and keeping patients safe.

Financial institutions and federal regulatory agencies also saw a spike in activity from cybercriminals. Because many of these institutions still use legacy communication systems for transactions, they lost some security visibility and oversight as employees shifted to remote working conditions.

Not surprisingly, cybercriminals saw tremendous opportunities to social engineer account takeovers and infiltrate an entire ecosystem of public and private sector entities that often work closely together.

Another risk factor we see is the number of agencies with underutilized security tools and those who do not take advantage of the complete set of available features. The more security leaders can adapt their security strategies to incorporate a people-centric perspective, the more effective they will become in utilizing the protective controls required to address today’s attacks.

And by working with Proofpoint — with more than a decade’s experience building a global intelligence platform (Proofpoint Nexus), spanning threat protection, information protection and compliance — agencies are equipped to become more secure and protect their people even when they make mistakes.

Learn more about how Proofpoint can help protect federal agencies, and their people, against malicious attackers.

GAO gives 3 priority recommendations to White House on science and tech issues

The White House’s Office of Science and Technology Policy needs to strengthen interagency collaboration around research and development to maximize performance and results, according to the Government Accountability Office.

GAO provided its first-ever priority recommendation letter to OSTP urging the office to work with agencies to establish common outcomes, joint strategies, and roles and responsibilities; address needs using their resources; and develop a way to monitor, evaluate and report results.

OSTP implements GAO’s recommendations at a faster rate than other offices, addressing 16 out of 17 recommendations across two fiscal 2017 GAO reports — but 11 recommendations remained open as of June. GAO designated three open recommendations as priorities because of OSTP’s “critical role” convening agencies on National Science and Technology Council committees and subcommittees.

“This mechanism provides a valuable opportunity for agencies to coordinate on implementing an administration’s research and development priorities and to address crosscutting science and technology issues, such as scientific integrity, public access to federally funded research results, reliability of research results, supply chains for critical materials, and others,” reads GAO’s letter. “Strengthening interagency coordination in these areas could help amplify the synergistic effects of related research conducted by different agencies, avoid unnecessary overlapping or duplicative research and development efforts, and facilitate the sharing of lessons learned or coordinating actions to address science and technology issues.”

In November 2019, GAO recommended that OSTP, as co-chair of the NSTC Subcommittee on Open Science, coordinate with the other co-chairs and participating agencies to implement collaboration practices. OSTP initially disagreed with the recommendation but as recently as May provided information on its efforts to work with other agencies to increase access to federally funded research results. GAO won’t close the recommendation, though, until OSTP shows it has attempted to address the identified practices.

The second priority recommendation dates back to September 2018, when GAO recommended OSTP similarly implement collaboration practices as co-chair of the NSTC Subcommittee on Quantum Information Science. OSTP agreed with the recommendation in that case and took some steps to set goals in key areas as late as May, but GAO won’t close the recommendation until it’s fully addressed.

Lastly, in September 2016, GAO recommended OSTP take steps, as a co-chair of the NSTC Subcommittee on Critical Minerals, to assess potentially critical minerals. OSTP didn’t comment at the time but later stated it saw value in analyzing more minerals and non-minerals to inform policy decisions. In May, OSTP stated it was “actively exploring” broadening its focus beyond raw mineral and mineral challenges, but GAO won’t consider the recommendation implemented until there’s a plan for federal coordination that addresses the data limitations hindering assessments of potentially critical minerals using a screening process developed by the subcommittee.

In all three instances, GAO advised OSTP to consider whether participating agencies agreed to a decision-enforcement process, how leadership can be sustained and whether documented collaboration agreements are in place.

OSTP hasn’t responded to the letter.

Army Research Lab’s new ‘autonomy stack’ speeds up self-driving tech development

The Army Research Lab has begun using its own new “autonomy stack” to speed up the development of its autonomous vehicles program during a one-year sprint.

By owning its autonomy tech stack — all the layers of technology that support applications and development — rather than depending on a contractor for it, ARL now has more control over its Scalable, Adaptive and Resilient Autonomy (SARA) program to improve how robots drive themselves, researchers told FedScoop. Namely, it gave the lab more flexibility in assigning research roles to partners, letting it be more deliberate about what each group works on and how the groups use the tech stack to fuse their efforts.

The SARA program kicked off its one-year sprint last year, working with eight collaborators from across the country. Instead of putting out broad requests for proposals, ARL gave each collaborator a specific part of the complex world of autonomy in which to engineer new solutions.

It was “a new and different way of doing business,” Eric Spero, lead for systems engineering for autonomous robotics integration at ARL, said in an interview.

Areas of research that delivered new capabilities range from obstacle classification to navigating narrow passageways. Having the tech stack in-house allowed ARL to be more specific with the tasks it gave to researchers and reduce redundancies, Spero said.

The Army has been chasing the idea of having autonomous ground vehicles to improve the safety of soldiers in battle for years. But so far the Army has missed many of its own timelines for fielding the advanced tech. The SARA program is one of many within the department working on getting AI behind the steering wheel of its vehicles.

With a divide-and-conquer strategy in place for the lab, the eight collaborators were given specific use cases and problems to solve. When they had new code to share, they simply uploaded the software to ARL’s stack, and engineers in the Army could then use it in concert with software from other teams.
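
A hypothetical sketch of that integration model is below: independently developed modules plug into a shared stack behind a common interface so the integrating team can compose their outputs. The module names, interface and logic are illustrative assumptions, not ARL’s actual software.

```python
# Hypothetical illustration (not ARL code): collaborators register
# independently developed autonomy modules behind a common interface,
# and the shared stack composes their outputs step by step.
from abc import ABC, abstractmethod


class AutonomyModule(ABC):
    """Common contract every contributed module implements."""
    name: str

    @abstractmethod
    def process(self, world_state: dict) -> dict:
        """Consume shared world state, return annotations for later modules."""


class ObstacleClassifier(AutonomyModule):
    name = "obstacle_classifier"

    def process(self, world_state: dict) -> dict:
        # Placeholder logic: label anything closer than 2 m as an obstacle.
        return {"obstacles": [d for d in world_state.get("detections", [])
                              if d.get("range_m", 99.0) < 2.0]}


class NarrowPassagePlanner(AutonomyModule):
    name = "narrow_passage_planner"

    def process(self, world_state: dict) -> dict:
        blocked = bool(world_state.get("obstacles"))
        return {"plan": "slow_creep" if blocked else "nominal"}


class Stack:
    def __init__(self) -> None:
        self.modules: list[AutonomyModule] = []

    def register(self, module: AutonomyModule) -> None:
        self.modules.append(module)

    def step(self, world_state: dict) -> dict:
        # Each module enriches the shared state for the modules after it.
        for module in self.modules:
            world_state.update(module.process(world_state))
        return world_state


if __name__ == "__main__":
    stack = Stack()
    stack.register(ObstacleClassifier())
    stack.register(NarrowPassagePlanner())
    print(stack.step({"detections": [{"range_m": 1.4}]}))
```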

“In my mind, the biggest breakthrough is not a technology one … really the biggest innovation is programmatic,” Ethan Stump, artificial intelligence for maneuver and mobility essential research program chief scientist, told FedScoop.

The stack is completely owned by ARL. Most of it was developed in-house, and the roughly 20% that was contracted out is still wholly government-owned. That allows for the greatest amount of flexibility within ARL on how it works with research partners, Stump said.

SARA is not the very first program to use this new process, but it is so far the biggest and most successful. Leaders already have initiated a second yearlong sprint and have designs for a third.

“The strength of the SARA program is that we’re requiring the performers to work with the ARL software,” said Dr. Brett Piekarski, chief scientist of the lab’s Computational and Information Sciences Directorate.

In this case, those partners came from a broader and more diverse group than usual. University and private sector teams from across the country participated, many outside of the usual group the Army works with on autonomy problem sets, Spero said.

“With [the] SARA program, it was a little different, because instead of just putting out a call … it was more of: ‘We would like to invite you into this new collaborative environment,'” Spero said. “Folks became really innovative.”

Edge computing is critical to NASA’s Mars ambitions

Federal employees are producing and consuming data farther from on-premises, physical networks as the workforce becomes more distributed, and nowhere is that truer than with NASA astronauts in space.

NASA is working with tech companies like IBM to use edge computing to process data closer to the source, namely the International Space Station (ISS).

Edge computing will help NASA decrease the time it takes to analyze data from space, a capability critical to its Artemis program’s efforts to establish a sustainable presence on the moon en route to Mars.

“Edge computing has eliminated the need to move massive data at the International Space Station from a DNA sequencing project,” said Howard Boville, senior vice president of cloud platform at IBM, during Think Gov 2021, produced by FedScoop on Thursday. “Using containerized, analytic code where the data is being produced on ISS has reduced time to get results.”
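
The pattern Boville describes can be illustrated with a toy sketch: analyze instrument output where it is produced and downlink only a compact summary rather than the raw data. The code below is a hypothetical Python example, not NASA or IBM software, and the read-classification logic is a placeholder.

```python
# Toy sketch of the edge-computing pattern: process sequencer reads next to
# the instrument and transmit only a small summary, not the raw data.
from collections import Counter
from typing import Iterable


def classify_read(read: str) -> str:
    """Stand-in classifier; a real pipeline would align reads against
    reference genomes to identify the organisms present."""
    gc_fraction = (read.count("G") + read.count("C")) / max(len(read), 1)
    return "gc_rich" if gc_fraction > 0.5 else "at_rich"


def summarize_at_edge(reads: Iterable[str]) -> dict:
    """Runs at the edge; only this small dict would be downlinked."""
    counts = Counter(classify_read(r) for r in reads)
    total = sum(counts.values())
    return {"total_reads": total,
            "composition": {k: round(v / total, 3) for k, v in counts.items()}}


if __name__ == "__main__":
    raw_reads = ["ACGTGGCC", "ATATATTA", "GGCCGGTA"]  # raw data stays local
    downlink_payload = summarize_at_edge(raw_reads)   # kilobytes, not gigabytes
    print(downlink_payload)
```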

NASA’s Johnson Space Center used to sequence the DNA of microbes in air, water and surface samples collected by astronauts to ensure they were safe from bacterial and fungal contamination. While the sequencing itself took hours, it would take days or weeks to receive the Petri dish cultures from space.

“It causes a lot of lag time between when the sample is taken and when we know what was in that sample, the microbes that were there,” said Sarah Wallace, microbiologist at Johnson Space Center.

NASA put a handheld DNA sequencer on ISS in 2016 so astronauts could identify the microbes in their samples right away. The sequencer was first used in 2017.

A year later NASA developed a new method allowing astronauts to swab any surface for sequencing, eliminating the need for them to culture the organisms prior to testing — thereby protecting them from unnecessary exposure.

The problem remained that ISS’s DNA sequencer generated a lot of data that still needed to be transferred back down to Earth.

“That really is not acceptable as we look toward the moon and Mars,” Wallace said. “So we really need a quicker way.”

Edge computing represents a “paradigm shift,” the infrastructure allowing near real-time analysis, she added.

Live sequencing and analysis is expected “sometime soon,” the next step for Wallace’s team, she said. That step will pave the way for edge computing to address other challenges NASA faces as it eyes Mars.

The IC is recruiting for a ‘titan of industry’ to be CIO

The Office of the Director of National Intelligence is actively recruiting to bring on a “titan” of the tech industry to be the next CIO of the intelligence community, the IC’s current acting CIO told FedScoop.

The office is set to issue a job listing soon to bring on a permanent CIO, said Michael Waschull, the IC deputy CIO who’s been acting CIO since January. Waschull said Director of National Intelligence Avril Haines is looking for a “visionary” IT leader who can take the IC’s technology and information environment “to the next level.”

“What we’re looking for is a titan of this industry, we’re looking for a former CEO of a telco, we’re looking for a former CEO of a major information-intensive organization,” he told FedScoop in an interview. The IC is searching for “somebody who is absolutely steeped in the business of cloud and IT and telecommunications and application development and information management and data science. We’re looking for somebody that’s got real gravitas and stature in this space to take us to the next level.”

The IC CIO role is charged with coordinating and integrating IT elements across the intelligence agencies and reports directly to the director of national intelligence.

Whoever comes into the role has the major building blocks in place to do great things, Waschull said, pointing to the intelligence community’s landmark Commercial Cloud Enterprise (C2E) contract and its recent work optimizing networks for connectivity home and abroad.

“What we’re excited about here is the fact that we have set the table, we’ve got all of these capabilities now in place, the building blocks are there,” he said. “And we are seeking a world-class IC CIO from industry with demonstrated knowledge, skills, expertise and experience in setting a vision to take us to the whole next level, frankly. We’re looking for somebody to help us boldly go where the IC has not gone before.”

Upon the hire of a new CIO, Waschull would return to his deputy CIO role. And he’s “proud” to do that going forward “to enable that person to create that vision and to execute that plan of attack, that program of work, to take us to that next level,” he said.

Waschull took over the acting role when previous IC CIO Matthew Kozma stepped down. Before Kozma, John Sherman was the IC CIO for several years until he was recruited to be deputy CIO of the Department of Defense, where he’s now acting CIO.

DevSecOps is fueling agencies’ cloud migrations

Agencies have increasingly migrated to the cloud to expand their DevSecOps efforts over the last few years, according to federal officials.

Both the Bureau of Information Resource Management within the State Department and the Navy moved to the cloud to automate processes, integrate security into the software development process and deploy updates faster.

The embrace of the cloud as an enabler of DevSecOps and cybersecurity more broadly represents an evolution in agencies’ approaches to the technology.

“Two years ago the biggest driver was ‘my boss told me to,'” said Tom Santucci, director of IT modernization within the Office of Government-wide Policy within the General Services Administration, during an ATARC event. “Now people are starting to see the benefits of this.”

IRM functions as the main provider of IT resources, from infrastructure to messaging, for the State Department, and its Systems Development Division deploys software solutions for any domestic or overseas office with a business need. The bureau moved its previously separate development and production environments to the cloud over the last few years to bridge the two.

Now IRM can not only spin up a build agent in the development environment to automate tests or scans but actually create a pipeline to push builds to the staging or production environments for users.

“It was possible before the cloud, but it was a larger tactical effort and a larger security effort because you had differences between environments,” said David Vergano, systems development division chief of IRM, during a FedInsider event. “So the cloud backbone is helping to make things smoother, and now we can really try to change how we do things because we have the tooling.”
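
A minimal sketch of that flow is below: build, run automated tests and security scans, and promote only passing builds toward staging and production. It is an illustrative Python script with assumed stage names and tools, not the State Department’s actual pipeline configuration.

```python
# Minimal sketch of the pipeline flow described above (hypothetical stages
# and tools): build, run automated tests and a security scan, then promote
# the artifact to staging and production only if every stage passes.
import subprocess
import sys

STAGES = [
    ("build",         ["python", "-m", "compileall", "src"]),
    ("unit tests",    ["pytest", "-q"]),
    ("security scan", ["bandit", "-r", "src"]),
]


def run_stage(name: str, cmd: list[str]) -> None:
    print(f"[pipeline] {name}: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"[pipeline] {name} failed; stopping before deployment")


def deploy(environment: str) -> None:
    # Stand-in for pushing the built artifact to a cloud environment.
    print(f"[pipeline] deploying artifact to {environment}")


if __name__ == "__main__":
    for name, cmd in STAGES:
        run_stage(name, cmd)
    deploy("staging")
    deploy("production")
```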

The department’s other offices come to Vergano with the particular cloud products they want to use, and he advises them to use Federal Risk and Authorization Management Program-certified tools to smooth acquisition and ensure security.

Like IRM, the Naval Information Warfare Systems Command’s (NAVWAR’s) Program Executive Office for Digital and Enterprise Services (PEO Digital) is delivering environments that can be built once and adapted to multiple use cases. Cloud platforms that enable continuous integration/continuous deployment (CI/CD) and DevSecOps make that work easier, but migration isn’t always immediately affordable.

“In [the Department of Defense], nobody has buckets of funding lying around to dump into modernizing their architectures so that everyone in July 2021 can move toward containers and microservices,” said Taryn Gillison, program executive director of the digital platform application services portfolio.

Instead, PEO Digital is developing enabling capabilities like Naval Identity Services, infrastructure as code (IaC) and middleware.

If PEO Digital can automate testing and tools for the Navy’s various components, it allows them to shift focus to modernization, Gillison said.

Hurdles remain for NAVWAR, however — namely installation timelines. Gillison said she’s “impatient” to see cloud-to-ship software pushes and other policy changes happen more broadly.

Some 52% of IT executives at public sector organizations said they’d chosen a new cloud provider in the prior six to 12 months, according to an Ensono report from June, and 85% had done so within the prior 24 months. Security was a concern for 78% of public sector respondents.

With most public sector organizations managing their cloud environments centrally, a multi-cloud model prevails — largely due to the flexibility it affords agencies, Clint Dean, a vice president at Ensono, told FedScoop.

That jibes with DOD canceling its $10 billion Joint Enterprise Defense Infrastructure (JEDI) cloud procurement, citing its intent to launch the multi-cloud, multi-vendor Joint Warfighter Cloud Capability (JWCC) environment.

“As much as Amazon, Microsoft and Google would have us believe, maybe there’s not as much brand loyalty as folks think there is,” Dean said.

Education Department is getting ‘smarter’ about hiring, training data scientists

The Department of Education is crafting its first data-focused workforce plan with an emphasis on the professional development of both data professionals and general staff, according to Deputy Chief Data Officer for Analytics Sharon Boivin.

The Office of the CDO — established just under two years ago in response to the Foundations for Evidence-Based Policymaking Act — is supporting other parts of the department by creating standard position descriptions for data professionals and career ladders for data scientists.

OCDO had to staff up fairly quickly and hired several master’s- and PhD-level data scientists, some straight out of school or with industry experience, under the Office of Personnel Management‘s 1530 statistics series last year. The office will continue to leverage such governmentwide opportunities, Boivin said.

“We’ve gotten smarter in what we ask for in vacancy announcements,” Boivin said during a Data Coalition event Thursday. “We now specify technical skills in the required specialized experience statements, and we’ve started asking for a code sample from each applicant.”

The latter keeps people who don’t have code samples from applying, which is a good thing, she added.

OCDO handles data governance, information collection, infrastructure, analytics, and data access including open data for the department, and its data professionals are involved in monitoring grants, performance measure analysis, distributing student financial aid, and planning for future budget and workforce needs.

New data scientist hires need to fit team needs, rather than being unicorns who can do it all, Boivin said.

Some prefer coding in Python, Stata, SAS or R and making data visualizations using Tableau, Power BI, R Shiny or Excel. But they also need domain knowledge and a strong foundation in statistics, Boivin said.

OCDO is considering university partnerships to better communicate hiring opportunities.

The office recently released a new Education Data Strategy, and one of its four goals is building human capacity to leverage data effectively for decision-making. That includes filling short- and long-term staffing needs, launching a data literacy program for general staff and creating an emerging program for education data professionals.

OCDO is also rolling out a Data Professionals Community of Practice and is structuring its curriculum with learning pathways, short topics, presentations, a mentoring program, and career paths and competencies for every General Schedule level in areas like data analytics and data architecture.

A final endeavor the office is undertaking is sponsoring rotational assignments that allow education staff to come in, learn and return to their home office with new data skills and a better understanding of the government’s data priorities, Boivin said.

What makes a good TMF proposal? Maria Roat has some tips

If you want to score money under the Technology Modernization Fund, you’re going to need to catch the attention of the TMF Board. And the best way to do that, Deputy Federal CIO Maria Roat says, is to really key in on a solid business case and get to the point.

“It’s about a business case, right — the CIO, the CFO, the mission and the alignment,” Roat said Thursday during IBM’s Think Gov 2021 event, produced by FedScoop. “And the initial project proposals need to get to the point.”

While the mission of the TMF is, broadly, to fund technology modernization projects, most of the evidence the TMF Board wants to see in proposals relates to the business aspects, said Roat, who sits on the TMF Board as an alternate member in her role as deputy federal CIO. The board that evaluates proposals and awards funding is currently made up of seven voting members and six alternate members.

“When you look at the questions [in the project proposal template], one of them is specifically about technology,” she said. “The rest of the questions are broadly about the business of what this proposal is, and what are those measures and what are those outcomes you’re trying to achieve? So the board is looking for the mission alignment, that it’s mission-focused — the partnership with the CFO, with the business owner. It’s not about an IT thing … it’s about solving a hard business problem. And this is where a thoughtfully crafted business case comes in.”

Often, the board receives proposals filled with extraneous information that doesn’t detail the actual work an agency hopes to get funded, Roat said. “Too often people are going on about their agencies. We know who your agency is, we know who you are. And if we’ve never heard of you, heaven forbid, we’ll go look it up. But you have to get to the point.”

At the end of the day, the initial proposal is meant to be a “low burden” so that the board can “maximize the number of projects” it analyzes, she said.

The TMF Board introduced an expedited process after the fund received $1 billion from the American Rescue Plan earlier this year, prioritizing selecting and funding projects “that cut across agencies, address immediate security gaps, and improve the public’s ability to access government services,” leaders announced in May. Along with that, it introduced new flexibilities in the agency repayment process built around those priority areas.

While the June 2 deadline for that expedited process has passed, Roat said the board is continuing to accept proposals and is “making sure that as those proposals are coming in, we’re doing very quick reviews of those.” To support that, the board has added alternate members so it can meet more frequently to assess proposals.

Additionally, the TMF Program Management Office has expanded to work more closely with agencies as they propose projects, helping them to come to the board with good proposals — again, focused on a business case — to save everyone time.

“They’ve done a great job over the last three years working with agencies and prepping those proposals, making sure that they’re in good shape even before they come to the board,” Roat said. “Having a good proposal coming into the board helps move things along a little bit faster. It expedites the board discussion.”

Industry presses government to invest in more practical quantum computing projects

Quantum computing industry experts urged agencies Wednesday to invest more of their budgets in practical projects that address mission needs while advancing commercial products.

The government’s ongoing quantum projects tend to focus on esoteric fields and theories like black hole edge conditions at NASA and high-energy physics at the Department of Energy, but that doesn’t help the Department of Commerce address more pressing issues like infrastructure and climate change, Christopher Savoie, CEO of Zapata Computing, said during a Center for Data Innovation event.

Moonshot-type projects are great in intention and often lead to unintended breakthrough developments, Savoie said, like how the Apollo program’s push to send a semiconductor-based computer to the moon helped create Silicon Valley. But lately, foreign adversaries like China have been more successful at getting their industrial and academic bases to work on practical projects.

“They have a lot of near-term commercial outcomes for government and for industry that they’re putting in place that incentivize people to move to more near-term commercial things,” he said.

DOE provides researchers access to its testbeds — through the Quantum User Expansion for Science and Technology (QUEST) program — and its National Quantum Information Science (QIS) Research Centers conduct research and development. But generally, the goal is to address a problem that industry can’t, before gracefully bowing out to allow the industry to become viable, said Rima Oueid, commercialization executive at the Office of Technology Transitions within DOE.

That doesn’t mean DOE can’t do more practical work in the program space though.

“Within DOE there is a cohort that is looking at some of the more viable use cases, some of the shorter-term wins that are possible that fit within our mission space,” Oueid said.

With quantum likely to remain in a hybrid state with classical computing for another eight years, breakthroughs are still being made in drug discovery, autonomous vehicles communicating with each other and allocating resources for emergency response, said Allison Schwartz, global government relations and public affairs lead at D-Wave Systems.

Still, private sector products remain in the early stages, and it’s important that policymakers make targeted investments to ensure small companies can supply industry with quantum-enabling technologies like lasers and cryogenic cooling moving forward, said Celia Merzbacher, executive director of the Quantum Economic Development Consortium.

Industry needs to be better about keeping government researchers abreast of practical applications, and researchers must similarly improve sharing the results of their work with the private sector, Merzbacher said.

“The different parts of the ecosystem, the innovation supply chain, need to be in communication so that the results of the research move efficiently to the people who can develop it and incorporate it into their products and services,” Merzbacher said.