Europe’s fight for AI transparency faces staff and pacing challenges. The US can take note.

Two European algorithm-focused outfits, one in government and one outside, show the challenges of governing in the AI age.
A view of the building that houses the European Center for Algorithmic Transparency in Seville, Spain. (Photo courtesy of the Joint Research Centre)

In the United States, there’s still no broad-based artificial intelligence legislation. Nevertheless, myriad federal agencies, driven in part by the AI executive order, have begun hiring staff equipped to confront — and potentially study and audit — artificial intelligence systems. Private groups, including nonprofits and civil society organizations, have worked to do the same.

To understand the technology and its potential harms related to discrimination, privacy, civil rights, or the exposure of sensitive scientific information, experts want to pry open the black box and uncover how algorithmic systems actually work. To do that, they typically need two critical ingredients: technical expertise, and access to data and model architecture.

Across the pond, those living in European Union member states face similar challenges, as experts look at how to enforce several already-approved laws, including Europe’s Digital Services Act and the new Artificial Intelligence Act, which entered into force earlier this year.

Of course, the matter of investigating artificial intelligence is complicated by the European Union’s confederal structure. Member states operate their own courts and have their own digital laws, but European officials can also advance technology policy goals through the European Parliament and the European Commission. Many of these efforts have been buoyed by the Digital Services Act and, more recently, Europe’s Artificial Intelligence Act, the continental body’s first comprehensive law on the technology. Non-governmental organizations also navigate these dynamics while trying to use the new laws to bring more transparency to the technology and its impact.

Europe’s path thus far provides important lessons for the U.S. as it pursues a stronger artificial intelligence regulatory platform. It also reveals challenges that both will face, particularly around amassing technical expertise and adapting to the quick pace of developing technology. Two organizations exemplify those challenges: the European Center for Algorithmic Transparency, which was established under Europe’s Digital Services Act and is based in Seville, Spain, and AlgorithmWatch, an algorithmic transparency-focused nonprofit based in Zurich, Switzerland, and Berlin, Germany.

In Seville, ECAT sits just outside the old city, in a European Union office building constructed in the 1990s. The ECAT staff at the facility — the second-largest under Europe’s Joint Research Centre — are only a fraction of the hundreds who work there across a range of issues, including fiscal policy, energy, and transportation. The office surrounds a quiet internal courtyard with an adjoining cafe.

Photo credit: Joint Research Centre

The employees at ECAT help the European Commission fulfill its responsibilities to oversee very large online platforms, or VLOPs, and very large online search engines, or VLOSEs. Under the DSA, large platforms like PornHub, the China-based retail service AliExpress, and the American company Meta are subject to increased scrutiny. The center provides the Commission with the technical expertise needed for that oversight, while also conducting longer-term research that could help guide public discourse on the technology.

Right now, the focus is on the algorithmic systems these platforms might deploy. That work includes algorithmic system assessments, technical tests of those systems, and efforts to make data more broadly accessible to researchers.

No large language model company is currently designated a VLOP or VLOSE, though it’s possible a company like OpenAI could eventually attract enough users to fall into one of those categories. 

When the creation of ECAT was first announced, TechCrunch reported that it was expected to play a “major role” in how Europe might approach its large platforms. It is based within the Joint Research Centre but also works with the EU’s Directorate-General for Communications Networks, Content and Technology, which can send ECAT requests. 

Still, its activities are somewhat confidential, since they contribute to ongoing investigations under ECAT’s role in assisting with the enforcement of the Digital Services Act. The organization helps form the questions and requests needed to support the technical component of the law’s enforcement. The European Commission has already taken a series of actions against technology companies, including Meta.

“So far our research has supported the foundations of the AI Act, and at every stage of its development, our work has informed and guided policymakers by offering deep technical understanding and sound scientific advice, across the full legislative lifecycle of the AI Act,” the Centre said in a statement following FedScoop’s visit. 

ECAT added: “We have been instrumental in helping policymakers make sense of AI terminology, and are actively contributing to the development of AI standards that support the adoption of trustworthy AI practices in line with the European regulation.”

But the Centre’s challenges also hint at issues experienced in the U.S. ECAT has 35 full-time staffers, but like federal agencies in the U.S., it often struggles to hire people weighing handsome salaries in the private sector. Another hurdle is the sheer pace of technological change, which makes collaboration between scientists and regulators more important — but also makes it difficult to form agile policies that can adapt to new technological advancements.

AI evaluation is an emerging field, which means even creating benchmarks for measuring a technology can present challenges. 

“I would say that the biggest challenge we are addressing is to try to get practical approaches for things that are still novel and for things that are still evolving, for which there’s no standard evaluation methodologies for algorithms,” one ECAT researcher told FedScoop. “We [also] need to have methodologies that can be applied at the policy side.” 

“When talking about challenges, the first one was to create this center itself. The second was related to an industry that changes amazingly fast, thus requiring high capacity to adapt,” another ECAT staffer cautioned. “The third is the environment we operate in — with a complex geopolitical context and a new unique legislation, with no previous references for comparison.” 

Looking forward, the unit said that in its first year-and-a-half, it’s learned “to tailor our approach to the specific needs and contexts of different organizations and sectors, especially in such a new scientific field of work.” Algorithmic transparency, the organization said, cannot be pursued with a “one-size-fits-all” approach. 

“Continuous learning is essential as the AI landscape rapidly evolves,” the statement said. “We must be prepared to adapt and respond to new challenges and opportunities as they arise in the European and global markets.” 

ECAT isn’t the only organization based in Europe focused on artificial intelligence. The European AI Office is also supposed to assemble expertise in the technology within the European Commission, the EU’s executive branch. The group is intended to help evaluate general-purpose AI models, apply sanctions, and generally enforce rules for these large models. But hiring has been a challenge, especially given private-sector pay. The concern is that the gap in technical capacity between regulators and the companies they hope to regulate will continue to widen, threatening potential accountability measures.

Clara Helming, a senior advocacy and policy manager at AlgorithmWatch, a nonprofit focused on the relationship between democracy, civil rights, and artificial intelligence, notes that in the EU, salaries at organizations like the AI Office and ECAT do not attract people from the private technology sector. ECAT, she noted, appears to be staffed largely by former academics.

“ECAT is already publishing some papers in scientific journals but this doesn’t (and possibly may not for legal reasons) draw on their privileged access to platforms,” Helming said in an email. “They are largely doing systematic reviews and traditional academic work, which is still useful but not groundbreaking new approaches yet.”

She added: “It does seem helpful to have these people in the same building as people supporting investigations into particular platforms for enforcement reasons.”

There might be another benefit, too. Groups like AlgorithmWatch may soon be empowered by new tools made available under the AI Act, according to Nikolett Aszódi, who also works as a policy and advocacy manager at AlgorithmWatch. 

Specifically, the AI Act requires the establishment of a new register for high-risk AI systems, with the goal of making more information about these systems public and jumpstarting the kind of work done by those evaluating AI outside the government.

“Deployers who are public authorities or persons acting on their behalf are obliged to register the use of the system in the database, too. Information about these high-risk (the law lays down which systems qualify as high-risk and the criteria for derogation from it) systems and their uses will be publicly accessible which can serve public interest organisations to conduct research about them,” she said. “This is important — yet not sufficient alone — for holding those who develop and use AI systems accountable and for public scrutiny.”

One challenge Aszódi highlighted is that for areas deemed to be rights-impacting — around issues including law enforcement, migration, asylum and border control — information will be registered in a non-public section of the repository. To truly hold platforms accountable, AlgorithmWatch is calling for end-to-end algorithmic auditing.

American efforts

In the U.S., the fight for algorithmic transparency is also underway. That effort is spread among those in civil society, members of Congress, federal agencies, and representatives of U.S. technology interests abroad. Of course, it’s not clear where these efforts might go in the second Trump administration.

On hiring, there’s been some progress.

Earlier this fall, a spokesperson for the Office of Personnel Management — which functions as the hiring agency for much of the federal government — pointed to a series of AI hiring efforts. Several U.S. agencies — including the Federal Trade Commission, the Equal Employment Opportunity Commission, the Consumer Financial Protection Bureau, and the National Labor Relations Board — issued a call earlier this year to strengthen their tech capacity. As part of the AI Talent Surge, the U.S. Digital Corps has also placed AI experts at regulatory agencies, while the CFPB has promoted public interest technology jobs.

To help staff up on artificial intelligence, the White House has launched an AI hiring spree meant to bring experts in the technology into the U.S. government. It’s not clear to what extent that push distinguishes between people who develop artificial intelligence systems and people who evaluate them — whether for potential purchase or for regulatory action. In some cases, the skills, such as understanding model architecture and evaluating model performance, might overlap.

A White House AI Hiring Task Force report from April noted that technical expertise is needed “to create and enforce effective policies and regulations to ensure AI is effective, equitable, and rights-respecting.” The report acknowledged that the government simply could not compete with the private sector on salary, but could try to offer incentive pay, pay flexibility, and remote programs.

Within agencies, there are also efforts to understand how these systems work and to create platforms for studying them.

The National Institute of Standards and Technology has devoted serious resources to creating evaluation metrics for artificial intelligence, including benchmarking facial recognition systems and releasing new frameworks. The agency, which falls under the Commerce Department, has released an artificial intelligence risk management framework that’s supposed to help entities conduct their own algorithmic auditing. 

Similarly, the EEOC, charged with investigating employment discrimination, has launched an artificial intelligence and fairness initiative, meant to investigate how software automation might exacerbate biased hiring decisions.

Documents obtained by FedScoop, meanwhile, show the Federal Aviation Administration has studied artificial intelligence systems to see how they might perform in evaluating aviation systems. 

The idea of a national AI registry isn’t unheard of in the U.S., and some at the Carnegie Endowment have also called for a large model registry, though a private one. Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., have also proposed legislation that would, among other efforts, require large model developers to register with an independent oversight body.

Of course, as the Biden administration begins to wind down and the Trump transition team plans its AI policy, it’s even less apparent what progress toward AI transparency might look like. 

Still, the government will almost certainly need to beef up its AI staffing to achieve the technology growth goals it has expressed, and to make more data available so that these systems can be understood both inside and outside of government.

Research for this article was made possible with the support of the Heinrich Böll Foundation, Washington, DC’s Transatlantic Media Fellowship.

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox's tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.
