
White House technology policy chief says AI bill of rights needs ‘teeth’

Procurement appears to be the likely enforcement mechanism, though laws and litigation are also needed, said the director of OSTP.

The White House Office of Science and Technology Policy’s bill of rights for an artificial intelligence-powered world needs “teeth,” in the form of procurement enforcement, said Director Eric Lander on Tuesday.

Many AI ethics proposals are little more than a set of basic expectations around governance, privacy, fairness, transparency and explainability, when laws and litigation are needed to back them up, Lander said during Stanford University’s Human-Centered AI Fall Conference.

Lander’s comments come after the Office of Science and Technology Policy (OSTP) issued a request for information last month on biometrics use cases — given the technologies’ wide adoption for identification, surveillance and behavioral analysis — to inform development of the AI bill of rights.

“We see this as a way not to limit innovation,” Lander said. “We see this [as a way] to improve the quality of products by not rewarding people who cut corners and instead setting ground rules to reward people who produce safe, effective, fair, equitable products.”


OSTP chose to focus its bill of rights on AI systems involved in decisions that benefit or harm people, rather than on their quality, safety or research applications, because a single approach might not fit every technology.

The agency is asking questions like: What might the right to govern your own personal data mean?

“Right now, of course, it largely consists of notice and consent, terms of service that are very long, very hard to interpret and force you to say, ‘OK,’ if you want to use the product because you’re in a rush,” Lander said.

Establishing a right of consent to data use could lead to a layered consent model in which owners decide what purposes their data may be used for; whether that data can be sold, so long as other users abide by the same restrictions; when to withdraw data and have it deleted; and whether an agent can make those decisions on their behalf.
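As a rough illustration only, such a layered record might look like the following Python sketch; the field names and structure are hypothetical, not drawn from any OSTP proposal:

    from __future__ import annotations
    from dataclasses import dataclass, field

    # Hypothetical sketch of a layered consent record. Field names are
    # illustrative, not from any OSTP document.
    @dataclass
    class ConsentRecord:
        owner_id: str
        allowed_purposes: set[str] = field(default_factory=set)  # e.g. {"research"}
        resale_permitted: bool = False      # any buyer must inherit the same limits
        delegated_agent: str | None = None  # an agent may manage consent for the owner
        withdrawn: bool = False             # owner has requested deletion

        def permits(self, purpose: str) -> bool:
            """A use is allowed only if consent stands and covers the purpose."""
            return not self.withdrawn and purpose in self.allowed_purposes

    record = ConsentRecord("user-123", allowed_purposes={"fraud_detection"})
    assert record.permits("fraud_detection")
    assert not record.permits("targeted_advertising")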

A right to fairness is tricky because of the many competing definitions of algorithmic fairness currently in circulation, Lander said.
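To see why the choice of definition matters, consider a toy comparison (invented here, not from Lander’s talk) of two widely cited criteria, demographic parity and equal opportunity, which can disagree about the very same predictions:

    # Toy illustration: two common fairness definitions can disagree
    # about the same classifier. All data below is invented.
    def selection_rate(preds):
        return sum(preds) / len(preds)

    def true_positive_rate(labels, preds):
        hits = [p for l, p in zip(labels, preds) if l == 1]
        return sum(hits) / len(hits)

    # Hypothetical predictions for two demographic groups.
    labels_a, preds_a = [1, 1, 0, 0], [1, 1, 0, 0]
    labels_b, preds_b = [1, 0, 0, 0], [1, 0, 0, 0]

    # Demographic parity compares selection rates: 0.50 vs. 0.25 -> violated.
    print(selection_rate(preds_a), selection_rate(preds_b))
    # Equal opportunity compares true positive rates: 1.0 vs. 1.0 -> satisfied.
    print(true_positive_rate(labels_a, preds_a), true_positive_rate(labels_b, preds_b))

Which criterion a regulator or court adopts changes which systems count as fair, which is the ambiguity Lander flagged.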


People may end up demanding that the datasets used to create and train AI be fully described and represent all subgroups, even if that means oversampling, and that performance be comparable across those subgroups. Lander gave the example of a facial recognition algorithm with high overall accuracy that performs poorly among minority passengers, which he called problematic.
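A minimal sketch of the kind of disaggregated evaluation that surfaces such a gap; the numbers here are synthetic, chosen only to illustrate the effect:

    from collections import defaultdict

    # Overall accuracy can hide poor performance on a small subgroup,
    # as in Lander's facial recognition example. Data is synthetic.
    def accuracy_by_group(records):
        """records is a list of (group, truth, prediction) triples."""
        hits, totals = defaultdict(int), defaultdict(int)
        for group, truth, pred in records:
            totals[group] += 1
            hits[group] += int(truth == pred)
        return {g: hits[g] / totals[g] for g in totals}

    # 90 majority-group samples at ~98% accuracy, 10 minority at 60%.
    data = ([("majority", 1, 1)] * 88 + [("majority", 1, 0)] * 2
            + [("minority", 1, 1)] * 6 + [("minority", 1, 0)] * 4)

    overall = sum(t == p for _, t, p in data) / len(data)
    print(overall)                  # 0.94 -- looks high in aggregate
    print(accuracy_by_group(data))  # majority ~0.98, minority 0.60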

Biometrics algorithms also raise civil liberties questions, such as whether the government should engage in surveillance at all.

OSTP is also weighing what the right to transparency entails, from algorithm certification by third-party auditors to open testing or source code.

People also may want the right to know how AI came to a decision about their government benefits, which could lead to a “rights by design” requirement for developers, Lander said.

“The best way to do that is to ensure that the developers of AI have a diverse range of perspectives and experiences, so you catch issues early,” Lander said.


Exactly how to ensure rights by design remains to be seen, but Lander could foresee court cases in which plaintiffs argue that a developer’s failure to include enough perspectives in the design of its AI led to problems.

AI ethicists argue that industry dominates AI research, but Lander said that makes sense given the extensive infrastructure required. Still, some companies have proven willing to share their algorithms openly with academia, which continues to do cutting-edge work, he said.

“We have to have a corporate sector that views itself as responsible stewards of an ecosystem as well,” Lander said. “And that will come from values and connections between the academic and the corporate world and pushing people and reminding people of what they need to do to make that work.”
