
Facial recognition algorithms are getting a lot better, NIST study finds

The technology has undergone an "industrial revolution" in the past five years.

Facial recognition software has made huge gains in accuracy in the past five years, a new study from the National Institute of Standards and Technology asserts.

In fact, the technology has undergone an “industrial revolution” that has cut the error rates of the best algorithms by a factor of roughly 20 when searching a database for a matching face. These numbers come from the most recent edition of NIST’s Ongoing Face Recognition Vendor Test (FRVT), which was released last week. The report is a follow-up to research done in 2010 and 2014.

NIST tested 127 algorithms developed by 45 vendors — a number the agency says represents “the bulk of the industry” — using a primary database of 26.6 million reasonably well-controlled portrait photos of 12.3 million individuals. Given good-quality photos, the most accurate algorithm was able to identify matches with only a 0.2 percent error rate. For context, the same test saw at least a 4 percent failure rate in 2014 and a 5 percent failure rate in 2010.
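That drop from a 4 or 5 percent failure rate to 0.2 percent is where the roughly 20-fold figure comes from. Here is a minimal sketch of the arithmetic, assuming the quoted percentages are comparable failure rates for the same kind of one-to-many search (the numbers are the ones reported here, not figures pulled directly from the NIST report):

```python
# Illustrative arithmetic only: how a drop from a 4 percent to a 0.2 percent
# failure rate translates into a roughly 20-fold reduction in errors.
error_rate_2010 = 0.05   # 5 percent failure rate reported for 2010
error_rate_2014 = 0.04   # 4 percent failure rate reported for 2014
error_rate_2018 = 0.002  # 0.2 percent error rate for the best current algorithm

improvement_vs_2014 = error_rate_2014 / error_rate_2018
improvement_vs_2010 = error_rate_2010 / error_rate_2018
print(f"~{improvement_vs_2014:.0f}x fewer errors than 2014")  # ~20x
print(f"~{improvement_vs_2010:.0f}x fewer errors than 2010")  # ~25x
```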

NIST also tested the algorithms on a database of “wild images” — images of people’s faces from photojournalism and amateur photography. The best-performing algorithms in this batch include those developed by Microsoft, IDEMIA and the Chinese facial recognition startup Yitu.


The secret to the improvement? Among other things, NIST says, it’s the widespread adoption of convolutional neural networks, a machine learning technique that wasn’t in wide use for face recognition in 2014. “The accuracy gains stem from the integration, or complete replacement, of prior approaches with those based on deep convolutional neural networks,” the report reads. “As such, face recognition has undergone an industrial revolution, with algorithms increasingly tolerant of poor quality images. Whether the revolution continues or has moved into a more evolutionary phase, further gains can be expected as machine learning architectures further develop, larger datasets are assembled and benchmarks are further utilized.”
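In practice, “searching databases and finding matches” with a convolutional network usually means turning each face into an embedding vector and then running a nearest-neighbor search over enrolled vectors. The sketch below illustrates that general pattern only; it is not NIST’s benchmark code or any vendor’s algorithm, and the embed function is a placeholder standing in for a trained network.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Placeholder for a trained deep convolutional network that maps a face
    image to a fixed-length, unit-normalized embedding (here, 512 floats)."""
    # Deterministic stand-in so the sketch runs; a real system would pass the
    # image through a CNN trained on millions of labeled faces.
    seed = abs(hash(face_image.tobytes())) % (2**32)
    vec = np.random.default_rng(seed).standard_normal(512)
    return vec / np.linalg.norm(vec)

def identify(probe_embedding, gallery, threshold=0.6):
    """Compare a probe embedding against enrolled embeddings by cosine
    similarity and return the best match if it clears the threshold."""
    best_id, best_score = None, -1.0
    for person_id, enrolled in gallery.items():
        score = float(np.dot(probe_embedding, enrolled))  # both are unit vectors
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Usage: enroll two hypothetical identities, then search with a probe image.
gallery = {
    "person_a": embed(np.zeros((112, 112, 3), dtype=np.uint8)),
    "person_b": embed(np.ones((112, 112, 3), dtype=np.uint8)),
}
match, score = identify(embed(np.zeros((112, 112, 3), dtype=np.uint8)), gallery)
print(match, round(score, 3))  # the probe matches person_a's enrollment
```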

“The test shows a wholesale uptake by the industry of convolutional neural networks, which didn’t exist five years ago,” Patrick Grother, a NIST computer scientist and one of the report’s authors, said in a statement. “About 25 developers have algorithms that outperform the most accurate one we reported in 2014.”

But there’s an important caveat to all the improvements made — performance is not evenly distributed. “Recognition accuracy is very strongly dependent on the algorithm, and more generally on the developer of the algorithm,” the report states. “Recognition error rates in a particular scenario range from a few tenths of one percent up to beyond fifty percent. Thus algorithms from some developers are quite un-competitive and should not be deployed.”

In other words, the most accurate algorithms are, in many cases, much more accurate than those at the back of the pack. “This implies you need to properly consider accuracy when you’re selecting new-generation software,” Grother warned.

Plus, even well-performing algorithms struggle with certain naturally occurring challenges, like poor-quality photos, aging faces or even twins. None of the high-performing algorithms that NIST tested could correctly distinguish between twins.


In 2019, NIST plans to release two more reports on facial recognition accuracy — one detailing the results from an additional 90 algorithms submitted by 49 developers, and another on “demographic dependencies in face recognition.”

As facial recognition algorithms become more widely used, accuracy is a major concern. Just last week, a group of Democratic lawmakers sent a letter to Amazon CEO Jeff Bezos expressing their “serious concern” about the accuracy (or lack thereof) of Amazon’s Rekognition software.

“Facial recognition technology may one day serve as a useful tool for law enforcement officials working to protect the American public and keep us safe,” the lawmakers wrote. “However, at this time, we have serious concerns that this type of product has significant accuracy issues, places disproportionate burdens on communities of color, and could stifle Americans’ willingness to exercise their First Amendment rights in public.”
