Scientists create new neural network strategy for spotting ‘spoofed’ faces

April 6, 2021

New technology could defeat attempts to spoof facial recognition. (AP Photo/Alexander Zemlianichenko)

Some of our most advanced facial-recognition systems can be thwarted by tactics out of a dystopian sci-fi movie: People using two-dimensional printouts of faces or realistic masks can trick those systems into thinking they've authenticated the right person. A Chinese and British team has tackled these so-called spoofing attacks, introducing a new two-stage framework for detecting them that fuses multiple types of data to identify fake faces.

In the study, published March 8 in IEEE Transactions on Cognitive and Developmental Systems, the researchers' anti-spoofing pipeline first vets faces using a neural network called D-Net. Designed to pick up on obviously fake faces, it analyzes images that have been preprocessed to identify marked deviations in facial depth and similar characteristics. If D-Net does not spot a spoof, the system fuses RGB color data with infrared data and feeds the result to another neural network, M-Net. Designed to detect more convincing spoofs, including three-dimensional masks, M-Net gets the last word on whether a face is real or fake.
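The cascade logic can be pictured roughly as follows. This is a minimal sketch of the two-stage control flow described above, not the authors' published code: the names D-Net and M-Net, the depth input and the RGB-plus-infrared fusion come from the study's description, while the layer sizes, the 0.5 decision threshold and the simple channel-wise concatenation standing in for the paper's data fusion are illustrative assumptions.

```python
# Illustrative sketch of a two-stage anti-spoofing cascade (not the paper's code).
import torch
import torch.nn as nn

class DNet(nn.Module):
    """Stage 1: screens for obvious spoofs using a depth-derived input."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, depth_map):
        # Probability that the face is a spoof, judged from depth cues alone.
        return torch.sigmoid(self.features(depth_map))

class MNet(nn.Module):
    """Stage 2: fuses RGB and infrared channels to catch subtler spoofs such as masks."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),  # 3 RGB channels + 1 IR channel
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, rgb, ir):
        fused = torch.cat([rgb, ir], dim=1)  # channel-wise fusion (an assumption here)
        return torch.sigmoid(self.features(fused))

def is_real_face(depth_map, rgb, ir, d_net, m_net, threshold=0.5):
    """Two-stage decision: D-Net rejects obvious fakes, M-Net gets the last word."""
    if d_net(depth_map).item() > threshold:      # stage 1 flags an obvious spoof
        return False
    return m_net(rgb, ir).item() <= threshold    # stage 2 decides the remaining cases
```

The key design point the sketch mirrors is that the depth input is consulted only in the first stage; anything that survives that screen is judged entirely on the fused color and infrared data.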

"When we applied and evaluated our system, we tried to use the same databases as some of these best [predecessors] so we could compare our results," Asoke K. Nandi, a professor of electronic and computer engineering at Brunel University London, Distinguished Visiting Professor in the mechanical engineering department at Xi'an Jiaotong University, and a coauthor of the study, told The Academic Times. "The best ones are about 97% accurate. We have been able to get to 99% on these specific datasets."

According to Nandi, the system's use of spectral data helps give it an edge. "The reflection depends on what wavelength it is, and so the same surface will have different reflectance at different wavelengths," he explained. A person's skin has many subtle variations in color, whereas a mask tends to be more uniform — and more spectral data can make that uniformity more obvious.
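One way to make that intuition concrete, purely as an illustration and not as the paper's method: if each pixel is observed at several wavelengths, real skin tends to vary across those channels while a mask's reflectance stays comparatively flat. The function name and the variance heuristic below are assumptions introduced for this example.

```python
# Illustrative only -- not the study's algorithm. Treats low variation across
# spectral channels (e.g., R, G, B, IR) as a hint that the surface is a
# uniform mask rather than skin.
import numpy as np

def spectral_uniformity_score(face_stack):
    """face_stack: array of shape (channels, height, width), one channel per wavelength."""
    per_pixel_std = face_stack.std(axis=0)   # how much each pixel changes across wavelengths
    return float(per_pixel_std.mean())       # low score -> suspiciously uniform surface

rng = np.random.default_rng(0)
skin_like = rng.uniform(0.2, 0.9, size=(4, 64, 64))                        # varied reflectance
mask_like = np.repeat(rng.uniform(0.4, 0.6, size=(1, 64, 64)), 4, axis=0)  # nearly flat
print(spectral_uniformity_score(skin_like), spectral_uniformity_score(mask_like))
```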

The use of different inputs at different stages also worked to the system's advantage, according to Nandi. "For the first stage, we used the depth input," he said. "For the second stage, we don't look at the depth input any more — and that's important."

This particular research was supported by the National Natural Science Foundation of China, as well as the U.K.'s Royal Society.

More and more technologies are capitalizing on biometrics to protect users. Yet facial-recognition systems have often met with controversy, eliciting critiques from the ACLU and others as a potential threat to privacy. In the past several years, activists and others have developed techniques to avoid being tracked by such platforms, including during worldwide protests against police brutality in the summer of 2020.

Nandi sees his own work in more ethically neutral terms. "A kitchen knife is a tool, and everyone has access to such a tool," he said. "One can use it for good or for bad. Would it be right to blame the kitchen knife?"

"Our work was not carried out in relation to protestors or violent criminals," Nandi emphasized. "It is purely an intellectual curiosity and a challenge. Can we do this, and can we do this better? The answers to both questions are affirmative."

He also highlighted our collective overreliance on written passwords, which have proliferated to such an extent that many people suffer from so-called password fatigue. "Why do you need passwords?" Nandi asked. "There are other, much more natural ways — and we do forget passwords."

Nandi came to anti-spoofing from a background in high-energy physics and, more recently, signal processing. "Some people, in some sense rightly, distinguish between signal processing and image processing," he explained. "But, for me, they're just a different modality."

"Anti-spoofing is, for us, an application — it's not really theoretical stuff," Nandi said, adding a prediction for the future: "There will be more automation in terms of identity detection." He theorized that companies or government agencies that have been successfully spoofed may be reluctant to publicly disclose it because their public image might suffer.

While Nandi acknowledges that his team's framework could be improved, perhaps through the use of hyperspectral imaging, he stresses its strong performance on many different datasets — it bested a number of leading alternatives, including FaceBagNet and FeatherNets.

"You're trying to squeeze something that's already at 99%," Nandi said. "But if that's a million [people], that's a lot of people you're mistaking. If you think of fingerprint recognition, it's not as good as these numbers."

The study, "Data Fusion based Two-stage Cascade Framework for Multi-Modality Face Anti-Spoofing," published March 8 in IEEE Transactions on Cognitive and Developmental Systems, was authored by Weihua Liu, Shaanxi University of Science and Technology and the Orbbec Company; Xiaokang Wei, Xi'an Jiaotong University; Tao Lei and Xingwu Wang, Shaanxi University of Science and Technology; Hongying Meng, Brunel University London; and Asoke K. Nandi, Brunel University London and Xi'an Jiaotong University.
