Invisible Adversarial Attacks on Deep Learning-Based Face Recognition Models

Chih Yang Lin, Feng Jie Chen, Hui Fuang Ng, Wei Yang Lin

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Deep learning technology has grown rapidly in recent years and achieved tremendous success in the field of computer vision. Many deep learning technologies are now applied in daily life, such as face recognition systems. However, as human life increasingly relies on deep neural networks, their potential harms are being revealed, particularly in terms of security. A growing number of studies have shown that existing deep learning-based face recognition models are vulnerable to adversarial samples, and the resulting misjudgments can have serious consequences. However, existing adversarial face images are rather easy to identify with the naked eye, which makes it difficult for attackers to carry out attacks on face recognition systems in practice. This paper proposes a method, based on facial landmark detection and superpixel segmentation, for generating adversarial face images that are indistinguishable from the source images. First, the eyebrow, eye, nose, and mouth regions are extracted from the face image using a facial landmark detection algorithm. Next, a superpixel segmentation algorithm expands the extracted landmark regions to include neighboring pixels with similar values. Lastly, the segmented regions are used as masks that guide existing attack methods to insert adversarial noise only within the masked areas. Experimental results show that our method can generate adversarial samples with high Structural Similarity Index Measure (SSIM) values at the cost of a small reduction in attack success rate. In addition, to simulate real-time physical attacks, printouts of the adversarial images generated by the proposed method are presented to the face recognition system via a camera and still fool the face recognition model, indicating that the proposed method can successfully carry out adversarial attacks on face recognition systems in real-world scenarios.
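The three-step pipeline in the abstract (landmark extraction, superpixel masking, mask-guided noise insertion) can be illustrated with a short sketch. The code below is a minimal, hypothetical rendering that assumes scikit-image's SLIC for superpixel segmentation, a PyTorch face-embedding model, and a single FGSM-style step as the underlying attack; the paper's actual landmark detector, attack method, and all parameter values are not given in this abstract, so every name and number here is an assumption rather than the authors' configuration.

```python
# Hypothetical sketch of the mask-guided attack described in the abstract:
# facial-landmark regions -> superpixel mask -> masked adversarial noise.
# Library choices (skimage / PyTorch) and parameters are assumptions.
import numpy as np
import torch
from skimage.segmentation import slic

def build_landmark_mask(image, landmarks, n_segments=300, compactness=10.0):
    """Select superpixels that contain at least one facial landmark.

    image:     HxWx3 float array in [0, 1]
    landmarks: (N, 2) array of (x, y) coordinates for the eyebrows,
               eyes, nose, and mouth (detector not specified here)
    Returns an HxW binary mask covering the selected superpixels,
    i.e. the landmarks plus neighboring pixels with similar values.
    """
    segments = slic(image, n_segments=n_segments, compactness=compactness)
    hit = {segments[int(y), int(x)] for x, y in landmarks}
    return np.isin(segments, list(hit)).astype(np.float32)

def masked_fgsm(model, image, target_embedding, mask, eps=8 / 255):
    """One FGSM-style step restricted to the masked facial regions.

    model maps a 1x3xHxW tensor to an identity embedding; the step
    pushes the image's embedding away from target_embedding, with
    perturbations allowed only inside the mask.
    """
    x = torch.as_tensor(image.transpose(2, 0, 1))[None].float()
    x.requires_grad_(True)
    emb = model(x)
    loss = torch.nn.functional.cosine_similarity(emb, target_embedding).mean()
    loss.backward()
    m = torch.as_tensor(mask)[None, None].float()  # broadcast over channels
    x_adv = x - eps * m * x.grad.sign()            # noise only inside mask
    return x_adv.clamp(0, 1).detach()[0].numpy().transpose(1, 2, 0)
```

In practice the mask would more likely be applied inside an iterative attack such as PGD, re-projecting the perturbation onto the masked region after every step; the single-step form is shown only to keep the sketch self-contained.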

Original language: English
Pages (from-to): 51567-51577
Number of pages: 11
Journal: IEEE Access
Volume: 11
DOIs
Publication status: Published - 2023
