Invisible Adversarial Attacks on Deep Learning-Based Face Recognition Models

Chih Yang Lin, Feng Jie Chen, Hui Fuang Ng, Wei Yang Lin

Research output: Contribution to journal › Article › peer-review


Abstract

Deep learning technology has grown rapidly in recent years and achieved tremendous success in computer vision. Many deep learning technologies are now applied in daily life, such as face recognition systems. However, as human life increasingly relies on deep neural networks, their potential harms are being revealed, particularly with respect to security. A growing number of studies have shown that existing deep learning-based face recognition models are vulnerable to adversarial samples, resulting in misjudgments that could have serious consequences. However, existing adversarial face images are easy to identify with the naked eye, which makes it difficult for attackers to carry out attacks on face recognition systems in practice. This paper proposes a method, based on facial landmark detection and superpixel segmentation, for generating adversarial face images that are indistinguishable from the source images. First, the eyebrow, eye, nose, and mouth regions are extracted from the face image using a facial landmark detection algorithm. Next, a superpixel segmentation algorithm expands these regions to neighboring pixels with similar values. Lastly, the segmented regions serve as masks that guide existing attack methods to insert adversarial noise only within the masked areas. Experimental results show that our method can generate adversarial samples with high Structural Similarity Index Measure (SSIM) values at the cost of only a small drop in attack success rate. In addition, to simulate real-world physical attacks, printouts of the adversarial images generated by the proposed method are presented to the face recognition system through a camera and are still able to fool the face recognition model. These results indicate that the proposed method can successfully perform adversarial attacks on face recognition systems in real-world scenarios.
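The abstract only names the pipeline stages, not the exact tooling, so the following is a minimal sketch of the mask-guided attack under stated assumptions: dlib's standard 68-point landmark model for the eyebrow/eye/nose/mouth regions, SLIC from scikit-image as the superpixel algorithm, and FGSM standing in for the "existing attack method" being masked. The model file name and all parameter values (`n_segments`, `compactness`, `eps`) are illustrative, not the paper's settings.

```python
import dlib
import numpy as np
import torch
import torch.nn.functional as F
from skimage.segmentation import slic

# dlib's standard 68-point facial landmark model (file name is the
# conventional one distributed with dlib; an assumption, not from the paper).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")


def landmark_mask(image_u8, n_segments=200):
    """Boolean HxW mask covering superpixels that contain facial landmarks."""
    face = detector(image_u8, 1)[0]  # assumes exactly one detected face
    shape = predictor(image_u8, face)
    # Indices 17-67 of the 68-point model cover eyebrows, eyes, nose, mouth
    # (0-16 trace the jawline, which the abstract's regions exclude).
    points = [(shape.part(i).x, shape.part(i).y) for i in range(17, 68)]
    # SLIC groups neighboring pixels with similar values into superpixels,
    # which grows each landmark into a small, visually coherent region.
    segments = slic(image_u8, n_segments=n_segments, compactness=10)
    hit = {segments[y, x] for (x, y) in points}  # superpixels touched by a landmark
    return np.isin(segments, list(hit))


def masked_fgsm(model, x, label, mask, eps=8 / 255):
    """FGSM whose perturbation is zeroed outside the landmark mask.

    x:    1x3xHxW float tensor in [0, 1]
    mask: 1x1xHxW float tensor (1 inside the facial regions, 0 elsewhere)
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    noise = eps * x.grad.sign() * mask  # confine the noise to the masked regions
    return (x + noise).clamp(0, 1).detach()
```

A hypothetical usage: `mask = torch.from_numpy(landmark_mask(img))[None, None].float()` followed by `x_adv = masked_fgsm(model, x, label, mask)`. Imperceptibility can then be checked the way the abstract evaluates it, e.g. with `skimage.metrics.structural_similarity(src, adv, channel_axis=-1)`; confining the noise to facial regions raises SSIM relative to whole-image attacks, at the cost of some attack success rate.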

Original language: English
Pages (from-to): 51567-51577
Number of pages: 11
Journal: IEEE Access
Volume: 11
DOIs:
State: Published - 2023

Keywords

  • Adversarial attack
  • Deep learning
  • Face recognition
