Entropy-Boosted Adversarial Patch for Concealing Pedestrians in YOLO Models

Chih Yang Lin, Tun Yu Huang, Hui Fuang Ng, Wei Yang Lin, Isack Farady

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

In recent years, rapid advances in hardware and deep learning have enabled the broad integration of image recognition and object detection into everyday applications. As reliance on deep learning grows, so do concerns about the vulnerabilities of deep neural networks, underscoring the need to address potential security issues. This research introduces the Entropy-boosted Loss, a novel loss function tailored to generate adversarial patches that resemble potted plants. Designed for the YOLOv2, YOLOv3, and YOLOv4 object detectors, these patches impair the detectors' ability to identify individuals. By increasing the uncertainty of the predicted class probabilities, a person wearing an adversarial patch crafted with our proposed loss function becomes far less detectable by YOLO models, achieving the desired adversarial effect. This underscores the importance of understanding how vulnerable YOLO models are to adversarial attacks, particularly for individuals seeking to conceal their presence from camera-based detection. Our experiments, conducted on the INRIA person dataset and under real-time network-camera conditions, confirm the effectiveness of our method. Moreover, our technique also performs well in virtual try-on environments.
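The page does not reproduce the paper's exact formulation, but the abstract's core idea, optimizing a patch so that the detector's class distribution becomes maximally uncertain, can be illustrated with a minimal PyTorch sketch. Everything below is a hypothetical stand-in rather than the authors' implementation: `ToyDetector` substitutes for a real YOLOv2/v3/v4 forward pass, `paste_patch` is a naive compositing helper, and all shapes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# --- Hypothetical stand-ins (not the paper's code) --------------------------
class ToyDetector(nn.Module):
    """Toy head mapping an image to per-box class logits; a real attack
    would query a pretrained YOLOv2/v3/v4 detector here instead."""
    def __init__(self, num_boxes=8, num_classes=80):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(8 * 4 * 4, num_boxes * num_classes)
        self.num_boxes, self.num_classes = num_boxes, num_classes

    def forward(self, x):
        return self.head(self.features(x)).view(-1, self.num_boxes, self.num_classes)

def paste_patch(image, patch, top=100, left=100):
    """Naive compositing: overwrite a fixed region of the image with the
    patch (gradients still flow back into the patch pixels)."""
    out = image.clone()
    out[..., top:top + patch.shape[-2], left:left + patch.shape[-1]] = patch
    return out

# --- Entropy-boosting objective (sketch) -------------------------------------
def entropy_boost_loss(class_logits):
    """Negative mean Shannon entropy of the per-box class distributions.
    Minimizing this loss maximizes entropy, flattening the class
    probabilities so no confident 'person' prediction survives."""
    log_probs = F.log_softmax(class_logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)
    return -entropy.mean()

detector = ToyDetector().eval()
image = torch.rand(1, 3, 416, 416)               # stand-in pedestrian image
patch = torch.rand(3, 120, 120, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.03)

for step in range(100):
    logits = detector(paste_patch(image, patch))
    loss = entropy_boost_loss(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)                   # keep the patch a valid image
```

In practice, published patch attacks of this kind usually add further terms (e.g., objectness suppression, total-variation smoothness, printability) and apply random transformations so the patch transfers to the physical world; the snippet above isolates only the entropy-maximization idea named in the abstract.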

Original language: English
Pages (from-to): 32772-32779
Number of pages: 8
Journal: IEEE Access
Volume: 12
DOIs
State: Published - 2024

Keywords

  • Adversarial attack
  • deep learning
  • entropy
