TY - GEN
T1 - A Comparative Study of Cross-Model Universal Adversarial Perturbation for Face Forgery
AU - Lin, Shuo Yen
AU - Chen, Jun Cheng
AU - Wang, Jia Ching
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
AB - Although the rapid development of deep generative models (DGMs) enables diverse content-creation applications, increasing illegal use of these technologies also severely threatens the privacy and security of personal information, especially faces. Several previous works leverage adversarial attacks to counter such malicious manipulation by adding an imperceptible perturbation to each input image that disrupts the model output. To further improve scalability, a sequential cross-model universal perturbation attack has been proposed to learn a common adversarial perturbation that protects images from manipulation by multiple DGMs. However, we find that the order in which the DGMs are used during perturbation generation matters and influences the final defense performance. To address this issue, we propose to generate the universal perturbation through joint optimization over multiple DGMs. Extensive experimental results show that the universal perturbation generated by the proposed method successfully disrupts the output faces of multiple DGMs at the same time and achieves higher attack success rates than the previous state-of-the-art method based on sequential generation, even when the robustness of the DGMs is enhanced by random perturbations.
KW - adversarial example
KW - deep generative model
KW - generative adversarial network
KW - universal adversarial perturbation
UR - http://www.scopus.com/inward/record.url?scp=85147256762&partnerID=8YFLogxK
DO - 10.1109/VCIP56404.2022.10008794
M3 - Conference contribution
AN - SCOPUS:85147256762
T3 - 2022 IEEE International Conference on Visual Communications and Image Processing, VCIP 2022
BT - 2022 IEEE International Conference on Visual Communications and Image Processing, VCIP 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE International Conference on Visual Communications and Image Processing, VCIP 2022
Y2 - 13 December 2022 through 16 December 2022
ER -