TY - JOUR
T1 - Detecting Targets of Graph Adversarial Attacks With Edge and Feature Perturbations
AU - Lee, Boyi
AU - Jhang, Jhao Yin
AU - Yeh, Lo Yao
AU - Chang, Ming Yi
AU - Chen, Chia Mei
AU - Shen, Chih Ya
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2024/6/1
Y1 - 2024/6/1
N2 - Graph neural networks (GNNs) enable many novel applications and achieve excellent performance. However, their performance may be significantly degraded by graph adversarial attacks, which intentionally add small perturbations to the graph. Previous countermeasures usually handle such attacks by enhancing model robustness. However, robust models cannot identify the target nodes of the adversarial attacks, so we are unable to pinpoint the weak spots and analyze the causes or targets of the attacks. In this article, we study the important research problem of detecting the target nodes of graph adversarial attacks under the black-box detection scenario, which is particularly challenging because our detection models have no knowledge about the attacker, while attackers usually employ unnoticeability strategies to minimize the chance of being detected. To the best of our knowledge, this is the first work that aims at detecting the target nodes of graph adversarial attacks under the black-box detector scenario. We propose two detection models, named Det-H and Det-RL, which employ different techniques to effectively detect the target nodes under the black-box detection scenario against various graph adversarial attacks. To enhance the generalization of the proposed detectors, we further propose two novel surrogate attackers that are able to generate effective attack examples and camouflage their attack traces for training robust detectors. In addition, we propose three strategies to effectively improve the training efficiency. Experimental results on multiple datasets show that our proposed detectors significantly outperform the other baselines against multiple state-of-the-art graph adversarial attackers with various attack strategies. The proposed Det-RL detector achieves an average area under the curve (AUC) of 0.945 against all the attackers, and our efficiency-improving strategies are able to save up to 91% of the training time.
AB - Graph neural networks (GNNs) enable many novel applications and achieve excellent performance. However, their performance may be significantly degraded by graph adversarial attacks, which intentionally add small perturbations to the graph. Previous countermeasures usually handle such attacks by enhancing model robustness. However, robust models cannot identify the target nodes of the adversarial attacks, so we are unable to pinpoint the weak spots and analyze the causes or targets of the attacks. In this article, we study the important research problem of detecting the target nodes of graph adversarial attacks under the black-box detection scenario, which is particularly challenging because our detection models have no knowledge about the attacker, while attackers usually employ unnoticeability strategies to minimize the chance of being detected. To the best of our knowledge, this is the first work that aims at detecting the target nodes of graph adversarial attacks under the black-box detector scenario. We propose two detection models, named Det-H and Det-RL, which employ different techniques to effectively detect the target nodes under the black-box detection scenario against various graph adversarial attacks. To enhance the generalization of the proposed detectors, we further propose two novel surrogate attackers that are able to generate effective attack examples and camouflage their attack traces for training robust detectors. In addition, we propose three strategies to effectively improve the training efficiency. Experimental results on multiple datasets show that our proposed detectors significantly outperform the other baselines against multiple state-of-the-art graph adversarial attackers with various attack strategies. The proposed Det-RL detector achieves an average area under the curve (AUC) of 0.945 against all the attackers, and our efficiency-improving strategies are able to save up to 91% of the training time.
KW - Detection
KW - graph adversarial attacks
KW - machine learning
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85183940956&partnerID=8YFLogxK
U2 - 10.1109/TCSS.2023.3344642
DO - 10.1109/TCSS.2023.3344642
M3 - Journal article
AN - SCOPUS:85183940956
SN - 2329-924X
VL - 11
SP - 3218
EP - 3231
JO - IEEE Transactions on Computational Social Systems
JF - IEEE Transactions on Computational Social Systems
IS - 3
ER -