Detecting Targets of Graph Adversarial Attacks With Edge and Feature Perturbations

Boyi Lee, Jhao Yin Jhang, Lo Yao Yeh, Ming Yi Chang, Chia Mei Chen, Chih Ya Shen

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Graph neural networks (GNNs) enable many novel applications and achieve excellent performance. However, their performance may be significantly degraded by graph adversarial attacks, which intentionally add small perturbations to the graph. Previous countermeasures usually handle such attacks by enhancing model robustness. However, robust models cannot identify the target nodes of the adversarial attacks, and thus we are unable to pinpoint the weak spots and analyze the causes or the targets of the attacks. In this article, we study the important research problem of detecting the target nodes of graph adversarial attacks under the black-box detection scenario, which is particularly challenging because our detection models do not have any knowledge about the attacker, while attackers usually employ unnoticeability strategies to minimize the chance of being detected. To the best of our knowledge, this is the first work that aims at detecting the target nodes of graph adversarial attacks under the black-box detector scenario. We propose two detection models, named Det-H and Det-RL, which employ different techniques that effectively detect the target nodes under the black-box detection scenario against various graph adversarial attacks. To enhance the generalization of the proposed detectors, we further propose two novel surrogate attackers that are able to generate effective attack examples and camouflage their attack traces for training robust detectors. In addition, we propose three strategies to effectively improve the training efficiency. Experimental results on multiple datasets show that our proposed detectors significantly outperform the other baselines against multiple state-of-the-art graph adversarial attackers with various attack strategies. The proposed Det-RL detector achieves an average area under the curve (AUC) of 0.945 against all the attackers, and our efficiency-improving strategies are able to save up to 91% of the training time.
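To make the setting concrete, the toy sketch below illustrates what an edge-and-feature perturbation around a single target node looks like, and how a detector might assign suspicion scores to nodes. It is not the paper's Det-H or Det-RL method; the graph, the perturbation routine, and the neighborhood-consistency heuristic are all hypothetical and shown only for intuition.

```python
# Illustrative sketch only: a toy edge/feature perturbation aimed at one
# target node, plus a naive neighborhood-consistency score for flagging
# candidate targets. All names and the heuristic are hypothetical; this is
# NOT the Det-H/Det-RL detectors proposed in the article.
import numpy as np

rng = np.random.default_rng(0)

# Small random undirected graph: 8 nodes, symmetric adjacency, 2-D features.
n = 8
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T
X = rng.normal(size=(n, 2))

def perturb_target(A, X, target, n_edge_flips=2, feat_noise=0.5, rng=rng):
    """Flip a few edges incident to `target` and nudge its features slightly."""
    A_atk, X_atk = A.copy(), X.copy()
    others = rng.choice([v for v in range(len(A)) if v != target],
                        size=n_edge_flips, replace=False)
    for v in others:
        A_atk[target, v] = A_atk[v, target] = 1.0 - A_atk[target, v]
    X_atk[target] += feat_noise * rng.normal(size=X.shape[1])
    return A_atk, X_atk

def consistency_score(A, X):
    """Higher score = a node's features deviate more from its neighborhood mean."""
    deg = A.sum(1, keepdims=True).clip(min=1.0)
    neigh_mean = (A @ X) / deg
    return np.linalg.norm(X - neigh_mean, axis=1)

target = 3
A_atk, X_atk = perturb_target(A, X, target)
scores = consistency_score(A_atk, X_atk)
print("candidate target (highest score):", scores.argmax(), "true target:", target)
```

The black-box difficulty is visible even in this sketch: a careful attacker keeps `n_edge_flips` and `feat_noise` small enough that the perturbed node's score barely rises above normal variation, which is why the article trains learned detectors against surrogate attackers rather than relying on a fixed heuristic like the one above.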

Original language: English
Pages (from-to): 3218-3231
Number of pages: 14
Journal: IEEE Transactions on Computational Social Systems
Volume: 11
Issue number: 3
DOIs
Publication status: Published - 1 Jun. 2024
