Abstract
Machine Learning-based Intrusion Detection Systems (ML-IDS) have become increasingly popular because they require no manual rule updates and recognize attack variants better. However, because ML-IDS raises data privacy issues, the Federated Learning-based IDS (FL-IDS) was proposed. In each round of federated learning, every participant first trains its local model and sends the model's weights to the global server, which then aggregates the received weights and distributes the aggregated global model back to the participants. An attacker can mount poisoning attacks, including label-flipping attacks and backdoor attacks, to directly generate a malicious local model and thereby indirectly pollute the global model. A few studies currently defend against poisoning attacks, but they only discuss label-flipping attacks in the image domain. Therefore, we propose a two-phase defense mechanism, called Defending Poisoning Attacks in Federated Learning (DPA-FL), applied to intrusion detection. The first phase employs relative differences to quickly compare weights between participants, because the local models of attackers and benign participants differ considerably. The second phase tests the aggregated model against a dataset and, when its accuracy is low, tries to find the attackers. Experiment results show that DPA-FL can reach 96.5% accuracy in defending against poisoning attacks. Compared with other defense mechanisms, DPA-FL can improve the F1-score by 20∼64% under backdoor attacks. Also, DPA-FL can exclude the attackers within twelve rounds when the attackers are few.
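To make the two-phase flow concrete, below is a minimal, self-contained Python/NumPy sketch of one server-side round. It is an illustration under assumptions, not the paper's implementation: the function names (`relative_difference`, `phase_one_filter`, `phase_two_probe`, `evaluate`), the median/MAD outlier rule, the leave-one-out probe, the thresholds, and the toy linear model are hypothetical stand-ins for the mechanisms the abstract names.

```python
# Hypothetical sketch of a DPA-FL-style two-phase screen over flattened
# weight vectors, with a server-side test set. Not the paper's actual code.
import numpy as np

rng = np.random.default_rng(0)

def relative_difference(w_i, w_j):
    """L2 distance between two weight vectors, scaled by their magnitudes."""
    return np.linalg.norm(w_i - w_j) / (
        np.linalg.norm(w_i) + np.linalg.norm(w_j) + 1e-12)

def phase_one_filter(weights, k=2.0):
    """Phase 1: keep participants whose median relative difference to the
    others is not an outlier (median + k * MAD rule, an assumed criterion)."""
    n = len(weights)
    scores = np.array([
        np.median([relative_difference(weights[i], weights[j])
                   for j in range(n) if j != i])
        for i in range(n)
    ])
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12
    return scores <= med + k * mad

def evaluate(w, X, y):
    """Accuracy of a toy linear scorer sign(X @ w) against labels in {0, 1}."""
    return ((X @ w > 0).astype(int) == y).mean()

def phase_two_probe(weights, keep, X, y, acc_floor=0.9):
    """Phase 2: if the aggregated model scores poorly on the server-side
    test set, drop the remaining participant whose removal helps most."""
    idx = [i for i in range(len(weights)) if keep[i]]
    agg = np.mean([weights[i] for i in idx], axis=0)
    if evaluate(agg, X, y) < acc_floor and len(idx) > 1:
        drop = max(idx, key=lambda i: evaluate(
            np.mean([weights[j] for j in idx if j != i], axis=0), X, y))
        idx.remove(drop)
        agg = np.mean([weights[i] for i in idx], axis=0)
    return agg, idx

# Synthetic demo: 9 benign participants near the true weights, 1 poisoned.
w_true = rng.normal(size=8)
weights = [w_true + 0.05 * rng.normal(size=8) for _ in range(9)]
weights.append(-w_true)                    # label-flipped local model
X = rng.normal(size=(500, 8))
y = (X @ w_true > 0).astype(int)           # server-side test set

keep = phase_one_filter(weights)
agg, kept = phase_two_probe(weights, keep, X, y)
print("kept participants:", kept)          # attacker at index 9 should be excluded
print("aggregated accuracy:", evaluate(agg, X, y))
```

The split mirrors the abstract's design: phase 1 is cheap (only pairwise weight comparisons, no inference), while the more expensive phase-2 accuracy probe runs only when the aggregated model underperforms.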
Original language | English
---|---
Article number | 103205
Journal | Computers and Security
Volume | 129
DOIs |
Publication status | Published - June 2023