Robust Model for Adversarial Attack Protection through Weak Features Removal

Pratomo Adinegoro, Chin Chun Chang, Deron Liang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This study focuses on the challenge of attackers imitating user behavior in continuous authentication systems and proposes a defense against such adversarial attacks. We conduct experiments to analyze the effectiveness of weak-feature removal in preventing attackers from imitating legitimate users. The experiments cover individual weak feature (IWF) removal and common weak feature (CWF) removal, using feature ranking methods to identify the weak features. Preliminary results show that the proposed method outperforms the baseline model in defending against both single and multiple attackers. Ongoing work involves building a defense system that can handle different types of attacks on various victims by clustering users with similar behavior.
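
The IWF/CWF removal described above can be illustrated with a short sketch. This is only a hypothetical outline, assuming mutual-information feature ranking (scikit-learn) as one possible ranking method; the paper's actual ranking criteria, thresholds, and per-user procedures are not given in this abstract, so the function names and parameters below are illustrative.

```python
# Illustrative sketch of weak-feature removal via feature ranking.
# Not the authors' implementation: ranking method, k, and min_users are assumptions.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features(X, y):
    """Rank features by mutual information with the user label (higher = stronger)."""
    scores = mutual_info_classif(X, y, random_state=0)
    return np.argsort(scores)  # column indices ordered from weakest to strongest

def remove_weak_features(X, weak_idx):
    """Drop the columns identified as weak before training the authentication model."""
    return np.delete(X, weak_idx, axis=1)

def iwf_removal(X_user, y_user, k=5):
    """Individual weak-feature (IWF) removal: drop the k weakest features per user."""
    weak_idx = rank_features(X_user, y_user)[:k]
    return remove_weak_features(X_user, weak_idx), weak_idx

def cwf_removal(per_user_weak_sets, n_features, min_users=3):
    """Common weak-feature (CWF) removal: features weak for at least min_users users."""
    counts = np.zeros(n_features, dtype=int)
    for weak_idx in per_user_weak_sets:
        counts[weak_idx] += 1
    return np.where(counts >= min_users)[0]
```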

Original language: English
Title of host publication: Proceedings - 2023 10th International Conference on Dependable Systems and Their Applications, DSA 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 583-584
Number of pages: 2
ISBN (Electronic): 9798350304770
DOIs
State: Published - 2023
Event: 10th International Conference on Dependable Systems and Their Applications, DSA 2023 - Tokyo, Japan
Duration: 10 Aug 2023 - 11 Aug 2023

Publication series

Name: Proceedings - 2023 10th International Conference on Dependable Systems and Their Applications, DSA 2023

Conference

Conference: 10th International Conference on Dependable Systems and Their Applications, DSA 2023
Country/Territory: Japan
City: Tokyo
Period: 10/08/23 - 11/08/23

Keywords

  • adversarial attack
  • continuous authentication
  • defense system
  • weak feature removal
