Project Details
Description
Deep learning technology has advanced rapidly in recent years and has produced notable results in many fields. However, as deep learning moves toward real-world deployment and widespread adoption, the security of AI models becomes a key factor in whether they can be broadly used. Many methods for attacking AI models have already been proposed, some of them targeting face recognition and self-driving environments. For example, wearing specially crafted glasses can cause a face recognition system to misidentify the wearer, and adding imperceptible noise to a traffic-light image can cause a traffic-light recognition system to fail, which may lead to serious harm.

The research goal of this project is to analyze and study possible attack methods against AI models, and to propose a set of sustainable AI-model defense technologies that can resist current and foreseeable attacks. In addition, we will design a robustness benchmark for AI models: after a model has been designed and trained, running the developed benchmark test reveals the model's level of defensive capability, so that its designers can understand its security level.

Finally, as more and more deep learning models run on handheld or edge-computing devices and return data to the cloud for further, more refined learning and improvement, this project will also address how to protect personal privacy and strengthen the security of AI models in that setting. The project will be carried out over a three-year period.
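The "invisible noise" attack mentioned above is typically realized as an adversarial perturbation such as the Fast Gradient Sign Method (FGSM). As a minimal sketch (not part of this project's deliverables), the idea can be shown on a logistic-regression "model", where the input gradient of the loss has a closed form and no deep learning framework is needed; the model weights and epsilon value below are illustrative assumptions:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(d loss / d x).

    For binary cross-entropy on a logistic model sigmoid(w.x + b),
    the input gradient is (sigmoid(w.x + b) - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted probability
    grad_x = (p - y) * w                           # closed-form input gradient
    return x + eps * np.sign(grad_x)               # bounded adversarial example

# Toy demo: a model that classifies by the sign of the first feature.
w = np.array([2.0, 0.0])
b = 0.0
x = np.array([0.3, 0.0])   # correctly classified as class 1 (w.x + b > 0)
y = 1.0
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
# The perturbed input crosses the decision boundary (w.x_adv + b < 0),
# even though each feature changed by at most eps.
```

A robustness benchmark of the kind this project proposes would, in essence, measure how a model's accuracy degrades as `eps` grows.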
| Status | Active |
|---|---|
| Effective start/end date | 1/02/23 → 31/07/25 |
Keywords
- AI model attack and defense
- deep learning
- privacy protection