TY - GEN
T1 - A Light-Weight Defect Detection System for Edge Computing
AU - Huang, Hsiang Ting
AU - Chiu, Tzu Yi
AU - Lin, Chia Yu
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Recently, many factories have utilized AI to help Automatic Optical Inspection (AOI) machines detect defects accurately. AI models are usually deployed in the cloud, and data are submitted to the cloud for inference. However, transmission delay increases the response time of the AI model. If defects can be detected on local edge devices, production efficiency can be improved significantly. In this paper, we propose a light-weight defect detection system that utilizes pruning techniques to compress the model and can accurately detect defects at a faster speed. In addition, we compare the performance of pruned and unpruned models on the Kneron KL520 AI dongle and the NVIDIA Jetson Nano to verify the ability of pruning to accelerate inference. The accuracy of the pruned model in the proposed system reaches 97.7% on the Kneron KL520 AI dongle. The inference speed is 28.2 frames per second, 1.6 times faster than the unpruned model. Moreover, inference on the Kneron KL520 AI dongle is twice as fast as on the NVIDIA Jetson Nano, demonstrating that the KL520 outperforms the Jetson Nano in inference. In summary, the proposed system can significantly improve the efficiency of production lines and avoid the information security risks of cloud computing.
UR - http://www.scopus.com/inward/record.url?scp=85138722176&partnerID=8YFLogxK
U2 - 10.1109/ICCE-Taiwan55306.2022.9868995
DO - 10.1109/ICCE-Taiwan55306.2022.9868995
M3 - Conference contribution
AN - SCOPUS:85138722176
T3 - Proceedings - 2022 IEEE International Conference on Consumer Electronics - Taiwan, ICCE-Taiwan 2022
SP - 521
EP - 522
BT - Proceedings - 2022 IEEE International Conference on Consumer Electronics - Taiwan, ICCE-Taiwan 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE International Conference on Consumer Electronics - Taiwan, ICCE-Taiwan 2022
Y2 - 6 July 2022 through 8 July 2022
ER -