TY - JOUR
T1 - Learning with Sharing
T2 - An Edge-Optimized Incremental Learning Method for Deep Neural Networks
AU - Hussain, Muhammad Awais
AU - Huang, Shih An
AU - Tsai, Tsung Han
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2023/4/1
Y1 - 2023/4/1
N2 - Incremental learning techniques aim to extend the capability of a pre-trained Deep Neural Network (DNN) model by adding new classes to it. However, DNNs suffer from catastrophic forgetting during the incremental learning process. Existing incremental learning techniques try to reduce the effect of catastrophic forgetting either by using samples of previous data while adding new classes to the model or by designing complex model architectures. This leads to high design complexity and memory requirements, which make it impossible to implement incremental learning on edge devices that have limited memory and computation resources. We therefore propose a new incremental learning technique, Learning with Sharing (LwS), based on the concept of transfer learning. The main aims of LwS are to reduce training complexity and storage memory requirements while achieving high accuracy during the incremental learning process. We clone and share fully connected (FC) layers to add new classes to the model incrementally. Our proposed technique preserves the knowledge of existing classes and adds new classes without storing data from the previous classes. We show that our proposed technique outperforms state-of-the-art techniques in accuracy on the CIFAR-100, Caltech-101, and UCSD Birds datasets.
AB - Incremental learning techniques aim to extend the capability of a pre-trained Deep Neural Network (DNN) model by adding new classes to it. However, DNNs suffer from catastrophic forgetting during the incremental learning process. Existing incremental learning techniques try to reduce the effect of catastrophic forgetting either by using samples of previous data while adding new classes to the model or by designing complex model architectures. This leads to high design complexity and memory requirements, which make it impossible to implement incremental learning on edge devices that have limited memory and computation resources. We therefore propose a new incremental learning technique, Learning with Sharing (LwS), based on the concept of transfer learning. The main aims of LwS are to reduce training complexity and storage memory requirements while achieving high accuracy during the incremental learning process. We clone and share fully connected (FC) layers to add new classes to the model incrementally. Our proposed technique preserves the knowledge of existing classes and adds new classes without storing data from the previous classes. We show that our proposed technique outperforms state-of-the-art techniques in accuracy on the CIFAR-100, Caltech-101, and UCSD Birds datasets.
KW - Incremental learning
KW - deep neural networks
KW - energy-efficient learning
KW - learning on-chip
KW - network sharing
UR - http://www.scopus.com/inward/record.url?scp=85139878477&partnerID=8YFLogxK
U2 - 10.1109/TETC.2022.3210905
DO - 10.1109/TETC.2022.3210905
M3 - Journal article
AN - SCOPUS:85139878477
SN - 2168-6750
VL - 11
SP - 461
EP - 473
JO - IEEE Transactions on Emerging Topics in Computing
JF - IEEE Transactions on Emerging Topics in Computing
IS - 2
ER -