Learning with Sharing: An Edge-Optimized Incremental Learning Method for Deep Neural Networks

Muhammad Awais Hussain, Shih An Huang, Tsung Han Tsai

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Incremental learning techniques aim to extend the capability of a Deep Neural Network (DNN) model by adding new classes to a pre-trained model. However, DNNs suffer from catastrophic forgetting during the incremental learning process. Existing incremental learning techniques try to reduce the effect of catastrophic forgetting either by retaining samples from previous classes while adding new classes to the model or by designing complex model architectures. This leads to high design complexity and memory requirements, which make incremental learning impractical on edge devices with limited memory and computation resources. We therefore propose a new incremental learning technique, Learning with Sharing (LwS), based on the concept of transfer learning. The main aims of LwS are to reduce training complexity and storage memory requirements while achieving high accuracy during the incremental learning process. We clone and share fully connected (FC) layers to add new classes to the model incrementally. The proposed technique preserves the knowledge of existing classes and adds new classes without storing data from the previous classes. We show that it outperforms state-of-the-art techniques in accuracy on the Cifar-100, Caltech-101, and UCSD Birds datasets.
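
The abstract's idea of adding classes through cloned and shared FC layers, without replaying old data, can be illustrated with a small sketch. The code below is not the authors' LwS implementation; it is a minimal, hypothetical PyTorch example of the general pattern the abstract describes: a shared, frozen feature extractor plus a new trainable FC head per increment, with earlier heads frozen so previous classes are preserved and no old samples need to be stored. The `IncrementalClassifier` class, the `add_classes` method, and the ResNet-18 backbone in the usage snippet are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class IncrementalClassifier(nn.Module):
    """Frozen shared backbone plus one FC head per learned group of classes (illustrative sketch)."""

    def __init__(self, backbone: nn.Module, feature_dim: int, num_base_classes: int):
        super().__init__()
        self.backbone = backbone
        self.feature_dim = feature_dim
        # Freeze the shared feature extractor so earlier knowledge is not overwritten.
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.heads = nn.ModuleList([nn.Linear(feature_dim, num_base_classes)])

    def add_classes(self, num_new_classes: int) -> nn.Linear:
        # Freeze every existing head: decision boundaries for old classes stay untouched.
        for head in self.heads:
            for p in head.parameters():
                p.requires_grad = False
        # Only the new head is trained, so no samples from old classes are needed.
        new_head = nn.Linear(self.feature_dim, num_new_classes)
        self.heads.append(new_head)
        return new_head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        # Concatenate logits from all heads: old classes first, then new ones.
        return torch.cat([head(feats) for head in self.heads], dim=1)


# Usage sketch with an assumed ResNet-18 feature extractor (512-dim features).
import torchvision.models as models

backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()  # expose the 512-dim feature vector
model = IncrementalClassifier(backbone, feature_dim=512, num_base_classes=50)
new_head = model.add_classes(10)  # later, add 10 new classes
optimizer = torch.optim.SGD(new_head.parameters(), lr=0.01)  # train only the new head
```

Because only the newly added head has trainable parameters, the per-increment training cost and storage stay small, which is the edge-oriented trade-off the abstract emphasizes; the paper's actual cloning and sharing scheme may differ from this simplified multi-head view.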

Original language: English
Pages (from-to): 461-473
Number of pages: 13
Journal: IEEE Transactions on Emerging Topics in Computing
Volume: 11
Issue number: 2
DOIs
State: Published - 1 Apr 2023

Keywords

  • Incremental learning
  • deep neural networks
  • energy-efficient learning
  • learning on-chip
  • network sharing

