Memory Access Optimization for On-Chip Transfer Learning

Muhammad Awais Hussain, Tsung Han Tsai

Research output: Contribution to journal › Article › peer-review


Abstract

Training Deep Neural Networks (DNNs) at the edge faces the challenge of high energy consumption because gradient calculations require a large number of memory accesses. It is therefore necessary to minimize data fetches when training a DNN model on the edge. In this paper, a novel technique is proposed to reduce memory accesses during the training of fully connected layers in transfer learning. By analyzing the memory access patterns of the backpropagation phase in fully connected layers, the memory accesses can be optimized. We introduce a new method to update the weights by introducing a delta term for every node of the output and fully connected layers. The delta term reduces memory accesses for the parameters that must be accessed repeatedly during the training of fully connected layers. The proposed technique shows 0.13x-13.93x energy savings for the training of fully connected layers of well-known DNN architectures on multiple processor architectures. The proposed technique can be used to perform transfer learning on-chip, reducing both energy consumption and memory accesses.
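
As a rough illustration only (not the paper's exact formulation or memory-access schedule), the sketch below shows a plain NumPy backward pass for a single fully connected layer in which the per-node error term (the "delta") is computed once and then reused for every weight update of that node, instead of being re-fetched inside the inner update loop. All names here (fc_backward_update, x, w, grad_out, lr) are illustrative assumptions.

import numpy as np

def fc_backward_update(x, w, grad_out, lr=0.01):
    """Illustrative backward pass for one fully connected layer.

    x        : (n_in,)        input activations of the layer
    w        : (n_out, n_in)  weight matrix, updated in place
    grad_out : (n_out,)       gradient of the loss w.r.t. the layer's outputs
    lr       : float          learning rate
    Returns the gradient w.r.t. the layer input for the preceding layer.
    """
    # Per-node delta term: held once per output node and reused for all
    # n_in weight updates of that node, rather than re-read repeatedly.
    delta = grad_out                      # (n_out,)

    # Gradient for the preceding layer, computed before the weights change.
    grad_in = w.T @ delta                 # (n_in,)

    # Weight update as a single outer product that reuses delta and x.
    w -= lr * np.outer(delta, x)          # (n_out, n_in)

    return grad_in

In this form the delta values and the input activations are each loaded once per update step; the actual scheme in the paper targets the corresponding repeated parameter fetches during on-chip transfer learning.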

Original language: English
Article number: 9352020
Pages (from-to): 1507-1519
Number of pages: 13
Journal: IEEE Transactions on Circuits and Systems I: Regular Papers
Volume: 68
Issue number: 4
DOIs
State: Published - Apr 2021

Keywords

  • deep neural networks
  • fully connected layers
  • on-chip training
  • optimized memory access
  • transfer learning
