Memory Access Optimization for On-Chip Transfer Learning

Muhammad Awais Hussain, Tsung Han Tsai

Research output: Contribution to journal › Journal article › Peer-reviewed

4 Citations (Scopus)

Abstract

Training of Deep Neural Networks (DNNs) at the edge faces the challenge of high energy consumption due to the large number of memory accesses required for gradient calculations. It is therefore necessary to minimize data fetches when training a DNN model on the edge. In this paper, a novel technique is proposed to reduce memory accesses during the training of fully connected layers in transfer learning. By analyzing the memory access patterns of the backpropagation phase in fully connected layers, the memory accesses can be optimized. We introduce a new method to update the weights by introducing a delta term for every node of the output and fully connected layers. The delta term reduces memory accesses for the parameters that must be fetched repeatedly during the training of fully connected layers. The proposed technique shows 0.13x-13.93x energy savings for the training of fully connected layers of well-known DNN architectures on multiple processor architectures. The proposed technique can be used to perform transfer learning on-chip, reducing both energy consumption and memory accesses.
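The abstract describes caching a per-node delta term so that values needed for backpropagation in a fully connected layer are fetched once per node rather than once per weight. The following is a minimal sketch of that general idea, not the authors' implementation; the function name, ReLU activation, and learning-rate parameter are illustrative assumptions.

```python
import numpy as np

def fc_backward_with_delta(a_in, z_out, grad_out, W, b, lr=0.01):
    """One training step for a fully connected layer y = relu(W @ a_in + b).

    a_in     : (n_in,)  activations entering the layer
    z_out    : (n_out,) pre-activations of the layer
    grad_out : (n_out,) gradient of the loss w.r.t. the layer's output
    """
    # Delta term: one value per output node, computed and stored once,
    # instead of being re-derived for each of the n_out * n_in weights.
    delta = grad_out * (z_out > 0)          # ReLU derivative folded in

    # Gradient passed to the preceding layer reuses the same cached delta.
    grad_in = W.T @ delta

    # Weight and bias updates also read the cached delta and each input
    # activation once, rather than repeatedly fetching the raw error terms.
    W -= lr * np.outer(delta, a_in)
    b -= lr * delta
    return grad_in
```

In this sketch the savings come from reading `grad_out` and `z_out` once per output node and reusing `delta` for the weight update, bias update, and upstream gradient; how the delta terms are actually stored and scheduled on-chip is specific to the paper's hardware design.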

Original language: English
Article number: 9352020
Pages (from - to): 1507-1519
Number of pages: 13
Journal: IEEE Transactions on Circuits and Systems I: Regular Papers
Volume: 68
Issue number: 4
DOIs
Publication status: Published - Apr 2021
