An Efficient and Fast Softmax Hardware Architecture (EFSHA) for Deep Neural Networks

Muhammad Awais Hussain, Tsung Han Tsai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

12 Scopus citations

Abstract

Deep neural networks (DNNs) are widely used in computer vision applications due to their high performance. However, DNNs involve a large number of computations in both the training and inference phases. Among the different layers of a DNN, the softmax layer has one of the most complex computations, as it involves exponentiation and division operations. A hardware-efficient implementation is therefore required to reduce on-chip resource usage. In this paper, we propose a new hardware-efficient and fast implementation of the softmax activation function. The proposed hardware implementation consumes fewer hardware resources and operates at a higher speed than state-of-the-art techniques.
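For context, the softmax function that such architectures must realize in hardware is, for an N-element input vector x (the max-subtraction form on the right is the standard numerical-stability rewrite used in most implementations; it is shown here only for reference and is not necessarily the paper's EFSHA datapath):

\mathrm{softmax}(x_i) = \frac{e^{x_i}}{\sum_{j=1}^{N} e^{x_j}} = \frac{e^{x_i - x_{\max}}}{\sum_{j=1}^{N} e^{x_j - x_{\max}}}, \qquad x_{\max} = \max_{j} x_j

Each output element thus requires an exponentiation and a division, on top of an N-term accumulation, which is why exponent and divider units dominate the cost of a naive hardware softmax.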

Original language: English
Title of host publication: 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems, AICAS 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665419130
DOIs
State: Published - 6 Jun 2021
Event: 3rd IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2021 - Washington, United States
Duration: 6 Jun 2021 - 9 Jun 2021

Publication series

Name: 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems, AICAS 2021

Conference

Conference: 3rd IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2021
Country/Territory: United States
City: Washington
Period: 6/06/21 - 9/06/21

Keywords

  • FPGA
  • Softmax layer
  • area-efficient implementation
  • deep neural networks
  • learning on-chip
