Implementation of FPGA-based Accelerator for Deep Neural Networks

Tsung Han Tsai, Yuan Chen Ho, Ming Hwa Sheu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

22 Scopus citations

Abstract

At present, there is much research on applying deep neural networks (DNNs) to everyday applications. In object recognition tasks, deep convolutional neural networks (CNNs) perform well, but they rely on GPUs to handle a large number of complex operations. The hardware acceleration of DNNs has therefore attracted wide attention. Implementing a DNN model in hardware requires handling complex connection relationships and scheduling memory usage. This paper presents the design of an FPGA-based accelerator for DNNs. The proposed architecture is implemented on a Xilinx Zynq-7020 FPGA. It achieves low latency and low resource usage in the MNIST digit recognition task while maintaining a 96% recognition rate.
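
The paper itself does not include source code. As a hedged sketch of the kind of arithmetic such an accelerator maps onto FPGA logic, the C example below implements a single fully connected DNN layer as a fixed-point multiply-accumulate loop followed by ReLU. The Q8.8 format, layer sizes, and the function name fc_layer_fixed are illustrative assumptions, not details taken from the paper.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    #define FRAC_BITS 8   /* Q8.8 fixed-point format (illustrative assumption) */

    /* out[j] = ReLU( bias[j] + sum_i in[i] * w[j*n_in + i] ), all values in Q8.8 */
    static void fc_layer_fixed(const int16_t *in, size_t n_in,
                               const int16_t *w, const int16_t *bias,
                               int16_t *out, size_t n_out)
    {
        for (size_t j = 0; j < n_out; ++j) {
            /* widen the bias to the Q16.16 accumulator scale */
            int32_t acc = (int32_t)bias[j] * (1 << FRAC_BITS);
            for (size_t i = 0; i < n_in; ++i)
                acc += (int32_t)in[i] * (int32_t)w[j * n_in + i]; /* Q8.8 * Q8.8 = Q16.16 */
            acc >>= FRAC_BITS;                     /* rescale back to Q8.8 */
            if (acc < 0) acc = 0;                  /* ReLU activation */
            if (acc > INT16_MAX) acc = INT16_MAX;  /* saturate on overflow */
            out[j] = (int16_t)acc;
        }
    }

    int main(void)
    {
        /* Toy 4-input, 2-output layer; all constants are made up for illustration. */
        const int16_t in[4]   = { 1 << FRAC_BITS, 2 << FRAC_BITS, 0, 1 << FRAC_BITS };
        const int16_t w[2*4]  = { 1 << FRAC_BITS, 0, 0, 1 << FRAC_BITS,
                                  0, 1 << FRAC_BITS, 1 << FRAC_BITS, 0 };
        const int16_t bias[2] = { 0, -(1 << FRAC_BITS) };
        int16_t out[2];

        fc_layer_fixed(in, 4, w, bias, out, 2);
        for (int j = 0; j < 2; ++j)
            printf("out[%d] = %.2f\n", j, out[j] / (double)(1 << FRAC_BITS));
        return 0;
    }

In a hardware implementation, the inner multiply-accumulate loop would typically be unrolled and pipelined across parallel DSP blocks rather than executed sequentially as in this software sketch.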

Original language: English
Title of host publication: Proceedings - 2019 22nd International Symposium on Design and Diagnostics of Electronic Circuits and Systems, DDECS 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728100739
DOIs
State: Published - Apr 2019
Event: 22nd International Symposium on Design and Diagnostics of Electronic Circuits and Systems, DDECS 2019 - Cluj-Napoca, Romania
Duration: 24 Apr 2019 → 26 Apr 2019

Publication series

Name: Proceedings - 2019 22nd International Symposium on Design and Diagnostics of Electronic Circuits and Systems, DDECS 2019

Conference

Conference: 22nd International Symposium on Design and Diagnostics of Electronic Circuits and Systems, DDECS 2019
Country/Territory: Romania
City: Cluj-Napoca
Period: 24/04/19 → 26/04/19

Keywords

  • CNN
  • DNN
  • FPGA
  • hardware accelerator
  • low utilization
