Convolutional Neural Networks Inference Accelerator Design using Selective Convolutional Layer

Tzu Huan Huang*, I. Chyn Wey, Emil Goh, T. Hui Teo

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Convolutional Neural Networks (CNNs) often require a huge number of multiplications. Current approaches to multiplication reduction require data preprocessing, which is power-hungry and time-consuming. This paper proposes an Image-wised Selective Processing Engine (SPE-I) that accelerates CNN processing by eliminating unessential operations through algorithm-hardware co-design. SPE-I compares the similarity of two input images and identifies redundant calculations that can be skipped. A modified LeNet-5 network, LeNet3x3, was designed to validate the performance improvement of SPE-I on the MNIST dataset. LeNet3x3 with and without SPE-I was implemented in TSMC 90-nm CMOS technology at an 87.5 MHz operating frequency. Compared to the network without SPE-I, the network with SPE-I shows only a 0.12% - 1.79% accuracy drop while achieving 43.1% power savings due to a 73% - 81% reduction in multiplications. Regarding timing, SPE-I takes 20% of the total clock cycles required by a convolutional layer using preprocessing to provide convolutional data.
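The skipping idea the abstract describes, reusing prior results when two inputs are similar enough that a window's multiplications would be redundant, can be sketched in software. This is not the paper's hardware design; the function name, the per-window similarity test, and the threshold parameter are all illustrative assumptions.

```python
import numpy as np

def selective_conv2d(prev_img, curr_img, kernel, prev_out, threshold=0.0):
    """Illustrative sketch (not the SPE-I hardware): slide a kernel over
    curr_img, but where the current input window matches the corresponding
    window of a previously processed image (within `threshold`), reuse the
    previous output and skip that window's multiply-accumulate entirely."""
    kh, kw = kernel.shape
    oh = curr_img.shape[0] - kh + 1
    ow = curr_img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    skipped = 0  # count of windows whose multiplications were avoided
    for i in range(oh):
        for j in range(ow):
            curr_patch = curr_img[i:i + kh, j:j + kw]
            prev_patch = prev_img[i:i + kh, j:j + kw]
            if np.max(np.abs(curr_patch - prev_patch)) <= threshold:
                out[i, j] = prev_out[i, j]  # reuse: no multiplications
                skipped += 1
            else:
                out[i, j] = np.sum(curr_patch * kernel)  # full MAC
    return out, skipped
```

With inputs that differ in only a few pixels, most windows are reused, which mirrors the abstract's reported 73% - 81% multiplication reduction in spirit (the actual ratios depend on data similarity and the comparison scheme used in hardware).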

Original language: English
Title of host publication: Proceedings - 2023 16th IEEE International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 166-170
Number of pages: 5
ISBN (Electronic): 9798350393613
DOIs
State: Published - 2023
Event: 16th IEEE International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2023 - Singapore, Singapore
Duration: 18 Dec 2023 - 21 Dec 2023

Publication series

Name: Proceedings - 2023 16th IEEE International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2023

Conference

Conference: 16th IEEE International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2023
Country/Territory: Singapore
City: Singapore
Period: 18/12/23 - 21/12/23

Bibliographical note

Publisher Copyright:
© 2023 IEEE.

Keywords

  • CMOS
  • Convolutional Neural Networks
  • Inference Accelerator
  • Processing Element
