Abstract
Convolutional Neural Networks (CNNs) often require a huge number of multiplications. The current approach to multiplication reduction relies on data preprocessing, which is power-hungry and time-consuming. This paper proposes an Image-wised Selective Processing Engine (SPE-I) that accelerates CNN processing by eliminating unessential operations through algorithm-hardware co-design. SPE-I compares the similarity of two input images and identifies redundant calculations that can be skipped. A modified LeNet-5 network, LeNet3x3, was designed to validate the performance improvement of SPE-I on the MNIST dataset. LeNet3x3 with and without SPE-I was implemented in TSMC 90-nm CMOS technology at an operating frequency of 87.5 MHz. Compared to the network without SPE-I, the network with SPE-I incurs only a 0.12% - 1.79% accuracy drop while achieving 43.1% power savings owing to a 73% - 81% reduction in multiplications. In terms of timing, SPE-I takes only 20% of the total clock cycles to provide convolutional data compared to a convolutional layer that relies on preprocessing.
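The abstract does not spell out the skip mechanism, so the following is a minimal NumPy sketch of one plausible reading: each sliding window of the current image is compared against the same window of a previous image, and the stored convolution result is reused when the two match within a threshold, skipping the multiplications. The function name `conv2d_spe_i`, the `threshold` parameter, and the per-window granularity are illustrative assumptions, not the authors' hardware design.

```python
import numpy as np

def conv2d_spe_i(prev_img, prev_out, curr_img, kernel, threshold=0.0):
    """Hypothetical software model of image-wised selective processing.

    Windows of `curr_img` that match the corresponding window of
    `prev_img` (within `threshold`) reuse the stored result in
    `prev_out` instead of recomputing the multiply-accumulates.
    All names and the similarity test are illustrative assumptions.
    """
    kh, kw = kernel.shape
    oh = curr_img.shape[0] - kh + 1
    ow = curr_img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    skipped = 0  # number of windows whose multiplications were skipped
    for i in range(oh):
        for j in range(ow):
            win_curr = curr_img[i:i + kh, j:j + kw]
            win_prev = prev_img[i:i + kh, j:j + kw]
            if np.max(np.abs(win_curr - win_prev)) <= threshold:
                out[i, j] = prev_out[i, j]  # reuse: no multiplications
                skipped += 1
            else:
                out[i, j] = np.sum(win_curr * kernel)  # full MAC window
    return out, skipped
```

On inputs that differ in only a few pixels, most windows take the reuse path, which is the kind of saving the 73% - 81% multiplication-reduction figure above refers to; in hardware, such a check could be realized as a comparison stage ahead of the multiplier array, though the paper's exact circuit is not described in the abstract.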
| Original language | English |
|---|---|
| Title of host publication | Proceedings - 2023 16th IEEE International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2023 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 166-170 |
| Number of pages | 5 |
| ISBN (Electronic) | 9798350393613 |
| DOIs | |
| State | Published - 2023 |
| Event | 16th IEEE International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2023 - Singapore, Singapore. Duration: 18/12/2023 → 21/12/2023 |
Publication series
| Name | Proceedings - 2023 16th IEEE International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2023 |
|---|---|
Conference
| Conference | 16th IEEE International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2023 |
|---|---|
| Country/Territory | Singapore |
| City | Singapore |
| Period | 18/12/23 → 21/12/23 |
Bibliographical note
Publisher Copyright: © 2023 IEEE.
Keywords
- CMOS
- Convolutional Neural Networks
- Inference Accelerator
- Processing Element