Hardware-Optimized Deep Learning Model for FPGA-Based Character Recognition

Date

2023

Publisher

Institute of Electrical and Electronics Engineers Inc.

Abstract

Deep neural networks (DNNs) are among the most widely used algorithms in machine learning. Although most deep learning applications are driven by software solutions, significant research and development effort has gone into optimizing these algorithms over the years. For hardware implementations, however, the design must be optimized not only in software but also in hardware. In this paper, we present a straightforward yet effective convolutional neural network architecture that is carefully optimized in both hardware and software for character recognition applications. The accelerator was implemented on a Xilinx Zynq XC7Z020CLG484 FPGA using a high-level synthesis tool. To enhance performance, the accelerator employs an optimized fixed-point data type and applies loop parallelization techniques that combine the 2D convolution and 2D max pooling operations. The hardware efficiency of the proposed DNN is compared with several existing architectures in terms of hardware utilization. © 2023 IEEE.
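The two optimizations named in the abstract, a fixed-point data type and a fused 2D convolution + 2D max pooling loop nest, can be illustrated with a minimal sketch. This is not the authors' HLS code; the Q4.4 fixed-point format, the 3x3 kernel, and the 2x2 pooling window are illustrative assumptions. Fusing the pooling into the convolution loops means no full convolution feature map is ever stored, which is what makes the combination attractive on an FPGA.

```python
FRAC_BITS = 4  # assumed Q4.4 fixed-point format: 4 fractional bits

def to_fixed(x):
    """Quantize a real value to the assumed fixed-point representation."""
    return int(round(x * (1 << FRAC_BITS)))

def to_float(x):
    """Convert a fixed-point value back to a real number."""
    return x / (1 << FRAC_BITS)

def conv_maxpool(image, kernel, pool=2):
    """Valid 2D convolution immediately followed by max pooling, computed
    in one fused loop nest: each pooled output is the max over the pool
    window of convolution results computed on the fly, so the intermediate
    convolution map is never materialized."""
    kh, kw = len(kernel), len(kernel[0])
    ch = len(image) - kh + 1        # convolution output height
    cw = len(image[0]) - kw + 1     # convolution output width
    out = [[0] * (cw // pool) for _ in range(ch // pool)]
    for pr in range(ch // pool):            # pooled-output rows
        for pc in range(cw // pool):        # pooled-output cols
            best = None
            for dr in range(pool):          # positions inside pool window
                for dc in range(pool):
                    r, c = pr * pool + dr, pc * pool + dc
                    acc = 0
                    for i in range(kh):     # convolution at (r, c)
                        for j in range(kw):
                            acc += image[r + i][c + j] * kernel[i][j]
                    acc >>= FRAC_BITS       # rescale fixed-point product
                    best = acc if best is None or acc > best else best
            out[pr][pc] = best
    return out
```

For example, with a 6x6 image holding `to_fixed(r + c)` at each pixel and a 3x3 kernel whose only nonzero tap is a center weight of `to_fixed(1.0)`, the convolution reproduces the shifted image and the fused pass returns its 2x2-pooled maxima. In an HLS setting, the inner loops would additionally be unrolled or pipelined; here they are left as plain loops for clarity.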

Keywords

Convolutional Neural Network, Field Programmable Logic Array, Machine Learning, Subsampling

Citation

IEEE Region 10 Annual International Conference, Proceedings/TENCON, 2023, pp. 238-242
