Hardware-Optimized Deep Learning Model for FPGA-Based Character Recognition

dc.contributor.author: Rao, P.S.
dc.contributor.author: Pulikala, A.
dc.date.accessioned: 2026-02-06T06:34:40Z
dc.date.issued: 2023
dc.description.abstract: Deep neural networks (DNNs) are among the most widely used algorithms in machine learning. Although most deep learning applications are driven by software solutions, significant research and development effort has gone into optimizing these algorithms over the years. For hardware implementations, however, it becomes essential to optimize the design not only in software but also in hardware. In this paper, we present a straightforward yet effective convolutional neural network architecture that is carefully optimized in both hardware and software for character recognition applications. The implemented accelerator was realized on a Xilinx Zynq XC7Z020CLG484 FPGA using a high-level synthesis tool. To enhance performance, the accelerator employs an optimized fixed-point data type and applies loop parallelization techniques that combine the 2D convolution and 2D max-pooling operations. The hardware efficiency of the proposed DNN is compared with existing architectures in terms of hardware utilization. © 2023 IEEE.
dc.identifier.citation: IEEE Region 10 Annual International Conference, Proceedings/TENCON, 2023, pp. 238-242
dc.identifier.issn: 21593442
dc.identifier.uri: https://doi.org/10.1109/TENCON58879.2023.10322427
dc.identifier.uri: https://idr.nitk.ac.in/handle/123456789/29389
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.subject: Convolutional Neural Network
dc.subject: Field Programmable Gate Array
dc.subject: Machine Learning
dc.subject: Subsampling
dc.title: Hardware-Optimized Deep Learning Model for FPGA-Based Character Recognition
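The fused 2D convolution + 2D max-pooling stage with fixed-point arithmetic described in the abstract can be sketched as below. This is an illustrative software model only, not the paper's actual HLS code: the sizes, the Q8.8 fixed-point format, and all names (`conv_pool`, `conv_pool_cell`) are assumptions for the sketch.

```cpp
#include <cstdint>
#include <algorithm>

// Illustrative sizes (assumed, not from the paper).
constexpr int IN   = 8;            // input feature-map side length
constexpr int K    = 3;            // convolution kernel size
constexpr int CONV = IN - K + 1;   // valid-convolution output side: 6
constexpr int OUT  = CONV / 2;     // after 2x2 max pooling: 3
constexpr int FRAC = 8;            // fractional bits of the Q8.8 format

// Compute one pooled output element: convolve the 2x2 window of
// convolution outputs that feeds pool cell (pr, pc), keeping only the
// maximum, so the pooling stage is fused into the convolution loop.
int16_t conv_pool_cell(const int16_t in[IN][IN],
                       const int16_t w[K][K],
                       int pr, int pc) {
    int16_t best = INT16_MIN;
    for (int dr = 0; dr < 2; ++dr) {
        for (int dc = 0; dc < 2; ++dc) {
            int r = 2 * pr + dr, c = 2 * pc + dc;
            int32_t acc = 0;                      // wide accumulator
            for (int kr = 0; kr < K; ++kr)
                for (int kc = 0; kc < K; ++kc)
                    acc += (int32_t)in[r + kr][c + kc] * w[kr][kc];
            // Shift back to Q8.8 after the Q8.8 x Q8.8 multiply.
            best = std::max(best, (int16_t)(acc >> FRAC));
        }
    }
    return best;
}

void conv_pool(const int16_t in[IN][IN], const int16_t w[K][K],
               int16_t out[OUT][OUT]) {
    // In an HLS design these loops would carry PIPELINE/UNROLL
    // directives; they are plain loops here for clarity.
    for (int pr = 0; pr < OUT; ++pr)
        for (int pc = 0; pc < OUT; ++pc)
            out[pr][pc] = conv_pool_cell(in, w, pr, pc);
}
```

Fusing the pooling into the convolution loop nest avoids materializing the full intermediate convolution map, which is one common way such accelerators save on-chip buffer resources.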