Design and Implementation of Reconfigurable Neural Network Accelerator

Shenoy, M. S.; Ramesh Kini, M.

7th IEEE International Conference on Recent Advances and Innovations in Engineering, ICRAIE 2022 - Proceedings, 2022, p. 377-381
DOI: https://doi.org/10.1109/ICRAIE56454.2022.10054301
Handle: https://idr.nitk.ac.in/handle/123456789/29780

Abstract: General-purpose CPUs are slow and inefficient for computationally intensive applications such as neural networks. Executing such applications efficiently calls for specialized hardware that can perform a large number of multiply-accumulate operations rapidly. The Reconfigurable Neural Network Accelerator (RNNA) architecture presented here is suitable for a variety of neural network applications. Computational resource requirements vary by application, so mapping an application onto the available set of resources requires reconfigurability. The fundamental unit of the RNNA is composed of Multiply-Accumulate (MAC) units, registers, and Address Generation Units (AGUs). Compared to computation with a single MAC array, the RNNA with four MAC arrays reduces computation time by approximately 75%. The RNNA was implemented and tested on the Nexys4 DDR Artix-7 FPGA board at clock frequencies of up to 60 MHz, with a power consumption of 0.243 W. © 2022 IEEE.

Keywords: Batch Processing; Convolutional Neural Network; Deep Learning Accelerator; Multiply-Accumulate; Neural Networks; Reconfigurability; Tensor Processing Unit
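To make the reported speedup concrete, the sketch below is a minimal software model (not code from the paper) of the MAC workload the RNNA parallelizes: computing the dot products of a small weight matrix against an input vector. Partitioning the output rows across four hypothetical parallel MAC arrays cuts the serial MAC count per array to one quarter, consistent with the roughly 75% time reduction stated in the abstract. All names, sizes, and the cycle-counting scheme are illustrative assumptions.

```c
/*
 * Minimal sketch (not from the paper): a software model of the
 * multiply-accumulate workload an RNNA-style accelerator parallelizes.
 * Assumption: each MAC array performs one MAC per cycle, and the
 * four arrays run concurrently, so latency equals the per-array
 * cycle count rather than the total MAC count.
 */
#include <stdio.h>

#define N_ARRAYS 4   /* hypothetical number of parallel MAC arrays */
#define ROWS     8
#define COLS     8

/* One MAC array computes a dot product serially, one MAC per cycle. */
static int mac_dot(const int *w, const int *x, int len, long *cycles)
{
    int acc = 0;
    for (int i = 0; i < len; i++) {
        acc += w[i] * x[i];  /* one multiply-accumulate operation */
        (*cycles)++;
    }
    return acc;
}

int main(void)
{
    int w[ROWS][COLS], x[COLS], y[ROWS];
    long cycles_single = 0, cycles_per_array[N_ARRAYS] = {0};

    /* Fill weights and input with dummy data. */
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            w[r][c] = r + c;
    for (int c = 0; c < COLS; c++)
        x[c] = c;

    /* Single MAC array: all rows computed serially. */
    for (int r = 0; r < ROWS; r++)
        y[r] = mac_dot(w[r], x, COLS, &cycles_single);

    /* Four MAC arrays: rows interleaved across arrays; since the
     * arrays operate in parallel, latency is the per-array count. */
    for (int r = 0; r < ROWS; r++)
        y[r] = mac_dot(w[r], x, COLS, &cycles_per_array[r % N_ARRAYS]);

    printf("single array: %ld MAC cycles\n", cycles_single);
    printf("four arrays : %ld MAC cycles per array (~75%% fewer)\n",
           cycles_per_array[0]);
    return 0;
}
```

With the toy 8x8 workload above, the single-array model takes 64 MAC cycles while each of the four arrays takes 16, a 75% latency reduction; in the real design the attainable speedup would also depend on how the registers and Address Generation Units keep the MAC arrays fed with data.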