Implementation of Reconfigurable Deep Learning Accelerator (RDLA) on PolarFire SoC

dc.contributor.authorShenoy, M.S.
dc.contributor.authorRamesh Kini, M.
dc.date.accessioned2026-02-06T06:34:27Z
dc.date.issued2023
dc.description.abstractIn neural networks and other computationally demanding applications, general-purpose CPUs are slow and inefficient. Such applications are better served by specialized hardware capable of performing many multiply-accumulate operations quickly and efficiently. The Reconfigurable Deep Learning Accelerator (RDLA) architecture has been developed for a wide range of neural network applications. The fundamental unit of the RDLA is composed of multiple Multiply-Accumulate (MAC) units, registers, and Address Generation Units (AGUs). RDLA was implemented and tested on the PolarFire SoC with a data-processing clock frequency of up to 62.5 MHz. This paper presents results of testing with different images on a custom four-layer MNIST model, achieving an accuracy of 97.49% with a power consumption of 1.85 W. © 2023 IEEE.
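The abstract describes an accelerator built from multiply-accumulate (MAC) units. As a minimal illustrative sketch (not the paper's hardware design), the computation such MAC units parallelize can be expressed in software; the function names and shapes below are hypothetical:

```python
# Illustrative sketch only (not RDLA's RTL): the multiply-accumulate
# reduction that a MAC-based accelerator performs in hardware.

def mac_dot(weights, inputs, acc=0):
    """Accumulate the sum of element-wise products, one MAC per step."""
    for w, x in zip(weights, inputs):
        acc += w * x  # a single multiply-accumulate operation
    return acc

def dense_layer(weight_rows, inputs, biases):
    """A fully connected layer expressed as rows of MAC reductions."""
    return [mac_dot(row, inputs, b) for row, b in zip(weight_rows, biases)]
```

In an accelerator, each inner-loop step maps to a dedicated MAC unit, with address generation units supplying the operand streams, so many such reductions proceed in parallel rather than sequentially as here.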
dc.identifier.citationAsia Pacific Conference on Postgraduate Research in Microelectronics and Electronics, 2023, p. 48-49
dc.identifier.issn2159-2144
dc.identifier.urihttps://doi.org/10.1109/PRIMEAsia60757.2023.00024
dc.identifier.urihttps://idr.nitk.ac.in/handle/123456789/29246
dc.publisherIEEE Computer Society
dc.subjectALU
dc.subjectDeep Learning
dc.subjectMAC
dc.subjectSoC
dc.subjectTPU
dc.titleImplementation of Reconfigurable Deep Learning Accelerator (RDLA) on PolarFire SoC