Please use this identifier to cite or link to this item: https://idr.nitk.ac.in/jspui/handle/123456789/7657
Title: Cache analysis and software optimizations for faster on-chip network simulations
Authors: Parane, K.
Prabhu Prasad, B.M.
Talawar, B.
Issue Date: 2016
Citation: 11th International Conference on Industrial and Information Systems, ICIIS 2016 - Conference Proceedings, 2016, Vol. 2018-January, pp. 83-88
Abstract: Fast simulations are critical in reducing time to market for CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumption of Networks-on-Chip. Researchers and designers rely on these simulators for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days to complete. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we chose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses has been identified. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD are used to parallelize and vectorize the most time-consuming portions of Booksim2.0. Speedups of 2.93× and 3.97× were observed for the Mesh topology with a 30 × 30 network size by employing thread parallelization and vectorization, respectively. © 2016 IEEE.
URI: http://idr.nitk.ac.in/jspui/handle/123456789/7657
Appears in Collections:2. Conference Papers

Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.