Title: Enhancing MPI Communication Efficiency for Grid-Based Stencil Computations
Authors: Goudar, S. I.; Nayaka, P. S. J.; Girish, K. K.; Bhowmik, B.
Date Issued: 2024
Date Available: 2026-02-06
Source: COSMIC 2024 - IEEE International Conference on Computing, Semiconductor, Mechatronics, Intelligent Systems and Communications, Proceedings, 2024, p. 54-59
DOI: https://doi.org/10.1109/COSMIC63293.2024.10871808
URI: https://idr.nitk.ac.in/handle/123456789/28774
Abstract: In parallel computing, where efficiency and speed are crucial, the Message Passing Interface (MPI) is a fundamental paradigm for managing large-scale distributed memory systems. MPI is critical to complex computational tasks, particularly in grid-based computations that solve intricate numerical problems by discretizing spatial domains into structured grids. However, MPI Cartesian communicators exhibit limitations in handling these computations effectively, especially when managing large-scale data exchanges and complex stencil patterns. This paper addresses these challenges by presenting an integrated approach that combines MPI collective and Cartesian communication methods. The proposed solution simplifies data distribution, eliminates redundant interfaces, and enhances communication efficiency. Experimental results show a 43% reduction in execution time and a 40% decrease in communication overhead, with scalability improvements achieving 12.5x speedup using 64 processes. These quantitative outcomes demonstrate the advantages of the proposed method over conventional MPI Cartesian approaches, establishing it as a reliable framework for advancing High-Performance Computing (HPC) capabilities in grid-based applications. © 2024 IEEE.
Keywords: Grid Computing; High-Performance Computing (HPC); Message Passing Interface (MPI); Distributed Memory Systems; Parallel Computing; Stencil Computation
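The abstract reports a 12.5x speedup on 64 processes. As a back-of-envelope illustration (not part of the paper), those two figures can be plugged into Amdahl's law to estimate the parallel fraction of the workload that would be consistent with them; the sketch below does this in plain Python, with all values taken from the abstract.

```python
# Illustrative Amdahl's-law sketch: infer the parallel fraction consistent
# with the abstract's reported 12.5x speedup on 64 processes. This is a
# simple model for intuition only, not the paper's analysis.

def amdahl_speedup(f: float, p: float) -> float:
    """Speedup predicted by Amdahl's law for parallel fraction f on p processes."""
    return 1.0 / ((1.0 - f) + f / p)

def parallel_fraction(s: float, p: float) -> float:
    """Invert Amdahl's law: fraction f implied by observed speedup s on p processes."""
    return (1.0 - 1.0 / s) * p / (p - 1)

s_reported, p = 12.5, 64            # figures reported in the abstract
f = parallel_fraction(s_reported, p)
print(f"implied parallel fraction: {f:.4f}")        # ~0.9346
print(f"parallel efficiency: {s_reported / p:.3f}")  # ~0.195
```

Under this model, roughly 93% of the work parallelizes, and the efficiency of about 20% at 64 processes reflects the communication costs the paper targets.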