Enhancing MPI Communication Efficiency for Grid-Based Stencil Computations

Date

2024

Publisher

Institute of Electrical and Electronics Engineers Inc.

Abstract

In parallel computing, where efficiency and speed are crucial, the Message Passing Interface (MPI) is a fundamental paradigm for managing large-scale distributed memory systems. MPI is critical to complex computational tasks, particularly in grid-based computations that solve intricate numerical problems by discretizing spatial domains into structured grids. However, MPI Cartesian communicators exhibit limitations in handling these computations effectively, especially when managing large-scale data exchanges and complex stencil patterns. This paper addresses these challenges by presenting an integrated approach that combines MPI collective and Cartesian communication methods. The proposed solution simplifies data distribution, eliminates redundant interfaces, and enhances communication efficiency. Experimental results show a 43% reduction in execution time and a 40% decrease in communication overhead, with scalability improvements achieving a 12.5x speedup on 64 processes. These quantitative outcomes demonstrate the advantages of the proposed method over conventional MPI Cartesian approaches, establishing it as a reliable framework for advancing High-Performance Computing (HPC) capabilities in grid-based applications. © 2024 IEEE.

Keywords

Grid Computing, High-Performance Computing (HPC), Message Passing Interface (MPI), Distributed Memory Systems, Parallel Computing, Stencil Computation

Citation

COSMIC 2024 - IEEE International Conference on Computing, Semiconductor, Mechatronics, Intelligent Systems and Communications, Proceedings, 2024, pp. 54-59
