Title: Performance Analysis and Predictive Modeling of MPI Collective Algorithms in Multi-Core Clusters: A Comparative Study

Authors: Reddy, M.R.V.S.R.S.; Raju, S.R.; Girish, K.K.; Bhowmik, B.

Date Accessioned: 2026-02-06
Issue Date: 2025
Citation: International Conference on Communication Systems and Networks, COMSNETS 2025, pp. 448-455
ISSN: 2155-2487
DOI: https://doi.org/10.1109/COMSNETS63942.2025.10885723
URI: https://idr.nitk.ac.in/handle/123456789/28734

Abstract: Efficient communication is the foundation of parallel computing systems, enabling seamless coordination across multiple processors for optimal performance. At the core of this communication lies the Message Passing Interface (MPI), a framework designed to facilitate data exchange between processors through collective operations. However, these MPI operations often face challenges, including fluctuating process counts, varying message sizes, and increased communication overhead. These issues can significantly impact execution times and scalability, leading to potential bottlenecks in large-scale systems. To address these concerns, this paper provides an in-depth evaluation of key MPI collective algorithms - Flat Tree, Chain, and Binary Tree - by examining their performance under varying configurations. By analyzing execution times and communication overhead, the study reveals the trade-offs inherent in each algorithm, offering insights into strategies for reducing communication costs. Through this analysis, we aim to provide guidance for improving the efficiency and scalability of parallel computing, particularly in high-performance systems where communication efficiency is critical. © 2025 IEEE.

Keywords: Communication Overhead; High Performance Computing (HPC); Message Passing Interface (MPI); Parallel Computing; Performance Analysis
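The three broadcast topologies named in the abstract differ chiefly in how many communication rounds they need to reach all processes. The following is a minimal sketch, not code from the paper: the function names and the cost model (each rank sends one message per round, no segmentation or pipelining) are illustrative assumptions used only to make the trade-off concrete.

```python
# Illustrative round counts for three broadcast topologies, assuming each
# rank sends at most one message per round (no message segmentation).

def flat_tree_rounds(p):
    """Root sends the message to each of the other p-1 ranks in turn."""
    return p - 1

def chain_rounds(p):
    """The message is forwarded hop by hop along a linear chain of p ranks."""
    return p - 1

def binary_tree_rounds(p):
    """Simulate a broadcast over a complete binary tree where rank i's
    children are 2i+1 and 2i+2; each rank serves one child per round."""
    have = {0}                                   # ranks holding the message
    pending = {0: [c for c in (1, 2) if c < p]}  # children still to serve
    rounds = 0
    while len(have) < p:
        rounds += 1
        arrived = []
        for rank in list(pending):
            child = pending[rank].pop(0)         # one send per rank per round
            arrived.append(child)
            if not pending[rank]:
                del pending[rank]
        for c in arrived:
            have.add(c)
            kids = [k for k in (2 * c + 1, 2 * c + 2) if k < p]
            if kids:
                pending[c] = kids
    return rounds
```

Under this model, for p = 8 the flat tree and the chain both take 7 rounds while the binary tree finishes in 4, illustrating the scaling trade-off the paper examines (sequential fan-out versus logarithmic depth).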