Pod Scheduling and Proactive Resource Management in an Edge Cluster using MCDM and Federated Learning

Date

2025

Publisher

Springer Science and Business Media B.V.

Abstract

Edge computing, which locates computational resources closer to data sources, has become crucial for meeting the demands of applications that require high bandwidth and low latency. To cater to edge computing scenarios, KubeEdge, an extension of Kubernetes (K8s), expands its capabilities to meet edge-specific requirements such as limited resources, intermittent connectivity, and heterogeneous environments. Edge trace data cannot be shared between cloud providers because of privacy concerns, which makes generic distributed training ineffective. Despite edge computing's potential advantages, the built-in scheduling algorithms have several drawbacks. A significant problem is the lack of efficient resource management and allocation mechanisms at the edge, which leaves edge nodes underutilized or overloaded, violating Quality of Service (QoS) requirements, while inefficient resource utilization leads to Service Level Agreement (SLA) violations. In this regard, a VIKOR- and ELECTRE III-based pod scheduling strategy is proposed in this paper and evaluated using Wikipedia and NASA server workloads. The experimental results show a 50% reduction in standard deviation for ELECTRE III and a 40% reduction for VIKOR against the default Kubernetes scheduler. Average response times of 30.6593 ms and 31.8803 ms are achieved for ELECTRE III and VIKOR, respectively, on the Wikipedia dataset. A proactive resource management system is also proposed for KubeEdge containerized services; it incorporates a federated learning framework to predict future workloads using Bidirectional Long Short-Term Memory (Bi-LSTM) and Gated Recurrent Unit (GRU) models. The experimental comparison shows that federated learning reduces MSE by 99.65% and 98.64% for CPU utilization (%) and by 89.72% and 76.57% for memory utilization (%) with the GRU and Bi-LSTM models, respectively, in contrast to centralized learning. The effectiveness of the proposed approach is evaluated using statistical techniques and found to be significant. © The Author(s), under exclusive licence to Springer Nature B.V. 2025.
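
As an illustration of the scheduling idea, the sketch below ranks candidate edge nodes with the standard VIKOR procedure (weighted group utility, individual regret, and the compromise index Q). The criteria, weights, and node values are hypothetical examples, not taken from the paper, and the function is a minimal sketch rather than the authors' scheduler plugin.

```python
import numpy as np

def vikor_rank(decision_matrix, weights, benefit, v=0.5):
    """Rank candidate edge nodes with VIKOR (lower Q = better compromise).

    decision_matrix : (n_nodes, n_criteria) raw criterion values,
                      e.g. free CPU, free memory, current pod count.
    weights         : criterion weights summing to 1.
    benefit         : boolean mask, True where larger values are better.
    v               : trade-off between group utility and individual regret.
    """
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)

    # Ideal (best) and anti-ideal (worst) value per criterion.
    f_best = np.where(benefit, X.max(axis=0), X.min(axis=0))
    f_worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
    span = np.where(f_best == f_worst, 1.0, f_best - f_worst)

    # Weighted normalized distance from the ideal solution.
    d = w * (f_best - X) / span
    S = d.sum(axis=1)   # group utility
    R = d.max(axis=1)   # individual regret

    S_span = (S.max() - S.min()) or 1.0
    R_span = (R.max() - R.min()) or 1.0
    Q = v * (S - S.min()) / S_span + (1 - v) * (R - R.min()) / R_span
    return np.argsort(Q)  # node indices, best candidate first

# Hypothetical example: three nodes scored on free CPU (cores),
# free memory (GiB), and current pod count (lower is better).
nodes = [[4.0, 8.0, 12], [2.0, 16.0, 5], [8.0, 4.0, 20]]
order = vikor_rank(nodes, weights=[0.4, 0.4, 0.2], benefit=[True, True, False])
```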
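For the proactive resource-management side, the keywords name FedAvg as the federated aggregation scheme. The following minimal sketch shows the sample-weighted parameter averaging that FedAvg performs across edge sites training local GRU/Bi-LSTM predictors; the function name and argument layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg: sample-weighted average of per-client model parameters.

    client_weights : list of parameter lists (one list of numpy arrays
                     per edge site's local GRU/Bi-LSTM model).
    client_sizes   : number of local training samples at each site.
    """
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    # Average each layer across clients, weighted by local data volume.
    return [
        sum(c * layer for c, layer in zip(coeffs, layers))
        for layers in zip(*client_weights)
    ]
```

Because only model parameters leave each site, the edge trace data itself is never shared between providers, which is the privacy constraint the abstract highlights.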

Keywords

Artificial intelligence, Data privacy, Edge computing, Learning systems, Multitasking, Natural resources management, Reduction, Resource allocation, Scheduling algorithms, % reductions, Bi-LSTM, FedAvg, Gated recurrent unit, KubeEdge, Kubernetes, Pod scheduling, Resource optimization, Task scheduling, Quality of service

Citation

Journal of Grid Computing, 2025, 23(3).
