Browsing by Author "Satpathy, A."

Now showing 1 - 18 of 18
    A Preliminary Study of Serverless Platforms for Latency Sensitive Applications
    (Institute of Electrical and Electronics Engineers Inc., 2022) Sarathi, T.V.; Sai Nischal Reddy, J.; Shiva, P.; Saha, R.; Satpathy, A.; Addya, S.K.
    Serverless computing is the new-age cloud delivery model wherein resources are provisioned only during event-triggered functions. It dramatically improves the flexibility and scalability of applications compared to virtual machine (VM)/container-based service delivery models. As serverless computing gains significant impetus, major cloud providers such as Amazon, Microsoft Azure, and Google have launched their respective serverless computing platforms. However, for a user, selecting a service provider (SP) that meets the desired quality of service (QoS) is challenging, and there is not enough public information available to assist users in making such decisions accurately. Hence, this work provides a preliminary analysis via real-time experimentation, acting as a stepping stone for users in selecting an appropriate SP. We consider execution time and execution cost as evaluation metrics to assess the performance of different real-world SPs under different workloads. Experimental results show that Azure Functions achieved lower execution times than AWS Lambda and Google Cloud Functions, while in terms of execution cost, AWS Lambda is considerably cheaper than the other two platforms. © 2022 IEEE.
    Adaptive Workload Management for Enhanced Function Performance in Serverless Computing
    (Association for Computing Machinery, Inc, 2025) Birajdar, P.A.; Harsha, V.; Satpathy, A.; Addya, S.K.
    Serverless computing streamlines application deployment by removing the need for infrastructure management, but fluctuating workloads make resource allocation challenging. To solve this, we propose an adaptive workload manager that intelligently balances workloads, optimizes resource use, and adapts to changes with auto-scaling, ensuring efficient and reliable serverless performance. Preliminary experiments demonstrate approximately 0.6× and 2× improvements in execution time and resource utilization, respectively, compared to the First-Come-First-Serve (FCFS) scheduling algorithm. © 2025 Copyright held by the owner/author(s).
    CoMCLOUD: Virtual Machine Coalition for Multi-Tier Applications over Multi-Cloud Environments
    (Institute of Electrical and Electronics Engineers Inc., 2023) Addya, S.K.; Satpathy, A.; Ghosh, B.C.; Chakraborty, S.; Ghosh, S.K.; Das, S.K.
    Applications hosted in commercial clouds are typically multi-tier and comprise multiple tightly coupled virtual machines (VMs). Service providers (SPs) cater to users with VM instances whose configurations and pricing vary with the location of the data center (DC) hosting the VMs. However, selecting VMs to host multi-tier applications is challenging due to the trade-off between cost and quality of service (QoS) depending on the placement of the VMs. This paper proposes a multi-cloud broker model called CoMCLOUD to select a sub-optimal VM coalition for multi-tier applications from an SP with minimum coalition pricing and maximum QoS. To strike a trade-off between cost and QoS, we use an ant-colony-based optimization technique. The overall service selection game is modeled as a first-price sealed-bid auction aimed at maximizing the overall revenue of SPs. Further, as the hosted VMs often face demand spikes, we present a parallel migration strategy to migrate VMs with minimum disruption time. Detailed experiments show that our approach can improve the federation profit by up to 23% at the expense of an approximately 15% increase in latency, compared to the baselines. © 2013 IEEE.
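The service selection game above can be sketched as a sealed-bid auction. This is a hedged illustration only: the paper does not specify here whether SPs bid high or low, so the sketch reads it as a reverse (procurement) auction in which each SP seals one coalition price and the cheapest offer wins at its own bid. The SP names and prices are invented.

```python
# Hypothetical sketch of a first-price sealed-bid (reverse) auction:
# each service provider submits one sealed coalition price; the
# cheapest offer wins and is paid exactly its own bid.

def first_price_sealed_bid(bids):
    """Return the winning SP and the price it is paid (its own bid)."""
    winner = min(bids, key=bids.get)   # user prefers the cheapest offer
    return winner, bids[winner]

offers = {"SP-A": 14.0, "SP-B": 11.5, "SP-C": 12.8}  # made-up prices
winner, price = first_price_sealed_bid(offers)        # SP-B wins at 11.5
```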
    DCRDA: deadline-constrained function scheduling in serverless-cloud platform
    (Springer, 2025) Birajdar, P.A.; Meena, D.; Satpathy, A.; Addya, S.K.
    The serverless computing model frees developers from operational and management tasks, allowing them to focus solely on business logic. This paper addresses the computationally challenging function-container-virtual machine (VM) scheduling problem, especially under stringent deadline constraints. We propose a two-stage holistic scheduling framework called DCRDA targeting deadline-constrained function scheduling. In the first stage, the function-to-container scheduling is modeled as a one-to-one matching game and solved using the classical Deferred Acceptance Algorithm (DAA). The second stage addresses the container-to-VM assignment, modeled as a many-to-one matching problem, and solved using a variant of the DAA, the Revised-Deferred Acceptance Algorithm (RDA), to account for heterogeneous resource demands. Since matching-based strategies require agent preferences, a Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) ranking mechanism is employed to prioritize alternatives based on execution time, deadlines, and resource demands. The primary goal of DCRDA is to maximize the success ratio (SR), defined as the ratio of functions executed within the deadline to the total functions. Extensive test-bed validations over commercial providers such as Amazon EC2 show that the proposed framework significantly improves the success ratio compared to baseline approaches. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
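The first-stage function-to-container matching in DCRDA uses the classical Deferred Acceptance Algorithm. A minimal sketch of one-to-one deferred acceptance follows; the function/container names and preference lists are illustrative placeholders (DCRDA derives its preferences via TOPSIS), not values from the paper.

```python
# Sketch of the classical (Gale-Shapley) Deferred Acceptance Algorithm
# for one-to-one function-to-container matching. Preference lists are
# hypothetical; the paper generates them with TOPSIS.

def deferred_acceptance(func_prefs, cont_prefs):
    """Functions propose to containers in preference order; a container
    tentatively holds its best proposer and rejects the rest."""
    free = list(func_prefs)                      # still-unmatched functions
    next_idx = {f: 0 for f in func_prefs}        # next container to try
    engaged = {}                                 # container -> function
    rank = {c: {f: i for i, f in enumerate(p)}   # container's ranking
            for c, p in cont_prefs.items()}
    while free:
        f = free.pop(0)
        c = func_prefs[f][next_idx[f]]
        next_idx[f] += 1
        if c not in engaged:
            engaged[c] = f                       # container was free
        elif rank[c][f] < rank[c][engaged[c]]:   # container prefers f
            free.append(engaged[c])
            engaged[c] = f
        else:
            free.append(f)                       # rejected, tries next
    return {f: c for c, f in engaged.items()}

fp = {"f1": ["c1", "c2"], "f2": ["c1", "c2"]}
cp = {"c1": ["f2", "f1"], "c2": ["f1", "f2"]}
match = deferred_acceptance(fp, cp)              # stable: f2->c1, f1->c2
```

The resulting matching is stable: no function-container pair prefers each other over their assigned partners.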
    Edge and serverless computing for the next generation of ad hoc networks
    (Elsevier B.V., 2025) Addya, S.K.; Pal, S.; Satpathy, A.; Jaisinghani, D.
    This special issue presents a forward-looking exploration of the integration of edge computing and serverless architectures to enable the next generation of ad hoc networks. Fundamentally, ad hoc networks, characterized by their decentralized and dynamic nature, play a critical role in environments where traditional infrastructure is unavailable or unreliable. However, they face significant challenges in terms of latency, scalability, resource efficiency, and real-time responsiveness. Our vision is to show how to bridge these gaps by combining the localized processing power of edge computing with the flexibility and scalability of serverless (function-as-a-service) models. This integration allows for real-time, event-driven decision-making directly at the network edge, reducing reliance on centralized infrastructure and enabling more autonomous and intelligent network behavior. We argue that this paradigm will be essential in the future as the number of connected devices and data-intensive applications continues to grow. From disaster response and smart transportation to remote healthcare and industrial Internet of Things (IoT), such systems demand scalable, resilient, and low-latency solutions. This special issue outlines the potential of edge-serverless synergy, highlights the key technical challenges, such as orchestration, security, and resource constraints, and proposes research directions to address them. We envision this integration as a cornerstone of future intelligent, distributed systems capable of operating in highly dynamic, real-world conditions. © 2025
    EFraS: Emulated framework to develop and analyze dynamic Virtual Network Embedding strategies over SDN infrastructure
    (Elsevier B.V., 2024) Keerthan Kumar, K.K.; Tomar, S.; Addya, S.K.; Satpathy, A.; Koolagudi, S.G.
    The integration of Software-Defined Networking (SDN) into Network Virtualization (NV) significantly enhances network management, isolation, and troubleshooting capabilities. However, it brings forth the intricate challenge of allocating Substrate Network (SN) resources to various Virtual Network Requests (VNRs), a process known as Virtual Network Embedding (VNE). It encompasses solving two intractable sub-problems: embedding Virtual Machines (VMs) and embedding Virtual Links (VLs). While the research community has focused on formulating embedding strategies, there has been less emphasis on practical implementation at a laboratory scale, which is crucial for the comprehensive design, development, testing, and validation of policies for large-scale systems. Conducting tests using commercial providers presents challenges due to the scale of the problem and the associated costs. Moreover, current simulators lack accuracy in representing the complexities of communication patterns and resource allocation, and lack support for SDN-specific features. These limitations result in inefficient implementations and reduced adaptability, hindering seamless integration with commercial cloud providers. To address this gap, this work introduces EFraS (Emulated Framework for Dynamic VNE Strategies over SDN). The goal is to aid developers and researchers in iterating, testing, and evaluating VNE solutions seamlessly, leveraging a modular design and customized reconfigurability. EFraS offers various functionalities, including generating real-world SN topologies and VNRs. Additionally, it integrates a diverse set of evaluation metrics to streamline the testing and validation process. EFraS leverages Mininet, the Ryu controller, and OpenFlow switches to closely emulate real-time setups. Moreover, we integrate EFraS with various state-of-the-art VNE schemes, ensuring the effective validation of embedding algorithms. © 2024 Elsevier B.V.
    FASE: fast deployment for dependent applications in serverless environments
    (Springer, 2024) Saha, R.; Satpathy, A.; Addya, S.K.
    Function-as-a-Service (FaaS) has reduced the user burden by allowing cloud service providers to take over operational activities such as resource allocation, service deployment, auto-scaling, and load balancing, to name a few. Users are only responsible for developing the business logic through event-triggered functions catering to an application. Although FaaS brings multiple user benefits, a typical challenge is the time incurred in the environmental setup of the containers on which the functions execute, often referred to as the cold-start time, which leads to delayed execution and quality-of-service violations. This paper presents an efficient scheduling strategy, FASE, that uses a finite-sized warm pool to facilitate the instantaneous execution of functions on pre-warmed containers. Test-bed evaluations over AWS Lambda confirm that FASE achieves a 40% reduction in the average cold-start time and a 1.29× speedup compared to the baselines. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.
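The warm-pool idea can be illustrated with a toy simulation: a finite pool of pre-warmed containers is reused to skip the cold-start setup. The pool size, latency figures, and FIFO reuse policy below are assumptions for the sketch, not FASE's actual design.

```python
# Toy warm-pool simulation in the spirit of FASE. Capacity, latencies,
# and the FIFO reuse policy are hypothetical illustrations.

from collections import deque

class WarmPool:
    def __init__(self, capacity, cold_ms=800, warm_ms=40):
        self.capacity, self.cold_ms, self.warm_ms = capacity, cold_ms, warm_ms
        self.idle = deque()                 # pre-warmed, idle containers

    def run(self, fn_name):
        """Return a simulated start latency; reuse a warm container if any."""
        if self.idle:
            self.idle.popleft()
            latency = self.warm_ms          # warm start: container reused
        else:
            latency = self.cold_ms          # cold start: new container
        if len(self.idle) < self.capacity:
            self.idle.append(fn_name)       # keep the container warm
        return latency

pool = WarmPool(capacity=2)
lat = [pool.run("resize"), pool.run("resize"), pool.run("resize")]
# first invocation pays the cold start; later ones hit the warm pool
```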
    Geo-Distributed Multi-Tier Workload Migration Over Multi-Timescale Electricity Markets
    (Institute of Electrical and Electronics Engineers Inc., 2023) Addya, S.K.; Satpathy, A.; Ghosh, B.C.; Chakraborty, S.; Ghosh, S.K.; Das, S.K.
    Virtual machine (VM) migration enables cloud service providers (CSPs) to balance workload, perform zero-downtime maintenance, and reduce applications' power consumption and response time. Migrating a VM consumes energy at the source, destination, and backbone networks, i.e., intermediate routers and switches, especially in a geo-distributed setting. In this context, we propose a VM migration model called Low Energy Application Workload Migration (LEAWM) aimed at reducing the per-bit migration cost of migrating VMs over geo-distributed clouds. With a geo-distributed cloud connected through multiple Internet Service Providers (ISPs), we develop an approach to find the migration path across ISPs leading to the most feasible destination, using the variation in electricity prices at the ISPs to decide the migration paths. However, reduced power consumption at the expense of higher migration time is intolerable for real-time applications. As finding an optimal relocation is NP-Hard, we propose an Ant Colony Optimization (ACO) based bi-objective optimization technique to strike a balance between migration delay and migration power. A thorough simulation analysis shows that the proposed model can reduce the migration time by 25%-30% and electricity cost by approximately 25% compared to the baseline. © 2008-2012 IEEE.
    LBA: Matching Theory Based Latency-Sensitive Binary Offloading in IoT-Fog Networks
    (Institute of Electrical and Electronics Engineers Inc., 2024) Soni, P.; Deshlahre, O.C.; Satpathy, A.; Addya, S.K.
    The Internet of Things (IoT) is growing more popular with applications such as healthcare services, traffic monitoring, video streaming, and smart homes. These applications produce enormous amounts of data, so a realistic option is to offload computational tasks to nearby fog nodes (FNs) instead of the remote cloud. However, a negligent offloading strategy may cause an anomalous computational traffic load at the FNs, creating congestion that adversely affects latency. The latency of task flows from IoT devices comprises communication latency at the base stations (BSs) and computational latency at the FNs. Therefore, designing offloading algorithms that distribute the computational load evenly across FNs and efficiently utilize FN resources is crucial. To solve this problem, we propose LBA, a binary offloading strategy for fog networks based on matching theory. We utilize the Analytic Hierarchy Process (AHP) to generate the preference lists. The complete offloading problem is modeled as a one-to-many matching game, and the binary offloading technique follows the deferred acceptance algorithm (DAA) to produce a stable assignment. Comprehensive simulations confirm that LBA accomplishes a better-balanced assignment than all baseline algorithms for both homogeneous and heterogeneous inputs. © 2024 IEEE.
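The AHP step used to generate preference lists can be sketched with the standard column-normalization approximation of the principal eigenvector. The 3×3 pairwise comparison matrix below (latency vs. energy vs. load) is a made-up example, not taken from the paper.

```python
# Sketch of AHP criterion-weight derivation via column normalization,
# a standard approximation of the principal eigenvector. The pairwise
# comparison matrix is a hypothetical example.

def ahp_weights(M):
    """Normalize each column of the pairwise matrix, then average rows."""
    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    norm = [[M[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in norm]

# entry M[i][j]: how much more important criterion i is than criterion j
M = [[1,   3,   5],    # latency
     [1/3, 1,   3],    # energy
     [1/5, 1/3, 1]]    # load
w = ahp_weights(M)     # weights sum to 1; latency dominates
```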
    LEASE: Leveraging Energy-Awareness in Serverless Edge for Latency-Sensitive IoT Services
    (Institute of Electrical and Electronics Engineers Inc., 2024) Verma, A.; Satpathy, A.; Das, S.K.; Addya, S.K.
    Resource scheduling catering to real-time IoT services in a serverless-enabled edge network is particularly challenging owing to the workload variability, strict constraints on tolerable latency, and unpredictability in the energy sources powering the edge devices. This paper proposes a framework LEASE that dynamically schedules resources in serverless functions catering to different microservices and adhering to their deadline constraint. To assist the scheduler in making effective scheduling decisions, we introduce a priority-based approach that offloads functions from over-provisioned edge nodes to under-provisioned peer nodes, considering the expended energy in the process without compromising the completion time of microservices. For real-world implementations, we consider a testbed comprising a Raspberry Pi cluster serving as edge nodes, equipped with container orchestrator tools such as Kubernetes and powered by OpenFaaS, an open-source serverless platform. Experimental results demonstrate that compared to the benchmarking algorithm, LEASE achieves a 23.34% reduction in the overall completion time, with 97.64% of microservices meeting their deadline. LEASE also attains a 30.10% reduction in failure rates. © 2024 IEEE.
    MatchCloud: Service Matching for Multi Cloud Marketplace
    (Institute of Electrical and Electronics Engineers Inc., 2021) Chakma, A.; Kumar, S.; Mahato, P.K.; Satpathy, A.; Addya, S.K.
    Modern applications execute in the cloud via independent executable entities called virtual machines (VMs). In a typical multi-SP market with variable pricing and heterogeneous VM resource demands, resource allocation/placement is particularly challenging. To maximize the social welfare of multi-SP markets, this paper proposes a resource allocation technique called MatchCloud, formulated as a one-to-many matching game. Owing to the inapplicability of the classical deferred acceptance algorithm (DAA) due to size heterogeneity, we adopt a modified version of the algorithm. Moreover, preference generation is crucial for matching markets; hence, we also present a simple yet efficient technique to assign preferences to the two stakeholders, i.e., VMs and SPs. Simulation results show that the VM-proposing RDA performs better than when SPs propose. © 2021 IEEE.
    MatchVNE: A Stable Virtual Network Embedding Strategy Based on Matching Theory
    (Institute of Electrical and Electronics Engineers Inc., 2023) Keerthan Kumar, T.G.K.; Srivastava, A.; Satpathy, A.; Addya, S.K.; Koolagudi, S.G.
    Network virtualization (NV) can provide greater flexibility, better control, and improved quality of service (QoS) for the existing Internet architecture by enabling heterogeneous virtual network requests (VNRs) to share substrate network (SN) resources. The efficient assignment of SN resources to the demands of the virtual machines (VMs) and virtual links (VLs) of the VNRs is known as virtual network embedding (VNE) and is proven to be NP-Hard. Deviating from the literature, this paper proposes a framework, MatchVNE, focused on maximizing the revenue-to-cost ratio of VNRs by considering a blend of system and topological attributes that better capture the inherent dependencies among the VMs. MatchVNE performs a stable VM embedding using the deferred acceptance algorithm (DAA). The preferences of the VMs and servers are generated using a hybrid strategy based on entropy and the technique for order of preference by similarity to ideal solution (TOPSIS): the attribute weights are determined using entropy, whereas the server and VM rankings are obtained via TOPSIS. Shortest-path VL embedding follows VM embedding. The simulation results show that MatchVNE outperforms the baselines, achieving a 23% boost in the average revenue-to-cost ratio and a 44% improvement in the average acceptance ratio. © 2023 IEEE.
    NORD: NOde Ranking-based efficient virtual network embedding over single Domain substrate networks
    (Elsevier B.V., 2023) Keerthan Kumar, T.G.; Addya, S.K.; Satpathy, A.; Koolagudi, S.G.
    Network virtualization (NV) allows service providers (SPs) to partition substrate resources into isolated virtual networks (VNs) comprising multiple correlated virtual machines (VMs) and virtual links (VLs), capturing their dependencies. Though NV brings multiple benefits, such as service isolation, improved quality-of-service (QoS), secure communication, and better utilization of substrate resources, it also introduces numerous research challenges. One of the predominant challenges is assigning resources to the virtual components, i.e., VMs and VLs, termed virtual network embedding (VNE). VNE comprises two closely related sub-problems, (i.) VM embedding and (ii.) VL embedding, both of which have been demonstrated to be NP-Hard. In the context of VNE, maximizing the revenue-to-cost ratio remains the focal point for SPs, as it not only boosts the acceptance of VNRs but also effectively utilizes substrate resources. However, the existing literature on VNE suffers from the following pitfalls: it considers (i.) only system resources or (ii.) limited topological attributes. Both kinds of attributes are quintessential in accurately capturing the dependencies between the VNRs and the substrate network, thereby augmenting the revenue-to-cost ratio. This paper proposes an efficient VNE strategy, NOde Ranking-based efficient virtual network embedding over single Domain substrate networks (NORD), to maximize the revenue-to-cost ratio. For VM embedding, NORD uses a hybrid entropy- and TOPSIS-based (technique for order of preference by similarity to ideal solution) ranking strategy for VMs and servers, considering both system and topological attributes that effectively capture the dependencies. Once the ranking is generated, a greedy VM embedding followed by shortest-path VL embedding completes the assignment. Simulation results confirm that NORD attains 40% and 61% increments in average acceptance and revenue-to-cost ratios compared to the baselines. © 2023 Elsevier B.V.
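The TOPSIS ranking step shared by NORD and the related matching-based papers can be sketched as follows. The decision matrix (rows = servers; columns = CPU, memory, node degree) and the equal criterion weights are assumptions for illustration; NORD derives its weights via entropy.

```python
# Illustrative TOPSIS ranking: score each alternative by its relative
# closeness to the ideal solution. The server data and equal weights
# are hypothetical.

import math

def topsis(matrix, weights, benefit):
    """Return closeness scores in [0, 1]; higher means closer to ideal."""
    n_cr = len(matrix[0])
    # vector-normalize each column, then apply criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_cr)]
    V = [[row[j] / norms[j] * weights[j] for j in range(n_cr)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*V))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*V))]
    def dist(row, ref):
        return math.sqrt(sum((row[j] - ref[j]) ** 2 for j in range(n_cr)))
    return [dist(r, worst) / (dist(r, worst) + dist(r, ideal)) for r in V]

servers = [[8, 32, 4],     # CPU cores, memory GB, node degree
           [4, 16, 6],
           [16, 64, 2]]
scores = topsis(servers, [1/3, 1/3, 1/3], benefit=[True, True, True])
best = scores.index(max(scores))   # server with best overall trade-off
```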
    Performance Analysis of Disruptive Instances in Cloud Environment
    (Institute of Electrical and Electronics Engineers Inc., 2024) Nandy, P.; Saha, R.; Satpathy, A.; Chakraborty, S.; Addya, S.K.
    Virtualization enables service providers (SPs) to logically partition resources into virtual machine (VM) instances. Real-world SPs such as Amazon, Google, Microsoft Azure, IBM, and Oracle provide different flavors of VM instances, such as on-demand, reserved, and low-cost or spot, depending on the type of application hosted. On-demand instances are short-term and typically incur a higher cost than reserved instances, which are provisioned for a longer duration at a discounted rate. Low-cost or spot instances are cost-effective compared to on-demand but are reclaimable by the SPs. The SPs often claim that on-demand and spot instances achieve similar performance, but our findings suggest otherwise. This paper studies the performance of spot instances via rigorous experimentation over commercial SPs such as Amazon AWS and Microsoft Azure. Real-world evaluations affirm that spot instances perform poorly compared to their on-demand counterparts concerning memory, CPU, and disk read and write operations. We term such instances disruptive because they do not fulfill the performance, durability, and flexibility expectations of an on-demand instance with the same configuration. We also perform hypothesis testing over the experimental data to further corroborate our claim. © 2024 IEEE.
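The hypothesis-testing step can be sketched with Welch's t statistic for two samples with unequal variances; the paper does not specify which test it uses, so Welch's test is one plausible choice, and the throughput numbers below are fabricated placeholders, not the paper's measurements.

```python
# Sketch of comparing on-demand vs. spot benchmark samples with
# Welch's t statistic. All numbers are invented placeholders.

from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent, unequal-variance samples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

on_demand = [101, 99, 102, 100, 98]   # e.g. MB/s disk throughput
spot = [88, 91, 85, 90, 87]
t = welch_t(on_demand, spot)          # large |t| => means likely differ
```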
    RUSH: Rule-Based Scheduling for Low-Latency Serverless Computing
    (Institute of Electrical and Electronics Engineers Inc., 2025) Birajdar, P.A.; Anchalia, K.; Satpathy, A.; Addya, S.K.
    Serverless computing abstracts server management, enabling developers to focus on application logic while benefiting from automatic scaling and pay-per-use pricing. However, dynamic workloads pose challenges in resource allocation and response time optimization. Response time is a critical performance metric in serverless environments, especially for latency-sensitive applications, where inefficient scheduling can degrade user experience and system efficiency. This paper proposes RUSH (Rule-based Scheduling for Low-Latency Serverless Computing), a lightweight and adaptive scheduling framework designed to reduce cold starts and execution delays. RUSH employs a set of predefined rules that consider system state, resource availability, and timeout thresholds to make proactive, latency-aware scheduling decisions. We implement and evaluate RUSH on a real-world serverless application that generates emoji meanings. Experimental results demonstrate that RUSH consistently outperforms First-Come-First-Served (FCFS), Random Scheduling, and Profaastinate, achieving an approximately 33% reduction in average execution time. © 2025 IEEE.
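The rule-based dispatch idea can be sketched as a priority-ordered list of condition/action pairs where the first matching rule wins. The rule names, thresholds, and actions below are hypothetical illustrations, not RUSH's actual rule set.

```python
# Hypothetical first-match rule dispatch in the spirit of RUSH.
# All rule conditions, thresholds, and action names are invented.

def schedule(warm_available, queue_len, timeout_ms):
    """Apply rules in priority order; the first matching rule wins."""
    rules = [
        (warm_available > 0, "dispatch_warm"),     # avoid a cold start
        (timeout_ms < 100,   "dispatch_cold_now"), # deadline pressure
        (queue_len < 10,     "enqueue"),           # batch if there is slack
        (True,               "scale_out"),         # fallback rule
    ]
    for cond, action in rules:
        if cond:
            return action

action = schedule(warm_available=0, queue_len=3, timeout_ms=500)
```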
    SEDViN: Secure embedding for dynamic virtual network requests using a multi-attribute matching game
    (Academic Press Inc., 2025) Kumar, T.G.K.; Kumar, R.; Achal, A.M.; Satpathy, A.; Addya, S.K.
    Network virtualization (NV) has gained significant attention as it allows service providers (SPs) to share substrate network (SN) resources by partitioning them into isolated virtual network requests (VNRs) comprising interrelated virtual machines (VMs) and virtual links (VLs). Although NV provides various advantages, such as service separation, enhanced quality-of-service, reliability, and improved SN utilization, it also presents multiple scientific challenges. One pivotal challenge is secure virtual network embedding (SVNE): assigning SN resources to the components of a VNR, i.e., VMs and VLs, while adhering to security demands, which is computationally intractable, as it is proven to be NP-Hard. In this context, maximizing the acceptance and revenue-to-cost ratios remains of utmost priority for SPs, as it not only increases revenue but also effectively utilizes the large pool of SN resources. Though VNE is a well-researched problem, the existing literature has the following flaws: (i.) security features of VMs and VLs are ignored, (ii.) topological attributes receive limited consideration, and (iii.) only static VNRs are supported. SPs therefore need an embedding framework that overcomes these pitfalls. This work proposes such a framework, Secure Embedding for Dynamic Virtual Network requests using a multi-attribute matching game (SEDViN), in which a deferred acceptance algorithm (DAA) based matching game is used for effective embedding. SEDViN operates in two steps to obtain a secure embedding of dynamic VNRs. First, it generates a unified ranking for VMs and servers using a combination of entropy and the technique for order of preference by similarity to the ideal solution (TOPSIS), considering network, security, and system attributes. Taking these as inputs, in the second step, VNR embedding is conducted using the deferred acceptance approach: a one-to-many matching strategy for VM embedding, with VL embedding via the shortest-path algorithm. The performance of SEDViN is evaluated through simulations and compared against different baseline approaches. The simulation outcomes show that SEDViN surpasses the baselines with gains of 56% in the acceptance and 44% in the revenue-to-cost ratios. © 2025 Elsevier Inc.
    Virtual Machine Placement in Non-Cooperative Cloud Federation-Alliance
    (Institute of Electrical and Electronics Engineers Inc., 2023) Addya, S.K.; Satpathy, A.; Turuk, A.K.; Shaoo, B.
    Many inter-cloud organizations have been proposed to overcome the current limitations of cloud computing, such as service interruption, lack of interoperability, and degradation of services. One such multi-cloud architecture is the cloud federation, where multiple geographically distributed autonomous service providers voluntarily agree to share resources governed by a well-defined set of rules. Although federation offers numerous benefits for service providers, resource sharing is supervised using a strict set of protocols offering limited flexibility. Hence, this paper proposes a relaxed resource-sharing model for service providers called the cloud federation-alliance. The formation of the alliance is modeled as a non-cooperative game among the service providers. The game's stability and the alliance's performance loss are studied using the price-of-stability (PoS) and price-of-anarchy (PoA), respectively. A modified best-fit placement strategy focuses on reducing power consumption. To assess the alliance placement, we compare its performance with random and worst-fit allocation techniques. This work aims to build a stable, sustainable multi-cloud federation-alliance and address its critical issues. Extensive simulation results show stability between 2% and 30% with varying workloads. © 2023 IEEE.
    VMAP: Matching-based Efficient Offloading in IoT-Fog Environments with Variable Resources
    (IEEE Computer Society, 2023) Morey, J.V.; Satpathy, A.; Addya, S.K.
    Fog computing is a promising technology for critical, resource-intensive, and time-sensitive applications. A significant challenge here is generating an offloading solution that minimizes latency, energy, and the number of outages in a dense IoT-Fog environment. Existing solutions either focus on a single objective or dedicate fixed-sized resources as virtual resource units (VRUs); they are restrictive rather than comprehensive, resulting in poor performance. To overcome these challenges, this paper proposes the VMAP model, addressing the above lacunae. The offloading problem is abstracted as a one-to-many matching game between two sets of entities, tasks and fog nodes (FNs), considering the preferences of both. The preferences and parameter weights are generated using the Analytic Hierarchy Process (AHP). Exhaustive simulations indicate that the proposed strategy outperforms the baseline algorithms, reducing average task latency and energy consumption by 35% and 22.2%, respectively. Resource utilization also improves by 28.57%, and 97.98% of tasks complete their execution within their deadlines. © 2023 IEEE.

Maintained by Central Library NITK | DSpace software copyright © 2002-2026 LYRASIS
