Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results (17 results)
Item MatchCloud: Service Matching for Multi Cloud Marketplace (Institute of Electrical and Electronics Engineers Inc., 2021)
Chakma, A.; Kumar, S.; Mahato, P.K.; Satpathy, A.; Addya, S.K.
Modern applications execute in the cloud via independent executable entities called virtual machines (VMs). In a typical multi-service-provider (SP) market with variable pricing and heterogeneous VM resource demands, resource allocation/placement is particularly challenging. To maximize the social welfare of multi-SP markets, this paper proposes a resource allocation technique called MatchCloud, formulated as a one-to-many matching game. Since the classical deferred acceptance algorithm (DAA) is inapplicable owing to size heterogeneity, we adopt a modified version of the algorithm. Moreover, preference generation is crucial in matching markets, so we also present a simple yet efficient technique to assign preferences to the two stakeholders, i.e., VMs and SPs. Simulation results show that the VM-proposing RDA performs better than its SP-proposing counterpart. © 2021 IEEE.

Item A Preliminary Study of Serverless Platforms for Latency Sensitive Applications (Institute of Electrical and Electronics Engineers Inc., 2022)
Sarathi, T.V.; Sai Nischal Reddy, J.; Shiva, P.; Saha, R.; Satpathy, A.; Addya, S.K.
Serverless computing is a new-age cloud delivery model in which resources are provisioned only for the duration of event-triggered functions. It dramatically improves the flexibility and scalability of applications compared to virtual machine (VM)- and container-based service delivery models. As serverless computing gains significant impetus, major cloud providers such as Amazon, Microsoft Azure, and Google have launched their respective serverless computing platforms. However, selecting an appropriate service provider (SP) that meets a user's desired quality of service (QoS) is challenging.
Moreover, not enough public information is available to assist users in making such decisions accurately. Hence, this work provides a preliminary analysis via real-time experimentation, acting as a stepping stone for users in selecting an appropriate SP. We consider execution time and execution cost as metrics to assess different real-world SPs' performance under different workloads. Experimental results show that Azure Functions achieved lower execution times than AWS Lambda and Google Cloud Functions, whereas in terms of execution cost, AWS Lambda costs much less than the other two platforms. © 2022 IEEE.

Item Virtual Machine Placement in Non-Cooperative Cloud Federation-Alliance (Institute of Electrical and Electronics Engineers Inc., 2023)
Addya, S.K.; Satpathy, A.; Turuk, A.K.; Shaoo, B.
Many inter-cloud organizations have been proposed to overcome the current limitations of cloud computing, such as service interruption, lack of interoperability, and degradation of services. One such multi-cloud architecture is the cloud federation, where multiple geographically distributed autonomous service providers voluntarily agree to share resources governed by a well-defined set of rules. Although federation offers numerous benefits to service providers, resource sharing is supervised by a strict set of protocols that offer limited flexibility. Hence, this paper proposes a relaxed resource-sharing model for service providers called the cloud federation-alliance. The formation of the alliance is modeled as a non-cooperative game among the service providers. The game's stability and the alliance's performance loss are studied using the price of stability (PoS) and the price of anarchy (PoA), respectively. A modified best-fit placement strategy focuses on reducing power consumption. To assess the alliance placement, we compare its performance with random and worst-fit allocation techniques.
This work aims to build a stable, sustainable, multi-cloud federation alliance and address this structure's critical issues. Extensive simulation results show stability between 2% and 30% under varying workloads. © 2023 IEEE.

Item MatchVNE: A Stable Virtual Network Embedding Strategy Based on Matching Theory (Institute of Electrical and Electronics Engineers Inc., 2023)
Keerthan Kumar, T.G.K.; Srivastava, A.; Satpathy, A.; Addya, S.K.; Koolagudi, S.G.
Network virtualization (NV) can provide greater flexibility, better control, and improved quality of service (QoS) for the existing Internet architecture by enabling heterogeneous virtual network requests (VNRs) to share substrate network (SN) resources. Efficiently assigning SN resources to the virtual machines (VMs) and virtual links (VLs) of the VNRs is known as virtual network embedding (VNE) and is proven NP-hard. Deviating from the literature, this paper proposes a framework, MatchVNE, that maximizes the revenue-to-cost ratio of VNRs by considering a blend of system and topological attributes that better capture the inherent dependencies among the VMs. MatchVNE performs stable VM embedding using the deferred acceptance algorithm (DAA). The preferences of the VMs and servers are generated using a hybrid strategy: attribute weights are determined using entropy, and the VM and server rankings are obtained via the technique for order of preference by similarity to ideal solution (TOPSIS). Shortest-path VL embedding follows VM embedding. Simulation results show that MatchVNE outperforms the baselines, achieving a 23% boost in the average revenue-to-cost ratio and a 44% improvement in the average acceptance ratio.
© 2023 IEEE.

Item VMAP: Matching-based Efficient Offloading in IoT-Fog Environments with Variable Resources (IEEE Computer Society, 2023)
Morey, J.V.; Satpathy, A.; Addya, S.K.
Fog computing is a promising technology for critical, resource-intensive, and time-sensitive applications. A significant challenge here is generating an offloading solution that minimizes latency, energy, and the number of outages in a dense IoT-Fog environment. Existing solutions either focus on a single objective or mainly dedicate fixed-sized resources as virtual resource units (VRUs); being restrictive rather than comprehensive, they perform poorly. To overcome these challenges, this paper proposes VMAP, a model addressing the above shortcomings. The offloading problem is abstracted as a one-to-many matching game between two sets of entities, tasks and fog nodes (FNs), considering the preferences of both. The preferences and parameter weights are generated using the analytic hierarchy process (AHP). Exhaustive simulations indicate that the proposed strategy outperforms the baseline algorithms, reducing average task latency and energy consumption by 35% and 22.2%, respectively. Additionally, resource utilization improves by 28.57%, and 97.98% of tasks complete execution within their deadlines. © 2023 IEEE.

Item LBA: Matching Theory Based Latency-Sensitive Binary Offloading in IoT-Fog Networks (Institute of Electrical and Electronics Engineers Inc., 2024)
Soni, P.; Deshlahre, O.C.; Satpathy, A.; Addya, S.K.
The Internet of Things (IoT) is growing more popular with applications such as healthcare services, traffic monitoring, video streaming, and smart homes. These applications produce enormous amounts of data, so a realistic option is to offload computational tasks to nearby fog nodes (FNs) instead of the remote cloud.
However, a negligent offloading strategy may cause an anomalous computational traffic load at the FNs, creating congestion that adversely affects latency. Moreover, the latency of task flows from IoT devices comprises communication latency at the base station (BS) and computational latency at the FNs. Therefore, designing offloading algorithms that evenly distribute the computational load across FNs and efficiently utilize FN resources is crucial. To solve this problem, we propose LBA, a binary offloading strategy for fog networks based on matching theory. We utilize the analytic hierarchy process (AHP) to generate the preference lists, model the complete offloading problem as a one-to-many matching game, and apply the deferred acceptance algorithm (DAA) to produce a stable assignment. Comprehensive simulations show that LBA achieves a better-balanced assignment than all baseline algorithms for both homogeneous and heterogeneous inputs. © 2024 IEEE.

Item Performance Analysis of Disruptive Instances in Cloud Environment (Institute of Electrical and Electronics Engineers Inc., 2024)
Nandy, P.; Saha, R.; Satpathy, A.; Chakraborty, S.; Addya, S.K.
Virtualization enables service providers (SPs) to logically partition resources into virtual machine (VM) instances. Real-world SPs such as Amazon, Google, Microsoft Azure, IBM, and Oracle provide different flavors of VM instances, such as on-demand, reserved, and low-cost (spot), depending on the type of application hosted. On-demand instances are short-term and typically incur a higher cost than reserved instances, which are provisioned for a longer duration at a discounted rate. Low-cost or spot instances are cost-effective compared to on-demand but are reclaimable by the SPs. The SPs often claim that on-demand and spot instances achieve similar performance, but in practice this is far from true.
This paper studies the performance of spot instances via rigorous experimentation on commercial SPs such as Amazon AWS and Microsoft Azure. Real-world evaluations affirm that spot instances perform poorly compared to their on-demand counterparts with respect to memory, CPU, and disk read and write operations. We identify such instances as disruptive, and name them so because they do not fulfill the performance, durability, and flexibility expectations of an on-demand instance with the same configuration. We also perform hypothesis testing on the experimental data to further corroborate our claim. © 2024 IEEE.

Item LEASE: Leveraging Energy-Awareness in Serverless Edge for Latency-Sensitive IoT Services (Institute of Electrical and Electronics Engineers Inc., 2024)
Verma, A.; Satpathy, A.; Das, S.K.; Addya, S.K.
Resource scheduling for real-time IoT services in a serverless-enabled edge network is particularly challenging owing to workload variability, strict constraints on tolerable latency, and unpredictability in the energy sources powering the edge devices. This paper proposes LEASE, a framework that dynamically schedules resources in serverless functions catering to different microservices while adhering to their deadline constraints. To assist the scheduler in making effective decisions, we introduce a priority-based approach that offloads functions from over-provisioned edge nodes to under-provisioned peer nodes, accounting for the energy expended in the process without compromising the completion time of microservices. For real-world implementation, we consider a testbed comprising a Raspberry Pi cluster serving as edge nodes, equipped with the container orchestrator Kubernetes and powered by OpenFaaS, an open-source serverless platform.
Experimental results demonstrate that, compared to the benchmark algorithm, LEASE achieves a 23.34% reduction in overall completion time, with 97.64% of microservices meeting their deadlines. LEASE also attains a 30.10% reduction in failure rates. © 2024 IEEE.

Item Adaptive Workload Management for Enhanced Function Performance in Serverless Computing (Association for Computing Machinery, Inc, 2025)
Birajdar, P.A.; Harsha, V.; Satpathy, A.; Addya, S.K.
Serverless computing streamlines application deployment by removing the need for infrastructure management, but fluctuating workloads make resource allocation challenging. To address this, we propose an adaptive workload manager that intelligently balances workloads, optimizes resource use, and adapts to changes with auto-scaling, ensuring efficient and reliable serverless performance. Preliminary experiments demonstrate an approximately 0.6X and 2X improvement in execution time and resource utilization, respectively, compared to the First-Come-First-Serve (FCFS) scheduling algorithm. © 2025 Copyright held by the owner/author(s).

Item CoMCLOUD: Virtual Machine Coalition for Multi-Tier Applications over Multi-Cloud Environments (Institute of Electrical and Electronics Engineers Inc., 2023)
Addya, S.K.; Satpathy, A.; Ghosh, B.C.; Chakraborty, S.; Ghosh, S.K.; Das, S.K.
Applications hosted in commercial clouds are typically multi-tier and comprise multiple tightly coupled virtual machines (VMs). Service providers (SPs) cater to users with VM instances whose configurations and pricing differ depending on the location of the data center (DC) hosting the VMs. However, selecting VMs to host multi-tier applications is challenging owing to the trade-off between cost and quality of service (QoS) that depends on the placement of the VMs. This paper proposes a multi-cloud broker model called CoMCLOUD that selects a sub-optimal VM coalition for a multi-tier application from an SP with minimum coalition pricing and maximum QoS.
To strike a trade-off between cost and QoS, we use an ant-colony-based optimization technique. The overall service selection game is modeled as a first-price sealed-bid auction aimed at maximizing the overall revenue of the SPs. Further, as hosted VMs often face demand spikes, we present a parallel migration strategy that migrates VMs with minimum disruption time. Detailed experiments show that our approach can improve federation profit by up to 23% at the expense of approximately 15% increased latency, compared to the baselines. © 2023 IEEE.
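Several of the items above (MatchCloud, MatchVNE, LBA) build on the deferred acceptance algorithm (DAA) for one-to-many matching. As a rough illustration only, a minimal proposer-side DAA can be sketched as follows; the task/fog-node names, preference lists, and capacities are hypothetical, and the papers themselves use modified variants (e.g., size-heterogeneous VMs, AHP/entropy/TOPSIS-generated preferences):

```python
# Illustrative one-to-many deferred acceptance (hospitals/residents style).
# Assumes complete preference lists on both sides; all inputs are hypothetical.

def deferred_acceptance(proposer_prefs, acceptor_prefs, capacity):
    """proposer_prefs: {p: [a1, a2, ...]} ordered best-first.
    acceptor_prefs: {a: [p1, p2, ...]} ordered best-first.
    capacity: {a: int} maximum proposers an acceptor can hold."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next index p will propose to
    held = {a: [] for a in acceptor_prefs}        # tentatively accepted proposers
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        prefs = proposer_prefs[p]
        if next_choice[p] >= len(prefs):
            continue                              # p has exhausted its list
        a = prefs[next_choice[p]]
        next_choice[p] += 1
        held[a].append(p)
        held[a].sort(key=lambda q: rank[a][q])    # best-ranked first
        if len(held[a]) > capacity[a]:
            free.append(held[a].pop())            # reject worst over capacity
    return held

# Hypothetical example: three tasks, two fog nodes with capacities 2 and 1.
tasks = {"t1": ["f1", "f2"], "t2": ["f1", "f2"], "t3": ["f1", "f2"]}
fns = {"f1": ["t1", "t2", "t3"], "f2": ["t2", "t1", "t3"]}
print(deferred_acceptance(tasks, fns, {"f1": 2, "f2": 1}))
# → {'f1': ['t1', 't2'], 'f2': ['t3']}
```

The resulting assignment is stable in the usual sense: no task and fog node that prefer each other over their current match are left unmatched with each other.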

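MatchVNE's preference generation combines entropy-derived attribute weights with TOPSIS ranking. A self-contained sketch of that general recipe is below; the attribute matrix values are invented, all attributes are treated as benefit-type (larger is better), and the paper's exact normalizations may differ:

```python
# Illustrative entropy weighting + TOPSIS ranking.
# Rows of X = candidates (e.g., servers); columns = attributes (e.g., CPU, RAM).
import math

def entropy_weights(X):
    m = len(X)
    cols = list(zip(*X))
    P = [[x / sum(col) for x in col] for col in cols]  # column-normalize
    k = 1.0 / math.log(m)
    e = [-k * sum(p * math.log(p) for p in col if p > 0) for col in P]
    d = [1 - ej for ej in e]                           # divergence per attribute
    return [dj / sum(d) for dj in d]                   # weights sum to 1

def topsis(X, weights):
    cols = list(zip(*X))
    norms = [math.sqrt(sum(x * x for x in col)) for col in cols]
    V = [[w * x / n for x, w, n in zip(row, weights, norms)] for row in X]
    best = [max(col) for col in zip(*V)]               # ideal solution
    worst = [min(col) for col in zip(*V)]              # anti-ideal solution
    scores = []
    for row in V:
        dp = math.sqrt(sum((v - b) ** 2 for v, b in zip(row, best)))
        dm = math.sqrt(sum((v - w) ** 2 for v, w in zip(row, worst)))
        scores.append(dm / (dp + dm))                  # closeness to ideal
    return scores
```

Candidates are then ranked by descending closeness score; in MatchVNE-style schemes, these rankings become the preference lists fed to the deferred acceptance matching.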