Browsing by Author "Addya, S.K."
Now showing 1 - 20 of 39
Item A Preliminary Study of Serverless Platforms for Latency Sensitive Applications (Institute of Electrical and Electronics Engineers Inc., 2022) Sarathi, T.V.; Sai Nischal Reddy, J.; Shiva, P.; Saha, R.; Satpathy, A.; Addya, S.K.
Serverless computing is a new-age cloud delivery model in which resources are provisioned only for the duration of event-triggered functions. It dramatically improves the flexibility and scalability of applications compared to virtual machine (VM)- or container-based service delivery models. As serverless computing gains momentum, major cloud providers such as Amazon, Microsoft, and Google have launched their respective serverless platforms. However, for a user, selecting a service provider (SP) that meets the desired quality of service (QoS) is challenging, and there is not enough public information available to support such decisions. Hence, in this work we provide a preliminary analysis via real-time experimentation, acting as a stepping stone for users selecting an appropriate SP. We use execution time and execution cost as evaluation metrics to assess the performance of different real-world SPs under different workloads. Experimental results show that Azure Functions achieved lower execution times than AWS Lambda and Google Cloud Functions, whereas AWS Lambda costs much less than the other two platforms. © 2022 IEEE.

Item A Time Series Forecasting Approach to Minimize Cold Start Time in Cloud-Serverless Platform (Institute of Electrical and Electronics Engineers Inc., 2022) Jegannathan, A.P.; Saha, R.; Addya, S.K.
Serverless computing has become a common term in the technology world among developers and businesses.
Using the Function-as-a-Service (FaaS) model of serverless computing, one can easily deploy applications to the cloud and go live in a matter of days. It allows developers to focus on their core business logic, while backend processes such as managing the infrastructure, scaling the application, and updating software and other dependencies are handled by the cloud service provider. One feature of serverless computing is the ability to scale containers down to zero, which gives rise to a problem called cold start. The challenge is to reduce the cold start latency without consuming extra resources. In this paper, we use SARIMA (Seasonal Auto-Regressive Integrated Moving Average), a classical time series forecasting model, to predict the arrival times of incoming requests and accordingly scale the number of containers up or down to minimize resource wastage, thus reducing the function launch time. Finally, we implement PBA (Prediction Based Autoscaler) and compare it with the default Horizontal Pod Autoscaler (HPA) that ships with Kubernetes. The results show that PBA performs considerably better than the default HPA while reducing resource wastage. © 2022 IEEE.

Item Adaptive Workload Management for Enhanced Function Performance in Serverless Computing (Association for Computing Machinery, Inc, 2025) Birajdar, P.A.; Harsha, V.; Satpathy, A.; Addya, S.K.
Serverless computing streamlines application deployment by removing the need for infrastructure management, but fluctuating workloads make resource allocation challenging. To address this, we propose an adaptive workload manager that intelligently balances workloads, optimizes resource use, and adapts to changes with auto-scaling, ensuring efficient and reliable serverless performance.
Preliminary experiments demonstrate approximately 0.6× and 2× improvements in execution time and resource utilization, respectively, compared to the First-Come-First-Serve (FCFS) scheduling algorithm. © 2025 Copyright held by the owner/author(s).

Item Automating the Selection of Container Orchestrators for Service Deployment (Institute of Electrical and Electronics Engineers Inc., 2022) Chaurasia, P.; Nath, S.B.; Addya, S.K.; Ghosh, S.K.
With the ubiquitous use of cloud computing, services are deployed as virtual machines (VMs) in cloud servers. However, VM-based deployment often consumes a large amount of resources. To minimize the resource consumption of service deployment, container-based lightweight virtualization is used. Managing the containers for deployment is a challenging problem, as container managers need to consume fewer resources while also catering to the needs of clients. To choose the right container manager, we propose an architecture based on application and user needs, with a machine learning based decision engine at its core. We consider Docker containers for experimentation. The experimental results show that the proposed system can select the proper container manager between a Docker Compose based manager and Kubernetes. © 2022 IEEE.

Item Collaborative Deadline-sensitive Multi-task Offloading in Vehicular-Cloud Networks (Institute of Electrical and Electronics Engineers Inc., 2025) Kumar, P.; Sushma, S.A.; Chandrasekaran, K.; Addya, S.K.
With growing technological advancements in the Internet and advanced functionalities in vehicular networks, it becomes crucial to execute tasks quickly and efficiently. However, limited onboard computational capacity and vehicle mobility make it challenging to accomplish latency-sensitive tasks efficiently. Task offloading provides a promising solution to overcome these challenges.
Cloud data centers provide efficient solutions, but returning results to the vehicles takes longer due to the large physical distance. Leveraging edge servers to execute latency-sensitive tasks provides fast, interactive responses and lower transmission cost. However, in a dynamic network, vehicles are in constant motion at varying speeds, resulting in frequent handoffs from one base station to another. Our proposed work selects the optimal nodes to perform binary offloading with minimum cost using a collaborative vehicular network. We use a greedy offloading approach to address these challenges and achieve better quality of service and quality of experience in a dynamic environment, considering total cost, delay reduction ratio, and satisfaction ratio. Compared to baseline algorithms, the proposed work reduces total system cost by 60.44%, improves the delay reduction ratio by 53.43%, and improves the satisfaction ratio by 36%. © 2025 IEEE.

Item CoMCLOUD: Virtual Machine Coalition for Multi-Tier Applications over Multi-Cloud Environments (Institute of Electrical and Electronics Engineers Inc., 2023) Addya, S.K.; Satpathy, A.; Ghosh, B.C.; Chakraborty, S.; Ghosh, S.K.; Das, S.K.
Applications hosted in commercial clouds are typically multi-tier and comprise multiple tightly coupled virtual machines (VMs). Service providers (SPs) cater to users with VM instances of different configurations and pricing, depending on the location of the data center (DC) hosting the VMs. However, selecting VMs to host multi-tier applications is challenging due to the trade-off between cost and quality of service (QoS), which depends on the placement of the VMs. This paper proposes a multi-cloud broker model called CoMCLOUD to select a sub-optimal VM coalition for multi-tier applications from an SP with minimum coalition pricing and maximum QoS. To strike a trade-off between cost and QoS, we use an ant-colony-based optimization technique.
The overall service selection game is modeled as a first-price sealed-bid auction aimed at maximizing the overall revenue of SPs. Further, as the hosted VMs often face demand spikes, we present a parallel migration strategy to migrate VMs with minimum disruption time. Detailed experiments show that our approach can improve federation profit by up to 23% at the expense of an increase in latency of approximately 15%, compared to the baselines. © 2013 IEEE.

Item Container-based Service State Management in Cloud Computing (Institute of Electrical and Electronics Engineers Inc., 2021) Nath, S.B.; Addya, S.K.; Chakraborty, S.; Ghosh, S.K.
In a cloud data center, client requests are catered by placing services in its servers. Such services are deployed through a sandboxing platform to ensure proper isolation among services from different users. Due to their lightweight nature, containers have become increasingly popular for such sandboxing. However, to support effective and efficient data center resource usage with minimum resource footprints, improving the container consolidation ratio is significant for cloud service providers. Toward this end, in this paper we propose a promising direction to significantly boost the consolidation ratio of a data center environment by effectively managing container states. We observe that many cloud-based application services are event-triggered, so they remain inactive unless an external service request arrives. We exploit the fact that containers remain idle when the underlying service is not active, and thus such idle containers can be checkpointed until an external service request comes. The challenge here is to design an efficient mechanism such that an idle container can be resumed quickly, to prevent loss of the application's quality of service (QoS). We have implemented the system, and the evaluation is performed on Amazon Elastic Compute Cloud.
The experimental results show that the proposed algorithm can manage container states while ensuring an increase in the consolidation ratio. © 2021 IFIP.

Item Containerized deployment of micro-services in fog devices: a reinforcement learning-based approach (Springer, 2022) Nath, S.B.; Chattopadhyay, S.; Karmakar, R.; Addya, S.K.; Chakraborty, S.; Ghosh, S.K.
The real power of fog computing comes when it is deployed in a smart environment, where the raw data sensed by Internet of Things (IoT) devices should not cross the data boundary, to preserve the privacy of the environment, yet fast computation and processing of the data are required. Devices such as home network gateways, WiFi access points, or core network switches can work as fog devices in such scenarios, as their computing resources can be leveraged by applications for data processing. However, these devices have their own primary workloads (like packet forwarding in a router/switch) that are time-varying and often generate spikes in resource demand when bandwidth-hungry end-user applications are started. In this paper, we propose pick-test-choose (PTC), a dynamic micro-service deployment and execution model that accounts for such time-varying primary workloads and workload spikes in the fog nodes. The proposed mechanism uses a reinforcement learning mechanism, Bayesian optimization, to decide the target fog node for an application micro-service based on its prior observations of the system's states. We implement PTC in a testbed setup and evaluate its performance. We observe that PTC performs better than four other baseline models for micro-service offloading in a fog computing framework. In an experiment with an optical character recognition service, PTC gives average response times in the range of 9.71 s–50 s, which is better than Foglets (24.21 s–80.35 s), first-fit (16.74 s–88 s), best-fit (11.48 s–57.39 s), and a mobility-based method (12 s–53 s).
A further scalability study with an emulated setup over Amazon EC2 confirms the superiority of PTC over the other baselines. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.

Item CSMD: Container state management for deployment in cloud data centers (Elsevier B.V., 2025) Nath, S.B.; Addya, S.K.; Chakraborty, S.; Ghosh, S.K.
As containers are lightweight in resource usage, they are preferred for cloud and edge computing service deployment. Containers serve requests whenever a user sends a query; however, they remain idle when no user request arrives. Improving the consolidation ratio of container deployments is essential to ensure fewer servers are used in a cloud data center with an optimal resource balance. To increase the consolidation ratio of a cloud data center, in this paper we propose a system called Container State Management for Deployment (CSMD) to manage container states. CSMD uses an algorithm to checkpoint idle containers so that their resources can be released; new containers are then deployed using the released resources on a server. In addition, CSMD periodically checks container status, and containers are resumed from the checkpointed state when users request them. We evaluate CSMD on Amazon Elastic Compute Cloud (Amazon EC2) by performing efficient state management of the containers. The experiments in the Amazon cloud show that the proposed CSMD system is superior to existing algorithms, as it increases the consolidation ratio of data centers. © 2024 Elsevier B.V.

Item DCRDA: deadline-constrained function scheduling in serverless-cloud platform (Springer, 2025) Birajdar, P.A.; Meena, D.; Satpathy, A.; Addya, S.K.
The serverless computing model frees developers from operational and management tasks, allowing them to focus solely on business logic.
This paper addresses the computationally challenging function-container-virtual machine (VM) scheduling problem, especially under stringent deadline constraints. We propose a two-stage holistic scheduling framework called DCRDA, targeting deadline-constrained function scheduling. In the first stage, function-to-container scheduling is modeled as a one-to-one matching game and solved using the classical Deferred Acceptance Algorithm (DAA). The second stage addresses container-to-VM assignment, modeled as a many-to-one matching problem and solved using a variant of the DAA, the Revised Deferred Acceptance Algorithm (RDA), to account for heterogeneous resource demands. Since matching-based strategies require agent preferences, a Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) ranking mechanism is employed to prioritize alternatives based on execution time, deadlines, and resource demands. The primary goal of DCRDA is to maximize the success ratio (SR), defined as the ratio of functions executed within their deadlines to the total number of functions. Extensive test-bed validations over commercial providers such as Amazon EC2 show that the proposed framework significantly improves the success ratio compared to baseline approaches. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.

Item DeepVNE: Deep Reinforcement and Graph Convolution Fusion for Virtual Network Embedding (Institute of Electrical and Electronics Engineers Inc., 2024) Keerthan Kumar, T.G.; Kb, A.; Siddheshwar, A.; Marali, A.; Kamath, A.; Koolagudi, S.G.; Addya, S.K.
Network virtualization (NV) plays a crucial role in modern network management. One of the fundamental challenges in NV is allocating physical network (PN) resources to meet the demands of virtual network requests (VNRs). This process is known as virtual network embedding (VNE) and is NP-hard.
Most existing approaches to VNE are based on heuristic, meta-heuristic, and exact strategies, with limited flexibility and the risk of getting stuck in locally optimal solutions. To address this, we propose DeepVNE, a deep reinforcement learning (DRL) and graph convolutional network (GCN) fusion for VNE that maximizes the revenue-to-cost ratio. DeepVNE takes advantage of actor-critic models within the DRL framework to observe network states and provide solutions matched to current conditions. It effectively captures the structural dependencies of both VNRs and PN resources via GCNs, allowing better decision-making during embedding. Considering several features of the agents throughout the training phase improves resource utilization. Experiments show that DeepVNE outperforms the baselines, gaining a 51% acceptance ratio and a 28% revenue-to-cost ratio. © 2024 IEEE.

Item Delay-aware partial task offloading using multicriteria decision model in IoT–fog–cloud networks (Academic Press, 2025) S.a, S.; E, M.; Addya, S.K.; Rahman, S.; Pal, S.; Karmakar, C.
Fog computing plays a prominent role in offloading computational tasks in heterogeneous environments, since it provides lower service delay than traditional cloud computing. Internet of Things (IoT) devices cannot handle complex tasks due to limited battery power, storage, and computational capability. Full offloading struggles to deliver efficient computation delay due to longer response times and transmission costs. A suitable solution to this problem is to partition tasks into splittable subtasks. Considering multi-criteria decision parameters such as processing efficiency and deadline helps achieve efficient resource allocation and task assignment. Matching theory is applied to map task nodes to heterogeneous fog nodes and VMs for stability.
Compared to baseline algorithms, the proposed Resource Allocation Based on Processing Efficiency (RABP) and Task Assignment Based on Completion Time (TAC) algorithms are efficient enough to provide reasonable service delay and to discard non-beneficial tasks, i.e., tasks that cannot execute within the deadline. © 2025 The Authors.

Item Democratizing University Seat Allocation using Blockchain (Institute of Electrical and Electronics Engineers Inc., 2022) Jahnavi, Y.; Prathyusha, M.; Shahanaz, S.; Thummar, D.; Ghosh, B.C.; Addya, S.K.
Online seat allocation processes, such as the Joint Seat Allocation Authority in India, have streamlined the university seat allocation process and reduced the risk of seats remaining vacant. Similar centralized online counseling processes are used for many universities in different countries. In spite of being a collaborative process involving different stakeholders, such systems are centralized and have inherent limitations, including lack of transparency and risks of censorship, manipulation, and a single point of failure. In this demonstration, we showcase a decentralized ledger technology based system and application for democratizing the university seat allocation process. We demonstrate that the user experience of the proposed system is almost identical to the traditional centralized one, while providing the additional benefits of transparency, auditability, and non-repudiability of the decentralized architecture. © 2022 IEEE.

Item DeSAT: Towards Transparent and Decentralized University Counselling Process (Institute of Electrical and Electronics Engineers Inc., 2022) Thummar, D.; Jahnavi, Y.; Prathyusha, M.; Shahanaz, S.; Ghosh, B.C.; Addya, S.K.
The admission process in academic institutions (universities, colleges, etc.) is more digitized than ever. From standardized tests to application processing, shortlisting on the basis of merit, and even document verification, everything is now carried out online.
However, despite the enormous benefits in convenience, existing admission processes severely lack transparency. The entire process depends on certain central authoritative entities, namely the testing authorities and the institutes themselves. Moreover, critical tasks such as verifying students' educational and identity-related documents are tedious, and the effort is duplicated across all institutions. In this work, we attempt to overcome these limitations of the existing academic admission workflow by designing a distributed ledger based framework that involves the academic institutes, testing authorities, document and credential validators, as well as the students. Our framework, DeSAT, uses verifiable credentials together with a permissioned ledger to remove duplicate effort in the verification of test scores and the validation of students' documents. In addition, it makes the entire process transparent and auditable while enforcing fair, merit-based seat allotment through smart contracts. Through a prototype implementation using Hyperledger Fabric, Indy, and Aries, we demonstrate the practicality of DeSAT and show that our system offers acceptable performance while scaling with the number of participating institutions. © 2022 IEEE.

Item Edge and serverless computing for the next generation of ad hoc networks (Elsevier B.V., 2025) Addya, S.K.; Pal, S.; Satpathy, A.; Jaisinghani, D.
This special issue presents a forward-looking exploration of the integration of edge computing and serverless architectures to enable the next generation of ad hoc networks. Ad hoc networks, characterized by their decentralized and dynamic nature, play a critical role in environments where traditional infrastructure is unavailable or unreliable. However, they face significant challenges in latency, scalability, resource efficiency, and real-time responsiveness.
Our vision is to show how to bridge these gaps by combining the localized processing power of edge computing with the flexibility and scalability of serverless (function-as-a-service) models. This integration allows real-time, event-driven decision-making directly at the network edge, reducing reliance on centralized infrastructure and enabling more autonomous and intelligent network behavior. We argue that this paradigm will become essential as the number of connected devices and data-intensive applications continues to grow. From disaster response and smart transportation to remote healthcare and the industrial Internet of Things (IoT), such systems demand scalable, resilient, and low-latency solutions. This special issue outlines the potential of edge-serverless synergy, highlights key technical challenges such as orchestration, security, and resource constraints, and proposes research directions to address them. We envision this integration as a cornerstone of future intelligent, distributed systems capable of operating in highly dynamic, real-world conditions. © 2025.

Item Efficient Task Offloading in IoT-Fog Network (Association for Computing Machinery, 2023) Morey, J.V.; Addya, S.K.
Applications using AI or augmented reality are resource hungry and cannot be computed on the end-user device, so they are sent to the cloud for processing. But the cloud may not be near the edge device, and hence the time required to execute such applications is higher. This paper presents a brief introduction to offloading in IoT-fog networks and outlines various aspects of and problems in this area. We also perform an analytical study to calculate the values of quality-of-service (QoS) parameters for an optimal mapping.
© 2023 Owner/Author.

Item EFraS: Emulated framework to develop and analyze dynamic Virtual Network Embedding strategies over SDN infrastructure (Elsevier B.V., 2024) Keerthan Kumar, K.K.; Tomar, S.; Addya, S.K.; Satpathy, A.; Koolagudi, S.G.
The integration of Software-Defined Networking (SDN) into Network Virtualization (NV) significantly enhances network management, isolation, and troubleshooting capabilities. However, it brings forth the intricate challenge of allocating Substrate Network (SN) resources to various Virtual Network Requests (VNRs), a process known as Virtual Network Embedding (VNE). It encompasses two intractable sub-problems: embedding Virtual Machines (VMs) and embedding Virtual Links (VLs). While the research community has focused on formulating embedding strategies, there has been less emphasis on practical implementation at laboratory scale, which is crucial for the comprehensive design, development, testing, and validation of policies for large-scale systems. Conducting tests using commercial providers presents challenges due to the scale of the problem and the associated costs. Moreover, current simulators lack accuracy in representing the complexities of communication patterns and resource allocation, and they lack support for SDN-specific features. These limitations result in inefficient implementations and reduced adaptability, hindering seamless integration with commercial cloud providers. To address this gap, this work introduces EFraS (Emulated Framework for Dynamic VNE Strategies over SDN). The goal is to aid developers and researchers in iterating, testing, and evaluating VNE solutions seamlessly, leveraging a modular design and customized reconfigurability. EFraS offers various functionalities, including the generation of real-world SN topologies and VNRs. Additionally, it integrates a diverse set of evaluation metrics to streamline the testing and validation process.
EFraS leverages Mininet, the Ryu controller, and OpenFlow switches to closely emulate real-time setups. Moreover, we integrate EFraS with various state-of-the-art VNE schemes, ensuring effective validation of embedding algorithms. © 2024 Elsevier B.V.

Item Enhancing Security in Smart Contract Wallets: An OTP Based 2-Factor Authentication Approach (Association for Computing Machinery, Inc, 2025) Kalash; Ghosh, B.C.; Addya, S.K.
As cryptocurrencies have gained widespread popularity, the security and handling of crypto-assets have become increasingly crucial. Numerous attacks targeting both users and blockchain platforms have led to substantial financial losses. This paper proposes a system for 2-factor authentication (2FA) for smart contract wallets, providing users with a flexible, secure, and customizable way of managing their crypto assets. The proposed methodology utilizes cryptographic hash functions and hash chains to generate One-Time Passwords (OTPs) for authentication, ensuring protection against unauthorized access. The 2FA setup involves a client interacting with a smart contract along with an authenticator and a software wallet, using the wallet's public-private key pair as the first factor and OTPs as the second factor. This is done through a two-stage protocol for bootstrapping and operation execution, and it offers a level of security similar to traditional authentication schemes such as HOTP. Using a novel pre-commitment scheme, we also defend users against front-running attacks. We implement the system on a public blockchain to evaluate the practicality and effectiveness of the 2FA model, and we open-source our implementation for the Ethereum platform. Furthermore, we analyse the costs incurred in terms of gas consumption, space requirements, and payload. In addition, we suggest future enhancements for shorter OTP lengths and time-based OTPs.
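The hash-chain OTP construction named in the abstract above follows the classic Lamport one-time-password idea: the verifier stores only the tip of a hash chain, and each OTP is the chain element one step earlier. The sketch below is illustrative only (all names and parameters are assumptions, not the paper's actual implementation, which runs inside an Ethereum smart contract):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_chain(seed: bytes, n: int) -> list:
    """Hash chain h_0 = seed, h_i = H(h_{i-1}); h_n is the public commitment."""
    chain = [seed]
    for _ in range(n):
        chain.append(sha256(chain[-1]))
    return chain

class Verifier:
    """Holds only the current commitment (the role the smart contract plays)."""
    def __init__(self, commitment: bytes):
        self.commitment = commitment

    def verify(self, otp: bytes) -> bool:
        # A valid OTP hashes to the stored commitment; it then becomes
        # the new commitment, so each OTP is usable exactly once.
        if sha256(otp) == self.commitment:
            self.commitment = otp
            return True
        return False

# Client side: build a chain of 1000 OTPs and publish only the tip.
chain = build_chain(b"secret-seed", 1000)
verifier = Verifier(chain[-1])

# OTPs are revealed in reverse chain order; replays are rejected.
assert verifier.verify(chain[-2])
assert verifier.verify(chain[-3])
assert not verifier.verify(chain[-3])  # second use of the same OTP fails
```

Note that the verifier never learns future OTPs: inverting the hash to predict the next one is computationally infeasible, which is what makes storing only the commitment safe on a public ledger.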
© 2025 Copyright held by the owner/author(s).

Item ESMA: Towards elevating system happiness in a decentralized serverless edge computing framework (Academic Press Inc., 2024) Datta, S.; Addya, S.K.; Ghosh, S.K.
Due to the rapid growth in the adoption of technologies such as smartphones and the Internet of Things (IoT), edge and serverless computing have gained momentum in today's computing infrastructure. This has led to the production of huge amounts of data and increased network traffic, which, if not managed well, can cause network congestion. To address this and maintain quality of service (QoS), this work develops a novel dispatch (destination selection) algorithm called the Egalitarian Stable Matching Algorithm (ESMA) for faster data processing, while also making the best use of server resources in a decentralized serverless-edge environment. This allows us to effectively utilize the enormous volumes of data being generated. The proposed algorithm achieves lower overall dissatisfaction scores for the entire system; individually, both client happiness and server happiness improve over the baseline. Moreover, total execution time drops by 25.7%, and total network resource consumption is lower than with both the baseline and a random-allocation algorithm. © 2023 Elsevier Inc.

Item FASE: fast deployment for dependent applications in serverless environments (Springer, 2024) Saha, R.; Satpathy, A.; Addya, S.K.
Function-as-a-Service has reduced the user burden by allowing cloud service providers to take over operational activities such as resource allocation, service deployment, auto-scaling, and load balancing, to name a few. Users are only responsible for developing the business logic through event-triggered functions catering to an application.
Although FaaS brings multiple user benefits, a typical challenge in this context is the time incurred in setting up the environment of the containers on which the functions execute, often referred to as the cold-start time, which leads to delayed execution and quality-of-service violations. This paper presents an efficient scheduling strategy, FASE, that uses a finite-sized warm pool to facilitate the instantaneous execution of functions on pre-warmed containers. Test-bed evaluations over AWS Lambda confirm that FASE achieves a 40% reduction in average cold-start time and a 1.29× speedup compared to the baselines. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.
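Several entries above (the SARIMA autoscaler, FASE's warm pool) revolve around the cold-start problem: the first invocation of a function pays a container setup penalty that warm invocations avoid. The toy model below illustrates the warm-pool idea with an LRU-evicted pool of a fixed size; the latency figures, pool size, and class names are illustrative assumptions, not measurements or designs from any of the papers:

```python
from collections import OrderedDict

COLD_START_MS = 800   # illustrative container setup penalty
WARM_EXEC_MS = 50     # illustrative execution time on a warm container

class WarmPool:
    """Finite pool of pre-warmed containers, evicted least-recently-used."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pool = OrderedDict()  # function name -> warm container

    def invoke(self, fn: str) -> int:
        """Return invocation latency in ms; a warm hit skips the cold start."""
        if fn in self.pool:                  # warm start
            self.pool.move_to_end(fn)        # mark as most recently used
            return WARM_EXEC_MS
        if len(self.pool) >= self.capacity:  # pool full: evict the LRU entry
            self.pool.popitem(last=False)
        self.pool[fn] = True                 # container stays warm afterwards
        return COLD_START_MS + WARM_EXEC_MS

pool = WarmPool(capacity=2)
trace = ["thumb", "ocr", "thumb", "resize", "ocr"]
latencies = [pool.invoke(f) for f in trace]
print(latencies)  # only the repeated "thumb" call hits a warm container
```

Even this toy model shows the trade-off the papers study: a larger pool raises the warm-hit rate but wastes idle resources, which is exactly why prediction-based sizing (as in the SARIMA entry) or a finite warm pool with smart scheduling (as in FASE) is attractive.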
