Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results
17 results
Item Automating the Selection of Container Orchestrators for Service Deployment (Institute of Electrical and Electronics Engineers Inc., 2022) Chaurasia, P.; Nath, S.B.; Addya, S.K.; Ghosh, S.K.
With the ubiquitous adoption of cloud computing, services are commonly deployed as virtual machines (VMs) on cloud servers. However, VM-based deployment often consumes considerable resources. To reduce the resource footprint of service deployment, container-based lightweight virtualization is used instead. Managing containers for deployment is a challenging problem, as container managers must themselves consume few resources while still meeting client requirements. To choose the right container manager, we propose an architecture driven by application and user needs, with a machine-learning-based decision engine at its core. We used Docker containers for experimentation. The experimental results show that the proposed system can select the appropriate container manager between a Docker Compose-based manager and Kubernetes. © 2022 IEEE.

Item LCS: Alleviating Total Cold Start Latency in Serverless Applications with LRU Warm Container Approach (Association for Computing Machinery, 2023) Sethi, B.; Addya, S.K.; Ghosh, S.K.
Serverless computing offers "Function-as-a-Service" (FaaS), which structures an application as independent granular components called functions. FaaS has become a widespread standard that facilitates application development in cloud-based environments. Clients can focus solely on developing applications in a serverless ecosystem, passing the burden of resource governance to the service providers. However, FaaS platforms suffer performance degradation originating from cold starts of executables, i.e., serverless functions. A cold start is the delay incurred in provisioning a runtime container that processes the functions.
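The warm-container idea behind such mitigations can be illustrated with a minimal sketch (not the paper's implementation; the pool size, function names, and container naming are hypothetical):

```python
from collections import OrderedDict

class WarmContainerPool:
    """Keep finished containers warm; reuse one instead of cold-starting."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.pool = OrderedDict()  # function name -> warm container id, oldest first

    def invoke(self, fn):
        if fn in self.pool:
            # Warm start: reuse the container kept alive for this function.
            container = self.pool.pop(fn)
            self.pool[fn] = container          # mark as most recently used
            return f"warm:{container}"
        # Cold start: provision a new runtime container.
        if len(self.pool) >= self.capacity:
            self.pool.popitem(last=False)     # evict the least recently used
        self.pool[fn] = f"c-{fn}"
        return f"cold:c-{fn}"

pool = WarmContainerPool(capacity=2)
print(pool.invoke("resize"))   # cold start on first call
print(pool.invoke("resize"))   # warm start on reuse
```

The second invocation of the same function avoids the provisioning delay entirely, which is the latency the cold-start literature targets.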
Each serverless platform handles the cold start problem with its own solution, and approaches to mitigating cold starts have recently received attention from many researchers. This paper presents an extensive solution to the cold start problem. We propose a scheduling approach that reduces cold start occurrences by keeping containers alive for a longer period, using the Least recently used warm Container Selection (LCS) approach on top of affinity-based scheduling. We evaluated the approach and compared the results with a most recently used (MRU) container selection approach. The proposed LCS approach outperforms the MRU approach by approximately 48%. © 2023 ACM.

Item Analysis of Selected Load Balancing Algorithms in Containerized Cloud Environment for Microservices (Institute of Electrical and Electronics Engineers Inc., 2024) Saxena, D.; Bhowmik, B.
Microservice architecture has become a widely accepted solution to the challenges of monolithic architecture, particularly scalability, deployment, and flexibility. A vital attribute of the microservice architecture is its capability to handle load balancing at scale. The load balancer collaborates with a scaler to distribute the workload efficiently across multiple instances. Various studies in the literature employ load-balancing algorithms for efficient microservice load balancing, but they either overlook cloud-based microservice applications or focus solely on virtual machines, neglecting containers. This paper addresses these limitations by comparatively assessing selected load-balancing algorithms. The three most widely used algorithms, random, round-robin, and least connection, are studied on a microservice application. Extensive experiments are conducted on Amazon Web Services (AWS) Elastic Container Service (ECS) in a containerized cloud setup, where each service resides in a cluster and traffic is generated with Locust.
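The three algorithms under comparison can each be sketched in a few lines (an illustrative sketch, not the paper's ECS setup; the instance names and connection counts are made up):

```python
import random
from itertools import cycle

instances = ["i-1", "i-2", "i-3"]
active = {i: 0 for i in instances}      # open connections per instance

def pick_random():
    """Random: choose any instance uniformly."""
    return random.choice(instances)

rr = cycle(instances)
def pick_round_robin():
    """Round-robin: rotate through instances in a fixed order."""
    return next(rr)

def pick_least_connection():
    """Least connection: send traffic to the least loaded instance."""
    return min(active, key=active.get)

# Dispatch one request with the least-connection policy.
target = pick_least_connection()
active[target] += 1
```

Random and round-robin ignore instance load, while least connection adapts to it, which is why the three policies yield different throughput and response-time profiles under uneven traffic.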
Experimental results show throughput and response times in the ranges of 6.2-288.7 and 312.2-3375.8 ms, respectively. © 2024 IEEE.

Item Influence of die angle on containerless extrusion of commercially pure titanium tubes (2007) Srinivasan, K.; Venugopal, P.
Containerless tube extrusion of commercially pure titanium has been investigated at room temperature and a strain rate of 0.07 s⁻¹, using 20 conical dies covering five different strains and four different die angles, with MoS2 lubricant. Theoretical punch pressures, calculated using appropriate equations from a slab analysis of the process, are compared with experimentally determined punch pressures. It is found that there exists an optimum die angle at which the punch pressure is least for a given strain.

Item Exploring the support for high performance applications in the container runtime environment (Springer Berlin Heidelberg, 2018) Martin, J.P.; Kandasamy, A.; Chandrasekaran, K.
Cloud computing is the driving force behind the current technological era, and virtualization is rightly referred to as its backbone. The impact of virtualization on high performance computing (HPC) has been much reviewed by researchers; the overhead of the virtualization layer is one reason that hindered its adoption in HPC environments. Recent developments in virtualization, especially OS-level container-based virtualization, provide a lightweight virtualization layer that promises lower overhead. Containers hold an advantage over virtual machines in performance overhead, a major concern for both data-intensive and compute-intensive applications. Several industries have now adopted container technologies such as Docker. While Docker is widely used, it has certain pitfalls, such as security issues. The recently introduced CoreOS Rkt container technology overcomes these shortcomings of Docker.
There has not been much research on how well the Rkt environment suits high performance applications, although the differences in the Rkt container stack suggest better support for them. High performance applications comprise CPU-intensive and data-intensive workloads; the High Performance Linpack (HPL) library and Graph500 are the commonly used compute-intensive and data-intensive benchmark applications, respectively. In this work, we explore the feasibility of the interoperable Rkt container for high performance applications by running the HPL and Graph500 benchmarks, and compare its performance with commonly used container technologies such as LXC and Docker. © 2018, The Author(s).

Item Straddling the crevasse: A review of microservice software architecture foundations and recent advancements (John Wiley and Sons Ltd, 2019) Joseph, C.T.; Chandrasekaran, K.
The microservice architecture style has been gaining wide impetus in the software engineering industry. Researchers and practitioners have adopted microservice concepts in several application domains, such as the Internet of Things, cloud computing, service computing, and healthcare. Applications developed in alignment with microservice principles require an underlying platform with management capabilities to coordinate the different microservice units and ensure that the application's functionality is delivered to the user. A multitude of approaches has been proposed for the various tasks in microservice-based systems. However, since the field is relatively young, there is a need to organize the different research works. In this study, we present a comprehensive review of research approaches directed toward microservice architectures and propose a multilevel taxonomy to categorize the existing research.
The study also discusses the different distributed computing paradigms employing microservices and identifies the open research challenges in the domain. © 2019 John Wiley & Sons, Ltd.

Item Mobility aware autonomic approach for the migration of application modules in fog computing environment (Springer Science and Business Media Deutschland GmbH, 2020) Martin, J.P.; Kandasamy, A.; Chandrasekaran, K.
The fog computing paradigm has emerged as a widespread computing technology for supporting the execution of Internet of Things applications. The paradigm introduces a distributed, hierarchical layer of nodes that work together collaboratively as the fog layer. User devices connected to fog nodes are often non-stationary, and the location-aware nature of fog computing makes it necessary to provide uninterrupted services to users irrespective of their locations. Migrating user application modules among fog nodes is an efficient way to tackle this issue. In this paper, an autonomic framework, MAMF, is proposed to migrate containers running user modules while satisfying Quality of Service requirements. The hybrid framework, employing MAPE loop concepts and a Genetic Algorithm, addresses container migration in the fog environment while ensuring application delivery deadlines. The approach uses a predicted value of the user's location at the next time instant to initiate the migration process. The framework was modelled and evaluated in the iFogSim toolkit, and the re-allocation problem was also formulated as an Integer Linear Programming problem. Experimental results indicate that the approach improves network usage, execution cost, and request execution delay over existing approaches.
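The mobility-driven control loop described in this abstract can be sketched as follows (a hypothetical skeleton, not MAMF itself; the node positions, the squared-distance cost, and the naive linear location prediction are all invented for illustration):

```python
# Fog nodes with (x, y) positions; network cost modelled as squared distance.
FOG_NODES = {"f1": (0, 0), "f2": (5, 0), "f3": (10, 0)}

def predict_location(history):
    """Analyze: naive linear extrapolation of the user's next position."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def plan_migration(current_node, predicted):
    """Plan: pick the fog node nearest the predicted location, or None."""
    def cost(node):
        nx, ny = FOG_NODES[node]
        return (nx - predicted[0]) ** 2 + (ny - predicted[1]) ** 2
    best = min(FOG_NODES, key=cost)
    return best if best != current_node else None

history = [(0, 0), (3, 0)]          # Monitor: observed user positions
nxt = predict_location(history)     # predicted next position
target = plan_migration("f1", nxt)  # Execute: migrate container if target set
print(target)
```

A real framework would replace the prediction and cost functions with learned mobility models and QoS-aware objectives (the paper uses a Genetic Algorithm), but the monitor-analyze-plan-execute shape stays the same.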
© 2020, Springer-Verlag GmbH Germany, part of Springer Nature.

Item IntMA: Dynamic Interaction-aware resource allocation for containerized microservices in cloud environments (Elsevier B.V., 2020) Joseph, C.T.; Chandrasekaran, K.
The Information Technology sector has undergone tremendous changes with the emergence and prevalence of cloud computing, and microservice architectures have been attracting attention from several industries and researchers. Owing to the suitability of microservices for cloud environments, an increasing number of cloud applications are now delivered as microservices. This transition, however, brings a wide range of infrastructure orchestration challenges. Although several research works have discussed the engineering of microservice-based applications, there remains a need for research on the operational phases of microservice components. Microservice application deployment in containerized datacenters must be optimized to enhance overall system performance. In this work, the deployment of microservice application modules on cloud infrastructure is first modelled as a Binary Quadratic Programming problem. To reduce the adverse impact of communication latencies on response time, the interaction pattern between microservice components is modelled as an undirected, doubly weighted, complete interaction graph. A novel, robust heuristic, IntMA, is then proposed to deploy microservices in an interaction-aware manner using the information in the interaction graph. The proposed allocation policies are implemented in Kubernetes, and the effectiveness of the approach is evaluated on the Google Cloud Platform using different microservice reference applications. Experimental results indicate that the proposed approach improves the response time and throughput of microservice-based systems.
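A doubly weighted interaction graph of the kind this abstract describes can be represented quite directly (a sketch only; the service names, the two edge weights, and the rate-times-payload affinity are invented, not IntMA's actual model):

```python
import itertools

# Undirected, doubly weighted, complete interaction graph over four
# hypothetical microservices: each edge carries two weights, here
# (requests per second, average payload in KB). All values are made up.
services = ["gateway", "auth", "orders", "db"]
weights = {
    frozenset({"gateway", "auth"}):   (120, 1.2),
    frozenset({"gateway", "orders"}): (80, 4.0),
    frozenset({"gateway", "db"}):     (5, 0.5),
    frozenset({"auth", "orders"}):    (30, 0.8),
    frozenset({"auth", "db"}):        (60, 2.5),
    frozenset({"orders", "db"}):      (90, 8.0),
}

def affinity(a, b):
    """Collapse the two edge weights into one score (rate * payload)."""
    rate, kb = weights[frozenset({a, b})]
    return rate * kb

# Interaction-aware heuristic sketch: co-locate the chattiest pair first,
# so their traffic never crosses a node boundary.
pairs = itertools.combinations(services, 2)
best_pair = max(pairs, key=lambda p: affinity(*p))
print(best_pair)
```

Placing the highest-affinity pair on the same node removes the largest single contribution of network latency to response time, which is the intuition behind interaction-aware allocation.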
© 2020 Elsevier B.V.

Item Adopting elitism-based Genetic Algorithm for minimizing multi-objective problems of IoT service placement in fog computing environment (Academic Press, 2021) Natesha, B.V.; Guddeti, R.M.R.
Fog computing is an emerging computation technology for handling and processing data from IoT devices. Devices such as routers, smart gateways, and micro-data centers serve as fog nodes that host and serve IoT applications. The primary challenge in fog computing, however, is finding suitable nodes on which to deploy and run IoT application services, as these devices are geographically distributed and have limited computational resources. In this paper, we design a two-level resource provisioning fog framework using Docker and containers, and formulate the service placement problem in the fog computing environment as a multi-objective optimization problem that minimizes service time, cost, and energy consumption, thereby ensuring the QoS of IoT applications. We solve this multi-objective problem using an Elitism-based Genetic Algorithm (EGA). The proposed approach is evaluated on a fog computing testbed built with Docker and containers on 1.4 GHz 64-bit quad-core processor devices. The experimental results demonstrate that the proposed method outperforms other state-of-the-art service placement strategies in terms of service cost, energy consumption, and service time. © 2021 Elsevier Ltd

Item Physical model studies on damage and stability analysis of breakwaters armoured with geotextile sand containers (Elsevier Ltd, 2021) Elias, T.; Shirlal, K.G.; E.v, K.
Harnessing the advantages of geotextile sand containers (GSCs), numerous submerged breakwaters and shoreline protection structures have been constructed worldwide. However, an emerged breakwater structure with geotextile armour units capable of replacing conventional structures is rarely discussed.
A 1:30 scale physical experiment was chosen as a preliminary investigation of the feasibility of using GSCs as breakwater armour units. The structural design was evolved from a comprehensive literature survey. The paper focuses on the stability parameters and damage characteristics of the proposed structure. Four different configurations were subjected to waves conforming to Mangaluru's wave parameters. The effect of armour unit size and sand fill ratio on structural stability was analysed: increasing the sand fill ratio from 80% to 100% raised the structural stability by up to 14%, and increasing the bag size improved stability by up to 8%. Experiments revealed that the best performing configuration could withstand wave heights of up to 2.7 m. Stability curves for all tested configurations are discussed and can serve as an effective guideline for designing GSC breakwaters. © 2020 Elsevier Ltd
