Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

Search Results

Now showing 1 - 5 of 5
  • Item
    Exploring the support for high performance applications in the container runtime environment
    (Springer Berlin Heidelberg, 2018) Martin, J.P.; Kandasamy, A.; Chandrasekaran, K.
    Cloud computing is the driving force behind the current technological era, and virtualization is rightly referred to as its backbone. The impact of virtualization on high performance computing (HPC) has been studied extensively by researchers; the overhead of the virtualization layer was one of the reasons that hindered its adoption in HPC environments. Recent developments in virtualization, especially OS-level container virtualization, provide a lightweight virtualization layer that promises lower overhead. Containers are advantageous over virtual machines in terms of performance overhead, a major concern for both data-intensive and compute-intensive applications. Currently, several industries have adopted container technologies such as Docker. While Docker is widely used, it has certain pitfalls, such as security issues. The recently introduced CoreOS rkt container technology overcomes these shortcomings of Docker, but there has been little research on how well the rkt environment suits high performance applications. The differences in the stack of rkt containers suggest better support for such applications. High performance applications comprise CPU-intensive and data-intensive workloads; the High Performance Linpack (HPL) library and Graph500 are the commonly used compute-intensive and data-intensive benchmark applications, respectively. In this work, we explore the feasibility of the interoperable rkt container for high performance applications by running the HPL and Graph500 benchmarks and comparing its performance with commonly used container technologies such as LXC and Docker. © 2018, The Author(s).
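The benchmark comparison described above can be sketched as building the launch commands for each runtime under test. This is a minimal illustration, not the authors' setup: the image names, container name, and benchmark binaries are hypothetical placeholders.

```python
# Sketch: constructing benchmark invocations for each container runtime.
# Image/container names and benchmark binaries are illustrative assumptions.
import shlex

RUNTIMES = {
    # docker: run an image, remove the container afterwards
    "docker": "docker run --rm {target} {bench}",
    # rkt: run an (unsigned, for this sketch) image with an explicit entrypoint
    "rkt": "rkt run --insecure-options=image {target} --exec {bench}",
    # lxc: execute the benchmark inside an already-running container
    "lxc": "lxc exec {target} -- {bench}",
}

BENCHMARKS = {
    "hpl": "xhpl",                          # compute-intensive (Linpack)
    "graph500": "graph500_reference_bfs",   # data-intensive (BFS kernel)
}

def build_command(runtime: str, target: str, bench: str) -> list[str]:
    """Return the argv list that launches `bench` on `runtime`."""
    template = RUNTIMES[runtime]
    return shlex.split(template.format(target=target, bench=BENCHMARKS[bench]))

# Example: the Docker invocation for the Linpack benchmark
cmd = build_command("docker", "hpc/hpl:latest", "hpl")
```

Timing each command across the three runtimes (e.g. with `time` or `subprocess`) then yields the overhead comparison the study performs with HPL and Graph500.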
  • Item
    Straddling the crevasse: A review of microservice software architecture foundations and recent advancements
    (John Wiley and Sons Ltd, 2019) Joseph, C.T.; Chandrasekaran, K.
    The microservice architectural style has been gaining wide traction in the software engineering industry. Researchers and practitioners have adopted microservices concepts in several application domains, such as the internet of things, cloud computing, service computing, and healthcare. Applications developed in alignment with microservices principles require an underlying platform with management capabilities to coordinate the different microservice units and ensure that the application functionalities are delivered to the user. A multitude of approaches has been proposed for the various tasks in microservices-based systems. However, since the field is relatively young, there is a need to organize the different research works. In this study, we present a comprehensive review of the research approaches directed toward microservice architectures and propose a multilevel taxonomy to categorize the existing research. The study also discusses the different distributed computing paradigms employing microservices and identifies open research challenges in the domain. © 2019 John Wiley & Sons, Ltd.
  • Item
    Mobility aware autonomic approach for the migration of application modules in fog computing environment
    (Springer Science and Business Media Deutschland GmbH, 2020) Martin, J.P.; Kandasamy, A.; Chandrasekaran, K.
    The fog computing paradigm has emerged as a widespread computing technology to support the execution of internet of things applications. The paradigm introduces a distributed, hierarchical layer of nodes collaboratively working together as the fog layer. User devices connected to fog nodes are often non-stationary, and the location-aware nature of fog computing makes it necessary to provide uninterrupted services to users irrespective of their locations. Migrating user application modules among fog nodes is an efficient solution to this issue. In this paper, an autonomic framework, MAMF, is proposed to migrate containers running user modules while satisfying Quality of Service requirements. The hybrid framework, employing MAPE-loop concepts and a genetic algorithm, addresses container migration in the fog environment while ensuring application delivery deadlines. The approach uses the predicted user location for the next time instant to initiate the migration process. The framework was modelled and evaluated in the iFogSim toolkit, and the re-allocation problem was also formulated mathematically as an Integer Linear Programming problem. Experimental results indicate that the approach offers an improvement in terms of network usage, execution cost and request execution delay over existing approaches. © 2020, Springer-Verlag GmbH Germany, part of Springer Nature.
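The genetic-algorithm step described above can be illustrated in miniature: evolve an assignment of application modules to fog nodes so predicted latencies stay within delivery deadlines. This is a toy sketch, not the MAMF implementation; the latency matrix, deadline, penalty, and GA parameters are all illustrative assumptions.

```python
# Toy elitist GA for module-to-fog-node re-allocation (illustrative values).
import random

random.seed(42)

N_MODULES, N_NODES = 4, 3
# LATENCY[m][n]: predicted latency (ms) if module m runs on fog node n
LATENCY = [[10, 40, 25], [30, 5, 20], [15, 35, 10], [25, 10, 30]]
DEADLINE = 30  # per-module delivery deadline (ms), assumed

def fitness(placement):
    """Lower is better: total latency plus a penalty per missed deadline."""
    total = 0
    for m, n in enumerate(placement):
        total += LATENCY[m][n]
        if LATENCY[m][n] > DEADLINE:
            total += 100  # deadline-violation penalty (assumed weight)
    return total

def evolve(pop_size=20, generations=50):
    pop = [[random.randrange(N_NODES) for _ in range(N_MODULES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]       # elitism: keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_MODULES)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # mutation
                child[random.randrange(N_MODULES)] = random.randrange(N_NODES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

In the paper's setting the fitness would also fold in the cost of the migration itself; here only latency and deadline violations are scored, to keep the sketch short.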
  • Item
    IntMA: Dynamic Interaction-aware resource allocation for containerized microservices in cloud environments
    (Elsevier B.V., 2020) Joseph, C.T.; Chandrasekaran, K.
    The Information Technology sector has undergone tremendous changes due to the emergence and prevalence of cloud computing. Microservice architectures have also been attracting attention from several industries and researchers. Owing to the suitability of microservices for cloud environments, an increasing number of cloud applications are now provided as microservices. However, this transition to microservices brings a wide range of infrastructural orchestration challenges. Though several research works have discussed the engineering of microservice-based applications, there is a clear need for research on handling the operational phases of the microservice components. Microservice application deployment in containerized datacenters must be optimized to enhance overall system performance. In this research work, the deployment of microservice application modules on the cloud infrastructure is first modelled as a Binary Quadratic Programming problem. To reduce the adverse impact of communication latencies on the response time, the interaction pattern between the microservice components is modelled as an undirected, doubly weighted, complete interaction graph. A novel, robust heuristic approach, IntMA, is also proposed for deploying the microservices in an interaction-aware manner with the aid of the interaction information obtained from the interaction graph. The proposed allocation policies are implemented in Kubernetes, and the effectiveness of the approach is evaluated on the Google Cloud Platform using different microservice reference applications. Experimental results indicate that the proposed approach improves the response time and throughput of microservice-based systems. © 2020 Elsevier B.V.
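The interaction-aware idea above can be sketched as a greedy heuristic: sort the edges of the weighted interaction graph and co-locate the most heavily interacting services. This is a simplified illustration, not the IntMA heuristic; the service names, interaction weights, and node capacity are hypothetical.

```python
# Greedy interaction-aware co-location over a weighted interaction graph
# (illustrative services and weights, not from the paper).
from itertools import combinations

SERVICES = ["frontend", "cart", "catalog", "payment"]
# Undirected weighted interaction graph: weight ~ interaction intensity
INTERACTION = {
    ("frontend", "cart"): 120,
    ("frontend", "catalog"): 80,
    ("cart", "payment"): 60,
    ("catalog", "payment"): 5,
}
NODE_CAPACITY = 2  # services per node (assumed)

def weight(a, b):
    return INTERACTION.get((a, b)) or INTERACTION.get((b, a)) or 0

def interaction_aware_placement(services, capacity):
    """Process edges by descending weight; co-locate unassigned endpoints."""
    nodes, assigned = [], {}
    for a, b in sorted(combinations(services, 2), key=lambda e: -weight(*e)):
        if a not in assigned and b not in assigned:
            nodes.append([a, b][:capacity])
            for s in nodes[-1]:
                assigned[s] = len(nodes) - 1
    for s in services:               # any leftover service gets a fresh node
        if s not in assigned:
            nodes.append([s])
            assigned[s] = len(nodes) - 1
    return assigned                  # service -> node index

placement = interaction_aware_placement(SERVICES, NODE_CAPACITY)
```

With these weights, `frontend`/`cart` share one node and `catalog`/`payment` another, so the two heaviest interactions never cross node boundaries; in Kubernetes terms, such pairs would map naturally to pod affinity rules.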
  • Item
    Nature-inspired resource management and dynamic rescheduling of microservices in Cloud datacenters
    (John Wiley and Sons Ltd, 2021) Joseph, C.T.; Chandrasekaran, K.
    Distributed cloud environments now host cloud applications composed of heterogeneous microservices. Cloud service providers strive to provide high quality of service (QoS), and response time is one of the key QoS attributes for microservices. The dynamism of microservice ecosystems necessitates runtime adaptation and microservice rescheduling to avoid performance degradation. Existing works target rescheduling in hypervisor-based systems while ignoring the influence of configuration parameters of container-based microservices. To address these challenges, this article describes a novel microservice rescheduling framework, throttling and interaction-aware anticorrelated rescheduling for microservices, that proactively performs rescheduling activities whilst ensuring timely service responses. Based on periodic monitoring of the performance attributes, the framework schedules container migrations. Considering the exponentially large solution space, a metaheuristic approach based on multiverse optimization is developed to generate a near-optimal mapping of microservices to the datacenter resources. Experimental results indicate that the framework provides superior performance, with a reduction of up to 13.97% in average response time compared with systems with no support for rescheduling. © 2021 John Wiley & Sons Ltd.
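The multiverse-optimization step above can be caricatured as follows: each "universe" is a candidate mapping of microservices to hosts, assignments flow from fitter universes into weaker ones, and wormhole jumps pull universes toward the best solution found so far. This is a heavily simplified discrete sketch under assumed response-time data, not the paper's implementation.

```python
# Simplified multiverse-style metaheuristic for microservice-to-host mapping
# (all response-time estimates and rates are illustrative assumptions).
import random

random.seed(7)

N_SERVICES, N_HOSTS = 5, 3
# RT[s][h]: estimated response time (ms) of service s on host h
RT = [[20, 35, 50], [40, 15, 30], [25, 45, 10], [30, 20, 40], [50, 25, 15]]

def cost(universe):
    return sum(RT[s][h] for s, h in enumerate(universe))

def mvo(pop_size=15, iterations=60, wormhole_rate=0.2):
    pop = [[random.randrange(N_HOSTS) for _ in range(N_SERVICES)]
           for _ in range(pop_size)]
    best = min(pop, key=cost)[:]
    for _ in range(iterations):
        pop.sort(key=cost)
        for i, u in enumerate(pop[1:], start=1):   # keep the fittest intact
            for s in range(N_SERVICES):
                if random.random() < 0.3:          # inherit from a fitter universe
                    u[s] = pop[random.randrange(i)][s]
                if random.random() < wormhole_rate:  # wormhole toward the best
                    u[s] = best[s] if random.random() < 0.5 \
                        else random.randrange(N_HOSTS)
        best = min(pop + [best], key=cost)[:]
    return best, cost(best)

mapping, rt = mvo()
```

In the actual framework the objective would combine monitored response times, interaction, and anticorrelation constraints; the point here is only the population-of-universes search structure.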