Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

Search Results

Now showing 1 - 10 of 13
  • Item
    Virtual machine migration—a perspective study
    (Springer Verlag, 2018) Joseph, C.; Martin, J.P.; Chandrasekaran, K.; Kandasamy, A.
    Cloud computing has dominated the IT landscape for the past few decades, and one of the most notable technologies sustaining that dominance is virtualization. While virtualization continues to be a boon for Cloud technology, it is not without its pitfalls. One such pitfall arises from the migration of virtual machines: although migration imposes an overhead on the system, an efficient system cannot avoid migrating virtual machines. This work carries out a perspective study of virtual machine migration. The migration techniques proposed in the literature are classified according to the aspects of migration they consider, and the metrics that characterize the performance of a migration technique are also surveyed. © 2018, Springer Nature Singapore Pte Ltd.
  • Item
    Unraveling the challenges for the application of fog computing in different realms: A multifaceted study
    (Springer Verlag, 2019) Martin, J.P.; Kandasamy, A.; Chandrasekaran, K.
    Fog computing is an emerging paradigm that distributes data and computation across intermediate layers between the cloud and the edge. Cloud computing was introduced to support the increasing computing requirements of users; later, it was observed that end users experienced delays when uploading large amounts of data to the cloud for processing, and such a centralized approach did not provide a good user experience. To overcome this limitation, processing capability was incorporated into devices at the edge, giving rise to edge computing. That paradigm suffered in turn because edge devices have limited computing resources and storage, so relying on edge devices alone was not sufficient. Thus, a paradigm was needed that avoided both the delay of uploading to the cloud and the resource constraints of the edge. This is where fog computing came into existence. The paradigm involves establishing fog nodes at different levels between the edge and the cloud; fog nodes can be different entities, such as personal computers (PCs). Fog computing may be applied in different realms, such as vehicular networks and the Internet of Things, and in each realm resource management decisions vary with the environmental conditions. This chapter attempts to classify the various approaches for managing resources in the fog environment based on their application realm, and to identify future research directions. © Springer Nature Singapore Pte Ltd. 2019.
  • Item
    Explicating fog computing key research challenges and solutions
    (CRC Press, 2021) Martin, J.P.; Singh, V.; Chandrasekaran, K.; Kandasamy, A.
    [No abstract available]
  • Item
    Toward efficient autonomic management of clouds: A CDS-based hierarchical approach
    (Springer Verlag, 2018) Martin, J.P.; Kandasamy, A.; Chandrasekaran, K.
    Cloud computing is one of the most sought-after technologies today, and the number of clients opting for the Cloud continues to grow. This drives up the complexity of managing the Cloud computing environment: to serve customer demands, Cloud providers resort to ever larger resource pools, and relying on a single managing element to coordinate the entire pool is no longer efficient. We therefore propose a hierarchical approach to autonomic management. The problem we consider is determining the nodes at which to place the Autonomic Managers (AMs), so as to ease the management process and minimize the cost of communication between the AMs. We propose a graph-theory-based model using a Connected Dominating Set (CDS) that determines an effective placement of AMs in different Data Centers (DCs) and their collaboration with the Global Manager (GM). The approach constructs dominating sets and then distributes control of the dominees among the dominators. © 2018, Springer Nature Singapore Pte Ltd.
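The CDS idea in this abstract can be sketched with a simple greedy heuristic: pick AM nodes so that every host in the data-center graph is either an AM (dominator) or adjacent to one (dominee), while the AMs themselves stay connected. The greedy rule and the toy topology below are illustrative assumptions, not the paper's exact construction.

```python
# Hypothetical sketch: place Autonomic Managers (AMs) on a Connected
# Dominating Set (CDS) of the data-center topology. The greedy heuristic
# grows the CDS from the highest-degree node, always staying connected.

def greedy_cds(adj):
    """Greedy connected dominating set over an adjacency dict."""
    start = max(adj, key=lambda n: len(adj[n]))   # highest-degree seed
    cds = {start}
    covered = {start} | set(adj[start])
    while covered != set(adj):
        # candidates are neighbours of current dominators (keeps CDS connected)
        frontier = {n for d in cds for n in adj[d] if n not in cds}
        # add the candidate that dominates the most uncovered nodes
        best = max(frontier, key=lambda n: len(set(adj[n]) - covered))
        cds.add(best)
        covered |= {best} | set(adj[best])
    return cds

# Toy data-center topology: nodes are hosts, edges are network links.
topology = {
    "A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "E"],
    "D": ["B", "F"], "E": ["C"], "F": ["D"],
}
dominators = greedy_cds(topology)
# Every host is a dominator or adjacent to one; control of the dominees
# would then be distributed among these dominator AMs.
```

On this toy graph the greedy pass selects {B, C, D} as the AM nodes; the remaining hosts A, E, and F are each adjacent to at least one AM.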
  • Item
    Machine Learning Approaches for Resource Allocation in the Cloud: Critical Reflections
    (Institute of Electrical and Electronics Engineers Inc., 2018) Murali, A.; Das, N.N.; Sukumaran, S.S.; Chandrasekaran, K.; Joseph, C.T.; Martin, J.P.
    Resource allocation, the effective and efficient use of a Cloud's resources, is a challenging problem in Cloud environments. Many attempts have been made to automate resource allocation and optimize it for profit. The best of these methods use Machine Learning, but this comes with a computational overhead, and much research in this domain has sought more efficient methods. Distributed Neural Networks (DNNs) promise to make computation over large-scale data faster and easier, and are an actively researched area. This paper summarizes the major research works in these fields. A new taxonomy is proposed that can serve as a reference for future research in this domain. The paper also identifies areas that need more research in the foreseeable future. © 2018 IEEE.
  • Item
    Location Privacy Using Data Obfuscation in Fog Computing
    (Institute of Electrical and Electronics Engineers Inc., 2019) Naik, C.; Sri Siddhartha, M.; Martin, J.P.; Chandrasekaran, K.
    In the past few decades, smartphones and Global Positioning System (GPS) devices have made Location-Based Services popular. Large multinational companies (MNCs) collect substantial location data from users in order to tailor their services; on the other side, privacy concerns have grown among users, who would often prefer to hide their whereabouts. Rising data consumption and the demand for faster networks have also led to the emergence of new paradigms such as Fog computing, which extends the storage, networking, and computing facilities of Cloud computing toward the edge of the network, offloading the server centers and decreasing latency at the edge devices. Fog computing will support the continued growth of location services, and its adoption calls for more secure and robust algorithms for location privacy. One way to alter information about a user's location is location obfuscation, which can be done reversibly or irreversibly. In this paper, we address the problem of location privacy and present a solution based on the type of data that has to be preserved (in our case, distance). A mobile application has been designed and developed to test and validate the feasibility of the proposed obfuscation techniques for Fog computing environments. © 2019 IEEE.
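A reversible, distance-preserving obfuscation of the kind this abstract alludes to can be illustrated with a rigid transformation: rotating and translating coordinates hides the true positions while keeping all pairwise distances intact, so a fog service can still answer proximity queries. The specific transformation and parameters below are illustrative assumptions, not the paper's exact scheme.

```python
# Illustrative sketch (not the authors' exact method): a rotation plus
# translation is reversible and preserves pairwise distances, so proximity
# queries on obfuscated points give the same answers as on true points.
import math

def obfuscate(points, theta, dx, dy):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]

def deobfuscate(points, theta, dx, dy):
    # inverse transform: undo the translation, then apply the inverse rotation
    c, s = math.cos(theta), math.sin(theta)
    return [(c * (x - dx) + s * (y - dy), -s * (x - dx) + c * (y - dy))
            for x, y in points]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

pts = [(12.97, 77.59), (13.01, 77.55)]             # true coordinates (secret)
obf = obfuscate(pts, theta=1.1, dx=40.0, dy=-7.5)  # shared with the service
assert abs(dist(*pts) - dist(*obf)) < 1e-9         # distance is preserved
```

Because the transform is invertible given the secret parameters (theta, dx, dy), the owner can recover true locations, while the fog service sees only the obfuscated coordinates.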
  • Item
    Fuzzy Reinforcement Learning based Microservice Allocation in Cloud Computing Environments
    (Institute of Electrical and Electronics Engineers Inc., 2019) Joseph, C.T.; Martin, J.P.; Chandrasekaran, K.; Kandasamy, A.
    Nowadays, the Cloud computing paradigm has become the de facto platform for deploying and managing user applications. Monolithic Cloud applications pose several challenges in terms of scalability and flexibility; hence, Cloud applications are increasingly designed as microservices. Application scheduling and energy efficiency are key concerns in Cloud computing research. Allocating microservice containers to hosts in a datacenter is an NP-hard problem, so efficient allocation strategies are needed to place microservice containers in Cloud datacenters while minimizing Service Level Agreement (SLA) violations and energy consumption. In this paper, we design a Reinforcement Learning-based Microservice Allocation (RL-MA) approach. The approach is implemented in the ContainerCloudSim simulator, and the evaluation is conducted using the real-world Google cluster trace. Results indicate that the proposed method reduces both SLA violations and energy consumption compared to existing policies. © 2019 IEEE.
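The trade-off the abstract describes (penalize SLA violations, reward energy-saving consolidation) can be sketched as a tiny bandit-style reinforcement-learning loop. This is a toy single-state illustration under assumed parameters; the paper's RL-MA approach runs in ContainerCloudSim against the Google cluster trace and is not reproduced here.

```python
# Toy bandit-style sketch of RL-driven container placement: the agent learns
# a Q-value per host, rewarding consolidation (an energy proxy) and heavily
# penalising overload (an SLA-violation proxy). All numbers are assumptions.
import random

random.seed(7)
HOSTS, CAPACITY = [0, 1, 2], 10
Q = {h: 0.0 for h in HOSTS}          # one Q-value per host (single-state table)
alpha, eps = 0.2, 0.1                # learning rate, exploration rate

for episode in range(500):
    loads = {h: 0 for h in HOSTS}    # fresh datacenter each episode
    for _ in range(8):               # place 8 containers per episode
        # epsilon-greedy choice of target host
        h = random.choice(HOSTS) if random.random() < eps else max(Q, key=Q.get)
        active_before = loads[h] > 0
        loads[h] += random.randint(1, 2)          # container's CPU demand
        if loads[h] > CAPACITY:
            r = -10.0                             # overload: SLA penalty
        else:
            r = 1.0 if active_before else 0.2     # consolidation bonus
        Q[h] += alpha * (r - Q[h])   # bandit-style temporal-difference update
```

A full treatment would condition the Q-values on the datacenter state (per-host loads) rather than using a single-state table, which is what makes the real allocation problem hard.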
  • Item
    HTmRPL++: A Trust-Aware RPL Routing Protocol for Fog Enabled Internet of Things
    (Institute of Electrical and Electronics Engineers Inc., 2020) Subramanian, N.; Mitra, S.; Martin, J.P.; Chandrasekaran, K.
    With the proliferation of Fog computing, computation moves to edge devices rather than following a purely centralized approach. In a Fog computing network, the topology is dynamic: new nodes join and leave. One of the major issues in Fog computing is trust, the level of assurance that an object will behave in a satisfactory manner. The Routing Protocol for Low-Power and Lossy Networks (RPL) is used for routing in IoT networks, but it provides meager protection against routing and other attacks. The resource-constrained nature of Fog nodes prevents the use of heavyweight cryptographic algorithms for secured communication, so a lightweight mechanism is essential to impart security in Fog-IoT networks. Trust analysis provides a behavior-based analysis of entities in the system, with the power to predict future behavior. In this paper, a lightweight recommendation-based trust mechanism is proposed to impart security to RPL. © 2020 IEEE.
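A recommendation-based trust score of the kind this abstract proposes typically blends a node's own observations of a neighbour with recommendations from other nodes. The weighting scheme and values below are illustrative assumptions, not the authors' exact mechanism.

```python
# Hypothetical sketch of recommendation-based trust for RPL parent selection:
# blend direct observations with neighbours' recommendations, then let a node
# prefer the candidate parent with the highest blended trust. The weight
# alpha and all scores are assumed values for illustration.

def trust_score(direct, recommendations, alpha=0.7):
    """Weighted blend of direct trust and the mean recommended trust."""
    rec = sum(recommendations) / len(recommendations) if recommendations else direct
    return alpha * direct + (1 - alpha) * rec

# Candidate parents with (direct trust, recommendations from other nodes).
candidates = {
    "parent_a": trust_score(0.9, [0.80, 0.85]),
    "parent_b": trust_score(0.6, [0.95, 0.90]),
}
best = max(candidates, key=candidates.get)   # routing prefers "parent_a"
```

Weighting direct experience above recommendations (alpha > 0.5) limits the impact of bad-mouthing or ballot-stuffing by dishonest recommenders, which is the usual motivation for such blends in lightweight trust systems.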
  • Item
    Machine Learning Powered Autoscaling for Blockchain-Based Fog Environments
    (Springer Science and Business Media Deutschland GmbH, 2022) Martin, J.P.; Joseph, C.T.; Chandrasekaran, K.; Kandasamy, A.
    Internet-of-Things devices generate huge amounts of data that need to be processed. Fog computing provides a decentralized infrastructure for processing these volumes of data, offering a low-latency, location-aware alternative to conventional Cloud computing by placing processing nodes closer to the end devices. Coordination among end devices can become cumbersome and complex as the number of IoT devices grows. Major challenges in executing services in the fog environment include resource provisioning for user services, service placement among the fog devices, and scaling fog devices based on the current network load. Being decentralized, fog computing is also vulnerable to external threats such as data theft. This work presents a blockchain-based fog framework that makes autoscaling decisions using machine learning techniques. Evaluation is performed through a series of experiments that show how services are handled by the fog framework and how the autoscaling decisions are made. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
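The autoscaling decision this abstract describes can be sketched as forecast-then-scale: predict the next load sample from a sliding window, then compare the forecast against per-node capacity. The least-squares predictor, capacity figure, and thresholds below are assumptions; the chapter does not prescribe this exact model.

```python
# Hedged sketch of ML-driven autoscaling: fit a line y = a*t + b to a
# sliding window of load samples, forecast one step ahead, and derive the
# number of fog nodes needed. All parameters are illustrative assumptions.

def predict_next(window):
    """One-step-ahead forecast via ordinary least squares on the window."""
    n = len(window)
    t_mean = (n - 1) / 2
    y_mean = sum(window) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(window))
    den = sum((t - t_mean) ** 2 for t in range(n))
    a = num / den                          # slope of the load trend
    return a * n + (y_mean - a * t_mean)   # extrapolate to time step n

def autoscale(window, nodes, per_node_capacity=100):
    forecast = predict_next(window)
    needed = max(1, -(-int(forecast) // per_node_capacity))  # ceil division
    return needed - nodes   # > 0: scale out, < 0: scale in, 0: hold

# Rising load: the forecast exceeds two nodes' capacity, so scale out by one.
delta = autoscale([120, 150, 180, 210], nodes=2)
```

In the blockchain-based framework the abstract describes, such a decision would additionally be recorded on-chain so that scaling actions are tamper-evident across the decentralized fog nodes.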
  • Item
    Exploring the support for high performance applications in the container runtime environment
    (Springer Berlin Heidelberg, 2018) Martin, J.P.; Kandasamy, A.; Chandrasekaran, K.
    Cloud computing is the driving power behind the current technological era, and virtualization is rightly referred to as its backbone. The impact of virtualization on high performance computing (HPC) has been much reviewed by researchers; the overhead of the virtualization layer was one of the reasons hindering its adoption in HPC environments. Recent developments in virtualization, especially OS-container-based virtualization, provide a lightweight virtualization layer that promises lower overhead. Containers are advantageous over virtual machines in terms of performance overhead, a major concern for both data-intensive and compute-intensive applications. Several industries have adopted container technologies such as Docker. While Docker is widely used, it has certain pitfalls, such as security issues. The more recently introduced CoreOS Rkt container technology overcomes these shortcomings of Docker, but there has been little research on how well the Rkt environment suits high performance applications; the differences in the Rkt container stack suggest better support for them. High performance applications comprise CPU-intensive and data-intensive applications, for which the High Performance Linpack (HPL) library and Graph500 are the commonly used compute-intensive and data-intensive benchmarks, respectively. In this work, we explore the feasibility of the interoperable Rkt container for high performance applications by running the HPL and Graph500 benchmarks and comparing its performance with commonly used container technologies such as LXC and Docker. © 2018, The Author(s).