1. Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/1/5
6 results
Search Results
Item Elucidating the challenges for the praxis of fog computing: An aspect-based study (2019) Martin, J.P.; Kandasamy, A.; Chandrasekaran, K.; Joseph, C.T.

The evolutionary advancements in technology have led to the emergence of cloud computing. The Internet of Things paradigm stimulated the extensive use of sensors distributed across the network edges, and cloud datacenters are assigned the responsibility of processing the collected sensor data. Recently, fog computing was conceived as a solution to the resulting strain on the limited network bandwidth. The fog acts as a complementary layer that interplays with the cloud and edge computing layers to process data streams. The fog paradigm, like any distributed paradigm, has its set of inherent challenges; in particular, the fog environment necessitates management platforms that orchestrate the fog entities. Owing to the plenitude of research efforts directed toward these issues in a relatively young field, there is a need to organize the different research works. In this study, we provide a compendious review of the research approaches in the domain, with special emphasis on approaches for orchestration, and propose a multilevel taxonomy to classify the existing research. The study also highlights the application realms of fog computing and delineates the open research challenges in the domain.
© 2019 John Wiley & Sons, Ltd.

Item Exploring the support for high performance applications in the container runtime environment (2018) Martin, J.P.; Kandasamy, A.; Chandrasekaran, K.

Cloud computing is the driving power behind the current technological era, and virtualization is rightly referred to as its backbone. The impact of virtualization on high performance computing (HPC) has been widely studied; the overhead of the virtualization layer was one of the reasons that hindered its adoption in HPC environments. Recent developments in virtualization, especially OS-level container-based virtualization, provide a lightweight virtualization layer that promises lower overhead. Containers incur less performance overhead than virtual machines, a major concern for both data-intensive and compute-intensive applications. Several industries have adopted container technologies such as Docker. While Docker is widely used, it has certain pitfalls, such as security issues. The recently introduced CoreOS Rkt container technology overcomes these shortcomings of Docker, yet there has been little research on how well the Rkt environment is suited to high performance applications. The differences in the stack of Rkt containers suggest better support for such applications. High performance applications comprise compute-intensive and data-intensive workloads; the High Performance Linpack (HPL) and Graph500 benchmarks are commonly used to represent them, respectively. In this work, we explore the feasibility of the interoperable Rkt container for high performance applications by running the HPL and Graph500 benchmarks and compare its performance with commonly used container technologies such as LXC and Docker.
© 2018, The Author(s).
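As a rough illustration of the kind of host-versus-container comparison the study above describes (and not the authors' actual experimental setup), the following Python sketch times a benchmark command on the host and inside a Docker container. The benchmark binary and image name are placeholders; the paper itself runs HPL and Graph500 under Rkt, Docker, and LXC.

```python
# Illustrative sketch only: times one benchmark run on the host and inside a
# Docker container. The binary and image names are placeholders, not the
# actual HPL/Graph500 setup used in the paper.
import subprocess
import time

BENCH_CMD = ["./xhpl"]             # placeholder benchmark binary
DOCKER_IMAGE = "hpl-bench:latest"  # placeholder container image

def timed(cmd):
    """Run a command and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

host_time = timed(BENCH_CMD)
container_time = timed(["docker", "run", "--rm", DOCKER_IMAGE] + BENCH_CMD)

print(f"host:      {host_time:.2f} s")
print(f"container: {container_time:.2f} s")
print(f"relative overhead: {100 * (container_time - host_time) / host_time:.1f}%")
```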
Item Machine Learning Approaches for Resource Allocation in the Cloud: Critical Reflections (2018) Murali, A.; Das, N.N.; Sukumaran, S.S.; Chandrasekaran, K.; Joseph, C.; Martin, J.P.

Resource allocation, the effective and efficient use of a Cloud's resources, is a very challenging problem in cloud environments. Many attempts have been made to automate resource allocation and make it optimal in terms of profit. The best of these methods use Machine Learning, but this comes with a computational overhead, and much research has been devoted to finding more efficient methods. Distributed Neural Networks (DNNs) are expected to make computation over large-scale data faster and easier and are currently a heavily researched area. This paper summarizes the major research works in these fields, proposes a new taxonomy that can serve as a reference for future research in this domain, and identifies areas that need more attention in the foreseeable future.
© 2018 IEEE.

Item Location Privacy Using Data Obfuscation in Fog Computing (2019) Naik, C.; Siddhartha, M.; Martin, J.P.; Chandrasekaran, K.

In the past few decades, smartphones and Global Positioning System (GPS) devices have led to the popularity of Location Based Services. Large multinational companies depend on collecting large volumes of user data to tailor their services. On the other side, users have become increasingly concerned about privacy and would like to hide their whereabouts. The rise in data consumption and the demand for faster networks have also led to the emergence of new concepts such as fog computing, which extends the storage, networking, and computing facilities of the cloud toward the edge of the network, reducing the load on data centers and decreasing latency at the edge devices. Fog computing will support the continued growth of location services, and its adoption calls for more secure and robust algorithms for location privacy. One way to alter the information regarding the location of the user is location obfuscation, which can be done reversibly or irreversibly. In this paper, we address the problem of location privacy and present a solution based on the type of data that has to be preserved (in our case, distance). A mobile application has been designed and developed to test and validate the feasibility of the proposed obfuscation techniques for fog computing environments.
© 2019 IEEE.
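For context on the distance-preserving obfuscation mentioned in the item above: one simple way to hide coordinates while keeping pairwise distances intact is to apply a secret rigid transformation (rotation plus translation). The Python sketch below only illustrates that general idea; it is not necessarily the scheme proposed in the paper.

```python
# Illustrative sketch: obfuscate 2-D locations with a random rigid
# transformation (rotation + translation). Pairwise Euclidean distances are
# preserved, so distance-based services still work on obfuscated data.
# This is a generic example, not the exact scheme from the paper.
import math
import random

def make_obfuscator(seed=None):
    rng = random.Random(seed)
    theta = rng.uniform(0, 2 * math.pi)              # secret rotation angle
    dx, dy = rng.uniform(-1, 1), rng.uniform(-1, 1)  # secret translation
    def obfuscate(x, y):
        xr = x * math.cos(theta) - y * math.sin(theta) + dx
        yr = x * math.sin(theta) + y * math.cos(theta) + dy
        return xr, yr
    return obfuscate

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

obf = make_obfuscator(seed=42)
a, b = (1.0, 2.0), (4.0, 6.0)        # two sample points, true distance 5.0
oa, ob = obf(*a), obf(*b)
print(dist(a, b), dist(oa, ob))      # distances match before and after
```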
Item Fuzzy Reinforcement Learning based Microservice Allocation in Cloud Computing Environments (2019) Joseph, C.T.; Martin, J.P.; Chandrasekaran, K.; Kandasamy, A.

Nowadays the Cloud computing paradigm has become the de facto platform for deploying and managing user applications. Monolithic Cloud applications pose several challenges in terms of scalability and flexibility; hence, Cloud applications are increasingly designed as microservices. Application scheduling and energy efficiency are key concerns in Cloud computing research. Allocating microservice containers to the hosts in a datacenter is an NP-hard problem, and efficient allocation strategies are needed to determine the placement of microservice containers in Cloud datacenters so as to minimize Service Level Agreement (SLA) violations and energy consumption. In this paper, we design a Reinforcement Learning-based Microservice Allocation (RL-MA) approach. The approach is implemented in the ContainerCloudSim simulator and evaluated using the real-world Google cluster trace. Results indicate that the proposed method reduces both SLA violations and energy consumption compared to existing policies.
© 2019 IEEE.

Item Toward efficient autonomic management of clouds: A CDS-based hierarchical approach (2018) Martin, J.P.; Kandasamy, A.; Chandrasekaran, K.

Cloud computing is one of the most sought-after technologies today, and the number of clients opting for the Cloud keeps increasing. This increases the complexity of managing the Cloud computing environment: to serve customer demands, Cloud providers are provisioning ever more resources, and relying on a single managing element to coordinate the entire pool is no longer efficient. We therefore propose a hierarchical approach to autonomic management. The problem we consider is to determine the nodes at which the Autonomic Managers (AMs) should be placed, in order to ease the management process and minimize the cost of communication between the AMs. We propose a graph-theory-based model using a Connected Dominating Set (CDS) to determine an effective placement of AMs in different Data Centers (DCs) and their collaboration with the Global Manager (GM). The approach first constructs dominating sets and then distributes control of the dominatees among the dominators.
© 2018, Springer Nature Singapore Pte Ltd.
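To give a concrete picture of the CDS idea used in the last item, the sketch below builds a connected dominating set of a small undirected graph with a standard greedy heuristic. It illustrates the general technique only; the paper's own construction and its assignment of dominatees to dominators may differ, and the example topology is hypothetical.

```python
# Illustrative sketch: greedy construction of a Connected Dominating Set (CDS)
# for a connected undirected graph given as an adjacency dict. CDS members
# would play the role of dominators (candidate Autonomic Manager hosts);
# the remaining nodes are dominatees. Generic heuristic, not the paper's exact algorithm.
def greedy_cds(adj):
    all_nodes = set(adj)
    start = max(all_nodes, key=lambda n: len(adj[n]))   # highest-degree seed
    cds = {start}
    covered = {start} | set(adj[start])
    while covered != all_nodes:
        # Candidates that keep the CDS connected: neighbours of current CDS nodes.
        frontier = {n for c in cds for n in adj[c]} - cds
        # Pick the candidate that dominates the most still-uncovered nodes.
        best = max(frontier, key=lambda n: len((set(adj[n]) | {n}) - covered))
        cds.add(best)
        covered |= set(adj[best]) | {best}
    return cds

# Small hypothetical data-center connectivity graph.
adj = {
    "a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b", "e"},
    "d": {"b", "f"}, "e": {"c"}, "f": {"d"},
}
print(greedy_cds(adj))   # e.g. {'b', 'c', 'd'}
```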