Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Kandasamy, A. | -
dc.contributor.advisor | Chandrasekaran, K. | -
dc.contributor.author | Martin, John Paul | -
dc.description.abstract | Recent years have witnessed the rapid adoption of the Internet of Things (IoT) paradigm across business and non-business realms alike. IoT-based systems are usually located multiple network hops away from Cloud datacenters; consequently, relying on Cloud-centric execution imposes a performance penalty on real-time IoT applications. To circumvent this, the Fog computing paradigm emerged as a widespread computing technology to support the execution of IoT applications. Fog computing extends Cloud services to the vicinity of end devices, enabling applications to be executed closer to the data sources. The Fog paradigm thus helps reduce service delivery time and network congestion. Although this new paradigm opens up a set of potential possibilities, it also introduces several additional challenges and complexities arising from its heterogeneity and resource-constrained characteristics. To harness the potential of Fog computing environments, it is imperative to adopt efficient orchestration mechanisms that can manage the resources in the system. Application management is an intrinsic component of resource orchestration systems; it involves identifying suitable options for the initial placement of applications. Placement decisions have a significant impact on overall application performance. Placement schemes for Fog environments must take into consideration the requirements and characteristics of the different entities in the Fog ecosystem, including the Fog nodes, IoT applications and IoT devices. The category of mission-critical IoT applications makes reliable service delivery an essential requirement in Fog environments, so there is a resolute need for placement schemes that ensure it. Accordingly, this research proposes a placement policy that addresses the conflicting criteria of service reliability and monetary cost.
The proposed Cost and Reliability aware Eagle Whale optimizer (CREW) derives placement decisions that maximize reliability and minimize cost. Real-time experiments substantiate that the proposed approach improves the performance of applications executed in Fog computing environments. User mobility is another particularity of Fog computing environments: mobility of the end users may increase the hop distance between the data source and the Fog node. To ensure that this does not adversely impact service delivery, application modules must be migrated across Fog nodes with the objective of maintaining low hop distances. Therefore, this research also presents a Mobility-aware Autonomic Framework (MAMF) to perform migrations in the Fog environment. The developed framework relies on predetermined user locations to make migration decisions. The performance of the approach is evaluated using real-time mobility traces. | en_US
dc.publisher | National Institute of Technology Karnataka, Surathkal | en_US
dc.subject | Department of Mathematical and Computational Sciences | en_US
dc.subject | Fog Computing | en_US
dc.subject | Service placement | en_US
dc.subject | Application module migration | en_US
dc.subject | Mobility support | en_US
dc.subject | Meta-heuristic optimization | en_US
dc.subject | Autonomic Computing | en_US
dc.title | Orchestration Mechanisms for Enabling Distributed Processing in the Fog Computing Environment | en_US
Appears in Collections: 1. Ph.D Theses

Files in This Item:
File | Description | Size | Format
JOHN PAUL MARTIN - 165093 MA16FV01.pdf | | 2.11 MB | Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.