Orchestration Mechanisms for Enabling Distributed Processing In the Fog Computing Environment
Date
2021
Authors
Martin, John Paul.
Publisher
National Institute of Technology Karnataka, Surathkal
Abstract
Recent years have witnessed the rapid adoption of the Internet of Things (IoT) paradigm
across business and non-business realms alike. IoT-based systems are usually located
multiple network hops away from Cloud datacenters; consequently, relying on
Cloud-centric execution imposes a performance penalty on real-time IoT applications.
To circumvent this, the Fog computing paradigm emerged as a widely adopted
technology to support the execution of IoT applications. Fog computing
extends Cloud services to the vicinity of end devices, thereby enabling applications
to be executed closer to the data sources. The Fog paradigm thus helps reduce
service delivery time and network congestion. Although this new paradigm
opens up a range of possibilities, it also introduces additional challenges and
complexities arising from its heterogeneity and resource-constrained characteristics. To
harness the potential of Fog computing environments, it is imperative to adopt
efficient orchestration mechanisms that can manage the resources in the system.
Application management is an intrinsic component of resource orchestration systems.
It involves identifying suitable options for the initial placement of applications.
Placement decisions have a significant impact on the overall performance
of an application. Placement schemes for Fog environments must take into consideration
the requirements and characteristics of the different entities in the Fog ecosystem,
including Fog nodes, IoT applications and IoT devices. Mission-critical IoT
applications make reliable service delivery an essential requirement in Fog
environments, so there is a pressing need for placement schemes that ensure reliable
service delivery. Accordingly, this research proposes a placement policy that addresses
the conflicting criteria of service reliability and monetary cost. The proposed Cost
and Reliability aware Eagle Whale optimizer (CREW) derives placement decisions that
maximize reliability while minimizing cost. Real-time experiments substantiate that
the proposed approach succeeds in improving the performance of applications executed
in Fog computing environments.
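The abstract does not give CREW's actual formulation. As a rough illustration of how the two conflicting criteria can be traded off, the sketch below scalarizes service reliability and monetary cost into a single fitness value and searches candidate placements exhaustively (a stand-in for the meta-heuristic search; the node names, reliability values, costs and weights are all hypothetical):

```python
import itertools

def placement_score(placement, reliability, cost, w_rel=0.7, w_cost=0.3):
    """Scalarized fitness for one candidate placement (a tuple of host
    nodes, one per application module): reward the reliability of the
    module chain (product, assuming series composition) and penalize
    the normalized monetary cost."""
    total_rel = 1.0
    total_cost = 0.0
    for node in placement:
        total_rel *= reliability[node]
        total_cost += cost[node]
    # Normalize cost by the worst case: the most expensive hosts.
    max_cost = sum(sorted(cost.values(), reverse=True)[:len(placement)])
    return w_rel * total_rel - w_cost * (total_cost / max_cost)

def best_placement(modules, nodes, reliability, cost):
    """Exhaustive search over all node assignments -- feasible only for
    tiny instances, and used here purely as a placeholder for a
    meta-heuristic optimizer."""
    return max(itertools.product(nodes, repeat=len(modules)),
               key=lambda p: placement_score(p, reliability, cost))

# Hypothetical example: two modules, three candidate hosts.
reliability = {"fog1": 0.99, "fog2": 0.90, "cloud": 0.999}
cost = {"fog1": 3.0, "fog2": 1.0, "cloud": 5.0}
best = best_placement(["m1", "m2"], list(cost), reliability, cost)
```

With these illustrative weights the cheap but less reliable node wins; shifting weight toward `w_rel` moves the optimum toward the more reliable (and costlier) hosts, which is exactly the tension a cost- and reliability-aware placement policy must resolve.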
User mobility is another distinctive characteristic of Fog computing environments. Mobility of
end users may increase the hop distance between the data source and
the Fog node hosting the application. To ensure that this does not adversely impact service delivery,
application modules must be migrated across Fog nodes with the objective of maintaining
low hop distances. Therefore, this research also presents a Mobility-aware
Autonomic Framework (MAMF) to perform migrations in the Fog environment. The
framework relies on predetermined user locations to make migration decisions.
The performance of the approach is evaluated using real-time mobility traces.
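MAMF's internals are not described in this abstract. A minimal sketch of the underlying idea, migrating a module when the hop distance from the user's current access point to the module's host exceeds a threshold, might look as follows (the topology, node names and threshold are assumptions for illustration):

```python
from collections import deque

def hop_distance(topology, src, dst):
    """Breadth-first search hop count between two nodes in an
    undirected Fog topology given as an adjacency-list dict."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in topology.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return float("inf")  # unreachable

def migration_target(topology, fog_nodes, current_host, user_node, threshold=2):
    """If the hop distance from the user's access point to the module's
    current host is within the threshold, keep the module in place;
    otherwise migrate it to the Fog node closest to the user."""
    if hop_distance(topology, user_node, current_host) <= threshold:
        return current_host  # no migration needed
    return min(fog_nodes, key=lambda n: hop_distance(topology, user_node, n))

# Hypothetical line topology a - b - c - d - e, Fog nodes at a, c, e.
topology = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
            "d": ["c", "e"], "e": ["d"]}
```

A framework driving this check periodically, using predicted rather than observed user positions, could trigger migrations proactively before the hop distance degrades service delivery.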
Keywords
Department of Mathematical and Computational Sciences, Fog Computing, Service placement, Application module migration, Reliability, Mobility support, Meta-heuristic optimization, Autonomic Computing