2. Theses and Dissertations
Permanent URI for this community: https://idr.nitk.ac.in/handle/1/10
Item: Parallel Metaheuristic Approaches to Solve Combinatorial Optimization Problems (National Institute of Technology Karnataka, Surathkal, 2021)
Authors: Yelmewad, Pramod Hanmantrao; Talawar, Basavaraj

The time complexity of many optimization problems is exponential or factorial. Finding an optimal solution for such problems is of great significance for scientific and engineering applications. Metaheuristic approaches are widely used for solving optimization problems because they provide satisfactory solutions in a reasonable amount of time. A metaheuristic is a guiding strategy layered over an underlying heuristic for a specific optimization problem; it helps determine an acceptable solution by constraining the exploration of the space of feasible solutions. However, metaheuristics consume large amounts of CPU time on larger instances. Parallel computation reduces overall execution time by executing independent tasks simultaneously, enabling faster convergence on large instances of optimization problems. Designers implementing parallel strategies face several challenges: degradation of cost quality on large problem instances, convergence to near-optimal solutions for only a subset of input instances, and the large search space that must be explored to reach a satisfactory solution. In this thesis, efficient parallel metaheuristic models are developed that resolve some of these issues while arriving at satisfactory solutions in a reasonable amount of time. The proposed parallel metaheuristics are applied to three combinatorial optimization problems: the traveling salesman problem, the minimum latency problem, and the vehicle routing problem.

The Traveling Salesman Problem (TSP) is an NP-hard combinatorial optimization problem. Metaheuristic methods produce satisfactory solutions by limiting search-space exploration, but CPU implementations are too time-consuming for large input instances. A GPU-based Parallel Iterative Hill Climbing (PIHC) approach is presented for solving large TSPLIB instances in a reasonable time, together with multiple GPU thread mapping strategies for large-scale instances. Improved cost quality is demonstrated on symmetric TSPLIB instances with up to 85,900 cities. The PIHC GPU implementation gives a speedup of up to 193 over its sequential counterpart and up to 979.96 over a state-of-the-art GPU-based TSP solver, with a cost-quality error rate of 0.72% in the best case and 8.06% in the worst case. In addition, two GPU-based parallel strategies are developed for the ant colony algorithm to solve larger instances than existing approaches.

The Minimum Latency Problem (MLP) is an NP-hard combinatorial optimization problem. Metaheuristics use perturbation and randomization to arrive at a satisfactory solution under time constraints. The proposed work uses a Deterministic Local Search Heuristic (DLSH) to identify a satisfactory solution without tuning metaheuristic parameters, together with a move evaluation procedure for the swap neighborhood that computes a move in constant time. A GPU-based Parallel Deterministic Local Search Heuristic (PDLSH) is proposed to mitigate the execution time spent in the solution improvement phase; it parallelizes that phase and solves MLP for larger instances than the state of the art.
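Neither the PIHC nor the (P)DLSH pseudocode is reproduced in this abstract. As a minimal sequential sketch of the iterative hill-climbing template that these GPU solvers parallelize, assuming a 2-opt neighborhood on a symmetric distance matrix (all names are illustrative):

```python
import random

def tour_cost(tour, dist):
    """Total cost of a closed tour under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_gain(tour, dist, i, j):
    """Cost change from reversing tour[i+1..j] (a 2-opt move), evaluated in O(1)."""
    a, b = tour[i], tour[(i + 1) % len(tour)]
    c, d = tour[j], tour[(j + 1) % len(tour)]
    return (dist[a][c] + dist[b][d]) - (dist[a][b] + dist[c][d])

def iterative_hill_climbing(dist, restarts=10, seed=0):
    """Repeated hill climbing from random tours; keeps the best local optimum."""
    rng = random.Random(seed)
    n = len(dist)
    best_tour, best_cost = None, float("inf")
    for _ in range(restarts):
        tour = list(range(n))
        rng.shuffle(tour)
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):          # PIHC evaluates these candidate
                for j in range(i + 2, n):   # moves concurrently on the GPU
                    if two_opt_gain(tour, dist, i, j) < 0:
                        tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                        improved = True
        cost = tour_cost(tour, dist)
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost
```

The GPU thread mapping strategies amount to replacing the two nested loops with a parallel evaluation of all candidate moves per iteration.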
The DLSH and PDLSH implementations are tested on the TRP and TSPLIB standard instances. DLSH reaches new best solutions for five TSPLIB instances, namely eil51, berlin52, pr107, rat195, and pr226. The proposed PDLSH achieves a speedup of up to 179.75 over its sequential counterpart for instances of 10 to 11,849 nodes.

The Vehicle Routing Problem (VRP) is an NP-hard goods-transportation scheduling problem with vehicle capacity and transportation cost constraints. This work presents GPU-based parallel strategies for the Local Search Heuristic (LSH) algorithm to solve large-scale Capacitated Vehicle Routing Problem (CVRP) instances, employing a combination of five improvement heuristics to improve the constructed feasible solution. Since a large amount of CPU time is spent in the feasible-solution improvement phase, two GPU-based parallel strategies, namely route-level and customer-level parallel designs, are developed to reduce its execution time. The proposed parallel LSH is tested on large-scale instances of up to 30,000 customers; the customer-level parallel design offers a speedup of up to 147.19 over the corresponding sequential version.

In summary, the proposed parallel IHC solves TSPLIB instances of up to 85,900 cities with a speedup of up to 193 and local-solution error rates in the range 0.72% to 8.06%. The limitation of the existing GPU-based MLP solver is overcome by the proposed GPU-based parallel strategy, which solves instances above 1,000 nodes; PDLSH significantly reduces the overall execution time of MLP, achieving a speedup of 179 over its sequential counterpart for instances of up to 11,849 nodes. For CVRP, the route-level and customer-level parallel strategies reduce the time spent in the improvement phase; the customer-level design achieves a speedup of up to 147 for instances with up to 30,000 nodes.

Item: Cloud Service Selection and Workflow Scheduling Using P System (National Institute of Technology Karnataka, Surathkal, 2021)
Authors: Raghavan, Santhanam; Chandrasekaran, K.

Cloud Computing is a decade-old technology that has changed the landscape of the internet-based business model. The technology emerged largely unheralded a decade ago and has been growing since; its expansion has brought several inherently complex problems. Among the issues being researched, service selection in the cloud is one of the prime issues receiving attention. Service selection is the process of selecting (ranking) services from a pool of available cloud services, often based on multiple Quality of Service (QoS) attributes. This work is divided into two major components. The first part addresses the problem of cloud service selection. The study proposes inherently parallel, robust models for service selection in the cloud based on a natural computing model called membrane computing. Membrane computing, realised using P Systems, is an inherently parallel model inspired by the structure and interaction of living cells. There are several variants of P Systems; here the Enzymatic Numerical P System (ENPS) is used, based on its suitability to the problem being solved. Multiple approaches are proposed and the results analysed.
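The ENPS membrane models themselves are too involved to excerpt here. As a baseline illustration of the underlying multi-attribute ranking problem, a sketch assuming min-max normalized QoS attributes combined by a weighted sum (service names, attributes and weights are hypothetical):

```python
def rank_services(services, weights, benefit):
    """Rank cloud services by a weighted sum of normalized QoS attributes.
    services: {name: {attr: value}}; benefit[attr] is True if larger is better."""
    attrs = list(weights)
    lo = {a: min(s[a] for s in services.values()) for a in attrs}
    hi = {a: max(s[a] for s in services.values()) for a in attrs}

    def norm(a, v):
        if hi[a] == lo[a]:
            return 1.0
        x = (v - lo[a]) / (hi[a] - lo[a])
        return x if benefit[a] else 1.0 - x  # invert cost-type attributes

    scores = {
        name: sum(weights[a] * norm(a, qos[a]) for a in attrs)
        for name, qos in services.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: three services scored on availability (benefit) and price (cost).
services = {
    "svcA": {"availability": 0.999, "price": 0.12},
    "svcB": {"availability": 0.990, "price": 0.05},
    "svcC": {"availability": 0.995, "price": 0.08},
}
print(rank_services(services,
                    weights={"availability": 0.6, "price": 0.4},
                    benefit={"availability": True, "price": False}))
```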
Additionally, two new software tools required for ENPS execution are proposed. The second part involves designing and implementing an algorithm for workflow scheduling in the cloud. A workflow is a group of tasks that collectively accomplish a single piece of work. Cloud workflows consist of tasks to be mapped to Virtual Machines (VMs) that are part of the cloud; assigning a limited number of VMs to the tasks so as to optimize some quality factor is referred to as workflow scheduling in the cloud. This study aims to minimise makespan, the total time taken for the workflow to execute (a small computation sketch follows this entry). The ENPS model is used to obtain the schedule sequence, from which the makespan is calculated and compared with other standard methods.
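As promised above, a minimal computation of the makespan objective for a given task-to-VM mapping of a DAG workflow; this illustrates only the objective the ENPS scheduler minimizes, not the scheduler itself (all names are hypothetical):

```python
def makespan(tasks, deps, vm_of, runtime):
    """Makespan of a schedule: tasks on the same VM run serially, and a task
    starts only after all its predecessors finish.
    tasks: topologically sorted task ids; deps: {task: [predecessors]};
    vm_of: {task: vm}; runtime: {task: execution time on its VM}."""
    finish = {}    # finish time per task
    vm_free = {}   # time each VM becomes free
    for t in tasks:
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        start = max(ready, vm_free.get(vm_of[t], 0.0))
        finish[t] = start + runtime[t]
        vm_free[vm_of[t]] = finish[t]
    return max(finish.values())

# Tiny diamond workflow: t1 -> {t2, t3} -> t4, placed on two VMs.
tasks = ["t1", "t2", "t3", "t4"]
deps = {"t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}
vm_of = {"t1": "vm0", "t2": "vm0", "t3": "vm1", "t4": "vm0"}
runtime = {"t1": 2.0, "t2": 3.0, "t3": 4.0, "t4": 1.0}
print(makespan(tasks, deps, vm_of, runtime))  # 7.0
```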
Item: Hardware-based Acceleration of Network-on-Chip Simulation using FPGAs (National Institute of Technology Karnataka, Surathkal, 2021)
Authors: M, Prabhu Prasad B.; Talawar, Basavaraj

Replacing conventional bus-based architectures, the Network-on-Chip (NoC) has become a viable on-chip communication framework in many-core processors, Chip Multi-Processors (CMPs), and Multi-Processor System-on-Chips (MPSoCs). NoCs have also become an integral part of heterogeneous systems with application-specific accelerators for databases, graph processing, and deep neural networks; in these systems, the NoC is responsible for interconnecting the various components. More cores are being incorporated into state-of-the-art homogeneous and heterogeneous multi-core processors to achieve high performance and better power efficiency. Likewise, the number of processing cores, input/output peripherals, and memory components integrated into heterogeneous systems keeps growing to achieve high performance in target applications. As the number of interconnected components increases, the performance of the target application becomes highly dependent on the performance of the NoC. Hence, there is a need to model and evaluate large NoC designs quickly and accurately, as near-future multi-core architectures target thousands of cores owing to advances in CMOS technology. NoC modeling helps in understanding the impact of various design parameters on the overall system and its performance characteristics.

A crucial hurdle in the design and evaluation of large-scale NoCs is the lack of rapid modeling methodologies that deliver a high level of accuracy. Analytical models compromise accuracy to produce results quickly, so designers frequently employ software simulators for NoC design space exploration; these provide better accuracy than analytical modeling but become too slow when simulating a large-scale NoC with a huge number of nodes. To address the issue of simulation speed, a Field Programmable Gate Array (FPGA) based NoC simulation acceleration framework is proposed in this thesis. The fully parameterized framework, called YaNoC, supports design space exploration of various NoC topologies over a rich set of router microarchitectural parameters. To simulate larger topologies, the hard blocks of the FPGA, such as Block RAMs (BRAMs) and DSP blocks, are employed to map NoC router components such as FIFO buffers and the crossbar, respectively.

Further, a lightweight NoC router architecture is proposed to reduce area utilization and improve network performance. The thesis's initial work employs profiling to analyze the performance of the Booksim2.0 NoC software simulator under various design parameters and memory configurations. Cache design parameters such as cache size, block size, and associativity are considered while simulating NoC topologies in Booksim2.0 to observe the effect of cache configuration. The hotspots of the simulator are identified, and software optimizations are applied to improve its performance: vectorization and thread parallelization, implemented with the OpenMP programming model, reduce its execution time. Due to high synchronization cost, the gain in simulation speed is not significant; higher simulation speed can be achieved only by sacrificing simulation accuracy to mitigate the synchronization complexity.

FPGA-based simulators are a promising approach for enhancing simulation speed. YaNoC, the proposed FPGA-based NoC simulation acceleration framework, supports design space exploration of standard and custom NoC topologies over a full set of NoC router microarchitectural parameters, and allows custom routing algorithms and various traffic patterns. Results show that YaNoC consumes fewer hardware resources and is faster than other FPGA-based NoC simulation acceleration platforms. Most state-of-the-art FPGA-based simulators use only soft logic for modeling NoCs, leaving the hard blocks unutilized; FPGA soft-logic resources then become the limiting factor when simulating a large NoC topology. Multiple FPGAs with off-chip memory can overcome this limitation, but the resulting system becomes more complex and slower, reducing overall performance. Instead of a multi-FPGA setup, the hard blocks of a single FPGA are utilized efficiently to map the NoC router components: the functionality of the router's buffers and crossbar switch is embedded in the BRAMs and in the wide multiplexers of the DSP48E1 slices. A substantial decrease in Configurable Logic Block (CLB) utilization of NoC topologies on the FPGA is observed compared to other state-of-the-art works.

A lightweight, high-performance NoC architecture is suitable for heterogeneous systems, reducing area and improving overall system performance. A low-latency router with a look-ahead bypass, called LBNoC, is proposed. Techniques such as a single-cycle router pipeline bypass, an adaptive routing module, parallel virtual-channel and switch allocation, and a combined flow-control mechanism using virtual cut-through and wormhole switching are employed in designing the LBNoC router. The input buffer modules of the NoC router are mapped onto FPGA BRAM hard blocks to utilize resources efficiently.
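For contrast with cycle-accurate simulation, the kind of analytical estimate the thesis refers to can be written in a few lines. A sketch of zero-load packet latency in a k x k mesh, assuming uniform random traffic, a fixed per-hop router and link delay, and illustrative parameter values:

```python
def zero_load_latency(k, router_cycles=3, link_cycles=1, flits_per_packet=5):
    """Zero-load latency (cycles) of a k x k mesh under uniform random
    traffic with minimal routing: average hop count times per-hop delay,
    plus serialization of the packet body."""
    avg_hops = 2 * (k * k - 1) / (3 * k)  # mean Manhattan distance, uniform pairs
    per_hop = router_cycles + link_cycles
    return avg_hops * per_hop + (flits_per_packet - 1)

for k in (4, 8, 16):
    print(k, round(zero_load_latency(k), 1))
```

Such closed forms are fast but ignore contention, which is why the thesis turns to FPGA-accelerated simulation for accuracy.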
Item: Microservice Orchestration Strategies for Containerized Cloud Environments (National Institute of Technology Karnataka, Surathkal, 2021)
Authors: Joseph, Christina Terese; Chandrasekaran, K.

The explosion in the popularity of the Internet, paralleled by the impetuous evolution of computing and storage technologies, has brought about a revolutionary shift in the way computational resources are provisioned. The Cloud computing paradigm facilitates the lease of computational resources as services on a pay-per-use basis. Cloud developers have rapidly embraced the Microservice Architecture to accelerate the development and deployment of Cloud applications. However, the dynamism, agility and distributed characteristics of microservices pose significant challenges for resource orchestration in microservice-based Cloud environments. Effectively utilizing the distributed resources of the Cloud to obtain performance gains is an issue of paramount importance; hence, this work focuses on the orchestration challenges of microservice-based Cloud environments. To achieve the desired level of scalability and elasticity, microservice-based Cloud applications are typically packaged in containers, so microservice orchestration strategies for Cloud environments must effectively manage container clusters to automate processes such as resource allocation, autoscaling and load balancing.

In terms of system performance, a key concern is the initial placement of microservice applications: placing them without considering the interactions among the constituent microservices incurs a performance penalty. Accordingly, an interaction-aware placement strategy, called Interaction-aware Microservice Allocation (IntMA), is devised in this research; it preserves Quality of Service and maintains resource efficiency. The interaction pattern is modeled using a doubly weighted interaction graph, which is then used to assign incoming microservice applications to appropriate nodes in the Cloud datacenter. Experiments on the Google Cloud Platform substantiated that the proposed approach attains better objective values than existing placement policies.

The dynamism of microservice-based Cloud environments makes it essential to revisit initial placement decisions and perform rescheduling, striving to resolve performance degradations caused by workload fluctuations. Existing rescheduling strategies, tailored for hypervisor-based virtualization environments, do not consider features specific to containers. Therefore, this research also explores the impact of container configuration parameters on microservice application performance; the experiments revealed that larger values of container CPU throttling lead to higher response times. To circumvent this, a Throttling and Interaction-aware Anticorrelated Rescheduling Framework (TIARM) for microservices is proposed. Experimental results elucidate the efficacy of the proposed approach in enhancing the performance of containerized Cloud environments.
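The IntMA algorithm is not reproduced in the abstract. As a sketch of the core idea, a greedy placement that favors co-locating heavily interacting microservices on nodes with remaining capacity, assuming an interaction graph with a single aggregate weight per edge (the thesis uses a doubly weighted graph; all names are illustrative):

```python
def interaction_aware_placement(interactions, capacity, demand):
    """Assign microservices in decreasing order of total interaction weight,
    preferring the node that already hosts their heaviest partners and still
    has capacity. interactions: {(a, b): weight}; capacity: {node: cpu};
    demand: {svc: cpu}. Assumes total capacity suffices."""
    weight = {}
    for (a, b), w in interactions.items():
        weight[a] = weight.get(a, 0) + w
        weight[b] = weight.get(b, 0) + w
    placement, free = {}, dict(capacity)
    for svc in sorted(demand, key=lambda s: -weight.get(s, 0)):
        def affinity(node):
            # Total interaction weight between svc and services already on node.
            return sum(w for (a, b), w in interactions.items()
                       if (a == svc and placement.get(b) == node)
                       or (b == svc and placement.get(a) == node))
        candidates = [n for n in free if free[n] >= demand[svc]]
        node = max(candidates, key=affinity)
        placement[svc] = node
        free[node] -= demand[svc]
    return placement

interactions = {("ui", "cart"): 9.0, ("cart", "db"): 7.0, ("ui", "auth"): 2.0}
capacity = {"n0": 2.0, "n1": 2.0}
demand = {"ui": 1.0, "cart": 1.0, "db": 1.0, "auth": 1.0}
print(interaction_aware_placement(interactions, capacity, demand))
```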
Item: Feature-Oriented Model-Driven Development of Energy-Aware Self-Adaptive Software (National Institute of Technology Karnataka, Surathkal, 2021)
Authors: C, Marimuthu; Chandrasekaran, K.

Smartphone applications are equipped with energy-hungry resources such as the display, GPS, and GPU. Mishandling these resources and their associated APIs can result in abnormal battery drain. In recent years, researchers have adopted self-adaptive strategies that use context information to extend battery life. However, existing solutions focus on the development and testing phases of software development; handling energy-awareness and self-adaptive behavior directly at the development phase increases development effort. There is therefore a need to consider these requirements in the early phases of the software development life cycle. Thus, in this research work, the concepts of feature modeling, domain-specific modeling languages, and code generation have been adopted to model and develop energy-aware self-adaptive software. Location-based applications are selected as the application domain to prove the efficacy of the ideas presented, with a self-adaptive system as the system domain and Android as the operating domain.

The first objective is to empirically analyze and organize developers' existing knowledge about energy-saving solutions for location-based applications. The second objective is to aid the domain analyst with an energy-aware modeling framework that extends the popular feature-oriented domain analysis framework. The third objective is to develop a domain-specific modeling tool (eSAP) for the energy-aware modeling framework. The fourth objective is to design and develop a tool named eGEN, comprising a textual domain-specific modeling language and an automatic code generator; eGEN helps domain analysts and developers specify energy-related requirements and generates battery-aware source code that can be used in existing location-based Android applications. The efficacy of the energy-aware modeling framework and the developed tools is validated qualitatively using software engineering case studies. The results show that eSAP and eGEN help domain analysts and developers reduce the development effort of introducing energy-awareness and self-adaptivity in the early phases of software development.
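eGEN's DSL and the Android code it generates are not shown in the abstract. As a language-neutral sketch of the kind of battery-aware adaptation rule such generated code encodes (thresholds, intervals and names are hypothetical):

```python
def gps_polling_interval(battery_pct, moving):
    """Pick a GPS polling interval (seconds) from battery level and motion
    state: poll less often as the battery drains or when the user is still."""
    if battery_pct > 60:
        base = 5
    elif battery_pct > 30:
        base = 15
    else:
        base = 60
    return base if moving else base * 4

assert gps_polling_interval(80, moving=True) == 5
assert gps_polling_interval(25, moving=False) == 240
```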
Item: Analysis and Design of Secure Visual Secret Sharing Schemes with Enhanced Contrast (National Institute of Technology Karnataka, Surathkal, 2021)
Authors: Mhala, Nikhil Chandrakant; Pais, Alwyn Roshan

The Visual Secret Sharing (VSS) scheme is a cryptographic technique that divides a secret image into multiple shares. These shares are transmitted over a network to the respective participants; to recover the secret image, all participants must stack their shares together at the receiver end. Naor and Shamir (1994a) first proposed a basic VSS scheme for binary images using a threshold scheme, but it generated shares of increased size and hence suffered from the problem of expanded shares. To overcome this, Hou et al. (2013a) proposed the Block-based Progressive Visual Secret Sharing (BPVSS) scheme, which is effective for both gray-scale and color images. Although BPVSS recovers the secret image with better quality, it still has two problems: 1) the restored image obtained by stacking all shares is always binary, and 2) the maximum achievable contrast is 50%. This thesis presents various mechanisms to improve the reconstruction quality and contrast of a secret image transmitted using BPVSS.

The first technique proposed in this thesis is Randomised Visual Secret Sharing (RVSS) (Mhala et al. 2018), an encryption technique that combines block-based progressive visual secret sharing with a Discrete Cosine Transform (DCT) based reversible data embedding technique. The recovery method is based on progressive visual secret sharing, which recovers the secret image block by block. Existing block-based schemes achieve a maximum contrast of 50% for noise-like and meaningful shares; the presented scheme achieves 70-90% for noise-like and 70-80% for meaningful shares. The contrast enhancement is achieved by embedding additional information in the shares using the DCT-based reversible data embedding technique. Experimental results showed that the scheme restores the secret image with better visual quality in terms of human-visual-system based parameters.

Although RVSS recovers secret images with better contrast, it still suffers from blocking artifacts. To further improve reconstruction quality, this thesis presents a novel Super-resolution based Visual Secret Sharing (SRVSS) technique, which uses super-resolution together with data hiding to improve the contrast of secret images. Experimental results showed that SRVSS achieves a contrast of 70-80% for meaningful shares and 99% for noise-like shares, and recovers the secret image free of blocking artifacts.

Nowadays, medical information is shared over communication networks, and a patient's medical information must be communicated securely for Computer Aided Diagnosis (CAD). Most communication networks are prone to attacks from intruders, compromising the security of patient data, so medical images need to be transmitted securely; a VSS scheme can serve this purpose. This thesis applies the super-resolution based VSS scheme to medical images. Experimental results showed that the scheme recovers medical images with better contrast, reconstructing the secret image with a contrast of almost 85-90% and a similarity of almost 77%. Additionally, the performance of the presented system is evaluated using existing CAD systems: images reconstructed by the super-resolution based VSS scheme achieve classification accuracy similar to that of the existing CAD systems.
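The BPVSS, RVSS and SRVSS constructions are too involved to excerpt. As a minimal illustration of share generation and recovery in the secret-sharing family, a two-share XOR scheme over a binary image, which is not the thesis's block-based construction (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_shares(secret_bits):
    """Split a binary image into two noise-like shares: share1 is random,
    share2 = share1 XOR secret, so either share alone reveals nothing."""
    share1 = rng.integers(0, 2, size=secret_bits.shape, dtype=np.uint8)
    share2 = share1 ^ secret_bits
    return share1, share2

def recover(share1, share2):
    """XOR-stacking the two shares reconstructs the secret exactly."""
    return share1 ^ share2

secret = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)
s1, s2 = make_shares(secret)
assert np.array_equal(recover(s1, s2), secret)
```

Physical stacking of printed shares corresponds to OR rather than XOR, which is one source of the contrast loss the thesis works to recover.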
Nowadays, underwater images are used to identify important resources such as objects, minerals, and valuable metals, and with the wide availability of the Internet they too are transmitted over networks. As they contain important information, they must be transmitted securely, and VSS can be used for this. The RVSS scheme recovers a Secret Image (SI) with a Structural Similarity Index (SSIM) of 60-80%, but RVSS is suited to general images, whereas underwater images are more complex. The work presented in this thesis therefore shares underwater images using the super-resolution based VSS scheme. Additionally, blocking artifacts are removed from the reconstructed secret image using a Convolutional Neural Network (CNN) based architecture that uses a residue image as a cue to improve the visual quality of the SI. Experimental results show that the presented VSS scheme reconstructs the SI with an SSIM of almost 86-99%, and hence can be used to transmit complex images over insecure channels.

Item: FPGA based Simulation Acceleration of on-Chip Networks (National Institute of Technology Karnataka, Surathkal, 2021)
Authors: Khyamling; Talawar, Basavaraj

As the number of processing cores in Systems-on-Chip (SoCs) increases, the traditional bus-based interconnect becomes the major bottleneck to achieving the performance required by modern applications; bus-based communication may not provide the bandwidth and latency required by systems with intensive parallel communication. An efficient interconnection architecture is required to achieve high performance and scalability in many-core SoCs. The Network-on-Chip (NoC) architecture has emerged as the most promising interconnection architecture for modern Chip Multiprocessor (CMP) and Multi/Many-Processor System-on-Chip (MPSoC) systems. The components in these systems, the cores, accelerators, memory blocks, and peripherals, are interconnected using one or more NoCs composed of links and routers. The choice of router parameters and NoC topology can have a significant impact on the overall performance of heterogeneous many-core systems.

Evaluation methodologies for NoCs in future computing systems with large numbers of interconnected components rely heavily on analytical models and simulation. Fast modeling of large-scale NoCs through analytical models carries significant inaccuracy, so fast and flexible simulator frameworks that deliver high accuracy are needed for modeling large-scale NoC-based heterogeneous many-core systems. Detailed software simulators used for NoC design space exploration provide better accuracy than analytical modeling, but are slow when simulating large-scale NoCs. This thesis presents the optimization of a software NoC simulator and a Field Programmable Gate Array (FPGA) based NoC simulation acceleration framework to address simulation speed, accuracy, and flexibility.

Initial work in the thesis profiles the Booksim2.0 software simulator, as it is used extensively for the design and evaluation of NoC architectures. Booksim2.0 is profiled under various NoC design parameters and memory configurations to analyze its performance in terms of cache misses, memory usage, and hotspots. Profiling guided focused software optimization of the simulator, which was then parallelized using OpenMP and SIMD constructs to improve overall performance. Going beyond software optimization, an FPGA-based NoC simulation acceleration framework called YaNoC is proposed to explore the impact of microarchitectural parameters on NoC performance. YaNoC supports design space exploration of custom topologies with custom routing algorithms, along with standard minimal routing for conventional NoCs.
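As a concrete example of the standard minimal routing just mentioned, a sketch of dimension-ordered XY routing on a 2D mesh; this is an illustration in Python, not YaNoC's hardware description:

```python
def xy_route(src, dst):
    """Dimension-ordered (XY) minimal routing on a 2D mesh: correct the X
    offset first, then Y; deterministic and deadlock-free on a mesh."""
    (x, y), (dx, dy) = src, dst
    hops = []
    while x != dx:
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

print(xy_route((0, 0), (2, 3)))
# [(1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```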
YaNoC is used to study the NoC architectures of a CMP under various traffic patterns; the results show that YaNoC utilizes fewer FPGA resources and is faster than other state-of-the-art FPGA-based NoC simulation acceleration platforms. The next challenge was to optimize the resources consumed by YaNoC. The FPGA fabric provides hard resources such as Block RAM (BRAM) and DSP48E1 units along with specialized interconnect, yet most state-of-the-art FPGA-based simulators model NoCs using soft logic only, leaving the hard blocks unutilized. The input buffer and crossbar functionality of NoC routers is embedded onto the Xilinx BRAM and DSP48E1 hard blocks, thereby reducing the dependence on soft logic; a pure configurable-logic-block implementation and the hard-block based implementation of the NoC router exhibit identical latency and performance behaviour. Utilizing hard units for the design of NoCs results in a high-performance, low-cost design compared to state-of-the-art frameworks.

Next, an FPGA-based parameterized framework called P-NoC, with configurable topology, router, and traffic modules, is presented for performance evaluation and design space exploration. P-NoC enables the designer to choose from a variety of architectural parameters, input buffers, virtual channels, routing algorithms, traffic patterns, and topology, and also supports a flexible communication model and traffic generation. In the last piece of work, an FPGA-based NoC using a low-latency router with a look-ahead bypass (LBNoC) is proposed. The LBNoC design targets optimized area with improved network performance, employing techniques such as a single-cycle router bypass, an adaptive routing module, parallel Virtual Channel (VC) and switch allocation, and combined virtual cut-through and wormhole switching. The LBNoC architecture consumes fewer hardware resources, reduces average packet latency, and gains speedup compared with state-of-the-art NoC architectures.
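A short worked example of why software-level parallelization of a simulator saturates, which motivates the FPGA route taken above: Amdahl's law extended with a synchronization penalty that grows with thread count (the fractions and overheads are illustrative, not measurements from Booksim2.0):

```python
def speedup(serial_frac, threads, sync_overhead=0.0):
    """Amdahl's law with a synchronization penalty proportional to threads:
    T(n) = serial + parallel/n + overhead*n, with times normalized to T(1)=1."""
    t = serial_frac + (1 - serial_frac) / threads + sync_overhead * threads
    return 1.0 / t

for n in (1, 2, 4, 8, 16):
    print(n, round(speedup(0.2, n, sync_overhead=0.01), 2))
# Speedup peaks near 8 threads and then degrades as synchronization dominates.
```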
Item: Machine Learning based Design Space Exploration of Networks-on-Chip (National Institute of Technology Karnataka, Surathkal, 2021)
Authors: Kumar, Anil; Talawar, Basavaraj

As hundreds to thousands of Processing Elements (PEs) are integrated into Multiprocessor Systems-on-Chip (MPSoCs) and Chip Multiprocessor (CMP) platforms, a scalable and modular interconnection solution is required. The Network-on-Chip (NoC) is an effective solution for communication among on-chip resources in MPSoCs and CMPs. Fast and accurate modelling methodologies enable analysis, development, design space exploration through performance-versus-cost tradeoff studies, and testing of large NoC designs. Unfortunately, though much more accurate than analytical modelling, conventional software simulators are too slow to simulate large-scale NoCs with hundreds to thousands of nodes. This thesis employs Machine Learning (ML) approaches to address the simulation speed problem. An ML framework is proposed to predict performance, power and area for different NoC architectures, giving chip designers an efficient way to analyze NoC parameters; it is modelled using distinct ML regression algorithms to predict NoC performance parameters under different synthetic traffic patterns. Because of the lack of trace data from large-scale NoC-based systems, synthetic workloads are practically the only feasible approach for emulating large-scale NoCs with thousands of nodes. The ML-based NoC simulation framework enables a chip designer to explore and analyze various 2D and 3D NoC architectures with configuration parameters such as virtual channels, buffer depth, injection rate and traffic pattern.

Four frameworks are presented for predicting the design parameters of various NoC architectures. The first, the Learning-Based Framework (LBF-NoC), predicts the performance, power and area parameters of direct (mesh, torus, cmesh) and indirect (fat-tree, flatfly) topologies. LBF-NoC was tested with various regression algorithms: Artificial Neural Networks with identity and relu activation functions; generalized linear regression algorithms (lasso, lasso-lars, larsCV, bayesian-ridge, linear, ridge, elastic-net); and Support Vector Regression (SVR) with linear, Radial Basis Function and polynomial kernels. Among these, SVR gave the least error and was selected for building the framework. The second framework, the Multiprocessing Regression Framework (MRF-NoC), enhances LBF-NoC with a multiprocessing scheme to overcome the issue of simulating a NoC architecture n times for 2D Mesh and 3D Mesh. The third framework, the Ensemble Learning-Based Accelerator (ELBA-NoC), uses the random forest algorithm for worst-case latency analysis and for predicting the design parameters of large-scale architectures, covering five NoC architectures, both 2D (Mesh, Torus, Cmesh) and 3D (Mesh, Torus). The fourth framework, the Knowledgeable Network-on-Chip Accelerator (K-NoC), also built using the random forest algorithm, predicts two types of NoC architectures, one with a fixed delay between the IPs and another with the accurate delay.

The results obtained from the frameworks have been compared with widely used software simulators such as Booksim 2.0 and Orion. LBF-NoC gave an error rate of 6% to 8% for both direct and indirect topologies, with a speedup of 1000 for direct and 5000 for indirect topologies. Using MRF-NoC, all the NoC configurations considered can be simulated in a single run. ELBA-NoC predicted the design parameters of five different architectures with an error rate of 4% to 6% and a minimum speedup of 16000 compared to the cycle-accurate simulator. K-NoC predicted both the fixed-delay and accurate-delay architectures, with a speedup of 12000 and an error rate below 6% in both cases.
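The training pipelines of ELBA-NoC and K-NoC are not listed in the abstract. As a minimal sketch of the random-forest approach, scikit-learn assumed, with synthetic data standing in for the simulator-generated training traces (the feature set and latency formula are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Features: [virtual channels, buffer depth, injection rate]; the label
# stands in for average packet latency from a cycle-accurate simulator.
X = np.column_stack([
    rng.integers(2, 9, 2000),        # virtual channels
    rng.integers(2, 17, 2000),       # buffer depth
    rng.uniform(0.01, 0.5, 2000),    # injection rate (flits/node/cycle)
])
y = 20 + 80 * X[:, 2] ** 2 / (X[:, 0] * X[:, 1]) ** 0.5 + rng.normal(0, 1, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("mean abs % error:", np.mean(np.abs(pred - y_te) / y_te) * 100)
```

Once trained, a prediction takes microseconds where a cycle-accurate run takes minutes to hours, which is where the reported speedups come from.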
Item: Computational Methods for Modeling Multistep Reactions and Parameter Inference in Transcriptional Processes (National Institute of Technology Karnataka, Surathkal, 2020)
Authors: Shetty, Keerthi Srinivas; B, Annappa

A major task in Systems Biology is to conduct accurate mechanistic simulations of multistep reactions. Simulating a biological process from experimental data requires detailed knowledge of its model structure and kinetic parameters. Despite advances in experimental techniques, estimating unknown parameter values from observed data remains a bottleneck for obtaining accurate simulation results. The goal of this work is therefore to develop computationally efficient parameter inference methods for characterizing the transcriptional bursting process, inferring unknown kinetic parameters from single-cell time-series data.

Many biochemical events involve multistep reactions; one of the most important such processes in gene expression is transcription. Models for multistep reactions necessarily need multiple states, and computing model parameters that best agree with experimental data is a challenge. To address this, a novel model reduction strategy is first devised, representing several promoter OFF states by a single state accompanied by a time delay on the burst frequency. This model approximates complex promoter switching behavior with Erlang-distributed ON/OFF times. To explore the combined effects of parameter inference and simulation under this reduction, two inference methods are developed, namely Delay-Bursty MCEM and Clumped-MCEM. These methods are applied to time-series data of the endogenous mouse glutaminase promoter to validate model assumptions and infer the values of kinetic parameters. The simulation results are summarized below:

1. Models with multiple OFF states produce behaviour most consistent with experimental data, and the bursting kinetics are promoter specific.
2. Delay-Bursty MCEM and Clumped-MCEM inference are more efficient for time-series data. Comparison with the state-of-the-art Bursty MCEM2 method shows that both produce similar numerical accuracy at lower cost: Delay-Bursty MCEM reduces computational cost by 37.44% relative to Bursty MCEM2, while Clumped-MCEM reduces it by 57.58% relative to Bursty MCEM2 and by 32.19% relative to Delay-Bursty MCEM.
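The reduced multi-OFF-state model is more elaborate than fits here. As a minimal Gillespie simulation of the classic two-state (telegraph) promoter that the thesis's model generalizes (all rate values are illustrative):

```python
import random

def telegraph_ssa(k_on, k_off, k_tx, k_deg, t_end, seed=0):
    """Gillespie SSA for a two-state promoter: OFF <-> ON, ON -> ON + mRNA,
    mRNA -> 0. Returns the (time, mRNA count) trajectory."""
    rng = random.Random(seed)
    t, on, m, traj = 0.0, False, 0, [(0.0, 0)]
    while t < t_end:
        rates = [k_off if on else k_on,   # promoter switch
                 k_tx if on else 0.0,     # transcription (ON state only)
                 k_deg * m]               # mRNA degradation
        total = sum(rates)                # always > 0: switching is enabled
        t += rng.expovariate(total)       # exponential waiting time
        r = rng.uniform(0, total)         # pick a reaction by its propensity
        if r < rates[0]:
            on = not on
        elif r < rates[0] + rates[1]:
            m += 1
        else:
            m -= 1
        traj.append((t, m))
    return traj

print(telegraph_ssa(0.5, 1.5, 20.0, 1.0, t_end=10.0)[-1])
```

Chaining several OFF states in sequence yields the Erlang-distributed OFF times that the thesis's single-state reduction with a burst-frequency delay approximates.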
Item: False Data Detection in Wireless Sensor Networks (National Institute of Technology Karnataka, Surathkal, 2020)
Authors: Kumar, Alok; Pais, Alwyn R.

En-Route filtering is a method to detect and filter false reports in Wireless Sensor Networks (WSNs). Because the radio capabilities of sensor nodes are very limited, reports have to be forwarded through intermediate nodes to be collected at a central facility; in En-Route filtering, the intermediate nodes check the authenticity of all reports before forwarding them to the next hop. Many En-Route filtering schemes have been proposed in recent times, each using different cryptographic methods to filter false reports from WSNs. However, the majority of these techniques can handle only a limited number of compromised nodes, or require node localization or statically configured routes for sending reports. Furthermore, most En-Route filtering techniques are vulnerable to various Denial of Service (DoS) attacks; although contemporary techniques have evolved over time, the majority remain prone to selective forwarding and report disruption attacks. This research work focuses on the problems and limitations of En-Route filtering, devising new techniques that are resilient to various DoS attacks, with the aim of reducing communication overhead and mitigating the effect of report disruption and selective forwarding attacks in WSNs.

The basic idea of En-Route filtering is the checking of reports by intermediate nodes. This decreases the processing and checking overhead of the sink, and false reports can be removed from the network within a few nodes of their origin, saving energy and bandwidth. In this approach, each report carries Message Authentication Codes (MACs) or signatures; as reports are forwarded over the network, intermediate nodes authenticate these MACs or signatures and drop any report found to be faulty (a minimal sketch of this check appears at the end of this entry). For the creation and verification of MACs, sensor nodes exchange secret keys with other sensor nodes in the network. This research work therefore focuses mainly on proposing new key pre-distribution schemes and then extending them into new En-Route filtering schemes.

In this thesis, secure key pre-distribution mechanisms are studied. The first study improves combinatorial design based key pre-distribution: three such schemes are developed that improve the network's resiliency against compromised sensor nodes without alarmingly increasing the key storage overhead. The second study proposes a new hybrid key pre-distribution scheme that uses both pair-wise keys and combinatorial design based keys, ensuring high resiliency against compromised sensor nodes while maintaining very low key storage overhead compared to existing schemes. The last study extends the proposed key pre-distribution schemes into novel En-Route filtering schemes; the use of combinatorial design based keys provides a deterministic mechanism for verifying forwarded reports, so the filtering efficiency of the proposed schemes is excellent. A novel report endorsement and verification mechanism is also proposed for robust data authentication and availability in the network, providing better tolerance against report disruption and selective forwarding attacks in WSNs. Thorough analysis and simulation results show that the network performance of the proposed key pre-distribution and En-Route filtering schemes is much better than that of existing schemes.
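As referenced above, a minimal sketch of the en-route check itself: an endorsing node attaches a MAC under a pre-distributed key, and a downstream node holding the same key re-computes it and drops forged reports. This uses the standard-library hmac module; the endorsement protocol in the thesis is richer (multiple endorsing MACs, combinatorial-design keys), and the names and payload format here are hypothetical:

```python
import hmac
import hashlib

def endorse(report: bytes, key: bytes) -> bytes:
    """Sensing node attaches a MAC computed with a pre-distributed key."""
    return hmac.new(key, report, hashlib.sha256).digest()

def verify_en_route(report: bytes, mac: bytes, key: bytes) -> bool:
    """Intermediate node re-computes the MAC with the shared key; a False
    result means the report is dropped, filtering false data before the sink."""
    return hmac.compare_digest(mac, endorse(report, key))

key = b"pre-distributed-pairwise-key"
report = b"event:temp=71;region=12"
mac = endorse(report, key)
assert verify_en_route(report, mac, key)                        # legitimate report passes
assert not verify_en_route(b"event:temp=99;region=12", mac, key)  # forgery dropped
```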