2. Theses and Dissertations

Permanent URI for this community: https://idr.nitk.ac.in/handle/1/10

Search Results

Now showing 1 - 5 of 5
  • Item
    Design and Development of An Intelligent System For Medical Diagnosis Based on Multi-Dimensional Analysis
    (National Institute of Technology Karnataka, Surathkal, 2023) T V, Shrivathsa; Rao, Shrikantha S; P, Navin Karanth
    The advancement of healthcare prediction systems has revolutionized the medical field, enabling the prediction and prevention of severe disease, improving patient care, and enhancing healthcare efficiency. This requires proper study of historical data in the related field and thorough analysis. Greater emphasis is laid on the relevance of live data than on repository data available in scholarly databases. Moreover, the causes of a disease may vary geographically owing to distinct living or environmental conditions. At the same time, the ability of a medical practitioner to decipher information from the diagnostic procedure followed is limited by his or her expert knowledge and experience. It is in such situations that a reliable, accurate prediction system based on Artificial Intelligence (AI) serves as an assisting tool to the medical fraternity in conflict resolution. An AI-based diagnostic system helps the medical expert arrive at a remedial solution, since the knowledge base contained in it rests on sound design. The prediction system attempted in the present work consists of two stages. In the first stage, a prediction system was developed for the classification of undifferentiated fever-symptomatic disease. The good results obtained at this stage motivated the development of a full-fledged end-to-end predictive system for the identification and classification of coronary artery disease (CAD) from electrocardiogram (ECG) and treadmill-test electrocardiogram (TMT-ECG, stressed ECG) signals, validated against angiography results.

    Accurate early diagnosis of undifferentiated fever-symptomatic disease is a challenging task necessitating extensive diagnostic tests. The aim of the present study was to apply an AI algorithm using temperature information to predict the major categories of disease among undifferentiated fever-symptomatic cases. Illnesses like tuberculosis, non-tubercular bacterial infection, dengue fever, and non-infectious diseases regularly manifest fever symptoms. The present work uses only the temperature data of the referred patient to predict the nature of the fever-symptomatic disease with a high degree of accuracy, instead of several self-defined parameters recorded over an interval of time. This was an observational study carried out in a tertiary care hospital and validated with the help of experienced physicians. The back-propagation algorithm was used to train the network. A good relation was found between the target data set and the output data set, based purely on the observed 24-hour continuous tympanic temperature of the patients. An accuracy of 99% was achieved with the Artificial Neural Network (ANN) prediction model. Prediction models with different classifiers (logistic regression, decision tree, k-nearest neighbors, linear discriminant analysis, Gaussian naive Bayes, and Support Vector Machine) were experimented with for optimization (a minimal sketch of such a comparison follows). The optimized prediction model deals with shorter time intervals and performs well when combined with additional medical parameters that may be recorded during medical testing. A predictive system with a well-adapted classifier shows strong performance in the identification of fever-symptomatic diseases. The accuracy score and other salient parameters describe the complete picture of the system.
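    The abstract does not give the thesis's implementation; as a hedged illustration only, a comparison of the named classifier families might look like the sketch below, where the feature matrix (200 patients x 24 hourly tympanic readings) and the four disease labels are placeholder assumptions:

```python
# Illustrative sketch, not the thesis code: comparing the classifier
# families named in the abstract on features derived from 24-hour
# tympanic-temperature recordings. All data here are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder data: 200 patients x 24 hourly temperature readings,
# 4 assumed disease categories (TB, bacterial, dengue, non-infectious).
X = rng.normal(37.5, 0.8, size=(200, 24))
y = rng.integers(0, 4, size=200)

classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(),
    "k-nearest neighbors": KNeighborsClassifier(),
    "linear discriminant analysis": LinearDiscriminantAnalysis(),
    "Gaussian naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)  # scale, then classify
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name:30s} mean CV accuracy = {scores.mean():.3f}")
```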
    No other investigation so far has taken temperature as the only parameter in the classification of diseases while achieving an accuracy as high as 99.9%. Based on the success attained here, a more complicated problem related to coronary artery disease is taken up for investigation. Coronary artery disease (CAD) is one of the major cardiovascular diseases, a cardiac condition in which plaque formed in the arteries leads to deaths worldwide. Identification of CAD in the traditional approach needs reports of the ECG, TMT-ECG, pharmacological test, and echocardiogram; confirmation of CAD leads to the next stage, cardiac catheterization. An accurate prediction system that can detect the existence of CAD from an initial test such as an ECG or TMT-ECG report can assist doctors during periodic health monitoring of patients. Visually assessing ECG signals can be challenging and time-consuming, and identification of abnormal ECG morphology, especially in low-amplitude curves, is prone to error.

    Initially, an image processing method was developed and implemented for the extraction of data from ECG and TMT-ECG reports. The 12-lead TMT-ECG report provides cardiac information on abnormality under medication, and this information plays a vital role in automated cardiac analysis. Any small discontinuity in the ECG/TMT-ECG images is patched up by the developed method. The data extraction method involves scanning of ECG and TMT-ECG images, masking, binarization, morphological operations, etc. (an illustrative sketch of such a pipeline is given at the end of this entry). The extracted data are compared with the output of commercial software (IM2GRAPH). In addition to data extraction, a part of the algorithm, based on a hybrid method, is used to identify and classify the important major features, namely P, Q, R, S, T, the PQ segment, QRS complex, QT segment, and ST segment.

    A convolutional neural network model was developed which works on the data extracted from ECG signals (one-dimensional data). The developed Convolutional Neural Network (CNN) architecture deals effectively with single-lead and multi-lead (12-lead) ECG and TMT-ECG data. A highlight of the CNN system is that the entire data set was collected from the clinical lab of a renowned neighboring hospital. The automated computer-assisted system helps in the detection of CAD with an accuracy of 99%. The study also focused on developing a prediction system for CAD based on raw and filtered, single-lead and twelve-lead ECG signal images (two-dimensional), bypassing data extraction. The algorithm's results are compared with transfer-learning algorithms. The novelty of the work lies in the fact that the prediction accuracy of the developed algorithm with single-lead and twelve-lead ECG or TMT-ECG signals (approximately 93.5% for single lead and 94% for twelve leads) is much higher than that of the transfer-learned algorithms, and the developed model exhibited better accuracy with fewer layers than deeper pre-trained algorithms. Further improvement is achieved by developing a novel multi-headed model which deals with one-dimensional and two-dimensional data simultaneously. This hybrid deep multi-headed model is built from two prediction models which work in parallel; their outcomes are concatenated at the end of the model before flowing to the output layer, which helps extract and collect more feature information related to the disease during prediction.
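    As a hedged illustration of the multi-headed idea just described, and not the thesis's actual architecture, the following sketch wires a 1-D convolutional head (extracted signal) and a 2-D convolutional head (raw ECG image) into one Keras model; all shapes and layer sizes are assumptions:

```python
# Hypothetical two-headed network: one head ingests 1-D ECG samples, the
# other 2-D ECG images; features are concatenated before the output layer.
from tensorflow.keras import layers, models

# Head 1: one-dimensional ECG signal (assumed 5000 samples, 1 channel).
sig_in = layers.Input(shape=(5000, 1))
x1 = layers.Conv1D(16, 7, activation="relu")(sig_in)
x1 = layers.MaxPooling1D(4)(x1)
x1 = layers.Conv1D(32, 5, activation="relu")(x1)
x1 = layers.GlobalAveragePooling1D()(x1)

# Head 2: two-dimensional ECG image (assumed 256x256 grayscale).
img_in = layers.Input(shape=(256, 256, 1))
x2 = layers.Conv2D(16, 3, activation="relu")(img_in)
x2 = layers.MaxPooling2D(2)(x2)
x2 = layers.Conv2D(32, 3, activation="relu")(x2)
x2 = layers.GlobalAveragePooling2D()(x2)

# Concatenate both heads before a binary CAD / normal output layer.
merged = layers.concatenate([x1, x2])
out = layers.Dense(1, activation="sigmoid")(merged)

model = models.Model(inputs=[sig_in, img_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```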
    To generalize this methodology, it is further tested over a repository dataset and has shown good performance and acceptable results. For good accessibility, a user-friendly Graphical User Interface (GUI) is developed around the proposed algorithm to support healthcare experts in classifying CAD ECG signals without much effort. The developed prototype can be tested on a still larger dataset before implementation for clinical usage.
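    Returning to the report-digitization step referenced earlier in this entry, the following is a minimal sketch of the scan, mask, binarize, and morphology stages; the file name and thresholding choices are assumptions, not the thesis implementation:

```python
# Illustrative ECG-trace extraction from a scanned report ("report.png"
# is a placeholder path).
import cv2
import numpy as np

img = cv2.imread("report.png", cv2.IMREAD_GRAYSCALE)

# Binarize: Otsu thresholding separates the dark trace from the paper.
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Morphological opening removes gridline speckle; closing patches small
# discontinuities in the trace, as the abstract describes.
kernel = np.ones((3, 3), np.uint8)
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)

# Recover the waveform: for each x-position (time), take the mean row
# index of the remaining foreground pixels as the signal amplitude.
ys, xs = np.nonzero(clean)
signal = np.full(clean.shape[1], np.nan)
for x in np.unique(xs):
    signal[x] = ys[xs == x].mean()
print("extracted", np.count_nonzero(~np.isnan(signal)), "sample columns")
```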
  • Item
    Workload Optimization In Federated Cloud Environment
    (National Institute of Technology Karnataka, Surathkal, 2022) S R, Shishira; A., Kandasamy
    Cloud computing is an essential paradigm for processing, computing, storage, and communication bandwidth. It offers services on an on-demand, pay-per-use basis. Cloud computing comprises numerous resources, including networks, databases for storage, servers, virtual machines, and potential applications. It is a widely used technique for handling large amounts of data, as it provides versatility and functionality for optimization. Customers submit requests for data exchange and storage in an existing cloud environment, and have the considerable advantage of paying only for the services currently required. In a federated cloud environment, one or more cloud service providers share their servers to handle user requests, which promotes cost savings, service utilization, and performance enhancement. Clients benefit because a service quality agreement exists between the two parties. The cloud federation is an evolving technology through which cloud service providers cooperate to provide clients with customized services and deliver the real benefits of cloud computing. The federated service providers achieve better resource usage and Quality of Service (QoS) through cooperation, thereby enhancing their market prospects.

    Workloads are the collection of raw inputs provided to the processing architecture. Efficiency can be assessed based on the successful processing of workloads. Different workloads have distinct feature sets, and recognizing the characteristics of workloads is the key to making optimal configuration decisions and improving system performance. Multiple requests must be handled quickly in the dynamic cloud environment, which contributes to the resource allocation problem. The cloud keeps the workflow active through the proper allocation of resources, virtualization software, or repositories. However, a precise load estimation model is important for efficient management of resources.

    It is hard to manage a large number of workloads in an enterprise cloud system. Workloads are the sum of data for processing that is provided to the hardware resource; their behavior and characteristics play an important role in the efficient processing of resource requests. It is also difficult to predict workloads if they fluctuate excessively. In this thesis, we propose a conceptual framework for efficient prediction and optimization of workloads that can be easily adapted to a system to address this problem. Serving requests in considerably less time leads to a resource allocation issue, and auto-scaling of resources is much easier with prior awareness of the incoming loads. A novel architecture is therefore proposed for better prediction of workloads in the cloud. Predicted workloads can also be configured smoothly for better use without waiving the SLA negotiated between the provider and customers. Three essentials of cloud resource management are considered in the proposed Fitness Function Extraction Model: CPU, disk, and memory storage.

    This thesis proposes a BeeM-NN architecture, incorporating a Workload Neural Network Algorithm and a Novel Bee Mutation Optimization Algorithm into a cloud environment for optimized workload prediction. The proposed model first applies the Fitness Function Extraction Algorithm to retrieve the attribute samples from the Microsoft Azure traces.
    With the Novel Bee Mutation Optimization Algorithm in the cloud, the expected QoS is optimized. The developed model is tested using the federated cloud service providers' workload data traces and analyzed against benchmark methods. The results indicated that the proposed model obtained higher accuracy than existing systems, with optimum efficiency in resource and cost usage.
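    The Fitness Function Extraction and Novel Bee Mutation Optimization algorithms are specific to the thesis and are not reproduced here; as a hedged stand-in, the sketch below shows the general shape of the prediction stage only: a sliding-window neural-network regressor forecasting one resource dimension (CPU) of a synthetic, Azure-like utilization trace:

```python
# Generic sliding-window workload predictor (illustrative assumptions).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic CPU-utilization trace with daily seasonality plus noise.
t = np.arange(2000)
cpu = 50 + 20 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)

# Turn the series into (look-back window -> next value) samples.
W = 12  # window length, an arbitrary choice here
X = np.stack([cpu[i:i + W] for i in range(cpu.size - W)])
y = cpu[W:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False,
                                          test_size=0.2)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500,
                     random_state=1)
model.fit(X_tr, y_tr)
print("R^2 on held-out portion of the trace:", model.score(X_te, y_te))
```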
  • Item
    Machine Learning based Design Space Exploration of Networks-on-Chip
    (National Institute of Technology Karnataka, Surathkal, 2021) Kumar, Anil; Talawar, Basavaraj
    As hundreds to thousands of Processing Elements (PEs) are integrated into Multiprocessor Systems-on-Chip (MPSoCs) and Chip Multiprocessor (CMP) platforms, a scalable and modular interconnection solution is required. The Network-on-Chip (NoC) is an effective solution for communication among the on-chip resources in MPSoCs and CMPs. The availability of fast and accurate modelling methodologies enables analysis, development, design space exploration through performance vs. cost tradeoff studies, and quick testing of large NoC designs. Unfortunately, though much more accurate than analytical modelling, conventional software simulators are too slow to simulate large-scale NoCs with hundreds to thousands of nodes. In this thesis, Machine Learning (ML) approaches are employed to simulate NoCs and address the simulation speed problem. An ML framework is proposed to predict performance, power, and area for different NoC architectures, providing chip designers with an efficient way to analyze NoC parameters. The framework is modelled using distinct ML regression algorithms to predict performance parameters of NoCs under different synthetic traffic patterns. Because of the lack of trace data from large-scale NoC-based systems, the use of synthetic workloads is practically the only feasible approach for emulating large-scale NoCs with thousands of nodes. The ML-based NoC simulation framework enables a chip designer to explore and analyze various 2D and 3D NoC architectures with configuration parameters such as virtual channels, buffer depth, injection rates, and traffic pattern.

    In this thesis, four frameworks are presented which can be used to predict the design parameters of various NoC architectures. The first, the Learning-Based Framework (LBF-NoC), predicts the performance, power, and area parameters of direct (mesh, torus, cmesh) and indirect (fat-tree, flattened butterfly) topologies. LBF-NoC was tested with various regression algorithms: Artificial Neural Networks with identity and ReLU activation functions; generalized linear regression algorithms, i.e., lasso, lasso-lars, LarsCV, Bayesian ridge, linear, ridge, and elastic-net; and Support Vector Regression (SVR) with linear, Radial Basis Function, and polynomial kernels. Among these, SVR gave the least error and was therefore selected for building the framework. In the second framework, LBF-NoC was enhanced with a multiprocessing scheme, named the Multiprocessing Regression Framework (MRF-NoC), to overcome the issue of simulating a NoC architecture 'n' times for 2D Mesh and 3D Mesh. The third framework, the Ensemble Learning-Based Accelerator (ELBA-NoC), is designed for worst-case latency analysis and for predicting the design parameters of large-scale architectures using the random forest algorithm; it predicts results for five different NoC architectures comprising both 2D (Mesh, Torus, Cmesh) and 3D (Mesh, Torus) architectures. Finally, the fourth framework, the Knowledgeable Network-on-Chip Accelerator (K-NoC), is presented to predict two types of NoC architectures, one with a fixed delay between the IPs and another with the accurate delay; it too is built using the random forest algorithm. The results obtained from the frameworks have been compared with the most widely used software simulators, BookSim 2.0 and Orion. The LBF-NoC framework gave an error rate of 6% to 8% for both direct and indirect topologies.
    It also provided a speedup of 1000 for direct topologies and 5000 for indirect topologies. Using MRF-NoC, all the NoC configurations considered can be simulated in a single run. ELBA-NoC was able to predict the design parameters of five different architectures with an error rate of 4% to 6% and a minimum speedup of 16000 compared to the cycle-accurate simulator. Finally, K-NoC was able to predict both NoC architectures considered, one with fixed delay and another with the accurate delay, giving a speedup of 12000 with an error rate of less than 6% in both cases.
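    As a hedged illustration of the ELBA-NoC idea, the sketch below trains a random-forest regressor to map NoC configuration parameters to latency; the feature names follow the abstract, while the synthetic data stand in for training samples that would really come from BookSim 2.0 runs:

```python
# Random-forest surrogate for NoC latency prediction (illustrative only).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 500
cfg = pd.DataFrame({
    "nodes": rng.choice([16, 64, 256, 1024], n),   # network size
    "virtual_channels": rng.choice([2, 4, 8], n),
    "buffer_depth": rng.choice([4, 8, 16], n),
    "injection_rate": rng.uniform(0.01, 0.5, n),
})
# Fake latency target with a saturating trend in injection rate.
latency = (20 + 0.01 * cfg.nodes
           + 80 * cfg.injection_rate ** 2
           - 0.5 * cfg.virtual_channels
           + rng.normal(0, 2, n))

X_tr, X_te, y_tr, y_te = train_test_split(cfg, latency, random_state=7)
rf = RandomForestRegressor(n_estimators=200, random_state=7)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)
err = np.mean(np.abs(pred - y_te) / y_te) * 100
print(f"mean absolute percentage error: {err:.1f}%")
```

    Once trained, such a surrogate answers a configuration query in microseconds, which is the source of the large speedups reported over cycle-accurate simulation.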
  • Item
    Intra Prediction Strategies for Lossless Compression in High Efficiency Video Coding
    (National Institute of Technology Karnataka, Surathkal, 2020) S, Shilpa Kamath; P, Aparna
    HEVC, an abbreviation for High Efficiency Video Coding, is a digital video coding standard developed by the JCT-VC committee to address the bandwidth and storage requirements associated with handling high-definition multimedia content. Sophisticated coding tools and mechanisms are deployed in the framework, making it far superior to its predecessor standard H.264 in terms of both compression efficiency and quality of reconstruction, but at the cost of increased complexity. This thesis is mainly based on sample-based intra prediction strategies that improve prediction accuracy and thereby enhance compression efficiency for the lossless mode of HEVC. Lossless coding becomes imperative in certain applications, such as video analytics and video surveillance, that mandate distortion-free data reconstruction. The main focus of the thesis is to mitigate the spatial redundancy that persists due to coherence, smoothness, illumination, and shadowing effects in natural video sequences. These issues also challenge another class of multimedia content, commonly referred to as screen content (SC) sequences, as a result of certain peculiarities which they exhibit. The prediction generation stage of the CODEC therefore plays a significant role in minimizing the entropy through superior intra prediction strategies. The gradient-dependent predictor, the context-based predictor, and an improvised blend of predictors based on a penalizing factor modify the intra prediction mechanisms of HEVC and emerge as the highlights of this thesis. The overall algorithmic performance is evaluated by deriving the savings in bit-rate and run-time. Additionally, comparisons with several state-of-the-art prediction techniques reveal that the proposed methods attain significant improvements in coding gain with reasonable computational complexity and on-par savings in run-time. The algorithmic modifications are embedded into the HEVC reference software provided by the JCT-VC, and validation is performed using the HEVC test sequences along with another class of natural sequences referred to as the Class 4K sequences.
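    The abstract does not spell out the proposed predictors, so the sketch below uses the classic MED (median edge detection) predictor from JPEG-LS purely to illustrate the sample-based, gradient-dependent family the thesis builds on; it is not the thesis's own predictor:

```python
# Sample-based lossless intra prediction: each pixel is predicted from
# its already-coded left, top, and top-left neighbours, and only the
# residual is entropy-coded.
import numpy as np

def med_predict(a: int, b: int, c: int) -> int:
    """a = left, b = top, c = top-left neighbour of the current sample."""
    if c >= max(a, b):       # edge above or to the left
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c         # smooth region: planar prediction

def residuals(block: np.ndarray) -> np.ndarray:
    """Lossless residual plane for one block (borders copied as-is)."""
    res = block.astype(np.int32).copy()
    for y in range(1, block.shape[0]):
        for x in range(1, block.shape[1]):
            pred = med_predict(int(block[y, x - 1]), int(block[y - 1, x]),
                               int(block[y - 1, x - 1]))
            res[y, x] = int(block[y, x]) - pred
    return res

block = np.tile(np.arange(8, dtype=np.uint8) * 10, (8, 1))  # smooth ramp
print(residuals(block))  # near-zero residuals -> low entropy
```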
  • Item
    Damage Level Prediction of Non-Reshaped Berm Breakwater using Soft Computing Techniques
    (National Institute of Technology Karnataka, Surathkal, 2014) N, Harish; Rao, Subba; Mandal, Sukomal
    Tranquility inside a port and harbor has to be maintained for loading cargo and passengers. To maintain calm conditions, a breakwater has to be constructed to dissipate incoming wave energy. The alignment of the breakwater must be carefully chosen after examining the predominant direction of approach of waves and winds, the degree of protection required, the magnitude and direction of littoral drift, and the possible effect of the breakwater on the shoreline. In general, these studies are conducted as physical model tests in which various alternatives are examined, with the final selection based on performance consistent with cost. Considering the coastal boundary and depth variation, field analysis of wave-structure interaction and determination of the stability and damage level of a berm breakwater structure is difficult. Mathematical modeling of these complex interactions is also difficult, while physical modeling is expensive and time-consuming, yet one has traditionally had to depend on such physical model studies. Soft computing techniques such as the Artificial Neural Network (ANN), Support Vector Machine (SVM), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Particle Swarm Optimization (PSO) have been proposed as powerful tools for modeling and prediction in coastal/ocean engineering problems.

    For developing soft computing models to predict the damage level of a non-reshaped berm breakwater, data sets were obtained from experimental damage level measurements of a non-reshaped berm breakwater in the regular wave flume at the Marine Structures Laboratory, National Institute of Technology Karnataka, Surathkal, Mangalore, India. These data sets are divided into two groups, one for training and the other for testing. The input parameters that influence the damage level (S) of the non-reshaped berm breakwater, namely relative wave steepness (H/L0), surf similarity (ζ), slope angle (cot α), relative berm position by water depth (hB/d), relative armour stone weight (W50/W50max), relative berm width (B/L0), and relative berm location (hB/L0), are considered in developing the soft computing models.

    The ANN model is developed first. Two network models, ANN1 and ANN2, are constructed based on the parameters which influence the damage level. The seven input parameters initially considered for the ANN1 model are (H/L0), (ζ), (cot α), (hB/d), (W50/W50max), (B/L0), and (hB/L0). The ANN1 model is studied with different training algorithms, namely Scaled Conjugate Gradient (SCG), Gradient Descent with Adaptive learning (GDA), and the Levenberg-Marquardt Algorithm (LMA), with five hidden layer nodes and a constant 300 epochs. LMA showed better performance than the other algorithms. The influence of the input parameters is evaluated using Principal Component Analysis (PCA), from which it is observed that cot α is the parameter with the least influence on the damage level. Based on the PCA study, this least influential parameter is discarded and the ANN2 model is developed with the remaining six input parameters (a sketch of this screening-and-retraining workflow follows). Training and testing of the ANN2 network models are carried out with LMA for different numbers of hidden layer nodes and epochs. The ANN2 (6-5-1) with LMA and 300 epochs gave good results.
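    As a hedged sketch of this workflow: PCA-based screening of the seven breakwater parameters followed by a small neural network, with random placeholders for the wave-flume measurements and scikit-learn's LBFGS solver standing in for Levenberg-Marquardt training:

```python
# PCA screening of inputs, then retraining on the reduced feature set.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

names = ["H/L0", "zeta", "cot_alpha", "hB/d", "W50/W50max", "B/L0",
         "hB/L0"]
rng = np.random.default_rng(3)
X = rng.normal(size=(120, 7))                  # placeholder flume data
S = X[:, [0, 1, 3, 4, 5, 6]].sum(axis=1) + rng.normal(0, 0.1, 120)

Xs = StandardScaler().fit_transform(X)
pca = PCA().fit(Xs)
# Rank inputs by their loading on the dominant components.
influence = np.abs(pca.components_[:2]).sum(axis=0)
weakest = names[int(np.argmin(influence))]
print("least influential parameter:", weakest)

keep = [i for i, n in enumerate(names) if n != weakest]
ann2 = MLPRegressor(hidden_layer_sizes=(5,), solver="lbfgs",
                    max_iter=2000, random_state=3)
ann2.fit(Xs[:90, keep], S[:90])                # training split
print("test R^2:", ann2.score(Xs[90:, keep], S[90:]))
```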
    A correlation of about 88% is observed between the damage levels predicted by the ANN2 network models and the measured values, which are in good agreement. Furthermore, to improve the prediction of the damage level of the non-reshaped berm breakwater, an SVM model was developed. This technique works on the structural risk minimization principle, which has greater generalization ability and is superior to the empirical risk minimization principle adopted in conventional neural network models. The model is based on statistical learning theory: the basic idea of SVM is to map the original data x into a high-dimensional feature space through a nonlinear mapping function and construct an optimal hyper-plane in the new space. SVM models were constructed using different kernel functions, and the SVM was trained with each kernel to study its performance in predicting the damage level. The performance of SVM depends on the best setting of the SVM and kernel parameters. The Correlation Coefficient (CC) of the SVM (polynomial) model (CC Train = 0.908 and CC Test = 0.888) is considerably better than that of the other SVM models.

    To avoid over-fitting or under-fitting of the SVM model due to improper selection of SVM and kernel parameters, a hybrid particle swarm optimization tuned support vector machine regression (PSO-SVM) model is developed to predict the damage level of the non-reshaped berm breakwater. The performance of the PSO-SVM models is compared with the measured values using statistical measures such as CC, Root Mean Square Error (RMSE), and Scatter Index (SI). The PSO-SVM model with the polynomial kernel function gives realistic predictions when compared with the observed values (CC Train = 0.932, CC Test = 0.921), and the PSO-SVM models yield higher CCs than the SVM models. However, the ANN model in isolation cannot capture all data patterns easily. The Adaptive Neuro-Fuzzy Inference System (ANFIS) uses a hybrid learning algorithm, which is more effective than the pure gradient descent approach used in ANN. ANFIS models were developed with different membership functions, namely the triangular (TRIMF), trapezoidal (TRAPMF), generalized bell-shaped (GBELLMF), and Gaussian (GAUSSMF) built-in membership functions, to predict the damage level. The performance of the ANFIS models is likewise compared with the measured values using CC, RMSE, and SI. The ANFIS model with GAUSSMF gives realistic predictions when compared with the observed values (CC Train = 0.997, CC Test = 0.938), and the ANFIS models yield higher CCs than the ANN models.

    Finally, the different soft computing models, namely ANN, SVM, PSO-SVM, and ANFIS, are compared in terms of CC, RMSE, SI, and computational time. The hybrid models (ANFIS and PSO-SVM) showed better results than the individual models (ANN and SVM). When the hybrid models are compared, the ANFIS model gives a higher CC and lower RMSE, but ANFIS takes more computational time than the PSO-SVM model; hence PSO-SVM is computationally efficient compared to ANFIS. Both ANFIS and PSO-SVM perform well and track the observed values closely.
    Hence, ANFIS or PSO-SVM can replace ANN and SVM for damage level prediction of the non-reshaped berm breakwater. ANFIS or PSO-SVM can provide a fast and reliable solution for predicting the damage level, making them alternate approaches for mapping the wave-structure interactions of a berm breakwater.
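    As a closing illustration, the sketch below couples a minimal particle swarm to a polynomial-kernel SVR in the spirit of the PSO-SVM model; the swarm settings and placeholder data are assumptions, not the thesis setup:

```python
# Minimal PSO tuning the C and epsilon hyper-parameters of a
# polynomial-kernel SVR (illustrative assumptions throughout).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 6))                    # 6 breakwater inputs
y = X.sum(axis=1) + rng.normal(0, 0.2, 120)      # placeholder target

def fitness(pos):
    """Mean CV R^2 of a polynomial SVR at (log10 C, log10 epsilon)."""
    svr = SVR(kernel="poly", degree=2, C=10 ** pos[0],
              epsilon=10 ** pos[1])
    return cross_val_score(svr, X, y, cv=3).mean()

n_particles, n_iter = 10, 15
pos = rng.uniform([-1, -3], [2, 0], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    # Inertia plus cognitive and social pulls, the standard PSO update.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (log10 C, log10 eps):", gbest, "CV R^2:", pbest_val.max())
```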