Browsing by Author "Vincent, A.M."

Now showing 1 - 4 of 4
  • An improved hyperparameter optimization framework for AutoML systems using evolutionary algorithms
    (Nature Research, 2023) Vincent, A.M.; Padikkal, J.
    For any machine learning model, the choice of hyperparameter settings has a direct and significant impact on performance. In this paper, we discuss different types of hyperparameter optimization techniques and compare several of them on image classification datasets with the help of AutoML models. In particular, the paper studies Bayesian optimization in depth and proposes the use of a genetic algorithm, differential evolution, and the covariance matrix adaptation evolution strategy (CMA-ES) for acquisition-function optimization. We compare these variants with conventional Bayesian optimization and observe that CMA-ES and differential evolution improve the performance of standard Bayesian optimization, whereas Bayesian optimization tends to perform poorly when a genetic algorithm is used for acquisition-function optimization. © 2023, The Author(s).
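The acquisition-function idea in this abstract can be illustrated with a minimal, self-contained sketch (not the authors' code): a scalar differential-evolution loop maximizing a toy UCB-style acquisition over a hand-written stand-in for a Gaussian-process surrogate. The surrogate functions, bounds and DE settings are all illustrative assumptions.

```python
import math
import random

random.seed(0)

def surrogate_mean(x):
    # Toy stand-in for a GP posterior mean fitted to past evaluations.
    return math.sin(3.0 * x) + 0.5 * x

def surrogate_std(x):
    # Toy posterior spread: more uncertainty away from x = 1.
    return 0.2 + 0.3 * abs(x - 1.0)

def acquisition(x):
    # Upper-confidence-bound style acquisition: mean plus scaled uncertainty.
    return surrogate_mean(x) + 1.5 * surrogate_std(x)

def differential_evolution(f, lo, hi, pop_size=20, gens=60, F=0.8, CR=0.9):
    """Maximize f on [lo, hi] with a scalar DE/rand/1 scheme."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            others = [p for j, p in enumerate(pop) if j != i]
            a, b, c = random.sample(others, 3)
            trial = min(max(a + F * (b - c), lo), hi)
            if random.random() > CR:      # crossover, reduced to the scalar case
                trial = pop[i]
            if f(trial) > f(pop[i]):      # greedy selection
                pop[i] = trial
    return max(pop, key=f)

# The next point Bayesian optimization would evaluate under this surrogate.
best_x = differential_evolution(acquisition, -2.0, 2.0)
print(round(best_x, 3))
```

Here DE replaces the usual gradient-based inner optimizer of the acquisition function; swapping in CMA-ES at the same step follows the same pattern.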
  • Flood susceptibility mapping using AutoML and a deep learning framework with evolutionary algorithms for hyperparameter optimization
    (Elsevier Ltd, 2023) Vincent, A.M.; Kulithalai Shiyam Sundar, K.S.S.; Padikkal, J.
    Flooding is one of the most common natural hazards and has extremely detrimental consequences; understanding which areas are vulnerable to it is crucial to addressing these effects. In this work, we used machine learning models and automated machine learning (AutoML) systems for flood susceptibility mapping in Kerala, India. In particular, we used a three-dimensional convolutional neural network (CNN) architecture, assisted by hyperparameter optimization techniques that combine Bayesian optimization with evolutionary algorithms such as differential evolution and the covariance matrix adaptation evolution strategy (CMA-ES). The performance of all models is compared in terms of cross-entropy loss, accuracy, precision, recall, area under the curve (AUC) and kappa score. The CNN model outperforms the AutoML models, and the evolutionary hyperparameter optimization methods improve the CNN model's accuracy by 4 and 9 percentage points and its AUC score by 0.0265 and 0.0497, respectively. © 2023 Elsevier B.V.
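Among the evaluation metrics this abstract lists, the kappa score is the least familiar; a short, self-contained sketch of Cohen's kappa is below. The flood / no-flood labels are made up for illustration, not data from the study.

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between two label sequences beyond chance."""
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    # Expected agreement if both labelings were independent at these class rates.
    expected = sum(true_counts[c] * pred_counts.get(c, 0) for c in true_counts) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Illustrative flood (1) / no-flood (0) labels.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]
print(round(cohens_kappa(y_true, y_pred), 3))  # prints 0.5
```

Unlike raw accuracy (0.75 here), kappa discounts the agreement a random labeler would achieve, which matters when flooded cells are a small fraction of a study area.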
  • Improving multiple model ensemble predictions of daily precipitation and temperature through machine learning techniques
    (Nature Research, 2022) Jose, D.M.; Vincent, A.M.; Dwarakish, G.S.
    Multi-model ensembles (MMEs) are used to improve the performance of GCM simulations. This study evaluates the performance of MMEs of precipitation, maximum temperature and minimum temperature over a tropical river basin in India, developed with techniques including the arithmetic mean, multiple linear regression (MLR), support vector machine (SVM), extra trees regressor (ETR), random forest (RF) and long short-term memory (LSTM). Twenty-one general circulation models (GCMs) from the National Aeronautics and Space Administration (NASA) Earth Exchange Global Daily Downscaled Projections (NEX-GDDP) dataset and 13 GCMs from the Coupled Model Intercomparison Project, Phase 6 (CMIP6) are used for this purpose. The results reveal that an LSTM ensembling model performs significantly better than the other models for precipitation, with a coefficient of determination (R²) of 0.9. For temperature, all the machine learning (ML) methods performed equally well, with RF and LSTM consistently strong across all temperature cases and R² values ranging from 0.82 to 0.93. Hence, based on this study, the RF and LSTM methods are recommended for creating MMEs in the basin. In general, all ML approaches performed better than the mean-ensemble approach. © 2022, The Author(s).
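The contrast between the arithmetic-mean baseline and a regression-based MME can be sketched with a tiny least-squares fit in stdlib Python. The "member" temperatures and observations below are invented for illustration; a real MME would use many members and long daily records.

```python
def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

def solve(A, v):
    """Solve A x = v by Gaussian elimination with partial pivoting."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_mlr(members, obs):
    """Least-squares coefficients for y ≈ b + w·x via the normal equations."""
    X = [[1.0] + row for row in members]          # intercept column first
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * y for r, y in zip(X, obs)) for i in range(k)]
    return solve(XtX, Xty)                        # [b, w1, w2, ...]

# Invented "GCM member" temperatures (two members) and observations, per day.
members = [[25.0, 27.0], [26.0, 28.5], [24.0, 25.5], [27.5, 30.0], [23.0, 24.5]]
observed = [26.2, 27.6, 25.1, 28.9, 24.0]

coef = fit_mlr(members, observed)
mlr_pred = [coef[0] + sum(w * x for w, x in zip(coef[1:], row)) for row in members]
mean_pred = [sum(row) / len(row) for row in members]
print(rmse(mlr_pred, observed) < rmse(mean_pred, observed))  # prints True
```

The mean ensemble is just the fixed weights (0.5, 0.5) with no intercept, so the fitted MLR can only match or beat it on the training data; the RF and LSTM ensembles the study favors generalize this to nonlinear weightings.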
  • Optimizing Hyperparameters in Meta-Learning for Enhanced Image Classification
    (Institute of Electrical and Electronics Engineers Inc., 2025) Vincent, A.M.; Padikkal, P.; Bini, A.A.
    This paper investigates the significance of hyperparameter optimization in meta-learning for image classification tasks. Despite advancements in deep learning, real-time image classification applications often suffer from data inadequacy. Few-shot learning addresses this challenge by enabling learning from limited samples. Meta-learning, a prominent tool for few-shot learning, learns across multiple classification tasks. We explore different types of meta-learners, with a particular focus on metric-based models. We analyze the potential of hyperparameter optimization techniques, specifically Bayesian optimization and its variants, to enhance the performance of these models. Experimental results on the Omniglot and ImageNet datasets demonstrate that incorporating Bayesian optimization, particularly its evolutionary strategy variant, into meta-learning frameworks leads to improved accuracy compared to settings without hyperparameter optimization. Here, we show that by optimizing hyperparameters for individual tasks rather than using a uniform setting, we achieve notable gains in model performance, underscoring the importance of tailored hyperparameter configurations in meta-learning. © 2013 IEEE.
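The per-task versus uniform hyperparameter point can be made concrete with a toy sketch: two one-parameter "tasks" with different curvatures, a learning-rate grid, and plain gradient descent. Everything here is invented for illustration; the paper's actual tasks are few-shot image classification, tuned with Bayesian optimization rather than grid search.

```python
def task_loss(theta, curvature):
    # Toy one-parameter "task": quadratic loss with task-specific curvature.
    return curvature * (theta - 1.0) ** 2

def train(lr, curvature, steps=20):
    # Plain gradient descent; the learning rate is the hyperparameter.
    theta = 0.0
    for _ in range(steps):
        theta -= lr * 2.0 * curvature * (theta - 1.0)
    return task_loss(theta, curvature)

grid = [0.01, 0.05, 0.1, 0.3]          # candidate learning rates
curvatures = [0.5, 4.0]                # two tasks with different geometry

# Uniform setting: one learning rate shared by both tasks.
uniform_best = min(sum(train(lr, c) for c in curvatures) for lr in grid)
# Per-task setting: tune the learning rate for each task separately.
per_task = sum(min(train(lr, c) for lr in grid) for c in curvatures)
print(per_task < uniform_best)  # prints True: tailored settings win here
```

A sum of per-task minima can never exceed the minimum of the summed losses, and when tasks differ in geometry, as here, the gap is strict, which is the intuition behind the paper's task-wise tuning.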

Maintained by Central Library NITK | DSpace software copyright © 2002-2026 LYRASIS
