
Browsing by Author "Padikkal, J."

Now showing 1 - 20 of 24
    A New Parameter Choice Strategy for Lavrentiev Regularization Method for Nonlinear Ill-Posed Equations
    (MDPI, 2022) George, S.; Padikkal, J.; Remesh, K.; Argyros, I.K.
    In this paper, we introduce a new source condition and a new parameter-choice strategy that also yields the known best error estimate. The results are obtained under the assumptions used in earlier studies. Further, we study the proposed parameter-choice strategy and apply it to the method (in the finite-dimensional setting) considered in George and Nair (2017). © 2022 by the authors.
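For context, the regularized equation this line of work studies: the Lavrentiev (simplified) method replaces the ill-posed equation F(x) = y, for a monotone operator F on a Hilbert space with noisy data y^δ, by the well-posed equation

```latex
% Lavrentiev regularization: \alpha > 0 is the regularization
% parameter and x_0 an initial guess; a parameter-choice strategy
% selects \alpha from the noise level \delta.
F(x_\alpha^\delta) + \alpha\,(x_\alpha^\delta - x_0) = y^\delta
```

The symbols x_0, α and y^δ follow the standard notation for this method, not necessarily the paper's.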
    A retinex inspired deep image prior model for despeckling and deblurring of aerial and satellite images using proximal gradient method
    (Taylor and Francis Ltd., 2025) Shastry, A.; Bini, A.A.; Padikkal, J.
    Unsupervised learning models, particularly in the remote sensing domain, have gained significant attention in recent years. Various degradations in satellite images, primarily occurring during acquisition, pose a substantial hurdle to obtaining reliable ground truth and extensive training data. The Deep Image Prior (DIP) model addresses these issues by performing restoration tasks using a single image, relying on the implicit regularization inherent in the network architecture. In this paper, we propose a novel approach that integrates the DIP model within the retinex framework to restore aerial and satellite images from Gamma-distributed speckle and linear shift-invariant Gaussian blur, along with contrast enhancement, using the alternating proximal gradient descent ascent (PGDA) method. Our methodology combines implicit regularization with explicit total variation (TV) regularization, incorporating automated estimation of local regularization parameters. The data-fidelity component of the optimization function is formulated using the Bayesian maximum a posteriori (MAP) estimate, assuming the speckle follows a Gamma distribution. Despeckling and deblurring are demonstrated both individually and as a combined task on aerial and Synthetic Aperture Radar (SAR) images of different resolutions and polarizations from various sources. The results are compared with various state-of-the-art despeckling and deblurring models using image quality metrics such as Equivalent Number of Looks (ENL), Contrast to Noise Ratio (CNR), Edge Preserving Index (EPI), Entropy, Global Contrast Factor (GCF), Natural Image Quality Evaluator (NIQE), Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) and the Bradley-Terry (B-T) score. The quality of the restored images demonstrates the superior performance of the proposed method over the existing models under study. © 2024 Informa UK Limited, trading as Taylor & Francis Group.
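As a hedged illustration of the kind of variational problem the abstract describes (not the paper's exact functional), the Gamma-MAP data fidelity commonly used for multiplicative speckle, combined with TV regularization, reads:

```latex
% f: observed speckled image, u: latent clean image,
% \lambda: regularization weight balancing fidelity and smoothness.
\min_{u>0}\;\int_\Omega \Big( \log u + \frac{f}{u} \Big)\,dx
  \;+\; \lambda \int_\Omega |\nabla u|\,dx
```

The first term is the negative log-likelihood under a Gamma speckle model; the second is the explicit TV prior mentioned in the abstract.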
    A self-attention driven retinex-based deep image prior model for satellite image restoration
    (Elsevier Ltd, 2024) Shastry, A.; Padikkal, J.; George, S.; Bini, A.A.
    A self-attention-driven Deep Image Prior (DIP) framework is proposed in this work for restoring satellite images corrupted by speckle interference and contrast deficiency. The retinex-based framework incorporated herein leverages the benefits of the DIP approach for image restoration, requiring only a single input image and eliminating the need for ground truth or training data. An attention mechanism is incorporated into the architecture of the DIP networks to effectively capture fine textures, enhancing the restoration capability of the model. Two generative networks are employed to obtain the luminance and reflectance maps, with the model's loss functions specifically designed to tackle the speckle interference and contrast distortions present in the input. These generated maps eventually reconstruct the enhanced version of the image. Satellite images from different sensors are used to demonstrate and compare the performance of the model. Various state-of-the-art models are evaluated and compared with the proposed strategy using different image quality metrics and statistical tests. The experimental results, incorporating both visual and statistical inferences, demonstrate the superiority and efficiency of the model. Additionally, an ablation analysis is performed to determine optimal regularization parameters, and the significance of integrating attention modules at different layers of the architecture is also demonstrated. © 2023 Elsevier Ltd
    A weighted nuclear norm (WNN)-based retinex DIP framework for restoring aerial and satellite images corrupted by gamma distributed speckle noise
    (Springer, 2024) Shastry, A.; Padikkal, J.; George, S.; Bini, A.A.
    Restoration and enhancement are crucial preprocessing steps in the satellite domain. Mainly in active remote sensing such as Synthetic Aperture Radar (SAR), the images are more prone to speckle distortions and their reduction is not so trivial. Traditional deep learning models require large training datasets, limiting their applicability. This paper introduces a novel approach that combines the Deep Image Prior (DIP) model with a weighted nuclear norm (WNN) within a variational retinex framework to address these challenges. DIP leverages prior knowledge about noise distribution and works effectively with a single noisy image, eliminating the need for a large number of training images or ground truth. The WNN assigns non-negative weights to singular values, capturing the significance of each value and preserving crucial information during restoration. This approach offers a promising solution for satellite image restoration without relying on huge training data. The proposed method is evaluated through extensive experiments using various image quality metrics, including PSNR, SSIM, ENL, CNR, Entropy, and GCF. The comparative studies provide compelling evidence that the proposed method surpasses existing techniques in effectively restoring and enhancing speckled input images. Furthermore, statistical analysis performed using the Friedman test demonstrates the superior denoising performance of the model. Additionally, an ablation study is conducted to empirically determine the optimal regularization parameters, ensuring the optimal performance of the model. However, the theoretical selection of parameters for achieving optimal results remains an area that requires further exploration. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.
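For reference, the weighted nuclear norm mentioned above is the weighted sum of singular values:

```latex
% \sigma_i(X): i-th singular value of X (nonincreasing order),
% w_i \ge 0: weight assigned to \sigma_i.
\|X\|_{w,*} \;=\; \sum_{i} w_i\, \sigma_i(X)
```

Assigning smaller weights to the larger singular values penalizes the dominant components less, which is what lets the norm preserve principal image structure during restoration.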
    An apriori parameter choice strategy and a fifth order iterative scheme for Lavrentiev regularization method
    (Institute for Ionics, 2023) George, S.; Saeed, M.; Argyros, I.K.; Padikkal, J.
    In this paper, we propose a new source condition and introduce a new a priori parameter-choice strategy for the Lavrentiev regularization method for nonlinear ill-posed operator equations involving a monotone operator in the setting of a Hilbert space. A fifth-order iterative method is also proposed for approximately solving the Lavrentiev regularized equation. A numerical example is provided to demonstrate the performance of the method. © 2022, The Author(s) under exclusive licence to Korean Society for Informatics and Computational Applied Mathematics.
    An improved hyperparameter optimization framework for AutoML systems using evolutionary algorithms
    (Nature Research, 2023) Vincent, A.M.; Padikkal, J.
    For any machine learning model, finding the optimal hyperparameter setting has a direct and significant impact on the model's performance. In this paper, we discuss different types of hyperparameter optimization techniques and compare the performance of several of them on image classification datasets with the help of AutoML models. In particular, the paper studies Bayesian optimization in depth and proposes the use of the genetic algorithm, differential evolution and the covariance matrix adaptation evolution strategy (CMA-ES) for acquisition function optimization. Moreover, we compare these variants of Bayesian optimization with conventional Bayesian optimization and observe that the use of CMA-ES and differential evolution improves the performance of standard Bayesian optimization. We also notice that Bayesian optimization tends to perform poorly when the genetic algorithm is used for acquisition function optimization. © 2023, The Author(s).
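A minimal sketch of the inner step this paper varies: maximizing an acquisition function with differential evolution instead of a local optimizer. The toy acquisition surface below is a stand-in; in practice the acquisition (e.g. expected improvement) would be computed from the surrogate model, and the paper's exact setup is not reproduced here.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy multimodal "acquisition function" standing in for, e.g.,
# expected improvement computed from a fitted surrogate model.
def acquisition(x):
    return np.sin(3.0 * x[0]) * np.exp(-x[0] ** 2)

# Differential evolution minimizes, so negate to maximize
# the acquisition over the search bounds.
result = differential_evolution(
    lambda x: -acquisition(x),
    bounds=[(-2.0, 2.0)],
    seed=0,
)
x_next = result.x  # candidate hyperparameter point to evaluate next
```

Because differential evolution is a global, population-based search, it is less likely than a gradient-based local optimizer to get stuck in one of the acquisition's many local maxima.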
    AttentionDIP: attention-based deep image prior model to restore satellite and aerial images from gamma distributed speckle interference
    (Springer Science and Business Media Deutschland GmbH, 2024) Shastry, A.; George, S.; Bini, A.A.; Padikkal, J.
    Image restoration is an inevitable pre-processing step in most satellite imaging applications. The satellite imaging modality such as Synthetic Aperture Radar (SAR) is prone to speckle distortions due to constructive and destructive interference of the probing signals. Speckles being data correlated and multiplicative, their reduction is not so trivial. Since speckles are not purely noise interventions, a blind reduction process leads to spurious analysis at the later stages. Moreover, the image details are liable to get compromised during such a noise reduction process. An attention-based deep image prior (DIP) model with U-Net architecture has been proposed in this work to carefully address these setbacks. The attention block is used to scale the features extracted from the encoder, and they are concatenated with the features from the decoder to obtain both low- and high-level features. The attention module incorporated in the model helps to extract significant complex structures in SAR images. Further, the DIP model duly respects the noise distribution of speckles while performing the despeckling task. Various synthetic, natural, aerial, and satellite images are subjected to the testing and verification process, and the results obtained are in favor of the proposed model. The quantitative analysis carried out using various statistical metrics in this study also reveals the restoration ability of the proposed method in terms of both despeckling and structure preservation. © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2023.
    Convergence Order of a Class of Jarratt-like Methods: A New Approach
    (Multidisciplinary Digital Publishing Institute (MDPI), 2025) Kunnarath, A.; George, S.; Padikkal, J.; Argyros, I.K.
    Symmetry and anti-symmetry appear naturally in the study of systems of nonlinear equations resulting from numerous fields. The solutions of such equations can be obtained in analytical form only in some special situations. Therefore, algorithms or iterative schemes are mostly studied, which approximate the solution. In particular, Jarratt-like methods were introduced with convergence order at least six in Euclidean spaces. We study the methods in the Banach-space setting. Semilocal convergence is studied to obtain the ball containing the solution. The local convergence analysis is performed without the help of the Taylor series with relaxed differentiability assumptions. Our assumptions for obtaining the convergence order are independent of the solution; earlier studies used assumptions involving the solution for local convergence analysis. We compare the methods numerically with similar-order methods and also study the dynamics. © 2024 by the authors.
    Detection of retinal disorders from OCT images using generative adversarial networks
    (Springer, 2022) Smitha, A.; Padikkal, J.
    Retinal image analysis has opened up a new window for the prompt diagnosis and detection of various retinal disorders. Optical Coherence Tomography (OCT) is one of the major diagnostic tools for identifying retinal abnormalities related to macular disorders such as Age-Related Macular Degeneration (AMD) and Diabetic Macular Edema (DME). The clinical findings include retinal layer analysis to spot abnormalities on OCT images. Though various models have been proposed over the years to diagnose these disorders automatically, an end-to-end system that performs automatic denoising, segmentation, and classification does not exist to the best of our knowledge. This paper proposes a Generative Adversarial Network (GAN) based approach for the automated segmentation and classification of OCT B-scans to diagnose AMD and DME. The proposed method integrates handcrafted Gabor features to enhance retinal layer segmentation and non-local denoising to remove speckle noise. The classification metrics of the GAN are compared with existing methods. An accuracy of up to 92.42% and an F1-score of 0.79 indicate that GANs can perform well for the segmentation and classification of OCT images. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
    Enhancing the practicality of Newton–Cotes iterative method
    (Institute for Ionics, 2023) Sadananda, R.; George, S.; Kunnarath, A.; Padikkal, J.; Argyros, I.K.
    The new Newton-type iterative method developed by Khirallah et al. (Bull Math Sci Appl 2:01–14, 2012) is shown to have convergence order three, without the application of Taylor series expansion. Unlike earlier studies, our analysis is based on assumptions on the second-order derivative of the involved operator. Moreover, this technique is extended to methods of higher convergence orders five and six. This paper also verifies the theoretical approach using numerical examples and comparisons, in addition to visualizations of the Julia and Fatou sets of the corresponding methods. © 2023, The Author(s) under exclusive licence to Korean Society for Informatics and Computational Applied Mathematics.
    Extending the Applicability of Cordero Type Iterative Method
    (MDPI, 2022) Remesh, K.; Argyros, I.K.; Saeed, M.; George, S.; Padikkal, J.
    Symmetries play a vital role in the study of physical systems. For example, microworld and quantum physics problems are modeled on the principles of symmetry. These problems are then formulated as equations defined on suitable abstract spaces. Most of these studies reduce to solving nonlinear equations in suitable abstract spaces iteratively. In particular, the convergence of a sixth-order Cordero type iterative method for solving nonlinear equations was studied using Taylor expansion and assumptions on the derivatives of order up to six. In this study, we obtained order of convergence six for Cordero type method using assumptions only on the first derivative. Moreover, we modified Cordero’s method and obtained an eighth-order iterative scheme. Further, we considered analogous iterative methods to solve an ill-posed problem in a Hilbert space setting. © 2022 by the authors.
    Finite dimensional realization of fractional Tikhonov regularization method in Hilbert scales
    (Elsevier B.V., 2022) Mekoth, C.; George, S.; Padikkal, J.; Erappa, S.M.
    One of the intuitive restrictions of infinite dimensional Fractional Tikhonov Regularization Method (FTRM) for ill-posed operator equations is its numerical realization. This paper addresses the issue to a considerable extent by using its finite dimensional realization in the setting of Hilbert scales. Using adaptive parameter choice strategy, we choose the regularization parameter and obtain an optimal order error estimate. Also, the proposed method is applied to the well known examples in the setting of Hilbert scales. © 2021 The Author(s)
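One common formulation of fractional Tikhonov regularization (in the style of Hochstenbach and Reichel; the paper's Hilbert-scale variant differs in detail) replaces the classical penalty by a weighted least-squares problem:

```latex
% T: linear ill-posed operator, y^\delta: noisy data,
% weight W = (TT^*)^{(\gamma-1)/2} with fraction 0 < \gamma \le 1;
% \gamma = 1 recovers classical Tikhonov regularization.
\min_x \; \|Tx - y^\delta\|_W^2 + \alpha \|x\|^2
\;\Longleftrightarrow\;
\big( (T^*T)^{\frac{\gamma+1}{2}} + \alpha I \big)\, x_\alpha^\delta
  = (T^*T)^{\frac{\gamma-1}{2}}\, T^*\, y^\delta
```

The fractional exponent γ damps the oversmoothing of classical Tikhonov; the finite-dimensional realization projects this equation onto a finite-dimensional subspace.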
    Finite dimensional realization of the FTR method with Raus and Gfrerer type discrepancy principle
    (Springer-Verlag Italia s.r.l., 2023) George, S.; Padikkal, J.; Krishnendu, R.
    It is known that the standard Tikhonov regularization methods oversmooth the solution x̂ of the ill-posed equation T(x) = y, so the computed approximate solution lacks many inherent details that are expected in the desired solution. To rectify this problem, the Fractional Tikhonov Regularization (FTR) method has been introduced. Kanagaraj et al. (J Appl Math Comput 63(1):87–105, 2020) studied the FTR method for solving ill-posed problems. Here, techniques are developed to study the Finite-Dimensional FTR (FDFTR) method. We also study a Raus and Gfrerer type discrepancy principle for the FDFTR method and compare the numerical results with other discrepancy principles of the same type. © 2023, The Author(s), under exclusive licence to Springer-Verlag Italia S.r.l., part of Springer Nature.
    Flood susceptibility mapping using AutoML and a deep learning framework with evolutionary algorithms for hyperparameter optimization
    (Elsevier Ltd, 2023) Vincent, A.M.; Kulithalai Shiyam Sundar, K.S.S.; Padikkal, J.
    Flooding is one of the most common natural hazards that have extremely detrimental consequences. Understanding which areas are vulnerable to flooding is crucial to addressing these effects. In this work, we use machine learning models and Automated machine learning (AutoML) systems for flood susceptibility mapping in Kerala, India. In particular, we used a three-dimensional convolutional neural network (CNN) architecture for this purpose. The CNN model was assisted with hyperparameter optimization techniques that combine Bayesian optimization with evolutionary algorithms like differential evolution and covariance matrix adaptation evolutionary strategies. The performances of all models are compared in terms of cross-entropy loss, accuracy, precision, recall, area under the curve (AUC) and kappa score. The CNN model shows better performance than the AutoML models. Evolutionary algorithm-assisted hyperparameter optimization methods improved the efficiency of the CNN model by 4 and 9 percent in terms of accuracy and by 0.0265 and 0.0497 with reference to the AUC score. © 2023 Elsevier B.V.
    New Trends in Applying LRM to Nonlinear Ill-Posed Equations
    (Multidisciplinary Digital Publishing Institute (MDPI), 2024) George, S.; Sadananda, R.; Padikkal, J.; Kunnarath, A.; Argyros, I.K.
    Tautenhahn (2002) studied the Lavrentiev regularization method (LRM) to approximate a stable solution for the ill-posed nonlinear equation F(x) = y, where F : D(F) ⊆ X → X is a nonlinear monotone operator and X is a Hilbert space. The operator in the example used in Tautenhahn's paper was not a monotone operator. So the following question arises: can we use the LRM for ill-posed nonlinear equations when the involved operator is not monotone? This paper provides a sufficient condition for employing the Lavrentiev regularization technique on such equations whenever the operator involved is non-monotone. Under certain assumptions, the error analysis and an adaptive parameter-choice strategy for the method are discussed. Moreover, the developed theory is applied to two well-known ill-posed problems: the inverse gravimetry and growth law problems. © 2024 by the authors.
    On Newton’s Midpoint-Type Iterative Scheme’s Convergence
    (Springer, 2022) Krishnendu, R.; Saeed, M.; George, S.; Padikkal, J.
    This paper introduces new three-step iterative schemes with convergence orders five and six for solving nonlinear equations in Banach spaces. The proposed schemes' convergence is assessed using assumptions on the operator's derivatives up to order two. Unlike earlier studies, the convergence analysis of these methods is not based on Taylor expansion. Numerical examples and basins of attraction are given in this study. © 2022, The Author(s), under exclusive licence to Springer Nature India Private Limited.
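For orientation, a sketch of the classical third-order midpoint Newton scheme that this family of methods builds on (the paper's five- and six-order three-step schemes are extensions, not reproduced here); the test function and starting point below are illustrative:

```python
from math import cos, sin

def midpoint_newton(f, fprime, x, tol=1e-12, max_iter=50):
    """Third-order midpoint Newton iteration:
    x_{n+1} = x_n - f(x_n) / f'(x_n - f(x_n) / (2 f'(x_n)))."""
    for _ in range(max_iter):
        step = f(x) / fprime(x)                 # ordinary Newton step
        x_new = x - f(x) / fprime(x - 0.5 * step)  # derivative at midpoint
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve cos(x) = x (root near 0.739085).
root = midpoint_newton(lambda t: cos(t) - t, lambda t: -sin(t) - 1.0, 1.0)
```

Evaluating the derivative at the midpoint of the Newton step, rather than at the current iterate, lifts the order from two to three at the cost of one extra derivative evaluation per step.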
    On obtaining order of convergence of Jarratt-like method without using Taylor series expansion
    (Springer Nature, 2024) George, S.; Kunnarath, A.; Sadananda, R.; Padikkal, J.; Argyros, I.K.
    In 2014, Sharma and Arora introduced two efficient Jarratt-like methods for solving systems of non-linear equations which are of convergence order four and six. To prove the respective convergence order, they used Taylor expansion which demands existence of derivative of the function up to order seven. In this paper, we obtain the respective convergence order for these methods using assumptions only on first three derivatives of the function. Other problems with this approach are: the lack of computable a priori estimates on the error distances involved as well as isolation of the solution results. These concerns constitute our motivation for this article. One extension of the fourth order method is presented which is of convergence order eight and the same is proved without any extra assumptions on the higher order derivatives. All the results are proved in a general Banach space setting. Numerical examples and dynamics of the methods are studied to analyse the performance of the method. © The Author(s) under exclusive licence to Sociedade Brasileira de Matemática Aplicada e Computacional 2024.
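As a concrete illustration, the classical fourth-order Jarratt scheme for a scalar equation (the Sharma-Arora variants studied in the paper are related but not identical, and the paper works in Banach spaces); the test equation is illustrative:

```python
def jarratt(f, fprime, x, tol=1e-12, max_iter=50):
    """Classical fourth-order Jarratt iteration:
    y_n     = x_n - (2/3) f(x_n)/f'(x_n)
    x_{n+1} = x_n - (1/2) (3f'(y_n)+f'(x_n))/(3f'(y_n)-f'(x_n))
                    * f(x_n)/f'(x_n)."""
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)
        y = x - (2.0 / 3.0) * fx / dfx          # predictor step
        dfy = fprime(y)
        x_new = x - 0.5 * ((3.0 * dfy + dfx) / (3.0 * dfy - dfx)) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Cube root of 2 via f(x) = x**3 - 2 (illustrative example).
root = jarratt(lambda t: t**3 - 2.0, lambda t: 3.0 * t**2, 1.5)
```

Two derivative evaluations and one function evaluation per step give order four, which is why Jarratt-type schemes are considered efficient.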
    On the convergence of Homeier method and its extensions
    (Springer Science and Business Media B.V., 2022) Muhammed Saeed, K.; Krishnendu, R.; George, S.; Padikkal, J.
    A third-order Homeier method for solving equations in Banach space is studied. Using assumptions on the first and second derivatives, we obtained third-order convergence. Our technique does not involve Taylor series expansion and can be extended to similar higher-order methods. We have given two extensions of the method with orders five and six. Examples with radii of convergence and basins of attraction are provided. © 2022, The Author(s), under exclusive licence to The Forum D’Analystes.
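For orientation, a sketch of the third-order scheme usually attributed to Homeier, the harmonic-mean Newton iteration (stated here for a scalar equation; the paper works in Banach spaces, and its fifth- and sixth-order extensions are not reproduced):

```python
def homeier(f, fprime, x, tol=1e-12, max_iter=50):
    """Harmonic-mean Newton iteration (third order):
    x_{n+1} = x_n - (f(x_n)/2) * (1/f'(x_n) + 1/f'(y_n)),
    where y_n = x_n - f(x_n)/f'(x_n) is the Newton predictor."""
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)
        y = x - fx / dfx                        # Newton predictor
        x_new = x - 0.5 * fx * (1.0 / dfx + 1.0 / fprime(y))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Square root of 3 via f(x) = x**2 - 3 (illustrative example).
root = homeier(lambda t: t**2 - 3.0, lambda t: 2.0 * t, 2.0)
```

Averaging the reciprocal derivatives at the current point and the Newton predictor is what raises the order from two to three.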
    On the convergence of open Newton’s method
    (Springer Science and Business Media B.V., 2023) Kunnarath, A.; George, S.; Sadananda, R.; Padikkal, J.; Argyros, I.K.
    Cordero and Torregrosa proved the convergence of two Newton-like methods in 2007. Using Taylor expansion (requiring the existence of derivatives of the involved operator up to order four), they obtained convergence order three for these methods. Here, convergence order three is obtained for the open Newton's method and two of its extensions with assumptions only on the first two derivatives of the operator involved. We verify the results with examples, and the dynamics of the methods are presented. © 2023, The Author(s), under exclusive licence to The Forum D'Analystes.
    On the Order of Convergence of the Noor–Waseem Method
    (MDPI, 2022) George, S.; Sadananda, R.; Padikkal, J.; Argyros, I.K.
    In 2009, Noor and Waseem studied an important third-order iterative method. The convergence order is obtained using Taylor expansion and assumptions on the derivatives of order up to four. In this paper, we have obtained convergence order three for this method using assumptions on the first and second derivatives of the involved operator. Further, we have extended the method to obtain a fifth- and a sixth-order method. The dynamics of the methods are also provided in this study. Numerical examples are included. The same technique can be used to extend the utilization of other single- or multistep methods. © 2022 by the authors.
  • «
  • 1 (current)
  • 2
  • »

Maintained by Central Library NITK | DSpace software copyright © 2002-2026 LYRASIS
