Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

Search Results

Now showing 1 - 4 of 4
  • Item
    Local comparison between two ninth convergence order algorithms for equations
    (MDPI AG, 2020) Regmi, S.; Argyros, I.K.; George, S.
    A local convergence comparison is presented between two ninth-order algorithms for solving nonlinear equations. In earlier studies, derivatives up to the 10th order that do not appear in the algorithms were used to show convergence. Moreover, no computable error estimates, radius of convergence, or results on the uniqueness of the solution were given. The novelty of our study is that we address all these concerns by using only the first derivative, which actually appears in these algorithms; this extends their applicability. Our technique provides a direct comparison between these algorithms under the same set of convergence criteria and can be applied to other algorithms as well. Numerical experiments are used to test the convergence criteria. © 2020 by the authors.
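    The ninth-order schemes themselves are not reproduced in the abstract. As a purely illustrative sketch of the idea of a computable convergence radius based only on the first derivative, the snippet below uses the classical estimate r = 2/(3L) for Newton's method, where L is a Lipschitz constant of f' on the domain (the example function and constants are this sketch's own assumptions, not taken from the paper):

    ```python
    import math

    def newton(f, fp, x0, tol=1e-12, itmax=50):
        """Plain Newton iteration; returns an approximate root."""
        x = x0
        for _ in range(itmax):
            step = f(x) / fp(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Example: f(x) = e^x - 1 with root x* = 0 on D = [-1, 1].
    # On D, |f'(x) - f'(y)| = |e^x - e^y| <= e * |x - y|, so L = e,
    # and the classical local-convergence radius estimate is r = 2 / (3 L).
    L = math.e
    r = 2.0 / (3.0 * L)        # ~0.245: computable ball of guaranteed convergence

    f = lambda x: math.exp(x) - 1.0
    fp = lambda x: math.exp(x)

    x0 = 0.9 * r               # any starting point inside the ball
    root = newton(f, fp, x0)
    ```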
  • Item
    Extended Kantorovich theory for solving nonlinear equations with applications
    (Springer Nature, 2023) Regmi, S.; Argyros, I.K.; George, S.; Argyros, M.
    The Kantorovich theory plays an important role in the study of nonlinear equations. It is used to establish the existence of a solution for an equation defined in an abstract space. The solution is usually determined by an iterative process such as Newton's method or one of its variants. A plethora of convergence results are available, based mainly on Lipschitz-like conditions on the derivatives and the celebrated Kantorovich convergence criterion. But there are even simple real equations for which this criterion is not satisfied, so the applicability of the theory is limited. The question then arises: is it possible to extend this theory without adding convergence conditions? The answer is yes, and this is the novelty and motivation for this paper. Other extensions include better information about the solution, i.e., its uniqueness ball, the ratio of quadratic convergence, and a more precise error analysis. The numerical section contains a Hammerstein-type nonlinear equation and other examples as applications. © 2023, The Author(s) under exclusive licence to Sociedade Brasileira de Matemática Aplicada e Computacional.
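    As a minimal scalar illustration of the classical Kantorovich criterion the paper extends (the extended, weaker criterion itself is not reproduced in the abstract), the sketch below checks h = K·|f(x0)|/f'(x0)² ≤ 1/2 before running Newton's method; the example equation is an assumption chosen for illustration:

    ```python
    def kantorovich_h(f, fp, K, x0):
        """Scalar Kantorovich quantity h = K * |f(x0)| / f'(x0)^2.
        Convergence of Newton's method from x0 is guaranteed when h <= 1/2."""
        return K * abs(f(x0)) / fp(x0) ** 2

    def newton(f, fp, x0, tol=1e-12, itmax=50):
        x = x0
        for _ in range(itmax):
            x -= f(x) / fp(x)
            if abs(f(x)) < tol:
                break
        return x

    f = lambda x: x * x - 2.0      # root: sqrt(2)
    fp = lambda x: 2.0 * x
    K = 2.0                        # |f'(x) - f'(y)| = 2 |x - y|
    x0 = 1.5

    h = kantorovich_h(f, fp, K, x0)   # 2 * 0.25 / 9 ~ 0.056 <= 1/2: guaranteed
    root = newton(f, fp, x0)
    ```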
  • Item
    Hybrid Newton-like Inverse Free Algorithms for Solving Nonlinear Equations
    (Multidisciplinary Digital Publishing Institute (MDPI), 2024) Argyros, I.K.; George, S.; Regmi, S.; Argyros, C.I.
    Iterative algorithms that require the inversion of linear operators, which is in general computationally expensive, are difficult to implement. This is why hybrid Newton-like algorithms without inverses are developed in this paper to solve Banach space-valued nonlinear equations. The inverses of the linear operator are replaced by a finite sum of fixed linear operators. Two types of convergence analysis are presented for these algorithms: semilocal and local. The Fréchet derivative of the operator in the equation is controlled by a majorant function. The semilocal analysis also relies on majorizing sequences. The celebrated contraction mapping principle is utilized to study the convergence of the Krasnoselskij-like algorithm. The numerical experiments demonstrate that the new algorithms are essentially as effective but less expensive to implement. Although the new approach is demonstrated for Newton-like algorithms, it can be applied along the same lines to other single-step, multistep, or multipoint algorithms that use inverses of linear operators. © 2024 by the authors.
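    The paper's exact inverse-free scheme is not given in the abstract; the scalar sketch below only illustrates the underlying idea of trading an inverse for a finite sum of fixed quantities, approximating 1/f'(x_k) by a truncated Neumann series built from a fixed scalar c (all names and constants here are illustrative assumptions):

    ```python
    def approx_inverse(a, c, m):
        """Approximate 1/a by the truncated Neumann series
        (1/c) * sum_{k=0}^{m} (1 - a/c)^k, valid when |1 - a/c| < 1."""
        t = 1.0 - a / c
        s, p = 1.0, 1.0
        for _ in range(m):
            p *= t
            s += p
        return s / c

    def inverse_free_newton(f, fp, x0, c, m=8, tol=1e-12, itmax=100):
        """Newton-like iteration that never divides by f'(x):
        x_{k+1} = x_k - approx_inverse(f'(x_k), c, m) * f(x_k)."""
        x = x0
        for _ in range(itmax):
            step = approx_inverse(fp(x), c, m) * f(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    f = lambda x: x * x - 2.0
    fp = lambda x: 2.0 * x
    root = inverse_free_newton(f, fp, x0=1.5, c=3.0)  # c chosen near f'(x0)
    ```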
  • Item
    An algorithm with feasible inexact projections for solving constrained generalized equations
    (John Wiley and Sons Ltd, 2025) Regmi, S.; Argyros, I.K.; George, S.
    The goal of this article is to design a more flexible algorithm than those used previously for solving constrained generalized equations. It turns out that the new algorithm, even when specialized, provides a finer error analysis with several advantages: a larger radius of convergence, tighter upper error bounds on the distances, and more precise information on the isolation of the solution. Moreover, the same advantages hold even when the generalized equation reduces to a nonlinear equation. These advantages are obtained at the same computational cost, since the new parameters and majorant functions are special cases of those used in earlier studies. Applications complement the theoretical results. © 2024 John Wiley & Sons Ltd.
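    A hypothetical scalar sketch of the projection idea: a Newton step followed by a feasible projection onto a box constraint. Here the projection is exact clamping, the simplest special case; the paper's algorithm permits inexact but feasible projections and targets generalized equations, neither of which is reproduced from the abstract:

    ```python
    def project_box(x, lo, hi):
        """Exact projection onto the interval [lo, hi]; an inexact projection
        would return any feasible point sufficiently close to this one."""
        return min(max(x, lo), hi)

    def projected_newton(f, fp, x0, lo, hi, tol=1e-12, itmax=50):
        """Newton step followed by a feasible projection onto [lo, hi]."""
        x = project_box(x0, lo, hi)
        for _ in range(itmax):
            x_new = project_box(x - f(x) / fp(x), lo, hi)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    f = lambda x: x * x - 2.0        # root sqrt(2) lies inside the box
    fp = lambda x: 2.0 * x
    root = projected_newton(f, fp, x0=3.0, lo=0.5, hi=2.0)  # x0 is infeasible
    ```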