ANALYSIS AND OPTIMIZATION OF ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS
Abstract
Solving nonlinear equations is a fundamental problem across science and engineering. Many real-world problems – from weather forecasting to satellite orbit determination – reduce to finding roots of nonlinear equations. In most practical cases these equations cannot be solved analytically, so iterative numerical methods are employed to obtain approximate solutions. The motivation for this research is the widespread need for efficient and reliable nonlinear-equation solvers in diverse fields (physics, biology, finance, engineering, etc.). Effective root-finding algorithms enable the modeling and simulation of complex systems for which closed-form solutions are unavailable.
This study aims to analyze and optimize iterative methods for solving nonlinear equations. We focus on classical methods (such as Newton-Raphson, secant, and bisection) as well as modern improvements, examining their convergence, stability, and performance. Key objectives include: (1) reviewing existing iterative algorithms and their theoretical convergence properties, (2) developing and discussing strategies to accelerate or stabilize these methods, and (3) implementing the algorithms in Python to compare their performance on representative nonlinear problems. In particular, we ask: which iterative methods converge fastest for a given problem, and how can their efficiency or robustness be improved? We also explore how recent techniques (e.g., adaptive step-sizing and AI-based enhancements) can address the limitations of classical approaches. The research is organized into a theoretical foundation (Sections 1–4) followed by practical experimentation (Sections 5–8) (Ahmed & Khan, 2011).
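As a concrete illustration of the classical methods named above, the sketch below compares bisection, Newton-Raphson, and secant iterations on a standard test equation, f(x) = x³ − 2x − 5. This is an assumed example for exposition only: the specific function, bracketing interval, starting points, and tolerances are not taken from the paper, and the code is a minimal reference implementation rather than the authors' optimized version.

```python
def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Bracketing method: halves [a, b] each step (linear convergence)."""
    fa = f(a)
    m = (a + b) / 2.0
    for n in range(1, max_iter + 1):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0.0 or (b - a) / 2.0 < tol:
            return m, n
        if fa * fm < 0:          # root lies in the left half
            b = m
        else:                    # root lies in the right half
            a, fa = m, fm
    return m, max_iter

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: quadratic convergence near a simple root,
    but requires the derivative and a good starting point."""
    x = x0
    for n in range(1, max_iter + 1):
        dfx = df(x)
        if dfx == 0.0:
            raise ZeroDivisionError("zero derivative encountered")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: superlinear convergence (order ~1.618),
    approximates the derivative from two previous iterates."""
    f0, f1 = f(x0), f(x1)
    for n in range(1, max_iter + 1):
        if f1 == f0:
            raise ZeroDivisionError("flat secant line")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2, n
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1, max_iter

# Illustrative test problem (an assumption, not from the paper):
f = lambda x: x**3 - 2*x - 5
df = lambda x: 3*x**2 - 2

root_b, it_b = bisection(f, 2.0, 3.0)
root_n, it_n = newton(f, df, 2.0)
root_s, it_s = secant(f, 2.0, 3.0)
print(f"bisection: {root_b:.10f} in {it_b} iterations")
print(f"newton:    {root_n:.10f} in {it_n} iterations")
print(f"secant:    {root_s:.10f} in {it_s} iterations")
```

On this example all three methods converge to the same root (≈ 2.0945514815), and the iteration counts reflect the theoretical convergence orders: Newton-Raphson needs far fewer iterations than bisection, with the secant method in between.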