Ann. Acad. Rom. Sci.
Ser. Math. Appl.
ISSN 2066-6594
Vol. 18, No. 2/2026

EXTENDED FOUR PARAMETER CHEBYSHEV-HALLEY-TYPE METHODS OF ORDER SIX∗

Ioannis K. Argyros†, Stepan Shakhno‡, Halyna Yarmola§

Communicated by G. Moroşanu

DOI: 10.56082/annalsarscimath.2026.2.111
Abstract

A study of the local and the semilocal convergence is carried out for Chebyshev-Halley-type iterative methods under $\omega$-type conditions. The conditions are imposed only on the first-order derivatives. In both cases, the convergence region and the region of uniqueness of the solution are established. The new technique is a useful alternative to the expensive Taylor series expansions used to study the convergence of iterative methods, which require high-order derivatives that do not appear in the methods. The results of a numerical experiment are presented to check the convergence conditions.

Keywords: complete normed space, Chebyshev-Halley-type methods, local and semi-local convergence, order six.

MSC: 65J15, 65H10, 65G99, 47H30.
∗Accepted for publication on December 02, 2025
†iargyros@cameron.edu, Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
‡stepa.shakhno@lnu.edu.ua, Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Lviv, Ukraine
§halyna.yarmola@lnu.edu.ua, Department of Computational Mathematics, Ivan Franko National University of Lviv, Lviv, Ukraine
1 Introduction

Mathematical models of complex physical or technological processes are often described by nonlinear problems, in particular by systems of nonlinear algebraic or transcendental equations, nonlinear integral equations, nonlinear boundary value problems and others. It is extremely rare to find an exact solution to such problems. Therefore, the research and development of methods for numerically solving nonlinear problems is an important task. In general, these problems are written in the form of an operator equation [1,8,11,12,14]
\[
F(x) = 0, \tag{1}
\]
where the operator $F : \Omega \subset B_1 \to B_2$ is Fréchet-differentiable, $\Omega$ is an open and convex set, and $B_1$ and $B_2$ are Banach spaces.
There is a large number of iterative methods for the numerical solution of the equation (1), in particular Newton's method or methods with divided differences. One of the characteristics of an iterative method is the number of evaluations of $F$ and of its derivatives (or divided differences). In most cases, at each step of a single-step method, it is necessary to compute one value of $F$ and to find one inverse operator for $F'$ or for its approximation. To increase the convergence order, it is necessary to increase the number of evaluations of $F$ and $F'$, which leads to increasing computational costs and decreasing efficiency of the method. Such methods have been investigated in [2-7,9,10,13].
In this paper we consider the three-step method with four real parameters $b$, $c$, $p$ and $q$ ($b \ne 0$), defined for $x_0 \in \Omega$ and each $n = 0, 1, 2, \ldots$ by
\[
\begin{aligned}
y_n &= x_n - \frac{2}{3}F'(x_n)^{-1}F(x_n),\\
A_n &= I - F'(x_n)^{-1}F'(y_n), \qquad M_n = I - \frac{c}{b}A_n,\\
T_n &= I + a_1A_n + a_2A_n^2, \qquad a_1 = \frac{3b-2}{4b}, \qquad a_2 = \frac{9b^2 - 3b - 4c + 2}{8b^2},\\
z_n &= x_n - \Big[I + \frac{1}{2b}M_n^{-1}A_n\Big]T_nF'(x_n)^{-1}F(x_n),\\
x_{n+1} &= z_n - (pI + qA_n)F'(x_n)^{-1}F(z_n).
\end{aligned} \tag{2}
\]
The local convergence order six is shown in [9] for $B_1 = B_2 = I\!R^m$, $p = 1$, $q = -\frac{3}{2}$, using the Taylor series technique. There are constraints with this technique.
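To make the scheme concrete, the following is a minimal numerical sketch of method (2) for systems in $I\!R^m$, written in Python with NumPy. The closed-form coefficients `a1`, `a2` follow our reading of (2) and should be treated as assumptions, as should the helper names; this is an illustration, not the authors' implementation.

```python
import numpy as np

def chebyshev_halley_type(F, dF, x0, b, c, p, q, tol=1e-10, maxit=50):
    """One possible reading of method (2); a1, a2 as reconstructed above."""
    a1 = (3*b - 2) / (4*b)
    a2 = (9*b**2 - 3*b - 4*c + 2) / (8*b**2)
    x = np.asarray(x0, dtype=float)
    I = np.eye(len(x))
    for _ in range(maxit):
        J = dF(x)
        u = np.linalg.solve(J, F(x))           # F'(x_n)^{-1} F(x_n)
        y = x - (2.0/3.0) * u                  # first substep
        A = I - np.linalg.solve(J, dF(y))      # A_n
        M = I - (c/b) * A                      # M_n
        T = I + a1*A + a2*(A @ A)              # T_n
        z = x - (I + np.linalg.solve(M, A)/(2*b)) @ T @ u   # second substep
        x_new = z - (p*I + q*A) @ np.linalg.solve(J, F(z))  # third substep
        if np.linalg.norm(x_new - x, np.inf) <= tol:
            return x_new
        x = x_new
    return x
```

For instance, applied to a small exponential system of the kind considered in Example 2 below, the iteration started near the zero solution converges in a few steps.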
   
Motivation for writing this paper.

(E1) The local analysis in [9] is restricted to $I\!R^m$, and the assumption is made that $F^{(5)}$ (the fifth derivative) exists and is bounded on $\Omega$. But this high-order derivative does not appear in the method (2). Hence, the results in [9] are applicable to equation (1) only provided that the derivatives up to order five exist, which may not be true. As an example, let $m = 1$ and say $\Omega = (-1.3, 1.3)$. Define the function $F : \Omega \to I\!R$ by
\[
F(t) = \begin{cases} d_1t^5\log t^2 + d_2t^6 + d_3t^7, & t \ne 0,\\ 0, & t = 0, \end{cases}
\]
where $d_1 \ne 0$ and $d_2 + d_3 = 0$. It follows by this definition that $t^* = 1 \in \Omega$ solves the equation $F(t) = 0$. But the function $F^{(5)}$ is unbounded at $t = 0 \in \Omega$. Thus, the results in [9] cannot assure the convergence of the method (2) to $t^*$. But the method (2) converges to $t^* = 1$ if, say, $x_0 = 0.95$, $b = c = p = 1$ and $q = -\frac{3}{2}$. This observation indicates that if an alternative to the Taylor series technique is used, the sufficient convergence conditions in [9] may be weakened.

(E2) No radius of convergence is given in [9]. So, it is not known from which set the initial points $x_0$ should be chosen.

(E3) There is no previous knowledge of an integer $N$ such that $\|x^* - x_n\| < \varepsilon$ for all $n \ge N$ and some $\varepsilon > 0$.

(E4) There are no results on the isolation of the solution.

(E5) The more difficult and important semi-local analysis of method (2) has not previously been studied.
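The counterexample in (E1) can be checked directly. A minimal sketch in Python, assuming the sample choice $d_1 = 1$, $d_2 = 1$, $d_3 = -1$ (so that $d_1 \ne 0$ and $d_2 + d_3 = 0$); the closed form of $F^{(5)}$ below was computed by hand for this particular choice:

```python
import math

# Sample coefficients: d1 = 1, d2 = 1, d3 = -1 (d1 != 0, d2 + d3 = 0).
def F(t):
    return t**5 * math.log(t**2) + t**6 - t**7 if t != 0 else 0.0

# Hand-computed for this choice (t > 0):
#   d^5/dt^5 [t^5 log t^2] = 240 log t + 548,
#   d^5/dt^5 [t^6] = 720 t,   d^5/dt^5 [t^7] = 2520 t^2.
def F5(t):
    return 240 * math.log(abs(t)) + 548 + 720 * t - 2520 * t**2

print(F(1.0))    # 0.0 -- t* = 1 solves F(t) = 0
print(F5(1e-6))  # large in magnitude: F^(5) is unbounded near t = 0
```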
The technique of this paper positively addresses all issues (E1)-(E5) and serves as an alternative to Taylor series for studying the convergence of method (2) as well as of other similar methods [4-7,10,13]. In particular, we achieve:

(E1)′ The local analysis of method (2) uses only the operators appearing in it, i.e. $F$ and $F'$.

(E2)′ A computable radius of convergence is provided. So, the initial points are selected from a certain ball about $x^*$ or $x_0$.

(E3)′ The number of iterations to be carried out, i.e. $N$, is known a priori.

(E4)′ Domains are determined which contain only one solution.

(E5)′ Majorizing scalar sequences [1] are employed to determine the convergence of $\{x_n\}$ generated by the method (2). The analysis is presented in the more general setting of a Banach space. Moreover, the derivative $F'$ is controlled by generalized continuity conditions, which also sharpen the error distances $\|x^* - x_n\|$ [13].

The new local convergence analysis uses only the operators appearing in the method, i.e. $F$ and $F'$. Moreover, the semi-local analysis, not previously studied, is presented using majorizing sequences. Both analyses are provided in the more general setting of a Banach space and depend on generalized continuity conditions controlling the derivative $F'$. The same technique can be used to extend the applicability of other methods along the same lines [4-7,10,13].
2 Local convergence

First we introduce the notation and conditions used in the study of the local convergence of the method (2).

Denote by $U(x, r)$ and $U[x, r]$ the open and closed balls, respectively, with center at the point $x$ and of radius $r > 0$. Set $S = [0, \infty)$.

Suppose:

(LC1) There exists a function $\omega_0 : S \to S$, continuous and strictly increasing on $S$, such that the equation $\omega_0(t) - 1 = 0$ has at least one positive root. We denote by $\varrho_0$ the smallest such root and set $S_0 = [0, \varrho_0)$.

(LC2) There exists a function $\omega : S_0 \to S$, continuous and strictly increasing on $S_0$, such that for the function $g_1 : S_0 \to S$ given by
\[
g_1(t) = \frac{\int_0^1 \omega((1-\theta)t)\,d\theta + \frac{1}{3}\big(1 + \int_0^1 \omega_0(\theta t)\,d\theta\big)}{1 - \omega_0(t)}, \tag{3}
\]
the equation $g_1(t) - 1 = 0$ has at least one root in the interval $(0, \varrho_0)$. We denote by $r_1$ the smallest such root.

(LC3) The equation $\mu(t) - 1 = 0$, with $\mu$ as defined below, has at least one root in the interval $(0, \varrho_0)$. We denote by $\varrho$ the smallest such root. Set $S_1 = [0, \varrho)$.

Define the function $g_2 : S_1 \to S$ by
\[
g_2(t) = \frac{\int_0^1 \omega((1-\theta)t)\,d\theta}{1-\omega_0(t)}
+ \frac{\lambda(t)\big(1 + \int_0^1 \omega_0(\theta t)\,d\theta\big)}{1-\omega_0(t)}
+ \frac{(1+\lambda(t))\,\bar\omega(t)}{2|b|(1-\omega_0(t))^2(1-\mu(t))}\Big(1 + \int_0^1 \omega_0(\theta t)\,d\theta\Big),
\]
where
\[
\bar\omega(t) = \omega((1+g_1(t))t) \quad\text{or}\quad \bar\omega(t) = \omega_0(t) + \omega_0(g_1(t)t),
\]
\[
\mu(t) = \Big|\frac{c}{b}\Big|\,\frac{\bar\omega(t)}{1-\omega_0(t)}, \qquad
\lambda(t) = |a_1|\frac{\bar\omega(t)}{1-\omega_0(t)} + |a_2|\Big(\frac{\bar\omega(t)}{1-\omega_0(t)}\Big)^2.
\]
The equation $g_2(t) - 1 = 0$ has at least one root in the interval $(0, r_1)$. We denote by $r_2$ the smallest such root.

Define the function $g_3 : S_1 \to S$ by
\[
g_3(t) = \bigg[\frac{\tilde\omega(t)\big(1+\int_0^1 \omega_0(\theta g_2(t)t)\,d\theta\big)}{(1-\omega_0(t))(1-\omega_0(g_2(t)t))}
+ \frac{\int_0^1 \omega((1-\theta)g_2(t)t)\,d\theta}{1-\omega_0(g_2(t)t)}
+ \Big(|p-1| + \frac{|q|\,\bar\omega(t)}{1-\omega_0(t)}\Big)\frac{1+\int_0^1 \omega_0(\theta t)\,d\theta}{1-\omega_0(t)}\bigg]\,g_2(t),
\]
where
\[
\tilde\omega(t) = \omega((1+g_2(t))t) \quad\text{or}\quad \tilde\omega(t) = \omega_0(t) + \omega_0(g_2(t)t).
\]
The equation $g_3(t) - 1 = 0$ has at least one root in the interval $(0, r_2)$. We denote by $r_3$ the smallest such root.

Define the parameter
\[
r^* = \min\{r_j\}, \qquad j = 1, 2, 3. \tag{4}
\]
This parameter is shown in Theorem 1 to be a radius of convergence for the method (2). Set $S_2 = [0, r^*)$. It follows by these definitions that for each $t \in S_2$
\[
0 \le \omega_0(t) < 1 \quad\text{and}\quad 0 \le \omega_0(g_2(t)t) < 1 \tag{5}
\]
and
\[
0 \le g_i(t) < 1, \quad i = 1, 2, 3. \tag{6}
\]

(LC4) There exist a solution $x^* \in \Omega$ of the equation $F(x) = 0$ and an operator $L \in \mathcal{L}(B_1, B_2)$ such that $L^{-1} \in \mathcal{L}(B_2, B_1)$ and
\[
\|L^{-1}(F'(x) - L)\| \le \omega_0(\|x - x^*\|) \quad \text{for all } x \in \Omega.
\]
Set $\Omega_0 = \Omega \cap U(x^*, \varrho_0)$.

(LC5) $\|L^{-1}(F'(x) - F'(y))\| \le \omega(\|x - y\|)$ for all $x, y \in \Omega_0$.

(LC6) $U(x^*, r^*) \subset \Omega$.
Theorem 1. Suppose that the conditions (LC1)-(LC6) hold and $x_0 \in U(x^*, r^*)$. Then, the sequence $\{x_n\}$ generated by method (2) is well defined in $U(x^*, r^*)$ for each $n = 0, 1, \ldots$ and converges to the solution $x^*$ of the equation (1). Moreover, the following error estimates hold for each $n = 0, 1, \ldots$:
\[
\|y_n - x^*\| \le g_1(r^*)\|x_n - x^*\| \le \|x_n - x^*\| < r^*, \tag{7}
\]
\[
\|z_n - x^*\| \le g_2(r^*)\|x_n - x^*\| \le \|x_n - x^*\| \tag{8}
\]
and
\[
\|x_{n+1} - x^*\| \le g_3(r^*)\|x_n - x^*\| \le \|x_n - x^*\|. \tag{9}
\]

Proof. The proof is carried out by mathematical induction. Using conditions (LC4) and (5), we obtain in turn that
\[
\|L^{-1}(F'(x_n) - L)\| \le \omega_0(\|x_n - x^*\|) \le \omega_0(r^*) < 1. \tag{10}
\]
The Banach lemma on invertible linear operators [1,11] and (10) imply that $F'(x_n)^{-1} \in \mathcal{L}(B_2, B_1)$ and
\[
\|F'(x_n)^{-1}L\| \le \frac{1}{1 - \omega_0(\|x_n - x^*\|)}. \tag{11}
\]
Then, by the first substep of (2), we can write in turn
\[
\begin{aligned}
y_n - x^* &= x_n - x^* - F'(x_n)^{-1}F(x_n) + \frac{1}{3}F'(x_n)^{-1}F(x_n)\\
&= F'(x_n)^{-1}\Big[F'(x_n) - \int_0^1 F'(x^* + \theta(x_n - x^*))\,d\theta\Big](x_n - x^*)\\
&\quad + \frac{1}{3}F'(x_n)^{-1}\int_0^1 \big[F'(x^* + \theta(x_n - x^*)) - F'(x^*) + F'(x^*)\big]\,d\theta\,(x_n - x^*).
\end{aligned}
\]
Taking into account conditions (LC4), (LC5), (6), (11) and the last equality, we have
\[
\|y_n - x^*\| \le \frac{\int_0^1 \omega((1-\theta)\|x_n - x^*\|)\,d\theta + \frac{1}{3}\big(1 + \int_0^1 \omega_0(\theta\|x_n - x^*\|)\,d\theta\big)}{1 - \omega_0(\|x_n - x^*\|)}\,\|x_n - x^*\|
\le g_1(\|x_n - x^*\|)\|x_n - x^*\| \le \|x_n - x^*\| < r^*.
\]
Using conditions (LC4), (LC5) and (11), we have
\[
\begin{aligned}
\|A_n\| &= \|I - F'(x_n)^{-1}F'(y_n)\| = \|F'(x_n)^{-1}(F'(x_n) - F'(x^*) + F'(x^*) - F'(y_n))\|\\
&\le \|F'(x_n)^{-1}L\|\big(\|L^{-1}(F'(x_n) - F'(x^*))\| + \|L^{-1}(F'(x^*) - F'(y_n))\|\big)\\
&\le \frac{\omega_0(\|x_n - x^*\|) + \omega_0(\|y_n - x^*\|)}{1 - \omega_0(\|x_n - x^*\|)}
\le \frac{\omega_0(\|x_n - x^*\|) + \omega_0(g_1(\|x_n - x^*\|)\|x_n - x^*\|)}{1 - \omega_0(\|x_n - x^*\|)}
\end{aligned}
\]
or
\[
\begin{aligned}
\|A_n\| &= \|I - F'(x_n)^{-1}F'(y_n)\| = \|F'(x_n)^{-1}(F'(x_n) - F'(y_n))\|
\le \|F'(x_n)^{-1}L\|\,\|L^{-1}(F'(x_n) - F'(y_n))\|\\
&\le \frac{\omega(\|x_n - y_n\|)}{1 - \omega_0(\|x_n - x^*\|)}
\le \frac{\omega((1 + g_1(\|x_n - x^*\|))\|x_n - x^*\|)}{1 - \omega_0(\|x_n - x^*\|)},
\end{aligned}
\]
so that in either case
\[
\|A_n\| \le \frac{\bar\omega(\|x_n - x^*\|)}{1 - \omega_0(\|x_n - x^*\|)}. \tag{12}
\]
In view of the equality
\[
I - T_n = -a_1A_n - a_2A_n^2,
\]
we obtain, using the estimates (10) and (12),
\[
\|I - T_n\| \le |a_1|\frac{\bar\omega(\|x_n - x^*\|)}{1 - \omega_0(\|x_n - x^*\|)} + |a_2|\Big(\frac{\bar\omega(\|x_n - x^*\|)}{1 - \omega_0(\|x_n - x^*\|)}\Big)^2 = \lambda(\|x_n - x^*\|) = \lambda_n, \tag{13}
\]
\[
\|T_n\| \le 1 + \lambda_n, \tag{14}
\]
\[
\Big\|\frac{c}{b}A_n\Big\| \le \Big|\frac{c}{b}\Big|\,\frac{\bar\omega(\|x_n - x^*\|)}{1 - \omega_0(\|x_n - x^*\|)} = \mu(\|x_n - x^*\|) = \mu_n < 1
\]
and
\[
\|M_n^{-1}\| = \Big\|\Big(I - \frac{c}{b}A_n\Big)^{-1}\Big\| \le \frac{1}{1 - \mu_n}. \tag{15}
\]
Then, by the second substep of (2), we can write in turn
\[
\begin{aligned}
z_n - x^* &= x_n - x^* - F'(x_n)^{-1}F(x_n) + (I - T_n)F'(x_n)^{-1}F(x_n) - \frac{1}{2b}M_n^{-1}A_nT_nF'(x_n)^{-1}F(x_n)\\
&= F'(x_n)^{-1}\Big[F'(x_n) - \int_0^1 F'(x^* + \theta(x_n - x^*))\,d\theta\Big](x_n - x^*)\\
&\quad + (I - T_n)F'(x_n)^{-1}\int_0^1\big[F'(x^* + \theta(x_n - x^*)) - F'(x^*) + F'(x^*)\big]\,d\theta\,(x_n - x^*)\\
&\quad - \frac{1}{2b}M_n^{-1}A_nT_nF'(x_n)^{-1}\int_0^1\big[F'(x^* + \theta(x_n - x^*)) - F'(x^*) + F'(x^*)\big]\,d\theta\,(x_n - x^*).
\end{aligned}
\]
By using (LC4), (LC5) and the estimates (6), (11), (12)-(15), we get
\[
\begin{aligned}
\|z_n - x^*\| &\le \bigg[\frac{\int_0^1 \omega((1-\theta)\|x_n - x^*\|)\,d\theta}{1 - \omega_0(\|x_n - x^*\|)}
+ \frac{\lambda_n\big(1 + \int_0^1 \omega_0(\theta\|x_n - x^*\|)\,d\theta\big)}{1 - \omega_0(\|x_n - x^*\|)}\\
&\quad + \frac{(1 + \lambda_n)\,\bar\omega(\|x_n - x^*\|)}{2|b|(1 - \omega_0(\|x_n - x^*\|))^2(1 - \mu_n)}\Big(1 + \int_0^1 \omega_0(\theta\|x_n - x^*\|)\,d\theta\Big)\bigg]\|x_n - x^*\|\\
&= g_2(\|x_n - x^*\|)\|x_n - x^*\| \le \|x_n - x^*\|.
\end{aligned}
\]
Then, by the third substep of (2), we can write in turn
\[
\begin{aligned}
x_{n+1} - x^* &= z_n - x^* - F'(z_n)^{-1}F(z_n) + \big(F'(z_n)^{-1} - F'(x_n)^{-1}\big)F(z_n)\\
&\quad - (p-1)F'(x_n)^{-1}F(z_n) - qA_nF'(x_n)^{-1}F(z_n)\\
&= F'(z_n)^{-1}\Big[F'(z_n) - \int_0^1 F'(x^* + \theta(z_n - x^*))\,d\theta\Big](z_n - x^*)\\
&\quad + \big(F'(z_n)^{-1} - F'(x_n)^{-1}\big)\int_0^1\big[F'(x^* + \theta(z_n - x^*)) - F'(x^*) + F'(x^*)\big]\,d\theta\,(z_n - x^*)\\
&\quad - \big((p-1)I + qA_n\big)F'(x_n)^{-1}\int_0^1\big[F'(x^* + \theta(z_n - x^*)) - F'(x^*) + F'(x^*)\big]\,d\theta\,(z_n - x^*).
\end{aligned}
\]
Using condition (LC4), (5) and (8), we obtain in turn that
\[
\|L^{-1}(F'(z_n) - L)\| \le \omega_0(\|z_n - x^*\|) \le \omega_0(g_2(r^*)r^*) < 1. \tag{16}
\]
The Banach lemma on invertible linear operators [1,11] and (16) imply that $F'(z_n)^{-1} \in \mathcal{L}(B_2, B_1)$ and
\[
\|F'(z_n)^{-1}L\| \le \frac{1}{1 - \omega_0(\|z_n - x^*\|)}. \tag{17}
\]
By the equality
\[
F'(z_n)^{-1} - F'(x_n)^{-1} = F'(x_n)^{-1}\big(F'(x_n) - F'(z_n)\big)F'(z_n)^{-1},
\]
and taking into account the conditions (LC4), (LC5) and the estimates (6), (8), (11), (12) and (17), we have
\[
\begin{aligned}
\|x_{n+1} - x^*\| &\le \bigg[\frac{\tilde\omega(\|x_n - x^*\|)\big(1 + \int_0^1 \omega_0(\theta\|z_n - x^*\|)\,d\theta\big)}{(1 - \omega_0(\|x_n - x^*\|))(1 - \omega_0(\|z_n - x^*\|))}
+ \frac{\int_0^1 \omega((1-\theta)\|z_n - x^*\|)\,d\theta}{1 - \omega_0(\|z_n - x^*\|)}\\
&\quad + \Big(|p-1| + \frac{|q|\,\bar\omega(\|x_n - x^*\|)}{1 - \omega_0(\|x_n - x^*\|)}\Big)\frac{1 + \int_0^1 \omega_0(\theta\|z_n - x^*\|)\,d\theta}{1 - \omega_0(\|x_n - x^*\|)}\bigg]\|z_n - x^*\|\\
&\le g_3(\|x_n - x^*\|)\|x_n - x^*\| \le \|x_n - x^*\|.
\end{aligned}
\]
   
Moreover, by (9) there exists $\alpha \in [0, 1)$ such that
\[
\|x_{n+1} - x^*\| \le \alpha\|x_n - x^*\| \le \alpha^{n+1}\|x_0 - x^*\| < r^*. \tag{18}
\]
Therefore, it follows from (18) that the iterate $x_{n+1} \in U(x^*, r^*)$ and $\lim_{n \to \infty} x_n = x^*$. $\square$

Next, we present a result on the uniqueness of the solution of the equation (1).

Proposition 1. Suppose:

(a) The condition (LC4) holds in the ball $U(x^*, \varrho_1)$ for some $\varrho_1 > 0$.

(b) There exists $\varrho_2 \ge \varrho_1$ such that
\[
\int_0^1 \omega_0(\theta\varrho_2)\,d\theta < 1. \tag{19}
\]
Set $\Omega_1 = \Omega \cap U[x^*, \varrho_2]$. Then, the equation $F(x) = 0$ has a unique solution $x^*$ in the region $\Omega_1$.

Proof. Suppose that there exists $u \in \Omega_1$ with $u \ne x^*$ and $F(u) = 0$. Let $T = \int_0^1 F'(x^* + \theta(u - x^*))\,d\theta$. It follows by (a) and (b) that
\[
\|L^{-1}(T - L)\| \le \int_0^1 \omega_0(\|x^* + \theta(u - x^*) - x^*\|)\,d\theta
\le \int_0^1 \omega_0(\theta\|u - x^*\|)\,d\theta
\le \int_0^1 \omega_0(\theta\varrho_2)\,d\theta < 1.
\]
Hence, the operator $T$ is invertible. Then, by the identity
\[
u - x^* = T^{-1}(F(u) - F(x^*)) = T^{-1}(0 - 0) = 0,
\]
we conclude that $u = x^*$. $\square$
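To illustrate (19): with the calibration $\omega_0(t) = 7.5t$ that appears in Example 1 below (used here purely as a sample choice), the uniqueness condition becomes explicit:

```latex
\int_0^1 \omega_0(\theta\varrho_2)\,d\theta
  = \int_0^1 7.5\,\theta\varrho_2\,d\theta
  = 3.75\,\varrho_2 < 1
  \iff \varrho_2 < \tfrac{4}{15},
```

so any $\varrho_2 \in [\varrho_1, 4/15)$ gives a region $\Omega_1 = \Omega \cap U[x^*, \varrho_2]$ in which $x^*$ is the only solution.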
3 Semi-local convergence

Majorizing sequences are used to provide the semi-local analysis. Suppose:

(SLC1) There exists a function $v_0 : S \to S$, continuous and nondecreasing, such that the equation $v_0(t) - 1 = 0$ has a smallest positive solution $\varrho_3$. Set $S_3 = [0, \varrho_3)$.

(SLC2) There exists a function $v : S_3 \to S$, continuous and nondecreasing. The majorizing sequence $\{\alpha_n\}$ is defined for $\alpha_0 = 0$, $\beta_0 \ge \frac{2}{3}\|F'(x_0)^{-1}F(x_0)\|$ and $n = 0, 1, 2, \ldots$ by
\[
v_n = v(\beta_n - \alpha_n) \quad\text{or}\quad v_n = v_0(\alpha_n) + v_0(\beta_n),
\]
\[
\lambda_n = |a_1|\frac{v_n}{1 - v_0(\alpha_n)} + |a_2|\Big(\frac{v_n}{1 - v_0(\alpha_n)}\Big)^2, \qquad
\mu_n = \Big|\frac{c}{b}\Big|\,\frac{v_n}{1 - v_0(\alpha_n)},
\]
\[
\gamma_n = \beta_n + \frac{3}{2}\bigg[\frac{1}{3} + \lambda_n + \frac{(1 + \lambda_n)v_n}{2|b|(1 - v_0(\alpha_n))(1 - \mu_n)}\bigg](\beta_n - \alpha_n),
\]
\[
\xi_n = \int_0^1 v((1-\theta)(\gamma_n - \alpha_n))\,d\theta\,(\gamma_n - \alpha_n) + (1 + v_0(\alpha_n))(\gamma_n - \beta_n) + \frac{1}{2}(1 + v_0(\alpha_n))(\beta_n - \alpha_n),
\]
\[
\alpha_{n+1} = \gamma_n + \Big(|p| + \frac{|q|\,v_n}{1 - v_0(\alpha_n)}\Big)\frac{\xi_n}{1 - v_0(\alpha_n)}, \tag{20}
\]
\[
\delta_{n+1} = \int_0^1 v((1-\theta)(\alpha_{n+1} - \alpha_n))\,d\theta\,(\alpha_{n+1} - \alpha_n) + (1 + v_0(\alpha_n))(\alpha_{n+1} - \beta_n) + \frac{1}{2}(1 + v_0(\alpha_n))(\beta_n - \alpha_n)
\]
and
\[
\beta_{n+1} = \alpha_{n+1} + \frac{2}{3}\,\frac{\delta_{n+1}}{1 - v_0(\alpha_{n+1})}.
\]
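The recurrence (20) is easy to tabulate. Below is a Python sketch under stated assumptions: $v_0$, $v$, $\beta_0$, the parameters and the values $|a_1|$, $|a_2|$ are all user-supplied; the choice $v_n = v(\beta_n - \alpha_n)$ is used; and the loop stops as soon as one of the requirements in (21) fails. Whether (21) actually holds must be verified for the data at hand.

```python
import numpy as np

def majorizing_sequence(v0, v, beta0, b, c, p, q, abs_a1, abs_a2, nsteps=20):
    """Tabulate (alpha_n, beta_n, gamma_n) following (20); stop if (21) fails."""
    nodes = np.linspace(0.0, 1.0, 201)
    def integ(f):  # trapezoidal rule for the integrals over theta in [0, 1]
        y = f(nodes)
        return float(((y[:-1] + y[1:]) / 2 * np.diff(nodes)).sum())
    alpha, beta, out = 0.0, beta0, []
    for _ in range(nsteps):
        d = 1.0 - v0(alpha)
        vn = v(beta - alpha)                  # first admissible choice of v_n
        mu = abs(c / b) * vn / d if d > 0 else np.inf
        if d <= 0 or mu >= 1:                 # a requirement of (21) is violated
            break
        lam = abs_a1 * vn / d + abs_a2 * (vn / d) ** 2
        gamma = beta + 1.5 * (1.0/3.0 + lam
                + (1 + lam) * vn / (2 * abs(b) * d * (1 - mu))) * (beta - alpha)
        xi = integ(lambda t: v((1 - t) * (gamma - alpha))) * (gamma - alpha) \
            + (1 + v0(alpha)) * (gamma - beta) + 0.5 * (1 + v0(alpha)) * (beta - alpha)
        alpha1 = gamma + (abs(p) + abs(q) * vn / d) * xi / d
        if 1.0 - v0(alpha1) <= 0:
            break
        delta = integ(lambda t: v((1 - t) * (alpha1 - alpha))) * (alpha1 - alpha) \
            + (1 + v0(alpha)) * (alpha1 - beta) + 0.5 * (1 + v0(alpha)) * (beta - alpha)
        beta1 = alpha1 + (2.0/3.0) * delta / (1.0 - v0(alpha1))
        out.append((alpha, beta, gamma))
        alpha, beta = alpha1, beta1
    return out
```

For instance, with $v_0(t) = v(t) = t$ and a small $\beta_0$, the tabulated values are nondecreasing and interlaced as $\alpha_n \le \beta_n \le \gamma_n \le \alpha_{n+1}$.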
(SLC3) For each $n = 0, 1, 2, \ldots$
\[
0 \le v_0(\alpha_n) < 1, \qquad \mu_n < 1 \qquad\text{and}\qquad 0 \le \alpha_n < \alpha \ \text{for some } \alpha > 0. \tag{21}
\]

Lemma 1. Suppose that the condition (SLC3) holds. Then, the sequence $\{\alpha_n\}$ given by the formula (20) is nondecreasing and convergent to its unique least upper bound $\alpha^* \in [0, \alpha]$.

Proof. The sequence $\{\alpha_n\}$ is nondecreasing and bounded from above by $\alpha$, and as such it is convergent to $\alpha^*$. $\square$
The following additional conditions shall be used in the semi-local convergence analysis of method (2).

(SLC4) There exist a point $x_0 \in \Omega$ and a parameter $\beta_0$ such that $F'(x_0)^{-1} = L^{-1} \in \mathcal{L}(B_2, B_1)$ and
\[
\frac{2}{3}\|F'(x_0)^{-1}F(x_0)\| \le \beta_0.
\]

(SLC5) $\|L^{-1}(F'(x) - F'(x_0))\| \le v_0(\|x - x_0\|)$ for all $x \in \Omega$. Set $\Omega_2 = \Omega \cap U(x_0, \varrho_3)$.

(SLC6) $\|L^{-1}(F'(x) - F'(y))\| \le v(\|x - y\|)$ for all $x, y \in \Omega_2$.

(SLC7) $U[x_0, \alpha^*] \subset \Omega$.

Next, the semi-local convergence of the method (2) is presented based on the conditions (SLC1)-(SLC7) and the preceding terminology.

Theorem 2. Suppose that the conditions (SLC1)-(SLC7) hold. Then, the sequence $\{x_n\}$ generated by the method (2) is well defined in $U(x_0, \alpha^*)$, remains in $U(x_0, \alpha^*)$ for all $n = 0, 1, 2, \ldots$ and converges to a solution $x^* \in U[x_0, \alpha^*]$ of the equation $F(x) = 0$. Moreover, the following error estimates hold:
\[
\|x^* - x_n\| \le \alpha^* - \alpha_n. \tag{22}
\]
Proof. By the condition (SLC4) the iterate $y_0$ is well defined, since from the first substep of the method for $n = 0$ we have
\[
\|y_0 - x_0\| \le \frac{2}{3}\|F'(x_0)^{-1}F(x_0)\| \le \beta_0 - \alpha_0 = \beta_0 < \alpha^*,
\]
and the iterate $y_0 \in U(x_0, \alpha^*)$. Suppose that $y_i, z_i, x_{i+1} \in U(x_0, \alpha^*)$ for $i = 0, \ldots, n-1$ and
\[
\|y_i - x_i\| \le \beta_i - \alpha_i, \quad \|z_i - y_i\| \le \gamma_i - \beta_i, \quad \|x_{i+1} - y_i\| \le \alpha_{i+1} - \beta_i, \quad \|x_{i+1} - z_i\| \le \alpha_{i+1} - \gamma_i.
\]
Then, for $i = n$, we obtain the following estimates. Using conditions (SLC3) and (SLC5), we get in turn
\[
\|L^{-1}(F'(x_n) - L)\| \le v_0(\|x_n - x_0\|) \le v_0(\alpha_n) < 1. \tag{23}
\]
By the Banach lemma [1] and (23), the operator $F'(x_n)^{-1} \in \mathcal{L}(B_2, B_1)$ and
\[
\|F'(x_n)^{-1}L\| \le \frac{1}{1 - v_0(\|x_n - x_0\|)}. \tag{24}
\]
By the identity
\[
\begin{aligned}
F(x_n) &= F(x_n) - F(x_{n-1}) - \frac{3}{2}F'(x_{n-1})(y_{n-1} - x_{n-1})\\
&= F(x_n) - F(x_{n-1}) - F'(x_{n-1})(x_n - x_{n-1}) + F'(x_{n-1})(x_n - y_{n-1}) - \frac{1}{2}F'(x_{n-1})(y_{n-1} - x_{n-1})\\
&= \int_0^1\big[F'(x_{n-1} + \theta(x_n - x_{n-1})) - F'(x_{n-1})\big]\,d\theta\,(x_n - x_{n-1})\\
&\quad + F'(x_{n-1})(x_n - y_{n-1}) - \frac{1}{2}F'(x_{n-1})(y_{n-1} - x_{n-1}),
\end{aligned}
\]
we get
\[
\|L^{-1}F(x_n)\| \le \int_0^1 v((1-\theta)(\alpha_n - \alpha_{n-1}))\,d\theta\,(\alpha_n - \alpha_{n-1}) + (1 + v_0(\alpha_{n-1}))(\alpha_n - \beta_{n-1}) + \frac{1}{2}(1 + v_0(\alpha_{n-1}))(\beta_{n-1} - \alpha_{n-1}) = \delta_n. \tag{25}
\]
We have by the first substep of the method
\[
\|y_n - x_n\| \le \frac{2}{3}\|F'(x_n)^{-1}L\|\,\|L^{-1}F(x_n)\| \le \frac{2}{3}\,\frac{\delta_n}{1 - v_0(\alpha_n)} = \beta_n - \alpha_n
\]
and
\[
\|y_n - x_0\| \le \|y_n - x_n\| + \|x_n - x_0\| \le \beta_n - \alpha_n + \alpha_n = \beta_n < \alpha^*.
\]
Subtracting the first substep of the method from the second one gives
\[
z_n - y_n = -\frac{1}{3}F'(x_n)^{-1}F(x_n) + (I - T_n)F'(x_n)^{-1}F(x_n) - \frac{1}{2b}M_n^{-1}A_nT_nF'(x_n)^{-1}F(x_n). \tag{26}
\]
     
Using (SLC5), (SLC6) and (23), we have
\[
\begin{aligned}
\|A_n\| &= \|I - F'(x_n)^{-1}F'(y_n)\| = \|F'(x_n)^{-1}(F'(x_n) - F'(x_0) + F'(x_0) - F'(y_n))\|\\
&\le \|F'(x_n)^{-1}L\|\big(\|L^{-1}(F'(x_n) - F'(x_0))\| + \|L^{-1}(F'(x_0) - F'(y_n))\|\big)\\
&\le \frac{v_0(\|x_n - x_0\|) + v_0(\|y_n - x_0\|)}{1 - v_0(\|x_n - x_0\|)}
\le \frac{v_0(\alpha_n) + v_0(\beta_n)}{1 - v_0(\alpha_n)}
\end{aligned}
\]
or
\[
\|A_n\| = \|I - F'(x_n)^{-1}F'(y_n)\| = \|F'(x_n)^{-1}(F'(x_n) - F'(y_n))\|
\le \|F'(x_n)^{-1}L\|\,\|L^{-1}(F'(x_n) - F'(y_n))\|
\le \frac{v(\|x_n - y_n\|)}{1 - v_0(\|x_n - x_0\|)}
\le \frac{v(\beta_n - \alpha_n)}{1 - v_0(\alpha_n)},
\]
so that in either case
\[
\|A_n\| \le \frac{v_n}{1 - v_0(\alpha_n)}. \tag{27}
\]
Here $v_n = v_0(\alpha_n) + v_0(\beta_n)$ or $v_n = v(\beta_n - \alpha_n)$, as in (20).

From the equality $I - T_n = -a_1A_n - a_2A_n^2$ and the estimate (27), we obtain
\[
\|I - T_n\| \le |a_1|\frac{v_n}{1 - v_0(\alpha_n)} + |a_2|\Big(\frac{v_n}{1 - v_0(\alpha_n)}\Big)^2 \tag{28}
\]
\[
= \lambda_n \tag{29}
\]
and
\[
\|T_n\| \le 1 + \lambda_n. \tag{30}
\]
Moreover,
\[
\Big\|\frac{c}{b}A_n\Big\| \le \Big|\frac{c}{b}\Big|\,\frac{v_n}{1 - v_0(\alpha_n)} = \mu_n < 1, \qquad
\|M_n^{-1}\| = \Big\|\Big(I - \frac{c}{b}A_n\Big)^{-1}\Big\| \le \frac{1}{1 - \mu_n}. \tag{31}
\]
Taking into account the equality (26) and the estimates (23), (27), (28), (30), (31), we get in turn
\[
\|z_n - y_n\| \le \frac{3}{2}\bigg[\frac{1}{3} + \lambda_n + \frac{(1 + \lambda_n)v_n}{2|b|(1 - v_0(\alpha_n))(1 - \mu_n)}\bigg](\beta_n - \alpha_n) = \gamma_n - \beta_n
\]
and
\[
\|z_n - x_0\| \le \|z_n - y_n\| + \|y_n - x_0\| \le \gamma_n - \beta_n + \beta_n = \gamma_n < \alpha^*.
\]
Since
\[
\begin{aligned}
F(z_n) &= F(z_n) - F(x_n) - \frac{3}{2}F'(x_n)(y_n - x_n)\\
&= F(z_n) - F(x_n) - F'(x_n)(z_n - x_n) + F'(x_n)(z_n - y_n) - \frac{1}{2}F'(x_n)(y_n - x_n)\\
&= \int_0^1\big(F'(x_n + \theta(z_n - x_n)) - F'(x_n)\big)\,d\theta\,(z_n - x_n) + F'(x_n)(z_n - y_n) - \frac{1}{2}F'(x_n)(y_n - x_n),
\end{aligned}
\]
then, using (SLC5) and (SLC6), we have
\[
\|L^{-1}F(z_n)\| \le \int_0^1 v((1-\theta)(\gamma_n - \alpha_n))\,d\theta\,(\gamma_n - \alpha_n) + (1 + v_0(\alpha_n))(\gamma_n - \beta_n) + \frac{1}{2}(1 + v_0(\alpha_n))(\beta_n - \alpha_n) = \xi_n. \tag{32}
\]
We get by the last substep of the method, (23), (27) and (32),
\[
\|x_{n+1} - z_n\| \le \Big(|p| + \frac{|q|\,v_n}{1 - v_0(\alpha_n)}\Big)\frac{\xi_n}{1 - v_0(\alpha_n)} = \alpha_{n+1} - \gamma_n
\]
and
\[
\|x_{n+1} - x_0\| \le \|x_{n+1} - z_n\| + \|z_n - x_0\| \le \alpha_{n+1} - \gamma_n + \gamma_n = \alpha_{n+1} < \alpha^*.
\]
Thus, the iterates $y_n, z_n, x_{n+1} \in U(x_0, \alpha^*)$. It follows from the obtained estimates that the sequence $\{x_n\}$ is complete in the Banach space $B_1$. Hence, it converges to some point $x^* \in U[x_0, \alpha^*]$. Furthermore, by letting $n \to \infty$ in (25) and using the continuity of $F$, we conclude that $F(x^*) = 0$. Finally, by the estimate
\[
\|x_{n+i} - x_n\| \le \|x_{n+i} - x_{n+i-1}\| + \ldots + \|x_{n+1} - x_n\| \le \alpha_{n+i} - \alpha_{n+i-1} + \ldots + \alpha_{n+1} - \alpha_n = \alpha_{n+i} - \alpha_n,
\]
we obtain (22) by letting $i \to \infty$. $\square$
 
Proposition 2. Suppose:

(1) There exists a solution $x^* \in U(x_0, \varrho_4)$ of the equation (1) for some $\varrho_4 > 0$.

(2) The condition (SLC5) holds on $U(x_0, \varrho_4)$.

(3) There exists $\varrho_5 > \varrho_4$ such that
\[
\int_0^1 v_0((1-\theta)\varrho_5 + \theta\varrho_4)\,d\theta < 1. \tag{33}
\]
Set $\Omega_3 = \Omega \cap U[x_0, \varrho_5]$. Then, the equation (1) has a unique solution $x^*$ in the region $\Omega_3$.

Proof. Suppose that there exists $y^* \in \Omega_3$ with $y^* \ne x^*$ and $F(y^*) = 0$. Define the linear operator $G = \int_0^1 F'(y^* + \theta(x^* - y^*))\,d\theta$. Using (SLC5) and (33), we obtain
\[
\|L^{-1}(G - F'(x_0))\| \le \int_0^1 v_0(\|y^* + \theta(x^* - y^*) - x_0\|)\,d\theta
\le \int_0^1 v_0((1-\theta)\|y^* - x_0\| + \theta\|x^* - x_0\|)\,d\theta
\le \int_0^1 v_0((1-\theta)\varrho_5 + \theta\varrho_4)\,d\theta < 1.
\]
So, the linear operator $G$ is invertible. Therefore, from the identity
\[
y^* - x^* = G^{-1}(F(y^*) - F(x^*)) = G^{-1}(0 - 0) = 0,
\]
we conclude that $y^* = x^*$. $\square$
4 Numerical examples

In this section, we give the results of verifying the convergence conditions of Theorems 1 and 2 for the considered method (2). The experiments were conducted in GNU Octave 7.3.0. To stop the iterative process the condition $\|x_{n+1} - x_n\| \le \varepsilon$ was used. The calculations were performed with $\varepsilon = 10^{-8}$, and the norm $\|\cdot\|_\infty$ is used.

Example 1. Consider the nonlinear integral equation [1]
\[
F(x)(s) = x(s) - 5s\int_0^1 t\,x(t)^3\,dt, \qquad s, t \in [0, 1].
\]
Here, $B_1 = B_2 = C[0, 1]$ and the exact solution is $x^*(s) = 0$. The derivative of the operator $F$ is defined by the formula
\[
F'(x)h(s) = h(s) - 15s\int_0^1 t\,x(t)^2h(t)\,dt, \qquad h \in C[0, 1],
\]
so that $F'(x^*) = I$ ($I$ is the identity operator) and
\[
F'(x)h(s) - F'(y)h(s) = 15s\int_0^1 t\,(y(t) + x(t))(y(t) - x(t))h(t)\,dt.
\]
Let us choose $L = F'(x^*)$ and $\Omega = U(x^*, 1)$ for the local case. Then, we have
\[
\omega_0(\|x - x^*\|) = 7.5\|x - x^*\|, \qquad \varrho_0 = \frac{2}{15}, \qquad \Omega_0 = U\Big(x^*, \frac{2}{15}\Big),
\]
and $\omega(\|x - y\|) = 2\|x - y\|$ for $x, y \in \Omega_0$. The radii obtained for different values of the parameters are given in Table 1.

Table 1: Radii for Example 1.

Parameters                        Radius r*
b = 3,  c = p = q = 1             5.3551e-02
b = 3,  c = p = 1, q = -3/2       5.1918e-02
b = 1,  c = p = 1, q = -3/2       5.1268e-02
b = -3, c = p = 1, q = -3/2       4.8902e-02
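The radii $r_j$ are roots of scalar equations and can be found by bisection. A sketch for $r_1$ only, using the Example 1 calibration $\omega_0(t) = 7.5t$, $\omega(t) = 2t$; the radii $r_2$, $r_3$, and hence $r^*$ in Table 1, additionally involve $\lambda$, $\mu$ and the method parameters and are omitted here.

```python
import numpy as np

w0 = lambda t: 7.5 * t      # Example 1 calibration
w  = lambda t: 2.0 * t

nodes = np.linspace(0.0, 1.0, 1001)
def integ(y):               # trapezoidal rule over theta in [0, 1]
    return float(((y[:-1] + y[1:]) / 2 * np.diff(nodes)).sum())

def g1(t):                  # the function (3)
    num = integ(w((1 - nodes) * t)) + (1.0/3.0) * (1.0 + integ(w0(nodes * t)))
    return num / (1.0 - w0(t))

# Bisection for the smallest root of g1(t) - 1 = 0 on (0, rho0), rho0 = 2/15;
# here g1 is increasing, g1(0+) = 1/3 < 1 and g1(t) -> infinity as t -> rho0.
lo, hi = 1e-12, 2.0/15.0 - 1e-12
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g1(mid) < 1.0 else (lo, mid)
r1 = 0.5 * (lo + hi)
print(r1)   # approx 0.068376
```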
Example 2. Consider the system of $m$ equations
\[
\sum_{j=1}^m x_j + e^{x_i} - 1 = 0, \qquad i = 1, \ldots, m.
\]
Here, $B_1 = B_2 = I\!R^m$ and the exact solution is $x^* = (0, \ldots, 0)^T$. For this problem the elements of the Jacobian matrix are calculated by the formulas
\[
F'(x)_{ij} = \begin{cases} e^{x_i} + 1, & i = j,\\ 1, & i \ne j, \end{cases}
\qquad
F'(x^*)_{ij} = \begin{cases} 2, & i = j,\\ 1, & i \ne j. \end{cases}
\]
Let us choose $L = F'(x^*)$ and $\Omega = U(x^*, 1)$ for the local case. Then, we have
\[
L^{-1}(F'(x) - L) = L^{-1}\,\mathrm{diag}\{e^{x_1} - 1, \ldots, e^{x_m} - 1\},
\]
\[
L^{-1}(F'(x) - F'(y)) = L^{-1}\,\mathrm{diag}\{e^{x_1} - e^{y_1}, \ldots, e^{x_m} - e^{y_m}\}.
\]
Therefore, the functions $\omega_0$ and $\omega$ have the form
\[
\omega_0(\|x - x^*\|) = \sigma(e - 1)\|x - x^*\| \quad\text{and}\quad \omega(\|x - y\|) = \sigma e^{\min\{1, \varrho_0\}}\|x - y\|,
\]
where $\sigma = \|(F'(x^*))^{-1}\|$.

Local case. Let $m = 5$, $b = 3$, $c = p = q = 1$. Then, $\Omega_0 = U(x^*, 0.6984)$,
\[
r^* = \min\{0.2658,\ 0.1393,\ 0.1267\} \approx 0.1267
\]
and $U[x^*, r^*] \subset [-0.1267, 0.1267]^m \subset \Omega_0$. The method (2) converges to the exact solution in two iterations for the starting approximation $x_0 = (0.12, \ldots, 0.12)^T$, and the error estimates (7)-(9) hold for each $n \ge 0$ (see Table 2).
Table 2: Error estimates at each iteration for Example 2.

n   ‖xn − x*‖    g3*          ‖yn − x*‖    g1*          ‖zn − x*‖    g2*
0   1.2000e-01   —            4.0849e-02   6.8860e-02   2.6447e-07   9.7156e-02
1   1.8766e-09   1.2000e-01   6.2554e-10   1.0769e-09   7.3635e-17   1.5194e-09
2   7.0097e-17   1.8766e-09   —            —            —            —

Similar results are obtained for $b = 3$, $c = p = 1$ and $q = -3/2$: $\Omega_0 = U(x^*, 0.6984)$, $r^* = \min\{0.2658,\ 0.1393,\ 0.1211\} \approx 0.1211$ and $U[x^*, r^*] \subset [-0.1211, 0.1211]^m \subset \Omega_0$. The error estimates at each iteration are given in Table 3.

Table 3: Error estimates at each iteration for Example 2.

n   ‖xn − x*‖    g3*          ‖yn − x*‖    g1*          ‖zn − x*‖    g2*
0   1.2000e-01   —            4.0849e-02   6.7307e-02   2.6447e-07   1.2775e-01
1   6.3512e-11   1.2000e-01   2.1171e-11   3.5623e-11   7.6134e-17   4.6702e-11
2   7.0513e-17   6.3512e-11   —            —            —            —

In Tables 2 and 3 we use the following notation: $g_1^* = g_1(r^*)\|x_n - x^*\|$, $g_2^* = g_2(r^*)\|x_n - x^*\|$ and $g_3^* = g_3(r^*)\|x_{n-1} - x^*\|$.
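The calibration of $\omega_0$ in Example 2 can be spot-checked numerically. A sketch in Python, with $\sigma$ taken here as $\|(F'(x^*))^{-1}\|_\infty$ (our assumption for the norm in which $\sigma$ is measured) and random sample points from $U(x^*, 1)$:

```python
import numpy as np

m = 5
L = np.ones((m, m)) + np.eye(m)          # F'(x*): 2 on the diagonal, 1 elsewhere
Linv = np.linalg.inv(L)
sigma = np.linalg.norm(Linv, np.inf)     # assumed choice of norm for sigma

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-1.0, 1.0, m)        # a point of U(x*, 1), x* = 0
    D = np.diag(np.exp(x) - 1.0)         # F'(x) - L
    lhs = np.linalg.norm(Linv @ D, np.inf)
    rhs = sigma * (np.e - 1.0) * np.linalg.norm(x, np.inf)
    assert lhs <= rhs + 1e-12            # omega_0 bound with this sigma
print("omega_0 bound verified on 100 sample points")
```

The check relies on the elementary inequality $|e^s - 1| \le (e - 1)|s|$ for $|s| \le 1$, which is what makes $\omega_0$ linear on $\Omega$.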
   
5 Conclusions

The paper studies the convergence of a three-step iterative method containing inverse linear operators under weak conditions. Moreover, these conditions involve only the operators that appear in the method. A local and a semilocal convergence analysis of this method under generalized Lipschitz conditions on only the first-order derivatives is presented. The regions of convergence and of uniqueness of the solution are established. The results of a numerical experiment are also presented. The new technique is an alternative to the expensive Taylor series expansions usually employed to study the convergence of iterative methods. The same technique is applicable to extend other methods [4-14].
References  
[1] I.K. Argyros, Convergence and Applications of Newton-type Iterations,  
New York, Springer-Verlag, 2008.  
[2] I.K. Argyros, S. Shakhno, S. Regmi and H. Yarmola, On the conver-  
gence of two-step Kurchatov-type methods under generalized continuity  
conditions for solving nonlinear equations, Symmetry 14 (2022), 2548.  
[3] I.K. Argyros, S. Shakhno, S. Regmi and H. Yarmola, On the semi-  
local convergence of two competing sixth order methods for equations  
in Banach space, Algorithms 16 (2023), 2.  
[4] S. Artidiello, A. Cordero, J.R. Torregrosa and M.P. Vassileva, Design  
of high-order iterative methods for nonlinear systems by using weight  
function procedure, Abstr. Appl. Anal. 2015 (2015), Article ID 289029.
[5] A. Cordero, J.L. Hueso, E. Martínez and J.R. Torregrosa, A modified
Newton-Jarratt’s composition, Numer. Algorithms 55 (2010), 87-99.  
[6] M. Grau-Sánchez, A. Grau and M. Noguera, On the computational
efficiency index and some iterative methods for solving systems of non-  
linear equations, J. Comput. Appl. Math. 236 (2011), 1259-1265.  
[7] J.M. Gutiérrez and M.A. Hernández, A family of Chebyshev-Halley
type methods in Banach spaces, Bull. Aust. Math. Soc. 55 (1997), 113-  
130.  
[8] C.T. Kelley, Solving Nonlinear Equations with Newton's Method, SIAM,
Philadelphia, 2003.  
           
[9] M. Narang, S. Bhatia and V. Kanwar, New two-parameter Chebyshev-  
Halley-like family of fourth and sixth-order methods for systems of non-  
linear equations, Appl. Math. Comput., 275 (2016), 394-403.  
[10] G.H. Nedzhibov, V.I. Hasanov and M.G. Petkov, On some families of
multi-point iterative methods for solving nonlinear equations, Numer.  
Algorithms 42 (2006) 127-136.  
[11] J.M. Ortega and W.C. Rheinboldt, Iterative Solution of Nonlinear  
Equations in Several Variables, Academic Press, New York, 1970.  
[12] A.M. Ostrowski, Solution of Equations and Systems of Equations, Aca-  
demic Press, New York, 1966.  
[13] J.R. Sharma, R.K. Guha and R. Sharma, An efficient fourth order  
weighted-Newton method for systems of nonlinear equations, Numer.  
Algorithms 62 (2013), 307-323.  
[14] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, 1964.