Science & Technology Asia — Homepage: https://tci-thaijo.org/index.php/SciTechAsia
Vol. 28 No. 1, January - March 2023, Page: [90-107]
Original research article
Progressive Iterative Approximation Method with Memory and Sequences of Weights for Least Square Curve Fitting
Saknarin Channark1,2, Poom Kumam1,2,3*, Parin Chaipunya1,2, Wachirapong Jirakitpuwapat4
1Department of Mathematics, Faculty of Science, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
2Center of Excellence in Theoretical and Computational Science, Science Laboratory Building, Faculty of Science, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand
3Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
4National Security and Dual-Use Technology Center, National Science and Technology Development Agency, Pathum Thani 12120, Thailand
Received 1 November 2021; Received in revised form -; Accepted 3 August 2022; Available online 20 March 2023
ABSTRACT
The progressive iterative approximation method with memory and sequences of weights for least square curve fitting (SSLSPIA) is presented in this paper. This method improves the MLSPIA method by varying the weights of the moving average between iterations, using three sequences of weights derived from the singular values of a collocation matrix. It is proved that a sequence of fitting curves with an appropriate choice of weights converges to the least square fitting solution and that the convergence rate of the new method is faster than that of the MLSPIA method. Examples and applications in this paper demonstrate that the SSLSPIA method is superior.
Keywords: Least square curve fitting; Progressive iterative approximation; Sequences of weights
1. Introduction
Least-squares fitting (LSF) of a parametric curve to data points is a commonly used method in scientific and engineering research, involving geometric modeling and Computer Aided Design (CAD) [1-3]. It is widely used in numerous applications and, owing to its rapidity, is especially useful when dealing with enormous data sets. LSF problems can be divided into two types, which arise from different metric descriptions. In the first, control points and parameter knots are both variables, and the problems are addressed using nonlinear least squares; because of their nonlinear nature, a sufficiently good initial fit is necessary. In the second, the data are organized by assigning each data point a parameter, and the control points can then be obtained by solving a linear LSF problem. The latter may be the simpler construction because it is linear. Details and examples are provided in [1, 2, 4-21] and the references therein.
Smoothness and evenness of the fitting curves and surfaces can both be enhanced through user-defined parameters (occasionally considered, e.g. [2,5,6,12,22]). Although there is no ideal choice of these user-defined settings, they may affect the smoothness and precision of the fits. Researchers have devised methods that use a relatively small number of control points [9] in order to improve fitting efficiency. When examining linear LSF with a fixed number of control points, where a parameter value is assigned to each point of the data set, we assume no fairness terms in this article. Standard fits to each such data set obtain the control points by directly solving consistent systems of linear equations; when new data are collected, a new linear system must be formed and solved.
One of the LSF methods that solves such a linear system is the MLSPIA method, a least square version of the PIA method [23] and the LSPIA method [8], created directly in [24]. It repeatedly approximates the control points, constructing a sequence of curves that converges to a least square fitting solution. It is a straightforward approach that obtains a suitable solution while preserving shape properties. When an extra control point is introduced in [8, 13, 24], only the iterative formula has to be changed rather than developing a new linear system.
In this work, Algorithm 1 presents a progressive iterative approximation method with memory and sequences of weights for least square curve fitting (SSLSPIA) to improve the convergence rate. The method iteratively creates a sequence of curves by updating the control points and the weighted sums of the sequence. The justification for naming this new technique "SSLSPIA" is detailed in Section 3.
The following are our contributions to the SSLSPIA method:
โข Whether or not the collocation matrix has deficient column rank, the SSLSPIA method is feasible.
โข The SSLSPIA method has a higher convergence rate than the MLSPIA method when the collocation matrix has full column rank [24].
โข The SSLSPIA method conserves the MLSPIA benefits, including the ability to handle a large number of set points, flexibility in the number of starting control points, the shape-conserving property, and parallel execution.
โข The SSLSPIA method uses semi-constant knots for the fitting curve, which differ from the knots in the MLSPIA method.
โข The MLSPIA method is a variant of the SSLSPIA method that uses con- stant weight sequences.
The article is organized as follows. Section 2 gives a brief overview of related work. Section 3 proposes an iterative format and a convergence analysis for the SSLSPIA method. Section 4 performs the implementation, which includes three examples, and Section 5 describes the SSLSPIA applications. Finally, Section 6 presents the conclusion.
2. Related work
Progressive Iterative Approximation (PIA) methods feature iterative forms similar to geometric interpolation (GI) methods, as illustrated in [25, 26] and other examples. The PIA methods rely on parametric distance in the same way as the GI methods. The initial discoveries were made in the 1970s by de Boor [27, 28] and Yamaguchi [29]. Each version of the PIA method creates a sequence of curves by adjusting parameters, without directly solving a linear system.
Assume that $\{P_i\}_{i=1}^{m}$ is an ordered point set to be fitted and $\{0 = t_1 < t_2 < t_3 < \cdots < t_m = 1\}$ is an increasing sequence of parameters for $\{P_i\}_{i=1}^{m}$. At the beginning of the iteration, we select $\{P_j^0\}_{j=1}^{n}$ from $\{P_i\}_{i=1}^{m}$ to be the control point set and generate a segment of blending curve $C^0(t)$, i.e.,

$$C^0(t) = \sum_{j=1}^{n} B_j(t)\,P_j^0, \quad t \in [t_1, t_m], \qquad (2.1)$$

where $\{B_j(t)\}_{j=1}^{n}$ is a normalized totally positive (NTP) basis that is used as the blending basis of real functions.
We know that the basis is NTP if $B_j(t) \ge 0$, $j \in \{1,2,3,\dots,n\}$, and $\sum_{j=1}^{n} B_j(t) = 1$ hold for each $t$ in $[0,1]$. Any collocation matrix of such a basis at a non-decreasing sequence is a totally positive matrix, meaning that every minor of the matrix is non-negative [30, 31]. On $0 = t_1 < t_2 < t_3 < \cdots < t_m = 1$, the collocation matrix of the NTP blending basis is

$$B_{m \times n} = \begin{pmatrix} B_1(t_1) & B_2(t_1) & \cdots & B_n(t_1) \\ B_1(t_2) & B_2(t_2) & \cdots & B_n(t_2) \\ \vdots & \vdots & \ddots & \vdots \\ B_1(t_m) & B_2(t_m) & \cdots & B_n(t_m) \end{pmatrix}. \qquad (2.2)$$
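As a concrete instance, the Bernstein basis is a classical NTP blending basis (its collocation matrices at increasing nodes are totally positive). A minimal sketch of assembling the matrix of Eq. (2.2); the degree and nodes here are illustrative choices, not taken from the paper:

```python
import numpy as np
from math import comb

def bernstein_collocation(ts, n):
    """Collocation matrix B[i, j] = B_j(t_i) for the degree-(n-1)
    Bernstein basis, a classical NTP blending basis."""
    d = n - 1
    return np.array([[comb(d, j) * t**j * (1.0 - t)**(d - j) for j in range(n)]
                     for t in ts])

ts = np.linspace(0.0, 1.0, 7)           # 0 = t_1 < t_2 < ... < t_m = 1
B = bernstein_collocation(ts, 4)        # m x n collocation matrix, m = 7, n = 4
print(np.allclose(B.sum(axis=1), 1.0))  # partition of unity: rows sum to 1
print(bool((B >= 0).all()))             # non-negativity of the basis values
```

Both checks print `True`, confirming the two NTP requirements stated above for this sample basis.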
With a point set $\{P_i\}_{i=1}^{m}$ and starting control point set $\{P_j^0\}_{j=1}^{n}$, the PIA method assigned in [23], the LSPIA method in [8] and the MLSPIA method in [24] approximate the curve with the NTP basis by the following curves repetitively:

$$C^{k+1}(t) = \sum_{j=1}^{n} B_j(t)\,P_j^{k+1}, \quad t \in [0,1], \; k \ge 0, \qquad (2.3)$$

where the control points are adjusted by

$$P_j^{k+1} = P_j^k + \Delta_j^k, \quad j \in \{1,2,3,\dots,n\}, \; k \ge 0, \qquad (2.4)$$

with the adjusting vector

$$\Delta_j^k = \begin{cases} P_j - C^k(t_j), & \text{for the PIA method},\\[2pt] \mu \sum_{i=1}^{m} B_j(t_i)\,\big(P_i - C^k(t_i)\big), & \text{for the LSPIA method}, \end{cases}$$

where $\mu$ is a constant, $j \in \{1,2,3,\dots,n\}$, $k \ge 0$, and the adjusting vector for the MLSPIA method is

$$\begin{cases} \Delta_j^0 = \omega \sum_{i=1}^{m} B_j(t_i)\,\Phi_i^0,\\[2pt] \Delta_j^k = (1-\upsilon)\Delta_j^{k-1} + \gamma\,\delta_j^k + (\upsilon-\gamma)\,\delta_j^{k-1}, \quad k \ge 1,\\[2pt] \delta_j^k = \omega \sum_{i=1}^{m} B_j(t_i)\,\big(P_i - C^k(t_i)\big), \quad k \ge 0, \end{cases}$$

where $\Phi_i^0$ is a vector in the space where $\{P_i\}_{i=1}^{m}$ is located, for every $i \in \{1,2,3,\dots,m\}$, and $\omega, \gamma, \upsilon$ are real weights.
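For orientation, the LSPIA branch of the adjusting vector above can be sketched in matrix form, writing the data points as the rows of Q and the control points as the rows of P. The degree-2 Bernstein basis, the initial control points, and the step size mu below are illustrative assumptions, not the papers' exact settings:

```python
import numpy as np
from math import comb

def bernstein_collocation(ts, n):
    """Collocation matrix of the degree-(n-1) Bernstein basis (an NTP basis)."""
    d = n - 1
    return np.array([[comb(d, j) * t**j * (1.0 - t)**(d - j) for j in range(n)]
                     for t in ts])

def lspia_step(B, Q, P, mu):
    """One LSPIA update, Eq. (2.4) with the LSPIA adjusting vector:
    Delta_j = mu * sum_i B_j(t_i) (P_i - C(t_i)), i.e. Delta = mu * B^T (Q - B P)."""
    return P + mu * B.T @ (Q - B @ P)

ts = np.linspace(0.0, 1.0, 8)
B = bernstein_collocation(ts, 3)               # m = 8 data points, n = 3 control points
Q = np.column_stack([ts, np.sin(np.pi * ts)])  # 2-D data points to be fitted
P = Q[[0, 4, 7]].copy()                        # initial control points taken from the data
mu = 1.0 / np.linalg.norm(B, 2) ** 2           # 0 < mu < 2 / sigma_1^2: a safe step size
r0 = np.linalg.norm(B.T @ (Q - B @ P))         # normal-equation residual before the step
P = lspia_step(B, Q, P, mu)
r1 = np.linalg.norm(B.T @ (Q - B @ P))         # ...and after: strictly smaller
print(r1 < r0)
```

Each step multiplies the normal-equation residual by the symmetric matrix I - mu B^T B, whose spectral norm is below one for this mu, so the residual decrease printed here is guaranteed.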
The PIA method in the format of Eqs. (2.3)-(2.4), described and originally labeled in [32], depends on the notion of profit-and-loss modification as proposed in [27, 28], and is based on the non-uniform B-spline basis. In [33], the PIA method of [23] evolved into a weighted PIA method, which accelerates convergence.
In the conventional PIA format, however, the number of control points is equal to the number of input points. This is not feasible when the number of input points is very large. An extended PIA (EPIA) format was introduced in [34], in which the number of control points is less than the number of input points. Because of its local property and parallel processing capabilities, the extended PIA is an ideal approach for fitting an extensive amount of data.
In addition, [35] expanded the method to approximate NURBS curves. According to the local property of the PIA method [36], if only a subset of the control points is repeatedly updated, the limit curve interpolates the corresponding subset of input points, leaving the others unchanged. The PIA methods converge with appropriate weights and have the properties of convexity conserving, as well as an explicit expression of the curves, locality, and adaptivity. By altering only the control points which affect a given segment, at lower cost [23, 32], the locality and adaptivity ensure an improved probability that the resultant curve segment approximation is accurate.
More details are shown in [13].
The LSPIA method assigned in [8] is primarily designed for large sets of data and inherits the benefits of PIA methods in the form of Eqs. (2.3)-(2.4). The LSPIA method has two distinct advantages. First, it can fit much larger data sets quickly and reliably. Second, when using the LSPIA method for incremental data fitting, a new round of iterations may be started from the result of the previous one, saving significant processing. T-splines are used in [7] to construct it. Then, in [10], the B-spline basis is replaced with the generalized B-spline basis to obtain the weighted LSF curve. In the nonsingular situation, the LSPIA method [8] converges, whereas [11] proves convergence in the singular case.
The PIA method with memory for least square fitting (MLSPIA) is an LSF method for solving such linear systems, a least square version of the PIA method [23] and the LSPIA method [8], created directly in [24] to enhance the convergence rate. It repeatedly estimates control points and creates a sequence of converging curves to arrive at a least square fitting solution. It is a way to obtain a suitable solution that maintains the properties by maintaining the form. The construction of the MLSPIA method in the format of Eqs. (2.3)-(2.4) is analogous to that of the LSPIA method assigned in [8], except for the difference of $\Delta_j^k$ for each possible $j$ and $k$. For appropriate weights $\omega, \upsilon, \gamma$, it also converges to the LSF curve of $\{P_i\}_{i=1}^{m}$.
In this article, we extend the MLSPIA method from three real weights to three real sequences of weights; the resulting method still converges to the LSF curve. Furthermore, for the cubic B-spline fitting curve, we use semi-constant knots, which differ from the knots used in the LSPIA and MLSPIA methods.
3. The SSLSPIA method
We now present the SSLSPIA method, a progressive iterative approximation method for the LSF curve with memory and weight sequences, and analyze its convergence rate.

With the provided data point set $\{P_i\}_{i=1}^{m}$, starting control point set $\{P_j^0\}_{j=1}^{n}$ with $m > n$, the NTP basis $B_1(t), B_2(t), B_3(t), \dots, B_n(t)$ and the sequence $0 = t_1 < t_2 < t_3 < \cdots < t_m = 1$, the PIA method [23], LSPIA method [8], and MLSPIA method [24] iteratively approximate the curve by Eqs. (2.3)-(2.4) in the vector space where $\{P_i\}_{i=1}^{m}$ is located.

The SSLSPIA method iteratively approximates the curve using Eqs. (2.3)-(2.4), resetting the control points with the adjusting vector computed again from the previous curve:
$$\begin{cases} \Delta_j^0 = \omega_0 \sum_{i=1}^{m} B_j(t_i)\,\Phi_i^0,\\[2pt] \Delta_j^k = (1-\upsilon_k)\Delta_j^{k-1} + \gamma_k\,\delta_j^k + (\upsilon_k-\gamma_k)\,\delta_j^{k-1}, \quad k \ge 1,\\[2pt] \delta_j^k = \omega_k \sum_{i=1}^{m} B_j(t_i)\,\big(P_i - C^k(t_i)\big), \quad k \ge 0, \end{cases} \qquad (3.1)$$

where $\Phi_i^0$ is a point in the vector space in which $\{P_i\}_{i=1}^{m}$ is located, for every $i \in \{1,2,3,\dots,m\}$, and $\omega_k, \gamma_k, \upsilon_k$ are real sequences of weights.
It is called the SSLSPIA method because its construction develops the three real weights $\omega, \gamma, \upsilon$ into three real sequences of weights $\omega_k, \gamma_k, \upsilon_k$ for each possible $k$, in a manner similar to the MLSPIA method defined in [24]. It also converges to the LSF curve of $\{P_i\}_{i=1}^{m}$ for suitable sequences of weights $\omega_k, \upsilon_k, \gamma_k$, which we discuss later. It is a memory method because when we use Eq. (3.1) to compute $\Delta_j^{k}$, we have to gather and utilize $\Delta_j^{k-1}$ and $\delta_j^{k-1}$ from the preceding curve, as well as compute $\delta_j^{k}$, for every $k \ge 1$ and $j \in \{1,2,3,\dots,n\}$. Furthermore, as shown in Section 4, we use semi-constant knots for the cubic B-spline fitting curve, which differ from the knots used in the LSPIA and MLSPIA methods. The SSLSPIA method has a higher convergence rate than the MLSPIA method of [24].
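The update of Eqs. (2.4) and (3.1) can be sketched as follows. The choice $\Phi_i^0 = P_i - C^0(t_i)$ (so that $\Delta_j^0$ coincides with $\delta_j^0$), the constant weight sequences (which reduce the method to MLSPIA), and the Bernstein test basis are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from math import comb

def bernstein_collocation(ts, n):
    """Collocation matrix of the degree-(n-1) Bernstein basis (an NTP basis)."""
    d = n - 1
    return np.array([[comb(d, j) * t**j * (1.0 - t)**(d - j) for j in range(n)]
                     for t in ts])

def sslspia(B, Q, P0, omegas, gammas, upsilons):
    """Iterate Eqs. (2.4) and (3.1); the k-th entries of the weight sequences
    drive iteration k.  Phi^0 = Q - B P0 is an illustrative choice, so the
    first adjusting vector Delta^0 coincides with delta^0."""
    P = P0.copy()
    Delta = delta_prev = omegas[0] * B.T @ (Q - B @ P)   # Delta^0 = delta^0
    P = P + Delta
    for k in range(1, len(omegas)):
        delta = omegas[k] * B.T @ (Q - B @ P)            # delta^k of Eq. (3.1)
        Delta = ((1.0 - upsilons[k]) * Delta             # memory term Delta^{k-1}
                 + gammas[k] * delta
                 + (upsilons[k] - gammas[k]) * delta_prev)
        P = P + Delta                                    # Eq. (2.4)
        delta_prev = delta
    return P

ts = np.linspace(0.0, 1.0, 8)
B = bernstein_collocation(ts, 3)
Q = np.column_stack([ts, np.sin(np.pi * ts)])  # 2-D data points to be fitted
sigma1 = np.linalg.norm(B, 2)                  # largest singular value of B
K = 500
omegas = [1.0 / sigma1**2] * K                 # constant sequences: the MLSPIA
gammas = [0.8] * K                             # special case, with weights inside
upsilons = [1.2] * K                           # the bounds of Eq. (3.9)
P = sslspia(B, Q, Q[[0, 4, 7]], omegas, gammas, upsilons)
print(np.allclose(B.T @ B @ P, B.T @ Q, atol=1e-6))  # normal equations hold
```

With these weights the spectral radius of the iteration matrix is well below one for this small example, so the control points settle at the least square solution of the normal equations.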
An equivalent formulation of the SSLSPIA method developed by Eqs. (2.3)-(3.1) is first given. The proof follows line by line the proof in [24].
Lemma 3.1. Let $\{P_j^0\}_{j=1}^{n}$ be a starting control point set and let $\{\Phi_i^0\}_{i=1}^{m}$ be a point set. The $P_j^k$, $j \in \{1,2,3,\dots,n\}$, $k \ge 0$, constructed by Eqs. (2.4)-(3.1) can be rewritten as

$$P_j^{k+1} = P_j^k + \omega_k \sum_{i=1}^{m} B_j(t_i)\,\Phi_i^k, \quad j \in \{1,2,3,\dots,n\}, \; k \ge 0, \qquad (3.2)$$

where

$$\Phi_i^{k+1} = (1-\upsilon_k)\Phi_i^k + \upsilon_k P_i - \sum_{j=1}^{n} B_j(t_i)\Big[\upsilon_k P_j^k + \gamma_k \omega_k \sum_{i_1=1}^{m} B_j(t_{i_1})\,\Phi_{i_1}^k\Big], \quad k \ge 0. \qquad (3.3)$$
Proof. It is simple to affirm that Eq. (3.2) holds for $k = 0$ via Eqs. (2.4)-(3.1). Assume that for some $k \ge 0$, Eq. (3.2) holds true. As a consequence, for each $j \in \{1,2,3,\dots,n\}$,

$$\begin{aligned}
\upsilon_k \delta_j^k &+ \gamma_k\big(\delta_j^{k+1} - \delta_j^k\big)\\
&= \upsilon_k \omega_k \sum_{i=1}^{m} B_j(t_i)\,\big(P_i - C^k(t_i)\big) - \omega_k \gamma_k \sum_{i=1}^{m} B_j(t_i)\,\big(C^{k+1}(t_i) - C^k(t_i)\big)\\
&= \upsilon_k \omega_k \sum_{i=1}^{m} B_j(t_i)\Big(P_i - \sum_{l=1}^{n} B_l(t_i) P_l^k\Big) - \omega_k \gamma_k \sum_{i=1}^{m} B_j(t_i) \sum_{l=1}^{n} B_l(t_i)\,\big(P_l^{k+1} - P_l^k\big)
\end{aligned}$$

holds by Eq. (3.1), and we get by Eqs. (2.4)-(3.1) for every $j \in \{1,2,3,\dots,n\}$ that

$$\begin{aligned}
\Delta_j^{k+1} &= (1-\upsilon_k)\Delta_j^k + \upsilon_k \delta_j^k + \gamma_k\big(\delta_j^{k+1} - \delta_j^k\big)\\
&= (1-\upsilon_k)\big(P_j^{k+1} - P_j^k\big) + \upsilon_k \omega_k \sum_{i=1}^{m} B_j(t_i)\Big(P_i - \sum_{l=1}^{n} B_l(t_i) P_l^k\Big)\\
&\qquad - \omega_k \gamma_k \sum_{i=1}^{m} B_j(t_i) \sum_{l=1}^{n} B_l(t_i)\,\big(P_l^{k+1} - P_l^k\big)\\
&= (1-\upsilon_k)\,\omega_k \sum_{i=1}^{m} B_j(t_i)\,\Phi_i^k + \upsilon_k \omega_k \sum_{i=1}^{m} B_j(t_i)\Big(P_i - \sum_{l=1}^{n} B_l(t_i) P_l^k\Big)\\
&\qquad - \omega_k^2 \gamma_k \sum_{i=1}^{m} B_j(t_i) \sum_{l=1}^{n} B_l(t_i) \sum_{i_1=1}^{m} B_l(t_{i_1})\,\Phi_{i_1}^k\\
&= \omega_k \sum_{i=1}^{m} B_j(t_i)\,\Phi_i^{k+1},
\end{aligned}$$

if we set

$$\Phi_i^{k+1} = (1-\upsilon_k)\Phi_i^k + \upsilon_k P_i - \sum_{j=1}^{n} B_j(t_i)\Big[\upsilon_k P_j^k + \gamma_k \omega_k \sum_{i_1=1}^{m} B_j(t_{i_1})\,\Phi_{i_1}^k\Big].$$

It follows that, for $j \in \{1,2,3,\dots,n\}$,

$$P_j^{k+2} = P_j^{k+1} + \Delta_j^{k+1} = P_j^{k+1} + \omega_k \sum_{i=1}^{m} B_j(t_i)\,\Phi_i^{k+1},$$

which means that Eq. (3.2) holds for $k+1$. By mathematical induction, Eq. (3.2) holds for every $k \ge 0$. □
Denote

$$\Phi^k = \big[\Phi_1^k, \Phi_2^k, \Phi_3^k, \dots, \Phi_m^k\big]^T, \quad P^k = \big[P_1^k, P_2^k, P_3^k, \dots, P_n^k\big]^T, \quad k \ge 0, \qquad (3.4)$$

and

$$P = [P_1, P_2, P_3, \dots, P_m]^T, \quad B = \big(B_j(t_i)\big)_{m \times n}, \qquad (3.5)$$
where $P_j^k$, $j \in \{1,2,3,\dots,n\}$, and $\Phi_i^k$, $i \in \{1,2,3,\dots,m\}$, are assigned by Eqs. (3.2)-(3.3), respectively, for every $k \ge 0$. Therefore Eqs. (3.2)-(3.3) can be written as

$$\begin{cases} \Phi^{k+1} = \big[(1-\upsilon_k)I_m - \gamma_k \omega_k B B^T\big]\Phi^k + \upsilon_k\big(P - B P^k\big),\\[2pt] P^{k+1} = \omega_k B^T \Phi^k + P^k, \quad k \ge 0, \end{cases} \qquad (3.6)$$

or, equivalently,

$$\begin{pmatrix} \Phi^{k+1} \\ P^{k+1} \end{pmatrix} = H_{\upsilon_k,\gamma_k,\omega_k} \begin{pmatrix} \Phi^{k} \\ P^{k} \end{pmatrix} + C_{\upsilon_k,\gamma_k,\omega_k}, \quad k \ge 0, \qquad (3.7)$$

where

$$H_{\upsilon_k,\gamma_k,\omega_k} = \begin{pmatrix} (1-\upsilon_k)I_m - \gamma_k \omega_k B B^T & -\upsilon_k B \\ \omega_k B^T & I_n \end{pmatrix}, \qquad (3.8)$$

and

$$C_{\upsilon_k,\gamma_k,\omega_k} = \begin{pmatrix} \upsilon_k P \\ 0 \end{pmatrix}.$$
Eq. (3.7) can be regarded as a matrix iterative format. As a result, while considering the curves generated by the SSLSPIA method, we analyze the convergence of $\begin{pmatrix} \Phi^k \\ P^k \end{pmatrix}$ provided by Eq. (3.7) starting from $\begin{pmatrix} \Phi^0 \\ P^0 \end{pmatrix}$.
Assumption 1. Assume that $m > n$ and $B$ is assigned by Eq. (3.5). The matrix $B$ has $\mathrm{rank}(B) = n$ and singular values $\sigma_1 \ge \sigma_2 \ge \sigma_3 \ge \dots \ge \sigma_n > 0$. Suppose that

$$0 < \upsilon_k < 2, \quad \upsilon_k - \frac{\upsilon_k}{\sigma_1^2 \omega_k} < \gamma_k < \frac{\upsilon_k}{2} - \frac{\upsilon_k - 2}{\sigma_1^2 \omega_k}, \quad \omega_k > 0. \qquad (3.9)$$
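A sketch of picking admissible weights per Eq. (3.9) from the singular values and checking that the iteration matrix of Eq. (3.8) is then contractive; the random stand-in matrix and the constant-in-k choice are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 9, 4
B = rng.random((m, n))                       # stand-in for a full-column-rank collocation matrix
s = np.linalg.svd(B, compute_uv=False)       # sigma_1 >= ... >= sigma_n > 0
sigma1 = s[0]

# One admissible choice per Eq. (3.9), taken constant in k for illustration.
upsilon = 1.2                                # 0 < upsilon_k < 2
omega = 1.0 / sigma1**2                      # omega_k > 0
lo = upsilon - upsilon / (sigma1**2 * omega)            # lower bound on gamma_k
hi = upsilon / 2 - (upsilon - 2) / (sigma1**2 * omega)  # upper bound on gamma_k
gamma = 0.5 * (lo + hi)                      # midpoint of the open interval

# Iteration matrix of Eq. (3.8); under Eq. (3.9) its spectral radius is < 1.
H = np.block([
    [(1 - upsilon) * np.eye(m) - gamma * omega * (B @ B.T), -upsilon * B],
    [omega * B.T, np.eye(n)],
])
rho = max(abs(np.linalg.eigvals(H)))
print(lo < gamma < hi, rho < 1)
```

This is only a numerical illustration of Lemmas 3.2-3.4 below: the bounds in Eq. (3.9) place all eigenvalues of the iteration matrix strictly inside the unit circle.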
Lemma 3.2. Under Assumption 1, suppose that $\Sigma_n = \mathrm{diag}(\sigma_1, \sigma_2, \sigma_3, \dots, \sigma_n)$ with $\sigma_1 \ge \sigma_2 \ge \sigma_3 \ge \dots \ge \sigma_n > 0$. Then $\lambda$ is an eigenvalue of

$$\hat{H}_{\upsilon_k,\gamma_k,\omega_k} = \begin{pmatrix} (1-\upsilon_k)I_n - \gamma_k \omega_k \Sigma_n^2 & 0 & -\upsilon_k \Sigma_n \\ 0 & (1-\upsilon_k)I_{m-n} & 0 \\ \omega_k \Sigma_n & 0 & I_n \end{pmatrix} \qquad (3.10)$$

if and only if $\lambda = 1 - \upsilon_k$ or $\lambda$ is a solution of one of the equations

$$\lambda^2 + \big[\gamma_k \omega_k \sigma_i^2 - (2-\upsilon_k)\big]\lambda + \sigma_i^2 \omega_k(\upsilon_k - \gamma_k) + 1 - \upsilon_k = 0, \qquad (3.11)$$

where $i \in \{1,2,3,\dots,n\}$.
Proof. Because the matrices $H_{\upsilon_k,\gamma_k,\omega_k}$ and $\hat{H}_{\upsilon_k,\gamma_k,\omega_k}$ are orthogonally similar, their eigenvalues are the same. We consider

$$\begin{aligned}
\big|\hat{H}_{\upsilon_k,\gamma_k,\omega_k} - \lambda I_{m+n}\big| &= \begin{vmatrix} (1-\upsilon_k-\lambda)I_n - \gamma_k \omega_k \Sigma_n^2 & 0 & -\upsilon_k \Sigma_n \\ 0 & (1-\upsilon_k-\lambda)I_{m-n} & 0 \\ \omega_k \Sigma_n & 0 & (1-\lambda)I_n \end{vmatrix}\\
&= (1-\upsilon_k-\lambda)^{m-n} \prod_{i=1}^{n} \Big[(1-\lambda)\big(1-\upsilon_k-\lambda-\gamma_k \omega_k \sigma_i^2\big) + \upsilon_k \omega_k \sigma_i^2\Big].
\end{aligned}$$

So $\lambda$ is an eigenvalue of $\hat{H}_{\upsilon_k,\gamma_k,\omega_k}$ if and only if $\lambda = 1-\upsilon_k$ or

$$\prod_{i=1}^{n} \Big[(1-\lambda)\big(1-\upsilon_k-\lambda-\gamma_k \omega_k \sigma_i^2\big) + \upsilon_k \omega_k \sigma_i^2\Big] = 0.$$
Therefore, $\lambda = 1-\upsilon_k$ or $\lambda$ is a solution of the equations in Eq. (3.11). □

Lemma 3.3. [37] The solutions of the real quadratic equation $\lambda^2 - b\lambda + c = 0$ are smaller than unity in modulus if and only if $|c| < 1$ and $|b| < 1 + c$.
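Lemma 3.3 is a Schur-Cohn-type root-location test; a quick numerical cross-check against directly computed roots (not from the paper), skipping parameter samples that land too close to the boundary of the criterion:

```python
import numpy as np

def roots_in_unit_disk(b, c):
    """Lemma 3.3: both solutions of x^2 - b x + c = 0 have modulus < 1
    iff |c| < 1 and |b| < 1 + c."""
    return abs(c) < 1 and abs(b) < 1 + c

rng = np.random.default_rng(7)
checked = 0
for b, c in rng.uniform(-2.5, 2.5, size=(300, 2)):
    # Skip samples numerically too close to the boundary of the criterion,
    # where the true roots sit on the unit circle up to rounding.
    if abs(abs(c) - 1.0) < 1e-6 or abs(abs(b) - (1.0 + c)) < 1e-6:
        continue
    roots = np.roots([1.0, -b, c])
    assert roots_in_unit_disk(b, c) == bool(np.all(np.abs(roots) < 1))
    checked += 1
print(checked > 250)
```

The criterion agrees with the computed root moduli on every sampled pair, which is exactly how it is used in the proof of Lemma 3.4 to bound the solutions of Eq. (3.11).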
Lemma 3.4. Under Assumption 1, $\rho(\hat{H}_{\upsilon_k,\gamma_k,\omega_k}) < 1$, where $\rho(\hat{H}_{\upsilon_k,\gamma_k,\omega_k})$ denotes the spectral radius of $\hat{H}_{\upsilon_k,\gamma_k,\omega_k}$ assigned by Eq. (3.10), if and only if the weights $\upsilon_k, \gamma_k, \omega_k$ satisfy Eq. (3.9).
Proof. For given weights $\upsilon_k, \gamma_k, \omega_k$, by Lemma 3.3 all the solutions of the equations in Eq. (3.11) are smaller than unity in modulus if and only if

$$\begin{cases} \big|\sigma_i^2 \omega_k(\upsilon_k - \gamma_k) + 1 - \upsilon_k\big| < 1, \quad i \in \{1,2,3,\dots,n\},\\[2pt] \big|\gamma_k \omega_k \sigma_i^2 - (2-\upsilon_k)\big| < 2 - \upsilon_k + \sigma_i^2 \omega_k(\upsilon_k - \gamma_k). \end{cases} \qquad (3.12)$$

Then, by Lemma 3.2, $\rho(\hat{H}_{\upsilon_k,\gamma_k,\omega_k}) < 1$ if and only if $|\upsilon_k - 1| < 1$ and Eq. (3.12) holds. Equivalently, these conditions hold if and only if

$$\begin{cases} 0 < \upsilon_k < 2, \quad \sigma_i^2 \omega_k \gamma_k > 0,\\[2pt] \upsilon_k - 2 < \sigma_i^2 \omega_k(\upsilon_k - \gamma_k) < \upsilon_k,\\[2pt] \sigma_i^2 \omega_k(\upsilon_k - 2\gamma_k) > 2(\upsilon_k - 2), \quad i \in \{1,2,3,\dots,n\}. \end{cases} \qquad (3.13)$$
Eq. (3.9) is equivalent to Eq. (3.13) because $\sigma_1^2 \ge \sigma_2^2 \ge \sigma_3^2 \ge \dots \ge \sigma_n^2 > 0$. □

Theorem 3.5. Assume that Assumption 1 holds. Then, for every starting control point set $\{P_j^0\}_{j=1}^{n}$ and every point set $\{\Phi_i^0\}_{i=1}^{m}$, the curve sequences created by the SSLSPIA method, assigned by Eq. (2.3), Eq. (2.4) and Eq. (3.1) with weights $\upsilon_k, \gamma_k, \omega_k$, converge to the LSF curve of the given $\{P_i\}_{i=1}^{m}$.
Proof. Recall the iteration matrix $H_{\upsilon_k,\gamma_k,\omega_k}$ assigned by Eq. (3.8). Assume that the singular value decomposition (SVD) of $B$ is

$$B = U \begin{pmatrix} \Sigma_n \\ 0 \end{pmatrix} V^T, \qquad (3.14)$$

where $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthogonal matrices and $\Sigma_n = \mathrm{diag}(\sigma_1, \sigma_2, \sigma_3, \dots, \sigma_n)$ with $\sigma_1 \ge \sigma_2 \ge \sigma_3 \ge \dots \ge \sigma_n > 0$. Suppose

$$S = \begin{pmatrix} U & 0 \\ 0 & V \end{pmatrix}, \qquad (3.15)$$

then $S \in \mathbb{R}^{(m+n) \times (m+n)}$ is an orthogonal matrix. Next, from

$$U^T B V = \begin{pmatrix} \Sigma_n \\ 0 \end{pmatrix} \in \mathbb{R}^{m \times n}, \quad V^T B^T U = \begin{pmatrix} \Sigma_n & 0 \end{pmatrix} \in \mathbb{R}^{n \times m}, \quad U^T B B^T U = \begin{pmatrix} \Sigma_n^2 & 0 \\ 0 & 0 \end{pmatrix} \in \mathbb{R}^{m \times m},$$

it holds that

$$S^T H_{\upsilon_k,\gamma_k,\omega_k} S = \hat{H}_{\upsilon_k,\gamma_k,\omega_k}, \qquad (3.16)$$

where

$$\hat{H}_{\upsilon_k,\gamma_k,\omega_k} = \begin{pmatrix} (1-\upsilon_k)I_n - \gamma_k \omega_k \Sigma_n^2 & 0 & -\upsilon_k \Sigma_n \\ 0 & (1-\upsilon_k)I_{m-n} & 0 \\ \omega_k \Sigma_n & 0 & I_n \end{pmatrix}. \qquad (3.17)$$
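The orthogonal similarity of Eqs. (3.16)-(3.17) can be checked numerically; the random stand-in matrix and fixed weights below are illustrative, since the identity is purely algebraic:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 7, 3
B = rng.random((m, n))                       # stand-in for the collocation matrix
upsilon, gamma, omega = 1.2, 0.7, 0.4        # any fixed weights work for the identity

U, s, Vt = np.linalg.svd(B)                  # B = U [Sigma_n; 0] V^T, Eq. (3.14)
Sig = np.diag(s)                             # Sigma_n = diag(sigma_1, ..., sigma_n)
S = np.block([[U, np.zeros((m, n))],         # S = diag(U, V), Eq. (3.15)
              [np.zeros((n, m)), Vt.T]])

H = np.block([                               # iteration matrix of Eq. (3.8)
    [(1 - upsilon) * np.eye(m) - gamma * omega * (B @ B.T), -upsilon * B],
    [omega * B.T, np.eye(n)],
])
Hhat = np.block([                            # block form of Eq. (3.17)
    [(1 - upsilon) * np.eye(n) - gamma * omega * Sig**2,
     np.zeros((n, m - n)), -upsilon * Sig],
    [np.zeros((m - n, n)), (1 - upsilon) * np.eye(m - n), np.zeros((m - n, n))],
    [omega * Sig, np.zeros((n, m - n)), np.eye(n)],
])
print(np.allclose(S.T @ H @ S, Hhat))        # verifies Eq. (3.16)
```

Because $S$ is orthogonal, this confirms that $H$ and $\hat{H}$ share their eigenvalues, which is the bridge from Lemma 3.4 to the convergence claim of Theorem 3.5.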
We focus on the convergence of the sequence $\{P^k\}_{k=0}^{\infty}$, because proving that the curve sequence $\{C^k(t)\}_{k=0}^{\infty}$ generated by the SSLSPIA method converges to an LSF curve is equivalent to proving that the iterative sequence $\{P^k\}_{k=0}^{\infty}$ created by Eq. (3.6) converges to a solution of $B^T B x = B^T P$, where $B$ and $P$ are assigned by Eq. (3.5). For any weights $\upsilon_k, \gamma_k, \omega_k$ satisfying Eq. (3.9), let $\alpha = P - B\beta$, where $\beta$ is a solution of $B^T B x = B^T P$; then it can be simply affirmed that $(\alpha^T, \beta^T)^T$ is a solution of the equation

$$\big(I_{m+n} - H_{\upsilon_k,\gamma_k,\omega_k}\big)\,(x^T, y^T)^T = C_{\upsilon_k,\gamma_k,\omega_k}, \qquad (3.18)$$

where $H_{\upsilon_k,\gamma_k,\omega_k}$ and $C_{\upsilon_k,\gamma_k,\omega_k}$ are assigned in Eq. (3.8). That means the equation Eq. (3.18) is consistent. Let $\big(\bar{\Phi}^T, \bar{P}^T\big)^T$ be a solution of Eq. (3.18), i.e.,

$$\big(I_{m+n} - H_{\upsilon_k,\gamma_k,\omega_k}\big) \begin{pmatrix} \bar{\Phi} \\ \bar{P} \end{pmatrix} = C_{\upsilon_k,\gamma_k,\omega_k}. \qquad (3.19)$$