
Science & Technology Asia — Homepage: https://tci-thaijo.org/index.php/SciTechAsia

Vol. 28 No. 1, January-March 2023, Pages 90-107

Original research article

Progressive Iterative Approximation Method with Memory and Sequences of Weights for Least Square Curve Fitting

Saknarin Channark1,2, Poom Kumam1,2,3*, Parin Chaipunya1,2, Wachirapong Jirakitpuwapat4

1Department of Mathematics, Faculty of Science, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand

2Center of Excellence in Theoretical and Computational Science, Science Laboratory Building, Faculty of Science, King Mongkut's University of Technology Thonburi, Bangkok 10140, Thailand

3Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan

4National Security and Dual-Use Technology Center, National Science and Technology Development Agency, Pathum Thani 12120, Thailand

Received 1 November 2021; Received in revised form -; Accepted 3 August 2022; Available online 20 March 2023

ABSTRACT

The progressive iterative approximation method with memory and sequences of weights for least square curve fitting (SSLSPIA) is presented in this paper. This method improves the MLSPIA method by varying the weights of the moving average between iterations, using three sequences of weights derived from the singular values of a collocation matrix. It is proved that a sequence of fitting curves with an appropriate choice of weights converges to the solution of least square fitting, and that the convergence rate of the new method is faster than that of the MLSPIA method. Examples and applications in this paper show that the SSLSPIA method is superior.

Keywords: Least square curve fitting; Progressive iterative approximation; Sequences of weights

1. Introduction

Least-squares fitting (LSF) of a parametric curve to data points is a commonly used method in scientific and engineering research, involving geometric modeling and Computer Aided Design (CAD) [1-3]. It is widely used in numerous applications. Due to its rapidity, it is especially useful when dealing with enormous data sets. LSF can be divided into two types, which are derived from different metric descriptions. In the first, control points and parameter knots are both variables, and the problems are addressed using nonlinear least squares. Due to their nonlinear nature, a sufficiently good initial fitting is necessary. In the second, the data collection is organized by assigning each data point a parameter. The control points can then be obtained by solving the linear LSF.

Because it is linear, this may be a simpler construction method. Details and examples are provided in [1, 2, 4-21] and references therein.

Smoothness and evenness of the fitting curves and surfaces are both enhanced according to user-defined parameters (occasionally considered in [2,5,6,12,22]). Although there is no ideal alternative to these user-defined settings, the smoothness and precision of the fits may be affected. In order to improve fitting efficiency, researchers devised a method for defining a relatively small number of control points [9].

When examining linear LSF with a fixed number of control points, where a parameter value is assigned to each point of the data set, we assume no fairness terms in this article. Standard fits to each type of data collection can obtain the control points by directly solving consistent systems of linear equations. It is also possible to assemble and solve a new type of linear equation.

One of the LSF methods that solves a linear system is the MLSPIA method, a least square form of the PIA method [23] and the LSPIA method [8], created in [24]. It repeatedly approximates the control points and constructs a sequence of converging curves to arrive at a least square fitting solution. It is a straightforward approach to obtain a suitable solution that maintains the properties by maintaining the shape. When an extra control point is introduced in [8, 13, 24], just the iterative formula has to be changed rather than developing a new linear system.

In this work, Algorithm 1 presents a progressive iterative approximation method with memory and sequences of weights for least square curve fitting (SSLSPIA) to improve the convergence rate. The method creates a sequence of curves by iteratively updating the control points and the sequence's weighted sums. The justification for naming this new technique "SSLSPIA" is detailed in Section 3.

The following are our contributions to the SSLSPIA method:

• Whether or not the collocation matrix has deficient column rank, the SSLSPIA method is feasible.

• The SSLSPIA method has a higher convergence rate than the MLSPIA method when the collocation matrix has full column rank [24].

• The SSLSPIA method conserves the MLSPIA benefits, including the ability to handle a large number of data points, freedom in the number of starting control points, the shape-conserving property, and parallel execution.

• The SSLSPIA method uses semi-constant knots for the fitting curve, which differ from the knots in the MLSPIA method.

• The MLSPIA method is the special case of the SSLSPIA method with constant weight sequences.


The article is organized as follows. Section 2 gives a brief overview of related work. Section 3 proposes an iterative format and a convergence analysis for the SSLSPIA method. Section 4 performs the implementation, which includes three examples, and Section 5 describes applications of SSLSPIA. Finally, Section 6 presents the conclusion.

2. Related work

Progressive Iterative Approximation (PIA) methods feature iterative forms similar to geometric interpolation (GI) methods, as illustrated in [25, 26] and other examples. The PIA methods rely on parametric distance identically to the GI methods. The initial discoveries were made in the 1970s by de Boor [27, 28] and Yamaguchi [29]. Each version of the PIA method creates a sequence of curves by adjusting parameters, without directly solving a linear system.

Assume that $\{Q_j\}_{j=1}^{m}$ is an ordered point set to be fitted and $\{0 = t_1 < t_2 < t_3 < \cdots < t_m = 1\}$ is an increasing sequence of parameters for $\{Q_j\}_{j=1}^{m}$. At the beginning of the iteration, we select $\{P_i^0\}_{i=1}^{n}$ from $\{Q_j\}_{j=1}^{m}$ to be the control point set and generate a segment of blending curve $C^0(t)$, i.e.,

$$C^0(t) = \sum_{i=1}^{n} B_i(t) P_i^0, \quad t \in [t_1, t_m], \tag{2.1}$$

where $\{B_i(t)\}_{i=1}^{n}$ is a normalized totally positive (NTP) basis of real functions that is used as the blending basis.

We know that the basis is NTP if $B_i(t) \ge 0$ for every $i \in \{1,2,3,\dots,n\}$ and $\sum_{i=1}^{n} B_i(t) = 1$ hold for each $t \in [0,1]$. The collocation matrix of an NTP basis on any non-decreasing parameter sequence is a totally positive matrix, meaning that every minor of the matrix is non-negative [30, 31]. On $0 = t_1 < t_2 < t_3 < \cdots < t_m = 1$, the collocation matrix of the NTP blending basis is

$$B_{m \times n} = \begin{pmatrix} B_1(t_1) & B_2(t_1) & \cdots & B_n(t_1) \\ B_1(t_2) & B_2(t_2) & \cdots & B_n(t_2) \\ \vdots & \vdots & \ddots & \vdots \\ B_1(t_m) & B_2(t_m) & \cdots & B_n(t_m) \end{pmatrix}. \tag{2.2}$$
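As a concrete illustration of Eq. (2.2), the following sketch builds the collocation matrix of a clamped cubic B-spline basis by the Cox-de Boor recursion. This is a minimal NumPy version written for this note; the knot vector and parameter values are illustrative choices, not the semi-constant knots used later in the paper.

```python
import numpy as np

def bspline_basis(i, p, knots, t):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        # half-open interval, closed at the right end of the domain
        if knots[i] <= t < knots[i + 1]:
            return 1.0
        if t == knots[-1] and knots[i] < t <= knots[i + 1]:
            return 1.0
        return 0.0
    left = 0.0
    if knots[i + p] > knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, t)
    right = 0.0
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, knots, t)
    return left + right

def collocation_matrix(params, knots, p):
    """Collocation matrix B[j, i] = B_i(t_j) of Eq. (2.2)."""
    n = len(knots) - p - 1                 # number of basis functions
    B = np.zeros((len(params), n))
    for j, t in enumerate(params):
        for i in range(n):
            B[j, i] = bspline_basis(i, p, knots, t)
    return B

# clamped cubic B-spline on [0, 1] with n = 6 control points (illustrative)
p = 3
knots = np.array([0, 0, 0, 0, 1/3, 2/3, 1, 1, 1, 1])
t = np.linspace(0, 1, 8)
B = collocation_matrix(t, knots, p)
```

Each row of the resulting matrix sums to one and all entries are non-negative, which is exactly the NTP property of the basis.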

With a point set $\{Q_j\}_{j=1}^{m}$ and a starting control point set $\{P_i^0\}_{i=1}^{n}$, the PIA method given in [23], the LSPIA method in [8] and the MLSPIA method in [24] approximate the curve with the NTP basis by the following curves repetitively:

๐ถ๐‘˜+1(๐‘ก)= ร•๐‘› ๐‘–=1

๐ต๐‘–(๐‘ก)๐‘ƒ๐‘–๐‘˜+1, ๐‘กโˆˆ [0,1], ๐‘˜โ‰ฅ0, (2.3)

the control points are adjusted

๐‘ƒ๐‘–๐‘˜+1=๐‘ƒ๐‘–๐‘˜+ฮ”๐‘˜๐‘–, ๐‘–โˆˆ {1,2,3, . . . , ๐‘›}, ๐‘˜โ‰ฅ0, (2.4)

where adjusting the vector

ฮ”๐‘–๐‘˜=

(๐‘„๐‘–โˆ’๐ถ๐‘˜(๐‘ก๐‘–), for PIA method, ๐œ‡ร๐‘š

๐‘—=1๐ต๐‘–(๐‘ก๐‘—) (๐‘„๐‘—โˆ’๐ถ๐‘˜(๐‘ก๐‘—)),for LSPIA method,

๐œ‡is a constant, ๐‘– โˆˆ {1,2,3, . . . , ๐‘›}, ๐‘˜ โ‰ฅ 0, and adjusting the vector for MLSPIA method

๏ฃฑ๏ฃด๏ฃด๏ฃด๏ฃฒ

๏ฃด๏ฃด๏ฃด๏ฃณ

ฮ”0๐‘– =๐œˆร๐‘š

๐‘—=1๐ต๐‘–(๐‘ก๐‘—)ฮฆ0๐‘—,

ฮ”๐‘˜๐‘– =(1โˆ’๐œ”)ฮ”๐‘–๐‘˜โˆ’1+๐›พ ๐›ฟ๐‘–๐‘˜+ (๐œ”โˆ’๐›พ)๐›ฟ๐‘–๐‘˜โˆ’1, ๐‘˜โ‰ฅ1, ๐›ฟ๐‘–๐‘˜=๐œˆร๐‘š

๐‘—=1๐ต๐‘–(๐‘ก๐‘—) (๐‘„๐‘—โˆ’๐ถ๐‘˜(๐‘ก๐‘—)), ๐‘˜โ‰ฅ0,

byฮฆ0๐‘—is a vector where{๐‘„๐‘—}๐‘š๐‘—=1is located for every ๐‘— โˆˆ {1,2,3, . . . , ๐‘š}, and ๐œ”, ๐›พ, ๐œˆ are real weights.

The PIA method in the format of Eqs. (2.3)-(2.4), described and originally named in [32], depends on the notion of profit and loss modification as proposed in [27, 28], and is based on the non-uniform B-spline basis. In [33], the PIA approach of [23] evolved into a weighted PIA method, which accelerates convergence.

In the conventional PIA format, however, the number of control points is equal to the number of input points. This is not feasible when the number of input points is very large. A new extended PIA (EPIA) format was introduced in [34], in which the number of control points is less than the number of input points. Because of its local property and parallel processing capabilities, the extended PIA is an ideal approach for fitting an extensive amount of data.

In addition, [35] expanded the method to approximate NURBS curves. According to the local property of the PIA method [36], if a subset of the control points is repeatedly updated, the limit curve interpolates only the corresponding subset of input points, leaving the others unchanged. The PIA methods converge with appropriate weights and have the properties of convexity conservation, as well as the explicit expression of curves, locality, and adaptivity. By altering only the control points which affect a given segment to lower the cost [23, 32], the locality and adaptivity improve the probability that the resulting curve segment approximation is accurate.

More details are shown in [13].

The LSPIA method given in [8] is primarily designed for large sets of data and inherits the benefits of PIA methods in the form of Eqs. (2.3)-(2.4). The LSPIA method has two distinct advantages. First, the LSPIA method can fit much larger sets quickly and reliably. Second, when using the LSPIA method for incremental data fitting, a new iteration may be started from the result of the previous fit, saving significant processing. It was constructed with T-splines in [7]. Then, in [10], the B-spline basis is replaced with the generalized B-spline basis to get the weighted LSF curve. In the nonsingular situation, the LSPIA method [8] converges, whereas [11] proves convergence in the singular case.

The PIA method with memory for least square fitting (MLSPIA) is an effective LSF method for solving linear systems; it is a least square version of the PIA method [23] and the LSPIA method [8], created in [24] to enhance the convergence rate. It repeatedly estimates control points and creates a sequence of converging curves to arrive at a least square fitting solution. It is a way to obtain a suitable solution that maintains the properties by maintaining the form. The construction of the MLSPIA method in the format of Eqs. (2.3)-(2.4), given in [24], is analogous to that of the LSPIA method in [8], except for the difference of $\Delta_i^k$ for each possible $i$ and $k$. For appropriate weights $\omega, \nu, \gamma$, it also converges to the LSF curve of $\{Q_j\}_{j=1}^{m}$.

In this article, we extend the MLSPIA method by developing its three real weights into three real sequences of weights. The resulting method also converges to the LSF curve. Furthermore, for the cubic B-spline fitting curve, we use semi-constant knots, which differ from the knots used in the LSPIA and MLSPIA methods.

3. The SSLSPIA method

We now present the SSLSPIA method, a progressive iterative approximation method for the LSF curve with memory and sequences of weights, and analyze its convergence rate.

With the provided data point set $\{Q_j\}_{j=1}^{m}$ and starting control point set $\{P_i^0\}_{i=1}^{n}$ with $m > n$, the NTP basis $B_1(t), B_2(t), B_3(t), \dots, B_n(t)$ and the sequence $0 = t_1 < t_2 < t_3 < \cdots < t_m = 1$, the PIA method [23], LSPIA method [8] and MLSPIA method [24] iteratively approximate the curve by Eqs. (2.3)-(2.4) in the vector space where $\{Q_j\}_{j=1}^{m}$ is located.

The SSLSPIA method iteratively approximates the curve using Eqs. (2.3)-(2.4), resetting the control points at each step with the adjusting vector obtained from the previous curve:

$$\begin{cases} \Delta_i^0 = \nu_0 \sum_{j=1}^{m} B_i(t_j) \Phi_j^0, \\ \Delta_i^k = (1-\omega_k) \Delta_i^{k-1} + \gamma_k \delta_i^k + (\omega_k - \gamma_k) \delta_i^{k-1}, & k \ge 1, \\ \delta_i^k = \nu_k \sum_{j=1}^{m} B_i(t_j) \left( Q_j - C^k(t_j) \right), & k \ge 0, \end{cases} \tag{3.1}$$

where $\Phi_j^0$ is a point in the vector space in which $\{Q_j\}_{j=1}^{m}$ is located, for every $j \in \{1,2,3,\dots,m\}$, and $\omega_k, \gamma_k, \nu_k$ are real sequences of weights.

It is called the SSLSPIA method as its construction develops the three real weights $\omega, \gamma, \nu$ of the MLSPIA method defined in [24] into three real sequences of weights $\omega_k, \gamma_k, \nu_k$. It also converges to the LSF curve of $\{Q_j\}_{j=1}^{m}$ for suitable sequences of weights $\omega_k, \nu_k, \gamma_k$, which we discuss later. It is a memory method because, when we use Eq. (3.1) to compute $\Delta_i^k$, we have to store and utilize $\Delta_i^{k-1}$ and $\delta_i^{k-1}$ from the preceding curve, as well as compute $\delta_i^k$, for every $k \ge 1$ and $i \in \{1,2,3,\dots,n\}$. Furthermore, as shown in Section 4, we use semi-constant knots for the cubic B-spline fitting curve, which differ from the knots used in the LSPIA and MLSPIA methods. The SSLSPIA method has a higher convergence rate than the MLSPIA method of [24].

An equivalent formulation of the SSLSPIA method developed by Eqs. (2.3)-(3.1) is first given. The proof follows line by line the proof in [24].

Lemma 3.1. Let $\{P_i^0\}_{i=1}^{n}$ be a starting control point set, let $\{\Phi_j^0\}_{j=1}^{m}$ be a point set, and let $P_i^k$, $i \in \{1,2,3,\dots,n\}$, $k \ge 0$, be constructed by Eqs. (2.4) and (3.1). Then they can be rewritten as

$$P_i^{k+1} = P_i^k + \nu_k \sum_{j=1}^{m} B_i(t_j) \Phi_j^k, \quad i \in \{1,2,3,\dots,n\}, \ k \ge 0, \tag{3.2}$$

where

$$\Phi_j^{k+1} = (1-\omega_k) \Phi_j^k + \omega_k Q_j - \sum_{i=1}^{n} B_i(t_j) \left[ \omega_k P_i^k + \gamma_k \nu_k \sum_{j_1=1}^{m} B_i(t_{j_1}) \Phi_{j_1}^k \right], \quad k \ge 0. \tag{3.3}$$

Proof. It is simple to affirm that Eq. (3.2) holds for $k = 0$ via Eqs. (2.4) and (3.1). Assume that Eq. (3.2) holds true for some $k \ge 0$. As a consequence, for each $i \in \{1,2,3,\dots,n\}$,

$$\begin{aligned} \omega_k \delta_i^k + \gamma_k (\delta_i^{k+1} - \delta_i^k) &= \nu_k \omega_k \sum_{j=1}^{m} B_i(t_j) \left( Q_j - C^k(t_j) \right) - \nu_k \gamma_k \sum_{j=1}^{m} B_i(t_j) \left( C^{k+1}(t_j) - C^k(t_j) \right) \\ &= \nu_k \omega_k \sum_{j=1}^{m} B_i(t_j) \left( Q_j - \sum_{i=1}^{n} B_i(t_j) P_i^k \right) - \nu_k \gamma_k \sum_{j=1}^{m} B_i(t_j) \sum_{i=1}^{n} B_i(t_j) \left( P_i^{k+1} - P_i^k \right) \end{aligned}$$

holds by Eq. (3.1). We then get by Eqs. (2.4) and (3.1), for every $i \in \{1,2,3,\dots,n\}$, that

$$\begin{aligned} \Delta_i^{k+1} &= (1-\omega_k) \Delta_i^k + \omega_k \delta_i^k + \gamma_k (\delta_i^{k+1} - \delta_i^k) \\ &= (1-\omega_k) \left( P_i^{k+1} - P_i^k \right) + \nu_k \omega_k \sum_{j=1}^{m} B_i(t_j) \left( Q_j - \sum_{i=1}^{n} B_i(t_j) P_i^k \right) - \nu_k \gamma_k \sum_{j=1}^{m} B_i(t_j) \sum_{i=1}^{n} B_i(t_j) \left( P_i^{k+1} - P_i^k \right) \\ &= (1-\omega_k) \nu_k \sum_{j=1}^{m} B_i(t_j) \Phi_j^k + \nu_k \omega_k \sum_{j=1}^{m} B_i(t_j) \left( Q_j - \sum_{i=1}^{n} B_i(t_j) P_i^k \right) - \nu_k^2 \gamma_k \sum_{j=1}^{m} B_i(t_j) \sum_{i=1}^{n} B_i(t_j) \sum_{j_1=1}^{m} B_i(t_{j_1}) \Phi_{j_1}^k \\ &= \nu_k \sum_{j=1}^{m} B_i(t_j) \Phi_j^{k+1}, \end{aligned}$$

if we set

$$\Phi_j^{k+1} = (1-\omega_k) \Phi_j^k + \omega_k Q_j - \sum_{i=1}^{n} B_i(t_j) \left[ \omega_k P_i^k + \gamma_k \nu_k \sum_{j_1=1}^{m} B_i(t_{j_1}) \Phi_{j_1}^k \right].$$

It follows that, for $i \in \{1,2,3,\dots,n\}$,

$$P_i^{k+2} = P_i^{k+1} + \Delta_i^{k+1} = P_i^{k+1} + \nu_k \sum_{j=1}^{m} B_i(t_j) \Phi_j^{k+1},$$

which means that Eq. (3.2) holds for $k+1$. By mathematical induction, Eq. (3.2) holds for every $k \ge 0$. □

Denote

$$\Phi^k = \left[ \Phi_1^k, \Phi_2^k, \Phi_3^k, \dots, \Phi_m^k \right]^T, \quad P^k = \left[ P_1^k, P_2^k, P_3^k, \dots, P_n^k \right]^T, \quad k \ge 0, \tag{3.4}$$

and

$$Q = \left[ Q_1, Q_2, Q_3, \dots, Q_m \right]^T, \quad B = \left( B_i(t_j) \right)_{m \times n}, \tag{3.5}$$

where $P_i^k$, $i \in \{1,2,3,\dots,n\}$, and $\Phi_j^k$, $j \in \{1,2,3,\dots,m\}$, are given by Eqs. (3.2)-(3.3), respectively, for every $k \ge 0$. Therefore Eqs. (3.2)-(3.3) can be written as

$$\begin{cases} \Phi^{k+1} = \left[ (1-\omega_k) I_m - \gamma_k \nu_k B B^T \right] \Phi^k + \omega_k (Q - B P^k), \\ P^{k+1} = \nu_k B^T \Phi^k + P^k, \end{cases} \quad k \ge 0, \tag{3.6}$$

or, equivalently,

$$\begin{bmatrix} \Phi^{k+1} \\ P^{k+1} \end{bmatrix} = H_{\omega_k,\gamma_k,\nu_k} \begin{bmatrix} \Phi^k \\ P^k \end{bmatrix} + C_{\omega_k,\gamma_k,\nu_k}, \quad k \ge 0, \tag{3.7}$$

where

$$H_{\omega_k,\gamma_k,\nu_k} = \begin{bmatrix} (1-\omega_k) I_m - \gamma_k \nu_k B B^T & -\omega_k B \\ \nu_k B^T & I_n \end{bmatrix}, \tag{3.8}$$

and

$$C_{\omega_k,\gamma_k,\nu_k} = \begin{bmatrix} \omega_k Q \\ 0 \end{bmatrix}.$$
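The coupled iteration (3.6) is straightforward to run. Below is a minimal NumPy sketch written for this note; the weight schedule, the initial control points, and the choice $\Phi^0 = Q - BP^0$ are illustrative assumptions, and constant weights recover the MLSPIA method.

```python
import numpy as np

def sslspia_fit(B, Q, weights, iters=300):
    """SSLSPIA in the matrix form of Eq. (3.6); `weights` maps k -> (omega_k, gamma_k, nu_k)."""
    m, n = B.shape
    P = Q[np.linspace(0, m - 1, n).astype(int)].copy()   # starting control points from the data
    Phi = Q - B @ P                                       # an illustrative choice of Phi^0
    for k in range(iters):
        w, g, v = weights(k)
        # both updates use the old (Phi, P); the tuple is evaluated before assignment
        Phi, P = ((1 - w) * Phi + w * (Q - B @ P) - g * v * (B @ (B.T @ Phi)),
                  v * (B.T @ Phi) + P)
    return P

B = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0], [0.25, 0.75]])
Q = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [1.5, 0.5]])
# constant weights chosen to satisfy Eq. (3.9) for this B (sigma_1^2 is about 2.07)
P = sslspia_fit(B, Q, weights=lambda k: (1.0, 0.8, 0.4))
```

With full column rank and weights satisfying Eq. (3.9) at every step, $P^k$ converges to the least-squares control points, here matching $(B^T B)^{-1} B^T Q$.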

Eq. (3.7) can be thought of as a matrix iterative format. As a result, we analyze the convergence of $\begin{bmatrix} \Phi^k \\ P^k \end{bmatrix}$ provided by Eq. (3.7) from $\begin{bmatrix} \Phi^0 \\ P^0 \end{bmatrix}$ while considering the curves generated by the SSLSPIA method.

Assumption 1. Assume that $m > n$ and $B$ are given by Eq. (3.5), and that $B$ has $\mathrm{rank}(B) = r$ with singular values $\sigma_1 \ge \sigma_2 \ge \sigma_3 \ge \cdots \ge \sigma_r > 0$. Suppose that

$$0 < \omega_k < 2, \qquad \omega_k - \frac{\omega_k}{\sigma_1^2 \nu_k} < \gamma_k < \frac{\omega_k}{2} - \frac{\omega_k - 2}{\sigma_1^2 \nu_k}, \qquad \nu_k > 0. \tag{3.9}$$
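Condition (3.9) is cheap to check per step once $\sigma_1$ is known. The following small helper is a sketch written for this note, not part of the paper:

```python
def weights_admissible(omega, gamma, nu, sigma1):
    """Return True when (omega, gamma, nu) satisfy the per-step conditions of Eq. (3.9)."""
    if not (0.0 < omega < 2.0 and nu > 0.0):
        return False
    s = sigma1 ** 2 * nu          # sigma_1^2 * nu_k
    return omega - omega / s < gamma < omega / 2.0 - (omega - 2.0) / s
```

For example, with $\sigma_1 = 1.5$, $\omega_k = 1$ and $\nu_k = 0.5$, the admissible interval for $\gamma_k$ is roughly $(0.11, 1.39)$.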

Lemma 3.2. Under Assumption 1, suppose that $\Sigma_r = \mathrm{diag}(\sigma_1, \sigma_2, \sigma_3, \dots, \sigma_r)$ with $\sigma_1 \ge \sigma_2 \ge \sigma_3 \ge \cdots \ge \sigma_r > 0$. Then $\lambda$ is an eigenvalue of

$$\hat{H}_{\omega_k,\gamma_k,\nu_k} = \begin{pmatrix} (1-\omega_k) I_r - \gamma_k \nu_k \Sigma_r^2 & 0 & -\omega_k \Sigma_r \\ 0 & (1-\omega_k) I_{m-r} & 0 \\ \nu_k \Sigma_r & 0 & I_r \end{pmatrix}, \tag{3.10}$$

if and only if $\lambda = 1 - \omega_k$ or $\lambda$ is a solution of one of the equations

$$\lambda^2 + \left[ \gamma_k \nu_k \sigma_i^2 - (2 - \omega_k) \right] \lambda + \sigma_i^2 \nu_k (\omega_k - \gamma_k) + 1 - \omega_k = 0, \tag{3.11}$$

where $i \in \{1,2,3,\dots,r\}$.

Proof. Because the matrices $H_{\omega_k,\gamma_k,\nu_k}$ and $\hat{H}_{\omega_k,\gamma_k,\nu_k}$ are orthogonally similar, their eigenvalues are the same. We consider

$$\begin{aligned} \left| \hat{H}_{\omega_k,\gamma_k,\nu_k} - \lambda I_{m+r} \right| &= \begin{vmatrix} (1-\omega_k-\lambda) I_r - \gamma_k \nu_k \Sigma_r^2 & 0 & -\omega_k \Sigma_r \\ 0 & (1-\omega_k-\lambda) I_{m-r} & 0 \\ \nu_k \Sigma_r & 0 & (1-\lambda) I_r \end{vmatrix} \\ &= (1-\omega_k-\lambda)^{m-r} \prod_{i=1}^{r} \left[ (1-\lambda)(1-\omega_k-\lambda-\gamma_k \nu_k \sigma_i^2) + \omega_k \nu_k \sigma_i^2 \right]. \end{aligned}$$

So $\lambda$ is an eigenvalue of $\hat{H}_{\omega_k,\gamma_k,\nu_k}$ if and only if $\lambda = 1 - \omega_k$ or

$$\prod_{i=1}^{r} \left[ (1-\lambda)(1-\omega_k-\lambda-\gamma_k \nu_k \sigma_i^2) + \omega_k \nu_k \sigma_i^2 \right] = 0.$$

Therefore, $\lambda = 1 - \omega_k$ or $\lambda$ is a solution of one of the equations in Eq. (3.11). □

Lemma 3.3. [37] The solutions of the real quadratic equation $\lambda^2 - b\lambda + c = 0$ are smaller than unity in modulus if and only if $|c| < 1$ and $|b| < 1 + c$.

Lemma 3.4. Under Assumption 1, $\rho(\hat{H}_{\omega_k,\gamma_k,\nu_k}) < 1$, where $\rho(\hat{H}_{\omega_k,\gamma_k,\nu_k})$ denotes the spectral radius of $\hat{H}_{\omega_k,\gamma_k,\nu_k}$ given by Eq. (3.10), if and only if the weights $\omega_k, \gamma_k, \nu_k$ satisfy Eq. (3.9).
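Lemma 3.4 can be observed numerically: assembling $H_{\omega_k,\gamma_k,\nu_k}$ from Eq. (3.8) for weights inside the region (3.9) yields a spectral radius below one. The sketch below, written for this note, uses constant weights and a small illustrative full-column-rank $B$:

```python
import numpy as np

def iteration_matrix(B, omega, gamma, nu):
    """Assemble H_{omega, gamma, nu} of Eq. (3.8)."""
    m, n = B.shape
    top = np.hstack([(1 - omega) * np.eye(m) - gamma * nu * (B @ B.T), -omega * B])
    bottom = np.hstack([nu * B.T, np.eye(n)])
    return np.vstack([top, bottom])

B = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0], [0.25, 0.75]])
sigma1 = np.linalg.svd(B, compute_uv=False)[0]
omega, nu = 1.0, 0.4
gamma = 0.8                       # inside the interval of Eq. (3.9) for this B
H = iteration_matrix(B, omega, gamma, nu)
rho = max(abs(np.linalg.eigvals(H)))
```

Here `rho` comes out well below one, consistent with Lemma 3.4, so the iteration (3.7) with these fixed weights contracts toward its fixed point.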


Proof. For given weights $\omega_k, \gamma_k, \nu_k$, by Lemma 3.3 all the solutions of the equations in Eq. (3.11) are smaller than unity in modulus if and only if

$$\begin{cases} \left| \sigma_i^2 \nu_k (\omega_k - \gamma_k) + 1 - \omega_k \right| < 1, & i \in \{1,2,3,\dots,r\}, \\ \left| \gamma_k \nu_k \sigma_i^2 - (2 - \omega_k) \right| < 2 - \omega_k + \sigma_i^2 \nu_k (\omega_k - \gamma_k). \end{cases} \tag{3.12}$$

Then, by Lemma 3.2, $\rho(\hat{H}_{\omega_k,\gamma_k,\nu_k}) < 1$ if and only if $|\omega_k - 1| < 1$ and Eq. (3.12) holds. Equivalently, the solutions of the equations in Eq. (3.11) are smaller than unity in modulus if and only if

$$\begin{cases} 0 < \omega_k < 2, \\ \sigma_i^2 \nu_k \omega_k > 0, \\ \omega_k - 2 < \sigma_i^2 \nu_k (\omega_k - \gamma_k) < \omega_k, \\ \sigma_i^2 \nu_k (\omega_k - 2\gamma_k) > 2(\omega_k - 2), \end{cases} \quad i \in \{1,2,3,\dots,r\}. \tag{3.13}$$

Eq. (3.9) is equivalent to Eq. (3.13) because $\sigma_1^2 \ge \sigma_2^2 \ge \sigma_3^2 \ge \cdots \ge \sigma_r^2 > 0$. □

Theorem 3.5. Suppose that Assumption 1 holds. Then, for every starting control point set $\{P_i^0\}_{i=1}^{n}$ and every point set $\{\Phi_j^0\}_{j=1}^{m}$, the curve sequence created by the SSLSPIA method, given by Eq. (2.3), Eq. (2.4) and Eq. (3.1) with weights $\omega_k, \gamma_k, \nu_k$, converges to the LSF curve of the given $\{Q_j\}_{j=1}^{m}$.

Proof. We first rewrite the iteration matrix $H_{\omega_k,\gamma_k,\nu_k}$ given by Eq. (3.8). Assume that the singular value decomposition (SVD) of $B$ is

$$B = U \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix} V^T, \tag{3.14}$$

where $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthogonal matrices and $\Sigma_r = \mathrm{diag}(\sigma_1, \sigma_2, \sigma_3, \dots, \sigma_r)$ with $\sigma_1 \ge \sigma_2 \ge \sigma_3 \ge \cdots \ge \sigma_r > 0$. Let

$$W = \begin{bmatrix} U & 0 \\ 0 & V \end{bmatrix}, \tag{3.15}$$

then $W \in \mathbb{R}^{(m+n) \times (m+n)}$ is an orthogonal matrix. Next, since

$$U^T B V = \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix} \in \mathbb{R}^{m \times n}, \quad V^T B^T U = \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix} \in \mathbb{R}^{n \times m},$$

and

$$U^T B B^T U = \begin{bmatrix} \Sigma_r^2 & 0 \\ 0 & 0 \end{bmatrix} \in \mathbb{R}^{m \times m},$$

it holds that

$$W^T H_{\omega_k,\gamma_k,\nu_k} W = \begin{bmatrix} \hat{H}_{\omega_k,\gamma_k,\nu_k} & 0 \\ 0 & I_{n-r} \end{bmatrix}, \tag{3.16}$$

where

$$\hat{H}_{\omega_k,\gamma_k,\nu_k} = \begin{pmatrix} (1-\omega_k) I_r - \gamma_k \nu_k \Sigma_r^2 & 0 & -\omega_k \Sigma_r \\ 0 & (1-\omega_k) I_{m-r} & 0 \\ \nu_k \Sigma_r & 0 & I_r \end{pmatrix}. \tag{3.17}$$

We focus on the convergence of the sequence $\{P^k\}_{k=0}^{\infty}$, because proving that the curve sequence $\{C^k(t)\}_{k=0}^{\infty}$ generated by the SSLSPIA method converges to an LSF curve is equivalent to proving that the iterative sequence $\{P^k\}_{k=0}^{\infty}$ created by Eq. (3.6) converges to a solution of $B^T B X = B^T Q$, where $B$ and $Q$ are given by Eq. (3.5). For any weights $\omega_k, \gamma_k, \nu_k$ satisfying Eq. (3.9), let $\alpha = Q - B\beta$, where $\beta$ is a solution of $B^T B X = B^T Q$; then it can be simply affirmed that $(\alpha^T \ \beta^T)^T$ is a solution of the equation

$$(I_{m+n} - H_{\omega_k,\gamma_k,\nu_k}) \, (x^T \ y^T)^T = C_{\omega_k,\gamma_k,\nu_k}, \tag{3.18}$$

where $H_{\omega_k,\gamma_k,\nu_k}$ and $C_{\omega_k,\gamma_k,\nu_k}$ are given in Eq. (3.8). That means the equation Eq. (3.18) is consistent. Let $\begin{bmatrix} \Phi \\ P \end{bmatrix}$ be a solution of Eq. (3.18), i.e.,

$$(I_{m+n} - H_{\omega_k,\gamma_k,\nu_k}) \begin{bmatrix} \Phi \\ P \end{bmatrix} = C_{\omega_k,\gamma_k,\nu_k}. \tag{3.19}$$
