

In the document Inference, Computation, and Games (pages 104-111)

Chapter V: Incomplete Cholesky factorization

5.4 Proof of stability of ICHOL(0)

Theorem 17 implies that the application of Algorithm 3 to a suitable $\mathcal{O}(\epsilon)$-perturbation $\Theta - E$ returns an $\mathcal{O}(\epsilon)$-accurate Cholesky factorization of $\Theta$ in computational complexity $\mathcal{O}(N \log^2(N) \log^{2d}(N/\epsilon))$. In practice, we do not have access to $E$, so we need to rely on the stability of Algorithm 3 to deduce that $\Theta$ and $\Theta - E$ (used as inputs) would yield similar outputs for sufficiently small $E$. Even though such a stability property of ICHOL(0) would also be required by prior works on incomplete LU-factorization such as [94], we did not find this type of result in the literature. We also found it surprisingly difficult to prove (and were unable to do so) when using the maximin ordering and sparsity pattern, although we always observed stability of Algorithm 3 in practice, for reasonable values of $\rho$.

The key problem is that the standard perturbation bounds for Schur complements are multiplicative. Therefore, applying them $N$ times (after each elimination) results in a possible growth of the approximation error that is exponential in $N$ and cannot be compensated for by a logarithmic increase in $\rho$.

However, we have already seen in Section 5.2 that when using a supernodal multicolor ordering, the incomplete Cholesky factorization can be expressed in a smaller number of groups of independent dense linear algebra operations. In this section, we are going to prove rigorously that the number of colors used by the multicolor ordering is upper bounded as $\mathcal{O}(\log(N))$ and that this allows us to control the approximation of the supernodal factorization by only invoking $\mathcal{O}(\log(N))$ Schur complement perturbation bounds. Therefore, the error amplification is polynomial in $N$ and can be controlled by choosing $\rho \gtrsim \log(N)$. By relating ordinary and supernodal Cholesky factorization, we are able to deduce the same error bounds for the ordinary Cholesky factorization when using a supernodal multicolor ordering and sparsity pattern.

5.4.2 Revisiting the supernodal multicolor ordering

We begin by reintroducing the supernodal multicolor ordering of Section 5.2 in slightly different notation.

Forπ‘Ÿ >0, 1 ≀ π‘˜ ≀ π‘ž and𝑖 ∈𝐽(π‘˜), write

π΅π‘Ÿ(π‘˜) (𝑖) B {𝑗 ∈𝐽(π‘˜) |𝑑(𝑖, 𝑗) β‰€π‘Ÿ}. (5.11) Construction 3(Supernodal multicolor ordering and sparsity pattern). LetΘ∈R𝐼×𝐼 with 𝐼 B Ð

1β‰€π‘˜β‰€π‘žπ½(π‘˜) and let 𝑑( Β·, Β· ) be a hierarchical pseudometric. For 𝜌 β‰₯ 1, define thesupernodal multicolor orderingβ‰ΊπœŒandsparsity patternπ‘†πœŒas follows. For

each π‘˜ ∈ {1, . . . , π‘ž}, select a subset𝐽˜(π‘˜) βŠ‚ 𝐽(π‘˜) of indices such that

βˆ€π‘–,˜ π‘—Λœβˆˆ 𝐽˜(π‘˜), π‘–Λœβ‰  π‘—Λœ =β‡’ 𝐡(π‘˜)

𝜌/2 π‘–Λœ

∩𝐡(π‘˜)

𝜌/2 π‘—Λœ

=βˆ…, (5.12)

βˆ€π‘– ∈𝐽(π‘˜), βˆƒΛœπ‘– ∈ 𝐽˜(π‘˜) :𝑖 ∈ 𝐡(

π‘˜) 𝜌 π‘–Λœ

. (5.13)

Assign every index in $J^{(k)}$ to the element of $\tilde{J}^{(k)}$ closest to it, using an arbitrary method to break ties. That is, writing $j \rightsquigarrow \tilde{j}$ for the assignment of $j$ to $\tilde{j}$,
\[
\tilde{j} \in \operatorname*{arg\,min}_{\tilde{j}' \in \tilde{J}^{(k)}} d(j, \tilde{j}'), \tag{5.14}
\]
for all $j \in J^{(k)}$ and $\tilde{j} \in \tilde{J}^{(k)}$ such that $j \rightsquigarrow \tilde{j}$. Define $\tilde{I} \coloneqq \bigcup_{1 \le k \le q} \tilde{J}^{(k)}$ and define the auxiliary sparsity pattern $\tilde{S}_\rho \subset \tilde{I} \times \tilde{I}$ by
\[
\tilde{S}_\rho \coloneqq \big\{ (\tilde{i}, \tilde{j}) \in \tilde{I} \times \tilde{I} \ \big|\ \exists\, i \rightsquigarrow \tilde{i},\ j \rightsquigarrow \tilde{j} : d(i, j) \le \rho \big\}. \tag{5.15}
\]
Define the sparsity pattern $S_\rho \subset I \times I$ as
\[
S_\rho \coloneqq \big\{ (i, j) \in I \times I \ \big|\ \exists\, \tilde{i}, \tilde{j} \in \tilde{I} : i \rightsquigarrow \tilde{i},\ j \rightsquigarrow \tilde{j},\ (\tilde{i}, \tilde{j}) \in \tilde{S}_\rho \big\} \tag{5.16}
\]
and call the elements of $\tilde{J}^{(k)}$ supernodes. Color each $\tilde{j} \in \tilde{J}^{(k)}$ in one of $p^{(k)}$ colors such that no $\tilde{i}, \tilde{j} \in \tilde{J}^{(k)}$ with $(\tilde{i}, \tilde{j}) \in \tilde{S}_\rho$ have the same color. For $i \in J^{(k)}$, write $\operatorname{node}(i)$ for the $\tilde{i} \in \tilde{J}^{(k)}$ such that $i \rightsquigarrow \tilde{i}$ and write $\operatorname{color}(\tilde{i})$ for the color of $\tilde{i}$. Define the supernodal multicolor ordering $\prec_\rho$ by reordering the elements of $I$ such that

(1) $i \prec_\rho j$ for $i \in J^{(k)}$, $j \in J^{(l)}$ and $k < l$;

(2) within each level $J^{(k)}$, we order the elements of supernodes colored in the same color consecutively, i.e. given $i, j \in J^{(k)}$ such that $\operatorname{color}(\operatorname{node}(i)) \ne \operatorname{color}(\operatorname{node}(j))$, $i \prec_\rho j \implies i' \prec_\rho j'$ for all $i', j' \in J^{(k)}$ with $\operatorname{color}(\operatorname{node}(i')) = \operatorname{color}(\operatorname{node}(i))$ and $\operatorname{color}(\operatorname{node}(j')) = \operatorname{color}(\operatorname{node}(j))$; and

(3) the elements of each supernode appear consecutively, i.e. given $i, j \in J^{(k)}$ such that $\operatorname{node}(i) \ne \operatorname{node}(j)$, $i \prec_\rho j \implies i' \prec_\rho j'$ for all $i', j'$ with $\operatorname{node}(i') = \operatorname{node}(i)$ and $\operatorname{node}(j') = \operatorname{node}(j)$.

Starting from a hierarchical ordering and sparsity pattern, the modified ordering and sparsity pattern can be obtained efficiently:

Lemma 14. In the setting of Examples 1 and 2, given $\{(i, j) \mid d(i, j) \le \rho\}$, there exist constants $C$ and $p_{\max}$ depending only on the dimension $d$ and the cost of computing $d(\cdot, \cdot)$ such that the ordering and sparsity pattern presented in Construction 3 can be constructed with $p^{(k)} \le p_{\max}$, for each $1 \le k \le q$, in computational complexity $C q \rho^d N$.

Proof. The aggregation into supernodes can be done via a greedy algorithm that keeps track of all nodes not yet within distance $\rho/2$ of a supernode and removes them one at a time: each removed node becomes a new supernode, and we then sweep through its $\rho$-neighbourhood and remove all points within distance $\rho/2$ from the list of candidates for future supernodes.
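To make the greedy aggregation concrete, here is a minimal NumPy sketch (the function names and the Euclidean stand-in for $d(\cdot,\cdot)$ are illustrative assumptions, not the thesis implementation). It selects a maximal $\rho$-separated set of centers, one simple variant that satisfies both (5.12) and (5.13):

```python
import numpy as np

def select_supernodes(points, rho):
    """Greedily pick a maximal rho-separated subset of `points`.

    Centers are pairwise more than rho apart, so balls of radius rho/2
    around distinct centers are disjoint (condition (5.12)), and every
    point lies within rho of some center (condition (5.13))."""
    points = np.asarray(points, dtype=float)
    candidates = list(range(len(points)))
    centers = []
    while candidates:
        c = candidates.pop(0)  # any remaining candidate works
        centers.append(c)
        # drop every candidate within distance rho of the new center
        candidates = [j for j in candidates
                      if np.linalg.norm(points[j] - points[c]) > rho]
    return centers

def assign_to_supernodes(points, centers):
    """Map each point to the nearest selected center (ties broken arbitrarily)."""
    points = np.asarray(points, dtype=float)
    dists = np.linalg.norm(points[:, None, :] - points[None, centers, :], axis=-1)
    return [centers[k] for k in np.argmin(dists, axis=1)]
```

Maximality of the separated set is what yields the covering property: any point farther than $\rho$ from all centers could itself have been added.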

To create the coloring, we use the greedy graph coloring of [125] on the undirected graph $G$ with vertices $\tilde{J}^{(k)}$ and edges $\{ (\tilde{i}, \tilde{j}) \in \tilde{S}_\rho \mid \tilde{i}, \tilde{j} \in \tilde{J}^{(k)} \}$. Defining $\deg(G)$ as the maximum number of edges connected to any vertex of $G$, the computational complexity of greedy graph coloring is bounded above by $\deg(G) \, \# \tilde{J}^{(k)}$ and the number of colors used by $\deg(G) + 1$. A sphere-packing argument shows that $\deg(G)$ is at most a constant depending only on the dimension $d$, which yields the result. $\square$
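For illustration, the greedy coloring step can be sketched as a standard first-fit coloring (a textbook variant, not necessarily the exact algorithm of [125]); `edges` plays the role of the restriction of $\tilde{S}_\rho$ to one level:

```python
def greedy_coloring(vertices, edges):
    """First-fit greedy coloring: scan vertices in order and give each the
    smallest color unused by its already-colored neighbors.  Uses at most
    deg(G) + 1 colors, in time linear in the number of edges."""
    neighbors = {v: set() for v in vertices}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    color = {}
    for v in vertices:
        used = {color[u] for u in neighbors[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

Since each vertex has at most $\deg(G)$ neighbors, some color in $\{0, \ldots, \deg(G)\}$ is always free, which gives the $\deg(G) + 1$ bound quoted above.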

5.4.3 Proof of stability of incomplete Cholesky factorization in the supernodal multicolor ordering

We will now bound the approximation error of the Cholesky factors obtained from Algorithm 3, using the supernodal multicolor ordering and sparsity pattern described in Construction 3. For $\tilde{i}, \tilde{j} \in \tilde{I}$, let $\Theta_{\tilde{i}, \tilde{j}}$ be the submatrix $(\Theta_{ij})_{i \in \tilde{i},\, j \in \tilde{j}}$ and let $\sqrt{M}$ be the (dense and lower-triangular) Cholesky factor of a matrix $M$.

Algorithm 3 with supernodal multicolor ordering $\prec_\rho$ and sparsity pattern $S_\rho$ is equivalent to the block-incomplete Cholesky factorization described in Algorithm 9, where the function Restrict!($\Theta, S_\rho$) sets all entries of $\Theta$ outside of $S_\rho$ to zero.

Algorithm 9 Supernodal incomplete Cholesky factorization
Input: $\Theta \in \mathbb{R}^{I \times I}$ symmetric
Output: $L \in \mathbb{R}^{I \times I}$ lower triangular
  Restrict!($\Theta, S_\rho$)
  for $\tilde{i} \in \tilde{I}$ do
    $L_{:, \tilde{i}} \leftarrow \Theta_{:, \tilde{i}} / \sqrt{\Theta_{\tilde{i}, \tilde{i}}}^{\top}$
    for $\tilde{j} \succ \tilde{i}$ : $(\tilde{i}, \tilde{j}) \in \tilde{S}$ do
      for $\tilde{k} \succeq \tilde{j}$ : $(\tilde{k}, \tilde{i}), (\tilde{k}, \tilde{j}) \in \tilde{S}$ do
        $\Theta_{\tilde{k}, \tilde{j}} \leftarrow \Theta_{\tilde{k}, \tilde{j}} - \Theta_{\tilde{k}, \tilde{i}} \big(\Theta_{\tilde{i}, \tilde{i}}\big)^{-1} \Theta_{\tilde{j}, \tilde{i}}^{\top}$
      end for
    end for
  end for
  return $L$
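A dense NumPy sketch of these block operations may help fix ideas. This is an illustration under simplifying assumptions (supernodes given as index arrays in elimination order, the pattern as a set of position pairs, dense storage, no attention to complexity), not the thesis code:

```python
import numpy as np

def supernodal_ichol(Theta, supernodes, S):
    """Block ICHOL(0) in the spirit of Algorithm 9.  `supernodes` lists
    index arrays in elimination order; S contains pairs (a, b), a <= b,
    of supernode positions whose blocks are kept.  Returns L with
    L @ L.T approximating Theta on the retained pattern."""
    Theta = Theta.astype(float).copy()
    L = np.zeros_like(Theta)
    m = len(supernodes)
    # Restrict!(Theta, S_rho): zero every block outside the pattern
    for a in range(m):
        for b in range(a, m):
            if (a, b) not in S:
                Theta[np.ix_(supernodes[a], supernodes[b])] = 0.0
                Theta[np.ix_(supernodes[b], supernodes[a])] = 0.0
    for a in range(m):
        ia = supernodes[a]
        Aii = Theta[np.ix_(ia, ia)]
        C = np.linalg.cholesky(Aii)  # dense pivot-block factor
        L[np.ix_(ia, ia)] = C
        Aii_inv = np.linalg.inv(Aii)
        for b in range(a + 1, m):
            if (a, b) not in S:
                continue
            ib = supernodes[b]
            # L[ib, ia] <- Theta[ib, ia] / sqrt(Theta[ia, ia])^T
            L[np.ix_(ib, ia)] = np.linalg.solve(C, Theta[np.ix_(ia, ib)]).T
            for c in range(b, m):
                if (a, c) not in S or (b, c) not in S:
                    continue
                ic = supernodes[c]
                # Theta[ic, ib] -= Theta[ic, ia] Theta[ia, ia]^{-1} Theta[ia, ib]
                upd = Theta[np.ix_(ic, ia)] @ Aii_inv @ Theta[np.ix_(ia, ib)]
                Theta[np.ix_(ic, ib)] -= upd
                if c != b:
                    Theta[np.ix_(ib, ic)] -= upd.T
    return L
```

With the full pattern (all pairs retained), the sketch reduces to an exact right-looking block Cholesky factorization; dropping pairs from `S` yields the incomplete variant.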

We will now reformulate the above algorithm using the fact that the elimination of nodes of the same color, on the same level of the hierarchy, happens consecutively.

Let $p$ be the maximal number of colors used on any level of the hierarchy. We can then write $I = \bigcup_{1 \le k \le q,\, 1 \le l \le p} J^{(k,l)}$, where $J^{(k,l)}$ is the set of indices on level $k$ colored in the color $l$. Let $\Theta_{(k,l),(m,n)}$ be the restriction of $\Theta$ to $J^{(k,l)} \times J^{(m,n)}$ and write $(m, n) \prec (k, l) \iff m < k$ or ($m = k$ and $n < l$). We can then rewrite Algorithm 9 as:

Algorithm 10 Supernodal incomplete Cholesky factorization
Input: $\Theta \in \mathbb{R}^{I \times I}$ symmetric
Output: $L \in \mathbb{R}^{I \times I}$ lower triangular
  for $1 \le k \le q$ do
    for $1 \le l \le p$ do
      Restrict!($\Theta, S_\rho$)
      $L_{(:,:),(k,l)} \leftarrow \Theta_{(:,:),(k,l)} / \sqrt{\Theta_{(k,l),(k,l)}}^{\top}$
      $\Theta \leftarrow \Theta - \Theta_{(:,:),(k,l)} \big(\Theta_{(k,l),(k,l)}\big)^{-1} \Theta_{(k,l),(:,:)}$
    end for
  end for
  return $L$

For $1 \le k \le q$, $1 \le l \le p$ and a matrix $M \in \mathbb{R}^{I \times I}$ with $M_{(:,:),(m,n)} = M_{(m,n),(:,:)} = 0$ for all $(m, n) \prec (k, l)$, let $\mathcal{S}[M]$ be the matrix obtained by applying Restrict!($M, S_\rho$) followed by the Schur complementation $M \leftarrow M - M_{(:,:),(k,l)} \big(M_{(k,l),(k,l)}\big)^{-1} M_{(k,l),(:,:)}$. We now prove a stability estimate for the operator $\mathcal{S}$. Let $M_{k,(m,n)}$ be the restriction of a matrix $M \in \mathbb{R}^{I \times I}$ to $J^{(k)} \times J^{(m,n)}$.

Lemma 15. For $1 \le k^\circ \le q$ and $1 \le l^\circ \le p$, let $\Theta, E \in \mathbb{R}^{I \times I}$ be such that
\[
\Theta_{(:,:),(m,n)} = \Theta_{(m,n),(:,:)} = 0 \quad \text{for all } (m, n) \prec (k^\circ, l^\circ), \tag{5.17}
\]
and (writing $\Theta_{k,l}$ for the $J^{(k)} \times J^{(l)}$ submatrix of $\Theta$ and $\lambda_{\max}$ for maximal singular values) define
\[
\lambda_{\min} \coloneqq \lambda_{\min}(\Theta_{k^\circ, k^\circ}), \qquad \lambda_{\max} \coloneqq \max_{k^\circ \le k \le q} \lambda_{\max}(\Theta_{k^\circ, k}). \tag{5.18}
\]
If
\[
\max_{k^\circ \le k, l \le q} \| E_{k,l} \|_{\mathrm{Fro}} \le \epsilon \le \frac{\lambda_{\min}}{2}, \tag{5.19}
\]
then the following perturbation estimate holds:
\[
\max_{k^\circ \le k, l \le q} \big\| \big( \mathcal{S}[\Theta] - \mathcal{S}[\Theta + E] \big)_{k,l} \big\|_{\mathrm{Fro}} \le \left( \frac{3}{2} + \frac{2 \lambda_{\max}}{\lambda_{\min}} + \frac{8 \lambda_{\max}^2}{\lambda_{\min}^2} \right) \epsilon. \tag{5.20}
\]

Proof. Write $\tilde{\Theta}, \tilde{E}$ for the versions of $\Theta, E$ set to zero outside of $S_\rho$. For $k^\circ \le k, l \le q$,

\[
\begin{aligned}
&\big( \mathcal{S}[\Theta + E] - \mathcal{S}[\Theta] \big)_{k,l} \\
&\quad = \tilde{\Theta}_{k,l} + \tilde{E}_{k,l} - \big( \tilde{\Theta} + \tilde{E} \big)_{k,(k^\circ,l^\circ)} \big( \tilde{\Theta} + \tilde{E} \big)^{-1}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \big( \tilde{\Theta} + \tilde{E} \big)_{(k^\circ,l^\circ),l} \\
&\qquad - \tilde{\Theta}_{k,l} + \tilde{\Theta}_{k,(k^\circ,l^\circ)} \tilde{\Theta}^{-1}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \tilde{\Theta}_{(k^\circ,l^\circ),l} \\
&\quad = \tilde{E}_{k,l} + \big( \tilde{\Theta} + \tilde{E} \big)_{k,(k^\circ,l^\circ)} \big( \tilde{\Theta} + \tilde{E} \big)^{-1}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \tilde{E}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \tilde{\Theta}^{-1}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \big( \tilde{\Theta} + \tilde{E} \big)_{(k^\circ,l^\circ),l} \\
&\qquad - \big( \tilde{\Theta} + \tilde{E} \big)_{k,(k^\circ,l^\circ)} \tilde{\Theta}^{-1}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \big( \tilde{\Theta} + \tilde{E} \big)_{(k^\circ,l^\circ),l} + \tilde{\Theta}_{k,(k^\circ,l^\circ)} \tilde{\Theta}^{-1}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \tilde{\Theta}_{(k^\circ,l^\circ),l} \\
&\quad = \tilde{E}_{k,l} + \big( \tilde{\Theta} + \tilde{E} \big)_{k,(k^\circ,l^\circ)} \big( \tilde{\Theta} + \tilde{E} \big)^{-1}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \tilde{E}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \tilde{\Theta}^{-1}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \big( \tilde{\Theta} + \tilde{E} \big)_{(k^\circ,l^\circ),l} \\
&\qquad - \tilde{E}_{k,(k^\circ,l^\circ)} \tilde{\Theta}^{-1}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \tilde{\Theta}_{(k^\circ,l^\circ),l} - \tilde{\Theta}_{k,(k^\circ,l^\circ)} \tilde{\Theta}^{-1}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \tilde{E}_{(k^\circ,l^\circ),l} \\
&\qquad - \tilde{E}_{k,(k^\circ,l^\circ)} \tilde{\Theta}^{-1}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \tilde{E}_{(k^\circ,l^\circ),l},
\end{aligned} \tag{5.21--5.28}
\]
where the second equality follows from the matrix identity
\[
(A + B)^{-1} = A^{-1} - (A + B)^{-1} B A^{-1}. \tag{5.29}
\]
Now recall that, for all $A \in \mathbb{R}^{n \times m}$, $B \in \mathbb{R}^{m \times s}$, $\|M\| \le \|M\|_{\mathrm{Fro}}$ and $\|A B\|_{\mathrm{Fro}} \le \|A\| \|B\|_{\mathrm{Fro}}$. Therefore, $\big\| \big( \tilde{\Theta} + \tilde{E} \big)^{-1}_{(k^\circ,l^\circ),(k^\circ,l^\circ)} \big\| \le 2/\lambda_{\min}$ and $\big\| \big( \tilde{\Theta} + \tilde{E} \big)_{k,(k^\circ,l^\circ)} \big\| \le 2 \lambda_{\max}$. Combining these estimates and using the triangle inequality yields

\[
\begin{aligned}
\big\| \big( \mathcal{S}[\Theta + E] - \mathcal{S}[\Theta] \big)_{k,l} \big\|_{\mathrm{Fro}}
&\le \| E_{k,l} \|_{\mathrm{Fro}} + \frac{8 \lambda_{\max}^2}{\lambda_{\min}^2} \| E_{k^\circ, k^\circ} \|_{\mathrm{Fro}} + \frac{\lambda_{\max}}{\lambda_{\min}} \big( \| E_{k, k^\circ} \|_{\mathrm{Fro}} + \| E_{k^\circ, l} \|_{\mathrm{Fro}} \big) \\
&\qquad + \lambda_{\min}^{-1} \| E_{k, k^\circ} \|_{\mathrm{Fro}} \| E_{k^\circ, l} \|_{\mathrm{Fro}} \\
&\le \left( 1 + \frac{8 \lambda_{\max}^2}{\lambda_{\min}^2} + \frac{2 \lambda_{\max}}{\lambda_{\min}} + \frac{\epsilon}{\lambda_{\min}} \right) \epsilon \\
&\le \left( \frac{3}{2} + \frac{2 \lambda_{\max}}{\lambda_{\min}} + \frac{8 \lambda_{\max}^2}{\lambda_{\min}^2} \right) \epsilon. \qquad \square
\end{aligned} \tag{5.30--5.34}
\]

Recursive application of the above lemma gives a stability result for the incomplete Cholesky factorization.

Lemma 16. For $\rho > 0$, let $\prec_\rho$ and $S_\rho$ be a supernodal ordering and sparsity pattern such that the maximal number of colors used on each level is at most $p$. Let $L^{S_\rho}$ be an invertible lower-triangular matrix with nonzero pattern $S_\rho$ and define $M \coloneqq L^{S_\rho} L^{S_\rho, \top}$. Assume that $M$ satisfies Condition 2 with constant $\kappa$. Then there exists a universal constant $C$ such that, for all $0 < \epsilon < \lambda_{\min}(M) / \big( 2 q^2 (C \kappa)^{2 q p} \big)$ and all $E \in \mathbb{R}^{I \times I}$ with $\| E \|_{\mathrm{Fro}} \le \epsilon$,
\[
\big\| M - \tilde{L}^{S_\rho} \tilde{L}^{S_\rho, \top} \big\|_{\mathrm{Fro}} \le q^2 (C \kappa)^{2 q p} \epsilon, \tag{5.35}
\]
where $\tilde{L}^{S_\rho}$ is the Cholesky factor obtained by applying Algorithm 10 to $M + E$.

Proof. The result follows from applying Lemma 15 at each step of Algorithm 10. $\square$

5.4.4 Conclusion

Using the stability result in Lemma 16, we can finally prove that, when using the supernodal multicolor ordering and sparsity pattern, incomplete Cholesky factorization applied to $\Theta$ attains an $\epsilon$-accurate Cholesky factorization in computational complexity $\mathcal{O}\big( N \log^2(N) \log^{2d}(N/\epsilon) \big)$.

Theorem 18. In the setting of Examples 1 and 2, there exists a constant $C$ depending only on $d$, $s$, $\|\mathcal{L}\|$, $\|\mathcal{L}^{-1}\|$, $h$, and $\delta$ such that, given the ordering $\prec_\rho$ and sparsity pattern $S_\rho$ defined as in Construction 3 with $\rho \ge C \log(N/\epsilon)$, the incomplete Cholesky factor $L$ obtained from Algorithm 3 has accuracy
\[
\| L L^{\top} - \Theta \|_{\mathrm{Fro}} \le \epsilon. \tag{5.36}
\]
Furthermore, Algorithm 3 has complexity of at most $C N \rho^{2d} \log^2(N)$ in time and at most $C N \rho^{d} \log(N)$ in space.

Proof of Theorem 18. Theorem 9 implies that, by choosing $\rho \ge \tilde{C} \log(N/\epsilon)$, there exists a lower-triangular matrix $\tilde{L}^{S_\rho}$ with sparsity pattern $S_\rho$ such that $\big\| \Theta - \tilde{L}^{S_\rho} \tilde{L}^{S_\rho, \top} \big\|_{\mathrm{Fro}} \le \epsilon$. Theorem 7 implies that Examples 1 and 2 satisfy $\lambda_{\min} \ge 1/\operatorname{poly}(N)$.

Therefore, choosing $\rho \ge \tilde{C} \log N$ ensures that $\epsilon < \lambda_{\min}(\Theta)/2$ and thus that $\tilde{\Theta} \coloneqq \tilde{L}^{S_\rho} \tilde{L}^{S_\rho, \top}$ satisfies Condition 2 with constant $2 C_\Phi$, where $C_\Phi$ is the corresponding constant for $\Theta$. By possibly changing $\tilde{C}$ again, $\rho \ge \tilde{C} \log N$ ensures that
\[
\epsilon \le \frac{\lambda_{\min}(\Theta)}{2 q^2 \big( C \kappa_{\tilde{\Theta}} \big)^{2 q p}},
\]
where $C$ is the constant of Lemma 16, since $q \approx \log N$ and, by Lemma 14, $p$ is bounded independently of $N$. Thus, by Lemma 16, the Cholesky factor $L^{S_\rho}$ obtained from applying Algorithm 10 to $\Theta = \tilde{\Theta} + \big( \Theta - \tilde{\Theta} \big)$ satisfies
\[
\big\| \tilde{\Theta} - L^{S_\rho} L^{S_\rho, \top} \big\|_{\mathrm{Fro}} \le q^2 (4 C \kappa)^{2 q p} \epsilon \le \operatorname{poly}(N) \, \epsilon, \tag{5.37}
\]
where $\kappa$ is the constant with which $\Theta$ satisfies Condition 2 and the polynomial depends only on $C$, $\kappa$, and $p$. Since, for the ordering $\prec_\rho$ and sparsity pattern $S_\rho$, the Cholesky factors obtained via Algorithms 3 and 10 coincide, we obtain the result. $\square$

This result holds for both element-wise and supernodal factorization, in either its left-, up-, or right-looking forms. As remarked in Section 5.3.1, using the two-way supernodal sparsity pattern for factorization of $\Theta^{-1}$ in the fine-to-coarse ordering degrades the asymptotic complexity. Therefore, the above result does not immediately prove the accuracy of the Cholesky factorization in this setting with optimal complexity. However, the column-supernodal factorization described in Section 5.3 can similarly be described in terms of $\mathcal{O}(\log(N))$ Schur complementations. Thus, the above proof can be modified to show that, when using the column-supernodal multicolor ordering and sparsity pattern, ICHOL(0) applied to $\Theta^{-1}$ computes an $\epsilon$-approximation in computational complexity $\mathcal{O}(N \log(N/\epsilon))$.

5.5 Numerical example: Compression of dense kernel matrices
