
Jörg Liesen, Volker Mehrmann


Academic year: 2023


In our experience, the matrix-oriented approach to Linear Algebra leads to better intuition and a deeper understanding of the abstract concepts. Another advantage of the matrix-oriented approach lies in the simplifications that arise when transferring theoretical results into practical algorithms.

The PageRank Algorithm

To avoid this, the importance values of backlinks in the PageRank algorithm are divided by the number of links on the corresponding page. Analyzing and solving such systems is one of the most important tasks of Linear Algebra.
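The division of each backlink's value by the out-degree of the linking page, as described above, can be sketched as a small power iteration. This is an illustrative sketch, not the book's code; the link data, the damping factor, and the function name `pagerank` are assumptions for the example.

```python
# Illustrative PageRank sketch: a page's importance is the sum of the
# importances of its backlinks, each divided by the linking page's out-degree.

def pagerank(links, n, damping=0.85, iters=100):
    """links: list of (source, target) pairs between pages 0..n-1."""
    out_degree = [0] * n
    for src, _ in links:
        out_degree[src] += 1
    rank = [1.0 / n] * n
    for _ in range(iters):
        new_rank = [(1.0 - damping) / n] * n
        for src, tgt in links:
            # each backlink's contribution is divided by the out-degree
            new_rank[tgt] += damping * rank[src] / out_degree[src]
        rank = new_rank
    return rank

# Page 1 is linked by both page 0 and page 2, so it should score highest.
ranks = pagerank([(0, 1), (1, 2), (2, 0), (2, 1)], n=3)
```

Iterating the linear map until the rank vector stops changing approximates the eigenvector problem that the text turns into a linear-algebra question.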

No Claim Discounting in Car Insurances

For example, a customer in the C1 class remains in C1 in the event of at least one accident. As in the example with the PageRank algorithm, we have translated a practical problem into the language of linear algebra, and we can now study it using linear algebra techniques.

Production Planning in a Plant

The insurance company then has to estimate the probability that a customer who is in class Ci this year will switch to class Cj. As before, we have formulated a real problem in the language of linear algebra and can use mathematical methods to solve it.
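The estimated switching probabilities between classes Ci and Cj form a transition matrix, and next year's class distribution is obtained by a vector-matrix product. The three classes and all probabilities below are made-up numbers for illustration only, not the book's data.

```python
# Hypothetical no-claim discounting model: three classes C1, C2, C3 with
# invented transition probabilities P[i][j] = Prob(Ci this year -> Cj next year).
P = [
    [0.3, 0.7, 0.0],   # from C1: an accident keeps the customer in C1
    [0.3, 0.0, 0.7],   # from C2
    [0.3, 0.0, 0.7],   # from C3
]

def step(dist, P):
    """One year: multiply the row vector of class probabilities by P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]      # a new customer starts in C1
for _ in range(50):          # iterate to approximate the long-run distribution
    dist = step(dist, P)
```

After many iterations the distribution approaches the stationary vector of the transition matrix, which is exactly the kind of eigenvector question the text formulates in linear-algebra language.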

Predicting Future Profits

One possible approach is to choose the parameters α and β so as to minimize the sum of the squared distances between the given points and the straight line. Determining the parameters α and β that minimize a sum of squares is called a least squares problem.
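Minimizing the sum of squared distances over α and β leads to a small linear system, the normal equations. The following sketch solves that 2×2 system directly for invented example data; it is an illustration of the least squares idea, not the book's algorithm.

```python
# Least squares line fit: choose alpha, beta minimizing sum (y_i - alpha - beta*x_i)^2
# by solving the 2x2 normal equations by hand.

def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Normal equations:  n*alpha + sx*beta = sy,   sx*alpha + sxx*beta = sxy
    det = n * sxx - sx * sx
    alpha = (sy * sxx - sx * sxy) / det
    beta = (n * sxy - sx * sy) / det
    return alpha, beta

# Points lying exactly on y = 1 + 2x are recovered exactly.
alpha, beta = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```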

Circuit Simulation

This simple example demonstrates that for the simulation of a circuit a system of linear differential equations and algebraic equations must be solved. In this chapter we introduce the mathematical concepts that form the basis for the developments in the following chapters.

Sets and Mathematical Logic

As an exercise, create the truth table for ¬B ⇒ ¬A and compare it with the table for A ⇒ B. Therefore, the truth of A ⇒ B can be proved by showing that the truth of ¬B implies the truth of ¬A, i.e., that "B is false" implies "A is false". Since there is no x ∈ ∅, the statement "x ∈ ∅" is false, and therefore "x ∈ ∅ ⇒ x ∈ M" is true for every x (cp. the remarks on the implication A ⇒ B).
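The truth-table exercise above can be checked mechanically; this short sketch enumerates all four truth assignments and confirms that A ⇒ B and its contrapositive ¬B ⇒ ¬A agree everywhere.

```python
# A => B is equivalent to (not A) or B; compare its table with that of (not B) => (not A).

def implies(a, b):
    return (not a) or b

table_impl = [(a, b, implies(a, b)) for a in (False, True) for b in (False, True)]
table_contra = [(a, b, implies(not b, not a)) for a in (False, True) for b in (False, True)]
```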

Maps

In these assertions, we used the continuity of the mapping f(x) = x², which is discussed in basic analysis courses. To show that a given mapping g : Y → X is the unique inverse of the bijective mapping f : X → Y, it suffices to show one of the equations g ◦ f = IdX or f ◦ g = IdY.

Relations

In these definitions, if at least one of the sets is empty, then the resulting Cartesian product is also the empty set. The set of all residue classes modulo n, i.e., the quotient set with respect to Rn, is often denoted by Z/nZ.
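The residue classes modulo n partition the integers by remainder. A minimal sketch over a finite range, with n = 3 chosen only for illustration:

```python
# Residue classes modulo n: group the integers -10..9 by their remainder.
# Each class collects all integers congruent to r modulo n.
n = 3
classes = {r: [k for k in range(-10, 10) if k % n == r] for r in range(n)}
# Every integer in the range lies in exactly one class: the classes partition it.
```

Note that Python's `%` always returns a nonnegative remainder, so negative integers land in the expected class (e.g. -10 is congruent to 2 modulo 3).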

Groups

Moreover, there is a number 0 such that 0 + a = a for every integer a, and for every integer a there exists an integer −a such that (−a) + a = 0. Moreover, by combining defined concepts, we can move to further generalizations and in this way expand the mathematical theory step by step: "the mathematical method moves forward from the simplest concepts to combinations of them and obtains through such combinations new and more general concepts."

Rings and Fields

For each ring R, the following statements hold. Adding −(0 ∗ a) on the left and right side of this equality, we get 0 = 0 ∗ a. But if an element is invertible, then it has a unique inverse, as shown by the following statement.

Basic Definitions and Operations

In the definition of entry (i, j) of the matrix A ∗ B we have not written the multiplication symbol for the elements in R. By the definition of matrix multiplication, and using the associative and distributive laws in R, we get
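The definition of matrix multiplication referred to above can be written out directly: entry (i, j) of A ∗ B is the sum over k of the products of entries A[i][k] and B[k][j]. A minimal sketch over the integers:

```python
# Matrix multiplication straight from the definition:
# (A*B)[i][j] = sum over k of A[i][k] * B[k][j].

def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    assert len(A[0]) == inner, "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

A = [[1, 2], [3, 4]]
I = [[1, 0], [0, 1]]   # identity matrix: A * I = A
```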

Matrix Groups and Rings

Since In is an invertible upper triangular matrix, the set of invertible upper triangular matrices is a nonempty subset of GLn(R). We point out that (4.4) represents an explicit formula for computing the entries of the inverse of an invertible upper triangular matrix.

Elementary Matrices

If A is invertible, then its echelon form is the identity matrix and the inverse A−1 is the product of the inverses of the elementary matrices. For a non-invertible matrix, its echelon form is in some sense the "closest possible" matrix to the identity matrix.

The Echelon Form and Gaussian Elimination

The echelon form is computed in MATLAB with the command rref ("reduced row echelon form"). This result gives the uniqueness of the echelon form of a matrix and its invariance under left multiplication by invertible matrices.
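The reduced row echelon form that MATLAB's rref computes can be sketched by Gaussian elimination: normalize each pivot to 1 and eliminate its column above and below. This is a minimal illustrative version without the pivoting strategies a numerically robust implementation would use.

```python
# Reduced row echelon form by Gaussian elimination (illustrative sketch).

def rref(M):
    M = [row[:] for row in M]            # work on a copy
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # find a row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, rows) if abs(M[r][col]) > 1e-12), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        piv = M[pivot_row][col]
        M[pivot_row] = [x / piv for x in M[pivot_row]]    # normalize pivot to 1
        for r in range(rows):                             # eliminate above and below
            if r != pivot_row and abs(M[r][col]) > 1e-12:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

R = rref([[2.0, 4.0], [1.0, 3.0]])       # invertible: echelon form is the identity
```

For an invertible matrix the result is the identity, matching the uniqueness statement above; a rank-deficient matrix yields zero rows instead.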

Rank and Equivalence of Matrices

Due to the invariance of the echelon form (and thus of the rank) under left multiplication by invertible matrices, rank(A) = rank(S A) and rank([A, b]) = rank([S A, S b]), so the above discussion also gives the following result on the different cases of solvability of a linear system of equations.

Definition of the Determinant

The signature formula is mainly of theoretical interest, as it explicitly represents the determinant of A in terms of the entries of A. If R = R or R = C, then standard analysis techniques show that det(A) is a continuous function of the entries of A.
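The signature formula sums sign(σ) times the product of entries A[i][σ(i)] over all n! permutations σ, which is why it is of theoretical rather than computational interest. A minimal sketch (illustrative only; real determinant computations use elimination instead):

```python
# Signature (Leibniz) formula: det(A) = sum over permutations sigma of
# sign(sigma) * product of A[i][sigma(i)] — n! terms, so impractical for large n.
from itertools import permutations

def sign(perm):
    """Sign of a permutation via its number of inversions."""
    inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(A):
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][perm[i]]
        total += sign(perm) * prod
    return total

d = det([[1, 2], [3, 4]])   # 1*4 - 2*3
```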

Properties of the Determinant

The determinant map can be interpreted as a map from (Rn,1)n to R, i.e., as a map of the columns of the matrix A ∈ Rn,n to the ring R. Because of this property, the determinant map is called an alternating map of the columns of A.

Minors and the Laplace Expansion

In this chapter, we use the determinant map to assign to each square matrix a unique polynomial, called the characteristic polynomial of the matrix. Even more important are the roots of the characteristic polynomial, which are called the eigenvalues of the matrix.

The Characteristic Polynomial and the Cayley-Hamilton Theorem

The following lemma shows that for every monic polynomial p ∈ R[t] of degree n ≥ 1 there exists a matrix A ∈ Rn,n with PA = p. James Joseph Sylvester coined the name of the theorem in 1884, calling it the "not-so-wonderful Hamilton-Cayley theorem".
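The matrix in question is a companion matrix. The construction below uses one common convention (ones on the subdiagonal, negated coefficients in the last column); the book's lemma may use a transposed or permuted variant, so this is an illustrative sketch rather than its exact statement.

```python
# Companion matrix sketch: for monic p(t) = t^n + c_{n-1} t^{n-1} + ... + c_0,
# build a matrix whose characteristic polynomial is p (one common convention).

def companion(coeffs):
    """coeffs = [c_0, c_1, ..., c_{n-1}] of the monic p; returns an n x n matrix."""
    n = len(coeffs)
    A = [[0] * n for _ in range(n)]
    for i in range(1, n):
        A[i][i - 1] = 1                  # ones on the subdiagonal
    for i in range(n):
        A[i][n - 1] = -coeffs[i]         # last column holds -c_i
    return A

# p(t) = t^2 - 3t + 2 = (t - 1)(t - 2): coefficients c_0 = 2, c_1 = -3.
A = companion([2, -3])
# det(tI - A) = t(t - 3) + 2 = t^2 - 3t + 2, so the eigenvalues are 1 and 2,
# e.g. A maps [2, -1] to itself (eigenvalue 1).
```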

Eigenvalues and Eigenvectors

So [1,−1]T is an eigenvector of A corresponding to the eigenvalue 0, but it is not an eigenvector of AT. So [1,−3]T is an eigenvector of AT corresponding to the eigenvalue 0, but it is not an eigenvector of A.
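The excerpt above does not reproduce the matrix A of the example, so the following sketch uses a matrix chosen here to exhibit exactly the stated behavior: A = [[3, 3], [1, 1]] has [1, −1]ᵀ in its kernel but not in the kernel of Aᵀ, and vice versa for [1, −3]ᵀ.

```python
# Illustrative matrix (our choice, not necessarily the book's) showing that
# eigenvectors of A and A^T for the same eigenvalue can differ.

def apply(M, v):
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

A = [[3, 3], [1, 1]]
At = [[3, 1], [3, 1]]       # transpose of A

# A * [1, -1]^T = 0, but A^T * [1, -1]^T = [2, 2]^T != 0;
# A^T * [1, -3]^T = 0, but A * [1, -3]^T = [-6, -2]^T != 0.
```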

Eigenvectors of Stochastic Matrices

Form the matrix A = compan(p) and compare its structure with that of the companion matrix from Lemma 8.4. Therefore, solving the eigenvalue problem A(α)u = u for small α potentially gives a good approximation of a u ∈ Rn,1 that satisfies Au = u.

Basic Definitions and Properties of Vector Spaces

If it is clear from the context (or not important) which field we are using, we often omit the explicit reference to K and simply write vector space instead of K-vector space. In particular, every vector space contains a unique neutral element (with respect to addition) 0V, which is called the null vector.

Bases and Dimension of Vector Spaces

According to the basis extension theorem, every vector space spanned by finitely many vectors has a basis consisting of finitely many elements. On the other hand, the vector space K[t] is not spanned by finitely many vectors (cp. 2 in Example 9.13) and is therefore infinite-dimensional.

Coordinates and Changes of the Basis

Because of this, some authors distinguish between a basis as a "set", i.e., a collection of elements in no particular order, and an "ordered basis". Since the basis vectors are linearly independent, all coordinates must be zero, and therefore In − P Q = 0 ∈ Kn,n, or P Q = In.

Relations Between Vector Spaces and Their Dimensions

The above definition of the sum can be extended to an arbitrary (but finite) number of subspaces: If U1, … Which of the following sets (using the usual addition and scalar multiplication) are R-vector spaces?

Basic Definitions and Properties of Linear Maps

The kernel of a linear map is sometimes called the null space (or nullspace) of the map, and some authors use the notation null(f) instead of ker(f). In the field of Linear Algebra, a justification is often given by an isomorphism that identifies vector spaces with each other.

Linear Maps and Matrices

There are many different notations for the matrix representation of linear maps in the literature. Using this theorem, we can study how changing the bases affects the matrix representation of a linear map.

Linear Forms and Dual Spaces

As the following theorem shows, the concepts of the dual map and the transposed matrix are closely related. Because of the close relationship between the transposed matrix and the dual map, some authors call the dual map f∗ the transpose of the linear map f.

Bilinear Forms

This bilinear form is non-degenerate, and thus V, V∗ is a dual pair with respect to β. If β is a bilinear form on V × W, then the matrix representation of β with respect to the bases B1 and B2 is defined using the coordinate maps of Lemma 10.17.

Sesquilinear Forms

Let V be a K-vector space and let U be a subspace of V. Show that the set of all bilinear forms on V × W with the operations … Let V be a finite-dimensional C-vector space with basis B, and let s be a sesquilinear form on V. Show that s is Hermitian if and only if [s]B×B is Hermitian.

Scalar Products and Norms

As an example of the importance of these concepts in many applications, we study least squares approximations. A subspace U of a Euclidean or unitary vector space V is again a Euclidean or unitary vector space, respectively, when the scalar product on V is restricted to the subspace U. A scalar product on Rn,1 is given by … A vector space on which a norm is defined is called a normed space. … defines a norm called the Euclidean norm on Cn,1.

Orthogonality

Example 12.8: The standard basis vectors e1, e2 ∈ R2,1 are orthogonal, and {e1, e2} is an orthonormal basis of R2,1 with respect to the standard scalar product. Now we show that every finite-dimensional Euclidean or unitary vector space has an orthonormal basis.
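The existence proof mentioned above is constructive: the Gram-Schmidt procedure turns any linearly independent set into an orthonormal one. A minimal real-valued sketch with respect to the standard scalar product (the input vectors are an arbitrary example):

```python
# Gram-Schmidt sketch: subtract from each vector its projections onto the
# already-built orthonormal vectors, then normalize.
from math import sqrt

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v[:]
        for u in basis:                      # remove the component along u
            coeff = sum(wi * ui for wi, ui in zip(w, u))
            w = [wi - coeff * ui for wi, ui in zip(w, u)]
        norm = sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

Q = gram_schmidt([[1.0, 1.0], [1.0, 0.0]])   # an orthonormal basis of R^2
```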

The Vector Product in R3,1

From (2) and the Cauchy-Schwarz inequality (12.2), it follows that v × w = 0 holds if and only if v, w are linearly dependent. Show that v, w ∈ V are orthogonal with respect to ⟨·,·⟩ if and only if ‖v + λw‖ = ‖v − λw‖ for all λ ∈ K. … (cf. 5) in Example 12.4) is the norm induced by this scalar product.
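The two properties stated above, orthogonality to both factors and vanishing exactly on linearly dependent pairs, can be checked directly from the component formula for the vector product (an illustrative sketch with arbitrary example vectors):

```python
# Vector product in R^3: v x w is orthogonal to both v and w, and it is the
# zero vector exactly when v and w are linearly dependent.

def cross(v, w):
    return [v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0]]

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

e1_cross_e2 = cross([1, 0, 0], [0, 1, 0])     # gives the third standard basis vector
```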

Basic Definitions and Properties

If V is finite-dimensional and β is a non-degenerate bilinear form on V, then according to Theorem 13.2 each f ∈ L(V,V) has a unique right adjoint g and a unique left adjoint k, such that … Example 13.11: Consider the unitary vector space C3,1 with the standard scalar product and the linear map.

Adjoint Endomorphisms and Matrices

However, one must be careful about the field over which this vector space is defined. In particular, the set of self-adjoint endomorphisms on a unitary vector space V does not form a C-vector space.

Basic Definitions and Properties

Diagonalizability

Triangulation and Schur’s Theorem

Polynomials

The Fundamental Theorem of Algebra

Cyclic f -invariant Subspaces and Duality

The Jordan Canonical Form

Computation of the Jordan Canonical Form

Matrix Functions and the Matrix Exponential Function

Systems of Linear Ordinary Differential Equations

Normal Endomorphisms

Orthogonal and Unitary Endomorphisms

Selfadjoint Endomorphisms
