
Alessandro Zampini - Giovanni Landi


Following this approach, the book presents a number of examples and exercises, which are intended as a central part of the development of the theory. The idea of diagonalizing an endomorphism or a matrix (the problem of diagonalization and of reduction to the Jordan form) is central to this book, and it is introduced in Chapter 9.

Applied Vectors

By this term, or simply by observable, we mean anything that can be measured in physical space (the space of physical events) through an appropriate process of measurement. The main properties of the operation of multiplication by a scalar are given in the following theorem.

Fig. 1.1 The parallelogram rule
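As a quick numerical aside (not from the book; the vectors and values below are arbitrary), both the parallelogram rule and the product by a scalar act componentwise once a coordinate system is fixed:

```python
import numpy as np

# Two applied vectors, identified with their components in a fixed frame.
u = np.array([1.0, 2.0, 0.0])
v = np.array([3.0, -1.0, 0.0])

# Parallelogram rule: the sum of two vectors is computed componentwise.
print(u + v)        # [4. 1. 0.]

# Product by a scalar: the length is rescaled, the direction is preserved.
print(2.5 * u)      # [2.5 5.  0. ]
```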

Coordinate Systems

With respect to a coordinate system with origin O, we establish, via V3O, a bijection between ordered triples of real numbers and points in S. The zero (null) vector 0 = O − O has components (0, 0, 0) with respect to each coordinate system whose origin is O, and it is the only vector with this property.

Fig. 1.6 The bijection P(x, y, z) ↔ P − O = xi + yj + zk in space

More Vector Operations

If λ > 0, then |λ| = λ; by commutativity and associativity of the product in R, the stated equalities then follow. (iii) In the space S, the norm of the vector product u ∧ v coincides with the area of the parallelogram defined by u and v, while the mixed product u · (v ∧ w) gives the volume of the parallelepiped defined by u, v, w.

Fig. 1.7 Orthogonal projections
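The geometric meaning of the vector and mixed products recalled above is easy to verify numerically; a minimal sketch with arbitrary vectors:

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 2.0, 0.0])
w = np.array([0.0, 0.0, 3.0])

# The norm |u ∧ v| is the area of the parallelogram defined by u and v.
area = np.linalg.norm(np.cross(u, v))       # 2.0

# The mixed product u·(v ∧ w) gives, up to sign, the volume of the
# parallelepiped defined by u, v, w.
volume = abs(np.dot(u, np.cross(v, w)))     # 6.0
print(area, volume)
```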

Divergence, Rotor, Gradient and Laplacian

Proof From the previous example we have: (a) the eigenvalues λ of an orthogonal matrix are given by the solutions of the corresponding equation. Example 11.5.5 The (Lorentz) force F acting on a point electric charge q, whose motion is given by x(t), in the presence of an electric field E(x) and a magnetic field B(x), is written as F = q(E(x) + ẋ ∧ B(x)).
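The Lorentz force law translates directly into code; a minimal sketch, with the charge and the field values made up for illustration:

```python
import numpy as np

def lorentz_force(q, x_dot, E, B):
    """F = q (E + x_dot ∧ B): force on a point charge q moving with velocity
    x_dot in an electric field E and a magnetic field B (evaluated at x)."""
    return q * (E + np.cross(x_dot, B))

# Arbitrary illustrative values.
F = lorentz_force(q=1.0,
                  x_dot=np.array([1.0, 0.0, 0.0]),
                  E=np.array([0.0, 0.0, 1.0]),
                  B=np.array([0.0, 1.0, 0.0]))
print(F)   # [0. 0. 2.]: the magnetic term (1,0,0) ∧ (0,1,0) adds to E
```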

Fig. 2.1 The vector line L (v + w) with respect to the vector lines L (v) and L (w)
Fig. 2.1 The vector line L (v + w) with respect to the vector lines L (v) and L (w)

Definition and Basic Properties

Vector Subspaces

Proposition 2.2.7 The intersection W1∩W2 between two vector subspaces W1 and W2 in a real vector space V is a vector subspace of V. These propositions provide a natural way to generalize the notion of direct sum to any number of vector subspaces of a given vector space.

Linear Combinations

Analogously, an infinite system I ⊆ V is said to be free if every one of its finite subsets is free. Let I = {v1, . . . , vn} be a collection of vectors in V. Then I is not free if and only if one of the elements vi is a linear combination of the other elements.

Bases of a Vector Space

It is natural to consider the basis B = (b1, c1, . . . , bn, cn), made of the following 2n elements. The i-th component of a linear combination z of the vectors wk is given by the same linear combination of the i-th components of the vectors wk.

The Dimension of a Vector Space

When dealing with the vectors of V3O in Chapter 1, we implicitly used the concept of length of a vector and that of orthogonality between vectors, as well as the amplitude of the plane angle between vectors. A scalar product allows one, among other things, to speak of orthogonality of vectors and of the length of a vector in an arbitrary vector space.

Scalar Product, Norm

These last two properties are summarized by saying that the scalar product in R3 is positive definite. A finite dimensional real vector space V equipped with a scalar product will be denoted (V, ·) and will be referred to as a Euclidean vector space.
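Positive definiteness, the induced norm, and orthogonality are immediate to check numerically for the canonical scalar product; a small sketch:

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([-4.0, 3.0])

print(np.dot(x, x))            # 25.0 > 0: positive definiteness at a nonzero vector
print(np.sqrt(np.dot(x, x)))   # 5.0: the norm |x| induced by the scalar product
print(np.dot(x, y))            # 0.0: x and y are orthogonal
```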

Orthogonality

Recalling Proposition 3.2.3 and Corollary 2.5.7, we know that the orthogonal subspace W⊥ has dimension 2.

Orthonormal Basis

Thus, the components of a vector with respect to an orthonormal basis are given by the scalar products of the vector with the corresponding basis elements. The next proposition shows that in any finite dimensional real vector space (V, ·), with respect to an orthonormal basis for V, the scalar product has the same form as the canonical scalar product in En.
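The expansion of a vector along an orthonormal basis, with coefficients given by scalar products, can be checked directly; a sketch using an ad hoc orthonormal basis of R2:

```python
import numpy as np

# An orthonormal basis of R^2: the canonical one rotated by 45 degrees.
e1 = np.array([1.0, 1.0]) / np.sqrt(2)
e2 = np.array([-1.0, 1.0]) / np.sqrt(2)

v = np.array([2.0, 3.0])

# The components of v are the scalar products with the basis elements...
c1, c2 = np.dot(v, e1), np.dot(v, e2)

# ...and v is recovered as the corresponding linear combination.
print(np.allclose(v, c1 * e1 + c2 * e2))   # True
```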

Hermitian Products

Basic Notions

The matrix product is an inner operation on Rn,n, so the claim follows from the fact that (Rn,n, +, 0Rn,n) is an abelian group. Proposition 4.1.17 The subset of invertible elements in Rn,n is a group with respect to the matrix product.

The Rank of a Matrix

Besides diagonal matrices, a larger class of matrices for which the rank is easy to compute is given in the following definition. If A is a complete upper triangular matrix, then one considers the submatrix B made of the first rows of A when m > n, or of the first p columns of A when m < n.

Reduced Matrices

Proof Let A be a row-reduced matrix and let A′ be the submatrix obtained by deleting the zero rows of A. If a matrix is column-reduced, its rank coincides with the number of its non-zero columns.

Reduction of Matrices

This matrix can be reduced as in the proof of Proposition 4.4.3 using only transformations of type (D). It is clear that the matrix B is already row-reduced, so we can write rk(A) = rk(B) = 4.
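Row reduction for computing the rank is entirely mechanical, and computer algebra systems implement it; a sympy sketch on an illustrative matrix (not the one of the text):

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 3, 1, 1],
            [0, 0, 0, 2]])

# rref() row-reduces A; the rank is the number of pivots (non-zero rows).
R, pivots = A.rref()
print(A.rank(), pivots)   # 3, (0, 1, 3): here row 3 = row 1 + row 2
```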

The Trace of a Matrix

The components of these vectors will be given by the entries of the matrices with respect to a basis. While the rank measures the linear independence of the row (or column) vectors of a matrix, the determinant (which is defined only for square matrices) is used to check whether a matrix is invertible and to explicitly construct the inverse of an invertible matrix.

A Multilinear Alternating Mapping

Such an expression (5.4) is also referred to as the expansion of the determinant of the matrix A with respect to its first row. We can also think of the determinant of the matrix A as a function defined on its columns.

Computing Determinants via a Reduction Procedure

The sequence of transformations defined in Exercise 5.2.4 exchanges the rows and the columns of the matrix A. The above proposition suggests that a sequence of transformations of type (D) on the rows of a square matrix simplifies the computation of its determinant.

Invertible Matrices

Note that in the inverse matrix B there is a transposition of the indices: up to the determinant factor, the element bij of B is given by the cofactor αji of A.
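The index transposition in the cofactor formula for the inverse, bij = αji / det(A), can be made concrete; a sketch (the helper cofactor below is defined here, not a library function):

```python
import numpy as np

def cofactor(A, i, j):
    """Cofactor α_ij: the signed determinant of A with row i and column j removed."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

def inverse_via_cofactors(A):
    n = A.shape[0]
    # Note the transposition: entry (i, j) of the inverse uses the cofactor α_ji.
    B = np.array([[cofactor(A, j, i) for j in range(n)] for i in range(n)])
    return B / np.linalg.det(A)

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
print(np.allclose(inverse_via_cofactors(A), np.linalg.inv(A)))   # True
```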

Basic Notions

We will systematically use the matrix formalism described in the previous Chapters 4 and 5. We check that the solution (1, −1) of the initial system is not a solution for the system AX = B.

The Space of Solutions for Reduced Systems

Starting from the bottom row, one determines the unknown corresponding to the pivot element; then, by substituting this unknown in the remaining equations, one repeats the procedure and thus determines the space of solutions. By replacing xm with its value (as a function of xm+1, . . . , xn) previously determined, one writes xm−1 as a function of the last unknowns xm+1, . . . , xn.
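Back substitution as just described, for a system already in upper triangular form with non-zero pivots; a minimal sketch, not the book's own code:

```python
import numpy as np

def back_substitution(U, b):
    """Solve U x = b for upper triangular U with non-zero diagonal entries,
    starting from the bottom row and substituting upwards."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # The unknowns x[i+1:] are already known; substitute them into row i.
        x[i] = (b[i] - np.dot(U[i, i + 1:], x[i + 1:])) / U[i, i]
    return x

U = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0,  1.0],
              [0.0, 0.0,  4.0]])
b = np.array([3.0, 7.0, 8.0])
print(back_substitution(U, b))   # agrees with np.linalg.solve(U, b)
```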

The Space of Solutions for a General Linear System

Since A is reduced, the linear system AX = B is reduced and hence solvable by the Gaussian method. In such a case we have rk(A) = rk(A, B) = 4, so there is a unique solution for the linear system.

Homogeneous Linear Systems

Once the elements in S are written in terms of the n − ρ free unknowns, a basis for S is given by fixing for these unknowns the values corresponding to the elements of the canonical basis in Rn−ρ. Since the first three rows in A (and then in (A, B)) are linearly independent, we choose x4, x5, x6 to be the free unknowns.
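Fixing the free unknowns at the values of the canonical basis of Rn−ρ is exactly how a nullspace basis is produced in practice; a sympy sketch on an illustrative matrix:

```python
from sympy import Matrix

A = Matrix([[1, 0, 2, -1],
            [0, 1, 1,  3]])

# Here ρ = rk(A) = 2 and n = 4, so the solution space S of AX = 0 has
# dimension n − ρ = 2; each basis vector of S is obtained by setting one
# free unknown to 1 and the others to 0.
for v in A.nullspace():
    print(v.T)
```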

Linear Transformations and Matrices

With respect to the bases B = (1, X, X2) for V and C = (1, X) for W, the matrix corresponding to f in the previous example is as follows (note that the second equality follows from Theorem 4.1.10). The matrix of f with respect to the bases B and C, which we denote by MC,Bf, is the element in Rm,n whose columns are given by the components with respect to C of the images under f of the basis elements in B.
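The recipe "columns of MC,Bf = components with respect to C of the images of the elements of B" is concrete enough to compute by hand. The example above is not quoted in full here, so assume, for illustration only, that f is the differentiation map from polynomials of degree at most 2 to polynomials of degree at most 1:

```python
import numpy as np

# Assume f = d/dX with B = (1, X, X^2) and C = (1, X).
# f(1) = 0, f(X) = 1, f(X^2) = 2X; their components in C form the columns:
M = np.array([[0, 1, 0],    # constant coefficients of f(1), f(X), f(X^2)
              [0, 0, 2]])   # coefficients of X

# p(X) = 3 + 5X + 7X^2 has components (3, 5, 7) with respect to B.
p_B = np.array([3, 5, 7])
print(M @ p_B)              # [ 5 14]: the components of p'(X) = 5 + 14X in C
```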

Basic Notions on Maps

Kernel and Image of a Linear Map

From the above lemma, the vector subspace Im(f) is generated by the images under f of any basis of R3. Its unique solution is (0, 0), so ker(f) = {0R2}, and from the above lemma we can conclude that f is injective.
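In practice both subspaces are read off from the matrix of f: ker(f) is the nullspace and Im(f) the column space; a sympy sketch on an illustrative (non-injective) map:

```python
from sympy import Matrix

# Matrix of some f : R^3 -> R^2 with respect to the canonical bases.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

print(A.nullspace())      # basis of ker(f): two vectors, so f is not injective
print(A.columnspace())    # basis of Im(f): one column, dim Im(f) = rk(A) = 1
```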

Isomorphisms

Theorem 7.4.4 shows that a linear isomorphism can exist only if its domain has the same dimension as its codomain. A condition characterizing isomorphisms can therefore be introduced only for vector spaces of the same dimension.

Computing the Kernel of a Linear Map

Proof (i) By the given hypothesis, from the definition of the kernel of a linear map we can write … As in Corollary 7.4.5 we can then write the isomorphism S → ker(f) given by … (ii) From the isomorphism of the previous point we then have …

Computing the Image of a Linear Map

Proof (i) With the given hypothesis, from the definition of the image of a linear map we can write … Representing the matrix by its columns, i.e. A = (C1 · · · Cn), we have … (compare this with the analogous result in Corollary 7.4.5).

Injectivity and Surjectivity Criteria

This statement follows from Lemma 7.3.3, since the images under f of a basis for V generate W (that is, they linearly span it). Remark 7.7.6 Let f : V → W be a linear map, with A its corresponding matrix with respect to given bases. To compute the rank of the corresponding matrix A with respect to the canonical basis, we reduce it by rows as usual.

Composition of Linear Maps

A mapping f is an isomorphism if and only if, for any choice of bases B for V and C for W, the corresponding matrix MC,Bf is invertible. Proof Assume that f is an isomorphism: then we can write dim(V) = dim(W), so that MC,Bf is a square matrix of size n × n (say).

Change of Basis in a Vector Space

Note that the columns of A are given by the components of the vectors in B with respect to the basis E. We consider the vector v = (1, −1, 1)B, whose components we want to determine with respect to C. The solutions of the corresponding linear system provide the entries of the change-of-basis matrix, which is found to be as follows.
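Determining the components of a vector with respect to a new basis amounts to solving a linear system whose coefficient matrix has the new basis vectors as columns; a numpy sketch with an arbitrary basis (not the one of the worked example):

```python
import numpy as np

# Columns of P: components of the new basis C with respect to the old basis.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

v_old = np.array([1.0, -1.0, 1.0])   # components of v in the old basis

# v_old = P v_C, so the components of v with respect to C solve P x = v_old.
v_C = np.linalg.solve(P, v_old)
print(v_C)                            # equivalently np.linalg.inv(P) @ v_old
```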

The Dual of a Vector Space

It is straightforward to extend to the complex case, mutatis mutandis, all the results of this chapter given above. In the next section we shall explicitly consider linear transformations of the complex vector space Cn.

Dirac's Bra-Ket Formalism

The analogy between (8.6) and (8.7) shows that, for a fixed basis of Hn, the action of the linear mapping φ with corresponding matrix A = MφB,B = (aks) is equivalently written as the action of the operator … Now let φ, ψ be two linear mappings on Hn with associated matrices A, B with respect to the Hermitian basis B.

Endomorphisms

From Theorem 7.9.6 and Remark 7.9.7, we know that an invertible matrix P gives a change of basis in Rn: there exists a basis C for Rn (the columns of P) such that P = ME,C and P−1 = MC,E. The following proposition confirms this: (i) φ is simple, i.e. there is a basis B for V such that MφB,B is diagonalizable; (ii) there is a basis C for V such that MφC,C is diagonal; (iii) given any basis D for V, the matrix MφD,D is diagonalizable.

Eigenvalues and Eigenvectors

Proof (i) ⇒ (ii): Since MφB,B is similar to a diagonal matrix, from the proof of Proposition 9.1.3 we know that there exists a basis C with respect to which MφC,C is diagonal. (ii) ⇒ (iii): Let C be a basis of V such that MφC,C is diagonal. Such a kernel is isomorphic (as recalled above) to the solution space of the linear system given by the matrix MφB,B − λ idV, where B is an arbitrary basis of V.
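Numerically, the eigenvalues are the roots of det(A − λI) and the eigenvectors span the kernels ker(A − λI); numpy returns both at once. A quick check on an arbitrary symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues 3 and 1 (in some order), with the columns of V spanning
# the corresponding eigenspaces ker(A - λ I).
eigvals, V = np.linalg.eig(A)
print(eigvals)
print(np.allclose(A @ V, V * eigvals))   # A v = λ v, column by column: True
```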

The Characteristic Polynomial of an Endomorphism

From the fundamental theorem of algebra (see Proposition A.5.7) we know that (X − α) is then a divisor of p(X), and that we have the corresponding decomposition. From Theorem 7.6.4 we know that dim Im(φ) = 4 − dim ker(φ) = 3, with a basis of the image of φ given by 3 linearly independent columns of A.

Diagonalisation of an Endomorphism

Since each root of the characteristic polynomial has algebraic multiplicity 1, by Corollary 9.4.2 the matrix A is diagonalizable, and indeed similar to a diagonal matrix whose diagonal elements are (see Proposition 9.4.4) the eigenvalues of A counted with their multiplicities.

The Jordan Normal Form

The algebraic multiplicity of λj actually coincides with the sum of the orders of the Jordan blocks with that same eigenvalue λj. Since the algebraic multiplicity of each of the eigenvalues λ = 1 and λ = 2 is equal to 1, their geometric multiplicity coincides with it.
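The block structure per eigenvalue can be computed symbolically; a sympy sketch on a small non-diagonalizable matrix (illustrative, not the matrix of the text):

```python
from sympy import Matrix

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 1]])

P, J = A.jordan_form()
print(J)
# One 2x2 Jordan block for λ = 2 and one 1x1 block for λ = 1: the algebraic
# multiplicity of λ = 2 (namely 2) is the sum of the orders of its blocks,
# while its geometric multiplicity is the number of blocks (here 1).
```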

Orthogonal Matrices and Isometries

Proposition 10.1.9 The set O(n) of orthogonal matrices in Rn,n is a group with respect to the usual matrix product. Remark 10.1.16 It is clear that this definition provides an equivalence relation within the collection of all orthonormal bases for En.

Self-adjoint Endomorphisms

From the above relations we conclude that φ(v) · w = v · φ(w) for each v, w ∈ En, that is, φ is self-adjoint. Then the endomorphism φ ∈ End(E2) corresponding to A with respect to the canonical basis is self-adjoint.

Orthogonal Projections

Proposition 10.3.2 Given the Euclidean vector space En, an endomorphism φ ∈ End(En) is an orthogonal projection if and only if it is self-adjoint and satisfies the condition φ2 = φ. Proof We have already shown that the conditions are necessary for an endomorphism to be an orthogonal projection in En.

The Diagonalization of Self-adjoint Endomorphisms

The corresponding eigenspaces V± are the solution spaces of the homogeneous linear systems associated with the corresponding matrices. Since φV is a restriction of φ, the elements (v2, . . . , vn) are also eigenvectors for φ, and they are orthogonal to v1.

The Diagonalization of Symmetric Matrices

From Proposition 10.4.5 we know that the self-adjointness of φ implies that there is an orthonormal basis of eigenvectors for φ. The proof that the eigenspaces corresponding to distinct eigenvalues of a self-adjoint endomorphism are orthogonal follows directly from Proposition 10.2.2.

Skew-Adjoint Endomorphisms

Theorem 11.1.4 Given an invertible skew-adjoint endomorphism φ : E2p → E2p, there exists an orthonormal basis B for E2p with respect to which the representative matrix of φ has the following form. Since φ is defined on a three-dimensional space and has a one-dimensional kernel, the spectrum of the map S = φ2 consists, by Proposition 11.1.4, of the simple eigenvalue λ0 = 0 and of an eigenvalue λ < 0 with multiplicity 2, which is such that 2λ = tr(MSE,E).

The Exponential of a Matrix

Another option is to use identities (c) and (b) in the previous proposition when Q is diagonalizable. In Example 11.3.1 below we will explicitly see that the exponential map, when restricted to 2-dimensional antisymmetric matrices, is indeed surjective onto the group SO(2).
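For a diagonalizable Q, the identity exp(Q) = P exp(D) P⁻¹ gives an alternative to the power series; a numerical cross-check with scipy on an arbitrary symmetric matrix:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])           # symmetric, hence diagonalizable

E1 = expm(Q)                          # matrix exponential computed directly

# Via diagonalization: Q = P D P^{-1} implies exp(Q) = P exp(D) P^{-1}.
d, P = np.linalg.eig(Q)
E2 = P @ np.diag(np.exp(d)) @ np.linalg.inv(P)

print(np.allclose(E1, E2))            # True
```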

Rotations in Two Dimensions

The restriction of the exponential map to the Lie algebra so(n) of antisymmetric matrices is surjective onto SO(n). Given the 2π-periodicity of the trigonometric functions, we see that every element in the special orthogonal group SO(2) corresponds bijectively to an angle θ ∈ [0, 2π).
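Surjectivity onto SO(2) is completely explicit: exponentiating θJ, with J the standard antisymmetric generator, yields the rotation by the angle θ; a numerical check:

```python
import numpy as np
from scipy.linalg import expm

theta = 0.7
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # antisymmetric generator of so(2)

R = expm(theta * J)
R_expected = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R, R_expected))    # True: exp(θJ) is the rotation by θ
```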

Rotations in Three Dimensions

For each rotation R of E3, there is a unique vector line (direction) that is left unchanged by the rotation. The amplitude of the rotation about the axis of rotation is given by the angle ρ obtained from (11.1), and implicitly given as follows.
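The invariant direction is the eigenvector of R with eigenvalue 1, and the amplitude ρ can be recovered from the trace via tr(R) = 1 + 2 cos ρ; a numpy sketch on a rotation about the z axis:

```python
import numpy as np

theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# The rotation axis is the (real) eigenvector with eigenvalue 1.
eigvals, V = np.linalg.eig(R)
axis = np.real(V[:, np.argmin(np.abs(eigvals - 1.0))])
print(axis)                               # proportional to (0, 0, 1)

# The amplitude follows from tr(R) = 1 + 2 cos ρ.
rho = np.arccos((np.trace(R) - 1.0) / 2.0)
print(np.isclose(rho, theta))             # True
```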

Fig. 11.1 The Euler angles

The Lie Algebra so(3)

The Angular Velocity

These and several related notions can be analyzed in terms of the spectral properties of orthogonal matrices illustrated above. Example 11.6.1 Consider the motion x(t) in E3 of a point mass such that the distance of x(t) from the origin of the coordinate system is fixed.

Rigid Bodies and Inertia Matrix

The Adjoint Endomorphism

Spectral Theory for Normal Endomorphisms

The Unitary Group

Quadratic Forms on Real Vector Spaces

Quadratic Forms on Complex Vector Spaces

The Minkowski Spacetime

Electro-Magnetism

Affine Spaces

Lines and Planes

General Linear Affine Varieties and Parallelism

The Cartesian Form of Linear Affine Varieties

Intersection of Linear Affine Varieties

Euclidean Affine Spaces

Orthogonality Between Linear Affine Varieties

The Distance Between Linear Affine Varieties

Bundles of Lines and of Planes

Symmetries

Conic Sections as Geometric Loci

The Equation of a Conic in Matrix Form

Reduction to Canonical Form of a Conic: Translations

Eccentricity: Part 1

Conic Sections and Kepler Motions

Reduction to Canonical Form of a Conic: Rotations

Eccentricity: Part 2

Why Conic Sections

Figures

Fig. 1.1 The parallelogram rule
Fig. 1.2 The opposite of a vector: A′ − O = −(A − O)
Fig. 1.4 The scaling λ(C − O) = (C′ − O) with λ > 1
Fig. 1.5 The bijection P(x, y) ↔ P − O = xi + yj in a plane
