
4.1 Basic Notions


Definition 4.1.1 A matrix $M$ with entries in $\mathbb{R}$ (or a real matrix) is a collection of elements $a_{ij} \in \mathbb{R}$, with $i = 1, \ldots, m$; $j = 1, \ldots, n$ and $m, n \in \mathbb{N}$, displayed as follows

$$
M = \begin{pmatrix}
a_{11} & a_{12} & \ldots & a_{1n} \\
a_{21} & a_{22} & \ldots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & \ldots & a_{mn}
\end{pmatrix}.
$$

The matrix $M$ above is said to be made of $m$ row vectors in $\mathbb{R}^n$, that is
$$R_1 = (a_{11}, \ldots, a_{1n}), \quad \ldots, \quad R_m = (a_{m1}, \ldots, a_{mn}),$$
or of $n$ column vectors in $\mathbb{R}^m$, that is
$$C_1 = (a_{11}, \ldots, a_{m1}), \quad \ldots, \quad C_n = (a_{1n}, \ldots, a_{mn}).$$

Thus the matrix $M$ above is an $m \times n$-matrix ($m$ rows $R_i \in \mathbb{R}^n$ and $n$ columns $C_j \in \mathbb{R}^m$). As a shorthand, by $M = (a_{ij})$ we shall denote a matrix $M$ with entry $a_{ij}$ at the $i$-th row and $j$-th column. We denote by $\mathbb{R}^{m,n}$ the collection of all $m \times n$-matrices whose entries are in $\mathbb{R}$.



Remark 4.1.2 It is sometimes useful to consider a matrix $M \in \mathbb{R}^{m,n}$ as the collection of its $n$ columns, or as the collection of its $m$ rows, that is to write

$$
M = (C_1 \; C_2 \; \ldots \; C_n) = \begin{pmatrix} R_1 \\ R_2 \\ \vdots \\ R_m \end{pmatrix}.
$$

An element $M \in \mathbb{R}^{1,n}$ is called an $n$-dimensional row matrix,
$$M = (a_{11} \; a_{12} \; \ldots \; a_{1n}),$$

while an element $M \in \mathbb{R}^{m,1}$ is called an $m$-dimensional column matrix,

$$
M = \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix}.
$$

A square matrix of order $n$ is an $n \times n$ matrix, that is an element in $\mathbb{R}^{n,n}$. An element $M \in \mathbb{R}^{1,1}$ is a scalar, that is a single element in $\mathbb{R}$. If $A = (a_{ij}) \in \mathbb{R}^{n,n}$ is a square matrix, the entries $(a_{11}, a_{22}, \ldots, a_{nn})$ give the (principal) diagonal of $A$.

Example 4.1.3 The bold typeset entries in
$$A = \begin{pmatrix} \mathbf{1} & 2 & 2 \\ -1 & \mathbf{0} & 3 \\ 2 & 4 & \mathbf{7} \end{pmatrix}$$
give the diagonal of $A$.

Proposition 4.1.4 The set $\mathbb{R}^{m,n}$ is a real vector space whose dimension is $mn$. With $A = (a_{ij})$, $B = (b_{ij}) \in \mathbb{R}^{m,n}$ and $\lambda \in \mathbb{R}$, the vector space operations are defined by
$$A + B = (a_{ij} + b_{ij}), \qquad \lambda A = (\lambda a_{ij}).$$

Proof We leave it to the reader to show that $\mathbb{R}^{m,n}$ equipped with the above defined operations is a vector space. We only remark that the zero element in $\mathbb{R}^{m,n}$ is given by the matrix $A$ with entries $a_{ij} = 0_{\mathbb{R}}$; such a matrix is also called the null matrix and denoted $0_{\mathbb{R}^{m,n}}$.

In order to show that the dimension of $\mathbb{R}^{m,n}$ is $mn$, consider the elementary $m \times n$-matrices

$$E_{rs} = (e^{(rs)}_{jk}), \qquad \text{with} \quad e^{(rs)}_{jk} = \begin{cases} 1 & \text{if } (j,k) = (r,s) \\ 0 & \text{if } (j,k) \neq (r,s). \end{cases}$$

Thus the matrix $E_{rs}$ has entries all zero but for the entry $(r,s)$ which is 1. Clearly there are $mn$ of them and it is immediate to show that they form a basis for $\mathbb{R}^{m,n}$.

Exercise 4.1.5 Let $A = \begin{pmatrix} 1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix} \in \mathbb{R}^{2,3}$. One computes
$$
\begin{pmatrix} 1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix}
= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
+ 2\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}
- \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}
+ 0\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}
- \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
+ \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
= E_{11} + 2E_{12} - E_{13} - E_{22} + E_{23}.
$$
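As a supplementary check, not part of the text, the decomposition above can be verified numerically. The sketch below assumes NumPy is available and builds the elementary matrices $E_{rs}$ directly from their definition.

```python
import numpy as np

def elementary(m, n, r, s):
    """Return the m x n elementary matrix E_rs: all zeros except a 1 at row r, column s (1-based)."""
    E = np.zeros((m, n))
    E[r - 1, s - 1] = 1.0
    return E

A = np.array([[1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])

# A = E11 + 2*E12 - E13 - E22 + E23, as in Exercise 4.1.5
decomposition = (elementary(2, 3, 1, 1)
                 + 2 * elementary(2, 3, 1, 2)
                 - elementary(2, 3, 1, 3)
                 - elementary(2, 3, 2, 2)
                 + elementary(2, 3, 2, 3))

assert np.array_equal(A, decomposition)
```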

In addition to forming a vector space, matrices of ‘matching size’ can be multiplied.

Definition 4.1.6 If $A = (a_{ij}) \in \mathbb{R}^{m,n}$ and $B = (b_{jk}) \in \mathbb{R}^{n,p}$, the product between $A$ and $B$ is the matrix in $\mathbb{R}^{m,p}$ defined by
$$C = (c_{ik}) = AB \in \mathbb{R}^{m,p}, \qquad \text{where} \quad c_{ik} = R_i^{(A)} \cdot C_k^{(B)} = \sum_{j=1}^{n} a_{ij} b_{jk},$$
with $i = 1, \ldots, m$ and $k = 1, \ldots, p$. Here $R_i^{(A)} \cdot C_k^{(B)}$ denotes the scalar product in $\mathbb{R}^n$ between the $i$-th row vector $R_i^{(A)}$ of $A$ and the $k$-th column vector $C_k^{(B)}$ of $B$.

Remark 4.1.7 Notice that the product $AB$, also called the row by column product, is defined only if the number of columns of $A$ equals the number of rows of $B$.

Exercise 4.1.8 Consider the matrices
$$A = \begin{pmatrix} 1 & 2 & -1 \\ 3 & 0 & 1 \end{pmatrix} \in \mathbb{R}^{2,3}, \qquad B = \begin{pmatrix} 1 & 2 \\ 2 & 1 \\ 3 & 4 \end{pmatrix} \in \mathbb{R}^{3,2}.$$

One has $AB = C = (c_{ik}) \in \mathbb{R}^{2,2}$ with
$$C = \begin{pmatrix} 2 & 0 \\ 6 & 10 \end{pmatrix}.$$

On the other hand, $BA = C' = (c'_{st}) \in \mathbb{R}^{3,3}$ with
$$C' = \begin{pmatrix} 7 & 2 & 1 \\ 5 & 4 & -1 \\ 15 & 6 & 1 \end{pmatrix}.$$

Clearly, comparing $C$ with $C'$ is meaningless, since they are in different spaces.
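For instance, the entries in the first row of $C$ are obtained by pairing the first row of $A$ with the columns of $B$:
$$c_{11} = R_1^{(A)} \cdot C_1^{(B)} = 1 \cdot 1 + 2 \cdot 2 + (-1) \cdot 3 = 2, \qquad c_{12} = R_1^{(A)} \cdot C_2^{(B)} = 1 \cdot 2 + 2 \cdot 1 + (-1) \cdot 4 = 0.$$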

Remark 4.1.9 With $A \in \mathbb{R}^{m,n}$ and $B \in \mathbb{R}^{p,q}$, the product $AB$ is defined only if $n = p$, giving a matrix $AB \in \mathbb{R}^{m,q}$. It is clear that the product $BA$ is defined only if $m = q$, and in such a case one has $BA \in \mathbb{R}^{p,n}$. When both the conditions $m = q$ and $n = p$ are satisfied, both products are defined. This is the case of the matrices $A$ and $B$ in the previous exercise.

Let us consider the space $\mathbb{R}^{n,n}$ of square matrices of order $n$. If $A$, $B$ are in $\mathbb{R}^{n,n}$, then both $AB$ and $BA$ are square matrices of order $n$. An example shows that in general one has $AB \neq BA$. If
$$A = \begin{pmatrix} 1 & 2 \\ 1 & -1 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & -1 \\ 1 & 0 \end{pmatrix},$$
one computes that
$$AB = \begin{pmatrix} 3 & -1 \\ 0 & -1 \end{pmatrix} \neq BA = \begin{pmatrix} 0 & 3 \\ 1 & 2 \end{pmatrix}.$$

Thus the product of matrices is non commutative. We shall say that two matrices $A, B \in \mathbb{R}^{n,n}$ commute if $AB = BA$. On the other hand, the associativity of the product in $\mathbb{R}$ and its distributivity with respect to the sum allow one to prove analogous properties for the space of matrices.

Proposition 4.1.10 The following identities hold:

(i) $A(BC) = (AB)C$, for any $A \in \mathbb{R}^{m,n}$, $B \in \mathbb{R}^{n,p}$, $C \in \mathbb{R}^{p,q}$,
(ii) $A(B + C) = AB + AC$, for any $A \in \mathbb{R}^{m,n}$, $B, C \in \mathbb{R}^{n,p}$,
(iii) $\lambda(AB) = (\lambda A)B = A(\lambda B)$, for any $A \in \mathbb{R}^{m,n}$, $B \in \mathbb{R}^{n,p}$, $\lambda \in \mathbb{R}$.

Proof (i) Consider three matrices $A = (a_{ih}) \in \mathbb{R}^{m,n}$, $B = (b_{hk}) \in \mathbb{R}^{n,p}$ and $C = (c_{kj}) \in \mathbb{R}^{p,q}$. From the definition of row by column product one has $AB = (d_{ik})$ with $d_{ik} = \sum_{h=1}^{n} a_{ih} b_{hk}$, while $BC = (e_{hj})$ with $e_{hj} = \sum_{k=1}^{p} b_{hk} c_{kj}$. The $ij$-entries of $(AB)C$ and $A(BC)$ are
$$\sum_{k=1}^{p} d_{ik} c_{kj} = \sum_{k=1}^{p} \left( \sum_{h=1}^{n} a_{ih} b_{hk} \right) c_{kj} = \sum_{k=1}^{p} \sum_{h=1}^{n} (a_{ih} b_{hk} c_{kj}),$$
$$\sum_{h=1}^{n} a_{ih} e_{hj} = \sum_{h=1}^{n} a_{ih} \left( \sum_{k=1}^{p} b_{hk} c_{kj} \right) = \sum_{h=1}^{n} \sum_{k=1}^{p} (a_{ih} b_{hk} c_{kj}).$$

These two expressions coincide (the last equality on both lines follows from the distributivity in $\mathbb{R}$ of the product with respect to the sum).

(ii) Take matrices $A = (a_{ih}) \in \mathbb{R}^{m,n}$, $B = (b_{hj}) \in \mathbb{R}^{n,p}$ and $C = (c_{hj}) \in \mathbb{R}^{n,p}$. The equality $A(B + C) = AB + AC$ is proven again by a direct computation of the $ij$-entry for both sides:
$$[A(B+C)]_{ij} = \sum_{h=1}^{n} a_{ih}(b_{hj} + c_{hj}) = \sum_{h=1}^{n} a_{ih} b_{hj} + \sum_{h=1}^{n} a_{ih} c_{hj} = [AB]_{ij} + [AC]_{ij} = [AB + AC]_{ij}.$$

(iii) This is immediate.

The matrix product in $\mathbb{R}^{n,n}$ is inner and it has a neutral element, a multiplication unit.

Definition 4.1.11 The unit matrix of order $n$, denoted by $I_n$, is the element in $\mathbb{R}^{n,n}$ given by
$$I_n = (\delta_{ij}), \qquad \text{with} \quad \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j. \end{cases}$$

The diagonal entries of $I_n$ are all 1, while its off-diagonal entries are all zero.

Remark 4.1.12 It is easy to prove that, with $A \in \mathbb{R}^{m,n}$, one has $A I_n = A$ and $I_m A = A$.

Proposition 4.1.13 The space $\mathbb{R}^{n,n}$ of square matrices of order $n$, endowed with the sum and the product as defined above, is a non abelian ring.

Proof Recall the definition of a ring given in A.1.6. The matrix product is an inner operation in $\mathbb{R}^{n,n}$, so the claim follows from the fact that $(\mathbb{R}^{n,n}, +, 0_{\mathbb{R}^{n,n}})$ is an abelian group and the results of Proposition 4.1.10.

Definition 4.1.14 A matrix $A \in \mathbb{R}^{n,n}$ is said to be invertible (also non-singular or non-degenerate) if there exists a matrix $B \in \mathbb{R}^{n,n}$ such that $AB = BA = I_n$. Such a matrix $B$ is denoted by $A^{-1}$ and is called the inverse of $A$.

Definition 4.1.15 If a matrix is non invertible, then it is called singular or degenerate.

Exercise 4.1.16 An element of the ring $\mathbb{R}^{n,n}$ need not be invertible. The matrix
$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \in \mathbb{R}^{2,2}$$
is invertible, with inverse
$$A^{-1} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix},$$
as it can be easily checked. On the other hand, the matrix
$$A = \begin{pmatrix} 1 & 0 \\ k & 0 \end{pmatrix} \in \mathbb{R}^{2,2}$$
is non invertible, for any value of the parameter $k \in \mathbb{R}$. It is easy to verify that the matrix equation
$$\begin{pmatrix} 1 & 0 \\ k & 0 \end{pmatrix} \begin{pmatrix} x & y \\ z & t \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
has no solutions: the left-hand side equals $\begin{pmatrix} x & y \\ kx & ky \end{pmatrix}$, so one would need both $y = 0$ and $ky = 1$, which is impossible.
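For the first matrix above, the claimed inverse is checked directly:
$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2.$$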

Proposition 4.1.17 The subset of invertible elements in $\mathbb{R}^{n,n}$ is a group with respect to the matrix product. It is called the general linear group of order $n$ and is denoted by $\mathrm{GL}(n,\mathbb{R})$ or simply by $\mathrm{GL}(n)$.

Proof Recall the definition of a group in A.2.7. We observe first that if $A$ and $B$ are both invertible then $AB$ is invertible, since $AB(B^{-1}A^{-1}) = (B^{-1}A^{-1})AB = I_n$; this means that $(AB)^{-1} = B^{-1}A^{-1}$, so $\mathrm{GL}(n)$ is closed under the matrix product. It is evident that $I_n^{-1} = I_n$ and that if $A$ is invertible, then $A$ is the inverse of $A^{-1}$, thus the latter is invertible.

Notice that since $AB$ is in general different from $BA$, the group $\mathrm{GL}(n)$ is non abelian. As an example, the non commuting matrices $A$ and $B$ considered in Remark 4.1.9 are both invertible.
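Indeed, one verifies directly that
$$A^{-1} = \frac{1}{3}\begin{pmatrix} 1 & 2 \\ 1 & -1 \end{pmatrix} = \frac{1}{3}A, \qquad B^{-1} = \begin{pmatrix} 0 & 1 \\ -1 & 1 \end{pmatrix},$$
since $A^2 = 3 I_2$ and $B \begin{pmatrix} 0 & 1 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 1 \end{pmatrix} B = I_2$.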

Definition 4.1.18 Given $A = (a_{ij}) \in \mathbb{R}^{m,n}$, its transpose, denoted by ${}^tA$, is the matrix obtained from $A$ by exchanging its rows with its columns, that is ${}^tA = (b_{ij}) \in \mathbb{R}^{n,m}$ with $b_{ij} = a_{ji}$.

Exercise 4.1.19 The matrix
$$A = \begin{pmatrix} 1 & 2 & -1 \\ 3 & 0 & 1 \end{pmatrix} \in \mathbb{R}^{2,3}$$
has transpose ${}^tA \in \mathbb{R}^{3,2}$ given by
$${}^tA = \begin{pmatrix} 1 & 3 \\ 2 & 0 \\ -1 & 1 \end{pmatrix}.$$

Proposition 4.1.20 The following identities hold:

(i) ${}^t(A + B) = {}^tA + {}^tB$, for any $A, B \in \mathbb{R}^{m,n}$,
(ii) ${}^t(AB) = {}^tB\,{}^tA$, for $A \in \mathbb{R}^{m,n}$ and $B \in \mathbb{R}^{n,p}$,
(iii) if $A \in \mathrm{GL}(n)$ then ${}^tA \in \mathrm{GL}(n)$; that is, if $A$ is invertible, its transpose is invertible as well, with $({}^tA)^{-1} = {}^t(A^{-1})$.

Proof (i) It is immediate.

(ii) Given $A = (a_{ij})$ and $B = (b_{ij})$, denote ${}^tA = (a'_{ij})$ and ${}^tB = (b'_{ij})$ with $a'_{ij} = a_{ji}$ and $b'_{ij} = b_{ji}$. If $AB = (c_{ij})$, then $c_{ij} = \sum_{h=1}^{n} a_{ih} b_{hj}$. The $ij$-element in ${}^t(AB)$ is then $\sum_{h=1}^{n} a_{jh} b_{hi}$; the $ij$-element in ${}^tB\,{}^tA$ is $\sum_{h=1}^{n} b'_{ih} a'_{hj} = \sum_{h=1}^{n} b_{hi} a_{jh}$, and these elements clearly coincide, for any $i$ and $j$.

(iii) It is enough to show that ${}^t(A^{-1})\,{}^tA = I_n$. From (ii) one has indeed
$${}^t(A^{-1})\,{}^tA = {}^t(A A^{-1}) = {}^tI_n = I_n.$$

This finishes the proof.
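As a supplementary numerical check, not part of the text, the identities of Proposition 4.1.20 can be tested on the matrices of Exercises 4.1.8 and 4.1.16; the sketch below assumes NumPy is available.

```python
import numpy as np

# Matrices from Exercise 4.1.8
A = np.array([[1.0, 2.0, -1.0],
              [3.0, 0.0, 1.0]])   # A in R^{2,3}
B = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0]])        # B in R^{3,2}

# (i): the transpose of a sum is the sum of the transposes (matrices of equal size)
assert np.allclose((A + A).T, A.T + A.T)

# (ii): the transpose of a product is the product of the transposes in reverse order
assert np.allclose((A @ B).T, B.T @ A.T)

# (iii): for an invertible square matrix, the inverse of the transpose is the transpose of the inverse
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # invertible matrix from Exercise 4.1.16
assert np.allclose(np.linalg.inv(M.T), np.linalg.inv(M).T)
```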

Definition 4.1.21 A square matrix of order $n$, $A = (a_{ij}) \in \mathbb{R}^{n,n}$, is said to be symmetric if ${}^tA = A$, that is, if for any $i, j$ it holds that $a_{ij} = a_{ji}$.

Exercise 4.1.22 The matrix
$$A = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 0 & 1 \\ -1 & 1 & -1 \end{pmatrix}$$
is symmetric.
