
Linear Algebra KNU Math 231 Classnotes


Academic year: 2023


Try to interpret a solution to this equation of column vectors in terms of the column picture. It is clear that by performing any one of these row operations, the solutions of the system of equations do not change.
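
As a sanity check (a small example of my own, not from the lecture), subtract twice the first equation from the second; the operation changes the system but not its solutions:

\[
\begin{aligned}
x + y &= 3\\
2x + 3y &= 8
\end{aligned}
\qquad\longrightarrow\qquad
\begin{aligned}
x + y &= 3\\
y &= 2
\end{aligned}
\]

Both systems have exactly the one solution (x, y) = (1, 2).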

Inverses and Transposes

Multiplying by A−1 is really just doing row operations, so if we do those row operations on the augmented matrix [A | I], we get A−1 in the right-hand half. Using Gaussian elimination to find the inverse takes about twice as long as solving a single system, since we also have to apply every row operation to the augmented part I while eliminating A.
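
A minimal NumPy sketch of this idea (the matrix A and the helper name are my own, and partial pivoting is omitted for clarity):

    import numpy as np

    def inverse_by_gauss_jordan(A):
        """Reduce [A | I] to [I | A^-1] by row operations (assumes nonzero pivots appear in order)."""
        n = A.shape[0]
        aug = np.hstack([A.astype(float), np.eye(n)])    # the augmented matrix [A | I]
        for i in range(n):
            aug[i] = aug[i] / aug[i, i]                   # scale the pivot row
            for j in range(n):
                if j != i:
                    aug[j] = aug[j] - aug[j, i] * aug[i]  # eliminate above and below the pivot
        return aug[:, n:]                                 # the right-hand half is now A^-1

    A = np.array([[2.0, 1.0], [5.0, 3.0]])
    print(inverse_by_gauss_jordan(A))   # [[ 3. -1.] [-5.  2.]]
    print(np.linalg.inv(A))             # same answer from the library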

Triangular Factors and Row Exchanges

A permutation matrix is a matrix that we can obtain from the identity by performing row swaps. If we run an elimination and need to do a row exchange, we can pretend we did that exchange before we started the elimination.
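
A small made-up example of this "swap first" point of view, written as the factorization PA = LU, where P collects the row exchange done up front:

\[
A=\begin{pmatrix}0&2\\3&4\end{pmatrix},\qquad
P=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad
PA=\begin{pmatrix}3&4\\0&2\end{pmatrix}
=\underbrace{\begin{pmatrix}1&0\\0&1\end{pmatrix}}_{L}
\underbrace{\begin{pmatrix}3&4\\0&2\end{pmatrix}}_{U}.
\]

Elimination on A itself would stall at the zero pivot, but after the exchange no further row operations are needed, so here L = I.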

Vector Spaces and Subspaces

For a subset S of vectors in a vector space V, the subspace generated by S is the smallest subspace of V that contains S. Use this to argue that for any subset S of V, the subspace generated by S is uniquely defined.

The rank r ≤ n of an m × n matrix M is the number of pivots when it has been reduced to echelon form. We noted that, unlike the solution set of Mx = 0, the solution set of Mx = b is not necessarily a subspace (for b ≠ 0 it does not contain the zero vector).

Linear Independence

Then you should be able to express b = (7, 0, −8) as a linear combination of the other three vectors. Given a basis B of a vector space V, a vector v in V has a unique expression as a linear combination of vectors in B.
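
The uniqueness deserves the one-line argument that is not written out above: if some v had two expressions in the basis, subtracting them would give a nontrivial dependence among the basis vectors, which is impossible. Concretely,

\[
v=\sum_i a_i b_i=\sum_i c_i b_i
\;\Longrightarrow\;
\sum_i (a_i-c_i)\,b_i=0
\;\Longrightarrow\;
a_i=c_i \text{ for every } i,
\]

because the vectors b_i of the basis B are linearly independent.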

The Four Fundamental Subspaces

Since Mx = 0 ⇐⇒ Ux = 0, the columns of M are independent if and only if the same columns of U are independent. Clearly M and U share the same row space, and it is generated by the pivot (nonzero) rows of U. In general, the rows of L corresponding to the non-pivot (zero) rows of U give combinations of the rows of M that equal zero, so they lie in the left null space; there are exactly m − r of them, so they form a basis for N(MT).

Since they form the standard basis for Rn, we then have N(MT) = Rn, and so on. Note that if M has a left inverse L, then LT is a right inverse of MT: MTLT = (LM)T = IT = I. The following example shows how we can find a left inverse for a tall matrix M, one with n ≤ m.

So adding a vector of N(MT) to any row of L does not change the fact that L is a left inverse of M. Here we found the right inverse R of a wide matrix M (one with m ≤ n), and all we needed was m pivots.
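
A quick NumPy illustration of the usual formulas (the matrices are my own examples): for a tall M with independent columns, L = (MTM)−1MT is a left inverse; for a wide M with independent rows, R = MT(MMT)−1 is a right inverse.

    import numpy as np

    M_tall = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 x 2, rank 2
    L = np.linalg.inv(M_tall.T @ M_tall) @ M_tall.T            # left inverse: L @ M_tall = I
    print(np.round(L @ M_tall, 10))

    M_wide = M_tall.T                                          # 2 x 3, rank 2
    R = M_wide.T @ np.linalg.inv(M_wide @ M_wide.T)            # right inverse: M_wide @ R = I
    print(np.round(M_wide @ R, 10))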

Graph Applications

A component of a graph G is a maximal subset C of vertices such that for every two vertices u and v in C there is a path in G between u and v. The characteristic vector χC of a cycle C is the vector (x1, . . . , xm) such that xi = 1 if ei is a forward arc in C, xi = −1 if ei is a backward arc in C, and xi = 0 otherwise. We also see from Fact 2.21 that the rows of MG corresponding to the arcs of a cycle are linearly dependent.

The dimension r of the row space is the number of edges in a largest subgraph without cycles (a spanning forest). The edges separating a face from the rest of the plane form a cycle, and it is not difficult to see that these "face cycles" generate the cycle space of G (the left null space of MG). (Generally, in graph theory or topology, we count the outer face as a face, but not for today!) The number of faces is therefore exactly the dimension of the cycle space, so f = m − n + c(G).

We need the sum of the values on the arcs around each cycle (paying attention to the orientation) to be 0. MGx = b has a solution if and only if the values of b corresponding to the arcs of any cycle sum, with the signs given by the orientation, to 0.
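
A small NumPy check of this condition on a made-up directed triangle with arcs 1→2, 2→3 and 1→3 (one row of MG per arc, −1 at the tail and +1 at the head): b is in the column space of MG exactly when its signed sum around the cycle vanishes.

    import numpy as np

    M_G = np.array([[-1.0,  1.0, 0.0],     # arc 1 -> 2
                    [ 0.0, -1.0, 1.0],     # arc 2 -> 3
                    [-1.0,  0.0, 1.0]])    # arc 1 -> 3

    chi = np.array([1.0, 1.0, -1.0])       # characteristic vector of the cycle: forward, forward, backward

    b_good = np.array([2.0, 3.0, 5.0])     # 2 + 3 - 5 = 0, so M_G x = b_good has a solution
    b_bad  = np.array([2.0, 3.0, 4.0])     # 2 + 3 - 4 != 0, so it does not

    for b in (b_good, b_bad):
        aug = np.column_stack([M_G, b])
        solvable = np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(M_G)
        print(chi @ b, solvable)           # the cycle sum is 0 exactly when b is in the column space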

Linear Transformations

If we walk around a cycle and try to assign values xi at the vertices matching the given values on the arcs, it might not work out. Viewing the value xi at each vertex as a voltage, and the values on the arcs as voltage drops (or as currents, if the wire between two vertices is assumed to have resistance 1), there is a solution to MGx = b if and only if the drops around each closed walk add up to 0. You may recognize this as Kirchhoff's voltage law (conservation of energy).

A good way to find the matrix that gives a transformation is to first consider what the transformation does to a basis of the vector space.

How do we represent the projection onto the line Lθ that makes a positive angle of θ with the x-axis? For the homework, you need to know that the kernel of a transformation T is the set of vectors v such that T(v) = 0.
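
Following that recipe for the projection onto Lθ (a worked answer of my own, not copied from the notes): project the standard basis vectors onto the unit vector (cos θ, sin θ) and put the images in the columns,

\[
P_\theta=\begin{pmatrix}\cos^2\theta & \sin\theta\cos\theta\\[2pt] \sin\theta\cos\theta & \sin^2\theta\end{pmatrix},
\]

whose first column is the projection of e1 = (1, 0) and whose second column is the projection of e2 = (0, 1). The kernel of this transformation is the line perpendicular to Lθ.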

Orthogonal Vectors and Subspaces

For a subspace V of Rn, the orthogonal complement V⊥ (read 'V perp') of V is the space of all vectors orthogonal to all vectors of V. To show that two subspaces are orthogonal, it suffices to show that every vector in a basis of one space is orthogonal to every vector in a basis of the other. Since the rows of a matrix contain a basis of its row space, we see that the row space and null space of any matrix are orthogonal.

In fact, since the null space is the set of all vectors x such that r·x = 0 for every row r, the null space N(M) of a matrix M is exactly the orthogonal complement C(MT)⊥ of its row space.
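
A tiny NumPy confirmation with a made-up matrix: a vector in the null space is orthogonal to every row.

    import numpy as np

    M = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
    x = np.array([1.0, -2.0, 1.0])    # M @ x = 0, so x is in the null space N(M)
    print(M @ x)                      # [0. 0.]
    print(M[0] @ x, M[1] @ x)         # each row of M is orthogonal to x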

Cosines and Projections

Finding an orthonormal basis of a higher-dimensional subspace is the same idea: from the third basis vector we need to remove the projections on the previous two. The projection onto a vector is a linear transformation, so it can be represented as (multiplication on the left by) a matrix P.
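
For projection onto a single nonzero vector a, the matrix can be written down directly (the standard formula, recorded here for reference):

\[
P=\frac{a\,a^{T}}{a^{T}a},\qquad Pb=\frac{a^{T}b}{a^{T}a}\,a .
\]

For example a = (1, 1) gives P = (1/2) [[1, 1], [1, 1]], which sends every vector of R2 to its projection on the 45-degree line.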

Projections and Least Squares

We'll come back to this in a few sections and look at an orderly way to do it. Since Mx̂ is the projection of b onto the column space of M, this matrix M(MTM)−1MT is called the projection matrix. We don't divide by matrices, but if we did, we could write the projection matrix as MMT/(MTM), by analogy with the single-vector case.

For an m × n matrix M, the matrix MTM is a symmetric n × n matrix that is invertible if and only if the rank of M is n (that is, M has independent columns). On the other hand, its rank cannot be lower than that of M, since M and MTM have the same null space (see the argument just below). Any matrix P with these properties projects vectors onto the column space of some matrix M.
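
The argument referred to, spelled out since it does not appear above: if MTMx = 0, then multiplying on the left by xT gives

\[
x^{T}M^{T}Mx=\lVert Mx\rVert^{2}=0\;\Longrightarrow\;Mx=0,
\]

and conversely Mx = 0 clearly forces MTMx = 0. So M and MTM have the same null space, hence the same rank.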

A nice application of finding the approximate solution x̂ of a system is using it to find the least-squares regression line of a data set. How about replacing w with the nearest vector ŵ in the column space and finding the solution x̂ for that?
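
A minimal NumPy sketch of the regression-line recipe (the data points are invented): write the line y = c + dt at the data points as M[c, d]T = w and solve the normal equations MTMx̂ = MTw.

    import numpy as np

    t = np.array([0.0, 1.0, 2.0, 3.0])               # made-up data (t_i, y_i)
    y = np.array([1.0, 2.0, 2.0, 4.0])

    M = np.column_stack([np.ones_like(t), t])        # each row is (1, t_i)
    x_hat = np.linalg.solve(M.T @ M, M.T @ y)        # normal equations M^T M x = M^T y
    print(x_hat)                                     # [c, d]: intercept and slope of the best line
    print(np.linalg.lstsq(M, y, rcond=None)[0])      # the library routine agrees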

Orthogonal Bases and Gram-Schmidt

The matrix Q whose columns are a set of orthonormal vectors has some other nice properties that follow from the fact that QTQ = I. For an orthogonal matrix Q we have QT = Q−1, which is convenient: since the inverse is immediate, it is easy to solve Qx = b. Also Q(QTQ)−1QT = QIQT = QQT, so QQT =: PQ is the matrix projecting onto the column space of Q.

Instead of calculating x̂ = (MTM)−1MTb directly for the least-squares problem, write M = QR; then MTM = RTQTQR = RTR, so the normal equations reduce to Rx̂ = QTb, which is solved by back substitution since R is triangular.
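
A compact classical Gram-Schmidt sketch in NumPy (my own helper, written for clarity rather than numerical robustness), producing the Q and R used above:

    import numpy as np

    def gram_schmidt_qr(M):
        """Classical Gram-Schmidt: Q has orthonormal columns, R is upper triangular, and M = QR."""
        m, n = M.shape
        Q = np.zeros((m, n))
        R = np.zeros((n, n))
        for j in range(n):
            v = M[:, j].astype(float)
            for i in range(j):
                R[i, j] = Q[:, i] @ M[:, j]      # component along the earlier orthonormal vectors
                v = v - R[i, j] * Q[:, i]        # subtract those projections
            R[j, j] = np.linalg.norm(v)
            Q[:, j] = v / R[j, j]                # normalize what is left
        return Q, R

    M = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
    Q, R = gram_schmidt_qr(M)
    print(np.round(Q.T @ Q, 10))                 # identity: the columns of Q are orthonormal
    print(np.round(Q @ R - M, 10))               # zero matrix: M = QR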

Fast Fourier Transforms

To generalize this (to approximate periodic functions passing through given points), we should find functions such as cos x, cos kx, sin x, sin(3kx) + cos(2kx), with period dividing 2π, such that the corresponding vectors of sampled values are orthogonal. Remember that in the complex plane C = R × iR, a point x + iy can be expressed as re^(iθ) = r(cos θ + i sin θ). This notation is exceptionally useful, and is appropriate since one can show that e^(iθ)e^(iθ′) = e^(i(θ+θ′)).

The number ωn = e^(2πi/n) is the primitive n-th root of unity, since ωn^n = (e^(2πi/n))^n = e^(2πi) = cos 2π + i sin 2π = 1. This sum is the sum of n vectors that are evenly distributed around the unit circle in the complex plane, so it equals 0.
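
A quick NumPy check of both statements for n = 4 (my own snippet): the powers of ω4 = e^(2πi/4) = i lie on the unit circle, sum to zero, and give a Fourier matrix with orthogonal columns.

    import numpy as np

    n = 4
    omega = np.exp(2j * np.pi / n)                 # the primitive n-th root of unity
    powers = omega ** np.arange(n)                 # 1, i, -1, -i: evenly spread around the circle
    print(np.round(powers.sum(), 10))              # 0

    F = np.array([[omega ** (j * k) for k in range(n)] for j in range(n)])  # F_jk = omega^(jk)
    print(np.round(F.conj().T @ F, 10))            # n * I, so the columns of F are orthogonal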

Introduction

Properties of the Determinant

We now know how the determinant behaves under row operations, so we can calculate the determinant using Gaussian elimination. We will prove these properties after developing some formulas for the determinant in the next section.
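
Concretely (a small example of my own): reduce to triangular form, multiply the pivots, and attach a factor of −1 for every row exchange along the way,

\[
\det\begin{pmatrix}2&1\\4&5\end{pmatrix}
=\det\begin{pmatrix}2&1\\0&3\end{pmatrix}
=2\cdot 3=6 .
\]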

Formulae for Determinants

We have decomposed the matrix into "summand matrices", each with one nonzero entry in each row. Unless there was a nonzero entry in each column, the determinant of the summand was zero. If there was one nonzero entry per column, the determinant was the product of those entries, multiplied by a factor of −1 when the columns were in an odd permutation of the correct order.

In this decomposition, we get a nonzero summand by choosing a permutation of the columns and then multiplying the entries along the diagonal. Where Sn is the set of n × n permutation matrices (equivalently, of permutations of {1, . . . , n}), the permutation formula for M = [mi,j] then becomes the formula displayed below. This permutation formula can be described in another way; it leads to the so-called cofactor expansion formula.
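
For reference, the standard way this permutation formula is written is

\[
\det M=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,m_{1,\sigma(1)}\,m_{2,\sigma(2)}\cdots m_{n,\sigma(n)},
\]

where sgn(σ) is +1 for even permutations and −1 for odd ones.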

The (i, j)-th minor of an n × n matrix M is the (n−1) × (n−1) matrix Mij obtained from M by removing the i-th row and the j-th column. The first equation is called the "cofactor expansion along the i-th row" and the second is the "cofactor expansion along the j-th column".
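
A small worked instance of the row expansion (my own 3 × 3 example), along the first row:

\[
\det\begin{pmatrix}1&2&0\\3&1&4\\0&2&1\end{pmatrix}
=1\cdot\det\begin{pmatrix}1&4\\2&1\end{pmatrix}
-2\cdot\det\begin{pmatrix}3&4\\0&1\end{pmatrix}
+0\cdot\det\begin{pmatrix}3&1\\0&2\end{pmatrix}
=1(-7)-2(3)+0=-13 .
\]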

Applications of Determinants

This is the expansion along the first row of a matrix with the same first row as M, but we replace the cofactor c1n with the cofactor c2n. This is the (i, n) cofactor of the matrix M′ that we get from M by replacing its second row with its first. The entire expression is therefore the determinant of the matrix M′, whose first two rows are both equal to the first row of M.

This is the cofactor expansion along the i-th column of the matrix Bi obtained from M by replacing its i-th column with the vector b. The best thing you've seen in days is that this holds true even if the sides of the box aren't orthogonal. Since the determinant of M is preserved by taking the transpose, we can alternately think of it as the volume of the figure spanned by the columns of M.
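
Cramer's rule as described here, checked on a tiny invented system in NumPy:

    import numpy as np

    M = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([3.0, 5.0])

    x = np.empty(2)
    for i in range(2):
        B_i = M.copy()
        B_i[:, i] = b                                   # replace the i-th column with b
        x[i] = np.linalg.det(B_i) / np.linalg.det(M)    # x_i = det(B_i) / det(M)
    print(x)                                            # [0.8 1.4]
    print(np.linalg.solve(M, b))                        # the direct solve agrees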

If M can be successfully factored into triangular factors, let the i-th diagonal entry of the (upper) triangular factor be called the i-th pivot ti. Let Mm refer to the square matrix consisting of the intersection of the first m rows and m columns of M.
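
These leading submatrices tie the pivots to determinants; the standard relation (stated here for completeness) is

\[
\det M_m=t_1t_2\cdots t_m,
\qquad\text{so}\qquad
t_m=\frac{\det M_m}{\det M_{m-1}},
\]

and M has such a triangular factorization without row exchanges exactly when every leading determinant det Mm is nonzero.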

Introduction
