
Operators, Eigenvectors, and Eigenfunctions

If you’ve heard the phrase “quantum operator” and you’re wondering “What exactly is an operator?”, you’ll be happy to learn that an operator is simply an instruction to perform a certain process on a number, vector, or function.

You’ve undoubtedly seen operators before, although you may not have called them that. But you know that the symbol “√” is an instruction to take the square root of whatever appears under the roof of the symbol, and “d( )/dx” tells you to take the first derivative with respect to x of whatever appears inside the parentheses.


The operators you’ll encounter in quantum mechanics are called “linear” because applying them to a sum of vectors or functions gives the same result as applying them to the individual vectors or functions and then summing the results. So if $\hat{O}$ is a linear operator¹ and $f_1$ and $f_2$ are functions, then

$$\hat{O}(f_1 + f_2) = \hat{O}(f_1) + \hat{O}(f_2). \tag{2.1}$$

Linear operators also have the property that multiplying a function by a scalar and then applying the operator gives the same result as first applying the operator and then multiplying the result by the scalar. So if $c$ is a (potentially complex) scalar and $f$ is a function, then

$$\hat{O}(cf) = c\,\hat{O}(f), \tag{2.2}$$

if $\hat{O}$ is a linear operator.
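If you’d like to verify these two properties for a familiar operator, here is a minimal sketch (using Python with SymPy, which is my choice and not part of this book) that checks Eqs. 2.1 and 2.2 for the derivative operator d/dx:

```python
import sympy as sp

x, c = sp.symbols('x c')
f1 = sp.sin(x)   # first test function
f2 = sp.exp(x)   # second test function

# Eq. 2.1: applying O = d/dx to a sum equals the sum of the results
lhs = sp.diff(f1 + f2, x)
rhs = sp.diff(f1, x) + sp.diff(f2, x)
print(sp.simplify(lhs - rhs) == 0)   # True

# Eq. 2.2: scalars pass through the operator
lhs = sp.diff(c * f1, x)
rhs = c * sp.diff(f1, x)
print(sp.simplify(lhs - rhs) == 0)   # True
```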

To understand the operators used in quantum mechanics, I think it’s helpful to begin by representing an operator as a square matrix and considering what happens when you multiply a matrix and a vector (in quantum mechanics there are times when it’s easier to comprehend a process by considering matrix mathematics, and this is one of those times). From the rules of matrix multiplication, you may remember that multiplying a matrix ($\bar{\bar{R}}$) by a column vector $\vec{A}$ works like this²:

$$\bar{\bar{R}}\vec{A} = \begin{pmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{pmatrix} \begin{pmatrix} A_1 \\ A_2 \end{pmatrix} = \begin{pmatrix} R_{11}A_1 + R_{12}A_2 \\ R_{21}A_1 + R_{22}A_2 \end{pmatrix}. \tag{2.3}$$

This type of multiplication can be done only when the number of columns of the matrix equals the number of rows of the vector (two in this case, since $\vec{A}$ has two components). So the process of multiplying a matrix by a vector produces another vector – the matrix has “operated” on the vector, transforming it into another vector. That’s why you’ll see linear operators described as “linear transformations” in some texts.
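To see the “row times column” pattern of Eq. 2.3 spelled out, here’s a minimal Python sketch (the function name matvec and the sample values are mine, chosen purely for illustration):

```python
def matvec(R, A):
    """Multiply a 2x2 matrix R by a two-component column vector A (Eq. 2.3)."""
    return [
        R[0][0] * A[0] + R[0][1] * A[1],  # new first component: weighted by row 1
        R[1][0] * A[0] + R[1][1] * A[1],  # new second component: weighted by row 2
    ]

# An arbitrary matrix and vector, just to exercise the rule:
print(matvec([[1, 2], [3, 4]], [5, 6]))  # [17, 39]
```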

What effect does this type of operation have on the vector? That depends on the matrix and on the vector. Consider, for example, the matrix

$$\bar{\bar{R}} = \begin{pmatrix} 4 & -2 \\ -2 & 4 \end{pmatrix}$$

and the vector $\vec{A} = \hat{\imath} + 3\hat{\jmath}$, shown in Fig. 2.1a. Writing the components of $\vec{A}$ as a column vector and multiplying gives

$$\bar{\bar{R}}\vec{A} = \begin{pmatrix} 4 & -2 \\ -2 & 4 \end{pmatrix} \begin{pmatrix} 1 \\ 3 \end{pmatrix} = \begin{pmatrix} (4)(1) + (-2)(3) \\ (-2)(1) + (4)(3) \end{pmatrix} = \begin{pmatrix} -2 \\ 10 \end{pmatrix}. \tag{2.4}$$

So the operation of matrix $\bar{\bar{R}}$ on vector $\vec{A}$ produces another vector that has a different length and points in a different direction. This new vector is shown as vector $\vec{A}'$ in Fig. 2.1b.

[Figure 2.1: Vector $\vec{A}$ before (a) and after (b) operation of matrix $\bar{\bar{R}}$. Panel (a) shows $\vec{A}$ with $A_x = 1$ and $A_y = 3$; panel (b) shows $\vec{A}'$ with $A'_x = -2$ and $A'_y = 10$. Operation by matrix $\bar{\bar{R}}$ changes the length and direction of vector $\vec{A}$.]

¹ There are several ways of denoting an operator, but the most common in quantum texts is to put a caret hat (ˆ) on top of the operator label.

² There doesn’t seem to be a standard notation for matrices in quantum books, so I’ll use the double-bar hat (¯¯) for two-dimensional matrices and the vector symbol ($\vec{\phantom{A}}$) or the ket symbol $|\,\rangle$ for single-column matrices.
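As a quick numerical check, the same multiplication can be reproduced with NumPy (a minimal sketch; the variable names are mine):

```python
import numpy as np

R = np.array([[4, -2],
              [-2, 4]])   # the example matrix R
A = np.array([1, 3])      # vector A = i-hat + 3 j-hat

A_prime = R @ A           # matrix-vector product of Eq. 2.3
print(A_prime)            # [-2 10], matching Eq. 2.4
```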

Why does a matrix operating on a vector generally change the direction of the vector? You can understand that by realizing that the x-component of the new vector $\vec{A}'$ is a weighted combination of both components of the original vector $\vec{A}$, and the weighting coefficients are provided by the first row of matrix $\bar{\bar{R}}$. Likewise, the y-component of $\vec{A}'$ is a weighted combination of both components of $\vec{A}$, with weighting coefficients provided by the second row of matrix $\bar{\bar{R}}$.

This means that, depending on the values of the matrix elements and the components of the original vector, the weighted combinations will, in general, endow the new vector with a different magnitude from that of the original vector. And here’s a key consideration: if the ratio of the new components differs from the ratio of the original components, then the new vector will point in a different direction from that of the original vector. In such cases, the relative amounts of the basis vectors are changed by the operation of the matrix on the vector.

[Figure 2.2: Vector $\vec{B}$ before (a) and after (b) operation of matrix $\bar{\bar{R}}$. Panel (a) shows $\vec{B}$ with $B_x = 1$ and $B_y = 1$; panel (b) shows $\vec{B}'$ with $B'_x = 2$ and $B'_y = 2$. Operation by matrix $\bar{\bar{R}}$ changes the length but not the direction of $\vec{B}$.]

Now consider the effect of matrix $\bar{\bar{R}}$ on a different vector – for example, vector $\vec{B} = \hat{\imath} + \hat{\jmath}$ shown in Fig. 2.2a. In this case, the multiplication looks like this:

$$\bar{\bar{R}}\vec{B} = \begin{pmatrix} 4 & -2 \\ -2 & 4 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} (4)(1) + (-2)(1) \\ (-2)(1) + (4)(1) \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \end{pmatrix} = 2\begin{pmatrix} 1 \\ 1 \end{pmatrix} = 2\vec{B}. \tag{2.5}$$

So operating on vector $\vec{B}$ with matrix $\bar{\bar{R}}$ stretches the length of $\vec{B}$ to twice its original value but does not change the direction of $\vec{B}$. That means that the relative amounts of the basis vectors in vector $\vec{B}'$ are the same as in vector $\vec{B}$.

A vector for which the direction is not changed after multiplication by a matrix is called an “eigenvector” of that matrix, and the factor by which the length of the vector is scaled is called the “eigenvalue” for that eigenvector (if the vector’s length is also unaffected by operation of the matrix, the eigenvalue for that eigenvector equals one). So vector $\vec{B} = \hat{\imath} + \hat{\jmath}$ is an eigenvector of matrix $\bar{\bar{R}}$ with an eigenvalue of 2.

Eq. 2.5 is an example of an “eigenvalue equation”; the general form is

$$\bar{\bar{R}}\vec{A} = \lambda\vec{A}, \tag{2.6}$$

in which $\vec{A}$ represents an eigenvector of matrix $\bar{\bar{R}}$ with eigenvalue $\lambda$.

The procedure for determining the eigenvalues and eigenvectors of a matrix is not difficult; you can see that procedure and several worked examples on the book’s website. If you work through that process for matrix $\bar{\bar{R}}$ in the previous example, you’ll find that the vector $\vec{C} = \hat{\imath} - \hat{\jmath}$ is also an eigenvector of matrix $\bar{\bar{R}}$; its eigenvalue is 6.
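If you’d rather not wait for the website, here is a sketch of the standard procedure (the characteristic-equation approach) applied to matrix $\bar{\bar{R}}$; the intermediate steps below are mine. Setting $\det(\bar{\bar{R}} - \lambda\bar{\bar{I}}) = 0$ gives

$$\det\begin{pmatrix} 4-\lambda & -2 \\ -2 & 4-\lambda \end{pmatrix} = (4-\lambda)^2 - 4 = 0,$$

so $4 - \lambda = \pm 2$, which means $\lambda = 2$ or $\lambda = 6$. Substituting each $\lambda$ back into $(\bar{\bar{R}} - \lambda\bar{\bar{I}})\vec{A} = 0$ yields the eigenvectors $\hat{\imath} + \hat{\jmath}$ (for $\lambda = 2$) and $\hat{\imath} - \hat{\jmath}$ (for $\lambda = 6$).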

Here are two helpful hints for the matrices you’re likely to encounter in quantum mechanics: the sum of the eigenvalues of a matrix is equal to the trace of the matrix (that is, the sum of the diagonal elements of the matrix, which is 8 in this case), and the product of the eigenvalues is equal to the determinant of the matrix (which is 12 in this case).
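Both hints are easy to confirm numerically; here’s a minimal NumPy sketch (np.linalg.eig returns normalized eigenvectors, and the ordering of the eigenvalues may vary):

```python
import numpy as np

R = np.array([[4, -2],
              [-2, 4]])

vals, vecs = np.linalg.eig(R)
print(vals)   # eigenvalues 6 and 2 (ordering may vary)
print(vecs)   # columns are normalized eigenvectors, parallel to (1,-1) and (1,1)

# Hint 1: the eigenvalues sum to the trace (8 here)
print(np.isclose(vals.sum(), np.trace(R)))         # True

# Hint 2: the eigenvalues multiply to the determinant (12 here)
print(np.isclose(vals.prod(), np.linalg.det(R)))   # True
```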

Just as matrices act as operators on vectors to produce new vectors, there are mathematical processes that act as operators on functions to produce new functions. If the new function is a scalar multiple of the original function, that function is called an “eigenfunction” of the operator. The eigenfunction equation corresponding to the eigenvector equation (Eq. 2.6) is

$$\hat{O}\psi = \lambda\psi, \tag{2.7}$$

in which $\psi$ represents an eigenfunction of operator $\hat{O}$ with eigenvalue $\lambda$.

You may be wondering what kind of operator works on a function to produce a scaled version of that function. As an example, consider a “derivative operator” $\hat{D} = \frac{d}{dx}$. To determine whether the function $f(x) = \sin kx$ is an eigenfunction of operator $\hat{D}$, apply $\hat{D}$ to $f(x)$ and see if the result is proportional to $f(x)$:

$$\hat{D}f(x) = \frac{d(\sin kx)}{dx} = k\cos kx \stackrel{?}{=} \lambda(\sin kx). \tag{2.8}$$

So is there any single number (real or complex) that you can multiply by $\sin kx$ to get $k\cos kx$? If you think about the values of $\sin kx$ and $k\cos kx$ at $kx = 0$ and $kx = \pi$ (or look at a graph of these two functions), it should be clear that there’s no value of $\lambda$ that makes Eq. 2.8 true. So $\sin kx$ does not qualify as an eigenfunction of the operator $\hat{D} = \frac{d}{dx}$.
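A symbolic check reaches the same conclusion (a minimal SymPy sketch; the names f and Df are mine):

```python
import sympy as sp

x, k = sp.symbols('x k')
f = sp.sin(k * x)

Df = sp.diff(f, x)           # apply D = d/dx
print(Df)                    # k*cos(k*x)

# If sin(kx) were an eigenfunction of D, the ratio Df/f would be a constant;
# instead it still depends on x, so no eigenvalue lambda exists.
print(sp.simplify(Df / f))   # a function of x, not a constant
```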

Now try the same process for the second-derivative operator $\hat{D}^2 = \frac{d^2}{dx^2}$:

$$\hat{D}^2 f(x) = \frac{d^2(\sin kx)}{dx^2} = \frac{d(k\cos kx)}{dx} = -k^2\sin kx \stackrel{?}{=} \lambda(\sin kx). \tag{2.9}$$

In this case, the eigenvalue equation is true if $\lambda = -k^2$. That means that $\sin kx$ is an eigenfunction of the second-derivative operator $\hat{D}^2 = \frac{d^2}{dx^2}$, and the eigenvalue for this eigenfunction is $\lambda = -k^2$.
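And here is the corresponding symbolic check for the second-derivative operator (again a minimal, self-contained SymPy sketch):

```python
import sympy as sp

x, k = sp.symbols('x k')
f = sp.sin(k * x)

D2f = sp.diff(f, x, 2)        # apply D^2 = d^2/dx^2
print(D2f)                    # -k**2*sin(k*x)
print(sp.simplify(D2f / f))   # -k**2, a constant: the eigenvalue of Eq. 2.9
```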

Main Ideas of This Section

A linear operator may be represented as a matrix that transforms a vector into another vector. If that new vector is a scaled version of the original vector, that vector is an eigenvector of the matrix, and the scaling factor is the eigenvalue for that eigenvector. An operator may also be applied to a function, producing a new function; if that new function is a multiple of the original function, then that function is an eigenfunction of the operator.

Relevance to Quantum Mechanics

In quantum mechanics, every physical observable such as position, momentum, and energy is associated with an operator, and the state of a system may be expressed as a linear combination of the eigenfunctions of that operator.

The eigenvalues for those eigenfunctions represent possible outcomes of measurements of that observable.