
Finding Components Using the Inner Product

Main Idea of This Section

Just as the vectors in three-dimensional physical space must be perpendicular if their scalar product is zero, N-dimensional abstract vectors and continuous functions are defined as orthogonal if their inner product is zero.

Relevance to Quantum Mechanics

As you'll see in Section 2.5, orthogonal basis functions play an important role in determining the possible outcomes of measurements of quantum observables and the probability of each outcome.

Orthogonal functions are extremely useful in physics, for reasons that are similar to the reasons that orthogonal coordinate systems are useful. The final section of this chapter shows you how to use the inner product and orthogonal functions to determine the components of multi-dimensional abstract vectors.

[Figure 1.10: Normalizing the inner product for a basis vector with non-unit length. The figure shows vector $\vec{A}$ at angle $\theta$ to the x-axis, basis vector $\vec{\epsilon}_1$ along the x-axis, and the projection $|\vec{A}|\cos\theta$ of $\vec{A}$ onto the x-axis. Its annotations note that one factor of $|\vec{\epsilon}_1|$ in the denominator cancels the $|\vec{\epsilon}_1|$ from the inner product in the numerator, and that the other factor of $|\vec{\epsilon}_1|$, divided into $|\vec{A}|\cos\theta$, tells you how many times $|\vec{\epsilon}_1|$ fits into the projection of $\vec{A}$ onto the x-axis.]

Basis vectors that may not have unit length are written with vector hats ($\vec{\ }$) rather than unit-vector hats (ˆ). In that case, to find the vector's components using the inner product, it's necessary to divide the result of the inner product by the square of the basis vector's length, as you can see in the denominators of the fractions in Eq. 1.32. This factor wasn't necessary in Eqs. 1.30 or 1.31 because each Cartesian unit vector $\hat{\imath}$, $\hat{\jmath}$, and $\hat{k}$ has a length of one.

If you're wondering why it's necessary to divide by the square rather than the first power of the length of each basis vector, consider the situation shown in Fig. 1.10.

In this figure, basis vector $\vec{\epsilon}_1$ points along the x-axis, and the angle between vector $\vec{A}$ and the positive x-axis is $\theta$. The projection of vector $\vec{A}$ onto the x-axis is $|\vec{A}|\cos\theta$; Eq. 1.32 gives the x-component of $\vec{A}$ as

$$A_x = \frac{\vec{\epsilon}_1 \cdot \vec{A}}{|\vec{\epsilon}_1|^2} = \frac{\langle \epsilon_1 | A \rangle}{\langle \epsilon_1 | \epsilon_1 \rangle}. \tag{1.33}$$

As shown in Fig. 1.10, the two factors of $|\vec{\epsilon}_1|$ in the denominator of Eq. 1.33 are exactly what's needed to give $A_x$ in units of $|\vec{\epsilon}_1|$. That's because one factor of $|\vec{\epsilon}_1|$ cancels the same factor from the inner product in the numerator, and the second factor of $|\vec{\epsilon}_1|$ converts $|\vec{A}|\cos\theta$ into the number of "steps" of $|\vec{\epsilon}_1|$ that fit into the projection of $\vec{A}$ onto the x-axis.

So if, for example, vector $\vec{A}$ is a real spatial vector with length $|\vec{A}|$ of 10 km at an angle of 35° to the x-axis, then the projection of $\vec{A}$ onto the x-axis ($|\vec{A}|\cos\theta$) is about 8.2 km. But if the basis vector $\vec{\epsilon}_1$ has a length of 2 km, dividing 8.2 km by 2 km gives 4.1 "steps" of 2 km, so the x-component of $\vec{A}$ is $A_x = 4.1$ (not 4.1 km, because the units are carried by the basis vectors).

Had you chosen a basis vector with a length of one unit (of the units in which vector $\vec{A}$ is measured, which is kilometers in this example), then the denominator of Eq. 1.33 would have a value of one, and the number of steps along the x-axis would be 8.2.
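To make this arithmetic concrete, here's a minimal numerical sketch of Eq. 1.33 applied to this example (Python with NumPy; the variable names are illustrative choices, not from the text):

```python
import numpy as np

# Basis vector of length 2 (km) along the x-axis, and vector A with
# length 10 (km) at 35 degrees to the x-axis.
eps1 = np.array([2.0, 0.0])
theta = np.radians(35.0)
A = 10.0 * np.array([np.cos(theta), np.sin(theta)])

# Eq. 1.33: divide the inner product by the *square* of the basis
# vector's length.
A_x = np.dot(eps1, A) / np.dot(eps1, eps1)

print(A_x)                   # about 4.1 "steps" of eps1
print(10.0 * np.cos(theta))  # projection |A| cos(theta), about 8.2 km
```

Changing `eps1` to a unit vector `np.array([1.0, 0.0])` makes the denominator one and prints 8.2, matching the discussion above.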

The process of dividing by the square of the norm of a vector or function is called "normalization," and orthogonal vectors or functions that have a length of one unit are "orthonormal." The condition of orthonormality for basis vectors is often written as

$$\hat{\epsilon}_i \cdot \hat{\epsilon}_j = \langle \epsilon_i | \epsilon_j \rangle = \delta_{i,j}, \tag{1.34}$$

in which $\delta_{i,j}$ represents the Kronecker delta, which has a value of one if $i = j$ or zero if $i \neq j$.
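If you'd like to see this condition in action, the following short sketch (an illustration assumed here, not from the text) builds the matrix of inner products $\langle \epsilon_i | \epsilon_j \rangle$ for two bases; it matches the Kronecker delta entry-for-entry only for the orthonormal basis:

```python
import numpy as np

# Cartesian unit vectors (orthonormal) versus a basis in which the
# first vector has length 2 (orthogonal but not orthonormal).
orthonormal = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
orthogonal = [np.array([2.0, 0.0]), np.array([0.0, 1.0])]

for basis in (orthonormal, orthogonal):
    # Matrix of inner products <eps_i | eps_j>; equals the identity
    # matrix (the Kronecker delta) only when the basis is orthonormal.
    gram = np.array([[np.dot(u, v) for v in basis] for u in basis])
    print(gram)
```

The first matrix printed is the identity; the second has a 4 (the squared length of the first basis vector) on the diagonal, which is exactly the factor that normalization divides away.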

The expansion of a vector as the weighted combination of a set of basis vectors and the use of the normalized scalar product to find the vector's components for a specified basis can be extended to the functions of Hilbert space. Expressing these functions as kets, the expansion of function $|\psi\rangle$ using basis functions $|\psi_n\rangle$ is

$$|\psi\rangle = c_1|\psi_1\rangle + c_2|\psi_2\rangle + \cdots + c_N|\psi_N\rangle = \sum_{n=1}^{N} c_n|\psi_n\rangle, \tag{1.35}$$

in which $c_1$ tells you the "amount" of basis function $|\psi_1\rangle$ in function $|\psi\rangle$, $c_2$ tells you the "amount" of basis function $|\psi_2\rangle$ in function $|\psi\rangle$, and so on. As long as the basis functions $|\psi_1\rangle, |\psi_2\rangle, \ldots, |\psi_N\rangle$ are orthogonal, the components $c_1, c_2, \ldots, c_N$ can be found using the normalized inner product:

$$c_1 = \frac{\langle \psi_1 | \psi \rangle}{\langle \psi_1 | \psi_1 \rangle} = \frac{\int_{-\infty}^{\infty} \psi_1^*(x)\,\psi(x)\,dx}{\int_{-\infty}^{\infty} \psi_1^*(x)\,\psi_1(x)\,dx}$$

$$c_2 = \frac{\langle \psi_2 | \psi \rangle}{\langle \psi_2 | \psi_2 \rangle} = \frac{\int_{-\infty}^{\infty} \psi_2^*(x)\,\psi(x)\,dx}{\int_{-\infty}^{\infty} \psi_2^*(x)\,\psi_2(x)\,dx}$$

$$\vdots$$

$$c_N = \frac{\langle \psi_N | \psi \rangle}{\langle \psi_N | \psi_N \rangle} = \frac{\int_{-\infty}^{\infty} \psi_N^*(x)\,\psi(x)\,dx}{\int_{-\infty}^{\infty} \psi_N^*(x)\,\psi_N(x)\,dx}, \tag{1.36}$$

in which each numerator represents the projection of function $|\psi\rangle$ onto one of the basis functions, and each denominator represents the square of the norm of that basis function.
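Eq. 1.36 translates directly into a few lines of code. Here's a minimal sketch, assuming the functions are sampled on a grid and integrated by trapezoidal quadrature; the helper name `component` is a hypothetical choice, and `np.trapezoid` is the NumPy 2.0 name for the older `np.trapz`:

```python
import numpy as np

def component(basis_fn, psi_fn, x):
    """Normalized inner product of Eq. 1.36 by quadrature:
    <basis|psi> / <basis|basis> on the sample grid x."""
    b, p = basis_fn(x), psi_fn(x)
    numerator = np.trapezoid(np.conj(b) * p, x)    # projection of psi onto the basis function
    denominator = np.trapezoid(np.conj(b) * b, x)  # squared norm of the basis function
    return numerator / denominator
```

The `np.conj` call matters only for complex-valued functions, but including it keeps the sketch faithful to the inner product's definition.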

This approach to finding the components of a function (using sinusoidal basis functions) was pioneered by the French mathematician and physicist Jean-Baptiste Joseph Fourier in the early part of the nineteenth century. Fourier theory comprises both Fourier synthesis, in which periodic functions are synthesized as weighted combinations of sinusoidal functions, and Fourier analysis, in which the sinusoidal components of a periodic function are determined using the approach described earlier. In quantum mechanics texts, this process is sometimes called "spectral decomposition," since the weighting coefficients ($c_n$) are called the "spectrum" of a function.

To see how this works, consider a function $|\psi(x)\rangle$ expanded using the basis functions $|\psi_1\rangle = \sin x$, $|\psi_2\rangle = \cos x$, and $|\psi_3\rangle = \sin 2x$ over the interval $x = -\pi$ to $x = \pi$:

$$|\psi(x)\rangle = 5|\psi_1\rangle - 10|\psi_2\rangle + 4|\psi_3\rangle.$$

In this case, you can read the components $c_1 = 5$, $c_2 = -10$, and $c_3 = 4$ directly from this equation for $\psi(x)$. But to understand how Eq. 1.36 gives these values, write

$$c_1 = \frac{\int_{-\infty}^{\infty} \psi_1^*(x)\,\psi(x)\,dx}{\int_{-\infty}^{\infty} \psi_1^*(x)\,\psi_1(x)\,dx} = \frac{\int_{-\pi}^{\pi} [\sin x][5\sin x - 10\cos x + 4\sin 2x]\,dx}{\int_{-\pi}^{\pi} [\sin x]\sin x\,dx}$$

$$c_2 = \frac{\int_{-\infty}^{\infty} \psi_2^*(x)\,\psi(x)\,dx}{\int_{-\infty}^{\infty} \psi_2^*(x)\,\psi_2(x)\,dx} = \frac{\int_{-\pi}^{\pi} [\cos x][5\sin x - 10\cos x + 4\sin 2x]\,dx}{\int_{-\pi}^{\pi} [\cos x]\cos x\,dx}$$

$$c_3 = \frac{\int_{-\infty}^{\infty} \psi_3^*(x)\,\psi(x)\,dx}{\int_{-\infty}^{\infty} \psi_3^*(x)\,\psi_3(x)\,dx} = \frac{\int_{-\pi}^{\pi} [\sin 2x][5\sin x - 10\cos x + 4\sin 2x]\,dx}{\int_{-\pi}^{\pi} [\sin 2x]\sin 2x\,dx}.$$

These integrals can be evaluated with the help of the relations

$$\int_{-\pi}^{\pi} \sin^2 ax\,dx = \left[\frac{x}{2} - \frac{\sin 2ax}{4a}\right]_{-\pi}^{\pi} = \pi$$

$$\int_{-\pi}^{\pi} \cos^2 ax\,dx = \left[\frac{x}{2} + \frac{\sin 2ax}{4a}\right]_{-\pi}^{\pi} = \pi$$

$$\int_{-\pi}^{\pi} \sin x \cos x\,dx = \left[\frac{1}{2}\sin^2 x\right]_{-\pi}^{\pi} = 0$$

$$\int_{-\pi}^{\pi} \sin mx \sin nx\,dx = \left[\frac{\sin (m-n)x}{2(m-n)} - \frac{\sin (m+n)x}{2(m+n)}\right]_{-\pi}^{\pi} = 0,$$

in which $m$ and $n$ are (different) integers. Applying these gives

$$c_1 = \frac{5(\pi) - 10(0) + 4(0)}{\pi} = 5$$

$$c_2 = \frac{5(0) - 10(\pi) + 4(0)}{\pi} = -10$$

$$c_3 = \frac{5(0) - 10(0) + 4(\pi)}{\pi} = 4,$$

as expected. Notice that in this example the basis functions $\sin x$, $\cos x$, and $\sin 2x$ are orthogonal but not orthonormal, since their norms are $\sqrt{\pi}$ rather than one. Some students express surprise that sinusoidal functions are not normalized, since their values run from $-1$ to $+1$. But remember that it's the integral of the square of the function, not the peak value, that determines the function's norm.
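You can also confirm these values numerically. This short sketch (illustrative, not from the text; trapezoidal quadrature on a fine grid over $x = -\pi$ to $\pi$) applies the normalized inner product of Eq. 1.36 to the example function and recovers the expected components:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 100_001)
psi = 5*np.sin(x) - 10*np.cos(x) + 4*np.sin(2*x)

# Normalized inner product with each basis function (Eq. 1.36).
for label, b in (("c1", np.sin(x)), ("c2", np.cos(x)), ("c3", np.sin(2*x))):
    c = np.trapezoid(b * psi, x) / np.trapezoid(b * b, x)
    print(label, round(c, 6))
# prints c1 5.0, c2 -10.0, c3 4.0 (to within quadrature error)
```

Note that the denominators $\int_{-\pi}^{\pi} \sin^2 x\,dx = \pi$ and so on are computed rather than assumed, so the same loop works for basis functions that are orthogonal but not orthonormal.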

Once you feel confident in your understanding of functions as members of an abstract vector space, the expansion of vectors and functions using components in a specified basis, Dirac’s bra/ket notation, and the role of the inner product in determining the components of vectors and functions, you should be ready to tackle the subjects of operators and eigenfunctions. You can read about those topics in the next chapter, but if you’d like to make sure that you’re able to put the concepts and mathematical techniques covered in this chapter into practice before proceeding, you may find the problems in the next section helpful (and if you get stuck or just want to check your solutions, remember that full interactive solutions to every problem are available on the book’s website).