label – specifically, it's the name of the vector represented by the ket. So when you move an operator into a ket to make a new ket (such as $|O\psi\rangle$), what you're really doing is changing the vector to which the ket refers, from vector $\vec{\psi}$ to the vector produced by operating $O$ on $\vec{\psi}$. If you give that new vector the name $\overrightarrow{O\psi}$, then the associated ket is $|O\psi\rangle$. It's that new ket that forms an inner product with $\langle\phi|$ in the expression $\langle\phi|O|\psi\rangle$.
Going to the left with the operator in an expression such as $\langle\phi|O|\psi\rangle$ can be done in two ways, one of which involves moving the operator $O$ inside the bra $\langle\phi|$. But you can't move an operator inside a bra without changing that operator. That change is called taking the "adjoint" of the operator,⁵ written as $O^\dagger$. So the process of moving operator $O$ from outside to inside a bra looks like this:

$$\langle\psi|O = \langle O^\dagger\psi|. \tag{2.21}$$

When you consider the expression $\langle O^\dagger\psi|$, remember that the label inside a bra (such as $O^\dagger\psi$) refers to a vector – in this case, the vector that is formed by allowing operator $O^\dagger$ to operate on vector $\vec{\psi}$. So the bra $\langle O^\dagger\psi|$ is the dual of ket $|O^\dagger\psi\rangle$.
Finding the adjoint of an operator in matrix form is straightforward. Just take the complex conjugate of each element of the matrix, and then form the transpose of the matrix – that is, interchange the rows and columns of the matrix. So the first row becomes the first column, the second row becomes the second column, and so forth. If operator $O$ has matrix representation
$$O = \begin{pmatrix} O_{11} & O_{12} & O_{13} \\ O_{21} & O_{22} & O_{23} \\ O_{31} & O_{32} & O_{33} \end{pmatrix}, \tag{2.22}$$
then its adjoint $O^\dagger$ is

$$O^\dagger = \begin{pmatrix} O^*_{11} & O^*_{21} & O^*_{31} \\ O^*_{12} & O^*_{22} & O^*_{32} \\ O^*_{13} & O^*_{23} & O^*_{33} \end{pmatrix}. \tag{2.23}$$
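As a quick numerical check, here is a minimal NumPy sketch of the conjugate-transpose recipe (the matrix and vector entries are arbitrary complex values chosen for illustration):

```python
import numpy as np

# An arbitrary 3x3 complex matrix standing in for operator O
O = np.array([[1 + 2j, 3 - 1j, 0 + 1j],
              [2 + 0j, 4 + 4j, 1 - 3j],
              [5 - 2j, 0 + 0j, 2 + 1j]])

# The adjoint O-dagger: conjugate every element, then transpose
O_dag = O.conj().T

# Element (i, j) of the adjoint is the conjugate of element (j, i) of O
assert O_dag[0, 1] == np.conj(O[1, 0])

# The same conjugate-transpose applied to a column vector (a ket)
# produces the corresponding row vector (the bra)
ket = np.array([[1 + 1j], [2 - 1j], [0 + 3j]])
bra = ket.conj().T
assert bra.shape == (1, 3)
assert bra[0, 1] == np.conj(ket[1, 0])
```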
If you think about applying this conjugate-transpose process to a column vector, you’ll see that the Hermitian adjoint of a ket is the associated bra:
$$|A\rangle = \begin{pmatrix} A_1 \\ A_2 \\ A_3 \end{pmatrix} \qquad |A\rangle^\dagger = \begin{pmatrix} A^*_1 & A^*_2 & A^*_3 \end{pmatrix} = \langle A|.$$
⁵ Also called the "transpose conjugate" or "Hermitian conjugate" of the operator.
It's useful to know how $O$ and its adjoint $O^\dagger$ differ in form, but you should also understand how they differ in function. Here's the answer: if $O$ transforms ket $|\psi\rangle$ into ket $|\psi'\rangle$, then $O^\dagger$ transforms bra $\langle\psi|$ into bra $\langle\psi'|$. In equations this is

$$O|\psi\rangle = |\psi'\rangle \qquad \langle\psi|O^\dagger = \langle\psi'|, \tag{2.24}$$

in which bra $\langle\psi|$ is the dual of ket $|\psi\rangle$ and bra $\langle\psi'|$ is the dual of ket $|\psi'\rangle$. Be sure to note that in Eqs. 2.24 the operators $O$ and $O^\dagger$ are outside $|\psi\rangle$ and $\langle\psi|$.
You should also be aware that it's perfectly acceptable to evaluate an expression such as $\langle\psi|O$ without moving the operator inside the bra. Since a bra can be represented by a row vector, a bra standing on the left of an operator can be written as a row vector standing on the left of a matrix. That means you can multiply them together as long as the number of elements in the row vector matches the number of rows in the matrix. So if $|\psi\rangle$, $\langle\psi|$, and $O$ are given by
$$|\psi\rangle = \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} \qquad \langle\psi| = \begin{pmatrix} \psi^*_1 & \psi^*_2 \end{pmatrix} \qquad O = \begin{pmatrix} O_{11} & O_{12} \\ O_{21} & O_{22} \end{pmatrix},$$

then

$$\langle\psi|O = \begin{pmatrix} \psi^*_1 & \psi^*_2 \end{pmatrix}\begin{pmatrix} O_{11} & O_{12} \\ O_{21} & O_{22} \end{pmatrix} = \begin{pmatrix} \psi^*_1 O_{11} + \psi^*_2 O_{21} & \psi^*_1 O_{12} + \psi^*_2 O_{22} \end{pmatrix}, \tag{2.25}$$
which is the same result as $\langle O^\dagger\psi|$:

$$O^\dagger = \begin{pmatrix} O^*_{11} & O^*_{21} \\ O^*_{12} & O^*_{22} \end{pmatrix}$$

$$\begin{aligned} \langle O^\dagger\psi| = |O^\dagger\psi\rangle^\dagger &= \left( O^\dagger|\psi\rangle \right)^\dagger = \left[ \begin{pmatrix} O^*_{11} & O^*_{21} \\ O^*_{12} & O^*_{22} \end{pmatrix}\begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} \right]^\dagger \\ &= \begin{pmatrix} \psi_1 O^*_{11} + \psi_2 O^*_{21} \\ \psi_1 O^*_{12} + \psi_2 O^*_{22} \end{pmatrix}^\dagger = \begin{pmatrix} \psi^*_1 O_{11} + \psi^*_2 O_{21} & \psi^*_1 O_{12} + \psi^*_2 O_{22} \end{pmatrix}, \end{aligned} \tag{2.26}$$

in agreement with Eq. 2.25.
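The agreement between Eqs. 2.25 and 2.26 is easy to verify numerically. This NumPy sketch (with an arbitrary operator and ket chosen for illustration) computes $\langle\psi|O$ both ways:

```python
import numpy as np

# Arbitrary 2x2 operator and two-component ket, for illustration only
O = np.array([[1 + 1j, 2 - 1j],
              [0 + 3j, 4 + 0j]])
ket_psi = np.array([[1 - 2j], [3 + 1j]])
bra_psi = ket_psi.conj().T            # the dual (row) vector <psi|

# Route 1: multiply the bra (row vector) by the matrix, as in Eq. 2.25
route1 = bra_psi @ O

# Route 2: move the operator inside the bra, taking its adjoint first,
# then form the dual of the resulting ket, as in Eq. 2.26
route2 = (O.conj().T @ ket_psi).conj().T

assert np.allclose(route1, route2)
```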
So when you’re confronted with a bra standing to the left of an operator (outside the bra), you can either multiply the row vector representing the bra by the matrix representing the operator, or you can move the operator into the bra, taking the operator’s Hermitian conjugate in the process.
With an understanding of how to deal with operators outside and inside bras and kets, you should be able to see the equivalence of the following expressions:
$$\langle\phi|O|\psi\rangle = \langle\phi|O\psi\rangle = \langle O^\dagger\phi|\psi\rangle. \tag{2.27}$$

The reason for making the effort to get to Eq. 2.27 is to help you understand an extremely important characteristic of certain operators. Those operators are called "Hermitian," and their defining characteristic is this: Hermitian operators equal their own adjoints. So if $O$ is a Hermitian operator, then
$$O = O^\dagger \qquad (\text{Hermitian } O). \tag{2.28}$$
It's easy to determine whether an operator is Hermitian by looking at the operator's matrix representation. Comparing Eqs. 2.22 and 2.23, you can see that for a matrix to equal its own adjoint, the diagonal elements must all be real (since only a purely real number equals its complex conjugate), and every off-diagonal element must equal the complex conjugate of the corresponding element on the other side of the diagonal (so $O_{21}$ must equal $O^*_{12}$, $O_{31}$ must equal $O^*_{13}$, $O_{23}$ must equal $O^*_{32}$, and so forth).
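That matrix test is one line in NumPy. Here is a small sketch (the helper name and the example matrices are chosen for illustration):

```python
import numpy as np

def is_hermitian(M, tol=1e-12):
    """Check whether a matrix equals its own conjugate transpose."""
    return np.allclose(M, M.conj().T, atol=tol)

# Real diagonal, off-diagonal pairs that are complex conjugates: Hermitian
H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 5.0]])
assert is_hermitian(H)

# A complex number on the diagonal spoils Hermiticity
N = np.array([[2 + 1j, 1 - 1j],
              [1 + 1j, 5.0]])
assert not is_hermitian(N)
```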
Why are Hermitian operators of special interest? To see that, look again at the second equality in Eq. 2.27. If operator $O$ equals its adjoint $O^\dagger$, then

$$\langle\phi|O|\psi\rangle = \langle\phi|O\psi\rangle = \langle O^\dagger\phi|\psi\rangle = \langle O\phi|\psi\rangle, \tag{2.29}$$

which means that a Hermitian operator may be applied to either member of an inner product with the same result.
For complex continuous functions such as $f(x)$ and $g(x)$, the equivalent to Eq. 2.29 is

$$\int_{-\infty}^{\infty} f^*(x)\left[ Og(x) \right] dx = \int_{-\infty}^{\infty} \left[ O^\dagger f(x) \right]^* g(x)\,dx = \int_{-\infty}^{\infty} \left[ O f(x) \right]^* g(x)\,dx. \tag{2.30}$$
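A discretized sketch of Eq. 2.30 may help: sample the functions on a grid, replace each integral by a sum, and represent the operator by a Hermitian matrix acting on the samples (here a real, symmetric second-difference matrix standing in for $d^2/dx^2$; all of these choices are assumptions of the illustration, not part of the text):

```python
import numpy as np

n = 50
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# Sampled complex functions f(x) and g(x), chosen arbitrarily
f = np.exp(2j * np.pi * x)
g = x * (1 - x) + 1j * x**2

# A Hermitian operator on the samples: the symmetric second-difference
# matrix, a stand-in for the second-derivative operator
O = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / dx**2
assert np.allclose(O, O.conj().T)     # O equals its own adjoint

# Discrete versions of the two outer integrals in Eq. 2.30
lhs = np.sum(f.conj() * (O @ g)) * dx     # integral of f* (O g)
rhs = np.sum((O @ f).conj() * g) * dx     # integral of (O f)* g
assert np.allclose(lhs, rhs)
```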
The ability to move a Hermitian operator to either side of an inner product may seem like a minor computational benefit, but it has major ramifications.
To appreciate those ramifications, consider what happens when a Hermitian operator is sandwiched between a ket such as $|\psi\rangle$ and its corresponding bra $\langle\psi|$. That makes Eq. 2.29

$$\langle\psi|O|\psi\rangle = \langle\psi|O\psi\rangle = \langle O\psi|\psi\rangle. \tag{2.31}$$
Now consider what this equation means if $|\psi\rangle$ is an eigenket of $O$ with eigenvalue $\lambda$. In that case, $O|\psi\rangle = |\lambda\psi\rangle$ and $\langle O\psi| = \langle\lambda\psi|$, so

$$\langle\psi|\lambda\psi\rangle = \langle\lambda\psi|\psi\rangle. \tag{2.32}$$
To learn something from this equation, you need to understand the rules for pulling a constant from inside to outside (or outside to inside) a ket or bra. For kets, you can move a constant, even if that constant is complex, from inside to outside (or outside to inside) a ket without changing the constant. So
$$c|A\rangle = |cA\rangle. \tag{2.33}$$
You can see why this is true by writing the ket as a column vector:
$$c|A\rangle = c\begin{pmatrix} A_x \\ A_y \\ A_z \end{pmatrix} = \begin{pmatrix} cA_x \\ cA_y \\ cA_z \end{pmatrix} = |cA\rangle.$$
But if you want to move a constant from inside to outside (or outside to inside) a bra, it’s necessary to take the complex conjugate of that constant:
$$c\langle A| = \langle c^*A|, \tag{2.34}$$
because in this case

$$c\langle A| = c\begin{pmatrix} A^*_x & A^*_y & A^*_z \end{pmatrix} = \begin{pmatrix} cA^*_x & cA^*_y & cA^*_z \end{pmatrix} = \begin{pmatrix} (c^*A_x)^* & (c^*A_y)^* & (c^*A_z)^* \end{pmatrix} = \langle c^*A|.$$

If you don't see why that last equality is true, remember that for the ket

$$|c^*A\rangle = \begin{pmatrix} c^*A_x \\ c^*A_y \\ c^*A_z \end{pmatrix},$$

the corresponding bra is $\langle c^*A| = \begin{pmatrix} (c^*A_x)^* & (c^*A_y)^* & (c^*A_z)^* \end{pmatrix}$. This matches the expression for $c\langle A|$, so $c\langle A| = \langle c^*A|$.
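The two rules from Eqs. 2.33 and 2.34 can be checked numerically. In this NumPy sketch (with an arbitrary complex constant and ket chosen for illustration), a bra is formed as the conjugate transpose of its ket:

```python
import numpy as np

c = 2 - 3j
ket_A = np.array([[1 + 1j], [0 - 2j], [3 + 0j]])

# Eq. 2.33: the ket |cA> is the column vector with components c*A_i,
# which is exactly what multiplying the ket by c produces
ket_cA = c * ket_A

# Eq. 2.34: moving c inside a bra requires conjugating it.
# <c*A| is the dual (conjugate transpose) of the ket |c*A>
bra_A = ket_A.conj().T
bra_cstarA = (np.conj(c) * ket_A).conj().T
assert np.allclose(c * bra_A, bra_cstarA)       # c<A| = <c*A|

# Moving c inside without conjugation gives the wrong bra:
# the dual of |cA> is c*<A|, not c<A|
assert not np.allclose(c * bra_A, ket_cA.conj().T)
```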
The result of all this is that a constant can be moved in or out of a ket without change, but moving a constant in or out of a bra requires you to take the complex conjugate of the constant. So pulling the constant $\lambda$ out of the ket $|\lambda\psi\rangle$ on the left side of Eq. 2.32 and out of the bra $\langle\lambda\psi|$ on the right side of that equation gives

$$\langle\psi|\lambda|\psi\rangle = \lambda^*\langle\psi|\psi\rangle. \tag{2.35}$$

At the start of this section, you saw that a constant sandwiched between a bra and a ket (but not inside either one) can be moved either to the left of the bra or to the right of the ket without change. Pulling the constant $\lambda$ from between bra $\langle\psi|$ and ket $|\psi\rangle$ on the left side of Eq. 2.35 gives

$$\lambda\langle\psi|\psi\rangle = \lambda^*\langle\psi|\psi\rangle. \tag{2.36}$$

This can be true only if $\lambda = \lambda^*$, which means that the eigenvalue $\lambda$ must be real. So Hermitian operators must have real eigenvalues.
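That conclusion is easy to observe numerically. In this sketch (with an arbitrary Hermitian matrix chosen for illustration), even a general-purpose eigensolver returns eigenvalues whose imaginary parts vanish:

```python
import numpy as np

# A Hermitian matrix: real diagonal, conjugate off-diagonal pairs
H = np.array([[2.0, 1 - 2j, 0 + 1j],
              [1 + 2j, 3.0, 4 - 1j],
              [0 - 1j, 4 + 1j, 1.0]])
assert np.allclose(H, H.conj().T)

# np.linalg.eigvals returns complex eigenvalues in general, but for a
# Hermitian matrix their imaginary parts are zero (to machine precision)
eigenvalues = np.linalg.eigvals(H)
assert np.allclose(eigenvalues.imag, 0.0)
```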
Another useful result can be obtained by considering an expression in which a Hermitian operator is sandwiched between two different functions, as in Eq. 2.29:

$$\langle\phi|O|\psi\rangle = \langle\phi|O\psi\rangle = \langle O^\dagger\phi|\psi\rangle = \langle O\phi|\psi\rangle. \tag{2.29}$$

Consider the case in which $\phi$ is an eigenfunction of Hermitian operator $O$ with eigenvalue $\lambda_\phi$ and $\psi$ is also an eigenfunction of $O$ with (different) eigenvalue $\lambda_\psi$. Eq. 2.29 is then

$$\langle\phi|O|\psi\rangle = \langle\phi|\lambda_\psi\psi\rangle = \langle\lambda_\phi\phi|\psi\rangle,$$

and pulling out the constants $\lambda_\psi$ and $\lambda_\phi$ gives

$$\lambda_\psi\langle\phi|\psi\rangle = \lambda^*_\phi\langle\phi|\psi\rangle.$$

But the eigenvalues of Hermitian operators must be real, so $\lambda^*_\phi = \lambda_\phi$, and

$$\lambda_\psi\langle\phi|\psi\rangle = \lambda_\phi\langle\phi|\psi\rangle$$

$$(\lambda_\psi - \lambda_\phi)\langle\phi|\psi\rangle = 0.$$
This means that either $(\lambda_\psi - \lambda_\phi)$ or $\langle\phi|\psi\rangle$ (or both) must be zero. But we specified that the eigenfunctions $\phi$ and $\psi$ have different eigenvalues, so $(\lambda_\psi - \lambda_\phi)$ cannot be zero, and the only possibility is that $\langle\phi|\psi\rangle = 0$. Since the inner product between two functions can be zero only when the functions are orthogonal, this means that the eigenfunctions of a Hermitian operator with different eigenvalues must be orthogonal.
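Here is a numerical sketch of that orthogonality (the 2×2 Hermitian matrix is chosen for illustration; `np.linalg.eigh` is NumPy's eigensolver for Hermitian matrices):

```python
import numpy as np

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(H, H.conj().T)     # Hermitian

vals, vecs = np.linalg.eigh(H)        # eigenvalues 1 and 4 for this matrix
assert not np.isclose(vals[0], vals[1])   # distinct eigenvalues

# The inner product <v0|v1> = v0-dagger times v1 vanishes
inner = vecs[:, 0].conj() @ vecs[:, 1]
assert abs(inner) < 1e-10
```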
And what if two or more eigenfunctions share an eigenvalue? That's called the "degenerate" case, and the eigenfunctions with the same eigenvalue will not, in general, be orthogonal. But in such cases it is always possible to use a weighted combination of the non-orthogonal eigenfunctions to produce an orthogonal set of eigenfunctions for the degenerate eigenvalue. So in the nondegenerate case (in which no eigenfunctions share an eigenvalue), only one set of eigenfunctions exists, and those eigenfunctions are guaranteed to be orthogonal. But in the degenerate case, there are an infinite number of non-orthogonal eigenfunctions, from which you can always construct an orthogonal set.⁶
⁶ The Gram–Schmidt procedure for constructing a set of orthogonal vectors is explained on the book's website.
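A minimal sketch of the Gram–Schmidt procedure mentioned in the footnote (the function name and example vectors are chosen for illustration; the identity matrix serves as a maximally degenerate operator, since every vector is its eigenvector with eigenvalue 1):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent complex vectors."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for b in basis:
            w = w - (b.conj() @ w) * b      # subtract the projection onto b
        basis.append(w / np.linalg.norm(w))
    return basis

# Two non-orthogonal eigenvectors of the (degenerate) identity operator
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, 0.0, 1.0])
assert abs(v1 @ v2) > 0                     # not orthogonal to start

e1, e2 = gram_schmidt([v1, v2])
assert abs(e1.conj() @ e2) < 1e-12          # now orthogonal
assert np.isclose(np.linalg.norm(e1), 1.0)  # and normalized
```

Any combination of degenerate eigenvectors is still an eigenvector with the same eigenvalue, which is why this construction always works within a degenerate subspace.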
There's one more useful characteristic of the eigenfunctions of a Hermitian operator: they form a complete set. That means that any function in the abstract vector space containing the eigenfunctions of a Hermitian operator may be made up of a linear combination of those eigenfunctions.
Main Ideas of This Section
Hermitian operators may be applied to either member of an inner product and the result will be the same. Hermitian operators have real eigenvalues, and the nondegenerate eigenfunctions of a Hermitian operator are orthogonal and form a complete set.
Relevance to Quantum Mechanics
The discussion of the solutions to the Schrödinger equation in Chapter 4 will show that every quantum observable (such as position, momentum, and energy) is associated with an operator, and the possible results of any measurement are given by the eigenvalues of that operator. Since the results of measurements must be real, operators associated with observables must be Hermitian. The eigenfunctions of Hermitian operators are (or can be combined to be) orthogonal, and the orthogonality of those eigenfunctions has a profound impact on our ability to construct solutions to the Schrödinger equation and to use those solutions to determine the probability of various measurement outcomes.