3.4.5 Matrix multiplication and inversion
The previous two routines can be used to multiply a matrix $A \in \mathbb{R}^{N\times N}$ with a vector $x \in \mathbb{R}^{N}$ in amplitude encoding, and with a procedure to invert eigenvalues this can be extended to matrix inversion. This is a rather technical quantum routine which is the centrepiece of one approach to quantum-enhanced machine learning. To understand the basic idea, it is illuminating to write $Ax$ in terms of $A$'s eigenvalues $\lambda_r$ and eigenvectors $\chi_r$, $r = 1,\dots,R$. The vector $x$ can be written as a linear combination of eigenvectors of $A$,
\[
x = \sum_{r=1}^{R} (\chi_r^T x)\, \chi_r .
\]
Applying $A$ leads to
\[
Ax = \sum_{r=1}^{R} \lambda_r (\chi_r^T x)\, \chi_r , \qquad (3.12)
\]
and applying $A^{-1}$ yields
\[
A^{-1}x = \sum_{r=1}^{R} \lambda_r^{-1} (\chi_r^T x)\, \chi_r . \qquad (3.13)
\]
Under certain conditions, quantum algorithms can find eigenvalues and eigenstates of unitary operators exponentially fast [99]. This promises powerful tools, but one will see that these can only be used for specific problems. I will introduce the matrix multiplication algorithm and then mention how to adapt it slightly in order to do matrix inversion as proposed in [90] (also called the quantum linear systems of equations routine).
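The eigenbasis picture can be checked directly with a few lines of numpy; the matrix and vector below are the ones used in Example 3.4.1 further down, but any symmetric matrix would do.

# Rescaling the eigenbasis coefficients of x by lambda_r or 1/lambda_r reproduces
# Ax and A^{-1}x; A and x are the values from Example 3.4.1 below.
import numpy as np

A = np.array([[2/3, 1/3], [1/3, 2/3]])
x = np.array([0.275, 0.966])

lams, chis = np.linalg.eigh(A)                   # eigenvalues and eigenvectors (columns)
coeffs = chis.T @ x                              # (chi_r^T x) for each r

Ax = sum(lams[r] * coeffs[r] * chis[:, r] for r in range(len(lams)))
Ainv_x = sum(coeffs[r] / lams[r] * chis[:, r] for r in range(len(lams)))

assert np.allclose(Ax, A @ x)
assert np.allclose(Ainv_x, np.linalg.solve(A, x))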
Consider a quantum state $|\psi_x\rangle$ that represents a normalised classical vector in amplitude encoding. The goal is to map this quantum state to a normalised representation of $Ax$ with
\[
|\psi_x\rangle = \sum_{r=1}^{R} \langle \psi_{\chi_r} | \psi_x \rangle\, |\psi_{\chi_r}\rangle \;\rightarrow\; |\psi_{Ax}\rangle = \sum_{r=1}^{R} \lambda_r \langle \psi_{\chi_r} | \psi_x \rangle\, |\psi_{\chi_r}\rangle .
\]
This can be done in three steps. First, create a unitary operator $U = e^{-iAt}$ and apply it to $|\psi_x\rangle$. Second, apply the phase estimation procedure to write the eigenvalues of $A$ into a register in basis encoding. Third, use a postselective amplitude update to write the eigenvalues into the amplitudes.
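The target state can also be computed classically for illustration: $|\psi_{Ax}\rangle$ is simply $Ax$ renormalised, and the following sketch confirms that the eigenbasis expansion with coefficients $\langle \psi_{\chi_r} | \psi_x \rangle$ yields the same amplitudes (same example matrix as above).

# |psi_Ax> is Ax renormalised; the eigenbasis expansion with coefficients
# <psi_chi_r|psi_x> gives the same amplitudes.
import numpy as np

A = np.array([[2/3, 1/3], [1/3, 2/3]])
psi_x = np.array([0.275, 0.966])
psi_x = psi_x / np.linalg.norm(psi_x)            # amplitude-encoded input is normalised

lams, chis = np.linalg.eigh(A)
psi_Ax = sum(lams[r] * (chis[:, r] @ psi_x) * chis[:, r] for r in range(2))
psi_Ax = psi_Ax / np.linalg.norm(psi_Ax)

assert np.allclose(psi_Ax, A @ psi_x / np.linalg.norm(A @ psi_x))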
1. Simulating $A$
In the first step, we need a unitary operator whose eigenvalue equations are given by $U|\psi_{\chi_r}\rangle = e^{-i\lambda_r t}|\psi_{\chi_r}\rangle$. If one can evolve a system by $e^{-iHt}$ with the Hamiltonian $H = A$, this is in fact the case. To resemble a Hamiltonian, $A$ has to be Hermitian, but the trick from Equation (3.8) circumvents this requirement by doubling the Hilbert space with one additional qubit. Techniques to implement general $H$ are called Hamiltonian simulation and are discussed in Section 4.3.
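Assuming that Equation (3.8) refers to the standard construction which places $A$ and $A^\dagger$ in the off-diagonal blocks of a matrix of twice the dimension, a minimal numpy sketch of this embedding reads:

# A is an arbitrary non-Hermitian example; placing it in the off-diagonal blocks
# yields a Hermitian matrix of twice the dimension, which can act as a Hamiltonian.
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])                   # not Hermitian
A_tilde = np.block([[np.zeros_like(A), A],
                    [A.conj().T, np.zeros_like(A)]])

assert np.allclose(A_tilde, A_tilde.conj().T)            # Hermitian by construction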
The first step prepares a quantum state of the form
\[
\frac{1}{K}\sum_{k=1}^{K} |k\rangle\, e^{-iA\Delta t} |\psi_x\rangle = \frac{1}{K}\sum_{k=1}^{K} |k\rangle \sum_{r=1}^{R} e^{-i\lambda_r \Delta t}\, \langle \psi_{\chi_r} | \psi_x \rangle\, |\psi_{\chi_r}\rangle ,
\]
where on the right side, $|\psi_x\rangle$ is simply expressed in $A$'s basis as defined above. This is a slight simplification of the original proposal, in which the index register $|k\rangle$ is not in a uniform superposition, in order to exploit some further advantages, but the principle remains the same.
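That $e^{-iA\Delta t}$ indeed imprints the phases $e^{-i\lambda_r \Delta t}$ on the eigenbranches can be verified numerically; the time step below is an arbitrary choice.

# U = e^{-i A dt} multiplies each eigenvector of A by the phase e^{-i lambda_r dt};
# dt is an arbitrary choice here.
import numpy as np
from scipy.linalg import expm

A = np.array([[2/3, 1/3], [1/3, 2/3]])
dt = 0.7
U = expm(-1j * A * dt)

lams, chis = np.linalg.eigh(A)
for lam, chi in zip(lams, chis.T):
    assert np.allclose(U @ chi, np.exp(-1j * lam * dt) * chi)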
2. Extracting the eigenvalues
In the second step, the quantum phase estimation routine is applied to the index register $|k\rangle$ to ‘reduce’ its superposition to basis states encoding the eigenvalues,
\[
\frac{1}{K}\sum_{r=1}^{R} \sum_{k=1}^{K} \alpha_{k|r}\, \langle \psi_{\chi_r} | \psi_x \rangle\, |k\rangle |\psi_{\chi_r}\rangle .
\]
As explained for the quantum phase estimation routine, the coefficients lead to a large probability $|\alpha_{k|r}|^2$ for computational basis states $|k\rangle$ that approximate the eigenvalues $\lambda_r$ well. If enough qubits $n_1$ are given in the $|k\rangle$ register, one can assume that the state is approximately
\[
\sum_{r=1}^{R} \langle \psi_{\chi_r} | \psi_x \rangle\, |\lambda_r\rangle |\psi_{\chi_r}\rangle ,
\]
where $|\lambda_r\rangle$ encodes an $n_1$-qubit approximation of $\lambda_r$. The time needed to implement this step is in $\mathcal{O}(1)$ [90].
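In the standard phase estimation setting, the branch $|k\rangle$ of the index register accumulates a phase $e^{-i\lambda_r k \Delta t}$ in the first step; assuming this and choosing $\Delta t = 2\pi$, the inverse Fourier transform concentrates the register on integers $k \approx \lambda_r K$. The sketch below illustrates this for $\lambda = 1/3$ (the labelling convention is an assumption, not fixed by the text above).

# Idealised phase-estimation step for a single eigenbranch: the index register holds
# the phases e^{-2*pi*i * lam * k}; an inverse Fourier transform concentrates the
# amplitude on integers k with k/K close to lam.
import numpy as np

n1 = 10                      # qubits in the index register
K = 2**n1
lam = 1/3                    # eigenvalue to be estimated

k = np.arange(K)
register = np.exp(-2j * np.pi * lam * k) / np.sqrt(K)   # uniform superposition with phases
alpha = np.fft.ifft(register) * np.sqrt(K)              # inverse QFT on the index register

peak = int(np.argmax(np.abs(alpha)))
print(peak, format(peak, f"0{n1}b"), peak / K)          # 341, '0101010101', ~0.333
print(np.abs(alpha[peak])**2)                           # about 0.68 of the probability mass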
3. Adjusting the amplitudes
The third step ‘writes’ the eigenvalues into the amplitudes by using the technique of the postselective amplitude update,
\[
\sum_{r=1}^{R} \langle \psi_{\chi_r} | \psi_x \rangle\, |\lambda_r\rangle |\psi_{\chi_r}\rangle \left( \sqrt{1-\lambda_r^2}\, |0\rangle + \lambda_r |1\rangle \right) \;\rightarrow\; \sum_{r=1}^{R} \lambda_r \langle \psi_{\chi_r} | \psi_x \rangle\, |\psi_{\chi_r}\rangle ,
\]
where on the right side, and after the successful conditional measurement, the eigenvalue register and ancilla have been discarded. Note that this is only possible because no terms in the sum interfere as a result (one could say that the superposition is ‘kept intact’ by the $|\psi_{\chi_r}\rangle$ states). The final state corresponds to a normalised version of $Ax$ in amplitude encoding.
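For a single input state the effect of this step can be traced with a handful of amplitudes; the overlaps below are illustrative numbers (the overlaps of the normalised $b$ from Example 3.4.1 with the eigenvectors of $A$), and the eigenvalues are assumed to satisfy $|\lambda_r| \le 1$ so that the rotation is well defined.

# After the conditional rotation and a successful postselection on |1>, the remaining
# amplitudes are proportional to lambda_r * beta_r. Orthogonality of the |psi_chi_r>
# branches is what prevents the terms from interfering.
import numpy as np

lams = np.array([1.0, 1/3])              # eigenvalues, assumed |lambda_r| <= 1
betas = np.array([0.8736, 0.4866])       # overlaps <psi_chi_r|psi_x> (normalised)

anc0 = betas * np.sqrt(1 - lams**2)      # amplitudes with the ancilla in |0>
anc1 = betas * lams                      # amplitudes with the ancilla in |1>

p_success = np.sum(anc1**2)              # probability of measuring the ancilla in |1>
post = anc1 / np.sqrt(p_success)         # state after a successful postselection

assert np.allclose(post, lams * betas / np.linalg.norm(lams * betas))
print(p_success, post)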
For matrix inversion, the last step is slightly adjusted: when conditionally rotating the ancilla qubit, one writes the inverse of the eigenvalue, $1/\lambda_r$, into the respective amplitude. Since there is no guarantee that these are smaller than 1, a normalisation constant $C$ has to be introduced that is of the order of the smallest eigenvalue, and the conditional measurement yields
\[
\sum_{r=1}^{R} \langle \psi_{\chi_r} | \psi_x \rangle\, |\tilde{\lambda}_r\rangle |\psi_{\chi_r}\rangle \left( \sqrt{1-\frac{C^2}{\lambda_r^2}}\, |0\rangle + \frac{C}{\lambda_r} |1\rangle \right) \;\rightarrow\; C \sum_{r=1}^{R} \frac{1}{\lambda_r} \langle \psi_{\chi_r} | \psi_x \rangle\, |\psi_{\chi_r}\rangle .
\]
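The inversion variant only changes the rotation angles; a minimal sketch follows, choosing $C$ equal to the smallest eigenvalue (one possible choice, not prescribed by the text).

# Same toy numbers, inversion case: the ancilla-1 amplitudes are beta_r * C/lambda_r,
# and postselection leaves amplitudes proportional to beta_r / lambda_r.
import numpy as np

lams = np.array([1.0, 1/3])
betas = np.array([0.8736, 0.4866])
C = np.min(np.abs(lams))                 # of the order of the smallest eigenvalue

anc1 = betas * C / lams                  # amplitudes with the ancilla in |1>
post = anc1 / np.linalg.norm(anc1)       # state after a successful postselection

assert np.allclose(post, (betas / lams) / np.linalg.norm(betas / lams))
print(post)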
As discussed above, the conditional measurement is a non-unitary operation and requires the routine to be repeated on average $\mathcal{O}(1/p_{\mathrm{success}})$ times until it succeeds. For matrix multiplication, the success probability is given by $p_{\mathrm{success}} = \sum_r |\langle \psi_{\chi_r} | \psi_x \rangle|^2 \lambda_r^2$, while for the inversion technique $p_{\mathrm{success}} = \sum_r |\langle \psi_{\chi_r} | \psi_x \rangle|^2 \frac{C^2}{\lambda_r^2} \geq \kappa^{-2}$, where $\kappa$ is the condition number of the matrix, defined as the ratio of the largest and the smallest eigenvalue. The condition number is a measure of how ‘invertible’ or ‘singular’ a matrix is. In the worst case the number of repetitions therefore grows with $\kappa^2$: just like when considering numerical stability in classical inversion techniques, a large condition number means that the quantum algorithm takes a long time to succeed on average.
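For the toy numbers used above, both success probabilities and the condition number can be evaluated directly:

# Success probabilities and condition number for the same toy numbers.
import numpy as np

lams = np.array([1.0, 1/3])
betas = np.array([0.8736, 0.4866])
C = np.min(np.abs(lams))

kappa = np.max(np.abs(lams)) / np.min(np.abs(lams))      # condition number
p_mult = np.sum(betas**2 * lams**2)                      # multiplication routine
p_inv = np.sum(betas**2 * C**2 / lams**2)                # inversion routine

print(kappa, p_mult, p_inv, kappa**-2)                   # p_inv stays above kappa**-2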
Example 3.4.1: Simulation of the quantum matrix inversion routine
Since the matrix inversion routine is rather technical, the following classical simulation shall illuminate how a quantum state gets successively manipulated. The matrix $A$ and vector $b$ considered here are given by
\[
A = \begin{pmatrix} 2/3 & 1/3 \\ 1/3 & 2/3 \end{pmatrix}, \qquad b = \begin{pmatrix} 0.275 \\ 0.966 \end{pmatrix}.
\]
The eigenvalues of $A$ are $\lambda_1 = 1$ and $\lambda_2 = 1/3$, with the corresponding eigenvectors $v_1 = (1/\sqrt{2},\, 1/\sqrt{2})^T$ and $v_2 = (-1/\sqrt{2},\, 1/\sqrt{2})^T$. The number of qubits for the eigenvalue register is chosen as $\tau = 10$, and the binary representation of the eigenvalues then becomes $\lambda_1 = 1111111111$ and $\lambda_2 = 0101010101$. The error of this representation is $< 0.001$.
The code of the simulation can be downloaded from my github repository (https://github.com/mariaschuld/PhD-thesis). The program prints only the basis states with non-zero amplitudes. The qubits are divided into three registers: the first qubit is the ancilla used for the postselective amplitude update, the following ten qubits form the eigenvalue register, and the last qubit initially encodes the vector $b$; in the end it encodes the solution $x$.
State preparation: encode $b$ into the last qubit:

Index   Amplitude   Basis State
-----   ---------   -----------
    0       0.275   |0 0000000000 0>
    1       0.966   |0 0000000000 1>

After writing the eigenvalues into the second register via phase estimation:

Index   Amplitude   Basis State
-----   ---------   -----------
  682      -0.343   |0 0101010101 0>
  683       0.343   |0 0101010101 1>
 2046       0.618   |0 1111111111 0>
 2047       0.618   |0 1111111111 1>
After rotating the ancilla conditioned on the eigenvalue register:

Index   Amplitude   Basis State
-----   ---------   -----------
  682      -0.297   |0 0101010101 0>
  683       0.297   |0 0101010101 1>
 2046       0.609   |0 1111111111 0>
 2047       0.609   |0 1111111111 1>
 2730      -0.172   |1 0101010101 0>
 2731       0.172   |1 0101010101 1>
 4094       0.103   |1 1111111111 0>
 4095       0.103   |1 1111111111 1>
After uncomputing the eigenvalue register:
Index   Amplitude   Basis State
-----   ---------   -----------
    0       0.312   |0 0000000000 0>
    1       0.907   |0 0000000000 1>
 2048      -0.069   |1 0000000000 0>
 2049       0.275   |1 0000000000 1>
After a successful conditional measurement of the ancilla in $|1\rangle$:
Index   Amplitude   Basis State
-----   ---------   -----------
 2048      -0.242   |1 0000000000 0>
 2049       0.970   |1 0000000000 1>
The amplitudes now encode the result of the computation, $A^{-1}b$:
RESULTS:
Result of the quantum algorithm:     x = [-0.412  1.648]
Classical solution (linalg.solve):   x = [-0.412  1.648]
Classical solution (manual SVD):     x = [-0.412  1.648]
The comparison with Python's linalg.solve routine as well as with a manual singular value decomposition confirms this result. The code shows beautifully how the routine starts with a small superposition (here of only two basis states) that gets ‘blown up’ and then reduced again to encode the two-dimensional output.
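The essential numbers of the listing can also be re-created with the following compact sketch. It is an idealised re-implementation, not the repository code: phase estimation is treated as exact (the eigenvalue register collapses directly onto the two 10-bit labels), and the constant $C = 1/6$ is inferred from the printed amplitudes rather than stated in the text.

# Idealised re-creation of the listing. Register order: |ancilla> |10-qubit eigenvalue
# register> |vector qubit>. Phase estimation is assumed to be exact, and C = 1/6 is
# inferred from the printed amplitudes (both are assumptions of this sketch).
import numpy as np

A = np.array([[2/3, 1/3], [1/3, 2/3]])
b = np.array([0.275, 0.966])
b = b / np.linalg.norm(b)               # amplitude encoding works with normalised vectors

lams, chis = np.linalg.eigh(A)          # eigenvalues [1/3, 1], eigenvectors as columns
n_eig = 10                              # qubits in the eigenvalue register
C = 1/6                                 # rescaling constant, inferred from the listing

def eig_label(lam):
    # 10-bit label of the eigenvalue as a binary fraction (lambda = 1 -> 1111111111)
    return min(int(round(lam * 2**n_eig)), 2**n_eig - 1)

def show(state, title):
    # print index, amplitude and basis state |ancilla, eigenvalue register, vector qubit>
    print(title)
    for idx in np.nonzero(np.abs(state) > 1e-6)[0]:
        idx = int(idx)
        a, rest = divmod(idx, 2**(n_eig + 1))
        k, j = divmod(rest, 2)
        print(f"{idx:5d}  {state[idx]: .3f}  |{a} {k:0{n_eig}b} {j}>")

dim = 2**(1 + n_eig + 1)
state = np.zeros(dim)

# Steps 1 and 2, idealised: attach the eigenvalue label to each eigenbranch of b.
for r in range(2):
    overlap = chis[:, r] @ b
    k = eig_label(lams[r])
    for j in range(2):
        state[k * 2 + j] += overlap * chis[j, r]
show(state, "After phase estimation:")

# Step 3: rotate the ancilla conditioned on the eigenvalue register.
rotated = np.zeros(dim)
for r in range(2):
    k = eig_label(lams[r])
    for j in range(2):
        amp = state[k * 2 + j]
        rotated[k * 2 + j] = amp * np.sqrt(1 - (C / lams[r])**2)       # ancilla |0>
        rotated[2**(n_eig + 1) + k * 2 + j] = amp * C / lams[r]        # ancilla |1>
show(rotated, "After the conditional rotation:")

# Postselect the ancilla in |1>; uncomputing the eigenvalue register merges the two
# eigenbranches, so the amplitudes of the vector qubit simply add up.
out = np.zeros(2)
for r in range(2):
    k = eig_label(lams[r])
    for j in range(2):
        out[j] += rotated[2**(n_eig + 1) + k * 2 + j]
out = out / np.linalg.norm(out)
print("Postselected vector qubit:", np.round(out, 3))

x = np.linalg.solve(A, b)
print("Classical solution, normalised:", np.round(x / np.linalg.norm(x), 3))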