

Conclusion

12.1 Summary of findings

This thesis investigated how the problem of supervised pattern recognition, that is, finding the label for a new feature vector given a dataset of labelled feature vectors, can be solved using a universal quantum computer. To conclude, I want to highlight the central points made in each chapter, summarise the general findings and suggest avenues for further research.


Chapter 4. Efficient state preparation routines to encode large datasets in quantum states are crucial for quantum machine learning algorithms.

Writing large datasets into quantum states is a non-trivial task, but necessarily the first step of a quantum algorithm that performs supervised learning with classical data. This is all the more true since the destruction of the final quantum state by measurement forces us to repeat the entire routine for every prediction. Efficient data encoding is particularly difficult for amplitude-encoded algorithms, especially if the number of operations is supposed to be sublinear in the dimensions of the dataset in order to maintain the “super-efficient” runtime of a training algorithm. There are a few proposals that fulfil this requirement, but all place restrictions on the dataset they are able to process.
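The arithmetic behind amplitude encoding can be sketched classically: a feature vector is normalised, its entries become the amplitudes of a register of ⌈log₂ N⌉ qubits. The helper below is purely illustrative (the function name is my own) and says nothing about the cost of the quantum circuit that prepares the state, which is precisely the difficulty discussed above.

```python
import math

def amplitude_encode(x):
    """Classically compute the amplitudes that amplitude encoding would
    write into a quantum state: a_i = x_i / ||x||.
    Illustrative helper, not a routine from the thesis."""
    norm = math.sqrt(sum(v * v for v in x))
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    amplitudes = [v / norm for v in x]
    # 2**n amplitudes fit into n qubits
    n_qubits = math.ceil(math.log2(len(x)))
    return amplitudes, n_qubits

amps, n = amplitude_encode([3.0, 0.0, 4.0, 0.0])
# amps = [0.6, 0.0, 0.8, 0.0], stored in n = 2 qubits
```

The normalisation step already hints at one of the restrictions mentioned above: only the direction of the feature vector survives the encoding, not its length.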

Chapter 5. While we know that quantum computing can speed up the runtime of classical machine learning algorithms, evidence suggests that it cannot reduce the number of data points needed for learning.

One can distinguish different ways to enhance machine learning with quantum computers. Time complexity refers to more efficient algorithms, while sample complexity compares the number of training vectors needed to learn a model function. Current results suggest that the sample complexity is similar in the quantum and classical settings, while we can hope for speedups in terms of the runtime, which the rest of the thesis focussed on. Model complexity, the flexibility or representational capacity of a ‘quantum’ model, is the least well understood, as is the robustness of quantum learning against noise.

Chapter 6. The literature on quantum-enhanced machine learning can be distinguished into four different approaches to solving optimisation problems for classical models on quantum devices.

The majority of the literature on quantum machine learning algorithms for supervised pattern recognition considers a classical method, consisting of a distinct model family and a way of using the dataset to choose the best model from this family, and translates it into the quantum setting. This means outsourcing optimisation problems such as search, matrix inversion, combinatorial optimisation or sampling to a quantum computer, in the hope of a speedup in solving the problem. The result of the quantum computation is intended to reproduce the classical result, but with a faster computation as problem sizes grow.

Part III contributed five quantum machine learning algorithms to tackle the problem of supervised pattern recognition:

Algorithm 1: Quantum least-squares linear regression

The quantum linear regression algorithm implements a singular value decomposition to solve the linear system of equations derived from a convex quadratic optimisation problem. To do this, it combines the tools of quantum matrix exponentiation and the quantum linear systems algorithm.

For realistic applications, regularisation terms amend the linear system of equations, and it has been shown that this can be seamlessly integrated into the quantum algorithm. Also, the data matrix that defines the original problem can be replaced by other kernel matrices to effectively implement a feature map of the data into a higher-dimensional space.
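To make the target of the quantum routine concrete, the classical closed-form solution it emulates can be reduced to a single-feature ridge regression, where the inverse of the (regularised) data matrix collapses to a scalar. The function and toy data below are illustrative assumptions, not part of the thesis:

```python
def ridge_1d(xs, ys, lam=0.0):
    """Closed-form least-squares solution w = (X^T X + lam)^(-1) X^T y
    for a single-feature model y ~ w * x. The quantum routine targets
    the same solution via the singular values of the data matrix.
    Illustrative sketch, not the quantum algorithm itself."""
    sxx = sum(x * x for x in xs)        # X^T X
    sxy = sum(x * y for x, y in zip(xs, ys))  # X^T y
    return sxy / (sxx + lam)

# Noiseless data y = 2x is recovered exactly without regularisation,
w = ridge_1d([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])          # w = 2.0
# while a regularisation term shrinks the solution towards zero.
w_reg = ridge_1d([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], lam=14.0)  # w_reg = 1.0
```

The `lam` term is the one-dimensional analogue of the regularisation amendment mentioned above: it shifts the (here scalar) singular value before inversion.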

Algorithm 2: Quantum gradient descent

In collaboration with Dr Patrick Rebentrost and Prof Seth Lloyd

For more complex optimisation problems, closed-form solutions such as linear systems of equations are not available, and optimisation can be approached by gradient descent methods. In standard gradient descent, the candidate solution is iteratively updated by making steps in the direction of steepest descent, while Newton’s method also uses curvature information. The quantum algorithm is based on amplitude encoding, from which it derives its logarithmic runtime in the dimension of the search space. The proposal considers a very specific type of objective function that allows us to compute the derivative and Hessian matrix at the point of the current solution using an extended version of density matrix exponentiation. The probabilistic nature of the quantum algorithm implies that every iteration only produces a proportion of correct copies of the quantum state, and the method therefore scales exponentially with the number of steps. This makes it suitable for local searches in large search spaces.
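A classical sketch helps separate the two points above: the iterative update rule itself, and the exponential decay of usable copies with the number of steps. Everything below (function names, step size, success probability `p`) is an illustrative assumption, not the quantum routine:

```python
def gradient_descent(grad, x0, eta, steps):
    """Plain steepest-descent iteration x <- x - eta * grad(x)."""
    x = x0
    for _ in range(steps):
        x = x - eta * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0, eta=0.1, steps=50)

def surviving_fraction(p, steps):
    """If each quantum iteration keeps only a fraction p of the copies,
    the usable copies decay as p**steps: exponential in the step count."""
    return p ** steps
```

With, say, `p = 0.5`, only about one in a thousand copies survives ten iterations, which is why the method is best suited to a small number of steps in a very large search space.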

Algorithm 3: Quantum nearest neighbour

The quantum version of the nearest neighbour algorithm writes a function of the distance between a new input and the training inputs into the amplitudes of a quantum state. Two versions can be distinguished, depending on whether amplitude encoding or basis encoding is used for the dataset. The versions implement slightly different distance functions, which can be interpreted as different kernels. The runtime is linear in, or even independent of, the number of qubits, excluding state preparation. The two algorithms are excellent candidates for implementation on small-scale quantum processors due to their simplicity.
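The distance-based classification rule can be sketched classically. A Gaussian-type weighting stands in for the kernel here, which is an assumption for illustration rather than the exact distance function of either version:

```python
import math

def kernel_classify(new_x, data, kappa=1.0):
    """Distance-weighted classifier: each training pair (x, y) with
    label y in {-1, +1} votes with weight exp(-kappa * d(new_x, x)^2).
    A classical sketch of the distance information the quantum states
    encode, not the quantum routine itself."""
    score = 0.0
    for x, y in data:
        d2 = sum((a - b) ** 2 for a, b in zip(new_x, x))
        score += y * math.exp(-kappa * d2)
    return 1 if score >= 0 else -1

data = [([0.0, 0.0], -1), ([0.1, 0.2], -1),
        ([1.0, 1.0], 1), ([0.9, 1.1], 1)]
label = kernel_classify([0.8, 0.9], data)  # closest neighbours have label +1
```

Changing the weighting function corresponds to changing the kernel, which is the sense in which the two quantum versions differ.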

Algorithm 4: Quantum perceptron

Quantum neural network models have been referred to extensively in the literature on quantum machine learning, but these proposals rarely manage to translate the basic characteristics of neural networks, namely the concatenation of linear algebra with nonlinearities as well as non-convex training, into a quantum setting. Approaching this problem from the perspective of information encoding strategies helps to shed light on the difficulties involved in formulating a quantum neural network that harnesses obvious speedups. The quantum perceptron algorithm suggests how to implement the step activation function on basis-encoded inputs by using the quantum phase estimation routine.
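As a classical reference point, the step activation and the update rule it feeds into look as follows. This is the standard textbook perceptron, not the quantum routine, and all names are my own:

```python
def step(z):
    """Step activation: the nonlinearity the quantum perceptron
    realises on basis-encoded inputs via phase estimation."""
    return 1 if z >= 0 else 0

def perceptron_predict(w, b, x):
    return step(sum(wi * xi for wi, xi in zip(w, x)) + b)

def perceptron_train(data, dim, eta=1.0, epochs=10):
    """Classical perceptron rule: nudge the weights by the prediction
    error. The (non-convex, iterative) training step is the part that
    resists a straightforward quantum translation."""
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = y - perceptron_predict(w, b, x)
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            b += eta * err
    return w, b

# Learn the logical AND function on binary (basis-encodable) inputs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = perceptron_train(data, dim=2)
```

The discontinuity of `step` is exactly what makes it awkward for unitary (linear) quantum dynamics, motivating the detour via phase estimation.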

Algorithm 5: Quantum ensemble method

Quantum parallelism is an interesting candidate for ensemble methods, where different (quantum) models are combined for prediction. The method defines a quantum ensemble through a state preparation routine that creates a weighted superposition of base models, to which a classification routine can be applied in parallel. This is close to a Bayesian learning framework. The case of weights that correspond to the accuracy of a model was investigated further. It was shown that if the model exhibits a certain symmetry, the ensemble will effectively consist only of accurate classifiers.

Simulations reveal that such a combined classifier performs well in simple examples.
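The weighted combination of base models can be sketched classically as a weighted vote, with the weights playing the role of the amplitudes in the superposition. The base classifiers and weights below are illustrative assumptions, not the simulated examples from the thesis:

```python
def ensemble_predict(models, weights, x):
    """Weighted vote over base classifiers with labels in {-1, +1}.
    Classical analogue of applying the classification routine to a
    weighted superposition of models; illustrative sketch only."""
    score = sum(wgt * m(x) for m, wgt in zip(models, weights))
    return 1 if score >= 0 else -1

# Three toy one-dimensional threshold classifiers, weighted by
# (hypothetical) accuracy so that accurate models dominate the vote.
models = [lambda x: 1 if x > 0 else -1,
          lambda x: 1 if x > 1 else -1,
          lambda x: -1]
weights = [0.9, 0.7, 0.4]
label = ensemble_predict(models, weights, 1.5)  # the two accurate models agree
```

Accuracy-proportional weights are the case analysed above: under the stated symmetry, inaccurate members cancel and the vote is effectively carried by the accurate classifiers.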