Bulletin of the American Physical Society
2023 APS March Meeting
Volume 68, Number 3
Las Vegas, Nevada (March 5-10)
Virtual (March 20-22); Time Zone: Pacific Time
Session K73: Quantum Neural Networks (Focus Session)
Sponsoring Units: DQI
Chair: Marco Cerezo, Los Alamos National Laboratory
Room: Room 405
Tuesday, March 7, 2023 3:00PM - 3:12PM
K73.00001: Learning in Finitely-Sampled Quantum Systems 1: Expressive Capacity
Fangjun Hu, Gerasimos M Angelatos, Saeed A Khan, Marti Vives, Esin Tureci, Leon Y Bello, Graham E Rowlands, Guilhem J Ribeill, Hakan E Tureci
Quantitative insight into the meaningful computational capacity of current quantum platforms is critical to efforts in quantum machine learning and sensing. We introduce an intuitive notion of expressive capacity in terms of the space of functions that can be computed, and develop a mathematical framework for analyzing the capacity of qubit-based systems in the presence of sampling noise. We obtain a tight bound for the expressive capacity of a given quantum system under S shots, and present the mathematical construction of an optimal measurement basis that is robust to sampling noise. We apply this analysis to learning through a quantum annealer-based continuous encoding and parameterized quantum circuits, highlighting how quantum correlations and system size influence the expressive capacity in the presence of sampling noise.
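A minimal numpy sketch of the finite-sampling effect this abstract analyzes: a single-qubit feature f(x) = <Z> is estimated from S shots, and the mean squared sampling error shrinks as 1/S, enlarging the space of resolvable functions. The RY(x) encoding and the error metric here are illustrative stand-ins, not the authors' construction.

```python
# Illustrative sketch (not the authors' framework): finite-shot estimation of
# a quantum feature f(x) = <Z> for a single qubit encoding x via RY(x).
import numpy as np

rng = np.random.default_rng(0)

def exact_expectation(x):
    # For |psi> = RY(x)|0>, the exact expectation is <Z> = cos(x).
    return np.cos(x)

def sampled_expectation(x, shots):
    # Probability of outcome 0 is cos^2(x/2); estimate <Z> from S shots.
    p0 = np.cos(x / 2) ** 2
    n0 = rng.binomial(shots, p0)
    return 2 * n0 / shots - 1

xs = np.linspace(0, np.pi, 50)
for S in (10, 100, 10000):
    err = np.mean([(sampled_expectation(x, S) - exact_expectation(x)) ** 2
                   for x in xs])
    print(f"S = {S:6d}  mean squared sampling error = {err:.5f}")
# The error scales as ~ (1 - <Z>^2)/S: more shots enlarge the space of
# functions the measured feature can reliably represent.
```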
Tuesday, March 7, 2023 3:12PM - 3:24PM
K73.00002: Learning in Finitely-Sampled Quantum Systems 2: Applications
Gerasimos M Angelatos, Fangjun Hu, Saeed A Khan, Marti Vives, Esin Tureci, Leon Y Bello, Graham E Rowlands, Guilhem J Ribeill, Hakan E Tureci
Machine learning in quantum systems has been hindered by the emergence of “barren plateaus” which preclude training, particularly in the presence of sampling noise, and are associated with increasing expressibility, defined as a distance from a 2-design. Here we present a practical method for learning in finitely-sampled qubit systems based instead on the expressive capacity analysis from part 1. This capacity metric encompasses the input distribution, algorithm, physical device, and measurement, making it ideal for a full-stack analysis of current quantum platforms and for informing ansatz design tailored to a specific system. We describe the construction of optimal computational-basis measurements that are robust to sampling noise, providing a compressed representation of the quantum feature space and a powerful learning method, inspired by quantum reservoir computing, which is not subject to barren plateaus. We demonstrate this approach on parameterized quantum circuits experimentally realized on an IBM superconducting qubit processor, emphasizing how one naturally obtains principal features unique to a given system and its particular noise environment. Using very limited measurement resources, we are able to reliably estimate device principal features and connect capacity with performance on learning tasks.
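A hedged sketch of the reservoir-computing-style pipeline the abstract describes: principal features are extracted from finite-shot computational-basis statistics, and only a linear readout is trained. The stand-in "device" distribution, shot counts, and toy target below are assumptions, not the experimental setup.

```python
# Illustrative sketch (not the experimental pipeline): extract "principal
# features" from finite-shot, computational-basis measurement statistics and
# train only a linear readout, so no circuit parameters are optimized and
# barren plateaus do not arise.
import numpy as np

rng = np.random.default_rng(1)
n_qubits, shots, n_inputs = 3, 200, 40
dim = 2 ** n_qubits

def measured_frequencies(x):
    # Stand-in for a device: an input-dependent distribution over bitstrings,
    # sampled with a finite number of shots.
    logits = np.sin(x * np.arange(1, dim + 1))
    p = np.exp(logits) / np.exp(logits).sum()
    counts = rng.multinomial(shots, p)
    return counts / shots

X = np.linspace(0, 2 * np.pi, n_inputs)
F = np.array([measured_frequencies(x) for x in X])   # (inputs, bitstrings)

# SVD of the centered feature matrix yields noise-robust principal features,
# a compressed representation of the measured feature space.
Fc = F - F.mean(axis=0)
U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
k = 3                                                # keep leading features
features = Fc @ Vt[:k].T

# Linear readout via least squares on the compressed features.
y = np.cos(X)                                        # toy target function
w, *_ = np.linalg.lstsq(features, y, rcond=None)
print("training MSE:", np.mean((features @ w - y) ** 2))
```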
Tuesday, March 7, 2023 3:24PM - 3:36PM
K73.00003: Quantum Persistent Homology for Time Series
Bernardo Ameneyro, George Siopsis, Vasileios Maroulas
Persistent homology, a powerful mathematical tool for data analysis, summarizes the shape of data by tracking changes to topological features across different scales. Classical algorithms for persistent homology are often constrained by running times and memory requirements that grow exponentially with the number of data points. To surpass this problem, two quantum algorithms for persistent homology have been developed, based on two different approaches. However, both of these quantum algorithms take a data set in the form of a point cloud, which can be restrictive considering that many data sets, like biological signals and stock prices, come in the form of time series. We alleviate this issue by establishing a quantum Takens delay embedding algorithm, which identifies a time series with a point cloud via an embedding into its phase space. Having this quantum transformation of time series to point clouds, one may then use a quantum persistent homology algorithm to extract the topological features from the point cloud associated with the original time series. Furthermore, such embeddings retain all topological information, so the topological features extracted from the point cloud can be used to analyze the corresponding time series.
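For reference, a classical sketch of the Takens delay embedding that the abstract quantizes (the quantum algorithm itself is not reproduced here): a scalar series s(t) is mapped to the point cloud of vectors (s(t), s(t + tau), ..., s(t + (d-1)tau)) in a d-dimensional phase space, which can then be fed to a persistent homology routine.

```python
# Classical Takens delay embedding: time series -> point cloud.
import numpy as np

def delay_embed(series, dim, tau):
    """Return the point cloud of delay vectors for a 1-D time series."""
    n = len(series) - (dim - 1) * tau
    return np.stack([series[i: i + n] for i in
                     (np.arange(dim) * tau)], axis=1)

t = np.linspace(0, 8 * np.pi, 400)
signal = np.sin(t)                        # periodic signal -> circular cloud
cloud = delay_embed(signal, dim=2, tau=25)
print(cloud.shape)                        # (375, 2); a loop in phase space,
                                          # detectable by persistent homology
```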
Tuesday, March 7, 2023 3:36PM - 3:48PM (Author not Attending)
K73.00004: On training variational quantum circuits
Jacob Biamonte
We consider training phenomena in variational quantum circuits, including under- versus over-parameterization and noise effects. In particular, we contrast abrupt training transitions, reachability deficits, parameter concentrations, and parameter saturations.
Tuesday, March 7, 2023 3:48PM - 4:00PM
K73.00005: Solving Efficiently Variational Quantum Circuits with Flat Landscapes
David P Fitzek, Robert Jonsson, Christian Schaefer
Variational quantum algorithms represent a promising approach to utilizing currently available quantum computing infrastructure. The framework is based on a parameterized quantum circuit that is optimized in a closed loop via a classical algorithm. This tandem approach reduces the load on the quantum computing unit but comes at the cost of a classical optimization that can feature a flat energy landscape. Existing techniques, including imaginary-time propagation, natural gradient, and momentum-based approaches, have shown limited success, depending on the requirements placed on the quantum computing unit and the complexity of the problem at hand. In this work, we propose a novel optimizer that aims to distill the best aspects of these existing approaches. By employing the Broyden approach to approximate updates of the Fisher information, and combining it with a momentum-based algorithm, the optimizer reduces quantum-resource requirements while outperforming its more resource-demanding predecessors. Benchmarks on barren-plateau, LiH, and MaxCut problems demonstrate an overall stable performance, with a clear improvement over existing techniques in the case of flat landscapes. The optimizer introduces a new development strategy for gradient-based VQAs, with a plethora of possible improvements.
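A hedged sketch of the idea, not the paper's optimizer: natural-gradient-style descent where the metric (Fisher-like) matrix is maintained by cheap Broyden rank-one secant updates rather than re-measured each step, combined with heavy-ball momentum. The toy quadratic loss, step sizes, and regularizer below are all assumptions for illustration.

```python
# Sketch: Broyden-updated metric + momentum on an ill-conditioned ("flat")
# toy landscape, standing in for a VQA cost measured on hardware.
import numpy as np

A = np.diag([1.0, 1e-3])               # toy curvature: one nearly flat direction

def loss(theta):
    return 0.5 * theta @ A @ theta

def grad(theta):
    return A @ theta

theta = np.array([1.0, 1.0])
B = np.eye(2)                          # running metric approximation
v = np.zeros_like(theta)               # momentum buffer
g = grad(theta)
lr, beta, eps = 0.5, 0.9, 1e-6

for step in range(200):
    # Natural-gradient-like direction from the current metric approximation.
    d = np.linalg.solve(B + eps * np.eye(2), g)
    v = beta * v - lr * d              # heavy-ball momentum update
    theta_new = theta + v
    g_new = grad(theta_new)
    # "Good" Broyden rank-one secant update: enforce B dtheta ~= dg.
    dtheta, dg = theta_new - theta, g_new - g
    if dtheta @ dtheta > 1e-12:
        B += np.outer(dg - B @ dtheta, dtheta) / (dtheta @ dtheta)
    theta, g = theta_new, g_new

print("final loss:", loss(theta))
```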
Tuesday, March 7, 2023 4:00PM - 4:12PM
K73.00006: Novel Data Encoding Method for Quantum Machine Learning
Kaiwen Gui, Alexander M Dalzell, Alessandro Achille, Martin Suchara, Frederic T Chong
Quantum machine learning (QML) has the potential to provide computational speedups over classical methods. One of the potential quantum advantages comes from the ability to encode classical data in an exponentially compact form, such as amplitude encoding, which stores N values in only log N qubits. However, the data encoding process is costly, requiring either O(log N) qubits with O(N) gate depth, or O(N) ancilla qubits with O(log N) gate depth.
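A minimal sketch of the amplitude encoding described above, with the depth trade-off noted in comments. This builds the target statevector classically; the particular data values are arbitrary placeholders.

```python
# Amplitude encoding: N classical values become the 2^n amplitudes of an
# n = log2(N) qubit state. A circuit realizing this state needs either
# O(N) gate depth on n qubits, or O(log N) depth with O(N) ancillas.
import numpy as np

data = np.array([0.3, -1.2, 0.7, 2.0, 0.0, 0.5, -0.4, 1.1])  # N = 8 values
n_qubits = int(np.log2(len(data)))                            # 3 qubits

state = data / np.linalg.norm(data)      # amplitudes must be normalized
assert np.isclose(np.sum(np.abs(state) ** 2), 1.0)

# Reading values back is also costly: outcome i appears with probability
# |data_i|^2 / ||data||^2, so many shots are needed per value.
probs = np.abs(state) ** 2
print(f"{len(data)} values in {n_qubits} qubits; probs: {np.round(probs, 3)}")
```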
Tuesday, March 7, 2023 4:12PM - 4:48PM
K73.00007: Efficient and Large-Scale Semidefinite Programming with Quantum Neural Networks
Invited Speaker: Taylor L Patti
Semidefinite programs are optimization methods with a wide array of applications, such as approximating difficult combinatorial problems. We introduce a variational quantum algorithm for semidefinite programs that uses only $n$ qubits, a constant number of circuit preparations, and $O(n^2)$ expectation values in order to solve semidefinite programs with up to $N=2^n$ variables and $M=2^n$ constraints. Efficient optimization is achieved by encoding the objective matrix as a properly parameterized unitary conditioned on an auxiliary qubit, a technique known as the Hadamard Test. The Hadamard Test enables us to optimize the objective function by estimating only a single expectation value of the ancilla qubit, rather than separately estimating exponentially many expectation values. Similarly, we illustrate that the semidefinite programming constraints can be effectively enforced by implementing a second Hadamard Test, as well as imposing $\sim n^2/2$ Pauli string amplitude constraints. We demonstrate the effectiveness of our protocol by devising an efficient quantum implementation of the Goemans-Williamson algorithm, which is a useful approximation for various NP-hard problems, such as MaxCut. Our method exceeds the performance of analogous classical methods on a diverse subset of well-studied MaxCut problems from the GSet library. Extremely large-scale graphs and implications for classical optimization are discussed. Experimental implementations are explored.
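A statevector sketch of the Hadamard Test the abstract relies on: the real part of $\langle\psi|U|\psi\rangle$ appears as the Z expectation of a single ancilla, so one expectation value replaces exponentially many. The random state and unitary below are stand-ins for the SDP objective encoding.

```python
# Hadamard test in statevector form: <Z_ancilla> = Re <psi|U|psi>.
import numpy as np

rng = np.random.default_rng(2)
n = 3                                    # system qubits
dim = 2 ** n

# Random |psi> and random unitary U (stand-ins for the SDP encoding).
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
U, _ = np.linalg.qr(rng.normal(size=(dim, dim))
                    + 1j * rng.normal(size=(dim, dim)))

# Circuit: H on ancilla, controlled-U, H on ancilla again.
full = np.kron(np.array([1.0, 0.0]), psi)        # ancilla |0>, system |psi>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(dim)
cU = np.block([[I, np.zeros((dim, dim))],
               [np.zeros((dim, dim)), U]])       # U applied iff ancilla is |1>
full = np.kron(H, I) @ full
full = cU @ full
full = np.kron(H, I) @ full

p0 = np.linalg.norm(full[:dim]) ** 2             # ancilla measured in |0>
z_ancilla = 2 * p0 - 1
print(np.isclose(z_ancilla, np.vdot(psi, U @ psi).real))  # True
```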
Tuesday, March 7, 2023 4:48PM - 5:00PM (Author not Attending)
K73.00008: Signatures of double descent in deep quantum models
Aroosa Ijaz, Jason W Rocks, Juan Carrasquilla, Evan Peters, Marco Cerezo
Deep neural networks show amazing generalization properties despite being highly over-parameterized. They exhibit a transition past an interpolation point where, despite fitting every training data point perfectly, the generalization error decreases again as the number of parameters is increased. This violates the long-standing bias-variance trade-off. We present thorough empirical evidence of this “double descent” phenomenon in deep quantum neural networks. We also present various tools to study this transition, such as changes in bias and variance, the quantum neural tangent kernel, and Fisher information. We further examine where quantum deep learning is feasible and propose efficient ansätze. This is a first step towards using deep quantum models to avoid barren plateaus and achieve fast convergence.
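A classical toy illustration of the double descent the abstract studies (the quantum experiments are not reproduced here): in minimum-norm random-feature regression, test error typically peaks near the interpolation point p ~ n_train and falls again for p >> n_train. The feature map, noise level, and target function are assumptions.

```python
# Classical double-descent demo with random Fourier features and
# minimum-norm least squares (pinv fits the data exactly once p >= n_train).
import numpy as np

rng = np.random.default_rng(3)
n_train, n_test = 30, 200
x_tr = rng.uniform(-1, 1, n_train)
x_te = rng.uniform(-1, 1, n_test)
f = lambda x: np.sin(3 * x)
y_tr = f(x_tr) + 0.1 * rng.normal(size=n_train)

for p in (5, 15, 30, 60, 300):           # number of random features
    w_feat = rng.normal(size=p)
    b_feat = rng.uniform(0, 2 * np.pi, p)
    phi = lambda x: np.cos(np.outer(x, w_feat) * 3 + b_feat)
    coef = np.linalg.pinv(phi(x_tr)) @ y_tr
    te = np.mean((phi(x_te) @ coef - f(x_te)) ** 2)
    print(f"p = {p:4d}  test MSE = {te:.4f}")
# Expect a spike near p ~ n_train (interpolation point) and a second
# descent for p >> n_train.
```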
Tuesday, March 7, 2023 5:00PM - 5:12PM
K73.00009: Quantifying Information Flow in Parametrized Quantum Circuits
Lasse B Kristensen, Abhinav Anand, Felix Frohnert, Sukin Sim, Alán Aspuru-Guzik
Variational quantum algorithms have become a highly influential paradigm for the design of near-term quantum algorithms. At the heart of these algorithms lies the parametrized quantum circuit, a computation containing parameters that can be optimized in order to make the computation solve a given problem. However, this optimization can be both highly nontrivial and resource-demanding, especially when a large number of tunable parameters is involved. In the work presented in this talk, an optimization strategy is proposed which aims to mitigate this problem by working with only a subset of the parameters in each step of the optimization. Inspired by the way that information flows through a quantum system during a computation, the method tries to help the optimizer focus only on the parameters that most heavily influence the relevant readout of the computation. To facilitate the choice of parameters, a metric for reasoning about the flow of information in a parametrized quantum circuit is proposed, and the method is benchmarked on tasks with and without local structure to highlight the strengths and weaknesses of this metric. Overall, the general method shows good promise on several tasks, while the metric-assisted version seems best suited to problems with local structure.
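A hedged sketch of the subset-optimization strategy: each step updates only the k parameters scored as most influential on the readout. The talk's information-flow metric is replaced here by a simple gradient-magnitude score, and the convex toy loss stands in for a circuit cost function.

```python
# Subset-wise optimization: update only the most "influential" parameters
# per step (gradient magnitude as a stand-in influence metric).
import numpy as np

rng = np.random.default_rng(4)
n_params, k = 20, 4                      # tune only k of 20 parameters per step
A = rng.normal(size=(n_params, n_params))
H = A @ A.T / n_params                   # toy convex loss surrogate

def loss(theta):
    return 0.5 * theta @ H @ theta

def gradient(theta):
    return H @ theta

theta = rng.normal(size=n_params)
for step in range(300):
    g = gradient(theta)
    influence = np.abs(g)                # stand-in for the information-flow metric
    idx = np.argsort(influence)[-k:]     # k most influential parameters
    theta[idx] -= 0.1 * g[idx]           # update only that subset

print("final loss:", loss(theta))
```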
Tuesday, March 7, 2023 5:12PM - 5:24PM
K73.00010: Theoretical Guarantees for Permutation-Equivariant Quantum Neural Networks
Louis Schatzki, Martin Larocca, Frédéric Sauvage, Marco Cerezo
Despite the great promise of quantum machine learning models, there are several challenges
Tuesday, March 7, 2023 5:24PM - 5:36PM
K73.00011: The Quantum Lottery Ticket Hypothesis
William A Simon, Sukin Sim
Variational Quantum Algorithms (VQAs) for problems in chemistry, optimization, and machine learning are successful for small systems. However, training these algorithms with limited quantum resources is a barrier to scaling them to larger systems. Recent works have shown that overparameterization of VQAs can alleviate these difficulties, at the cost of deeper circuits. Pruning quantum circuits, a method derived from pruning classical neural networks, has been shown to be effective at reducing these costs by finding sparse approximations of overparameterized quantum circuits. The Lottery Ticket Hypothesis in classical machine learning states that within an untrained, overparameterized neural network there exist sparse subnetworks that can be trained to produce similar or better accuracy than the original overparameterized network. In this work, we ask: within an untrained, overparameterized quantum circuit, does there exist a sparse subcircuit that can be trained to produce similar or better accuracy? Answering this question could help determine how pruning and overparameterization can be used to scale VQAs beyond proof-of-concept models.
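A sketch of the classical lottery-ticket procedure the talk transplants to quantum circuits: train the dense model, prune the smallest-magnitude parameters, rewind the survivors to their initial values, and retrain the sparse "ticket". The quadratic toy loss and 50% pruning rate are arbitrary illustrative choices.

```python
# Lottery-ticket loop on a toy quadratic loss (stand-in for a VQA cost).
import numpy as np

rng = np.random.default_rng(5)
n = 16
H = np.diag(rng.uniform(0.1, 1.0, n))    # toy loss: 0.5 t^T H t + b^T t
b = rng.normal(size=n)

def loss(t):
    return 0.5 * t @ H @ t + b @ t

def train(theta0, mask, steps=500, lr=0.3):
    theta = theta0 * mask
    for _ in range(steps):
        g = (H @ theta + b) * mask       # gradients only through unpruned params
        theta -= lr * g
    return theta

theta_init = rng.normal(size=n)

dense = train(theta_init, np.ones(n))                     # 1) train dense
keep = np.abs(dense) >= np.quantile(np.abs(dense), 0.5)   # 2) prune 50%
ticket = train(theta_init, keep.astype(float))            # 3) rewind + retrain

print("dense loss :", loss(dense))
print("ticket loss:", loss(ticket), f"({keep.sum()} of {n} params)")
```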
Tuesday, March 7, 2023 5:36PM - 5:48PM
K73.00012: Demystifying a generative Quantum Machine Learning model using Information scrambling and Imaginary components of out-of-time correlators
Manas Sajjan, Vinit K Singh, Sabre Kais
Quantum machine learning models have gained popularity as an effective substitute for their classical counterparts in various tasks involving both classical and quantum data. However, these models are still used as black boxes, and knowledge of their inner workings is limited. In this talk, we demonstrate how a generative model (called the learner), which expresses a quantum state using neural networks, trains itself by exchanging information among its sub-units in real time. To quantify the process, we use out-of-time-ordered correlators (OTOCs). We analytically illustrate that the imaginary components of such OTOCs can be related to conventional measures of correlation, like mutual information, for the learner network. We also rigorously establish the inherent mathematical bounds on such quantities respected by the dynamical evolution during the training of the network. We further explicate how the mere existence of such bounds can be exploited to identify phase transitions in the simulated physical system (called the driver). Such an analysis offers important insights into the training dynamics by unraveling how quantum information is scrambled through the network, introducing correlations among its constituent sub-systems, and how footprints of correlated behavior from the simulated driver are surreptitiously imprinted onto the representation of the learner. This approach not only demystifies the training of quantum machine learning models but can also shed light on the capacitive quality of the model.
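A numerical sketch of an OTOC, F(t) = <W(t)† V† W(t) V>, for a small spin chain evolved exactly; both real and imaginary parts are printed. The transverse-field Ising Hamiltonian and operator choices are illustrative stand-ins; the learner network and its analytic bounds are not reproduced here.

```python
# OTOC F(t) for a 3-qubit transverse-field Ising chain, exact evolution.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(single, site, n):
    # Embed a single-qubit operator at `site` in an n-qubit tensor product.
    mats = [single if i == site else I2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 3
H = sum(op(Z, i, n) @ op(Z, i + 1, n) for i in range(n - 1)) \
    + sum(op(X, i, n) for i in range(n))          # transverse-field Ising

evals, evecs = np.linalg.eigh(H)
def U(t):                                         # exact time evolution
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

W, V = op(Z, 0, n), op(Z, n - 1, n)               # operators on opposite ends
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1                                        # initial state |000>

for t in (0.0, 0.5, 1.0, 2.0):
    Wt = U(t).conj().T @ W @ U(t)                 # Heisenberg picture W(t)
    F = np.vdot(psi, Wt.conj().T @ V.conj().T @ Wt @ V @ psi)
    print(f"t = {t:3.1f}  Re F = {F.real:+.4f}  Im F = {F.imag:+.4f}")
```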
Tuesday, March 7, 2023 5:48PM - 6:00PM
K73.00013: Evaluating the performance of sigmoid quantum perceptrons in quantum neural networks
Samuel A Wilkinson, Michael J Hartmann
Quantum neural networks (QNNs) have been proposed as a promising architecture for quantum machine learning. A number of different quantum circuit designs have been branded as QNNs; however, no clear candidate has emerged as more suitable than the others. Rather, the search for a "quantum perceptron", the fundamental building block of a QNN, is still underway. One candidate is the sigmoid quantum perceptron (SQP), designed to emulate the nonlinear activation function of a classical perceptron. SQPs inherit the universal approximation property that guarantees that classical neural networks can approximate any function. However, this does not guarantee that QNNs built from SQPs will have any quantum advantage over their classical counterparts. Here we critically investigate both the capabilities and the performance of SQP networks by computing their effective dimension and effective capacity, as well as by examining their performance on real learning problems. The results are compared to those obtained for other candidate networks which lack activation functions. We find that simpler, and apparently easier-to-implement, parametric quantum circuits actually perform better than SQPs. This indicates that the universal approximation theorem, a cornerstone of the theory of classical neural networks, is not a relevant criterion for QNNs.
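A hedged sketch of one metric named in the abstract: the empirical Fisher information spectrum, whose rank and decay enter effective-dimension measures used to compare candidate networks. The toy parametric model below is a stand-in for an SQP or parametric-circuit output, not the authors' code.

```python
# Empirical Fisher spectrum via finite-difference gradients of a toy model.
import numpy as np

rng = np.random.default_rng(6)
n_params, n_samples = 8, 200

def model_output(theta, x):
    # Stand-in parametric model; replace with a QNN's measured output.
    return np.sin(theta @ x)

def fisher(theta, xs, eps=1e-4):
    grads = []
    for x in xs:
        g = np.array([(model_output(theta + eps * e, x)
                       - model_output(theta - eps * e, x)) / (2 * eps)
                      for e in np.eye(n_params)])
        grads.append(g)
    G = np.array(grads)
    return G.T @ G / len(xs)             # empirical Fisher matrix

theta = rng.normal(size=n_params)
xs = rng.normal(size=(n_samples, n_params))
eigs = np.linalg.eigvalsh(fisher(theta, xs))
print("Fisher eigenvalues:", np.round(eigs[::-1], 4))
# A flatter, fuller spectrum indicates a larger effective dimension.
```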