Bulletin of the American Physical Society
APS March Meeting 2015
Volume 60, Number 1
Monday–Friday, March 2–6, 2015; San Antonio, Texas
Session T38: Quantum Characterization and Tomography
Sponsoring Units: GQI
Chair: Joshua Combes, Perimeter Institute
Room: 212B
Thursday, March 5, 2015 11:15AM - 11:27AM
T38.00001: Iterated benchmarking to separate unitary errors from decoherence
Lev Bishop, Sarah Sheldon, Stefan Filipp, Matthias Steffen, Jerry M. Chow, Jay M. Gambetta
We describe a scalable experimental protocol for estimating the relative contributions of unitary errors and decoherence to the fidelity of individual quantum gates. As an extension of interleaved randomized benchmarking (Magesan \textit{et al.}, Phys. Rev. Lett. \textbf{109}, 080505 (2012)), this protocol consists of interleaving random Clifford gates between n-fold repetitions of the gate of interest. The type of error is revealed by the scaling with the number of repetitions: linear in the case of errors due to decoherence; quadratic in the case of purely unitary errors. This protocol has recently been implemented experimentally for transmon superconducting qubits and found useful for calibrating microwave pulses as well as for identifying new error sources that may be affecting gate fidelity.
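The linear-versus-quadratic signature described in this abstract can be checked in a few lines. Below is a minimal numpy sketch (not the authors' code) contrasting the composed infidelity of a coherent overrotation with that of a depolarizing channel; the per-gate error magnitudes `eps` and `p` are illustrative values, not experimental ones.

```python
import numpy as np

def unitary_infidelity(theta):
    # Entanglement infidelity of a single-qubit overrotation by angle theta:
    # 1 - |tr exp(-i*theta*X/2)|^2 / 4 = sin^2(theta/2).
    return np.sin(theta / 2) ** 2

def decoherence_infidelity(p, n):
    # n composed depolarizing channels of strength p: 1 - (1 - p)^n ~ n*p.
    return 1 - (1 - p) ** n

eps, p = 0.01, 1e-4            # illustrative per-gate error magnitudes
ns = np.array([1, 2, 4, 8])    # number of repetitions of the gate of interest
coh = unitary_infidelity(ns * eps)
dec = decoherence_infidelity(p, ns)

print(coh[1] / coh[0])   # ~4: doubling n quadruples a coherent error (quadratic)
print(dec[1] / dec[0])   # ~2: doubling n doubles a decoherence error (linear)
```

The n-fold repetition thus amplifies coherent errors much faster than decoherence, which is what makes the two contributions separable.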
Thursday, March 5, 2015 11:27AM - 11:39AM
T38.00002: Non-Markovianity in Randomized Benchmarking
Harrison Ball, Tom M. Stace, Michael J. Biercuk
Randomized benchmarking is routinely employed to recover information about the fidelity of a quantum operation by probabilistically twirling errors over an implementation of the Clifford group. The standard assumption of Markovianity in the underlying noise environment, however, remains at odds with the realistic, correlated noise encountered in real systems. We model single-qubit randomized benchmarking experiments as a sequence of ideal Clifford operations interleaved with stochastic dephasing errors, implemented as unitary rotations about $\sigma_z$. Successive error rotations map to a sequence of random variables whose correlations introduce non-Markovian effects emulating realistic colored-noise environments. The Markovian limit is recovered by turning off all correlations, reducing each error to an independent Gaussian-distributed random variable. We examine the dependence of the statistical distribution of fidelity outcomes on these noise correlations, deriving analytic expressions for the probability density functions and related statistics of the relevant fidelity metrics. This enables us to characterize and bear out the distinction between the Markovian and non-Markovian cases, with implications for the interpretation and handling of experimental data.
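The two limits described in this abstract (independent errors versus correlated errors) can be emulated numerically. The sketch below is a toy model, not the authors' calculation: it compares the accumulated dephasing angle over a sequence of length `m` for independent Gaussian error rotations against a fully correlated (quasi-static) error, which is one simple stand-in for strongly non-Markovian colored noise. The sequence length `m` and noise strength `sigma` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma, trials = 50, 0.02, 20000

# Markovian limit: a fresh, independent Gaussian error angle at each Clifford.
phi_ind = rng.normal(0, sigma, (trials, m)).sum(axis=1)

# Quasi-static (strongly correlated) limit: one error angle reused m times.
phi_cor = m * rng.normal(0, sigma, trials)

# Survival probability of |+> after a net z-rotation phi is cos^2(phi/2);
# correlations broaden the accumulated phase (variance ~ m^2*sigma^2 vs m*sigma^2),
# broadening the distribution of sequence-fidelity outcomes.
f_ind = np.cos(phi_ind / 2) ** 2
f_cor = np.cos(phi_cor / 2) ** 2
print(phi_ind.var(), phi_cor.var())
```

The same random walk versus coherent accumulation distinction underlies the analytic fidelity distributions derived in the talk.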
Thursday, March 5, 2015 11:39AM - 11:51AM
T38.00003: Hyperaccuracy and Error Scaling in Gate Set Tomography
Kenneth Rudinger, Erik Nielsen, John King Gamble, Robin Blume-Kohout
Standard quantum tomographic procedures are limited in their usefulness by errors in the prior knowledge of the implemented POVMs and prepared states. Gate set tomography (GST) is a tomographic framework introduced to solve this problem of self-referential calibration [arXiv:1310.4492]. GST seeks to simultaneously and self-consistently characterize the set of implemented gates, prepared states, and POVMs. This talk provides a detailed analysis of imperfections in GST-based estimates. From simulations, we establish 1) lower bounds on the experimental resources required to ensure that GST will provide a reliable and useful estimate of the gates, and 2) the scaling of GST's accuracy with the number of samples per experiment, the maximum experiment length, and the rate of incoherent error. These results demonstrate that GST can be far more accurate than standard tomography. Lastly, we show (from both simulations and experiments) that experiment-by-experiment $\chi^2$ tests are extremely effective at diagnosing inconsistencies in the model caused by non-Markovian noise.
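As a toy illustration of the experiment-by-experiment $\chi^2$ idea mentioned at the end of this abstract, the sketch below simulates binomial outcome counts for a set of experiments against model-predicted probabilities, corrupts one experiment, and flags it by its per-experiment $\chi^2$ statistic. All numbers, and the injected-error model, are hypothetical; this is not the GST analysis itself.

```python
import numpy as np

rng = np.random.default_rng(5)
shots = 500
p_model = rng.uniform(0.1, 0.7, 200)   # model-predicted outcome probabilities
k = rng.binomial(shots, p_model)       # counts consistent with the model...
k[42] = rng.binomial(shots, p_model[42] + 0.3)   # ...except one corrupted experiment

# Per-experiment chi^2 statistic (one degree of freedom): values far above ~1
# flag data that the fitted model cannot explain, e.g. non-Markovian drift.
chi2 = (k - shots * p_model) ** 2 / (shots * p_model * (1 - p_model))
print(int(np.argmax(chi2)))
```

Because the statistic is computed per experiment rather than pooled, a single inconsistent experiment stands out instead of being diluted across the whole data set.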
Thursday, March 5, 2015 11:51AM - 12:03PM
T38.00004: Gate Set Tomography on a trapped-ion qubit
Erik Nielsen, Robin Blume-Kohout, John Gamble, Kenneth Rudinger, Jonathan Mizrahi, Jonathan Sterk, Peter Maunz
We present enhancements to gate set tomography (GST), a framework in which an entire set of quantum logic gates (including preparation and measurement) can be fully characterized without the need for pre-calibrated operations. Our new method, ``extended Linear GST'' (eLGST), uses fast, reliable analysis of structured long gate sequences to deliver tomographic precision at the Heisenberg limit within GST's calibration-free framework. We demonstrate this precision on a trapped-ion qubit, and show a significant (orders-of-magnitude) advantage over both standard process tomography and randomized benchmarking. This work was supported by the Laboratory Directed Research and Development (LDRD) program at Sandia National Laboratories. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Thursday, March 5, 2015 12:03PM - 12:15PM
T38.00005: Gate Set Tomography of a 3D Transmon Qubit
Yudan Guo, Sergey Novikov, Daniel Greenbaum, Andrew Skinner, B.S. Palmer
Quantum gate set tomography\footnote{Blume-Kohout \textit{et al.}, arXiv:1310.4492 (2013).}$^{,}$\footnote{Merkel \textit{et al.}, Phys. Rev. A \textbf{87}, 062119 (2013).} is a recently developed tool for characterizing quantum gates that does not suffer from the inaccuracies inherent in standard quantum process tomography. We present the results of a gate set tomography (GST) experiment performed on a superconducting 3D transmon qubit. $\pi$ and $\pi/2$ rotations about the x- and y-axes were used as the initial calibrated gates. We performed linear inversion on data from a 4-fiducial experiment to obtain an initial tomographic estimate, which was then used as the starting point for a maximum-likelihood procedure. The calibrated gates all achieved fidelities above 98\%, which was further verified by randomized benchmarking. The robustness of GST was also examined by deliberately introducing errors. We show that GST with maximum-likelihood estimation is able to discern errors due to a mixed initial state, as well as those due to a tilted rotation axis in our gate operations.
Thursday, March 5, 2015 12:15PM - 12:27PM
T38.00006: Self-consistent verification of quantum measurement properties
Marcus da Silva
Measurements are an important aspect of quantum mechanics, as they represent the controlled extraction of information about quantum systems. Recent approaches to quantum tomography, such as gate set tomography, have demonstrated that it is possible to recover a self-consistent description of a quantum system (including measurements) without assuming perfect knowledge of any of its components. However, these approaches typically focus on destructive measurements, and the inherent gauge freedom of quantum experiments makes many of the familiar properties of quantum measurements (e.g., efficiency and projectiveness) difficult or impossible to verify. Here we describe how the characterization of {\em non-destructive} measurements avoids some of these problems, and we propose alternative measurement properties that have the advantage of being gauge invariant, so that they can be verified experimentally.
Thursday, March 5, 2015 12:27PM - 12:39PM
T38.00007: Precision measurements with a single quantum system
Klaus Molmer, Alexander Kiilerich, Pinja Haikka
Continuous probing of a single quantum system provides information about the physical parameters that govern its evolution. The stochastic character of the quantum measurement process, and the back-action on the system accompanying different outcomes, make the extraction of precision information a dynamical process. Quantum trajectory theory of light-emitting systems yields an efficient Bayesian estimation, and full photodetection records reveal much more information than integrated signals [1,2]. We present an analysis of the Cramér-Rao bound, quantifying the asymptotic scaling of the estimation error after long-time probing of the light from a single emitter [2]. The choice of measurement strategy significantly influences the estimation sensitivity for different parameters, but for Markovian decay a deterministic equation provides the maximal estimation sensitivity attainable by any measurement on the system and its emitted radiation [3].
[1] S. Gammelmark and K. Molmer, \textit{Bayesian parameter inference from continuously monitored quantum systems}, Phys. Rev. A \textbf{87}, 032115 (2013).
[2] A. H. Kiilerich and K. Molmer, \textit{Estimation of atomic interaction parameters by photon counting}, Phys. Rev. A \textbf{89}, 052110 (2014).
[3] K. Molmer, \textit{Hypothesis testing with open quantum systems}, arXiv:1408.4568.
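For the simplest Markovian case, the asymptotic scaling referred to above can be illustrated with a toy estimator: for pure exponential decay, the waiting times between detected photons are exponentially distributed, the maximum-likelihood estimate of the decay rate is the inverse mean waiting time, and its variance approaches the Cramér-Rao bound $\gamma^2/N$ for $N$ detections. The decay rate and photon number below are hypothetical, and this ignores everything that makes the full photodetection-record analysis rich (back-action, measurement strategy).

```python
import numpy as np

rng = np.random.default_rng(2)
gamma = 2.0                              # hypothetical decay rate to estimate
n = 5000                                 # detected photons (a full click record)
waits = rng.exponential(1 / gamma, n)    # waiting times for Markovian decay

gamma_hat = 1 / waits.mean()             # maximum-likelihood estimate of gamma
crb = gamma ** 2 / n                     # Cramer-Rao bound on the estimator variance
print(gamma_hat, crb)
```

The estimation error shrinks as $\gamma/\sqrt{N}$, which is the baseline against which smarter strategies on the emitted radiation are compared.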
Thursday, March 5, 2015 12:39PM - 12:51PM
T38.00008: Applying Model Selection to Quantum State Tomography: Choosing Hilbert Space Dimension
Travis Scholten
Reconstructing the quantum state of a continuous-variable system (e.g., an optical mode) using quantum tomography presents a unique problem: the dimension of its Hilbert space is infinite. Its density matrix has infinitely many parameters, which cannot all be estimated from finite data. Brute-force reconstruction (e.g., via the Radon transform or deconvolution) produces undesirable overfitting artifacts. Smoothing is one solution, but lacks a good theoretical justification. I introduce a statistically well-motivated approach based on model selection and log-likelihoods. Maximum-likelihood estimates in a sequence of D-dimensional subspaces (spanned by the first D Fock states) are ranked by their log-likelihood. This ranking allows one to find an estimate of lower dimension that simultaneously provides a good fit to the data. I apply this method to heterodyne tomography and demonstrate that it can indeed eliminate overfitting by choosing a good dimension D in which to reconstruct optical states. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.
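A stripped-down version of this ranking can be demonstrated on diagonal (photon-number) data alone: fit a multinomial model supported on the first D Fock levels by maximum likelihood, then trade fit quality against parameter count, here with an AIC-style score standing in for the talk's log-likelihood ranking. The four-level test state and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
true_p = np.array([0.4, 0.3, 0.2, 0.1])     # hypothetical state on 4 Fock levels
data = rng.choice(4, size=2000, p=true_p)   # simulated photon-number outcomes
counts = np.bincount(data, minlength=10)
N = counts.sum()

def aic(D):
    # ML multinomial fit restricted to the first D Fock levels: empirical
    # frequencies if all the data fit inside, else the model is ruled out.
    if counts[D:].sum() > 0:
        return np.inf
    nk = counts[:D]
    logL = np.sum(nk[nk > 0] * np.log(nk[nk > 0] / N))
    return 2 * (D - 1) - 2 * logL           # penalize the D-1 free parameters

best = min(range(1, 10), key=aic)
print(best)   # smallest dimension that still fits the data well
```

Dimensions too small to hold the data are rejected outright, while the penalty term prevents the score from favoring needlessly large D, which is the overfitting the method is designed to eliminate.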
Thursday, March 5, 2015 12:51PM - 1:03PM
T38.00009: Informationally complete measurements from compressed sensing methodology
Amir Kalev, Carlos Riofrio, Robert Kosut, Ivan Deutsch
Compressed sensing (CS) is a technique for faithfully estimating an unknown signal from relatively few data points when the measurement samples satisfy a restricted isometry property (RIP). Recently this technique has been ported to quantum information science to perform tomography with a substantially reduced number of measurement settings. In this work we show that the constraint that a physical density matrix is positive semidefinite provides a rigorous connection between the RIP and the informational completeness (IC) of a POVM used for state tomography. This enables us to construct IC measurements that are robust to noise using tools provided by the CS methodology. Exact recovery no longer hinges on a particular convex optimization program; solving any optimization constrained to the cone of positive matrices effectively results in a CS estimate of the state. From a practical point of view, we can therefore employ fast algorithms developed to handle large matrices for efficient tomography of quantum states in a large-dimensional Hilbert space.
Thursday, March 5, 2015 1:03PM - 1:15PM
T38.00010: Quantum Bootstrapping via Compressed Quantum Hamiltonian Learning
Nathan Wiebe, Christopher Granade, David Cory
Recent work has shown that quantum simulation is a valuable tool for learning empirical models of quantum systems. We build upon these results by showing that small quantum simulators can be used to characterize and learn control models for larger devices, for wide classes of physically realistic Hamiltonians. This leads to a new application for small quantum computers: characterizing and controlling larger quantum computers. Our protocol achieves this by using Bayesian inference, in concert with Lieb-Robinson bounds and interactive quantum learning methods, to achieve compressed simulations for characterization. Whereas Fisher-information analysis shows that current methods employing short-time evolution are suboptimal, interactive quantum learning allows us to overcome this limitation. We illustrate the efficiency of our bootstrapping protocol by showing numerically that an 8-qubit Ising-model simulator can be used to calibrate and control a 50-qubit Ising simulator while using only about 750 kilobits of experimental data.
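The role of Bayesian inference with increasingly long evolution times can be seen in a single-parameter cartoon of Hamiltonian learning: estimating one frequency from simulated measurement records with a grid posterior. This is a toy version of the approach, not the authors' protocol; the parameter value, grid, shot count, and measurement schedule are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = 0.83                        # hypothetical device parameter to learn
grid = np.linspace(0.0, 2.0, 2001)  # discretized prior support for the parameter
post = np.full(grid.size, 1.0 / grid.size)

shots = 200
for t in [1, 2, 4, 8, 16, 32]:       # exponentially growing evolution times
    p1 = np.sin(true_w * t / 2) ** 2               # Born-rule outcome probability
    k = rng.binomial(shots, p1)                    # simulated measurement record
    p = np.sin(grid * t / 2) ** 2
    post = post * p ** k * (1 - p) ** (shots - k)  # binomial likelihood update
    post /= post.sum()

est = float(np.sum(grid * post))     # posterior mean estimate of the parameter
print(est)
```

Short times keep the likelihood unambiguous while long times sharpen it, which is the intuition behind learning with far less data than naive short-time methods require.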
Thursday, March 5, 2015 1:15PM - 1:27PM
T38.00011: Practical variational tomography for critical 1D systems
Jong Yeon Lee, Olivier Landon-Cardinal
We further investigate a recently introduced efficient quantum state reconstruction procedure targeted at states well approximated by the multi-scale entanglement renormalization ansatz (MERA). First, we introduce an improved optimization scheme that can be easily generalized to MERA states with larger bond dimension. Second, we provide a detailed analysis of the error propagation and quantify how it affects the distance between the experimental state and the reconstructed state. Third, we explain how to bound this distance using local data, providing an efficient, scalable certification method. Fourth, we examine the performance of MERA tomography on the ground states of several 1D critical models.
Thursday, March 5, 2015 1:27PM - 1:39PM
T38.00012: Controlling qubit drift by recycling error correction syndromes
Robin Blume-Kohout
Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated: if it is ignored, error rates will rise to intolerable levels. But compensation requires knowing the parameters' current values, which appears to require halting experimental work to recalibrate (e.g., via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration of the physical qubits in an error-correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple, yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise it is less effective, only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that cannot be compensated.
Thursday, March 5, 2015 1:39PM - 1:51PM
T38.00013: Heat bath algorithmic cooling using an electron-nuclear spin ensemble in the solid state: characterization of open quantum system control
Kyungdeock Park, Robabeh Darabad, Guanru Feng, Stephane Labruyere, Jonathan Baugh, Raymond Laflamme
The ability to perform multiple rounds of quantum error correction (QEC) is an essential task for scalable quantum information processing, but experimental realizations of it are still in their infancy. Key requirements for QEC are high control fidelity and the ability to extract entropy from ancilla qubits. Nuclear magnetic resonance (NMR) quantum processors have demonstrated high control fidelity with up to 12 qubits. A remaining challenge is to prepare nearly pure ancilla qubits to enable QEC. Heat bath algorithmic cooling (HBAC) is an efficient tool for extracting entropy from qubits that interact with a heat bath, allowing cooling below the bath temperature. For implementing HBAC with spins, a hyperfine-coupled electron-nuclear system in a single crystal is more advantageous than conventional NMR systems, since the electron, with higher polarization and faster relaxation, can act as the heat bath. We characterize 3- and 5-qubit spin systems in gamma-irradiated malonic acid and present simulation and experimental results of HBAC to benchmark our quantum control. Two control schemes are compared: electron-nuclear double resonance and indirect control of the nuclei via the anisotropic hyperfine interaction.
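The entropy-extraction step at the heart of HBAC can be illustrated with the textbook idealized 3-spin compression map: majority-vote compression sends polarizations $(\epsilon_1,\epsilon_2,\epsilon_3)$ to $(\epsilon_1+\epsilon_2+\epsilon_3-\epsilon_1\epsilon_2\epsilon_3)/2$ on the target spin. Iterating compression with bath-refreshed ancillas (an idealization, not this paper's electron-nuclear implementation) approaches the known 3-qubit limit of roughly twice the bath polarization. The bath polarization below is a hypothetical value.

```python
def compress3(e1, e2, e3):
    # Majority-vote (3-bit compression) step: the target spin's polarization
    # becomes (e1 + e2 + e3 - e1*e2*e3) / 2.
    return (e1 + e2 + e3 - e1 * e2 * e3) / 2

eps_bath = 0.01          # heat-bath (e.g. electron) polarization; hypothetical
eps = eps_bath
for _ in range(20):      # compress, then refresh both ancillas on the bath
    eps = compress3(eps, eps_bath, eps_bath)
print(eps / eps_bath)    # approaches the 3-qubit asymptotic boost of ~2
```

The fast, high-polarization electron bath is what makes the refresh step efficient in the solid-state setting described above.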
Thursday, March 5, 2015 1:51PM - 2:03PM
T38.00014: Characterization of Qudit Entanglement Through the Visualization of Spin-Coherent-State Wigner Functions
Todd Tilma, Mark Everitt, Kae Nemoto, William Munro
The purpose of our research is to determine whether there is a general relationship between the degree of entanglement and the total amount of negativity in the Wigner function for various combinations of finite-dimensional quantum states. Specifically, using the Stratonovich-Weyl correspondence we can take the density matrix of a known, finite-dimensional quantum state (hereafter a ``qudit'') and generate its corresponding finite-dimensional Wigner function. This Wigner function reproduces the qudit density matrix through a known volume integral. By performing the same volume integral, but with the absolute value of the Wigner function as the kernel, we instead obtain a measure of the total amount of negativity of the Wigner function. Our question is thus: is this ``negative volume'' equivalent to the amount of entanglement in the initial qudit state? Our results for general two-qubit states have confirmed a monotonic relationship between concurrence and this negative volume in specific cases. By analyzing the Wigner functions of three or more qubits, as well as qubit-qutrit Wigner functions, we hope to build a consensus on whether the negativity of the Wigner function is a measure of, or witness to, entanglement.
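One side of the proposed comparison, the entanglement monotone, is standard: Wootters' concurrence. The sketch below computes it for a Bell state and a product state; the other side, the negative-volume integral of the Wigner function, requires the SU(2) Stratonovich-Weyl kernel and is omitted here.

```python
import numpy as np

def concurrence(rho):
    # Wootters concurrence of a two-qubit density matrix:
    # C = max(0, l1 - l2 - l3 - l4), where l_i are the square roots of the
    # eigenvalues of rho @ rho_tilde, sorted in decreasing order, and
    # rho_tilde = (sy x sy) rho* (sy x sy).
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_t = yy @ rho.conj() @ yy
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_t)))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Bell state (maximally entangled) and a product state.
bell = np.zeros((4, 4)); bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
prod = np.zeros((4, 4)); prod[0, 0] = 1.0
print(concurrence(bell), concurrence(prod))
```

A monotonic relationship, as conjectured above, would mean the negative volume orders states the same way this quantity does.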
Thursday, March 5, 2015 2:03PM - 2:15PM
T38.00015: Adaptive characterization of coherent states
Markku P.V. Stenberg, Kevin Pack, Frank K. Wilhelm
We present a method for the efficient characterization of an optical coherent state $|\alpha\rangle$. We choose measurement settings adaptively, based on the data as they are collected. Our algorithm divides the estimation into three steps with different measurement strategies: (i) finding a crude estimate, (ii) rapidly improving the accuracy, and (iii) a final phase in which the improvement of the accuracy slows down due to the quantum nature of the coherent state. Our algorithm significantly outperforms nonadaptive schemes. While our standard strategy is robust against measurement errors, we also present strategies optimized for the presence of such errors.
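As a nonadaptive baseline of the kind such schemes are compared against, heterodyne detection of a coherent state yields outcomes scattered around $\alpha$ with vacuum-limited Gaussian noise, so averaging them estimates $\alpha$ at a $1/\sqrt{n}$ rate; that noise floor is the "quantum nature" limiting the final stage. The sketch is illustrative only: the amplitude and the noise convention (variance 1/2 per quadrature) are assumptions, and no adaptivity is modeled.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 1.4 + 0.7j            # hypothetical unknown coherent amplitude
n = 4000                      # number of heterodyne samples
# Heterodyne outcomes: alpha plus complex Gaussian vacuum noise
# (variance 1/2 per quadrature in these units); the sample mean is unbiased.
samples = alpha + (rng.normal(0, np.sqrt(0.5), n)
                   + 1j * rng.normal(0, np.sqrt(0.5), n))
est = samples.mean()
print(abs(est - alpha))       # shrinks as 1/sqrt(n): the quantum-noise floor
```

Adaptive strategies aim to reach a given accuracy with fewer measurements than this fixed-setting average.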