Bulletin of the American Physical Society
APS March Meeting 2023
Volume 68, Number 3
Las Vegas, Nevada (March 5-10)
Virtual (March 20-22); Time Zone: Pacific Time
Session K01: Information Theory and Physics (Focus Session)
Sponsoring Units: GSNP DSOFT DBIO
Chair: Kieran Murphy, University of Pennsylvania
Room: Room 124
Tuesday, March 7, 2023 3:00PM - 3:36PM |
K01.00001: The Other Side of Entropy Invited Speaker: Stefano Martiniani Following its inception in the mid-19th century, our understanding of thermodynamic entropy has undergone many revisions, most notably through the development of microscopic descriptions by Boltzmann and Gibbs, which led to a deep understanding of equilibrium thermodynamics. The role of entropy has since moved beyond the traditional boundaries of equilibrium thermodynamics, towards problems for which the development of a statistical mechanical theory seems plausible but the a priori probabilities of states are not known, making the definition and calculation of entropy-like quantities challenging. In this talk, I will discuss information theoretic ideas and methods that enable these computations. First, we will explore why universal data compression (Lempel-Ziv coding) provides a good starting point for estimating entropy in and out of equilibrium. Then I will show through a simple argument how from the classical LZ bound we can derive a pattern matching estimator that readily generalizes to higher dimensions and that provides a tight bound on the entropy, overcoming the limitations of previous approaches. Finally, starting again from the simple LZ bound, I will show how we can obtain a new KL divergence estimator that outperforms existing methods, and how we used it to estimate local entropy production and to explore its relation to extractable work in active matter. I will illustrate these ideas by considering their applications in a variety of contexts: from colloidal systems, to absorbing-state models, to active matter, in simulations and in experiments. Throughout the talk, I will highlight challenges and promising future directions for these measurements. |
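The classical LZ bound that the talk starts from can be illustrated with a minimal sketch (an LZ76-style parser, not the higher-dimensional pattern-matching estimator of the talk; the function names are my own): the number of phrases c(n) in the parsing of a length-n sequence bounds the entropy rate via h ≲ c(n) log₂(n)/n.

```python
import math
import random

def lz76_phrase_count(s):
    """Number of phrases in a simple LZ76-style parsing of s:
    each phrase is the shortest block not contained in the prior text."""
    n, i, c = len(s), 0, 0
    while i < n:
        k = 1
        # grow the phrase until it no longer appears in the preceding text
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

def lz_entropy_rate(s):
    """Entropy-rate estimate in bits/symbol: h ~ c(n) * log2(n) / n."""
    n = len(s)
    return lz76_phrase_count(s) * math.log2(n) / n

random.seed(0)
coin = "".join(random.choice("01") for _ in range(2000))
print(lz_entropy_rate(coin))        # near 1 bit/symbol for fair coin flips
print(lz_entropy_rate("0" * 2000))  # near 0 for a constant sequence
```

The finite-size estimate converges slowly, which is one reason the tighter pattern-matching bounds mentioned in the abstract are of interest.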
Tuesday, March 7, 2023 3:36PM - 3:48PM |
K01.00002: First Law of Thermodynamics with strong coupling. Susanne Still, Gavin E Crooks When systems are strongly coupled, with non-negligible energetic interactions, state changes are accompanied not only by changes in mutual information, but also by changes in interaction energy. Strong coupling can potentially involve many degrees of freedom and work exchange on the nanoscale. The distinction between work and heat may thus become difficult in these situations. Nonetheless, since all thermodynamic considerations depend on the definition of work vs. heat, this is a central issue. |
Tuesday, March 7, 2023 3:48PM - 4:00PM |
K01.00003: Information and Optimal Inference Michael C Abbott, Julian A Rubinfien, Benjamin B Machta Physics works because it ignores microscopic details, not because they are small, but because they are unimportant. This idea of importance has a natural interpretation in information theory, and in earlier work we showed that omitting detail isn’t a trade-off: A model which includes many irrelevant parameters captures much less information than a simpler model without them, when both are fitted to the same limited noisy data [1]. We also showed that a Bayesian prior which is unaware that some parameters are irrelevant induces large bias, which is avoided by the simpler model [2]. Here we present recent results on inferring the parameters of the optimally simple model directly from data, with knowledge of the experiment and its noise, but without the need to first construct a prior. |
Tuesday, March 7, 2023 4:00PM - 4:12PM |
K01.00004: Data Efficiency of the Symmetric Information Bottleneck K. Michael Martini, Ilya M Nemenman The information bottleneck is an example of traditional Dimensionality Reduction; it compresses one set of variables while preserving maximal information about the other. The symmetric information bottleneck, on the other hand, is a Dual Dimensionality Reduction technique that simultaneously compresses two sets of random variables while preserving maximal information between the compressed sets. We explore the data size requirements of both methods by analytically calculating error bounds and mean squared errors in the estimation of mutual information terms used in both bottlenecks. We additionally introduce and examine the data size requirements of the deterministic symmetric information bottleneck, a symmetric bottleneck where the mutual information is replaced by entropy and the produced compression mappings are deterministic. We show that, in many situations of practical interest, the symmetric information bottleneck is more data efficient than the non-symmetric information bottleneck. We believe that this is an example of a more general principle that Dual Dimensionality Reduction methods are often more data efficient than their traditional Dimensionality Reduction equivalents. |
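The data-size question the authors analyze can be felt in a toy calculation (a plug-in mutual-information estimator on independent variables; the setup is illustrative and not the paper's bound calculation): with K×L joint states and N samples, the plug-in estimate of I(X;Y) = 0 is biased upward by roughly (K−1)(L−1)/(2N ln 2) bits, so MI-based objectives need substantial data before the bias is negligible.

```python
import math
import random
from collections import Counter

def plugin_mi(pairs):
    """Plug-in (maximum-likelihood) estimate of I(X;Y) in bits."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    # sum_{x,y} p(x,y) log2[ p(x,y) / (p(x) p(y)) ]
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

random.seed(1)
# X and Y independent over 8 states each, so the true MI is zero;
# the plug-in estimate shrinks toward zero only as the sample grows
for n in (100, 1000, 10000):
    data = [(random.randrange(8), random.randrange(8)) for _ in range(n)]
    print(n, round(plugin_mi(data), 4))
```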
Tuesday, March 7, 2023 4:12PM - 4:24PM |
K01.00005: Machine-Learning surrogates for information geometric analysis of multi-parameter models Jay C Spendlove, Mark K Transtrum, Tracianne B Neilsen Information theoretic tools are emerging as an important class of model-analysis methods. These include tools such as the Fisher Information Matrix (FIM) and information geometric techniques for model selection and parameter identifiability analysis. Critical to these methods is the ability to evaluate derivatives of the model predictions with respect to model parameters. Finite difference methods are often not sufficiently accurate, and so modern automatic differentiation (AD) methods are an enabling technology for these information theoretic analyses. However, many models are available only as "legacy code" to which AD methods are difficult to apply. To utilize these AD methods, we propose a general method in which a machine learned model is used as a surrogate model for these legacy models. We demonstrate this method for a legacy model that calculates underwater acoustic transmission loss in an ocean environment due to seafloor characteristics. We densely sample the parameter space of the model and then use manifold learning methods to construct the surrogate model. We then validate the FIM of the surrogate model against that of the original model, which has been calculated by tediously optimizing a finite-difference approach. Finally, we present some preliminary information geometric analyses of the surrogate model. |
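The quantity at the center of this workflow can be sketched for a toy model (a two-parameter exponential stands in for the legacy transmission-loss code; this shows the finite-difference baseline the authors validate against, not their manifold-learned surrogate): for a least-squares model with noise level σ, the FIM is J^T J / σ², with J the Jacobian of predictions with respect to parameters.

```python
import numpy as np

def model(theta, t):
    """Toy two-parameter model, a hypothetical stand-in for a legacy code."""
    a, k = theta
    return a * np.exp(-k * t)

def fim_finite_difference(theta, t, sigma=1.0, h=1e-6):
    """FIM = J^T J / sigma^2, with the Jacobian J from central differences."""
    theta = np.asarray(theta, dtype=float)
    cols = []
    for i in range(theta.size):
        dp = np.zeros_like(theta)
        dp[i] = h
        cols.append((model(theta + dp, t) - model(theta - dp, t)) / (2 * h))
    J = np.stack(cols, axis=1)  # rows: data points, columns: parameters
    return J.T @ J / sigma**2

t = np.linspace(0.0, 2.0, 21)
F = fim_finite_difference([1.5, 0.8], t)
# the eigenvalue spread of F reveals "sloppy" parameter directions
print(np.linalg.eigvalsh(F))
```

The step size h must be tuned against noise in the legacy model's output, which is exactly the tedium that an automatically differentiable surrogate avoids.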
Tuesday, March 7, 2023 4:24PM - 4:36PM |
K01.00006: Optimal compression and transcriptional control Marianne Bauer, William S Bialek The information bottleneck procedure provides a selective compression, using only a limited number of bits to describe the state of a system while preserving as much of the relevant information as possible. We are interested in using these ideas to describe enhancers, the regulatory elements that control transcription in more complex organisms, identifying the state of the enhancer with a compressed description of the concentrations of transcription factors (TF) to which it is responding. In general the bottleneck problem has to be solved numerically, but here we search for limits in which analytic progress is possible. If conditional distributions are Gaussian and effective noise levels are small, we derive a scaling relation onto which all optimal bottleneck curves collapse. We verify this scaling relation on the example of gene expression of the gap genes in early fruit fly development, whose noise levels lie slightly above the low-noise limit. Mapping the compressed variables of the bottleneck to enhancer states, we explore the connections between noise levels and the cooperativity of responses, and the changing structure of the problem as we include more TF inputs to a single enhancer. |
Tuesday, March 7, 2023 4:36PM - 4:48PM |
K01.00007: The Information Bottleneck and the double pendulum: Using machine learning to study how chaos destroys information Kieran A Murphy, Dani S Bassett A hallmark of chaotic dynamics is the loss of information with time. Although information loss is often expressed through a connection to Lyapunov exponents---valid in the limit of high information about the system state---this picture misses the rich spectrum of information decay across different levels of granularity. Here we show how machine learning presents new opportunities for the study of information loss in chaotic dynamics, with a double pendulum serving as a model system. We use the Information Bottleneck as a training objective for a neural network to extract information from the state of the system that is optimally predictive of the future state after a prescribed time horizon. We then decompose the optimally predictive information by distributing a bottleneck to each state variable, recovering the relative importance of the variables in determining future evolution. The optimal information corresponds to a measurement process that clusters system states by fate, and states are measured with varying precision based on their different rates of information loss. The framework we develop is broadly applicable to chaotic systems and pragmatic to apply, leveraging data and machine learning to monitor the limits of predictability and map out the destruction of information. |
Tuesday, March 7, 2023 4:48PM - 5:00PM |
K01.00008: What does it mean to invert an Exact Renormalization Group Flow? Marc Klinger Building on the view of the Exact Renormalization Group (ERG) as an instantiation of Optimal Transport described by a functional convection-diffusion equation, we provide a new, fully information theoretic perspective for understanding ERG through the intermediary of Bayesian Statistical Inference. The connection is facilitated by the Dynamical Bayesian Inference scheme, which encodes Bayesian inference in the form of a one parameter family of probability distributions solving an integro-differential equation derived from Bayes' law. In this note, we demonstrate how the Dynamical Bayesian Inference equation is, itself, equivalent to a continuity equation which we dub Bayesian Diffusion. Identifying the features that define Bayesian Diffusion and mapping them onto the features that define ERG, we obtain a dictionary outlining how ERG can be understood in terms of a statistical inference paradigm run in reverse with an effective coarse-graining imparted by the loss of data. This suggests the compelling interpretation that the inversion of ERG is a form of statistical inference in which the aforementioned lost data is reincorporated into the model. This correspondence matches closely with the Error Correcting Picture of Renormalization, a fact one might have anticipated through the relationship between the Petz Map and the classical Bayesian posterior distribution. |
Tuesday, March 7, 2023 5:00PM - 5:12PM |
K01.00009: An Equivalence between the Exact Renormalization Group and Continuous Entanglement Renormalization Samuel Goldman As a generalization of Wilsonian renormalization, the Exact Renormalization Group (ERG) describes the flow of effective models as a function of scale, the central object being the field theory partition function and its invariance under an RG flow. Entanglement renormalization, on the other hand, is described by an isometric or unitary tensor network which achieves entangled state preparation of lattice or continuum theories through an iterated process of local entangling and dilation quantum channels. Although renormalization at the level of states and partition functions are similar, a precise connection between the two formalisms has not been established. Here we attempt to fill the gap and show that for free field theories one may develop a "Hilbert space picture" of the ERG, giving a direct equivalence of entanglement renormalization and a subclass of ERG flows. This observation opens the door to understanding ERG flows in terms of quantum channels on the Hilbert space constructed from the path integral, and allows one to leverage tools from quantum information theory in understanding renormalization beyond the context of tensor network constructions. |
Tuesday, March 7, 2023 5:12PM - 5:24PM |
K01.00010: Information driven renormalisation group on irregular lattices Doruk Efe Gokmen, Maciej Koch-Janusz, Zohar Ringel, Sounak Biswas, Felix Flicker Traditionally, real-space renormalisation group (RSRG) is performed by applying a single, often heuristic, coarse-graining transformation to blocks of identical nature across the system. In the absence of such translation invariance, the coarse-graining transformations should be adapted to different regions. This inherent difficulty has made a model-independent paradigm for RSRG in inhomogeneous systems elusive. In this work we address this gap by extending the recent information-theoretical formulation of RSRG to arbitrary irregular lattices. To showcase our method, we tackle the problem of dimer coverings on quasiperiodic Ammann-Beenker tilings. The coarse-graining rules, computed numerically using real-space mutual information neural estimation, (1) vary depending on the location of the block and (2) map the degrees of freedom into emergent "super-dimers" obeying an effective exclusion constraint at the larger scale. This result indicates an intriguing discrete scale-invariance and proximity of the original model to an RG fixed point. |
Tuesday, March 7, 2023 5:24PM - 5:36PM |
K01.00011: Constrained Models on Quasicrystals Felix Flicker Some of the most important phenomena in condensed matter physics, such as fractionalisation and topological order, arise when strong correlations emerge from local constraints. Examples include dimer models (tiling a chess board with dominoes), emergent magnetic monopoles in the spin ice materials, and resonating valence bond solids. We outline results for a range of constrained models in a new setting: aperiodic long-range ordered Ammann-Beenker tilings (AB), which have the symmetries of certain known quasicrystals. Treating the vertices and edges of AB as those of bipartite graphs, we (i) prove the existence of Hamiltonian cycles (visiting each vertex precisely once), an NP-complete problem in general graphs, and thereby construct polynomial-time solutions in AB to a range of NP-complete problems with applications in adsorption, catalysis, scanning tunneling microscopy, and elsewhere; (ii) demonstrate the existence of fully-packed loops on Ammann-Beenker tilings, and study their statistics numerically, giving a generalisation of ice-type models to aperiodic settings; (iii) apply MPS-DMRG to study the quantum dimer model; we are able to apply this technique in 2D owing to structures emerging naturally from the dimer constraint. |
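A brute-force feel for dimer-covering problems (on a small periodic grid rather than an Ammann-Beenker tiling, and nowhere near the MPS-DMRG machinery of the talk) comes from counting perfect matchings directly:

```python
def perfect_matchings(vertices, edges):
    """Count dimer coverings (perfect matchings) of a small graph by
    depth-first search over its edge list."""
    def count(covered):
        if len(covered) == len(vertices):
            return 1
        # pick the smallest uncovered vertex and try every dimer on it
        v = min(set(vertices) - covered)
        total = 0
        for a, b in edges:
            if v in (a, b) and a not in covered and b not in covered:
                total += count(covered | {a, b})
        return total
    return count(frozenset())

# 2x3 grid graph; dimer coverings of a 2xN strip follow the Fibonacci numbers
verts = [(r, c) for r in range(2) for c in range(3)]
edges = [(u, v) for u in verts for v in verts
         if u < v and abs(u[0] - v[0]) + abs(u[1] - v[1]) == 1]
print(perfect_matchings(verts, edges))  # 3 coverings for the 2x3 grid
```

Exhaustive search scales exponentially, which is why aperiodic tilings call for the emergent-constraint and tensor-network approaches described above.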
Tuesday, March 7, 2023 5:36PM - 5:48PM |
K01.00012: Phase transition in the computational complexity of the shortest common superstring and genome assembly David Yllanes, Victor Martin-Mayor, Luis Antonio Fernandez Genome assembly, the process of reconstructing a long genetic sequence by aligning and merging short fragments, or reads, is known to be NP-hard, either as a version of the shortest common superstring problem or in a Hamiltonian-cycle formulation. That is, the computing time is believed to grow exponentially with the problem size in the worst case. Despite this fact, high-throughput technologies and modern algorithms currently allow bioinformaticians to produce and assemble datasets of billions of reads. Using methods from statistical mechanics, we address this conundrum by demonstrating the existence of a phase transition in the computational complexity of the problem and showing that practical instances always fall in the ‘easy’ phase (solvable by polynomial-time algorithms). |
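The superstring formulation can be sketched with the textbook greedy heuristic (a well-known approximation, not the authors' statistical-mechanics analysis; the toy reads below are made up):

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_scs(reads):
    """Greedy merge heuristic for the shortest common superstring:
    repeatedly merge the pair of reads with maximum overlap."""
    # drop reads fully contained in another read
    reads = [r for i, r in enumerate(reads)
             if not any(r in s for j, s in enumerate(reads) if i != j)]
    while len(reads) > 1:
        k, i, j = max(((overlap(a, b), i, j)
                       for i, a in enumerate(reads)
                       for j, b in enumerate(reads) if i != j),
                      key=lambda t: t[0])
        merged = reads[i] + reads[j][k:]
        reads = [r for idx, r in enumerate(reads) if idx not in (i, j)] + [merged]
    return reads[0]

# toy "reads" sampled from the sequence ATGGCGTGCA
print(greedy_scs(["ATGGC", "GGCGT", "CGTGC", "GTGCA"]))
```

On this instance the greedy merges recover the original 10-letter sequence; the point of the abstract is to explain why such polynomial-time behavior is typical for realistic read ensembles despite worst-case NP-hardness.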
Tuesday, March 7, 2023 5:48PM - 6:00PM |
K01.00013: First passage time and information of a one-dimensional Brownian particle with stochastic resetting to random positions Javier Q Toledo-Marin, Denis Boyer We explore the effects of stochastic resetting to random positions of a Brownian particle on first passage times and Shannon's entropy. We examine the different entropy regimes, namely, the externally-driven, the zero-entropy and the Maxwell demon regimes. We show that the mean first passage time (MFPT) minimum can be found in any of these regimes. We provide a novel analytical method to compute the MFPT, the mean first passage number of resets (MFPNR) and mean first passage entropy (MFPE) in the case where the Brownian particle resets to random positions sampled from a set of distributions known a priori. We show the interplay between the reset position distribution's second moment and the reset rate, and the effect it has on the MFPT and MFPE. We further propose a mechanism whereby the entropy per reset can be either in the Maxwell demon or the externally driven regime, yet the overall mean first passage entropy corresponds to the zero-entropy regime. Additionally, we find an overlap between the dynamic phase space and the entropy phase space. We use this method in a generalized version of the Evans-Majumdar model by assuming the reset position is random and sampled from a Gaussian distribution. We then consider the toggling reset whereby the Brownian particle resets to a random position sampled from a distribution dependent on the reset parity. All our results are compared to and in agreement with numerical simulations. |
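A quick numerical companion to this setup (a naive Euler-Maruyama simulation with Gaussian reset positions, in the spirit of the generalized Evans-Majumdar model; all parameter values are illustrative, and this is not the authors' analytical method):

```python
import random

def mfpt_resetting(x0=1.0, r=1.0, sigma_reset=0.5, D=0.5, dt=1e-3,
                   n_traj=500, t_max=50.0, seed=7):
    """Monte Carlo mean first passage time to the origin for a 1D Brownian
    particle that resets at rate r to a Gaussian position centered at x0."""
    rng = random.Random(seed)
    step = (2.0 * D * dt) ** 0.5
    total, absorbed = 0.0, 0
    for _ in range(n_traj):
        x, t = x0, 0.0
        while t < t_max:
            if rng.random() < r * dt:            # Poissonian reset event
                x = rng.gauss(x0, sigma_reset)   # reset position is random
            x += rng.gauss(0.0, step)            # free diffusion step
            t += dt
            if x <= 0.0:                         # absorbing target at x = 0
                total += t
                absorbed += 1
                break
    return total / absorbed

print(mfpt_resetting())  # finite MFPT: resetting regularizes the search
```

Without resetting the MFPT of free diffusion diverges, so any finite estimate here already illustrates the qualitative effect; varying `sigma_reset` probes the second-moment dependence the abstract describes.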
© 2024 American Physical Society