Bulletin of the American Physical Society
2024 APS March Meeting
Monday–Friday, March 4–8, 2024; Minneapolis & Virtual
Session N28: Information Theory and Physics (Focus Session)
Sponsoring Units: GSNP DSOFT DBIO | Chair: Kieran Murphy, University of Pennsylvania | Room: 101I |
Wednesday, March 6, 2024 11:30AM - 12:06PM |
N28.00001: Connecting relevant information to coarse-graining in biological systems Invited Speaker: Stephanie E Palmer Biological systems must selectively encode partial information about the environment, as dictated by the capacity constraints at work in all living organisms. For example, we cannot see every feature of the light field that reaches our eyes; temporal resolution is limited by transmission noise and delays, and spatial resolution is limited by the finite number of photoreceptors and output cells in the retina. Classical efficient coding theory describes how sensory systems can maximize information transmission given such capacity constraints, but it treats all input features equally. Not all inputs are, however, of equal value to the organism. Our work quantifies whether and how the brain selectively encodes stimulus features, specifically predictive features, that are most useful for fast and effective movements. We have shown that efficient predictive computation starts at the earliest stages of the visual system, in the retina. We borrow techniques from statistical physics and information theory to assess how we get terrific, predictive vision from these imperfect (lagged and noisy) component parts. In broader terms, we aim to build a more complete theory of efficient encoding in the brain, and along the way have found some intriguing connections between formal notions of coarse graining in biology and physics. |
Wednesday, March 6, 2024 12:06PM - 12:18PM |
N28.00002: Trading information among multiple variables Marianne Bauer, William S Bialek Information relevant to cellular function is represented in the concentrations of signaling molecules, including the transcription factors that control gene expression. The amount of relevant information that can be extracted is limited by the precision or information capacity of the mechanisms that respond to changing concentrations. We have explored this tradeoff in the context of early events in the fruit fly embryo [1] and shown that there are universal features to this tradeoff in limits that are relevant to the embryo [2]. Those results showed that the capacity needed to extract the observed level of positional information in the embryo exceeds what is plausible for a single “readout mechanism” or enhancer element. Here we consider more explicitly how information can be extracted by parallel readout mechanisms, each with limited capacity. This analysis could lead to a theory for the functional behavior of multiple enhancers in the control of gene expression. Initial results point to the importance of non-monotonicity and combinatorial responses. |
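A concrete, minimal illustration of the capacity constraint at play (an editorial sketch, not the authors' calculation): the Blahut-Arimoto algorithm computes the information capacity of a simple discrete readout channel. The function name `channel_capacity` and all parameters are invented for the example, and the transition matrix is assumed strictly positive.

```python
import numpy as np

def channel_capacity(W, n_iter=500):
    """Blahut-Arimoto estimate of the capacity (in nats) of a discrete
    memoryless channel with strictly positive W[x, y] = p(y | x)."""
    p = np.full(W.shape[0], 1.0 / W.shape[0])   # start from a uniform input
    for _ in range(n_iter):
        q = p @ W                                # induced output distribution
        d = np.exp((W * np.log(W / q)).sum(axis=1))  # exp of D(W_x || q)
        p = p * d                                # reweight inputs toward
        p /= p.sum()                             # informative symbols
    q = p @ W
    # mutual information I(X; Y) at the optimized input distribution
    return float((p[:, None] * W * np.log(W / q)).sum())

# A noisy binary readout that flips its input 10% of the time:
# capacity is ln 2 - H(0.1) ≈ 0.368 nats, well below the noiseless 0.693.
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
```

For a symmetric channel like this one the uniform input is already optimal, so the iteration converges immediately; for asymmetric readouts the reweighting step matters.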
Wednesday, March 6, 2024 12:18PM - 12:30PM |
N28.00003: Coarse-graining retinal responses to reveal predictive information Adam G Kline, Aleksandra M Walczak, Thierry Mora, Maciej Koch-Janusz, Stephanie E Palmer The vertebrate retina performs prediction on incoming visual signals, which can compensate for lags in neural processing [1]. This computation is collective, meaning it relies upon interactions between many neurons. However, it is not well understood how correlations between neurons enable prediction in large subpopulations (greater than ten) or when the visual stimulus is complex. In this work, we address these challenges together by searching for maximally-predictive collective variables in large subsets of 93 salamander retinal ganglion cells under stimulation with natural movies. To find these collective variables, we apply a tractable, approximate implementation of the information bottleneck method to neural data [2], and infer a lower-dimensional representation that is maximally informative about the future neural activity. We observe scaling relationships between this mutual information estimate, neural subset size, and information decay timescale. Further, we examine the structure of collective modes learned by this method and compare them to those obtained by other forms of coarse-graining. |
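For readers unfamiliar with the information bottleneck, the basic self-consistent iteration can be sketched in the discrete case (an illustrative sketch, not the tractable implementation of [2]; `discrete_ib`, the joint distribution `p_xy`, and all parameters are invented for the example):

```python
import numpy as np

def discrete_ib(p_xy, n_t, beta, n_iter=300, seed=0):
    """Minimal discrete information bottleneck: compress X into T while
    preserving information about Y, trading off via beta."""
    rng = np.random.default_rng(seed)
    nx, ny = p_xy.shape
    p_x = p_xy.sum(axis=1)
    p_y_x = p_xy / p_x[:, None]                   # p(y | x)
    q = rng.random((nx, n_t))
    q /= q.sum(axis=1, keepdims=True)             # random soft q(t | x)
    for _ in range(n_iter):
        q_t = p_x @ q                             # q(t)
        q_y_t = (q * p_x[:, None]).T @ p_y_x / q_t[:, None]   # q(y | t)
        # D_KL( p(y|x) || q(y|t) ) for every pair (x, t)
        ratio = np.where(p_y_x[:, None, :] > 0,
                         np.log(p_y_x[:, None, :] / q_y_t[None, :, :]), 0.0)
        kl = np.einsum('xy,xty->xt', p_y_x, ratio)
        q = q_t[None, :] * np.exp(-beta * kl)     # self-consistent update
        q /= q.sum(axis=1, keepdims=True)
    return q

# Four "neurons" x whose predictive content splits into two groups:
# x in {0, 1} favors y = 0, x in {2, 3} favors y = 1.
p_xy = 0.25 * np.array([[0.9, 0.1],
                        [0.9, 0.1],
                        [0.1, 0.9],
                        [0.1, 0.9]])
```

At a sufficiently large beta the compressed variable T should cluster inputs by their predictive distributions, grouping {0, 1} against {2, 3}.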
Wednesday, March 6, 2024 12:30PM - 12:42PM |
N28.00004: Quantifying information flow in cells Mirna Elizabeta Kramar, Lauritz Hahn, Aleksandra M Walczak, Thierry Mora, Mathieu Coppey Cells guide their decisions relying on the information that streams through the signalling pathways into the cellular interior. Signalling pathways are interconnected cascades of biochemical reactions which form complex networks. Given the variety of cellular behaviours and responses to external stimuli, cells are expected to distinguish between a multitude of signals. However, existing quantifications of cellular information flow have reported low sensitivity for graded signals. Here, we study information flow in the MAPK pathway, one of the key signalling pathways in eukaryotes. Combining optogenetic experiments and data analysis based on information theory, we quantify the input-output relationships and elucidate the role of intracellular and extracellular noise, stochastic activations of the pathway, and the temporal aspect of information processing in the cell. We show that the signalling pathway has a higher information capacity than previously reported, and bring to light the spatio-temporal aspects of information flow in a cellular population. |
Wednesday, March 6, 2024 12:42PM - 12:54PM |
N28.00005: ABSTRACT WITHDRAWN
Wednesday, March 6, 2024 12:54PM - 1:06PM |
N28.00006: Coordination Between Sub-populations of Interneurons in the Spinal Cord Revealed by Information Theory Weiheng Qin, Candida Tufo, Ying Zhang, Francisco Alvarez, Martyn Goulding, Eiman Azim, Graziana Gatto, Tatyana O Sharpee
Wednesday, March 6, 2024 1:06PM - 1:18PM |
N28.00007: Estimating Mutual Information with the Deep Variational Symmetric Information Bottleneck Eslam Abdelaleem, K. Michael Martini, Ilya M Nemenman Mutual Information (MI) captures nonlinear statistical relations between two variables. MI has proved useful in the analysis of complex systems and in methods involving clustering, feature selection, and dimensionality reduction, among others. Estimating MI between high-dimensional variables is challenging, often requiring impractically large sample sizes, which limits wider adoption of these methods. One approach to resolving the sampling issue is to reduce the dimensionality of the variables. However, such a reduction can destroy correlations between the variables. We resolve this problem using the Deep Variational Symmetric Information Bottleneck (DVSIB), which simultaneously compresses the variables X and Y into two corresponding lower-dimensional latent variables ZX and ZY, while maximizing the information between the latent variables. The information between ZX and ZY produced by DVSIB can be used as a proxy for the information between X and Y. We demonstrate the effectiveness of this method by assessing its performance on synthetic and real datasets, showcasing its robustness and accuracy. We show that our method can estimate mutual information between two high-dimensional variables in many cases where standard estimators fail. |
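A small illustration of why naive MI estimation is sample-hungry (a textbook plug-in estimator, not DVSIB; `gaussian_mi`, `binned_mi`, and the sample sizes are invented for the example). Even in one dimension the histogram estimator needs tens of thousands of samples to approach the exact value; the difficulty the abstract targets grows rapidly with dimension.

```python
import numpy as np

def gaussian_mi(rho):
    """Exact MI (in nats) of a bivariate Gaussian with correlation rho."""
    return -0.5 * np.log(1.0 - rho ** 2)

def binned_mi(x, y, bins=16):
    """Plug-in MI estimate from a 2D histogram (biased for small samples)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(0)
rho, n = 0.8, 50_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
# binned_mi(x, y) approaches gaussian_mi(0.8) = -0.5*ln(0.36) ≈ 0.511 nats
# only as n grows; at small n the estimate is badly biased.
```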
Wednesday, March 6, 2024 1:18PM - 1:30PM |
N28.00008: The information bottleneck learns spectral properties of dynamical systems Matthew Schmitt, Maciej Koch-Janusz, Michel Fruchart, Daniel Seara, Vincenzo Vitelli A common task across the physical sciences is model reduction: given a high-dimensional and complex description of a full system, how does one reduce it to a small number of important collective variables? Here we investigate model reduction for dynamical systems using the information bottleneck framework. We show that the optimal compression of a system's state is achieved by encoding spectral properties of its transfer operator. After demonstrating this in analytically tractable examples, we show that our findings also hold in variational compression schemes applied to experimental fluids data. These results shed light on the latent variables in certain neural network architectures and show the practical utility of information-based loss functions. |
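The claim that slow spectral modes of the transfer operator are the right collective variables can be seen in a toy metastable Markov chain (an editorial sketch; the matrix and labels are invented for the example):

```python
import numpy as np

# Four microstates forming two metastable pairs {0, 1} and {2, 3}, coupled
# only by rare hops of probability eps. This transition matrix is symmetric,
# so its spectrum is real: {1, 1 - 2*eps, 0, -2*eps}.
eps = 0.01
T = np.array([
    [0.5 - eps, 0.5,       eps,       0.0],
    [0.5,       0.5 - eps, 0.0,       eps],
    [eps,       0.0,       0.5 - eps, 0.5],
    [0.0,       eps,       0.5,       0.5 - eps],
])
vals, vecs = np.linalg.eigh(T)
slow = vecs[:, np.argsort(vals)[-2]]   # eigenvector of the slow mode (0.98)
# `slow` takes one sign on {0, 1} and the opposite sign on {2, 3}: the
# second eigenfunction of the transfer operator is exactly the metastable-set
# label, i.e., the natural one-bit reduced description of the dynamics.
```

The spectral gap between the slow eigenvalue (1 - 2*eps) and the fast ones is what makes this one-dimensional compression nearly lossless at long times.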
Wednesday, March 6, 2024 1:30PM - 1:42PM |
N28.00009: Obtaining machine learned surrogate underwater acoustics models to enable model reduction Jay C Spendlove, Mark K Transtrum, Tracianne B Neilsen Of great interest in underwater acoustics is the inverse problem of inferring environmental seafloor parameters from acoustic data in a shallow ocean environment. One of the primary challenges with this "geoacoustic inversion" is model selection. Complex geoacoustic models risk overfitting and may include parameters that are unidentifiable, i.e., not constrained by the acoustic data. Information geometry provides tools for parameter identifiability analysis, including computational differential geometry tools such as the Manifold Boundary Approximation Method (MBAM) for algorithmically finding unidentifiable model parameters to remove. These tools require accurate derivatives of model predictions with respect to model parameters. Automatic differentiation (AD) enables rapid evaluation of derivatives but faces implementation challenges with many underwater sound propagation models, such as models implemented in "legacy code" or with non-differentiable points in function space. We propose methods for obtaining machine learned (ML) surrogates of the model to which AD can be easily applied. Using a computational underwater acoustics model, surrogate and original model predictions will be compared. Additionally, the use of MBAM for model reduction will be demonstrated to enable next steps for geoacoustic inversion. |
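As a point of comparison for the derivative-accuracy issue (not the authors' method), complex-step differentiation gives near machine-precision derivatives of a smooth scalar model, while forward finite differences suffer truncation and cancellation error; the toy `model` below stands in for a smooth acoustic prediction and is invented for the example.

```python
import numpy as np

def model(theta):
    # toy stand-in for a smooth model prediction as a function of one parameter
    return np.exp(-theta) * np.sin(3.0 * theta)

theta0 = 0.7
# analytic derivative: d/dθ [e^{-θ} sin 3θ] = e^{-θ}(3 cos 3θ - sin 3θ)
exact = np.exp(-theta0) * (3.0 * np.cos(3.0 * theta0) - np.sin(3.0 * theta0))

# complex step: Im[f(θ + ih)]/h has no subtractive cancellation, so h can be
# taken tiny and the result is accurate to machine precision
h = 1e-20
complex_step = model(theta0 + 1j * h).imag / h

# forward finite difference: limited to ~1e-8 accuracy by the usual tradeoff
# between truncation error and floating-point cancellation
fd_h = 1e-8
finite_diff = (model(theta0 + fd_h) - model(theta0)) / fd_h
```

Complex-step differentiation only needs the model code to accept complex inputs, which is one reason legacy codes that hard-code real arithmetic are awkward to differentiate.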
Wednesday, March 6, 2024 1:42PM - 1:54PM |
N28.00010: Extending the Manifold Boundary Approximation Method to reduce large-scale, multi-parameter models Mark K Transtrum The Manifold Boundary Approximation Method (MBAM) is a model reduction technique based on information geometry and sloppy model analysis. This approach interprets a multi-parameter model as a manifold with parameters as coordinates. The Fisher Information Matrix provides a natural metric on this manifold, so that distance corresponds to statistical distinguishability from data. Multi-parameter models often exhibit a systematic compression of the parameter space in the information metric, so the model manifold is very narrow in most directions, a phenomenon known as sloppiness. We empirically observe that the boundaries of these manifolds correspond to physically interpretable, reduced-order models. MBAM identifies reduced-order models by following geodesics that connect a complicated model to a simpler one on the boundary. This approach is computationally and manually intensive, limiting it to moderately sized models with a few dozen parameters. I present a computationally efficient generalization of MBAM that is applicable to models with many more parameters. After reparameterizing, I recast the model reduction problem as a sequence of convex optimizations that can be solved efficiently for high-dimensional parameter spaces. I demonstrate the method on models from physics, biology, and power systems. |
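Sloppiness, the Fisher-information eigenvalue hierarchy that motivates MBAM, is easy to exhibit in a toy two-exponential model (an editorial sketch with invented parameters, assuming Gaussian observation noise so the FIM reduces to J^T J):

```python
import numpy as np

# Toy model y(t; a, b) = exp(-a t) + exp(-b t) with nearly degenerate decay
# rates. The two parameter directions are almost indistinguishable from data,
# so the Fisher information has one large and one tiny eigenvalue: the model
# manifold is long in one direction and extremely thin in the other.
t = np.linspace(0.1, 5.0, 50)
a, b = 1.0, 1.2

# Jacobian of predictions w.r.t. parameters: dy/da and dy/db
J = np.column_stack([-t * np.exp(-a * t),
                     -t * np.exp(-b * t)])
fim = J.T @ J                       # FIM for unit Gaussian noise
eigs = np.sort(np.linalg.eigvalsh(fim))[::-1]
ratio = eigs[0] / eigs[1]           # orders of magnitude for sloppy models
```

Along the small eigenvector (roughly a - b) the fit barely changes, which is exactly the kind of direction MBAM removes by pushing to the manifold boundary (here, the single-exponential limit a = b).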
Wednesday, March 6, 2024 1:54PM - 2:06PM |
N28.00011: Information Theoretic Characterization of Critical Phenomena in the Ising Model and Flocking Models Sean M Kelty, Gourab Ghoshal, Damian R Sowinski The Ising model, with its known analytical solutions in one and two dimensions, has served as a prototypical case study for analyzing critical phenomena and phase transitions. Information theory has been foundational in analyzing the regularity and patterns contained within random processes, exposing the informational architecture instantiated in the underlying physical system. In this work we characterize aspects of the informational architecture of the Ising model on the square lattice using multiple measures, including configurational entropy, configurational complexity, and transfer entropy. Their extremal behavior is compared and contrasted with the well-understood critical behavior of the Ising model as well as of Vicsek's flocking model. These results strengthen the foundation for understanding how criticality affects information storage and processing in systems that may not have a clear-cut order parameter. |
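Transfer entropy, one of the measures named above, has a simple plug-in estimator for binary time series (an illustrative sketch, not the authors' analysis; the coupled series below are invented for the example):

```python
import numpy as np

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE_{X -> Y} (in nats) for binary series with
    history length 1, i.e. the conditional MI I(Y_{t+1}; X_t | Y_t)."""
    triples = np.stack([y[1:], y[:-1], x[:-1]], axis=1)
    p = np.zeros((2, 2, 2))
    for a, b, c in triples:           # empirical p(y_{t+1}, y_t, x_t)
        p[a, b, c] += 1
    p /= p.sum()
    p_ab = p.sum(axis=2)              # p(y_{t+1}, y_t)
    p_bc = p.sum(axis=0)              # p(y_t, x_t)
    p_b = p.sum(axis=(0, 2))          # p(y_t)
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                if p[a, b, c] > 0:
                    te += p[a, b, c] * np.log(
                        p[a, b, c] * p_b[b] / (p_ab[a, b] * p_bc[b, c]))
    return te

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 10_000)        # an i.i.d. binary "driver"
y = np.roll(x, 1)                     # y copies x with a one-step delay
y[0] = 0
# TE_{X -> Y} is close to ln 2 (y is fully determined by x one step earlier),
# while TE_{Y -> X} is near zero: the measure is directional.
```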
Wednesday, March 6, 2024 2:06PM - 2:18PM |
N28.00012: Random-Energy Secret Sharing via Extreme Synergy Vudtiwat Ngampruetikorn, David J Schwab The random-energy model (REM), a solvable spin-glass model, has impacted an incredibly diverse set of problems, from protein folding to combinatorial optimization to many-body localization. Here, we explore a new connection to secret sharing. We formulate a secret-sharing scheme, based on the REM, and analyze its information-theoretic properties. Our analyses reveal that the correlations between subsystems of the REM are highly synergistic and form the basis for secure secret-sharing schemes. We derive the ranges of temperatures and secret lengths over which the REM satisfies the requirement of secure secret sharing. We show further that a special point in the phase diagram exists at which the REM-based scheme is optimal in its information encoding. Our analytical results for the thermodynamic limit are in good qualitative agreement with numerical simulations of finite systems, for which the strict security requirement is replaced by a tradeoff between secrecy and recoverability. Our work offers a further example of information theory as a unifying concept, connecting problems in statistical physics to those in computation. |
Wednesday, March 6, 2024 2:18PM - 2:30PM |
N28.00013: A Fourier Tour of Protein Function Prediction Amirali Aghazadeh Predicting the biological functions of proteins from their amino acid sequences is one of the long-standing challenges in biology. A comprehensive solution has remained elusive due to the vastness of the combinatorial space of sequences and our limited ability to probe that space experimentally. In this talk, we view protein function prediction from a signal recovery and information theory perspective through the lens of the Fourier transform, known for sequence functions as the Walsh-Hadamard (WH) transform. We discuss how the WH transform allows us to view a protein function as a multilinear polynomial and in terms of high-order sparse nonlinear interactions. We demonstrate that an intuitive divide-and-conquer strategy can find the polynomial using a number of samples and a runtime that grow only linearly with the length of the protein sequence. Next, we discuss how we can leverage natural assumptions about the polynomial, such as sparsity, to develop efficient protein function prediction algorithms rooted in coding theory. |
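The Walsh-Hadamard picture of a sequence function can be sketched in a few lines (an editorial example, not the speaker's algorithm; the 3-site toy function is invented): the fast WH transform of a function on binary sequences reveals a sparse spectrum when the function involves only a few low-order interactions.

```python
import numpy as np

def walsh_hadamard(f):
    """Fast WH transform of a function on {0,1}^n given as a length-2^n array.
    Coefficient k is (1/2^n) * sum_x f(x) * (-1)^popcount(k & x)."""
    g = f.astype(float)
    h = 1
    while h < len(g):                 # butterfly over each bit position
        for i in range(0, len(g), 2 * h):
            a = g[i:i + h].copy()
            b = g[i + h:i + 2 * h].copy()
            g[i:i + h] = a + b
            g[i + h:i + 2 * h] = a - b
        h *= 2
    return g / len(g)

# A toy "sequence function" on 3 binary sites: a main effect at site 0 plus a
# pairwise interaction between sites 0 and 1. Its WH spectrum has only two
# nonzero coefficients, at indices 0b100 = 4 and 0b110 = 6.
n = 3
seqs = [((i >> 2) & 1, (i >> 1) & 1, i & 1) for i in range(2 ** n)]
f = np.array([2.0 * (-1) ** s[0] + 0.5 * (-1) ** (s[0] ^ s[1]) for s in seqs])
coeffs = walsh_hadamard(f)
```

This sparsity is what makes recovery with few samples possible in principle: only the nonzero coefficients need to be located and estimated.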
© 2025 American Physical Society. All rights reserved.