Bulletin of the American Physical Society
APS March Meeting 2020
Volume 65, Number 1
Monday–Friday, March 2–6, 2020; Denver, Colorado
Session P22: Inference, Information, and Learning in Biophysics I
Sponsoring Units: DBIO GSNP DCOMP | Chair: Pankaj Mehta, Boston University | Room: 303
Wednesday, March 4, 2020 2:30PM - 3:06PM
P22.00001: What can and can't Machine Learning do for Physics? Invited Speaker: Pankaj Mehta (This speaker was on last year's program for an invited talk, but that talk was in fact given by someone else due to a death in the family, making him eligible again this year.)
Wednesday, March 4, 2020 3:06PM - 3:18PM
P22.00002: Coarse scale representation of spiking neural networks: from dynamics to backpropagation through spikes Angel Yanguas-Gil Leaky integrate and fire neurons have long been used as a model system to understand the dynamics of spiking neural networks, recently becoming the underlying model of neuromorphic chips such as Intel's Loihi. One of the interesting features of this type of model is the presence of an absolute refractory period, which essentially limits the maximum spike rate that the system can attain. This is also the largest time interval that guarantees that at most a single spike is produced per neuron. In this work we have explored the development of coarse scale representations of leaky integrate and fire neurons that operate at this timescale. Our coarse scale approximation is obtained by treating spike arrival times as homogeneously distributed over the time interval, and results in a discrete representation that exhibits equivalent dynamics on randomly connected networks. Moreover, the coarse scale model allows us to implement stochastic gradient descent methods for spiking neurons that take advantage of backpropagation. This provides a useful baseline against which to compare more bio-inspired approaches based on local learning rules, as well as the impact of different codings on the network's ability to learn and generalize.
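As a concrete reminder of the dynamics being coarse-grained, the following is a minimal leaky integrate-and-fire neuron with an absolute refractory period; all parameter values are illustrative choices, not taken from the talk:

```python
# Minimal leaky integrate-and-fire (LIF) neuron with an absolute
# refractory period. Parameters are illustrative, not from the talk.

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_thresh=1.0,
                 v_reset=0.0, t_ref=0.002):
    """Simulate an LIF neuron; returns a list of spike times (seconds)."""
    v, spikes, ref_until = 0.0, [], -1.0
    for step, i_in in enumerate(input_current):
        t = step * dt
        if t < ref_until:            # absolute refractory period: clamp
            v = v_reset
            continue
        v += dt * (-v / tau + i_in)  # leaky integration of the input
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
            ref_until = t + t_ref    # no further spikes until this time
    return spikes

# With strong constant drive, the interspike interval is bounded below by
# the refractory period, so the rate can never exceed 1 / t_ref (500 Hz here).
spikes = simulate_lif([200.0] * 10000)  # 1 s of strong constant input
rate = len(spikes)                      # spikes per second
```

The refractory period `t_ref` is exactly the "largest time interval that guarantees at most a single spike per neuron" referred to in the abstract, which is what makes it a natural coarse-graining timescale.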
Wednesday, March 4, 2020 3:18PM - 3:30PM
P22.00003: Towards a grammar of probabilistic models for large biological networks Philipp Fleig, Ilya M Nemenman Biological interaction networks, such as biological neural networks and amino acid sequences in proteins, are critical to the functioning of any living system. The trend of modern experiments is to record data with a rapidly increasing number of simultaneously measured network variables. Inferring models for such complex data is becoming increasingly difficult, since one is confronted with a combinatorial explosion in the number of possible interactions between variables. Here we present the first steps of an approach to overcome this obstacle. We investigate whether a small set of carefully chosen statistical models suffices to describe the rich phenomenology found in data from biological networks. As candidate models for this grammar we consider low-rank approximation, clustering, sparsity, etc. We discuss the distribution of eigenvalues and pairwise correlations characteristic of each model, working under the assumption that these serve as key indicators for the phenomenology described by a model. We provide examples of modelling data from Ising spin systems and outline a vision for how combinations of models in the grammar cover a large part of the model space occupied by biological networks.
Wednesday, March 4, 2020 3:30PM - 3:42PM
P22.00004: Different noise assumptions yield qualitatively different landscapes and transition paths in gene regulation models John Vastola, William R. Holmes Intrinsic gene expression noise is a major source of phenotypic variability in cancer biology, and noise-induced transitions are thought to contribute to everything from developmental error correction to drug resistance. It is increasingly common to incorporate noise into mathematical models of gene networks, but limited experimental knowledge forces noise to be modeled in a phenomenological/approximate way. Do the different ad-hoc ways noise is included in these models qualitatively affect their predictions? Building on earlier work that analyzed one and two gene toy models, we present results on how noise assumptions affect landscapes and transition paths in models of the epithelial-to-mesenchymal transition (EMT) and early T cell development. We focus on two aspects of modeling noise: its functional form (constant/additive, multiplicative/linearly dependent on concentration, or the 'canonical' Gillespie-like prescription) and its symmetry (whether different genes have the same amount of noise). We find that different assumptions about noise can dramatically impact (i) the relative occupancy of different states, (ii) the stability/existence of intermediate states, and (iii) transition rates and paths.
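The distinction between noise prescriptions can be seen already in one dimension: the same deterministic dynamics paired with additive versus multiplicative (concentration-proportional) noise behave qualitatively differently. The relaxation dynamics, parameters, and Euler-Maruyama scheme below are illustrative toy choices, not the models in the talk:

```python
import math
import random

# Same deterministic dynamics dx/dt = 1 - x (relaxation toward x = 1),
# two noise prescriptions. All parameters are illustrative toy choices.

def trajectory(noise, sigma=0.8, dt=1e-3, n=200_000, seed=0):
    """Euler-Maruyama integration; noise='additive' uses a constant
    amplitude, 'multiplicative' scales the amplitude linearly with x."""
    rng = random.Random(seed)
    x, xs = 1.0, []
    for _ in range(n):
        amp = sigma if noise == "additive" else sigma * x
        x += (1.0 - x) * dt + amp * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

add = trajectory("additive")
mult = trajectory("multiplicative")
# Additive noise lets a "concentration" wander below zero; multiplicative
# noise (amplitude ~ x) keeps it positive -- already a qualitative
# difference in where the stationary distribution puts its weight.
```

In higher-dimensional gene network models the analogous differences show up, as the abstract describes, in state occupancies, intermediate states, and transition paths.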
Wednesday, March 4, 2020 3:42PM - 3:54PM
P22.00005: Stochastic Modelling of Dynein Motors on a One-Dimensional Lattice: Dynamics and Stationary State Riya Nandi, Priyanka The motion of molecular motors inside cells constitutes an exciting and biologically motivated non-equilibrium physics problem. Experimental studies in recent years have shown that dynein motors can move with variable step sizes along the microtubule, depending on the load and ATP concentration. Inspired by the dynamics of dyneins, we have developed a model of an exclusion process on a one-dimensional lattice, where the motors can move in the forward direction by up to four steps, depending on the load attached to them. We study the dynamics of the mean-square displacement, the stationary-state current, and the gap distribution for both open and periodic boundary conditions. In the transient regime, the fluctuation grows as t log t for periodic boundary conditions and is ballistic for open boundary conditions. The gap distribution between the motors in the stationary state for periodic boundary conditions shows discrete peaks of exponentially decaying amplitude at gap sizes that are multiples of four. We have also verified these results using a mean-field analysis.
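The core ingredients — a periodic 1D lattice, exclusion, and forward hops of up to four sites — can be sketched as below. The uniform choice of attempted step size stands in for the load dependence in the actual model, and the update scheme is an illustrative assumption:

```python
import random

# Toy exclusion process on a periodic 1D lattice: each motor attempts a
# forward hop of up to max_step = 4 sites, truncated so it never lands on
# or passes another motor. The uniform step-size choice is a stand-in for
# the load-dependent stepping of the talk's model.

def sweep(positions, L, max_step=4, rng=random):
    """One random-sequential update sweep over all motors."""
    occupied = set(positions)
    order = list(range(len(positions)))
    rng.shuffle(order)
    for i in order:
        x = positions[i]
        jump = rng.randint(1, max_step)     # attempted step size
        moved = x
        for d in range(1, jump + 1):        # truncate at first occupied site
            target = (x + d) % L
            if target in occupied:
                break
            moved = target
        if moved != x:
            occupied.discard(x)
            occupied.add(moved)
            positions[i] = moved
    return positions

rng = random.Random(0)
L, N = 100, 25
positions = sorted(rng.sample(range(L), N))
for _ in range(200):
    sweep(positions, L, max_step=4, rng=rng)
```

From a long run of such a simulation one can accumulate the gap histogram between consecutive motors, which in the talk's model shows exponentially decaying peaks at gaps that are multiples of four.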
Wednesday, March 4, 2020 3:54PM - 4:06PM
P22.00006: Limits to biochemical signalling in a changing environment as an inference problem Thierry Mora, Ilya M Nemenman Cells must sense concentrations of external ligands as well as internal signalling molecules in order to adapt to environmental changes and execute developmental programs. Berg and Purcell calculated an upper bound on the accuracy of concentration sensing by physical objects, due to the particle nature of molecules and finite sensor size. However, that bound assumed that the concentration to be sensed was constant. In realistic situations, concentrations may vary quickly over orders of magnitude. Here, we calculate a new bound for sensing a changing concentration by mapping the problem onto a field theory through Bayesian inference, which we solve using a Gaussian approximation. We find that the inverse square root dependence of the error on concentration, ligand diffusivity, sensor size and time in the classical Berg and Purcell bound is replaced by an inverse quartic root. The solution to the inference problem yields dynamical inference equations that can be mimicked by simple downstream biochemical networks, providing a plausible biological implementation of optimal inference.
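For orientation, the Berg-Purcell scaling that the talk modifies can be written schematically as follows; the precise prefactors, and what plays the role of the averaging time for a rapidly varying concentration, are not specified in the abstract:

```latex
% Classical Berg--Purcell bound: relative error in sensing a static
% concentration c with a sensor of size a, ligand diffusivity D, and
% averaging time T (prefactors omitted).
\[
  \left.\frac{\delta c}{c}\right|_{\mathrm{BP}} \sim \frac{1}{\sqrt{D\,a\,c\,T}}
\]
% For a rapidly changing concentration, the abstract's result replaces
% the inverse square root by an inverse quartic (fourth) root of the
% same combination of variables:
\[
  \frac{\delta c}{c} \sim \frac{1}{\left(D\,a\,c\,T\right)^{1/4}}
\]
```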
Wednesday, March 4, 2020 4:06PM - 4:18PM
P22.00007: Human information processing in complex networks Christopher Lynn, Evangelia Papadopoulos, Ari Kahn, Danielle Bassett Humans communicate using systems of interconnected stimuli or concepts - from language and music to literature and science - yet it remains unclear if and how the structure of these networks supports the communication of information. Although information theory provides tools to quantify the information produced by a system, traditional metrics do not account for the inefficient and biased ways that humans process this information. Here we develop an analytical framework to study the information generated by a system as perceived by a human observer. We demonstrate experimentally that this perceived information depends critically on a system's network topology. Applying our framework to several real networks, we find that they communicate a large amount of information (having high entropy) and do so efficiently (maintaining low divergence from human expectations). Moreover, we show that such efficient communication arises in networks that are simultaneously heterogeneous, with high-degree hubs, and clustered, with tightly-connected modules - the two defining features of hierarchical organization. Together, these results suggest that many real networks are constrained by the pressures of information transmission, and that these pressures select for specific structural features.
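The "information produced" side of this framework can be illustrated with the standard entropy rate of a random walk on a network; the perception side (divergence from human expectations) requires the specific observer model from the talk and is omitted here. The graphs and formula below are textbook material, not taken from the talk:

```python
import math

# Entropy rate (bits/step) of a uniform random walk on an undirected
# graph: the walk's stationary distribution is pi_i = k_i / 2E, and each
# step from node i chooses uniformly among its k_i neighbours, so
# H = sum_i (k_i / 2E) * log2(k_i).

def walk_entropy_rate(adj):
    """adj: dict mapping node -> set of neighbours (undirected)."""
    degrees = [len(nbrs) for nbrs in adj.values()]
    two_e = sum(degrees)  # 2E, twice the number of edges
    return sum((k / two_e) * math.log2(k) for k in degrees if k > 0)

# A ring offers two choices per step; the complete graph K5 offers four.
ring = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
k5 = {i: set(range(5)) - {i} for i in range(5)}
h_ring = walk_entropy_rate(ring)  # 1 bit per step
h_k5 = walk_entropy_rate(k5)      # 2 bits per step
```

Denser, more heterogeneous graphs generate more information per step; the talk's contribution is to weigh this entropy against how far the walk statistics diverge from what human observers expect.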
Wednesday, March 4, 2020 4:18PM - 4:30PM
P22.00008: Information tradeoffs in sensing and sampling Caroline Holmes, William S Bialek Organisms sense the world through arrays of receptor cells. In some cases, such as the compound eyes of insects, these arrays are nearly crystalline. In other cases, including the human retina, sampling is much less regular. While ordered sampling typically gathers more information, this comes at the cost of specifying the positions of all the cells. We explore this tradeoff, asking about the maximum entropy of the sampling lattice that is consistent with gathering a certain amount of information from a Gaussian random signal; bits of sensory information are traded against bits of positional information. This problem maps to an equilibrium statistical mechanics problem for the positions of the receptor cells, with interactions that depend on the correlation structure of the input signal. In some limits we find that the information cost of disorder is surprisingly small. In other limits there are transitions where an ordered sampling lattice melts as we change the parameters of the sensory environment.
Wednesday, March 4, 2020 4:30PM - 4:42PM
P22.00009: Optical reservoir computing with tumor spheroids Davide Pierangeli, Valentina Palmieri, Giulia Marcucci, Chiara Moriconi, Giordano Perini, Marco De Spirito, Massimiliano Papi, Claudio Conti Photonics enables the implementation of many modern neural network architectures, such as deep learning and random neural networks. When a multiple scattering medium is adopted to mix optical signals and perform computations, one can optically realize conventional applications of artificial intelligence, such as image recognition or time-series prediction.
Wednesday, March 4, 2020 4:42PM - 4:54PM
P22.00010: Stochastic Force Inference Pierre Ronceray, Anna Frishman Brownian dynamics is ubiquitous in biophysics, from the motion of single molecules and cytoskeletal filaments to effective models of cell and small animal behavior. We propose a principled framework, Stochastic Force Inference, for the inverse problem of Brownian dynamics: reconstructing spatially dependent force and diffusion fields from individual trajectories. It consists of a linear regression of these fields onto a basis of smooth functions. We show that it is data efficient and successfully addresses the many challenges associated with real biological data: localization error, high dimensionality of phase space, out-of-equilibrium currents, multiplicative noise, and complex force fields.
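The regression at the heart of the method can be sketched in one dimension: project the empirical drift dx/dt onto a basis of smooth functions of x. The process (an Ornstein-Uhlenbeck trajectory), the basis {1, x}, and all parameters below are illustrative; the full framework also reconstructs diffusion fields and handles measurement noise and higher dimensions:

```python
import math
import random

rng = random.Random(1)
dt, n = 1e-3, 200_000
k_true, D = 2.0, 0.5                    # true drift f(x) = -k_true * x

# Simulate an overdamped Langevin (Ornstein-Uhlenbeck) trajectory.
x, xs = 0.0, []
for _ in range(n):
    xs.append(x)
    x += -k_true * x * dt + math.sqrt(2 * D * dt) * rng.gauss(0.0, 1.0)

# Least-squares fit of dx/dt onto the basis {1, x}: solve the 2x2
# normal equations by hand.
pts = xs[:-1]
dxdt = [(xs[i + 1] - xs[i]) / dt for i in range(n - 1)]
S00 = float(len(pts))
S01 = sum(pts)
S11 = sum(u * u for u in pts)
T0 = sum(dxdt)
T1 = sum(u * d for u, d in zip(pts, dxdt))
det = S00 * S11 - S01 * S01
c0 = (S11 * T0 - S01 * T1) / det        # constant part of the drift, ~ 0
c1 = (S00 * T1 - S01 * T0) / det        # linear part, ~ -k_true
```

Even though each increment dx/dt is dominated by noise of variance 2D/dt, the regression averages it away and recovers the underlying force field from a single trajectory.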
Wednesday, March 4, 2020 4:54PM - 5:06PM
P22.00011: Predicting the future from the past in visual object motion: optimal representations of mixed stochastic/deterministic trajectories Vedant Sachdeva, Aleksandra Maria Walczak, Thierry Mora, Stephanie Palmer Making predictions about the future state of the external world confers benefits to biological systems that can translate to increased fitness. This requires that sensory systems encode information about the stimulus in a manner suitable for prediction. However, physical constraints, such as finite metabolism and finite computing power, result in additional evolutionary pressures. These favor organisms that compress the representation of the input stimulus statistics along particular readout dimensions, creating an underlying tension between representing the statistics of a stimulus while preserving the information relevant to prediction. Here, we propose that the encoding scheme used by such biological systems can be predicted by the information bottleneck method. Using this technique, we can compute the optimal form of the encoding distribution for a variety of mixed stochastic and deterministic stimuli and demonstrate that these encoding distributions are optimized for prediction tasks at different timescales. We also consider the optimal encoding distribution when the underlying parameters of the stimulus evolve in time.
Wednesday, March 4, 2020 5:06PM - 5:18PM
P22.00012: Quantifying success and failure in simple models of large neural populations Leenoy Meshulam, Jeffrey Gauthier, Carlos Brody, David Tank, William S Bialek In statistical physics we routinely study models for collective behaviors that are simpler than the underlying microscopic mechanisms. In biological systems, one systematic implementation of this idea is the maximum entropy method, where we match some features of the data but otherwise the model has as little structure as possible. To understand whether this approach "works", it would be attractive to have a testing ground where we could see the same model succeed or fail to describe different but related systems. Recent experiments monitor the activity of 1000+ cells in the mouse hippocampus as the animal runs through a virtual environment. The scale of these data allows us to construct models for many different subsets of neurons drawn out of the whole population. We test many predictions of these models, and find that quantitative agreement with experiment is best when the group of cells is spatially contiguous; if we draw the same number of cells at random from large regions, the agreement gets systematically worse. Strikingly, the different predictions fail in an ordered way, so we can rank the different collective behaviors of the network activity by the degree of difficulty in getting them right. This serves to make precise what it means for these models to work.
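The maximum entropy construction referred to here — match the measured means and pairwise correlations, and nothing else — yields a pairwise (Ising) model. A minimal sketch for a small group of cells, using exact enumeration and synthetic moments in place of recorded data (feasible only for small n; the actual work fits far larger populations with more sophisticated methods):

```python
import itertools
import math

# Pairwise maximum-entropy (Ising) fit for n binary units s_i in {-1,+1}:
# gradient ascent on the log-likelihood adjusts fields h_i and couplings
# J_ij until the model P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j)
# reproduces the target means <s_i> and correlations <s_i s_j>.

def maxent_fit(target_m, target_c, n, steps=2000, lr=0.1):
    states = list(itertools.product([-1, 1], repeat=n))
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    h = [0.0] * n
    J = {p: 0.0 for p in pairs}
    for _ in range(steps):
        w = [math.exp(sum(h[i] * s[i] for i in range(n))
                      + sum(J[i, j] * s[i] * s[j] for i, j in pairs))
             for s in states]
        Z = sum(w)
        m = [sum(w[k] * s[i] for k, s in enumerate(states)) / Z
             for i in range(n)]
        c = {(i, j): sum(w[k] * s[i] * s[j] for k, s in enumerate(states)) / Z
             for i, j in pairs}
        for i in range(n):               # moment-matching gradient steps
            h[i] += lr * (target_m[i] - m[i])
        for p in pairs:
            J[p] += lr * (target_c[p] - c[p])
    return h, J, m, c

# Synthetic "measured" moments for 3 cells, standing in for real data.
n = 3
tm = [0.2, -0.1, 0.0]
tc = {(0, 1): 0.3, (0, 2): 0.05, (1, 2): -0.1}
h, J, m, c = maxent_fit(tm, tc, n)
```

Testing such a model means checking its predictions for statistics it was *not* trained on (triplet correlations, the distribution of summed activity, and so on) — the ordered pattern of those failures is what the abstract quantifies.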
Wednesday, March 4, 2020 5:18PM - 5:30PM
P22.00013: Quantifying temporal information accumulation for biochemical signaling dynamics Ying Tang, Adewunmi Adelaja, Xiaofeng Ye, Eric Deeds, Roy Wollman, Alexander Hoffmann The temporal patterns of intracellular signaling contain information for cell decision making. When signaling is initiated by a stimulus, circuits that decode the temporal patterns to control biological effectors must make decisions based on the information available within the timecourse. Quantifying information flow through signaling networks thus requires a dynamical framework that estimates information transmission in a time-dependent manner. We find that a type of stochastic process can be used to represent signaling activities that show a high degree of cell-to-cell variability. Based on the model, we extract the time-dependent channel capacity of signaling pathways. When the transcription factor NFκB is activated by diverse immune threats in macrophages (e.g. virus, gram-negative or gram-positive bacteria, or cytokine), the channel capacity reaches 1 bit of information around 1 hour and 2 bits within 10 hours. Knocking down the feedback regulation in the signaling pathway reduced the information accumulation, revealing that information transmission is enhanced by feedback control. These results demonstrate that the method allows quantification of the learning rate of decoding circuits for rapid cellular decision making.
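The channel capacity figures quoted here are, in information-theoretic terms, maxima of the stimulus-response mutual information over input distributions. For a discrete memoryless channel this maximum is computed by the standard Blahut-Arimoto algorithm, sketched below; the channel matrices are toy stand-ins, not the measured stimulus-to-NFκB-response distributions from the talk:

```python
import math

# Blahut-Arimoto iteration for the capacity (in bits) of a discrete
# memoryless channel given by P[x][y] = p(y|x). Alternates between the
# posterior q(x|y) and the capacity-achieving input distribution r(x).

def capacity(P, iters=500):
    nx, ny = len(P), len(P[0])
    r = [1.0 / nx] * nx                          # input distribution
    for _ in range(iters):
        # q(x|y) proportional to r(x) p(y|x)
        q = [[r[x] * P[x][y] for x in range(nx)] for y in range(ny)]
        for y in range(ny):
            s = sum(q[y])
            q[y] = [v / s for v in q[y]]
        # r(x) proportional to exp(sum_y p(y|x) log q(x|y))
        new_r = [math.exp(sum(P[x][y] * math.log(q[y][x])
                              for y in range(ny) if P[x][y] > 0))
                 for x in range(nx)]
        s = sum(new_r)
        r = [v / s for v in new_r]
    # Capacity = mutual information at the optimal input distribution.
    cap = 0.0
    for x in range(nx):
        for y in range(ny):
            if P[x][y] > 0:
                py = sum(r[xp] * P[xp][y] for xp in range(nx))
                cap += r[x] * P[x][y] * math.log2(P[x][y] / py)
    return cap

cap_noiseless = capacity([[1.0, 0.0], [0.0, 1.0]])  # noiseless binary: 1 bit
cap_noisy = capacity([[0.9, 0.1], [0.1, 0.9]])      # binary symmetric, less
```

In the talk's setting the "channel" is time-dependent: the response is the signaling trajectory up to time t, so the estimated capacity grows as the timecourse accumulates, from about 1 bit at 1 hour to 2 bits by 10 hours.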