Bulletin of the American Physical Society
APS March Meeting 2016
Volume 61, Number 2
Monday–Friday, March 14–18, 2016; Baltimore, Maryland
Session V39: Inference in Biophysics
Sponsoring Units: DBIO GSNP
Chair: David Schwab, Northwestern University
Room: 342
Thursday, March 17, 2016, 2:30PM - 2:42PM
V39.00001: Self-Organized Information Processing in Neuronal Networks: Replacing Layers in Deep Networks by Dynamics
Christoph Kirst
It is astonishing how the sub-parts of a brain co-act to produce coherent behavior. What are the mechanisms that coordinate information processing and communication, and how can they be changed flexibly to cope with variable contexts? Here we show that when information is encoded in the deviations around a collective dynamical reference state of a recurrent network, the propagation of these fluctuations depends strongly on precisely this underlying reference. Information here 'surfs' on top of the collective dynamics, and switching between reference states enables fast and flexible rerouting of information. This in turn affects local processing and consequently changes the global reference dynamics, which re-regulates the distribution of information. This provides a generic mechanism for self-organized information processing, as we demonstrate with an oscillatory Hopfield network that performs contextual pattern recognition. Deep neural networks have recently proven very successful. Here we show that generating information channels via collective reference dynamics can effectively compress a deep multi-layer architecture into a single layer, making this mechanism a promising candidate for the organization of information processing in biological neuronal networks.
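The oscillatory variant described in the talk is not reproduced here, but the classical Hopfield network it builds on can be sketched in a few lines: a stored pattern is recalled from a corrupted cue via Hebbian weights and asynchronous threshold updates. Network size, pattern count, and corruption level below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)

n, n_pat = 100, 3
patterns = rng.choice([-1, 1], size=(n_pat, n))

# Hebbian coupling matrix
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0)

# corrupt a stored pattern, then recall it by asynchronous threshold updates
state = patterns[0].copy()
flip = rng.choice(n, 20, replace=False)
state[flip] *= -1
for _ in range(5):
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(round((state == patterns[0]).mean(), 2))
```

At this low memory load (3 patterns in 100 units), a 20% corrupted cue sits well inside the basin of attraction of the stored pattern.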
Thursday, March 17, 2016, 2:42PM - 2:54PM
V39.00002: The learnability of critical distributions
David Schwab, Johannah Torrence, Giacomo Torlai, Roger Melko, Stephanie Palmer
Many biological systems, including some neural population codes, have been shown empirically to sit near a critical point. Here we study the learnability of such codes. We first construct networks of interacting binary neurons with random, sparse interactions (i.e., an Erdős–Rényi graph) of uniform strength. We then characterize the discriminability of those interactions from data samples by performing a direct coupling analysis and thresholding the direct information between each pair of neurons to predict the presence or absence of an interaction. By sweeping through threshold values, we compute the area under the ROC curve as a measure of the discriminability of the interactions. We show that the resulting discriminability is maximized when the original distribution is at its critical point. We next train deep neural networks to discriminate between samples drawn from two nearby temperatures in the 2D Ising model. We find distinct signatures of decoding performance in the vicinity of the critical point. This technique may be useful for detecting phase transitions in models without an a priori identified order parameter.
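The threshold-sweep step can be sketched in a few lines. This is not the authors' direct-coupling analysis: the "direct information" here is a stand-in score (true coupling plus Gaussian noise), and the graph density, network size, and noise level are arbitrary.

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
n = 40
# Erdos-Renyi graph: each neuron pair interacts with probability 0.1
labels = (rng.random(n * (n - 1) // 2) < 0.1).astype(int)
# stand-in for the pairwise direct-information score: signal plus noise
scores = labels + rng.normal(0, 0.5, size=labels.size)
auc = roc_auc(scores, labels)
print(round(auc, 3))
```

The explicit threshold sweep is implicit in the rank-sum form, which integrates over all possible thresholds at once.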
Thursday, March 17, 2016, 2:54PM - 3:06PM
V39.00003: Machine learning phases of matter
Juan Carrasquilla, Miles Stoudenmire, Roger Melko
We show how the technology that allows automatic teller machines to read hand-written digits on cheques can be used to encode and recognize phases of matter and phase transitions in many-body systems. In particular, we analyze the (quasi-)order-disorder transitions in the classical Ising and XY models. Furthermore, we successfully use machine learning to study classical Z2 gauge theories, which have important technological applications in the coming wave of quantum information technologies and whose phase transitions have no conventional order parameter.
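A minimal caricature of phase classification can be sketched without any Monte Carlo machinery. The "configurations" below are independent biased spins standing in for ordered and disordered Ising samples, and the classifier is a one-feature logistic regression on the absolute magnetization; all sizes and rates are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_spins(p_up, n_samples, n_spins=64):
    # independent biased spins standing in for Ising configurations
    return np.where(rng.random((n_samples, n_spins)) < p_up, 1, -1)

# "ordered" phase: strongly biased spins; "disordered" phase: unbiased
X = np.vstack([sample_spins(0.95, 200), sample_spins(0.5, 200)])
y = np.array([1] * 200 + [0] * 200)

# single feature: absolute magnetization of each configuration
m = np.abs(X.mean(axis=1))

# one-feature logistic regression trained by gradient descent
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * m + b)))
    w -= 0.5 * ((p - y) * m).mean()
    b -= 0.5 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(w * m + b)))) > 0.5
acc = (pred == y).mean()
print(round(acc, 3))
```

A real study would sample configurations with a Markov chain at temperatures bracketing the transition and let a network discover the magnetization-like feature on its own.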
Thursday, March 17, 2016, 3:06PM - 3:18PM
V39.00004: Using chaos to model random symbols for improved unsupervised information processing
Sumona Mukhopadhyay, Henry Leung
We present theoretical analyses that may strengthen the connection between chaotic dynamical systems and information processing. Our analytical and empirical studies show that computing with chaos and nonlinear characterization of information improve unsupervised information processing. Traditional supervised techniques for information retrieval from a noisy environment achieve optimal performance; however, their need for training symbols is an inefficient strategy. We show that with a chaotic generator as an information source, unsupervised performance is close to that of supervised techniques using a white Gaussian stochastic process. Analytical results show that an unsupervised technique using chaotic symbolic dynamics is equivalent to a supervised one using random symbolic information. We conclude, from concepts of measure theory and ergodic theory, that random symbolic information can be modeled by a chaotic dynamical system via symbolic dynamics. We observe that the performance of unsupervised information retrieval is equivalent to that of supervised retrieval when random symbolic information and a dynamical representation of it are used in conjunction. This enables the application of nonlinear dynamics to the design of improved communication systems.
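The claim that a chaotic source can stand in for random symbols is easy to illustrate with the fully chaotic logistic map and its generating partition at x = 1/2, whose symbolic dynamics is conjugate to a fair Bernoulli process. The seed and run length are arbitrary, and double-precision round-off is assumed not to distort the statistics over this horizon.

```python
import numpy as np

# iterate the fully chaotic logistic map and binarize about the partition x = 1/2
x = 0.123456
symbols = np.empty(50000, dtype=int)
for i in range(symbols.size):
    x = 4.0 * x * (1.0 - x)
    symbols[i] = 1 if x > 0.5 else 0

# the generating partition makes these symbols statistically fair-coin-like
mean = symbols.mean()
corr = np.corrcoef(symbols[:-1], symbols[1:])[0, 1]
print(round(mean, 3), round(corr, 3))
```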
Thursday, March 17, 2016, 3:18PM - 3:30PM
V39.00005: Anatomy of a Spin: The Information-Theoretic Structure of Classical Spin Systems
Ryan James, Vikram Vijayaraghavan, James Crutchfield
Collective organization in matter plays a significant role in its expressed physical properties. Typically, it is detected via an order parameter, appropriately defined for a given system's observed emergent patterns. Recent developments in information theory suggest how to quantify collective organization in a system- and phenomenon-agnostic way: decompose the system's thermodynamic entropy density into a localized entropy, the portion contained solely in the dynamics at a single location, and a bound entropy, the portion stored in space as domains, clusters, excitations, or other emergent structures. We compute this decomposition and related quantities explicitly for the nearest-neighbor Ising model on the 1D chain, on the Bethe lattice with coordination number k = 3, and on the 2D square lattice, illustrating its generality and the functional insights it gives near and away from phase transitions. In particular, we consider the roles that different spin motifs play (cluster bulk, cluster edges, and the like) and how these affect the dependencies between spins.
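As a simplified stand-in for the paper's decomposition (not its exact definitions), the Markov structure of the 1D nearest-neighbor Ising chain lets the 1-bit per-spin entropy be split into a conditional part, analogous to the localized entropy, and the remainder shared with a neighbor, analogous to the bound entropy.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

def decompose(beta_j):
    """Split the 1-bit per-spin entropy of the 1D Ising chain into a conditional
    part (analogous to localized entropy) and a shared part (analogous to bound
    entropy), using the chain's Markov structure."""
    p_flip = 1.0 / (1.0 + np.exp(2.0 * beta_j))   # domain-wall probability
    h_local = h2(p_flip)                          # H(s_i | s_{i-1})
    h_bound = 1.0 - h_local                       # I(s_i ; s_{i-1})
    return h_local, h_bound

for bj in (0.0, 0.5, 2.0):
    print(bj, decompose(bj))
```

At infinite temperature (beta_j = 0) all entropy is localized; as the coupling strengthens, entropy shifts into the bound, spatially shared part.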
Thursday, March 17, 2016, 3:30PM - 3:42PM
V39.00006: Global Characterization of Model Parameter Space Using Information Topology
Mark Transtrum
A generic parameterized model is a mapping between parameters and data and is naturally interpreted as a prediction manifold embedded in data space. In this interpretation, known as Information Geometry, the Fisher Information Matrix (FIM) is a Riemannian metric that measures the identifiability of the model parameters. Varying the experimental conditions (e.g., times at which measurements are made) alters both the FIM and the geometric properties of the model. However, several global features of the model manifold (e.g., edges and corners) are invariant to changes in experimental conditions as long as the FIM is not singular. Invariance of these features to changing experimental conditions generates an "Information Topology" that globally characterizes a model's parameter space and reflects the underlying physical principles from which the model was derived. Understanding a model's information topology can give insights into the emergent physics that controls a system's collective behavior, identify reduced models and describe the relationship among them, and determine which parameter combinations will be difficult to identify for various experimental conditions.
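As a concrete, hypothetical example of how the FIM depends on experimental conditions, take the toy model y(t) = A exp(-kt) with iid Gaussian noise: the FIM is J^T J / sigma^2 for the Jacobian J of predictions with respect to (A, k), and clustering all measurement times early leaves the decay rate poorly identified.

```python
import numpy as np

def fim(times, A, k, sigma=0.1):
    """Fisher Information Matrix for y(t) = A*exp(-k*t) with iid Gaussian noise."""
    t = np.asarray(times, dtype=float)
    # Jacobian of the predictions with respect to the parameters (A, k)
    J = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])
    return J.T @ J / sigma ** 2

# the FIM, and hence parameter identifiability, depends on the measurement times
early = np.linalg.eigvalsh(fim([0.1, 0.2, 0.3], A=1.0, k=1.0))
spread = np.linalg.eigvalsh(fim([0.1, 1.0, 3.0], A=1.0, k=1.0))
print(early, spread)
```

The small eigenvalue of the early-times FIM is the "sloppy" direction; spreading the measurement times lifts it substantially.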
Thursday, March 17, 2016, 3:42PM - 3:54PM
V39.00007: Statistical Physics of High Dimensional Inference
Madhu Advani, Surya Ganguli
To model modern large-scale datasets, we need efficient algorithms to infer a set of $P$ unknown model parameters from $N$ noisy measurements. What are fundamental limits on the accuracy of parameter inference, given limited measurements, signal-to-noise ratios, prior information, and computational tractability requirements? How can we combine prior information with measurements to achieve these limits? Classical statistics gives incisive answers to these questions as the measurement density $\alpha = \frac{N}{P}\rightarrow \infty$. However, modern high-dimensional inference problems, in fields ranging from bio-informatics to economics, occur at finite $\alpha$. We formulate and analyze high-dimensional inference analytically by applying the replica and cavity methods of statistical physics, where data serves as quenched disorder and inferred parameters play the role of thermal degrees of freedom. Our analysis reveals that widely cherished Bayesian inference algorithms such as maximum likelihood and maximum a posteriori are suboptimal in the modern setting, and yields new tractable, optimal algorithms to replace them as well as novel bounds on the achievable accuracy of a large class of high-dimensional inference algorithms.
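The gap between plain maximum likelihood and prior-aware inference at finite $\alpha = N/P$ can be seen in a linear-Gaussian toy problem (dimensions, noise level, and prior are arbitrary choices here), where MAP reduces to ridge regression with lambda = sigma^2 / prior variance.

```python
import numpy as np

rng = np.random.default_rng(6)

P, N = 100, 110                     # measurement density alpha = N / P = 1.1
w_true = rng.normal(0, 1, P)        # parameters drawn from a Gaussian prior
X = rng.normal(0, 1, (N, P))
sigma = 0.5
y = X @ w_true + rng.normal(0, sigma, N)

# maximum likelihood: ordinary least squares
w_ml = np.linalg.lstsq(X, y, rcond=None)[0]

# MAP with the true Gaussian prior: ridge with lambda = sigma^2 / prior variance
lam = sigma ** 2 / 1.0
w_map = np.linalg.solve(X.T @ X + lam * np.eye(P), X.T @ y)

mse_ml = ((w_ml - w_true) ** 2).mean()
mse_map = ((w_map - w_true) ** 2).mean()
print(round(mse_ml, 4), round(mse_map, 4))
```

Near $\alpha = 1$ the design matrix is ill-conditioned, so maximum likelihood amplifies noise along weak directions while the prior-regularized estimate does not; as $\alpha \rightarrow \infty$ the two coincide.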
Thursday, March 17, 2016, 3:54PM - 4:06PM
V39.00008: Compression and regularization with the information bottleneck
DJ Strouse, David Schwab
Compression fundamentally involves a decision about what is relevant and what is not. The information bottleneck (IB) by Tishby, Pereira, and Bialek formalized this notion as an information-theoretic optimization problem and proposed an optimal tradeoff between throwing away as many bits as possible and selectively keeping those that are most important. The IB has also recently been proposed as a theory of sensory gating and predictive computation in the retina by Palmer et al. Here, we introduce an alternative formulation of the IB, the deterministic information bottleneck (DIB), that we argue better captures the notion of compression, including that done by the brain. As suggested by its name, the solution to the DIB problem is a deterministic encoder, as opposed to the stochastic encoder that is optimal under the IB. We then compare the IB and DIB on synthetic data, showing that the IB and DIB perform similarly in terms of the IB cost function, but that the DIB vastly outperforms the IB in terms of the DIB cost function. Our derivation of the DIB also provides a family of models which interpolates between the DIB and IB by adding noise of a particular form. We discuss the role of this noise as a regularizer.
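The structural difference between the two kinds of solution shows up already in bookkeeping on a tiny joint distribution: a deterministic encoder has H(T|X) = 0, so its compression cost I(X;T) equals the cluster entropy H(T), while a stochastic encoder's I(X;T) falls below H(T). The encoders below are made up for illustration, not optimized IB/DIB solutions.

```python
import numpy as np

def mutual_info(pxy):
    """I(X;Y) in bits from a joint probability table."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

p_x = np.full(4, 0.25)              # four equally likely inputs

# deterministic encoder (DIB-style): each x maps to exactly one cluster t
q_det = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
# stochastic encoder (IB-style): soft assignments
q_sto = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])

results = {}
for name, q in (("det", q_det), ("sto", q_sto)):
    pxt = p_x[:, None] * q          # joint p(x, t)
    pt = pxt.sum(axis=0)
    h_t = float(-(pt[pt > 0] * np.log2(pt[pt > 0])).sum())
    results[name] = (mutual_info(pxt), h_t)
print(results)
```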
Thursday, March 17, 2016, 4:06PM - 4:18PM
V39.00009: A novel method for the precise determination of step times and sizes in counting large numbers of photobleaching events
Konstantinos Tsekouras, Steve Pressé
Counting photobleaching steps is important for investigating many open problems in biophysics. Current methods of counting photobleaching steps cannot directly account for fluorophore photophysical behaviors such as self-quenching, blinking, and flickering. Our Bayesian approach to the counting problem allows for fluorophore blinking and reactivation as well as for multiple simultaneous photobleaching events, and is neither computation- nor time-intensive. We detail the method's applicability and limitations and present examples of its application to photobleach event counting.
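The authors' Bayesian machinery is not reproduced here, but a crude frequentist baseline conveys the flavor of the counting problem: a noisy three-step staircase, a sliding difference-of-medians jump statistic, and a threshold with merging of nearby crossings. Step size, noise level, window width, and threshold are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(8)

# noisy fluorescence staircase with three unit photobleaching steps
levels = np.repeat([3.0, 2.0, 1.0, 0.0], 250)
trace = levels + rng.normal(0, 0.15, levels.size)

# jump statistic: difference of medians between leading and trailing windows
w = 25
jumps = np.array([np.median(trace[i:i + w]) - np.median(trace[i - w:i])
                  for i in range(w, len(trace) - w)])

# count downward threshold crossings, merging crossings closer than one window
below = jumps < -0.5
edges = np.flatnonzero(below[1:] & ~below[:-1])
edges = edges[np.insert(np.diff(edges) > w, 0, True)]
print(len(edges))
```

A threshold detector like this fails exactly where the talk's method aims: simultaneous steps, blinking, and reactivation all masquerade as miscounted events.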
Thursday, March 17, 2016, 4:18PM - 4:30PM
V39.00010: Assessing the limits of hidden Markov model analysis for multi-state particle tracks in living systems
Dylan Young
Particle tracking offers significant insight into the molecular mechanics that govern the behavior of living cells. The analysis of molecular trajectories that transition between different motive states, such as diffusive, driven, and tethered modes, is of considerable importance, with even single trajectories containing significant amounts of information about a molecule's environment and its interactions with cellular structures such as the cell cytoskeleton, membrane, or extracellular matrix. Hidden Markov models (HMMs) have been widely adopted to perform the segmentation of such complex tracks; however, robust methods for failure detection are required when HMMs are applied to individual particle tracks and limited data sets. Here, we show that extensive analysis of hidden Markov model outputs, using data derived from multi-state Brownian dynamics simulations, can be used both for the optimization of likelihood models and to generate custom failure tests based on a modified Bayesian Information Criterion. In the first instance, these failure tests can be applied to assess the quality of the HMM results. In addition, they provide critical information for the successful design of particle tracking experiments in which trajectories containing multiple mobile states are expected.
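A minimal two-state example of HMM track segmentation, assuming zero-mean Gaussian steps whose standard deviation depends on the hidden state (a simple stand-in for slow versus fast diffusion). The Viterbi decoder below takes the emission widths as known and uses a sticky transition prior; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# simulate 1D steps of a track switching between slow and fast diffusive states
sigmas = np.array([0.1, 1.0])               # step std per hidden state
states = np.repeat([0, 1, 0], [100, 100, 100])
steps = rng.normal(0, sigmas[states])

def viterbi(obs, sigmas, p_stay=0.99):
    """Most likely state path for zero-mean Gaussian emissions (log domain)."""
    n, k = len(obs), len(sigmas)
    log_e = -0.5 * (obs[:, None] / sigmas) ** 2 - np.log(sigmas)
    log_t = np.full((k, k), np.log((1 - p_stay) / (k - 1)))
    np.fill_diagonal(log_t, np.log(p_stay))
    delta = np.zeros((n, k))
    psi = np.zeros((n, k), dtype=int)
    delta[0] = -np.log(k) + log_e[0]
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_t
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_e[t]
    path = np.zeros(n, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

decoded = viterbi(steps, sigmas)
print(round((decoded == states).mean(), 3))
```

The failure modes the talk addresses appear when the two widths approach each other or the track is short, and the decoder still returns a confident-looking segmentation.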
Thursday, March 17, 2016, 4:30PM - 4:42PM
V39.00011: Inferring phenomenological models of Markov processes from data
Catalina Rivera, Ilya Nemenman
Microscopically accurate modeling of stochastic dynamics of biochemical networks is hard due to the extremely high dimensionality of the state space of such networks. Here we propose an algorithm for inference of phenomenological, coarse-grained models of Markov processes describing the network dynamics directly from data, without the intermediate step of microscopically accurate modeling. The approach relies on the linear nature of the Chemical Master Equation and uses Bayesian Model Selection for identification of parsimonious models that fit the data. When applied to synthetic data from the kinetic proofreading (KPR) process, a common mechanism used by cells to increase the specificity of molecular assembly, the algorithm successfully uncovers the known coarse-grained description of the process. This phenomenological description has been noticed previously; here it is derived by the algorithm in an automated manner.
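A toy version of selecting a coarse-grained kinetic model (plain BIC here, not the authors' full Bayesian model selection): waiting times drawn from a 3-step sequential scheme, standing in for completion times of a multi-step process such as kinetic proofreading, are compared against Erlang models with 1 to 4 steps, each with its rate fit by maximum likelihood.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(3)

# synthetic waiting times from a 3-step sequential (Erlang) scheme
data = rng.gamma(shape=3, scale=1.0, size=2000)

def erlang_bic(x, k):
    """BIC for an Erlang(k) model whose rate is fit by maximum likelihood."""
    n = len(x)
    rate = k / x.mean()                       # MLE of the rate for fixed k
    loglik = ((k - 1) * np.log(x) + k * np.log(rate) - rate * x).sum() - n * lgamma(k)
    return -2 * loglik + np.log(n)            # one free parameter (the rate)

bics = {k: erlang_bic(data, k) for k in (1, 2, 3, 4)}
print(min(bics, key=bics.get))
```

With this much data the criterion recovers the correct number of effective steps, the same kind of parsimonious coarse-graining the abstract describes in an automated way.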
Thursday, March 17, 2016, 4:42PM - 4:54PM
V39.00012: Probing self similar structures by studying the frequency of directional changes
Ali Tabei, Stanislav Burov, Andrew Milbrandt, Kyle Spurgeon
It has been shown that in two and higher dimensions, when time series of individual particle trajectories are available, the distribution of relative angles of motion between successive time intervals of random motion provides information about stochastic processes beyond that obtained from the mean squared displacement. We show that this distribution is a useful measure providing supplementary information about the structural properties of the medium in which a random walker diffuses. We compare the behavior of this measure for common self-similar structures and show that the distribution of relative angles is a good measure for discriminating between different complex structural geometries.
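Computing the relative-angle distribution from a track takes only a few lines. The sketch below contrasts a free Brownian walk, whose turning angles are uniform with mean pi/2, against a harmonically confined walk, whose restoring pull biases angles toward reversals; trap stiffness and track length are arbitrary choices, and neither walk is one of the self-similar structures studied in the talk.

```python
import numpy as np

rng = np.random.default_rng(4)

def turning_angles(xy):
    """Angles in [0, pi] between successive displacement vectors of a 2D track."""
    v = np.diff(xy, axis=0)
    dots = (v[:-1] * v[1:]).sum(axis=1)
    norms = np.linalg.norm(v[:-1], axis=1) * np.linalg.norm(v[1:], axis=1)
    return np.arccos(np.clip(dots / norms, -1.0, 1.0))

n = 20000
# free Brownian walk: successive steps are independent, angles uniform
free = np.cumsum(rng.normal(0, 1, (n, 2)), axis=0)

# harmonically confined walk: the restoring pull favors reversals
trap = np.zeros((n, 2))
for i in range(1, n):
    trap[i] = 0.5 * trap[i - 1] + rng.normal(0, 1, 2)

a_free = turning_angles(free)
a_trap = turning_angles(trap)
print(round(a_free.mean(), 3), round(a_trap.mean(), 3))
```

Both walks here have identical single-step statistics, so the mean squared displacement at lag one cannot separate them, while the angle distribution does.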
Thursday, March 17, 2016, 4:54PM - 5:06PM
V39.00013: Diffusion, Backward In Time: A Universal Inversion Scheme
Dervis Vural, Vu Nguyen
A sugar cube placed in a cup of tea will erode and eventually dissolve. Given the initial shape of the sugar block, it is trivial to predict its final distribution. However, the opposite problem, determining the initial state given a final one, is extremely difficult. A surprising number of seemingly unrelated topics in biology are the same problem in disguise: inverting diffusion on a network. Here we present a method that identifies the origin of a stochastic biological diffusion process, regardless of the forward model. We then discuss potential implications for evolution, neuroscience, the biology of aging, and epidemiology.
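In the noise-free, known-operator case, inverting diffusion on a network is just a linear solve. The sketch below runs a lazy random walk forward on a ring for a few steps and recovers the point source exactly; with measurement noise or longer times the inversion becomes badly ill-conditioned, which is where the hard part of the talk's problem lives. The graph and step count are arbitrary choices.

```python
import numpy as np

n, t = 20, 5
# lazy random-walk diffusion operator on a ring of n nodes
M = np.zeros((n, n))
for i in range(n):
    M[i, i] = 0.6
    M[i, (i - 1) % n] = 0.2
    M[i, (i + 1) % n] = 0.2

x0 = np.zeros(n)
x0[3] = 1.0                                  # point source at node 3
xt = np.linalg.matrix_power(M, t) @ x0       # forward diffusion for t steps

# backward in time: solve M^t x = xt (well-posed here; noise makes it ill-posed)
x_rec = np.linalg.solve(np.linalg.matrix_power(M, t), xt)
print(int(np.argmax(x_rec)))
```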
Thursday, March 17, 2016, 5:06PM - 5:18PM
V39.00014: Inferring biological dynamics in heterogeneous cellular environments
Steve Pressé
In complex environments, it often appears that biomolecules such as proteins do not diffuse normally. That is, their mean square displacement does not scale linearly with time. This anomalous diffusion happens for multiple reasons: proteins can bind to structures and other proteins; fluorophores used to label proteins may flicker or blink, making it appear that the labeled protein is diffusing anomalously; and proteins can diffuse in differently crowded environments. Here we describe methods for learning about such processes from imaging data collected inside the heterogeneous environment of the living cell. Refs.: "Inferring Diffusional Dynamics from FCS in Heterogeneous Nuclear Environments," Konstantinos Tsekouras, Amanda Siegel, Richard N. Day, and Steve Pressé, Biophys. J. 109, 7 (2015); "A data-driven alternative to the fractional Fokker-Planck equation," Steve Pressé, J. Stat. Phys.: Theory and Experiment, P07009 (2015).
Thursday, March 17, 2016, 5:18PM - 5:30PM
V39.00015: Discovering cell types in flow cytometry data with random matrix theory
Yang Shen, Robert Nussenblatt, Wolfgang Losert
Flow cytometry is a widely used experimental technique in immunology research. During the experiments, peripheral blood mononuclear cells (PBMC) from a single patient, labeled with multiple fluorescent stains that bind to different proteins, are illuminated by a laser. The intensity of each stain on a single cell is recorded and reflects the amount of protein expressed by that cell. The data analysis focuses on identifying specific cell types related to a disease. Different cell types can be identified by the type and amount of protein they express. To date, this has most often been done manually, by labelling a protein as expressed or not while ignoring the amount of expression. Using a cross correlation matrix of stain intensities, which contains information on both the proteins expressed and their amounts, has been largely ignored by researchers as it suffers from measurement noise. Here we present an algorithm to identify cell types in flow cytometry data which uses random matrix theory (RMT) to reduce noise in a cross correlation matrix. We demonstrate our method using a published flow cytometry data set. Compared with previous analysis techniques, we were able to rediscover relevant cell types automatically.
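The RMT cleaning step can be sketched as eigenvalue clipping at the Marchenko-Pastur upper edge (1 + sqrt(N/T))^2. The synthetic data below, with one latent factor shared by the first ten stains as a stand-in for a cell type, is not the published data set, and the sizes and loadings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

n_cells, n_stains = 300, 30
# toy data: the first ten stains co-vary through one latent "cell type" factor
latent = rng.normal(size=n_cells)
data = rng.normal(size=(n_cells, n_stains))
data[:, :10] += 2.0 * latent[:, None]

c = np.corrcoef(data, rowvar=False)

# Marchenko-Pastur upper edge: eigenvalues below it are consistent with noise
q = n_stains / n_cells
lam_max = (1 + np.sqrt(q)) ** 2

vals, vecs = np.linalg.eigh(c)
cleaned_vals = vals.copy()
noise = vals <= lam_max
cleaned_vals[noise] = vals[noise].mean()     # flatten the noise band (trace kept)
c_clean = (vecs * cleaned_vals) @ vecs.T

within = c_clean[:10, :10][~np.eye(10, dtype=bool)].mean()
cross = np.abs(c_clean[:10, 10:]).mean()
print((~noise).sum(), round(within, 2), round(cross, 2))
```

After clipping, the co-varying stain block stands out cleanly against near-zero cross-block correlations, which is the structure a clustering step would then read off as a cell type.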