Bulletin of the American Physical Society
APS March Meeting 2017
Volume 62, Number 4
Monday–Friday, March 13–17, 2017; New Orleans, Louisiana
Session Y14: Machine Learning for Modeling and Control of Biological Systems II (Focus Session)
Sponsoring Units: DBIO GSNP | Chair: John Wikswo, Vanderbilt University | Room: 273
Friday, March 17, 2017 11:15AM - 11:51AM |
Y14.00001: Bridging Mechanism and Phenomenology in Models of Complex Systems Invited Speaker: Mark Transtrum The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. I discuss how parameter reduction, specifically the Manifold Boundary Approximation Method (MBAM), can be used as a tool for deriving simple phenomenological laws from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of microscopic components that function as tunable control knobs for the collective behavior. I illustrate with several examples from biology and compare to other common parameter reduction methods. [Preview Abstract] |
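The "sloppiness" that motivates MBAM can be made concrete with a toy computation (an editorial illustration, not part of the abstract; model and variable names are invented): for a sum of two exponentials with nearly degenerate decay rates, the eigenvalues of the Fisher information matrix span orders of magnitude, so some parameter combinations are far better constrained by data than others.

```python
import numpy as np

# Toy model: y(t) = exp(-k1*t) + exp(-k2*t); parameters theta = (k1, k2).
def model(theta, t):
    k1, k2 = theta
    return np.exp(-k1 * t) + np.exp(-k2 * t)

def jacobian(theta, t, eps=1e-6):
    """Finite-difference Jacobian of the model outputs w.r.t. the parameters."""
    J = np.zeros((t.size, len(theta)))
    for i in range(len(theta)):
        bumped = np.array(theta, dtype=float)
        bumped[i] += eps
        J[:, i] = (model(bumped, t) - model(theta, t)) / eps
    return J

t = np.linspace(0.0, 5.0, 50)
theta = (1.0, 1.2)                  # nearly degenerate rates -> a sloppy direction
J = jacobian(theta, t)
fim = J.T @ J                       # Fisher information for unit-noise least squares
evals = np.linalg.eigvalsh(fim)     # ascending order
print(evals[-1] / evals[0])         # stiff/sloppy eigenvalue ratio is large
```

The wide eigenvalue spread is the geometric signature of a thin, bounded model manifold, whose boundaries MBAM exploits.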
Friday, March 17, 2017 11:51AM - 12:03PM |
Y14.00002: Increasingly diverse brain dynamics in the developmental arc: using Pareto-optimization to infer a mechanism Evelyn Tang, Chad Giusti, Graham Baum, Shi Gu, Eli Pollock, Ari Kahn, David Roalf, Tyler Moore, Kosha Ruparel, Ruben Gur, Raquel Gur, Theodore Satterthwaite, Danielle Bassett Motivated by a recent demonstration that the network architecture of white matter supports emerging control of diverse neural dynamics as children mature into adults, we seek to investigate the structural mechanisms that support these changes. Beginning from a network representation of diffusion imaging data, we simulate network evolution with a set of simple growth rules built on principles of network control. Notably, the optimal evolutionary trajectory displays a striking correspondence to the progression from child to adult brain, suggesting that network control is a driver of development. More generally, and in comparison to the complete set of available models, we demonstrate that all brain networks – from child to adult – are structured in a manner highly optimized for the control of diverse neural dynamics. Within this near-optimality, we observe differences in the predicted control mechanisms of the child and adult brains, suggesting that the white matter architecture in children has greater potential to support an increasing range of brain state transitions, potentially underlying cognitive switching. [Preview Abstract] |
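The "control of diverse neural dynamics" invoked here can be sketched with a standard network-control quantity (an editorial toy, not the study's actual models or metrics): the average controllability of a node in a discrete-time linear model x(t+1) = A x(t) + B u(t), computed from the controllability Gramian.

```python
import numpy as np

def average_controllability(A, node, horizon=100):
    """Trace of the controllability Gramian for a single-node input."""
    n = A.shape[0]
    A = A / (1 + np.abs(np.linalg.eigvals(A)).max())  # stabilize the dynamics
    B = np.zeros((n, 1))
    B[node, 0] = 1.0                                  # control input at one node
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):                          # W = sum_k A^k B B' (A^k)'
        W += Ak @ B @ B.T @ Ak.T
        Ak = Ak @ A
    return np.trace(W)

# Toy symmetric "structural" network: node 0 is a hub connected to nodes 1 and 2.
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
scores = [average_controllability(A, i) for i in range(3)]
print(scores)
```

In this toy the hub node scores highest, matching the common finding that highly connected regions can push the system into many easily reachable states.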
Friday, March 17, 2017 12:03PM - 12:15PM |
Y14.00003: Uninformative priors prefer simpler models Henry Mattingly, Michael Abbott, Benjamin Machta The Bayesian framework for model selection requires a prior for the probability of candidate models that is uninformative---it minimally biases predictions with preconceptions. For parameterized models, Jeffreys' uninformative prior, $p^J$, weights parameter space according to the local density of distinguishable model predictions. While $p^J$ is rigorously justifiable in the limit that there is infinite data, it is ill-suited to effective theories and sloppy models. In these models, parameters are very poorly constrained by available data, and even the number of parameters is often arbitrary. We use a principled definition of `uninformative' as the mutual information between parameters and their expected data and study the properties of the prior $p^*$ which maximizes it. When data is abundant, $p^*$ approaches Jeffreys' prior. With finite data, however, $p^*$ is discrete, putting weight on a finite number of atoms in parameter space. In addition, when data is scarce, the prior lies on model boundaries, which in many cases correspond to interpretable models but with fewer parameters. As more data becomes available, the prior puts weight on models with more parameters. Thus, $p^*$ quantifies the intuition that better data can justify the use of more complex models. [Preview Abstract] |
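The optimization behind $p^*$ is formally a channel-capacity problem, so it can be sketched numerically with a Blahut–Arimoto-style iteration (an editorial illustration on the simplest possible model, not the talk's calculation): for a Bernoulli parameter observed through only N = 2 coin flips, maximizing the mutual information between parameter and data pushes prior mass toward a few regions of parameter space, including its boundaries.

```python
import numpy as np
from math import comb

N = 2                                      # very little data
thetas = np.linspace(0.0, 1.0, 201)        # parameter grid
ks = np.arange(N + 1)
lik = np.array([[comb(N, k) * th**k * (1.0 - th)**(N - k) for k in ks]
                for th in thetas])         # P(k heads | theta)

p = np.full(len(thetas), 1.0 / len(thetas))    # start from a flat prior
for _ in range(2000):
    q = p @ lik                                 # marginal P(k)
    with np.errstate(divide="ignore", invalid="ignore"):
        logratio = np.where(lik > 0.0, np.log(lik / q), 0.0)
    kl = (lik * logratio).sum(axis=1)           # D_KL( P(k|theta) || P(k) )
    p *= np.exp(kl)                             # Blahut-Arimoto style tilt
    p /= p.sum()

print(p[:20].sum(), p[-20:].sum())  # prior mass accumulates near the boundaries
```

The boundary points theta = 0 and theta = 1 are themselves simpler (deterministic) models, illustrating the abstract's claim that with scarce data the prior favors model boundaries.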
Friday, March 17, 2017 12:15PM - 12:27PM |
Y14.00004: Reconstructing the information topology of models of complex systems Kolten Barfuss, Mark Transtrum Multi-parameter models of complex systems are ubiquitous throughout science. We interpret models geometrically as manifolds with parameters acting as coordinates. For many models, the manifold is bounded by a hierarchy of boundaries. These boundaries are themselves manifolds which correspond to simpler models with fewer parameters. The hierarchical structure of the boundaries induces a partial ordering relationship among these approximate models that forms a topological space and can be visually represented by a Hasse diagram. The Hasse diagram of the model manifold provides a global summary of the model structure and a road map from the intricate, fully parameterized description of a complex system through various types of approximations to the set of distinct behavior regimes the model enables. I describe two methods for reconstructing the entire Hasse diagram of complex models and discuss applications to models in statistical mechanics and biological differential equation models. [Preview Abstract] |
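The partial order on boundary sub-models can be illustrated with a minimal computation (an editorial sketch with hypothetical parameter names): treating each reduced model as the set of parameters it retains, the edges of the Hasse diagram are the covering relations of subset inclusion.

```python
from itertools import combinations

# Hypothetical parameters of a fully parameterized model.
params = ("k1", "k2", "k3")
models = [frozenset(c) for r in range(len(params) + 1)
          for c in combinations(params, r)]      # all reduced models

def covers(a, b):
    """b covers a  <=>  a < b with no model strictly between them."""
    return a < b and not any(a < m < b for m in models)

hasse_edges = [(sorted(a), sorted(b)) for a in models for b in models
               if covers(a, b)]
print(len(hasse_edges))
```

For three parameters the poset is the Boolean lattice (a 3-cube), with 12 covering edges; in realistic models the boundary structure is far from this complete lattice, which is what makes the Hasse diagram an informative summary.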
Friday, March 17, 2017 12:27PM - 1:03PM |
Y14.00005: TBD Invited Speaker: David Schwab |
Friday, March 17, 2017 1:03PM - 1:15PM |
Y14.00006: How prior information shapes couplings in neural fields performing optimal multisensory integration He Wang, Wen-Hao Zhang, K. Y. Michael Wong, Si Wu Extensive studies suggest that the brain integrates multisensory signals in a Bayesian optimal way. However, it remains largely unknown how sensory reliability and prior information shape the neural architecture. In this work, we propose a biologically plausible neural field model, which can perform optimal multisensory integration and encode the whole profile of the posterior. Our model is composed of two modules, each for one modality. Crosstalk between the two modules is carried by feedforward cross-links and reciprocal connections. We found that the reciprocal couplings are crucial to optimal multisensory integration, in that the reciprocal coupling pattern is shaped by the correlation in the joint prior distribution of the sensory stimuli. A perturbative approach is developed to quantitatively illustrate the relation between the prior information and features of the coupling patterns. Our results show that a decentralized architecture based on reciprocal connections is able to accommodate complex correlation structures across modalities and utilize this prior information in optimal multisensory integration. [Preview Abstract] |
Friday, March 17, 2017 1:15PM - 1:27PM |
Y14.00007: An information theoretic approach to geometric clustering DJ Strouse, David Schwab Clustering is a basic task in data analysis for both understanding and pre-processing data. Classic clustering methods, such as k-means or EM fitting of a Gaussian mixture model, are based on geometry. These “geometric clustering” methods group data points together based on their Euclidean distance from one another; roughly speaking, points within a cluster have smaller distances to one another than to points in other clusters. More recently, however, “distributional clustering” methods, such as the information bottleneck (IB) and deterministic information bottleneck (DIB), have been introduced that group data points based upon their conditional distributions over a target variable. Here, points within a cluster provide similar information about the target variable. Are distributional and geometric clustering related, and if so, how? Can we blend these two approaches? Here we first describe a method to incorporate geometric information into the (D)IB clustering algorithm, where the target variable our clustering should be informative about is the spatial location of the contained data points. This enables us to derive a novel set of geometric clustering algorithms, which we then compare to the classic methods mentioned above. Finally, we compare both approaches on data. [Preview Abstract] |
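The geometric baseline the abstract compares against can be shown in a few lines (a minimal sketch of classic k-means, not the (D)IB variant the talk introduces; the data and names are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 2-D blobs.
X = np.vstack([rng.normal(loc=(-3.0, 0.0), scale=0.5, size=(100, 2)),
               rng.normal(loc=(3.0, 0.0), scale=0.5, size=(100, 2))])

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign to the nearest center, then re-center."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=2)
print(centers)
```

The distributional view replaces "nearest center in Euclidean distance" with "most similar conditional distribution over a target variable"; choosing spatial location as that target is what lets the two views be blended.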
Friday, March 17, 2017 1:27PM - 1:39PM |
Y14.00008: Congruent and Opposite Neurons as Partners in Multisensory Integration and Segregation Wen-Hao Zhang, K. Y. Michael Wong, He Wang, Si Wu Experiments have revealed that in brain areas where visual and vestibular cues are integrated to infer heading direction, two types of neurons occur in roughly equal numbers: congruent cells respond similarly to visual and vestibular cues, while opposite cells respond oppositely to them. Congruent neurons are known to be responsible for cue integration, but the computational role of opposite neurons remains largely unknown. We propose that opposite neurons may serve to encode the disparity information between cues necessary for multisensory segregation. We build a computational model composed of two reciprocally coupled modules, each consisting of groups of congruent and opposite neurons. Our model reproduces the characteristics of congruent and opposite neurons, and demonstrates that in each module, congruent and opposite neurons can jointly achieve optimal multisensory information integration and segregation. This study sheds light on our understanding of how the brain implements optimal multisensory integration and segregation concurrently in a distributed manner. [Preview Abstract] |
Friday, March 17, 2017 1:39PM - 1:51PM |
Y14.00009: Universal statistics of terminal dynamics before collapse Nicolas Lenner, Stephan Eule, Fred Wolf Recent experimental advances have drastically increased both the precision and the amount of biological data, allowing a shift from characterizing only the mean values of a process to analyzing the whole ensemble and exploiting the stochastic nature of biology. We focus on the general class of non-equilibrium processes with distinguished terminal points, as found in cell-fate decisions, checkpoints, or cognitive neuroscience. Aligning the data to a terminal point (e.g., represented as an absorbing boundary) allows one to devise a general methodology for characterizing and reverse-engineering the terminating history. Using a small-noise approximation, we derive the mean, variance, and covariance of the aligned data for general finite-time singularities. [Preview Abstract] |
Friday, March 17, 2017 1:51PM - 2:03PM |
Y14.00010: Learning to soar in turbulent environments Gautam Reddy, Antonio Celani, Terrence Sejnowski, Massimo Vergassola Birds and gliders exploit warm, rising atmospheric currents (thermals) to reach heights comparable to low-lying clouds with a reduced expenditure of energy. Soaring provides a remarkable instance of complex decision-making in biology and requires a long-term strategy to effectively use the ascending thermals. Furthermore, the problem is technologically relevant for extending the flying range of autonomous gliders. The formation of thermals unavoidably generates strong turbulent fluctuations, which make deriving an efficient policy harder and thus constitute an essential element of soaring. Here, we approach soaring flight as a problem of learning to navigate highly fluctuating turbulent environments. We simulate the atmospheric boundary layer with numerical models of turbulent convective flow and combine them with model-free, experience-based reinforcement learning algorithms to train virtual gliders. In the regimes of moderate and strong turbulence, the learned policies become increasingly conservative as turbulence levels increase, quantifying the degree of risk affordable in turbulent environments. Reinforcement learning uncovers the sensorimotor cues that permit effective control over soaring in turbulent environments. [Preview Abstract] |
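A minimal stand-in for the training loop (an editorial toy, not the paper's glider simulation): off-policy tabular Q-learning on a short 1-D chain, where purely random exploration still lets the agent learn the optimal "always climb" policy because Q-learning bootstraps toward the greedy value.

```python
import numpy as np

rng = np.random.default_rng(1)

n_states, n_actions = 6, 2            # action 1 = "climb", action 0 = "sink"
goal = n_states - 1
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

for _ in range(2000):                  # episodes of purely random exploration
    s = 0
    while s != goal:
        a = int(rng.integers(n_actions))       # off-policy: behavior is random
        s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == goal else 0.0
        target = r + gamma * Q[s2].max() * (s2 != goal)
        Q[s, a] += alpha * (target - Q[s, a])  # standard Q-learning update
        s = s2

policy = Q[:goal].argmax(axis=1)
print(policy)   # greedy policy climbs in every non-terminal state
```

The real problem replaces this chain with a simulated convective boundary layer and the two actions with continuous control of bank angle and speed, but the update rule is of this model-free, experience-based family.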
Friday, March 17, 2017 2:03PM - 2:15PM |
Y14.00011: Quantifying falsifiability of scientific theories Ilya Nemenman I argue that the notion of falsifiability, a key concept in defining a valid scientific theory, can be quantified using Bayesian Model Selection, which is a standard tool in modern statistics. This relates falsifiability to the quantitative version of the statistical Occam's razor, and allows transforming some long-running arguments about validity of scientific theories from philosophical discussions to rigorous mathematical calculations. [Preview Abstract] |