Bulletin of the American Physical Society
APS March Meeting 2020
Volume 65, Number 1
Monday–Friday, March 2–6, 2020; Denver, Colorado
Session X27: Physics of Neural Systems |
Sponsoring Units: DBIO | Chair: Simon Sponberg, Georgia Inst of Tech | Room: 404 |
Friday, March 6, 2020 11:15AM - 11:27AM |
X27.00001: Unifying criticality and the neutral theory of neural avalanches Sakib Matin, Thomas Tenzin, W. Klein Interest in the question of criticality in the brain has been prompted by experiments showing that the collective firing of neurons (neural avalanches) follows power-law distributions. Three proposed explanations of this emergent scale-free behavior are criticality, neutral theory, and self-organized criticality. We study a model of the brain for which the dynamics are governed by neutral theory and find that the scale-free behavior is controlled by the proximity to a critical point. Our results unify the neutral theory of neural avalanches with criticality, which requires fine tuning of control parameters, and rule out self-organized criticality. We use tools from percolation theory to characterize the critical properties of the neural avalanches and identify the tuning parameters, which are consistent with experiments. The scaling hypothesis provides a unified explanation of the power laws which characterize the neural avalanches. We discuss how our results can motivate future empirical studies of criticality in the brain. |
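The power-law avalanche statistics discussed above can be illustrated with a generic critical branching (Galton-Watson) process, a standard stand-in for avalanche models; this is a minimal sketch, not the authors' neutral-theory model, and the Poisson offspring rule and size cap are assumptions for illustration:

```python
import numpy as np

def avalanche_sizes(n_avalanches, m=1.0, seed=0):
    """Branching-process sketch of a neural avalanche: each firing neuron
    triggers Poisson(m) others.  At the critical point m = 1 the size
    distribution is a power law, P(s) ~ s^(-3/2); away from it the tail
    is exponentially cut off.  Illustrative only."""
    rng = np.random.default_rng(seed)
    sizes = np.empty(n_avalanches, dtype=int)
    for i in range(n_avalanches):
        active, size = 1, 1
        while active > 0 and size < 10**6:  # cap runaway supercritical runs
            active = rng.poisson(m * active)
            size += active
        sizes[i] = size
    return sizes
```

Comparing `m = 1.0` (critical) with `m = 0.5` (subcritical) shows the heavy tail appearing only near the critical point, mirroring the talk's claim that proximity to criticality controls the scale-free behavior.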
Friday, March 6, 2020 11:27AM - 11:39AM |
X27.00002: Spontaneous spatial symmetry breaking in excitatory neuronal networks and its effects in sparsely connected networks. Mihai Bibireata, Valentin Slepukhin, Alex Levine We explore the dynamics of the preBötzinger complex, the mammalian central pattern generator with N ∼ 10³ neurons, which produces a collective metronomic signal that times the inspiration. Our analysis is based on a simple firing-rate model of excitatory neurons with dendritic adaptation (the Feldman Del Negro model [Nat. Rev. Neurosci. 7, 232 (2006), Phys. Rev. E 82, 051911 (2010)]) interacting on a fixed, directed Erdős–Rényi graph. In the all-to-all coupled variant of the model, there is a type of spontaneous symmetry breaking in which some fraction of the neurons become stuck in a high firing-rate state, while others become quiescent. This separation into firing and non-firing clusters persists into more sparsely connected networks, but is now determined by k-cores in the directed graphs. It produces a number of features of the dynamical phase diagram that violate the predictions of mean-field analysis. In particular, we observe in the simulated networks that stable oscillations do not persist in the large-N limit, in contradiction to the predictions of mean-field theory. Moreover, we observe that the oscillations in these sparse networks are remarkably robust in response to killing neurons, surviving until ∼ 20% of the network remains. This is consistent with experiment. |
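The two graph-theoretic ingredients above, a directed Erdős–Rényi graph and its k-core, are easy to sketch. The following is an illustrative stand-in (the helper names and the use of in-degree for the pruning rule are assumptions, not the authors' definitions):

```python
import random

def directed_erdos_renyi(n, p, seed=0):
    """Directed Erdos-Renyi graph: each ordered pair (i, j), i != j,
    is an independent edge with probability p."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and rng.random() < p}

def k_core(n, edges, k):
    """Iteratively prune nodes with in-degree < k; the survivors form the
    (in-degree) k-core, the structure the abstract ties to the cluster of
    persistently firing neurons."""
    alive = set(range(n))
    changed = True
    while changed:
        indeg = {v: 0 for v in alive}
        for (i, j) in edges:
            if i in alive and j in alive:
                indeg[j] += 1
        to_remove = {v for v in alive if indeg[v] < k}
        changed = bool(to_remove)
        alive -= to_remove
    return alive
```

For example, a complete digraph on four nodes plus one weakly attached node has that pendant node pruned from the 2-core while the dense cluster survives.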
Friday, March 6, 2020 11:39AM - 11:51AM |
X27.00003: A model for oscillatory gating of information flow between neural circuits as a function of local recurrence Mikail Khona, Ila R Fiete Physical connections between neurons are hardwired on the timescale of seconds, but infor- |
Friday, March 6, 2020 11:51AM - 12:03PM |
X27.00004: Quasicriticality or Criticality in the brain? Leandro Fosque, John Beggs, Gerardo Ortiz, Rashid Williams-Garcia |
Friday, March 6, 2020 12:03PM - 12:15PM |
X27.00005: Compression as a path to simpler models of collective neural activity Luisa Ramirez, William S Bialek Experiments now make it possible to observe, simultaneously, the electrical activity of hundreds or even thousands of neurons in a small region of the brain. But making models that capture the behavior of these real neural networks easily leads to a combinatorial explosion of complexity, and we need explicit strategies for simplification. In many condensed matter systems, interactions are local, but this is not an effective guide for neurons. More abstractly, interactions may be compressible, so that the influence of the whole system on each degree of freedom can be represented by just a few bits of information. We test this idea, analyzing experiments from the vertebrate retina and the mouse hippocampus. Data sets are large enough to provide reliable sampling of activity patterns in subgroups of neurons, and within these groups we find, for example, that the influence of eight neurons on one neuron can be captured almost completely with just ten states, far fewer than the 256 possible states. This compression can be iterated, providing a path to describing the influence of the whole network on each neuron with a much reduced number of parameters. |
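One simple way to realize the compression idea, collapsing 256 neighbor patterns into about ten states, is to group patterns by the conditional firing probability they induce in the target neuron. This sketch (the function name, the equal-width binning, and the binary-coding scheme are assumptions, not the authors' method) illustrates the counting:

```python
import numpy as np

def compress_patterns(neigh, target, n_states=10):
    """Map each observed binary pattern of 8 neighbor neurons (256
    possibilities) to one of n_states compressed states by binning the
    estimated conditional probability P(target = 1 | pattern)."""
    codes = neigh.dot(1 << np.arange(neigh.shape[1]))  # pattern id 0..255
    p_fire = {c: target[codes == c].mean() for c in np.unique(codes)}
    # Equal-width bins over the conditional probability axis.
    state_of = {c: min(int(p * n_states), n_states - 1)
                for c, p in p_fire.items()}
    return np.array([state_of[c] for c in codes])
```

On synthetic data where the target simply fires when a majority of its eight neighbors fire, all 256 patterns collapse onto just two states, an extreme version of the compression reported in the abstract.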
Friday, March 6, 2020 12:15PM - 12:27PM |
X27.00006: Effects of local excitations on large-scale brain network dynamics: Insights from coupled Wilson-Cowan oscillators under perturbative stimulation Evangelia Papadopoulos, Christopher Lynn, Demian Battaglia, Danielle Bassett Composed of many coupled dynamical units, the brain is a canonical example of a complex network. At the macroscale, it consists of large neuronal populations that generate time-varying activity and that interconnect via a web of anatomical links. However, despite recent progress, we still lack a mechanistic understanding of how large-scale brain networks shape system-wide dynamics, and specifically, how local changes in neural activity affect functional interactions across the brain as a whole. Motivated by a vast literature suggesting that synchronization of activity underlies the coordination of distinct brain areas, we combine structural connectivity data and biophysical modeling to study how regional excitations of activity modulate network-wide synchrony. By employing computational modeling, we examine how the impacts of excitations depend on the location of the perturbation, and, crucially, on the baseline state of the system. Furthermore, we uncover state-specific relationships between brain network properties and the effects of local excitations on the coherence of network dynamics. As a whole, this work provides insight into how local changes in neural activity can propagate via structural links to cause distributed alterations to inter-regional communication patterns. |
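A minimal numerical sketch of the modeling ingredient named in the title, Wilson-Cowan excitatory/inhibitory units coupled through a structural connectivity matrix, is below. The parameter values, coupling form, and Euler scheme are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_wilson_cowan(C, steps=2000, dt=0.01, tau=1.0,
                          w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0,
                          g=0.5, drive=1.0):
    """Euler integration of N coupled Wilson-Cowan units.
    C[i, j] couples the excitatory activity of region j into region i;
    `drive` stands in for a regional excitation.  Illustrative values."""
    n = C.shape[0]
    E = np.full(n, 0.1)
    I = np.full(n, 0.1)
    traj = np.empty((steps, n))
    for t in range(steps):
        inp_E = w_ee * E - w_ei * I + g * C.dot(E) + drive
        inp_I = w_ie * E - w_ii * I
        E = E + dt / tau * (-E + sigmoid(inp_E))
        I = I + dt / tau * (-I + sigmoid(inp_I))
        traj[t] = E
    return traj
```

Perturbing `drive` for a single region and measuring pairwise phase coherence of the resulting trajectories is the kind of experiment the abstract describes.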
Friday, March 6, 2020 12:27PM - 12:39PM |
X27.00007: Robust simplicity in the multimodal sensory control of insect flight. Simon Sponberg, Varun Sharma Animal movement combines many physiological and physical systems to enable agility and robust behavior. Frequently the behavior that is manifest can be described, at least in specific experimental contexts, by relatively low-order, and often linear, dynamical systems. How does this simplicity arise from the component systems, and are the emergent dynamics preserved in the face of changes of context? Insect flight is a challenging task with unstable mechanics and control requirements on the order of a single wingstroke. Using an agile hawk moth, Manduca sexta, tracking a robotic flower during foraging behavior, we have shown that the underlying frequency response is linear, time-invariant on the scale of tens of seconds, and relies on the linear superposition of vision and touch (via the moth's long proboscis). However, as light levels drop, the dynamics of the visual system adjust, producing slower responses and reduced visuomotor gains. Does the simple emergent behavior remain? We show that the mechanical, touch response to the flower also adjusts correspondingly to produce a robust linear superposition of the two cues across contexts. In both high- and low-light conditions the two sensory systems partition the frequency domain, and this partition shifts as light levels dim. |
Friday, March 6, 2020 12:39PM - 12:51PM |
X27.00008: Scalable maximally informative dimensions analysis of deep neural networks Jimmy Kim, David J. Schwab Maximally informative dimensions (MID) is a technique in neuroscience used to analyze neural responses to natural stimuli. It assumes that neurons are only sensitive to a low-dimensional subspace within the high-dimensional stimulus space and extracts those relevant dimensions by maximizing the mutual information between the neural response and stimulus projections. Despite its advantages, MID suffers from poor scalability of the optimization. As such, in practice, no more than a handful of dimensions can be found. Here, we present a method based on variational lower bounds of mutual information that allows for the efficient extraction of a large number of informative dimensions. We demonstrate this method by studying a deep neural network trained on CIFAR-10, and suggest possible applications towards the information-theoretic view of deep learning as well as a new, principled method of visualizing multiple different facets of a neuron. |
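The quantity MID maximizes, the mutual information between a response and a one-dimensional stimulus projection, can be written down directly with a histogram plug-in estimator. This sketch shows only that underlying objective, not the talk's variational lower-bound method; the function name and quantile binning are illustrative assumptions:

```python
import numpy as np

def projection_mi(stim, resp, v, n_bins=16):
    """Plug-in estimate, in bits, of I(resp; v . stim) for a binary
    response, using quantile bins over the projected stimulus.  MID
    searches over directions v to maximize this quantity."""
    z = stim.dot(v)
    edges = np.quantile(z, np.linspace(0, 1, n_bins + 1))
    edges[-1] += 1e-9  # make the top edge inclusive
    b = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, n_bins - 1)
    mi, p_r1 = 0.0, resp.mean()
    for k in range(n_bins):
        mask = b == k
        p_b = mask.mean()
        if p_b == 0:
            continue
        p_r1_b = resp[mask].mean()
        for p_joint, p_r in ((p_b * p_r1_b, p_r1),
                             (p_b * (1 - p_r1_b), 1 - p_r1)):
            if p_joint > 0:
                mi += p_joint * np.log2(p_joint / (p_b * p_r))
    return mi
```

On synthetic data where the response depends on only one stimulus direction, the estimate is large along that direction and near zero along an orthogonal one, which is the signal MID's optimization climbs.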
Friday, March 6, 2020 12:51PM - 1:03PM |
X27.00009: Inferring structure-function relationships in neural networks from geometric features of their error landscapes James Fitzgerald, Tirthabir Biswas Neural computation in biological and artificial networks relies on nonlinear synaptic integration. The synaptic connectivity between neurons in these networks is a critical determinant of overall network function. However, which features of connectivity are specifically required to generate measurable network-level computations remains largely unknown. Here we introduce a geometric framework for calculating and analyzing connectivity constraints in nonlinear recurrent neural networks of rectified linear units. We focus on network-level functions defined by steady-state responses to a finite set of input conditions. By analytically determining how well any network approximates the responses, we show that error landscapes typically have degenerate global minima with several flat and semi-flat dimensions. The latter emerge from the rectifying nonlinearities. Further, at increasing noise or error levels, topological transitions occur where the space of admissible solutions changes suddenly and drastically. This is reminiscent of sudden changes associated with phase transitions in physical systems. These results allow us to develop a formalism for extracting rigorous insights into neural network structure and function from the geometries and topological transitions of error surfaces. |
Friday, March 6, 2020 1:03PM - 1:15PM |
X27.00010: An olfactory pattern generator for functional neural circuit analysis in Drosophila larva Guangwei Si, Jacob Baron, Yu Feng, Aravinthan Samuel For physical stimuli, it is straightforward to engineer patterns that stimulate sensory circuits and probe their functional properties. However, this framework is generally not applicable to olfaction, since olfactory stimuli lack a well-defined stimulus space. Typical olfactory stimuli, fixed odor panels of pure chemicals, natural scents, or mixtures, lack the capability to systematically sample the olfactory sensory periphery. Here, we present a stimulation method that projects activity patterns directly onto the olfactory receptor neurons. This method is based on the flexible and precise mixing of an optimized set of primary odors using microfluidics. The primary odors were derived for the Drosophila larva to separately target each free variable in the olfactory code: the activity of an individual olfactory receptor neuron type. Our device allows us to program and deliver any mixture of primary odors from all combinatorial possibilities during an experiment, allowing selection of any receptor-neuron activity combination on demand. We combine this stimulus method with imaging of the olfactory representations in the Drosophila larva, from receptor neurons to interneurons of the antennal lobe and Kenyon cells of the mushroom body. |
Friday, March 6, 2020 1:15PM - 1:27PM |
X27.00011: Sequential and efficient neural-population coding of complex task information Sue Ann Koay, Stephan Y Thiberge, Carlos Brody, David Tank A crucial component of cortical computation is context: variables that indicate the external state of the world, and the internal state of the animal. However, the need to simultaneously represent many pieces of information in neural activity can also pose computational challenges for neural systems to overcome. We recorded from large neural populations in posterior cortices as mice performed a complex, dynamic task involving multiple interrelated variables. How are these variables represented together without crosstalk, and what of their time-dependent relationships to each other? We found that the neural encoding implied that correlated task variables were represented by uncorrelated modes in an information-coding subspace, which can in theory enable optimal decoding directions to be insensitive to neural noise levels. Across posterior cortex, principles of efficient coding thus applied to task-specific information, with neural-population modes as the encoding unit. Remarkably, this encoding function was multiplexed with rapidly changing, sequential neural dynamics, yet reliably followed slow changes in task-variable correlations through time. We can explain this as due to a mathematical property of high-dimensional spaces that the brain might exploit as a temporal scaffold. |
Friday, March 6, 2020 1:27PM - 1:39PM |
X27.00012: Heterogeneity of timescales in random networks with bistable units Nicolae Istrate, Merav Stern, Luca Mazzucato Recent experiments reveal that neural circuits operate in a regime with simultaneous presence of multiple timescales. Such heterogeneity of timescales was observed not only across different brain areas (Murray JD et al 2014) but also across neurons within the same circuit (Cavanagh SE et al 2016) during periods of ongoing activity, suggesting that it may be an intrinsic dynamical property of recurrent circuits. Here we investigate which neural mechanisms may support this heterogeneous distribution of timescales. We show that random neural networks with bistable units (Stern M et al 2014) naturally exhibit large heterogeneity of timescales across neurons in the presence of a distribution of self-couplings. We provide a biophysical interpretation for the bistable units in our rate network in terms of Hebbian assemblies. We show that, in recurrent spiking networks where excitatory and inhibitory neurons are arranged in assemblies, one can achieve a heterogeneous distribution of timescales when assembly sizes are unequal. We thus interpret the rate network self-couplings as potentiated synaptic couplings between neurons within the same assembly. Our results establish a novel theoretical framework to investigate the observed heterogeneity of intrinsic neuronal timescales. |
Friday, March 6, 2020 1:39PM - 1:51PM |
X27.00013: Recurrent Neural Networks Learn Simple Computations on Complex Time Series through Examples Jason Kim, Zhixin Lu, Danielle Bassett A hallmark of artificial and biological neural networks is their ability to represent and generalize complex information. Artificial neural networks generate novel images of cats after seeing many pictures of cats. Conference attendees generate creative new future research directions after carefully listening to a talk. How do neural networks represent, manipulate, and extrapolate complex information given only examples? Previous methods such as FORCE learning have demonstrated the ability of a neural network to replicate complex, and even chaotic, patterns of observed outputs in response to specific patterns of driving inputs. Here, we explain how a neural network further learns the underlying computations performed on the observed outputs. Specifically, we demonstrate that a neural network trained to output slightly translated or rotated chaotic manifolds in response to small driving inputs can extrapolate this output to generate largely translated or rotated chaotic manifolds in response to large driving inputs. We conclude with an analytic understanding of how this extrapolation occurs, yielding design principles for creating neural networks that can generalize knowledge. |
Friday, March 6, 2020 1:51PM - 2:03PM |
X27.00014: Universal scaling laws of interaction time distribution in honeybee and human social networks Sang Hyun Choi, Vikyath D Rao, Tim Gernat, Adam Hamilton, Gene Robinson, Nigel Goldenfeld Compared to the heavy-tailed inter-event time distribution, which reflects collective emergent properties, the duration of interaction events has received less attention but may reflect the variability in interaction behavior. Here we report measurements of trophallaxis and face-to-face event durations in honeybees showing that their distribution is heavy-tailed, as in human face-to-face interactions. We derive the power-law form by viewing the termination of an interaction as a particle escaping over an energy barrier. The variability within the population is represented by the distribution of barrier heights determined by extreme value theory. We find a universal scaling law connecting the exponent in the interaction time distribution to that in the barrier height distribution, which is verified by both honeybee and human data. Although less prominent than in humans, individual differences in honeybee interactivity, which are usually overlooked, are confirmed. Our work shows how individual differences can lead to universal patterns of behavior that transcend species and specific mechanisms of social interactions. |
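The barrier-escape picture above has a compact quantitative core: if barrier heights E are exponentially distributed with rate λ and an interaction lasts τ = exp(E/T) (an Arrhenius-type escape time), then P(τ > t) = t^(−λT), a power law whose exponent is set by the barrier-height distribution. A minimal sketch under those assumptions (the specific exponential form and parameter values are illustrative, not the paper's fitted model):

```python
import numpy as np

def interaction_durations(n, lam=2.0, temp=1.0, seed=0):
    """Sample n interaction durations from the barrier-escape picture:
    barrier heights E ~ Exponential(rate=lam), escape time exp(E/temp).
    The survival function is then P(tau > t) = t ** (-lam * temp)."""
    rng = np.random.default_rng(seed)
    E = rng.exponential(1.0 / lam, size=n)  # numpy takes scale = 1/rate
    return np.exp(E / temp)
```

With λ = 2 and T = 1 the survival function is t^(−2), so the median duration is √2 and about 1% of interactions last longer than 10 time units, which the sampler reproduces.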
Friday, March 6, 2020 2:03PM - 2:15PM |
X27.00015: Social inhibition maintains adaptivity and consensus of honey bees foraging in dynamic environments Subekshya Bidari, Zachary Kilpatrick, Orit Peleg To effectively forage in natural environments, organisms must adapt to changes in the quality and yield of food sources across multiple timescales. How do individuals foraging in groups use private observations and the opinions of their neighbors in changing environments? We address this problem in the context of honey bee colonies whose inhibitory social interactions promote adaptivity and consensus needed for effective foraging. Individual and social interactions within a mathematical model of collective decisions shape the nutrition yield of a group foraging from feeders with temporally switching quality. Social interactions improve foraging from a single feeder if temporal switching is fast or feeder quality is low. When the colony chooses from multiple feeders, the most effective form of social interaction is direct switching, whereby bees flip the opinion of nestmates foraging at lower yielding feeders. Model linearization shows that effective social interactions increase the fraction of the colony at the correct feeder and the rate at which bees reach that feeder. Our mathematical framework allows us to compare a suite of social inhibition mechanisms, suggesting experimental protocols for revealing effective colony foraging strategies in dynamic environments. |
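The "direct switching" interaction named above can be caricatured with a two-feeder rate model in which bees at the lower-yield feeder are flipped toward the other feeder at a rate proportional to its quality. This is a toy sketch under assumed dynamics, not the authors' equations; the function name and parameters are hypothetical:

```python
def forage_direct_switching(q1, q2, beta=1.0, steps=5000, dt=0.01):
    """Toy two-feeder model: x_i is the colony fraction at feeder i, and
    direct switching moves bees between feeders at rate beta times the
    quality of the destination feeder.  Euler integration."""
    x1, x2 = 0.5, 0.5
    for _ in range(steps):
        s12 = beta * x1 * q2  # feeder-1 bees switched toward feeder 2
        s21 = beta * x2 * q1
        dx1 = s21 - s12
        x1 += dt * dx1
        x2 -= dt * dx1
    return x1, x2
```

The fixed point puts x1/x2 = q1/q2, i.e. the colony fraction at each feeder tracks relative quality, the kind of consensus-plus-adaptivity outcome the abstract attributes to effective social inhibition.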