Bulletin of the American Physical Society
APS March Meeting 2021
Volume 66, Number 1
Monday–Friday, March 15–19, 2021; Virtual; Time Zone: Central Daylight Time, USA
Session X12: Physics of Neural Systems II
Sponsoring Units: DBIO | Chairs: John Beggs, Indiana Univ - Bloomington; Andrew Leifer, Princeton University |
Friday, March 19, 2021 8:00AM - 8:12AM Live |
X12.00001: Reaction-diffusion modeling of neurotransmitter processing at a high frequency synapse Elham Alkhammash, Ivan L'Heureux, Catherine E Morris, Bela Joos In the weakly electric fish Eigenmannia (glass knifefish), the high-frequency (200-550 Hz) electric organ discharge (EOD) is driven by high-frequency cholinergic synaptic input onto the electrocytes at their electroplaques. Assuming periodic release of acetylcholine (ACh) into the cylindrical synaptic gap, we numerically solve a one-dimensional reaction-diffusion model at 200 Hz and 500 Hz. The model includes the diffusion of ACh and its interactions with acetylcholinesterase (AChE) and ACh receptors (AChRs). At 500 Hz, a higher AChE/ACh ratio is needed to remove ACh from the cleft between consecutive releases. Only a small fraction of the ACh molecules reaches the AChRs, and residual ACh remains from the preceding release. Previous computational studies showed that this persistent ACh should not impede high-frequency electrocyte firing, provided the ensuing cholinergic current is subthreshold for triggering firing. Our results suggest that the EOD's upper-bound frequency is reached when that persistent cholinergic current exceeds the firing threshold; the observed maximum frequency in Eigenmannia individuals is around 550 Hz. |
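To make the class of model concrete, here is a minimal numerical sketch, not the authors' code: explicit finite differences for ACh diffusing along the cleft with first-order hydrolysis by AChE and periodic pulsed release. All parameter values (cleft length, diffusion coefficient, hydrolysis rate, pulse amplitude) are illustrative assumptions.

import numpy as np

L, nx = 1.0e-6, 101            # cleft length in m and grid points (assumed)
D = 4.0e-10                    # ACh diffusion coefficient, m^2/s (assumed)
k_hyd = 1.0e4                  # first-order AChE hydrolysis rate, 1/s (assumed)
freq = 500.0                   # release frequency, Hz
dx = L / (nx - 1)
dt = 0.2 * dx**2 / D           # stable explicit Euler step
steps_per_period = int(round(1.0 / (freq * dt)))

c = np.zeros(nx)               # ACh concentration profile along the cleft
for step in range(5 * steps_per_period):          # simulate five release cycles
    if step % steps_per_period == 0:
        c[0] += 1.0                               # quantal release (arb. units)
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    c += dt * (D * lap - k_hyd * c)               # diffusion + hydrolysis
print("residual ACh just before the next release:", c.sum() * dx)

Raising k_hyd (the AChE activity) shrinks the residual, which is the trade-off the abstract quantifies at 200 Hz versus 500 Hz.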
Friday, March 19, 2021 8:12AM - 8:24AM Live |
X12.00002: Generalized ORGaNICs: Towards a Unifying Framework for Neural Dynamics Shivang Rawat, David Heeger, Stefano Martiniani It has been hypothesized that complex functions across different brain areas are accomplished through a set of canonical neural computations of the same form. Yet a theoretical framework that employs these computational motifs to explain neural activity across different neural systems is still lacking. Oscillatory Recurrent Gated Neural Integrator Circuits (ORGaNICs) is a recently proposed framework capable of simulating key neurophysiological, cognitive, and perceptual phenomena, including working memory, sensory processing and attention, and motor control. Here, we derive a generalized class of ORGaNICs, enabling us to broaden the range of cognitive and perceptual phenomena that can be recapitulated by this framework under realistic biophysical constraints. We show that these circuits can simulate stochastic gamma-band oscillatory activity in primary visual cortex and the stimulus dependence of oscillation amplitude and frequency. We also characterize the complex dynamics of working-memory delay-period activity in the prefrontal cortex (PFC). We show that generalized ORGaNICs can replicate brief bursts of narrow-band gamma oscillations in the PFC while maintaining self-sustained attractor states, and we assess the circuit's robustness to noise. |
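The full generalized-ORGaNICs equations are not given in the abstract; the following is only a toy gated leaky recurrent integrator in the same spirit, with the gains a and b, the recurrent weights, and the time constants chosen purely for illustration.

import numpy as np

def simulate(z, W, a=1.0, b=1.0, tau=0.01, dt=1e-4):
    """z: (T, N) input drive; W: (N, N) recurrent weights (all assumed)."""
    T, N = z.shape
    y = np.zeros(N)
    out = np.empty((T, N))
    for t in range(T):
        y_hat = W @ y                            # recurrent drive
        dy = (-y + b * z[t] + a * y_hat) / tau   # gated leaky integration
        y = y + dt * dy
        out[t] = y
    return out

N, T = 10, 2000
W = np.eye(N)                      # identity recurrence: perfect integrator
z = np.zeros((T, N))
z[:100] = 1.0                      # brief input pulse
traj = simulate(z, W)
print("activity held after the pulse:", traj[-1].mean())

With a = 1 and identity recurrence the circuit integrates its input and then holds it, a caricature of delay-period working-memory activity; recurrent matrices with complex eigenvalues instead produce oscillatory activity of the kind the abstract studies.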
Friday, March 19, 2021 8:24AM - 8:36AM Live |
X12.00003: Quantal slowing in coupled rhythmogenic neural networks: Applications to breathing Taylor Womack |
Friday, March 19, 2021 8:36AM - 8:48AM Live |
X12.00004: Local homeostatic regulation of the spectral radius of echo-state networks Fabian Schubert, Claudius Gros Criticality is considered an important property for recurrent neural networks (RNNs): close to a critical phase transition, RNNs show improved performance in sequential information processing. The theory of reservoir computing provides a basis for understanding recurrent neural computation, but it requires adjusting global network parameters so that the network operates in a state close to criticality. In echo-state networks, the key quantity is the spectral radius of the recurrent synaptic weight matrix, yet a direct calculation of the spectral radius is not biologically plausible. We show, however, that there exists a local and biologically plausible synaptic scaling mechanism, termed flow control, that can control the spectral radius while the network operates under the influence of external input. We demonstrate the effectiveness of the new adaptation rule by applying it to echo-state networks and testing their performance on a time-delayed XOR task on random binary input sequences. Network performance remains stable over a wide range of input strengths, which makes our mechanism more flexible to changes in the external driving than scaling mechanisms that use a fixed setpoint of neural activity. |
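For context, this sketch shows the conventional, global operation that flow control replaces: rescaling the recurrent weight matrix of an echo-state network to a target spectral radius, which requires a full eigenvalue computation no single synapse could perform. Network size, weights, and target radius are assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, rho_target = 500, 0.95
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W *= rho_target / np.max(np.abs(np.linalg.eigvals(W)))   # global, non-local step

W_in = rng.normal(0.0, 1.0, N)
x = np.zeros(N)
for u in rng.integers(0, 2, 200).astype(float):          # random binary drive
    x = np.tanh(W @ x + W_in * u)                        # reservoir update
print("reservoir state norm:", np.linalg.norm(x))

The abstract's contribution is a local rule that achieves the same control of the spectral radius online, from quantities available at each neuron, while the network is being driven.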
Friday, March 19, 2021 8:48AM - 9:00AM Live |
X12.00005: Nonequilibrium Green's functions for functional connectivity in the brain Francesco Randi, Andrew M Leifer A theoretical framework describing the set of interactions between neurons in the brain, or functional connectivity, should include dynamical functions representing the propagation of signals from one neuron to another. Green's functions and response functions are natural candidates but, while conceptually very useful, they are usually defined only for linear, time-translation-invariant systems. The brain, by contrast, behaves nonlinearly and in a time-dependent way. In this talk, I will show how nonequilibrium Green's functions can be used to describe the time-dependent functional connectivity of a continuous-variable network of neurons and how the connectivity is related to measurable response functions, and I will present illustrative numerical calculations inspired by C. elegans. |
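In the linear, time-translation-invariant special case mentioned above, the Green's function has a closed form; this sketch computes it for a small random rate network tau dx/dt = -x + W x + u(t), with weights and time constant as assumptions. The talk's nonequilibrium formalism is precisely what replaces this expression when linearity and time invariance fail.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N, tau = 5, 0.1
W = 0.5 * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))     # assumed weights

def greens_function(t, W, tau):
    """Linear response matrix G(t) for t >= 0; column j is the network's
    response to an impulse of input delivered to neuron j at t = 0."""
    return expm(t * (W - np.eye(len(W))) / tau) / tau

print(greens_function(0.05, W, tau)[:, 0])   # responses to a kick at neuron 0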
Friday, March 19, 2021 9:00AM - 9:12AM Live |
X12.00006: Biological learning of local motion detectors Tiberiu Tesileanu, Alexander Genkin, Dmitri Chklovskii Motion detectors in the brain are typically localized, exploiting correlations between light intensities at nearby locations processed with different temporal filters. If motion is global, however, in the sense that the same transformation is applied uniformly across the entire field of view, locations arbitrarily far apart could in principle be used for motion detection. Here we provide a normative model explaining why more distant connections are not used: we show that if the brain is adapted to natural visual statistics, localized interactions emerge even when the wiring costs of long-range connections are ignored. Our model further provides a biologically plausible mechanism for learning the connectivity pattern of local motion detectors. We adapt a method initially designed for learning infinitesimal generators of global motion, and show that, when the training data contain localized patterns and/or localized motion, the learned generators naturally cluster into groups involving small sets of nearby pixels. Our learning algorithm is based on non-negative similarity matching, a normative approach that allows us to derive, from an objective function, a biologically plausible circuit that solves the task. |
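As a toy illustration of learning a motion generator (by ordinary least squares, not the authors' biologically plausible similarity-matching circuit): fit A in x_{t+1} ≈ (I + A) x_t from frame pairs related by a one-pixel shift. The fitted generator concentrates on the first off-diagonal, i.e., the learned motion detector is local. Data statistics here are assumptions.

import numpy as np

rng = np.random.default_rng(2)
P, T = 32, 5000
X0 = np.cumsum(rng.normal(size=(P, T)), axis=0)   # spatially correlated frames
X1 = np.roll(X0, 1, axis=0)                       # one-pixel translation

# least-squares estimate of the generator: I + A = X1 X0^T (X0 X0^T)^{-1}
A = X1 @ X0.T @ np.linalg.inv(X0 @ X0.T) - np.eye(P)
off = np.abs(A - np.diag(np.diag(A)))
print("fraction of off-diagonal mass on nearest neighbors:",
      np.abs(np.diag(A, -1)).sum() / off.sum())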
Friday, March 19, 2021 9:12AM - 9:24AM Live |
X12.00007: Using Neural Networks for Dual Dimensionality Reduction Eslam Abdelaleem, Ilya M Nemenman When studying biological or other complex systems, one often needs to identify correlated features of the system's observables, later to be included in the system's models. However, those observables are often multidimensional, with many features contributing to the correlation. It is then the job of data analysis to identify the smallest subset of features that encapsulates such correlations. Here we develop a deep-learning-based method for performing a dual dimensionality reduction: compressing two multidimensional variables while maximizing the correlation between their compressed descriptions. The method can detect nonlinear statistical dependencies between the variables. |
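A minimal sketch of the idea, with an architecture and loss that are assumptions rather than the authors' method: two encoders compress X and Y while a correlation-based loss keeps the compressed descriptions maximally dependent, in the spirit of deep CCA. Linear encoders are used for brevity; the abstract's method additionally captures nonlinear dependencies.

import torch

def corr_loss(u, v):
    """Negative mean per-dimension Pearson correlation of the two codes."""
    u = (u - u.mean(0)) / (u.std(0) + 1e-8)
    v = (v - v.mean(0)) / (v.std(0) + 1e-8)
    return -(u * v).mean()

dx, dy, dz, n = 20, 30, 2, 1000
X = torch.randn(n, dx)
Y = torch.zeros(n, dy)
Y[:, :dx] = X                         # shared structure (toy example)
Y += 0.1 * torch.randn(n, dy)         # plus independent noise

fx = torch.nn.Linear(dx, dz)          # encoder for X
fy = torch.nn.Linear(dy, dz)          # encoder for Y
opt = torch.optim.Adam(list(fx.parameters()) + list(fy.parameters()), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = corr_loss(fx(X), fy(Y))
    loss.backward()
    opt.step()
print("correlation of compressed descriptions:", -loss.item())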
Friday, March 19, 2021 9:24AM - 9:36AM Live |
X12.00008: Crystallinity characterization of white matter in the human brain Erin Teich, Matthew Cieslak, Barry Giesbrecht, Jean M. Vettel, Scott T. Grafton, Theodore D. Satterthwaite, Danielle Bassett White matter microstructure underpins cognition and function in the human brain through the facilitation of neuronal communication, and the non-invasive characterization of this structure remains an elusive goal in the neuroscience community. Efforts to assess white matter microstructure are hampered by the sheer amount of information needed for characterization. Current techniques address this problem by representing white matter features with single scalars that are often not easy to interpret. Here, we address these issues by introducing tools from materials science for the characterization of white matter microstructure. We investigate structure on a mesoscopic scale by analyzing its homogeneity and determining which regions of the brain are structurally homogeneous, or "crystalline" in the context of materials science. We find that crystallinity is a reliable metric that varies across the brain along interpretable lines of anatomical difference. We also parcellate white matter into "crystal grains," or contiguous sets of voxels of high structural similarity, and find overlap with other white matter parcellations. Our results provide new means of assessing white matter microstructure on multiple length scales, and open new avenues of future inquiry. |
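A toy version of such a crystallinity score, with the voxel descriptors and similarity measure as stand-in assumptions: treat each voxel as a unit feature vector (e.g., a discretized orientation distribution function) and score local homogeneity as the mean cosine similarity with its lattice neighbors; contiguous high-similarity voxels would form the "crystal grains" of the abstract.

import numpy as np

rng = np.random.default_rng(3)
nx, ny, k = 20, 20, 8
odf = rng.random((nx, ny, k))                     # stand-in voxel descriptors
odf /= np.linalg.norm(odf, axis=-1, keepdims=True)

crys = np.zeros((nx, ny))
for sx, sy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:  # 4-neighbor lattice
    crys += np.sum(odf * np.roll(odf, (sx, sy), axis=(0, 1)), axis=-1)
crys /= 4.0                                        # mean neighbor similarity
print("mean crystallinity:", crys.mean())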
Friday, March 19, 2021 9:36AM - 9:48AM Live |
X12.00009: Hysteresis in Models of Neuronal Dynamics Cheyne Weis, Michel Fruchart, Alexey Galda, Ryo Hanai, Peter Littlewood, Vincenzo Vitelli The Wilson-Cowan model describes a variety of statistical behaviors in neocortical dynamics. Hysteresis has been observed in the Wilson-Cowan equations since their conception and has been modeled as an underlying mechanism for attention and memory. In a pair of coupled Wilson-Cowan equations, adiabatically varying the external stimulus in a closed loop can transfer the system between fixed-point steady states. In a two-neuron system, the role of limit cycles in hysteresis is restricted because at most one stable limit cycle exists for any given set of parameters. We demonstrate that as the number of coupled neurons increases, the number of possible stable limit cycles, and the proportion of parameter space they occupy, also increases. We then show that the system exhibits hysteresis in transitions between the multiple limit-cycle steady states. When the external stimulus is chosen such that the equations exhibit a discrete symmetry, the splitting and merging of stable and unstable limit cycles is observed. These results provide a new perspective on hysteresis-based phenomena in neuronal dynamics. |
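The fixed-point hysteresis described above can be reproduced with a textbook two-population Wilson-Cowan pair and an adiabatically swept stimulus; the parameter values below are illustrative assumptions, not those of the talk.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sweep(P_values, wee=16.0, wei=12.0, wie=15.0, wii=3.0, dt=0.01):
    """Relax the E/I pair at each stimulus value P, sweeping adiabatically."""
    E, I, out = 0.0, 0.0, []
    for P in P_values:
        for _ in range(2000):
            dE = -E + sigmoid(wee * E - wei * I + P)
            dI = -I + sigmoid(wie * E - wii * I)
            E, I = E + dt * dE, I + dt * dI
        out.append(E)
    return np.array(out)

P = np.linspace(-5.0, 5.0, 60)
up = sweep(P)                      # forward sweep: starts on the low branch
down = sweep(P[::-1])[::-1]        # backward sweep: starts on the high branch
print("max up/down mismatch:", np.abs(up - down).max())  # > 0 => hysteresis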
Friday, March 19, 2021 9:48AM - 10:00AM Live |
X12.00010: A phenomenological model explains critical periods in learning Audrey Sederberg, Ilya M Nemenman An intriguing feature of learning in animals is the existence of critical periods: windows of time in which learning is easy, and after which the ability to learn is dramatically diminished. Classic experiments in barn owls and songbirds showed that juveniles can adapt to perturbed sensory feedback, such as a shift in the visual field or in the pitch of a song, and that this experience shapes their susceptibility to adapting to shifts as adults. Recent work from our group has shown that certain aspects of adult learning in a pitch-shift experiment are well described by the Bayesian integration of feedback. Building on the Langevin-dynamics formulation of this model of Bayesian learning, we develop a general approach to find conditions under which critical periods naturally appear as a result of Bayes-optimal learning. Our results suggest that two factors, the distribution of feedback signals and coupling to latent dynamical variables, can account for a range of critical-period observations, including rapid learning in juveniles and experience-dependent learning dynamics in adults. We discuss testable predictions for other systems. |
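A one-variable caricature of the underlying intuition (assumptions throughout, not the talk's model): a Bayes-optimal learner estimating a sensory shift from noisy feedback has a Kalman gain, its effective learning rate, that is large early and decays as the posterior sharpens, yielding a critical-period-like window of high plasticity.

import numpy as np

rng = np.random.default_rng(4)
true_shift, sigma_obs = 1.0, 0.5     # perturbation and feedback noise (assumed)
mu, var = 0.0, 1.0                   # Gaussian prior belief about the shift
gains = []
for t in range(100):
    y = true_shift + sigma_obs * rng.normal()    # noisy feedback sample
    gain = var / (var + sigma_obs**2)            # Bayes-optimal learning rate
    mu += gain * (y - mu)                        # posterior mean update
    var = (1.0 - gain) * var                     # posterior variance shrinks
    gains.append(gain)
print("learning rate, early vs late:", gains[0], gains[-1])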
Friday, March 19, 2021 10:00AM - 10:12AM Live |
X12.00011: Ensembling Diverse Neural Networks for Improved Protein Contact Prediction Wendy Billings, Dennis Della Corte Predicting residue-residue contacts from protein sequence is an important task, as contacts constrain the prediction of folded protein structures. We propose that the deep-learning practice of ensembling can improve protein contact prediction by combining the outputs of distinct neural networks. We show that ensembling the predictions made by different groups in the recent Critical Assessment of Protein Structure Prediction (CASP13) outperforms every individual group. Further, we show that contacts derived from the distance predictions of three additional deep neural networks (AlphaFold, trRosetta, and our own ProSPr) can be substantially improved by ensembling all three networks. Finally, we demonstrate that ensembling these networks with the best CASP13 group creates a superior contact prediction tool. These results indicate that combining the predictions of diverse, high-quality neural networks can improve protein contact prediction and outperform the best individual models. We call for increased availability of protein contact prediction methods and for the creation of a better contact benchmark set, in order to enable a community-based ensemble approach to superior protein contact prediction. |
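The simplest form of the ensembling step might look like the following sketch, in which the three predictor outputs are random stand-ins rather than real AlphaFold, trRosetta, or ProSPr predictions: symmetrize and average the per-pair contact probability maps, then rank residue pairs by the ensembled probability.

import numpy as np

L_seq = 150
maps = [np.random.rand(L_seq, L_seq) for _ in range(3)]   # stand-in outputs
ens = np.mean([0.5 * (m + m.T) for m in maps], axis=0)    # symmetrize, average

iu = np.triu_indices(L_seq, k=6)                 # pairs at least 6 residues apart
order = np.argsort(ens[iu])[::-1]                # rank by ensembled probability
top = [(int(iu[0][i]), int(iu[1][i])) for i in order[:10]]
print("top predicted contacts:", top)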
Friday, March 19, 2021 10:12AM - 10:24AM On Demand |
X12.00012: Dynamically learning neural interactions that improve information Martin Tchernookov, Vijay Singh In the brain, information about an external signal is encoded in the responses of a set of interacting neurons. One expects that the underlying neural interactions are not random but result from the statistical features of the expected stimuli and neural responses. By estimating the mutual information between the stimulus and the response of a set of interacting neurons, we show that, on average, randomly connected neurons indeed provide no improvement in mutual information over non-interacting neurons, because interactions that increase and decrease the mutual information are equally likely. To achieve an information increase, the network has to search the space of possible interactions, and this search comes at an energetic cost. To demonstrate a possible unsupervised search mechanism, we develop a model of orientation-selective neurons in which the interactions between the neurons are learned dynamically from the responses to external stimuli. Using the temporal history of the neural correlations, the network adjusts its interactions dynamically, leading to higher mutual information between stimuli and neural responses. |
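The sign-dependence of interactions on information can be seen in a linear-Gaussian toy model (an illustrative assumption, not the authors' orientation-selective model): for responses r = w s + noise, the stimulus-response mutual information is I = (1/2) log det C_r - (1/2) log det C_noise, and a noise correlation of one sign raises I while the opposite sign lowers it, so random interactions average out.

import numpy as np

def gaussian_mi(w, C_noise, sigma_s=1.0):
    """Stimulus-response mutual information for r = w*s + Gaussian noise."""
    C_r = sigma_s**2 * np.outer(w, w) + C_noise   # total response covariance
    return 0.5 * (np.linalg.slogdet(C_r)[1] - np.linalg.slogdet(C_noise)[1])

w = np.array([1.0, 1.0])                          # both neurons prefer s
C_plus = np.array([[1.0, 0.5], [0.5, 1.0]])       # positively correlated noise
C_minus = np.array([[1.0, -0.5], [-0.5, 1.0]])    # negatively correlated noise
print("independent:", gaussian_mi(w, np.eye(2)))
print("positive corr:", gaussian_mi(w, C_plus))   # lower MI
print("negative corr:", gaussian_mi(w, C_minus))  # higher MI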