Bulletin of the American Physical Society
APS March Meeting 2020
Volume 65, Number 1
Monday–Friday, March 2–6, 2020; Denver, Colorado
Session L23: Physics of the Brain: Structure and Dynamics I (Focus Session; Undergrad Friendly)
Sponsoring Units: DBIO, GSNP
Chair: Tatyana Sharpee, Salk Institute for Biological Studies
Room: 304
Wednesday, March 4, 2020 8:00AM - 8:36AM
L23.00001: A structured representation of odors in the fly mushroom body
Invited Speaker: Elizabeth J Hong
The mushroom body (MB) is a third-order olfactory area in the insect brain required for adaptive olfactory behaviors, such as learning odor associations, and is loosely analogous to olfactory cortex in mammals. In the vinegar fly Drosophila melanogaster, the chemical selectivity of each of the ~2200 principal neurons of the MB, called Kenyon cells (KCs), is determined by the pooling of odor information from a small random subset of the ~50 channels of olfactory input. The random integration of olfactory inputs in KCs contrasts with connectivity rules in other third-order olfactory areas, such as the lateral horn, where feature selectivity is determined by the invariant integration of specific combinations of olfactory inputs. The distinct connectivity statistics in different third-order olfactory areas have led to the idea that odor representations in the MB are “unstructured” and individualized in every brain, and must acquire meaning through learning, whereas invariant, chemotopic representations in the lateral horn support innate behaviors. We present a method using genetically enabled, in vivo two-photon functional imaging to measure near-complete population representations of odors in KCs. We find that the relationship among neural representations of odors in the MB is invariant across individual brains. Furthermore, we apply a simple computational model of MB odor responses to illustrate that sparse, random connectivity can result in invariant relationships among odor representations, the structure of which is at least partially dictated by the correlational structure of the peripheral olfactory code. However, the experimentally observed structure of MB representational space deviates significantly from the predictions of the model for some regions of odor space.
We discuss possible reasons for this discrepancy and future experimental directions to distinguish among these possibilities.
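The sparse random pooling this abstract describes can be illustrated with a toy model. This is a minimal sketch, not the authors' model: the claw count, the lognormal odor statistics, and the quantile thresholding rule below are all illustrative assumptions. The point it demonstrates is that independent random wirings ("different individuals") nonetheless produce similar odor-odor similarity, echoing the invariance result.

```python
import numpy as np

rng = np.random.default_rng(0)
n_glomeruli, n_kc, n_claws = 50, 2200, 6   # ~50 input channels, ~2200 KCs; 6 claws per KC is an assumption

def random_kc_wiring(rng):
    """Each KC pools a small random subset of glomeruli (binary weights)."""
    W = np.zeros((n_kc, n_glomeruli))
    for k in range(n_kc):
        W[k, rng.choice(n_glomeruli, size=n_claws, replace=False)] = 1.0
    return W

# Two correlated "odors" at the periphery (hypothetical activity patterns)
odor_a = rng.lognormal(size=n_glomeruli)
odor_b = 0.7 * odor_a + 0.3 * rng.lognormal(size=n_glomeruli)

def kc_similarity(W, sparseness=0.95):
    """Correlation between sparse KC population responses to the two odors."""
    r = np.column_stack([W @ odor_a, W @ odor_b])
    thr = np.quantile(r, sparseness, axis=0)   # threshold enforcing ~5% active KCs
    r = np.maximum(r - thr, 0.0)
    return np.corrcoef(r[:, 0], r[:, 1])[0, 1]

# Independent random wirings yield similar odor-odor similarity values
sims = [kc_similarity(random_kc_wiring(rng)) for _ in range(3)]
```

Despite each "individual" having a different random wiring, the pairwise similarity of odor representations is nearly the same, because it is inherited from the correlational structure of the peripheral input.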
Wednesday, March 4, 2020 8:36AM - 8:48AM
L23.00002: Inferring causality in highly synchronized dynamics
Josuan Calderon, Gordon Berman
The brain is a complex system with intricate neural dynamics, exhibiting interactions that are thought to be crucial for emergent cognitive functions. Causality methods provide a powerful tool for the characterization of these functional circuits by identifying directed functional interactions from time-series data. A frequently stated hypothesis is that synchronization of oscillatory activity plays a key role in the communication of information between distant sites of the brain. However, quantitatively assessing the strength and the direction of these interactions has proven difficult, especially in the highly synchronized states that are often observed. Here we explore how synchronization affects the capability to mathematically measure causal interactions in both artificial systems and data. Performing a comparative analysis of often-used causality metrics, we show how synchronization introduces biases. These results suggest a new framework that could be used to assess causality across a wide range of synchronization states in the brain and elsewhere.
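One of the often-used causality metrics this abstract refers to is Granger causality. The sketch below is an illustrative stand-in, not the authors' analysis: a unidirectionally coupled AR(1) pair (coupling strength and noise levels are assumptions) on which a simple lagged-regression Granger measure correctly recovers the direction of drive. The authors' point is that this kind of asymmetry becomes unreliable as the two signals synchronize.

```python
import numpy as np

rng = np.random.default_rng(1)

def coupled_series(c, n=5000):
    """Unidirectionally coupled AR(1) pair: x drives y with strength c."""
    x, y = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        x[t] = 0.8 * x[t - 1] + rng.normal(scale=0.5)
        y[t] = 0.8 * y[t - 1] + c * x[t - 1] + rng.normal(scale=0.5)
    return x, y

def granger_gain(src, dst, lag=1):
    """Log variance ratio: does adding src's past improve prediction of dst?"""
    past_d, past_s, fut = dst[:-lag], src[:-lag], dst[lag:]
    res_r = fut - np.polyval(np.polyfit(past_d, fut, 1), past_d)    # restricted model
    A = np.column_stack([past_d, past_s, np.ones_like(fut)])
    res_f = fut - A @ np.linalg.lstsq(A, fut, rcond=None)[0]        # full model
    return np.log(res_r.var() / res_f.var())

x, y = coupled_series(c=0.5)
forward, backward = granger_gain(x, y), granger_gain(y, x)   # forward should dominate
```

At this moderate coupling the forward measure clearly exceeds the backward one; pushing the coupling toward full synchrony is where such metrics begin to mislead.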
Wednesday, March 4, 2020 8:48AM - 9:00AM
L23.00003: Modularity allows classification of human brain networks during music and speech perception
Melia Bonomo, Christof Karmonik, Anthony K Brandt, J Todd Frazier
Therapeutic music engagement is effective for improving cognitive health in patients suffering from neurological disease or trauma; however, little is known about the mechanism of action. Here, we investigated a means to quantify individual differences in functional brain activity while subjects listened to a variety of auditory pieces. Modularity was used to measure the degree to which functional activity within a group of brain regions was more highly correlated than activity between groups. We found consistent modules of the brain regions responsible for auditory processing among subjects, but differing whole-brain connection patterns and co-activation of regions responsible for autobiographical memory, prospection, and processing emotion ultimately led to differing modular structure. Significant trends were seen for individuals with higher or lower modularity during their self-selected musical piece. The use of modularity as a classifier of functional brain activity during auditory processing paves the way for creating personalized music therapy interventions and understanding how music benefits the brain.
Wednesday, March 4, 2020 9:00AM - 9:12AM
L23.00004: Synchronization, waves and stochasticity in spatially structured neuronal networks
Jonas Ranft, Anirudh Kulkarni, Vincent Hakim
Synchronization between distant brain regions, in the 20-30 Hz frequency range, has been observed in areas such as V1 or in the motor cortex during movement preparation. In order to shed light on these data, we have revisited the synchronization properties of distinct oscillating local Excitatory-Inhibitory (E-I) modules induced by distance-dependent long-range excitation. First, focusing on the sparsely synchronized oscillation regime which prevails in vivo, we have developed a rate model that accurately describes the stochastic oscillations of a single spiking E-I module. Second, we have considered the case of a chain of E-I modules with long-range excitation that decreases with distance. For modules of large sizes, complex dynamical regimes are observed in a sufficiently long chain when long-range excitation mainly targets excitatory neurons. Synchronization of the module oscillations is otherwise observed. For modules with a moderate and biologically realistic size, stochasticity plays an important role in the observed dynamics. We show that its effect can be quantitatively described by modifications of the well-known Edwards-Wilkinson or KPZ equations. We analyse the resulting stochastic dynamics and discuss their relation to experimental observations.
Wednesday, March 4, 2020 9:12AM - 9:24AM
L23.00005: Searching for emergent long time scales without fine tuning
Xiaowen Chen, William S Bialek
Most animal and human behavior occurs on time scales much longer than the response times of individual neurons. In many cases it is plausible that these long time scales emerge from the recurrent dynamics of electrical activity in networks of neurons. In linear models, time scales are set by the eigenvalues of a dynamical matrix whose elements measure the strengths of connections between neurons. It is not clear to what extent connection strengths need to be tuned in order to generate sufficiently long time scales; in some cases, one needs not just a single long time scale but a whole range. For a system with random symmetric connections, random matrix theory allows us to show that imposing a global stability constraint is sufficient to generate a diverging density of arbitrarily slow modes. But as soon as the detection mechanism for stability is set to be biologically plausible, these modes disappear for all system sizes. We will give a progress report on the more realistic, and challenging, case of asymmetric interactions.
Wednesday, March 4, 2020 9:24AM - 10:00AM
L23.00006: Margin learning in spiking neural networks
Invited Speaker: Robert Gütig
Neurons in the brain receive inputs from thousands of afferents. The high dimensionality of neural input spaces is tightly linked to the ability of neurons to realize difficult classification tasks through simple decision surfaces. However, this advantage of high-dimensional neural representations comes at a price: learning is difficult in high-dimensional spaces. In particular, a neuron's ability to generalize from a limited number of training examples can be impaired by overfitting when the number of free parameters, i.e., synaptic efficacies, is large.
Wednesday, March 4, 2020 10:00AM - 10:12AM
L23.00007: Randomly connected networks generate emergent selectivity and predict decoding properties of large populations of neurons
Audrey Sederberg, Ilya M Nemenman
Modern recording methods enable sampling of thousands of neurons during the performance of behavioral tasks, raising the question of how recorded activity relates to theoretical models. In the context of decision making, functional connectivity between choice-selective cortical neurons was recently reported [1]. The straightforward interpretation of these data suggests the existence of selective pools of inhibitory and excitatory neurons. Computationally investigating an alternative mechanism for these experimental observations, we find that a randomly connected network of excitatory and inhibitory neurons generates single-cell selectivity, patterns of pairwise correlations, and indistinguishable excitatory and inhibitory readout weight distributions, as in experimental observations. We predict that, for this task, there are no anatomically defined subpopulations of neurons representing choice, and that the choice preference of a particular neuron changes with the details of the task. We suggest that distributed stimulus selectivity and functional organization in population codes are emergent properties of randomly connected networks.
Wednesday, March 4, 2020 10:12AM - 10:24AM
L23.00008: Impact of correlated connections in large recurrent networks with mesoscopic structure
Alexander Kuczala, Tatyana Olegivna Sharpee
Random recurrent networks serve as a useful tool for the tractable analysis of large neural networks. The spectrum of the connectivity matrix determines the network's linear dynamics as well as the stability of the nonlinear dynamics. Knowledge of the onset of chaos helps determine the network's computational capabilities and memory capacity. However, fully homogeneous random networks lack the nontrivial structures found in real-world networks, such as cell types and plasticity-induced correlations in neural networks. We address this deficiency by investigating the impact of correlations between forward and reverse connections, which may depend on the neuronal type. Using random matrix theory, we derive a set of self-consistent equations that efficiently compute the eigenvalue spectrum of large random matrices with block-structured correlations. The inclusion of structured correlations distorts the eigenvalue distribution in a nontrivial way; the distribution is neither a circle nor an ellipse. We find that layered networks with strong interlayer correlations have gapped spectra. For antisymmetric layered networks, oscillatory modes dominate the linear dynamics. In simple cases we find analytic expressions for the support of the eigenvalue distribution.
Wednesday, March 4, 2020 10:24AM - 10:36AM
L23.00009: Relationships Between Lognormal Distributions of Neural Properties and Connectivities
Peter Robinson, Xiao Gao, Yinuo Han
Relationships between convergence of inputs onto neurons, divergence of outputs from them, synaptic strengths, nonlinear firing properties, and randomness of axonal ranges are systematically explored by interrelating means and variances of synaptic strengths, firing rates, and soma voltages. Imposition of self-consistency yields broad distributions of synaptic strength as a necessary concomitant of the massive convergence of inputs to individual neurons, and widths of lognormal distributions of synaptic strength and firing rate are explained. The strongest individual synapses are shown to have an effect on soma voltage comparable to the standard deviation of all others combined. Remarkably, inclusion of moderate randomness in axonal ranges accounts for the observed ~10³-fold variability in two-point connectivity at a given separation, and ~10⁵-fold overall when the known mean exponential fall-off is included, consistent with observed near-lognormal distributions.
Wednesday, March 4, 2020 10:36AM - 10:48AM
L23.00010: Directed effective connectivity of in vitro neuronal networks revealed from electrophysiological recordings
Chumin Sun, K.C. Lin, Yu-Ting Huang, Emily S.C. Ching, Pik-Yin Lai, C.K. Chan
Studying the connectivity of in vitro neuronal networks revealed from electrophysiological recordings can provide insights for understanding the brain network. Existing methods focus on estimating functional connectivity, defined by statistical dependencies between neuronal activities, but it is effective connectivity that captures the relevant direct causal interactions. We present a method that makes explicit use of a theoretical result that effective connectivity is contained in the relation between the time-lagged cross-covariance and the equal-time cross-covariance. Applying this method to data recorded by multi-electrode arrays of over 4000 electrodes, we estimate the directed effective connectivity and synaptic weights of neuronal cultures at different days in vitro. Our analyses show that the neuronal networks are highly nonrandom, with a fraction of inhibitory nodes close to the values measured in monkey cerebral cortex; that they have small-world topology and feeder hubs of large outgoing degree; and that the distributions of the average incoming and outgoing synaptic strength are non-Gaussian with long tails.