Bulletin of the American Physical Society
APS March Meeting 2017
Volume 62, Number 4
Monday–Friday, March 13–17, 2017; New Orleans, Louisiana
Session K49: Physics of Neural Network Dynamics in the Brain (Invited Session)
Sponsoring Units: GSNP DBIO
Chair: Jin Wang, State University of New York at Stony Brook
Room: 396
Wednesday, March 15, 2017 8:00AM - 8:36AM
K49.00001: How the brain assigns a neural tag to arbitrary points in a high-dimensional space Invited Speaker: Charles Stevens Brains in almost all organisms need to deal with very complex stimuli. For example, most mammals are very good at face recognition, and faces are very complex objects indeed: modern face recognition software represents a face as a point in a 10,000-dimensional space. Every human must be able to learn to recognize any of the 7 billion faces in the world, and can recognize a familiar face after viewing it for only a few hundred milliseconds. Because we do not understand how the brain assigns faces their locations in a high-dimensional space, attacking the problem of how face recognition is accomplished is very difficult. But a much easier problem of the same sort can be studied for odor recognition. The mouse assigns each odor a point in a 1000-dimensional space, and the fruit fly assigns any odor a location in only a 50-dimensional space. A fly has about 50 distinct types of odorant receptor neurons (ORNs), each of which produces nerve impulses at a specific rate for each different odor. The pattern of firing produced across the 50 ORN types is called a "combinatorial odor code", and this code assigns every odor a point in a 50-dimensional space that is used to identify the odor. In order to learn an odor, the brain must alter the strength of synapses. The combinatorial code cannot itself be used to change synaptic strength, because all odors use the same neurons to form the code; all synapses would therefore be changed for any odor, and the odors could not be distinguished. In order to learn an odor, the brain must instead assign it a set of neurons --- the odor tag --- with two properties: the tag neurons (1) should make use of all of the information available about the odor, and (2) should ensure that any two tags overlap as little as possible (so one odor does not modify synapses used by other odors).
In the talk, I will explain how the olfactory systems of both the fruit fly and the mouse produce a tag for each odor that has these two properties.
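The two properties of a good tag can be illustrated with a toy expansion-and-sparsification sketch in the spirit of the fly circuit: project the 50-dimensional ORN code into a much larger population through sparse random weights, then keep only the most strongly driven cells. (All numbers below, the 2000-cell expansion layer, the 15% connection probability, and the top-5% activity cutoff, are illustrative assumptions, not measured values.)

```python
import random

random.seed(0)
N_ORN, N_CELLS, SPARSITY = 50, 2000, 0.05  # illustrative sizes

# each expansion-layer cell samples a random subset of ORNs with random weights
proj = [[random.gauss(0, 1) if random.random() < 0.15 else 0.0
         for _ in range(N_ORN)] for _ in range(N_CELLS)]

def tag(odor):
    """Return the tag: the set of the top 5% most strongly driven cells."""
    mean = sum(odor) / len(odor)
    centered = [x - mean for x in odor]          # discard overall intensity
    drive = [sum(w * x for w, x in zip(row, centered)) for row in proj]
    k = int(SPARSITY * N_CELLS)
    cutoff = sorted(drive, reverse=True)[k - 1]
    return {i for i, d in enumerate(drive) if d >= cutoff}

# two unrelated odors, each a 50-dimensional vector of ORN firing rates
odor_a = [random.random() for _ in range(N_ORN)]
odor_b = [random.random() for _ in range(N_ORN)]
tag_a, tag_b = tag(odor_a), tag(odor_b)
overlap = len(tag_a & tag_b) / len(tag_a)
```

Because every tag cell reads the full ORN vector, each tag is a function of all the information in the combinatorial code, while the random projection plus sparsification keeps the overlap between unrelated tags near the 5% chance level.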
Wednesday, March 15, 2017 8:36AM - 9:12AM
K49.00002: Brain Connectivity as a DNA Sequencing Problem Invited Speaker: Anthony Zador The mammalian cortex consists of millions or billions of neurons, each connected to thousands of other neurons. Traditional methods for determining brain connectivity rely on microscopy to visualize neuronal connections, but such methods are slow, labor-intensive, and often lack single-neuron resolution. We have recently developed a new method, MAPseq, to recast the determination of brain wiring into a form that can exploit the tremendous recent advances in high-throughput DNA sequencing. DNA sequencing technology has outpaced even Moore's law: the cost of sequencing a human genome has dropped from a billion dollars in 2001 to below a thousand dollars today. MAPseq works by introducing random sequences of DNA ("barcodes") to tag neurons uniquely. With MAPseq, we can determine the connectivity of over 50,000 single neurons in a single mouse cortex in about a week, an unprecedented throughput, ushering in the era of "big data" for brain wiring. We are now developing analytical tools and algorithms to make sense of these novel data sets.
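The barcode-counting logic that turns wiring into sequencing can be caricatured in a few lines (a toy sketch: the area names, neuron count, and read depth are invented, and real MAPseq additionally involves viral barcode delivery, tissue dissection, and sequencing-error correction not modeled here):

```python
import random
from collections import Counter

random.seed(1)
AREAS = ["V1", "S1", "M1"]           # hypothetical target areas
N_NEURONS, READS_PER_NEURON = 5, 200

# 1) tag each source neuron with a unique random 20-nt barcode
bases = "ACGT"
barcodes = ["".join(random.choice(bases) for _ in range(20))
            for _ in range(N_NEURONS)]

# 2) ground-truth projection strengths (unknown in a real experiment)
truth = {bc: {a: random.random() for a in AREAS} for bc in barcodes}

# 3) simulate sequencing: each read is a (barcode, area) pair drawn
#    in proportion to how strongly that neuron projects to each area
reads = []
for bc in barcodes:
    w, total = truth[bc], sum(truth[bc].values())
    for _ in range(READS_PER_NEURON):
        r, acc = random.random() * total, 0.0
        for a in AREAS:
            acc += w[a]
            if r <= acc:
                reads.append((bc, a))
                break

# 4) recover the projection matrix by counting reads per barcode per area
counts = Counter(reads)
matrix = {bc: [counts[(bc, a)] for a in AREAS] for bc in barcodes}
```

Each row of the recovered matrix estimates one neuron's projection strengths, so the microscopy problem of tracing axons becomes the counting problem of grouping sequencing reads by barcode.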
Wednesday, March 15, 2017 9:12AM - 9:48AM
K49.00003: Non-equilibrium physics of neural networks for learning, memory and decision making: landscape and flux perspectives. Invited Speaker: Jin Wang Cognitive behaviors are determined by underlying neural networks. Many brain functions, such as learning and memory, can be described by attractor dynamics. We developed a theoretical framework for global dynamics by quantifying the landscape associated with the steady-state probability distribution and the steady-state curl flux, which measures the degree of non-equilibrium through detailed-balance breaking. We found that the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulation are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degree of asymmetry of the connections in neural networks and is the origin of neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology: the landscape gradient attracts the network down to the ring, while the flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories [1]. Both landscape and flux determine the kinetic paths and speed of decision making. The kinetics and global stability of decision making are explored by quantifying the landscape topography through barrier heights and the mean first passage time. The theoretical predictions are in agreement with experimental observations: more errors occur under time pressure. We quantitatively explored two mechanisms of the speed-accuracy tradeoff under speed emphasis and further uncovered the tradeoffs among speed, accuracy, and energy cost. Our results show an optimal balance among speed, accuracy, and energy cost in decision making. We uncovered possible mechanisms of changes of mind and how mind changes improve performance in decision processes.
Our landscape approach can help facilitate an understanding of the underlying physical mechanisms of cognitive processes and identify the key elements in neural networks [2]. [1] H. Yan, L. Zhao, L. Hu, X. Wang, E. K. Wang, and J. Wang. Nonequilibrium landscape theory of neural networks. Proc. Natl. Acad. Sci. USA 110, E4185--E4194 (2013). [2] H. Yan, K. Zhang, and J. Wang. Physical mechanism of mind changes and tradeoffs among speed, accuracy, and energy cost in brain decision making: landscape, flux, and path perspectives. Chin. Phys. B 25(7), 078702 (2016).
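The landscape-and-flux picture can be made concrete on a minimal example: a three-state ring with asymmetric transition rates (the states and rate values below are arbitrary illustrations, not fitted to any neural network). At steady state the landscape is U = -ln P_ss; because detailed balance is broken, a nonzero curl flux circulates around the ring even where the landscape is flat:

```python
import math

# three coarse-grained states on a ring with asymmetric (hypothetical)
# transition rates: clockwise hops are faster than counter-clockwise ones,
# so detailed balance is broken
k_cw, k_ccw = 1.0, 0.2
rates = {(0, 1): k_cw, (1, 2): k_cw, (2, 0): k_cw,
         (1, 0): k_ccw, (2, 1): k_ccw, (0, 2): k_ccw}

# integrate the master equation dP_i/dt = sum_j (P_j k_ji - P_i k_ij)
# forward in time until it reaches the steady state
P, dt = [1.0, 0.0, 0.0], 0.01
for _ in range(50000):
    dP = [0.0, 0.0, 0.0]
    for (i, j), k in rates.items():
        flow = P[i] * k * dt
        dP[i] -= flow
        dP[j] += flow
    P = [p + d for p, d in zip(P, dP)]

U = [-math.log(p) for p in P]                      # landscape U = -ln P_ss
J = [P[i] * rates[(i, j)] - P[j] * rates[(j, i)]   # steady-state curl flux
     for (i, j) in ((0, 1), (1, 2), (2, 0))]
```

Here the steady state is uniform, so the landscape is flat on the ring and the motion along it is driven entirely by the constant flux J = (k_cw - k_ccw)/3 on every edge; restoring detailed balance (k_cw = k_ccw) makes the flux vanish.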
Wednesday, March 15, 2017 9:48AM - 10:24AM
K49.00004: The central amygdala circuits in fear regulation. Invited Speaker: Bo Li The amygdala is essential for fear learning and expression. The central amygdala (CeA), once viewed as a passive relay between the amygdala complex and downstream fear effectors, has emerged as an active participant in fear conditioning. However, how the CeA contributes to the learning and expression of fear remains unclear. Our recent studies in mice indicate that fear conditioning induces robust plasticity of excitatory synapses onto inhibitory neurons in the lateral subdivision of the CeA (CeL). In particular, this plasticity is cell-type specific and is required for the formation of fear memory. In addition, sensory cues that predict threat can cause activation of the somatostatin-positive CeL neurons, which is sufficient to drive freezing behavior. Here I will report our recent findings regarding the circuit and cellular mechanisms underlying CeL function in fear processing.
Wednesday, March 15, 2017 10:24AM - 11:00AM
K49.00005: A model of metastable dynamics during ongoing and evoked cortical activity Invited Speaker: Giancarlo La Camera The dynamics of simultaneously recorded spike trains in alert animals often evolve through temporal sequences of metastable states. Little is known about the network mechanisms responsible for the genesis of such sequences, or their potential role in neural coding. In the gustatory cortex of alert rats, state sequences can also be observed in the absence of overt sensory stimulation, and thus form the basis of the so-called "ongoing activity". This activity is characterized by a partial degree of coordination among neurons, sharp transitions among states, and multi-stability of single neurons' firing rates. A recurrent spiking network model with clustered topology can account for both the spontaneous generation of state sequences and the (network-generated) multi-stability. In the model, each network state results from the activation of specific neural clusters with potentiated intra-cluster connections. A mean-field solution of the model shows a large number of stable states, each characterized by a subset of simultaneously active clusters. The firing rate in each cluster during ongoing activity depends on the number of active clusters, so that the same neuron can have different firing rates depending on the state of the network. Because of dense intra-cluster connectivity and recurrent inhibition, the stable states lose stability in finite networks due to finite-size effects. Simulations of the dynamics show that the model ensemble activity continuously hops among the different states, reproducing the ongoing dynamics observed in the data.
Moreover, when probed with external stimuli, the model correctly predicts the quenching of single-neuron multi-stability into bi-stability, the reduction of dimensionality of the population activity, the reduction of trial-to-trial variability, and a potential role for metastable states in the anticipation of expected events. Altogether, these results provide a unified mechanistic model of ongoing and evoked cortical dynamics.
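The two ingredients of the model, multiple stable states and fluctuation-driven hopping between them, can be caricatured with a two-cluster rate network (a minimal sketch: the gain function, weights, noise level, and the two-cluster reduction are arbitrary choices standing in for the full clustered spiking network):

```python
import math
import random

def phi(x):
    """Saturating gain function (illustrative choice)."""
    return 1.0 / (1.0 + math.exp(-4.0 * (x - 0.5)))

def run(r, steps, noise, rng=None):
    """Integrate two competing clusters with self-excitation and
    recurrent inhibition; noise can kick the network between states."""
    w_self, w_inh, dt = 2.0, 1.6, 0.05
    for _ in range(steps):
        inp = [w_self * r[0] - w_inh * r[1],
               w_self * r[1] - w_inh * r[0]]
        eta = [noise * math.sqrt(dt) * rng.gauss(0, 1) if rng else 0.0
               for _ in r]
        r = [min(max(ri + dt * (phi(u) - ri) + e, 0.0), 1.0)
             for ri, u, e in zip(r, inp, eta)]
    return r

# without noise the network is multi-stable: whichever cluster starts
# ahead wins, so the same unit can sit at different rates in different states
state_a = run([0.9, 0.1], 2000, 0.0)
state_b = run([0.1, 0.9], 2000, 0.0)

# with noise the ensemble activity hops between the metastable states
rng = random.Random(3)
r, switches, prev = [0.9, 0.1], 0, True
for _ in range(200):
    r = run(r, 200, 0.45, rng)
    now = r[0] > r[1]
    switches += now != prev
    prev = now
```

In the full model each metastable state is a subset of active clusters rather than a single winner, but the same competition between recurrent excitation and inhibition, destabilized by fluctuations, produces the hopping among states.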