Bulletin of the American Physical Society
APS March Meeting 2023
Las Vegas, Nevada (March 5–10)
Virtual (March 20–22); Time Zone: Pacific Time
Session G02: Statistical Mechanics of the Brain (Focus Session)

Sponsoring Units: GSNP DBIO | Chair: Christopher Lynn, Princeton University | Room: Room 125 
Tuesday, March 7, 2023 11:30AM – 12:06PM 
G02.00001: Recent progress in the statistical physics of real neural networks Invited Speaker: William S Bialek Single neurons have fascinating properties, and make measurable contributions to perception and action. Nonetheless, many of the most interesting phenomena in the brain seem to be collective or emergent. There is a long history of trying to use ideas from statistical physics to describe these phenomena, with many beautiful and influential results. Still, it remained difficult to see how theory and experiment could be connected quantitatively. This became more urgent as the 21st century brought new experimental tools for recording the electrical activity of more and more neurons simultaneously. Maximum entropy methods emerged as a strategy for building models that are grounded in statistical physics but connected directly to these data, so that we end up with (for example) a spin-glass-like model for a particular population of neurons rather than some guess at the family from which this model might be drawn. After some initial uncertainties, it now is clear that these models can provide extraordinarily precise descriptions of ~100 cells, for example correctly predicting the ~100,000 triplet correlations in such networks within experimental errors. When models work this well, it makes sense to take them seriously as statistical mechanics problems and ask, for example, where real networks lie in the phase diagram of possible networks. In cases where the simplest maximum entropy models fail, several alternatives have been suggested, and I will review some of these. We also need methods that have even better scaling with network size. In a different direction, recordings from 1000+ (or even 100,000+) neurons offer opportunities to explore coarse-graining, or more generally to ask if we can use the renormalization group to inspire new methods of data analysis. This has led to the observation of very precise and reproducible scaling behaviors, holding out the possibility that these networks might be described by theories that live at an RG fixed point. 
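The maximum entropy program sketched above can be illustrated with a toy calculation. The following is a minimal sketch with hypothetical data and parameters (real analyses of ~100 cells require Monte Carlo sampling rather than exhaustive enumeration): it fits a pairwise Ising-type model by matching the means and pairwise correlations of binarized spike words.

```python
# Minimal pairwise maximum-entropy (Ising) fit for a small neural population.
# Toy data and learning rate are illustrative, not from the talk.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 2000
data = (rng.random((T, N)) < 0.3).astype(float)  # fake binarized spike words

emp_mean = data.mean(axis=0)                 # empirical <s_i>
emp_corr = data.T @ data / T                 # empirical <s_i s_j>

states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)

def model_moments(h, J):
    """Exact moments of P(s) ~ exp(h.s + s.J.s), enumerating all 2^N states."""
    E = states @ h + np.einsum('ti,ij,tj->t', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    return p @ states, states.T @ (p[:, None] * states)

h = np.zeros(N)
J = np.zeros((N, N))                         # couplings kept upper-triangular
for _ in range(500):                         # gradient ascent on log-likelihood
    m, C = model_moments(h, J)
    h += 0.5 * (emp_mean - m)
    J += 0.5 * np.triu(emp_corr - C, k=1)

m, C = model_moments(h, J)
print(np.abs(m - emp_mean).max())            # moment mismatch, small after fit
```

Because moment matching in an exponential family is a concave problem, this simple gradient ascent converges; for large populations the exact enumeration in `model_moments` is the step that must be replaced by sampling.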
Tuesday, March 7, 2023 12:06PM – 12:18PM 
G02.00002: Multi-Relevance: Coexisting but Distinct Notions of Scale in Large Systems Adam G Kline, Stephanie E Palmer An essential aspect of renormalization group (RG) methods is the cutoff scheme, which specifies how collective variables are integrated out. In most applications, such as in field theory, momentum is used as a cutoff scale, and RG produces effective low-momentum theories by averaging over high-momentum variables. Recently, RG methods have seen use in problems at the boundaries of statistical physics, biology, and computer science, where the models are complicated distributions over high-dimensional spaces. These models are frequently not analogous to traditional many-body systems, making it difficult to specify what precisely is meant by "scale". This makes RG hard to implement and interpret. Here, we present recent theoretical progress on both of these fronts. First, we show that nonperturbative RG is well-suited for models with finitely many degrees of freedom, and demonstrate a simple calculation. Next, we introduce a method of calculating the cutoff scheme to be used based on the structure of the model at hand. In doing so, we demonstrate that some models support multiple notions of scale, and term this property "multi-relevance". Finally, we examine how multi-relevance appears in problems relevant to fields that interface with statistical physics. 
Tuesday, March 7, 2023 12:18PM – 12:30PM 
G02.00003: Exploring Criticality in Markovian Brain Dynamics Faheem Mosam, Eric De Giuli Biological systems need to react to stimuli over a broad spectrum of timescales. If and how this ability can emerge without external fine-tuning is a puzzle. This problem has been considered in discrete Markovian systems where results from random matrix theory could be leveraged. Indeed, generic large transition matrices are governed by universal results, which predict the absence of long timescales unless fine-tuned. Our current model considered an ensemble of transition matrices and motivated a temperature-like uniformity variable that controls the dynamic range of matrix elements. Findings were applied to fMRI data from 820 human subjects scanned at wakeful rest. The data was quantitatively understood in terms of the random model, and brain activity was shown to lie close to a phase transition when engaged in unconstrained, task-free cognition – supporting the brain criticality hypothesis in this context. In this work, the model is advanced in order to discuss the effect of matrix asymmetry, which is important for certain measures in stochastic thermodynamics (e.g., detailed balance). We introduce a new parameter that controls the asymmetry of these discrete Markovian systems and show that when varied over an appropriate scale this factor is able to collapse Shannon entropy measures. This collapse indicates that structure emerges over a dynamic range of both uniformity and asymmetry. Moreover, results are used to identify phase transitions in monkeys transitioning between anesthetic and wakeful states. 
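The link between timescales and the spectrum of a transition matrix can be sketched as follows. In this toy (where `beta` is a hypothetical knob playing the role of the uniformity variable, not the authors' exact ensemble), the slowest relaxation time is read off from the subleading eigenvalue of a random row-stochastic matrix: nearly uniform matrices relax fast, while broadly distributed matrix elements produce long timescales.

```python
# Relaxation timescales of a random discrete Markov chain.
# beta = 0 gives nearly uniform rows; large beta gives a broad dynamic range.
import numpy as np

rng = np.random.default_rng(1)

def timescales(n, beta):
    W = np.exp(beta * rng.standard_normal((n, n)))
    W /= W.sum(axis=1, keepdims=True)        # row-stochastic transition matrix
    lam = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return -1.0 / np.log(lam[1])             # slowest relaxation time (lam[0]=1)

print(timescales(100, 0.1), timescales(100, 5.0))
```

The second printed timescale (broad matrix elements) is much longer than the first, illustrating how long timescales can appear without tuning each matrix element individually.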
Tuesday, March 7, 2023 12:30PM – 12:42PM 
G02.00004: Sparse, consistent correlation structure in the retinal population code across natural movies Caroline M Holmes, Benjamin Hoshal, Michael Berry, Olivier Marre, Stephanie E Palmer Neural populations are known to adapt their coding scheme in response to different scenes, but exactly how remains a mystery, especially at the population rather than single cell level. We analyze data from the larval salamander retina responding to five different natural movies, and use maximum entropy models to characterize the population in terms of activations and couplings between neurons. We find evidence that while individual cells are adapting their response to the stimulus, the couplings, i.e. the population structure, are consistent across natural movies. In particular, we find a sparse, strongly connected backbone of couplings, which constrains the vocabulary of the neural population. We also show that this consistent coupling could provide a stable structure to allow for consistent decoding despite adaptation. Finally, we show that we can make use of this consistent structure to build models of large groups of neurons in a new, scalable way by taking advantage of the consistency of the learned couplings across small groups. 
Tuesday, March 7, 2023 12:42PM – 12:54PM 
G02.00005: Coarse-graining facilitates generalization in populations of retinal ganglion cells Kyle Bojanek, Olivier Marre, Michael Berry, Stephanie E Palmer The output of the retina contains all of the information the brain encodes about the visual world. The joint probability distribution of these outputs, retinal ganglion cells (RGCs), is known to change with the statistics of the scene driving retinal activity. This creates a significant challenge in creating generative models of population activity that generalize to new types of stimuli. We build on an information-theoretic coarse-graining proposed by Ramirez and Bialek to take the population of neurons from an exponential number of states to a linear number of states. Using data from RGCs in the larval salamander retina in response to a variety of natural moving scenes, we test how well coarse-grained representations generalize. We find that trial-averaging significantly improves generalization to other natural movies. These coarse-grainings can be input to a Generalized Linear Model (GLM), an interpretable, generative model of retinal activity. With this input, the GLM performs well at recapitulating retinal responses and generalizes well to novel stimulus statistics, including generalization from spatial white-noise checkerboards to natural movies. This approach may be helpful in modeling other areas of the brain and has implications both for basic neuroscience and machine learning. 
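The exponential-to-linear state reduction can be illustrated with a much simpler scheme than the information-theoretic coarse-graining cited above: map each N-bit population word to its total spike count, so 2^N possible patterns collapse to N + 1 coarse states. The raster below is hypothetical toy data.

```python
# Illustration of the exponential-to-linear state reduction (not the specific
# Ramirez-Bialek scheme): summarize each population word by its spike count.
import numpy as np

rng = np.random.default_rng(2)
N, T = 20, 5000
spikes = (rng.random((T, N)) < 0.1).astype(int)   # fake spike raster

words = [tuple(row) for row in spikes]            # fine-grained states (<= 2^N)
counts = spikes.sum(axis=1)                       # coarse states (<= N + 1)

print(len(set(words)), len(set(counts)))          # many words, few counts
```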
Tuesday, March 7, 2023 12:54PM – 1:06PM 
G02.00006: Brain network dynamics for navigational learning and memory Jean M Carlson, Erica Ward, Robert Woodry, Elizabeth Chrastil Effective spatial navigation is dependent on several cognitive processes including learning, attention, and memory, but how do people learn and remember new environments? Previous studies explored brain activity in already known environments; however, comparatively little is known about the acquisition of this knowledge. We analyzed fMRI of humans completing a challenging maze learning task. Participants were given 16 minutes to explore a virtual hedge maze and learn the locations of 9 objects. Next, their object location memory was tested in 48 trials (<45 s each), each starting at one object with instructions to find another object (obscured for testing) using paths of the maze (e.g. clock to lamp). Accuracy ranged from near 0% to 100%, enabling us to quantify brain network and behavioral differences that distinguish between poor, average, and exceptional performers. We used dynamic community detection to identify brain network changes. Preliminary results suggest that the best navigators exhibit high flexibility throughout the brain, whereas average and poor navigators exhibit low flexibility only in the salience/ventral attention network. In addition, we examined behavioral exploration patterns in the learning phase to determine whether they correlate with navigation accuracy in the test phase, finding that better navigators tend to explore more evenly than poor navigators. Together, the brain and behavioral dynamics in this study provide rich insight into navigational learning and memory. 
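The flexibility statistic used in dynamic community detection is commonly defined as the fraction of consecutive time windows in which a region changes community assignment. A minimal sketch, using a hypothetical toy label matrix rather than real community-detection output:

```python
# "Flexibility" of each node across time windows: the fraction of consecutive
# windows in which its community label changes. Labels below are illustrative.
import numpy as np

labels = np.array([                   # rows: time windows, cols: brain regions
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 1, 0, 1],
])
flexibility = (labels[1:] != labels[:-1]).mean(axis=0)
print(flexibility)                    # per-region switching rate in [0, 1]
```

Regions that never switch get flexibility 0; a region that switches at every window boundary gets 1.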
Tuesday, March 7, 2023 1:06PM – 1:18PM 
G02.00007: Heavy-tailed neuronal connectivity arises from Hebbian self-organization Christopher W Lynn, Caroline M Holmes, Stephanie E Palmer In networks of neurons, the connections are heavy-tailed, with a small number of neurons connected much more strongly than the vast majority of pairs. Yet it remains unclear whether, and how, such heavy-tailed connectivity emerges from simple underlying mechanisms. Here we propose a minimal model of synaptic self-organization: connections are pruned at random, and the synaptic strength rearranges under a mixture of Hebbian and random dynamics. Under these generic rules, networks evolve to produce scale-free distributions of connectivity strength, with a power-law exponent γ = 1 + 1/p that depends only on the probability p of Hebbian (rather than random) growth. Extending our model to include correlations in neuronal activity, we find that clustering (another ubiquitous feature of neuronal networks) also emerges naturally. We confirm these predictions in the connectomes of several animals, suggesting that heavy-tailed and clustered connectivity may arise from general principles of self-organization, rather than the particulars of individual species or systems. 
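One way to caricature these dynamics (a sketch under assumed rules, not the authors' exact model) is to repeatedly move a unit of synaptic weight from a randomly chosen connection to a connection chosen proportionally to its current weight with probability p (Hebbian, rich-get-richer) or uniformly at random otherwise; the Hebbian term concentrates weight onto a heavy tail.

```python
# Caricature of pruning-plus-Hebbian-growth dynamics (illustrative, not the
# model in the abstract): random pruning, preferential or random regrowth.
import numpy as np

rng = np.random.default_rng(3)
M, p, steps = 500, 0.8, 50_000
w = np.ones(M)                                   # unit weight per connection

for _ in range(steps):
    donors = np.flatnonzero(w > 0)
    w[donors[rng.integers(donors.size)]] -= 1    # prune a unit of weight
    if rng.random() < p:                         # Hebbian: rich get richer
        j = rng.choice(M, p=w / w.sum())
    else:                                        # random growth
        j = rng.integers(M)
    w[j] += 1

print(w.max(), np.median(w))                     # max weight >> typical weight
```

Total weight is conserved by construction; only its distribution across connections evolves, and the resulting tail can then be compared against the predicted exponent.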
Tuesday, March 7, 2023 1:18PM – 1:30PM 
G02.00008: Signatures of criticality in the physical structure of the brain Helen S Ansell, Istvan A Kovacs The highly complex structure of the brain is integral to its function, yet many aspects of the physical structure of the brain and how they aid its function remain poorly understood. Recent experimental and computational advances have enabled the three-dimensional reconstruction of millimeter-scale brain volumes of multiple organisms at the cellular level [1–3], thereby allowing for detailed investigation of the structure of brain tissue. 
Tuesday, March 7, 2023 1:30PM – 1:42PM 
G02.00009: Early Path Dominance as a Principle for Neurodevelopment Rostam M Razban, Jonathan A Pachter, Ken A Dill, Lilianne R MujicaParodi To understand how the neural networks of brains acquire their topological structures upon development, we perform a computational study of human diffusion MRI across adults (UK Biobank, N = 19,380), adolescents (Adolescent Brain Cognitive Development Study, N = 15,593) and neonates (Developing Human Connectome Project, N = 758), as well as mouse viral tracing (Allen Institute). We perform targeted attack, a systematic unlinking of the network, to analyze its effects on global communication across the network through its giant cluster. We find that brain networks differ from scale-free and small-world structures. Time-reversing the attack computation suggests a mechanism for how brains develop, whose validity we establish experimentally via targeted attack on increasing white-matter tract lengths and densities, shown to be invariant to aging and disease. We derive an analytical equation using percolation theory for the fraction of brain regions in the giant cluster as a function of connectivity. Based on a close match between theory and experiment, our results demonstrate that tracts are limited to emanate from regions already in the giant cluster and that tracts that appear earliest in neurodevelopment are those that become the longest and densest. 
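A generic version of the targeted-attack computation (an illustrative toy spatial network, not the authors' pipeline) removes edges longest-first and tracks the fraction of nodes in the giant cluster with a union-find pass:

```python
# Targeted attack on a toy spatial network: cut the longest edges first and
# measure the giant connected cluster. Geometry and radius are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n = 200
pos = rng.random((n, 2))                          # toy "brain regions" in 2D
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if np.linalg.norm(pos[i] - pos[j]) < 0.15]
edges.sort(key=lambda e: -np.linalg.norm(pos[e[0]] - pos[e[1]]))  # longest first

def giant_fraction(edge_list):
    """Fraction of nodes in the largest connected component (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]         # path halving
            x = parent[x]
        return x
    for i, j in edge_list:
        parent[find(i)] = find(j)
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

intact = giant_fraction(edges)
attacked = giant_fraction(edges[len(edges) // 2:])  # half the longest edges cut
print(intact, attacked)
```

Sweeping the attack fraction instead of cutting a fixed half produces the giant-cluster-versus-connectivity curve that the percolation-theory equation describes.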
Tuesday, March 7, 2023 1:42PM – 1:54PM 
G02.00010: Multistable irregular activity in large winner-take-all networks Rich Pang Identifying multistable network models producing cortical-like irregular spiking is an unsolved challenge in neuroscience. Multistability, thought to support working memory, is usually modeled via persistent states in which certain neurons receive elevated mean inputs. In contrast, irregular spiking is usually modeled in large networks in which neurons receive inputs that strongly fluctuate around a vanishingly small mean. This suggests stable states might instead be distinguished by higher-order input statistics, but networks operating in such a regime are not well understood. Here we show that a network of winner-take-all-like neuron groups can create a landscape of steady states in which sparse, irregular spiking stabilizes memories of past external inputs. Via simulation and theory we show this is not a finite-size effect but results from a symmetry breaking in networks driven by multidimensional input fluctuations. Our results survive randomization of the competitive interactions, suggesting fine-tuned reciprocal connections are not needed. Thus, explicit competition among neurons, potentially via fast lateral inhibition, can enable irregular spiking to protect rather than degrade information, illustrating a novel collective mechanism that could support working memory. 
Tuesday, March 7, 2023 1:54PM – 2:06PM 
G02.00011: A random matrix theory in eigenspace enables direct control of collective network activity Lorenzo Tiberi, David Dahmen, Moritz Helias A common approach to analytically treat neural network models is to assume a random connectivity matrix. But how does our choice of randomness affect the network's behavior? Rather than prescribing the distribution of synaptic strengths, we specify connectivity in the space that directly controls the dynamics – the space of eigenmodes. We develop a thermodynamic theory for a novel ensemble of random matrices, whose eigenvalue distribution can be chosen arbitrarily. As this distribution is varied, we show analytically how the behavior of a stochastic linear rate network changes. We discover a critical point whose nature is shaped by the distribution of oscillation frequencies of near-critical modes: correlation and response functions can be tuned from an exponential to a power-law decay in time. Their decay exponents (d−1 and d, respectively) are controlled by d, which sets the density of the real parts of near-critical eigenvalues, p(x) ∼ x^{d−1}, for x → 0. Also, the network shows a transition from high to low dimensional activity when d < 2, with a minimum at d = 1. We argue that d can be interpreted as the network's spatial dimension in the sense of critical phenomena and the renormalization group. In particular, below a critical dimension d = 1, correlations diverge, hinting at a potential transition to non-Gaussian criticality in a nonlinear system. Our novel approach uncovers a diverse range of behaviors that only emerge when the collective effect of the subleading-order synaptic strengths' statistics is not neglected. 
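A simple, generic way to build a random matrix with an arbitrarily prescribed spectrum (a basic similarity-transform construction, not the specific thermodynamic ensemble of the talk) is to conjugate a chosen diagonal matrix by a random basis:

```python
# Build a random matrix whose eigenvalues follow a chosen distribution:
# J = Q diag(eigs) Q^{-1} with a random (generic) basis Q.
import numpy as np

rng = np.random.default_rng(5)
n = 50
eigs = -rng.random(n)                 # chosen spectrum: stable, real, in [-1, 0)
Q = rng.standard_normal((n, n))       # random basis (invertible almost surely)
J = Q @ np.diag(eigs) @ np.linalg.inv(Q)

recovered = np.sort(np.linalg.eigvals(J).real)
print(np.abs(recovered - np.sort(eigs)).max())   # numerically tiny
```

The entries of `J` look like a dense random connectivity matrix, yet its dynamics are fully dictated by the spectrum chosen up front, which is the sense in which working in eigenspace gives direct control.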
Tuesday, March 7, 2023 2:06PM – 2:18PM 
G02.00012: Learning Dynamic Graphs, Too Slow Andrei A Klishin, Nicholas H Christianson, Cynthia S.Q. Siew, Dani S Bassett The structure of knowledge is commonly described as a network of key concepts and semantic relations between them. A learner of a particular domain can discover this network by navigating the nodes and edges presented by instructional material, such as a textbook, workbook, or other text. While over a long temporal period such exploration processes are certain to discover the whole connected network, little is known about how the learning is affected by the dual pressures of finite study time and human mental errors. Here we model the learning of linear algebra textbooks with finite-length random walks over the corresponding semantic networks. Through a combination of stochastic simulations and statistical mechanics theory, we show that if a learner does not keep up with the pace of material presentation, the learning can be an order of magnitude worse than it is in the asymptotic limit. Further, we find that this loss is compounded by three types of mental errors: forgetting, shuffling, and reinforcement. Broadly, our study informs the design of teaching materials from both structural and temporal perspectives. 
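The finite-length random-walk picture can be sketched on a toy concept graph (an illustrative ring with shortcuts, not an actual textbook network): walk for a fixed number of steps and count how many distinct relations the learner has traversed. Short walks discover only a fraction of the edges, while long walks approach full coverage.

```python
# Learning as a finite-length random walk over a semantic network (toy graph).
import random

random.seed(6)
# ring of 20 concepts plus a few shortcut relations
adj = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
for a, b in [(0, 10), (5, 15), (3, 12)]:
    adj[a].append(b)
    adj[b].append(a)
total_edges = 23                          # 20 ring edges + 3 shortcuts

def edges_seen(walk_length):
    """Number of distinct edges traversed by a walk of the given length."""
    node, seen = 0, set()
    for _ in range(walk_length):
        nxt = random.choice(adj[node])
        seen.add(frozenset((node, nxt)))
        node = nxt
    return len(seen)

print(edges_seen(30), edges_seen(3000))   # short walks miss many relations
```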
Tuesday, March 7, 2023 2:18PM – 2:30PM 
G02.00013: Thermal management in neuromorphic materials, devices, and networks Felipe Torres, ALI C BASARAN, IVAN K SCHULLER 
© 2023 American Physical Society. All rights reserved.