Bulletin of the American Physical Society
2024 APS March Meeting
Monday–Friday, March 4–8, 2024; Minneapolis & Virtual
Session EE01: V: Statistical and Nonlinear Physics II (Virtual Only)
Sponsoring Units: GSNP
Chair: Raffaele Marino, Università degli studi di Firenze
Room: Virtual Room 01
Tuesday, March 5, 2024 11:30AM - 11:42AM
EE01.00001: Large and small fluctuations in oscillator networks from heterogeneous and correlated noise
Jason M Hindes, Ira B Schwartz, Melvyn Tyloo
Oscillatory networks subjected to noise are broadly used to model physical and technological systems. Due to their nonlinear coupling, such networks typically have multiple stable and unstable states that a network might visit due to noise. In this talk, we focus on assessing the fluctuations resulting from heterogeneous and correlated noise inputs on Kuramoto model networks. We evaluate the typical, small fluctuations near synchronized states and connect the network variance to the overlap between stable modes of synchronization and the input noise covariance. Going beyond small to large fluctuations, we introduce the indicator mode approximation, which projects the dynamics onto a single amplitude dimension. Such an approximation allows for estimating rates of fluctuations to saddle instabilities, resulting in phase slips between connected oscillators. Statistics for both regimes are quantified in terms of effective noise amplitudes that are compared and contrasted for several noise models. Bridging the gap between small and large fluctuations, we show that a larger network variance does not necessarily lead to higher rates of large fluctuations.
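As an illustration of the kind of system studied here (a minimal sketch under our own assumptions, not the authors' code), the snippet below integrates a noisy Kuramoto ring with heterogeneous per-node noise amplitudes using the Euler-Maruyama scheme and estimates the stationary phase variance near the synchronized state:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 10, 2.0, 0.01, 20000
# ring adjacency: each oscillator is coupled to its two neighbours
A = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
sigma = 0.05 * (1.0 + rng.random(N))         # heterogeneous noise amplitudes
theta = np.zeros(N)                          # start at the synchronized state
samples = []
for t in range(steps):
    # Kuramoto drift: dtheta_i/dt = (K/N) * sum_j A_ij sin(theta_j - theta_i)
    drift = (K / N) * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    if t > steps // 2:                       # discard the transient
        samples.append(theta - theta.mean())
network_variance = float(np.var(np.array(samples)))
```

With this weak noise the network stays phase-locked, so the variance reflects the small-fluctuation regime discussed above.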
Tuesday, March 5, 2024 11:42AM - 11:54AM
EE01.00002: Statistical physics of regression with quadratic models
Blake Bordelon, Cengiz Pehlevan, Yasaman Bahri
A central model in machine learning for theory and practice is the linear model, where the predictor is a linear function of the learnable parameters. Such models arise naturally in a common limit of infinitely-wide deep neural networks and have aided in the understanding of the dynamics of learning and generalization. However, linear models also have limitations and do not capture the richness of "feature learning" that arises in deep neural networks. We consider quadratic models – predictors which allow a quadratic dependence on parameters – as a class of models for studying the effects of feature learning. We theoretically investigate the generalization scaling with sample size and learning dynamics in these models via replica methods from statistical physics and a dynamical mean field theory.
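A minimal sketch of what a quadratic model means in this sense: a predictor with a quadratic dependence on its parameters theta rather than on the input. The toy data, the random per-sample curvature tensors H, and the scale eps below are our own assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, eps, lr = 5, 40, 0.1, 0.05
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star                               # linear teacher, for simplicity
# per-sample symmetric curvature tensors for the quadratic term
H = rng.standard_normal((n, d, d))
H = (H + H.transpose(0, 2, 1)) / 2

def predict(theta):
    # f(x_n; theta) = x_n . theta + (eps/2) * theta^T H_n theta
    return X @ theta + 0.5 * eps * np.einsum('i,nij,j->n', theta, H, theta)

theta = np.zeros(d)
losses = []
for _ in range(300):
    r = predict(theta) - y                   # residuals
    grad_f = X + eps * np.einsum('nij,j->ni', H, theta)  # df_n/dtheta
    theta -= lr * grad_f.T @ r / n           # gradient step on mean squared error
    losses.append(float(np.mean(r ** 2)))
```

At eps = 0 this reduces to ordinary linear regression; the eps-dependent term is the minimal ingredient that lets the effective features grad_f change during training.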
Tuesday, March 5, 2024 11:54AM - 12:06PM
EE01.00003: Engineered Ordinary Differential Equations as Classification Algorithm (EODECA): a Bridge between Dynamical Systems and Machine Learning
Raffaele Marino
In a world increasingly reliant on machine and deep learning, the interpretability of these models remains a substantial challenge, with many equating their functionality to an enigmatic black box. This study seeks to bridge the domains of machine learning and dynamical systems. Recognizing the deep parallels between dense neural networks and dynamical systems, particularly in the light of non-linearities and successive transformations, this talk introduces the Engineered Ordinary Differential Equations as Classification Algorithms (EODECAs). Uniquely designed as neural networks underpinned by continuous ordinary differential equations (ODEs), EODECAs aim to capitalize on the well-established toolkit of dynamical systems. Unlike traditional deep learning models, which often suffer from opacity and non-invertibility, EODECAs promise both high classification performance and intrinsic interpretability. They are naturally invertible, granting them an edge in understanding and transparency over their counterparts. By bridging these domains, we hope to usher in a new era of machine learning models where genuine comprehension of data processes complements predictive prowess. Drawing inspiration from Sir Winston Churchill, this research might signify the end of the beginning for opaque machine learning models, emphasizing the imperative of interpretability in design.
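The EODECA construction itself is not reproduced here, but the underlying idea of classifying with an ODE can be sketched: place stable fixed points of a flow near class prototypes, and read the predicted label off the attractor an input converges to. The prototypes, dynamics, and parameters below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# two class prototypes in the plane (assumed for illustration)
prototypes = np.array([[1.0, 0.0], [-1.0, 0.0]])

def flow(x, beta=5.0, dt=0.1, steps=300):
    """Euler-integrate dx/dt = -x + P^T softmax(beta * P x); for large beta
    the stable fixed points sit near the rows of P (the prototypes)."""
    for _ in range(steps):
        a = np.exp(beta * prototypes @ x)
        a /= a.sum()
        x = x + dt * (prototypes.T @ a - x)
    return x

def classify(x0):
    xf = flow(np.array(x0, dtype=float))
    return int(np.argmin(np.linalg.norm(prototypes - xf, axis=1)))
```

Because the classifier is a smooth ODE, the trajectory can also be integrated backwards in time, which is the kind of invertibility the abstract emphasizes.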
Tuesday, March 5, 2024 12:06PM - 12:18PM
EE01.00004: Phase classification and finite-size analysis with supervised machine learning
Lev Shchur
We analyze the problem of supervised learning of ferromagnetic phase transitions from a statistical physics perspective [1]. We consider two systems in two universality classes, the two-dimensional Ising model and the two-dimensional Baxter-Wu model, and perform a thorough finite-size analysis of the supervised phase-classification results for each model. We find that the variance of the neural network (NN) output function (VOF), as a function of temperature, peaks in the critical region. Qualitatively, the VOF is related to the degree of confidence of the NN classification. We find that the VOF peak width displays finite-size scaling determined by the correlation-length exponent of the model's universality class. We test this conclusion using several neural network architectures—a fully connected neural network, a convolutional neural network, and several members of the ResNet family—and discuss the accuracy of the extracted critical parameters.
Tuesday, March 5, 2024 12:18PM - 12:30PM
EE01.00005: Thermodynamics of deterministic finite automata operating locally and periodically
David H Wolpert, Thomas E Ouldridge
Real-world computers have operational constraints that cause nonzero entropy production (EP). In particular, almost all real-world computers are "periodic", iteratively undergoing the same physical process; and "local", in that subsystems evolve whilst physically decoupled from the rest of the computer. These constraints are so universal because decomposing a complex computation into small, iterative calculations is what makes computers so powerful. We first derive the nonzero EP caused by the locality and periodicity constraints for deterministic finite automata (DFA), a foundational system of computer science theory. We then relate this minimal EP to the computational characteristics of the DFA. We thus divide the languages recognised by DFA into two classes: those that can be recognised with zero EP, and those that necessarily have non-zero EP. We also demonstrate the thermodynamic advantages of implementing a DFA with a physical process that is agnostic about the inputs that it processes.
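For concreteness, a DFA is just a finite transition table driven by one symbol per step, so each symbol triggers exactly one local, periodic update of the state. The example language below (binary strings with an even number of 1s) is our own choice:

```python
# A deterministic finite automaton as a transition table. Reading a word
# applies the same update map once per symbol: the "periodic" process
# whose entropy production the abstract analyses.
dfa = {
    "start": "even",
    "accept": {"even"},
    "delta": {("even", "0"): "even", ("even", "1"): "odd",
              ("odd", "0"): "odd", ("odd", "1"): "even"},
}

def accepts(dfa, word):
    state = dfa["start"]
    for symbol in word:
        state = dfa["delta"][(state, symbol)]
    return state in dfa["accept"]
```

The language recognised is determined entirely by the table; the thermodynamic question is what EP any physical process implementing this iteration must incur.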
Tuesday, March 5, 2024 12:30PM - 12:42PM
EE01.00006: Thermodynamic processes of Quantum heat transport in atomic systems
Rodrigo A Ribeiro, Marcone I Sena Junior
Several computational and analytical methods investigate time-dependent heat transport in nanostructures. Computational calculations pose a considerable challenge, rendering the perturbative approach an attractive alternative. We study heat transport and quantum thermodynamics by phonons, using the equation of motion with Green's functions in phase space. We analyze heat flowing between out-of-equilibrium reservoirs in a system driven by time-dependent contacts, in both transient and permanent regimes. We obtain the heat current between reservoirs through a Dyson-series time-dependent perturbation expansion. Additionally, we analyze the thermal efficiencies of different configurations to model atomic-scale quantum heat engines or refrigerators.
Tuesday, March 5, 2024 12:42PM - 12:54PM
EE01.00007: Towards quantization of Conway's Game of Life
Krzysztof D Pomorski (Institute of Physics, Lodz University of Technology, Lodz, Poland; Quantum Hardware Systems, Lodz, Poland), Dariusz Kotula (Faculty of Computer Science and Telecommunication, Cracow University of Technology, Cracow, Poland; Quantum Hardware Systems, Lodz, Poland)
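For reference, the classical update rule that the title proposes to quantize can be stated compactly; the toroidal boundary conditions and grid size below are our own choices:

```python
import numpy as np

def life_step(grid):
    """One synchronous step of Conway's Game of Life on a torus."""
    neighbours = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
                     for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # birth on exactly 3 live neighbours, survival on 2 or 3
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# a "blinker": three cells in a row, oscillating with period 2
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1
```

The rule is deterministic and irreversible, which is precisely what makes a quantum (unitary) version of it a nontrivial construction.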
Tuesday, March 5, 2024 12:54PM - 1:06PM
EE01.00008: Entropy production of a qubit out of the perturbative regime
Julian Rapp
The calculation of Rényi and von Neumann entropy production rates in open quantum systems has been explored by our group for simple systems in the weak coupling limit. However, there have been questions regarding the validity of certain results in higher perturbative order. We therefore investigate entropy flows in an environment that disturbs the quantum system so that perturbation theory is no longer applicable. In this case, our formalism of evaluating powers of density matrices has to be generalized for analogous quasi-density matrices and is thus different from the previous approach. We present the obtained results and discuss differences to the weak coupling case.
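The quantities involved can be made concrete for a single qubit: both the von Neumann and Rényi-2 entropies are functionals of the reduced density matrix rho, the latter computed from a power of rho (its purity). The matrix below is an arbitrary example of ours:

```python
import numpy as np

rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])                 # example qubit density matrix
assert abs(np.trace(rho) - 1.0) < 1e-12      # valid state: unit trace

evals = np.linalg.eigvalsh(rho)
S_vn = float(-sum(l * np.log(l) for l in evals if l > 0))  # von Neumann entropy
S_2 = float(-np.log(np.trace(rho @ rho)))    # Renyi-2 entropy from Tr[rho^2]
```

Rényi entropies are non-increasing in their order, so S_2 <= S_vn here; the strong-coupling generalization the abstract describes replaces these powers of rho by powers of quasi-density matrices.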
Tuesday, March 5, 2024 1:06PM - 1:18PM
EE01.00009: Information rates of neural activity on varying time scales
Tobias Kühn, Ulisse Ferrari
When evaluating electrophysiological recordings, time is normally discretized into bins. Yet, although any result will in general be influenced by the level of this temporal coarse-graining, bin sizes are often chosen ad hoc or based on restrictions imposed by the model. A prominent example of the latter case are networks of binary neurons (Ising spins), which can be either "on" or "off". Consequently, the time bin has to be chosen so small that the probability of more than one spike occurring during this time span is negligible; otherwise information is lost. The family of models we suggest, which we call spike-count neurons, generalizes this framework and allows the computation of the entropy of neural activity while avoiding the clipping of spike counts to 1. We allow the single-neuron variable to be a natural number, but still use Ising-like (pairwise) interactions between the neurons to capture the pairwise covariances of the data. The same framework can be used to model the statistics of the time evolution of a single neuron.
Our method allows us to faithfully estimate the entropy of the neural activity and eventually the mutual information between neural activity and stimulus. This is a well-established approach when using the binary representation of neural activity, which comes with the limits highlighted before. With the help of a small-correlation expansion, using a novel diagrammatic framework (Kühn & van Wijland 2023), we provide an estimate for the entropies of ensembles of spike-count neurons as well. This requires a number of measurements growing only quadratically in the number of neurons, as opposed to the exponential growth associated with estimating the full probability distribution, which prohibits using the latter for real data. Our approach allows the time bin size to be chosen flexibly depending on the data, without losing information by clipping spike counts. In particular, this enables studying how the rate of information conveyed by neural activity depends on the temporal resolution at which it is recorded.
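A toy instance of such a pairwise spike-count model, small enough that its entropy can be computed exactly, also shows what clipping counts to 1 discards; all parameter values below are our own:

```python
import numpy as np
from itertools import product

# two "spike-count" neurons with counts n_i in {0, 1, 2, 3} and an
# Ising-like pairwise interaction: P(n1, n2) ∝ exp(h1*n1 + h2*n2 + J*n1*n2)
h1, h2, J = -0.5, -0.8, 0.3
states = list(product(range(4), repeat=2))
logw = np.array([h1 * a + h2 * b + J * a * b for a, b in states])
p = np.exp(logw)
p /= p.sum()
H_count = float(-(p * np.log2(p)).sum())     # exact entropy, in bits

# binary (Ising) representation: clip each count to {0, 1} and re-accumulate
clipped = {}
for (a, b), pi in zip(states, p):
    key = (min(a, 1), min(b, 1))
    clipped[key] = clipped.get(key, 0.0) + pi
H_binary = float(-sum(q * np.log2(q) for q in clipped.values()))
```

Clipping is a many-to-one map on states, so H_binary is strictly below H_count: the binary representation cannot exceed 2 bits for two neurons, while the count representation retains the extra structure.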
About APS: The American Physical Society (APS) is a non-profit membership organization working to advance the knowledge of physics.
© 2025 American Physical Society
