APS March Meeting 2024
Monday–Friday, March 4–8, 2024;
Minneapolis & Virtual
Session EE01: V: Statistical and Nonlinear Physics II
11:30 AM–1:18 PM,
Tuesday, March 5, 2024
Room: Virtual Room 01
Sponsoring Unit: GSNP
Chair: Raffaele Marino, Università degli studi di Firenze
Abstract: EE01.00009 : Information rates of neural activity on varying time scales*
1:06 PM–1:18 PM
Abstract
Presenter:
Tobias Kühn
(Sorbonne University)
Authors:
Tobias Kühn
(Sorbonne University)
Ulisse Ferrari
(Sorbonne University)
When evaluating electrophysiological recordings, time is normally discretized into bins. Although any result will in general depend on the level of this temporal coarse-graining, bin sizes are often chosen ad hoc or based on restrictions imposed by the model. A prominent example of the latter case is networks of binary neurons (Ising spins), which can be either “on” or “off”. Consequently, the time bin has to be chosen so small that the probability of more than one spike occurring during this time span is negligible; otherwise, information is lost. The family of models we suggest, which we call spike-count neurons, generalizes this framework and allows the entropy of neural activity to be computed without clipping spike counts to 1. We allow the single-neuron variable to be a natural number, but still use Ising-like (pairwise) interactions between the neurons to capture the pairwise covariances of the data. The same framework can be used to model the statistics of the time evolution of a single neuron.
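As a rough illustration of the model class described above (a hypothetical sketch, not the authors' code), one can write a pairwise maximum-entropy model in which each neuron's variable is a non-negative integer rather than a binary spin. Here the count is truncated at an assumed `n_max` so that the entropy can be computed by brute-force enumeration for a toy system; all parameter values are made up for illustration.

```python
# Minimal sketch of a pairwise "spike-count" model: each neuron's
# per-bin variable n_i is a natural number (truncated at n_max here),
# with Ising-like pairwise couplings J and single-neuron biases h.
import itertools
import math

def energy(n, h, J):
    """E(n) = -sum_i h_i n_i - sum_{i<j} J_ij n_i n_j."""
    N = len(n)
    e = -sum(h[i] * n[i] for i in range(N))
    for i in range(N):
        for j in range(i + 1, N):
            e -= J[i][j] * n[i] * n[j]
    return e

def entropy(h, J, n_max):
    """Exact entropy (in bits) by enumerating all (n_max+1)^N states --
    feasible only for a handful of neurons; the expansion discussed in
    the abstract is meant to avoid exactly this enumeration."""
    N = len(h)
    states = itertools.product(range(n_max + 1), repeat=N)
    weights = [math.exp(-energy(s, h, J)) for s in states]
    Z = sum(weights)
    return -sum((w / Z) * math.log2(w / Z) for w in weights if w > 0)

# Toy example: 3 neurons, spike counts 0..3 per time bin.
h = [0.1, -0.2, 0.0]
J = [[0.0, 0.05, -0.1],
     [0.0, 0.0, 0.02],
     [0.0, 0.0, 0.0]]
print(round(entropy(h, J, n_max=3), 3))
```

Setting `n_max = 1` recovers the binary (Ising-spin) special case, in which a bin can hold at most one spike.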
Our method allows us to faithfully estimate the entropy of the neural activity and, eventually, the mutual information between neural activity and stimulus. This is a well-established approach for the binary representation of neural activity, which comes with the limitations highlighted above. With the help of a small-correlation expansion, using a novel diagrammatic framework (Kühn & van Wijland 2023), we provide an estimate of the entropy also for ensembles of spike-count neurons. This requires a number of measurements growing only quadratically in the number of neurons, as opposed to the exponential growth associated with estimating the full probability distribution, which prohibits the latter's use for real data. Our approach allows the time bin size to be chosen flexibly according to the data, without losing information by clipping spike counts. In particular, this enables studying how the rate of information conveyed by neural activity depends on the temporal resolution at which it is recorded.
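The scaling argument can be made concrete with a small back-of-the-envelope calculation (an illustrative sketch, not part of the abstract): a pairwise model is fit from first and second moments only, so the number of required measurements is N means plus N(N-1)/2 covariances, whereas the full joint distribution over truncated spike counts spans (n_max+1)^N states.

```python
# Quadratic vs. exponential scaling of the data requirements.
def n_pairwise_measurements(N):
    """N single-neuron means plus N*(N-1)/2 pairwise covariances."""
    return N + N * (N - 1) // 2

def n_full_distribution(N, n_max):
    """Number of joint states (n_max + 1)^N of the full distribution."""
    return (n_max + 1) ** N

for N in (10, 50, 100):
    print(N, n_pairwise_measurements(N), n_full_distribution(N, n_max=3))
```

Already at N = 100 neurons the pairwise fit needs only a few thousand measured quantities, while the full distribution over counts 0..3 has 4^100 states, which is why direct estimation is impossible for real data.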
*This work was supported by the grants CRINFONET of Sorbonne Université and ANR-21-CE37-0024 NatNetNoise of the Agence Nationale de la Recherche.