Calcium signals from VIP and SST neurons of the primary visual cortex as decoders of familiar and novel visual stimuli
2022 Neuromatch Academy (NMA) project on the AllenSDK database by
- Nicole A Gonzalez R
- Diego A Heredia F
- Alejandra Lopez C
- Francisco Millar S
- Sebastian Venegas
- Bruno Zorzet
Discriminating between already recognized (familiar) and novel visual stimuli, and identifying whether they are safe, is essential for mammalian survival. How visual stimuli are encoded is therefore fundamental to understanding how the brain interprets the inputs it receives from the world. During visual processing tasks, neurons in the primary visual cortex (V1) of mice constantly modulate their activity depending on the task and the context in which visual stimulus processing occurs. Among these, populations of VIP and SST interneurons are key regulators of pyramidal neuron activity. Although the dynamics of this neural network in V1 are not fully understood, there is evidence that these neurons markedly modify their activity during novel versus familiar image processing, which has been investigated by measuring calcium activity in mice performing familiar-versus-novel image discrimination tasks. The following work therefore aims to test whether, from recordings of the calcium activity of V1 neurons in mice obtained from the AllenSDK database, a computational model can be built that predicts from the calcium dynamics whether a novel or a familiar image was presented.
- Can the calcium traces of VIP and SST neurons be used to discriminate between familiar and novel stimuli?
- How do the calcium signals of VIP and SST neurons in the mouse visual cortex combine to allow familiar stimuli to be differentiated from novel ones?
- Which parts of the fluorescence signal are most important for predicting the type of stimulus presented?
We hypothesize that the fluorescence values of the VIP and SST neuronal populations are non-linearly related to the type of image presented (novel or familiar), and that binary logistic regression is a well-suited model to classify our dependent variable (the type of image) from the calcium activity data.
Our methodological plan was based on filtering the AllenSDK dataset to obtain the VIP and SST neuronal populations (10,666 neurons) of 13 mice, and extracting the time series of calcium fluorescence signals recorded while the mice performed a visual adaptation task with either familiar or novel images. Two imaging depth planes were considered for the first analyses, and we took the traces from stimulus onset until 0.750 seconds later.
Figure 1. Mean and standard deviation of the fluorescence signal time series for SST and VIP neuron populations.
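Below is a minimal sketch of the filtering and trace-extraction step described above, assuming the public AllenSDK Visual Behavior Ophys cache API. The cache directory, the sampling-rate estimate, and the `traces_around_stimuli` helper are our own placeholders for illustration, not the exact code of the original analysis.

```python
import numpy as np
from allensdk.brain_observatory.behavior.behavior_project_cache import (
    VisualBehaviorOphysProjectCache,
)

# Open the Visual Behavior Ophys metadata tables (cache_dir is a placeholder path)
cache = VisualBehaviorOphysProjectCache.from_s3_cache(cache_dir="allen_cache")
experiments = cache.get_ophys_experiment_table()

# Keep only VIP and SST Cre lines imaged in primary visual cortex (VISp)
mask = experiments["cre_line"].isin(["Vip-IRES-Cre", "Sst-IRES-Cre"]) & (
    experiments["targeted_structure"] == "VISp"
)
vip_sst_experiments = experiments[mask]

def traces_around_stimuli(experiment_id, window_s=0.75):
    """Cut dF/F traces into windows from stimulus onset to `window_s` seconds later.

    Returns an array of shape (n_presentations, n_cells, n_timepoints).
    """
    exp = cache.get_behavior_ophys_experiment(experiment_id)
    dff = np.vstack(exp.dff_traces["dff"].values)        # cells x samples
    t = exp.ophys_timestamps
    onsets = exp.stimulus_presentations["start_time"].values
    rate = 1.0 / np.median(np.diff(t))                   # imaging rate (Hz)
    n_samples = int(round(window_s * rate))
    windows = []
    for onset in onsets:
        start = np.searchsorted(t, onset)
        win = dff[:, start:start + n_samples]
        if win.shape[1] == n_samples:                    # drop truncated windows at the end
            windows.append(win)
    return np.stack(windows)
```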
The calcium activity at each timestamp was organized into a feature matrix, with one row per stimulus presentation and one column per time point, which served as input to the logistic regression classifiers.
We used cross-validation to estimate the accuracy of the models and computed the probability of obtaining that accuracy under a random label distribution, so as to determine how reliable they are. The VIP model reached an accuracy of 79.42% and the SST model an accuracy of 78.29%, both with a probability of less than 0.001 of arising by chance.
Figure 2. Cross-validation accuracy for SST and VIP neuron populations in the mouse primary visual cortex.
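A sketch of this classification step is shown below, using scikit-learn. The arrays `X_vip` and `y_vip` are random placeholders standing in for the real feature matrix (presentations × time points) and labels (0 = familiar, 1 = novel); `permutation_test_score` is one way to obtain the chance-level probability quoted above, not necessarily the exact procedure used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, permutation_test_score

# Placeholder data: one row per stimulus presentation, one column per time point
rng = np.random.default_rng(0)
X_vip = rng.normal(size=(500, 8))
y_vip = rng.integers(0, 2, size=500)     # 0 = familiar, 1 = novel

clf = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# permutation_test_score refits the model on label-shuffled data, giving the
# probability of reaching the observed accuracy by chance (the p < 0.001 above).
accuracy, perm_scores, p_value = permutation_test_score(
    clf, X_vip, y_vip, cv=cv, scoring="accuracy", n_permutations=1000
)
print(f"VIP model: accuracy = {accuracy:.2%}, p = {p_value:.4f}")
```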
Next, to visualize the “importance” assigned by our model to each temporal feature, we plotted the weight values.
Figure 3. Weight values assigned by the model to each temporal feature.
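A hypothetical snippet for this weight visualization, continuing from the classification sketch above (`clf`, `X_vip`, `y_vip` as defined there; matplotlib and the 0.75 s time axis are assumptions based on the window described in the methods):

```python
import numpy as np
import matplotlib.pyplot as plt

clf.fit(X_vip, y_vip)                        # refit on the full data set
weights = clf.coef_.ravel()                  # one logistic regression weight per time point
time_s = np.linspace(0, 0.75, weights.size)  # time axis of the 0.75 s window

plt.plot(time_s, weights, marker="o")
plt.axhline(0, color="grey", lw=0.5)
plt.xlabel("Time from stimulus onset (s)")
plt.ylabel("Model weight")
plt.title("Weight assigned to each temporal feature (VIP model)")
plt.show()
```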
Finally, to give a more global perspective of the data we worked with, and to visualize how the linear part of the model behaves, i.e. how it separates the two types of stimuli before the nonlinear part is applied, see Figure 4. There, on the right, for the VIP and SST populations, you can see the cumulative sum of the mean signals, multiplied by the model parameters associated with each time point, as a function of the time interval considered, when a novel (yellow) or a familiar (purple) stimulus was presented.
These plots are interesting because, as the time interval increases, the curves separate from each other more for the SST neurons than for the VIP neurons; in other words, the averages of the cumulative sums for the novel and familiar signals fall outside the standard deviation estimated from the data of the other image type. As a future perspective, it would therefore be interesting to study how the accuracy of the model behaves as the time interval increases (as more stimuli are presented), as well as when only the time intervals in which the stimuli were shown are considered.
Figure 4. Cumulative sum of the mean VIP and SST signals, multiplied by the model parameters associated with each time point, as a function of the time interval considered.
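A short sketch of the quantity plotted in Figure 4, using the same placeholder names as in the snippets above (`X_vip`, `y_vip`, `weights`): the linear part of the logistic regression, accumulated over growing time windows, computed separately for familiar and novel presentations.

```python
import numpy as np

def weighted_cumulative_sum(X, y, weights):
    """Cumulative sum of (mean signal x model weight) for each stimulus class."""
    curves = {}
    for label, name in [(0, "familiar"), (1, "novel")]:
        mean_signal = X[y == label].mean(axis=0)          # mean trace over presentations
        curves[name] = np.cumsum(weights * mean_signal)   # accumulate over time points
    return curves

curves_vip = weighted_cumulative_sum(X_vip, y_vip, weights)
```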