Extended Data Fig. 5: Effect of eye and limb movements on the neural activities and choices.
From: Population coding of strategic variables during foraging in freely moving macaques

(a) Histograms of correlation coefficients between limb and eye movements and the activities of individual neurons. Bars are shaded for statistically significant correlations, and the inset indicates the percentage of significantly correlated neurons. A deep artificial neural network was trained using DeepLabCut (Mathis et al., 2018) to localize monkey M’s shoulder, elbow, and paw of the forelimb contralateral to the recording site (right limb) in each overhead video frame of the foraging animal. These three limb markers allowed us to compute the average limb movement in any desired time interval. We considered time intervals around button presses (–2 s, +2 s) and computed the average limb speed and the firing rates of all dlPFC neurons of monkey M in non-overlapping 200-ms time bins. Pupil diameter and eye velocity were computed with the same method as in ref. 26. (b) For comparison, correlation coefficients of the reward predictors with the firing rates of dlPFC neurons for monkeys G, T, and M. (c) Magnitudes of head, arm, and eye movements are similar whether the animal chooses to stay (horizontal axes, arbitrary units; 5 sessions, 238–336 presses per session) or switch locations (vertical axes, arbitrary units; 5 sessions, 3–27 presses per session) around the time of a button press. Points depict mean movement magnitudes, and the widths of the shaded ellipses indicate the standard errors of the means along each axis. The top row shows movements after presses, and the bottom row shows movements before presses. (d) Prediction of monkey M’s choices, cross-validated using 1000 sub-samples of presses (80% of presses used for training, 20% for testing). Prediction performance was calculated as the area under the curve of the output of logistic regression (as in Fig. 5c), except that the predictors were either the task-irrelevant variables (eye, body, and head movements, and pupil diameter; left) or the reward predictors (right). The mean (black line) and the Gaussian-smoothed distribution (gray shading) across the 1000 sub-sampled test sets are shown.
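
The binned correlation analysis of panel (a) could be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the variable names (marker_xy, press_times), the camera frame rate, and the significance threshold are assumptions; only the structure (average speed over the three limb markers, non-overlapping 200-ms bins in a ±2-s window around each press, per-neuron Pearson correlation) follows the legend.

```python
# Sketch of the binned limb-speed vs. firing-rate correlation (panel a).
# marker_xy, press_times, FPS and alpha are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr

FPS = 30.0            # assumed overhead-camera frame rate
BIN = 0.2             # 200-ms non-overlapping bins
WIN = (-2.0, 2.0)     # interval around each button press

def limb_speed(marker_xy, fps=FPS):
    """Average speed across the shoulder, elbow and paw markers.
    marker_xy: array of shape (n_frames, 3 markers, 2)."""
    step = np.linalg.norm(np.diff(marker_xy, axis=0), axis=-1)  # per-frame displacement
    return step.mean(axis=1) * fps                              # averaged over markers

def binned_around_presses(values, times, press_times, bin_w=BIN, win=WIN):
    """Average a signal (sampled at `times`) in non-overlapping bins around each press.
    Assumes every bin contains at least one sample."""
    edges = np.arange(win[0], win[1] + bin_w, bin_w)
    out = []
    for p in press_times:
        idx = np.digitize(times - p, edges) - 1
        out.append([values[idx == b].mean() for b in range(len(edges) - 1)])
    return np.concatenate(out)

def correlate_neurons(speed_bins, rate_bins_per_neuron, alpha=0.05):
    """Pearson correlation of binned limb speed with each neuron's binned firing rate."""
    results = []
    for rates in rate_bins_per_neuron:
        r, p = pearsonr(speed_bins, rates)
        results.append((r, p < alpha))   # coefficient and significance flag
    return results
```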
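
The cross-validated decoding in panel (d) can likewise be sketched with repeated 80/20 splits, logistic regression, and AUC on held-out presses. The feature matrix X (task-irrelevant movement variables or reward predictors), the binary stay/switch labels y, and the stratified splitting are assumptions made for this illustration.

```python
# Sketch of the sub-sampled, cross-validated logistic-regression AUC (panel d).
# X, y, and the stratified 80/20 splits are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def subsampled_auc(X, y, n_subsamples=1000, test_size=0.2, seed=0):
    rng = np.random.RandomState(seed)
    aucs = []
    for _ in range(n_subsamples):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y,
            random_state=rng.randint(2**31 - 1))
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    return np.array(aucs)   # distribution across sub-samples (its mean = black line)
```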