Fig. 1: Large-scale functional and morphological screen of AN movement encoding and nervous system targeting. | Nature Neuroscience

From: Ascending neurons convey behavioral state to integrative sensory and action selection brain regions

a–c, Schematics and tables of the main questions addressed. a, To what extent do ANs encode longer time-scale behavioral states and limb movements? This encoding may be either specific (for example, encoding specific kinematics of a behavior or one joint degree of freedom) or general (for example, encoding a behavioral state irrespective of specific limb kinematics or encoding multiple joint degrees of freedom). Here, we highlight the CTr and FTi joints. b, Where in the brain do ANs convey behavioral states? ANs might target the brain’s (1) primary sensory regions (for example, optic lobe or antennal lobe) for sensory gain control; (2) multimodal and integrative sensory regions (for example, AVLP or mushroom body) to contextualize dynamic, time-varying sensory cues; and (3) action selection centers (for example, GNG or central complex) to gate behavioral transitions. Individual ANs may project broadly to multiple brain regions or narrowly to one region. c, To what extent is an AN’s patterning within the VNC predictive of its brain targeting and encoding? d, We screened 108 sparsely expressing driver lines. The projection patterns of the lines with active ANs and high SNR (157 ANs) were examined in the brain and VNC. Scale bar, 40 μm. e, These were quantified by tracing single-cell MCFO confocal images. We highlight projections of one spGal4 to the brain’s AVLP and the VNC’s prothoracic (‘ProNm’), mesothoracic (‘MesoNm’) and metathoracic neuromeres (‘MetaNm’). Scale bar is as in d. f, Overhead schematic of the behavior measurement system used during two-photon microscopy. A camera array captures six views of the animal. Two optic flow sensors measure ball rotations. A puff of CO2 (or air) is used to elicit behavior from sedentary animals. g, 2D poses are estimated for six camera views using DeepFly3D. These data are triangulated to quantify 3D poses and joint angles for six legs and the abdomen (color-coded). The FTi joint angle is indicated (white).
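A joint angle such as the FTi angle in g is the angle at a joint keypoint between the two adjoining limb segments, computed from the triangulated 3D poses. A minimal sketch of this computation (the keypoint coordinates below are illustrative, not data from the screen):

```python
import math

def joint_angle(p_proximal, p_joint, p_distal):
    """Angle (degrees) at p_joint between the segments toward
    p_proximal and p_distal, from 3D keypoint coordinates."""
    v1 = [a - b for a, b in zip(p_proximal, p_joint)]
    v2 = [a - b for a, b in zip(p_distal, p_joint)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))

# Illustrative FTi (femur-tibia) angle from three 3D keypoints:
femur_base = (0.0, 0.0, 1.0)  # proximal end of the femur
fti_joint = (0.0, 0.0, 0.0)   # femur-tibia joint
tibia_tip = (1.0, 0.0, 0.0)   # distal end of the tibia
print(joint_angle(femur_base, fti_joint, tibia_tip))  # 90.0
```

The same function applies to any of the tracked joints (for example, the CTr joint in a) given its proximal and distal keypoints.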
h, Two optic flow sensors measure rotations of the spherical treadmill as a proxy for forward (red), sideways (blue) and yaw (purple) walking velocities. Positive directions of rotation (‘+’) are indicated. i, Left: a volumetric representation of the VNC, including a reconstruction of ANs targeted by the SS27485-spGal4 driver line (red). Indicated are the dorsal-ventral (‘Dor’) and anterior-posterior (‘Ant’) axes as well as the fly’s left (L) and right (R) sides. i, Right: sample two-photon cross-section image of the thoracic neck connective showing ANs that express OpGCaMP6f (cyan) and tdTomato (red). AxoID is used to semi-automatically identify two axonal ROIs (white) on the left (L) and right (R) sides of the connective. j, Spherical treadmill rotations and joint angles are used to classify behaviors. Binary classifications are then compared with simultaneously recorded neural activity for 250-s trials of spontaneous and puff-elicited behaviors. Shown is an activity trace from ROI 0 (green) in i. DoF, degree of freedom.
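The mapping from the two optic flow sensor readings in h to forward, sideways and yaw velocities is a calibrated linear combination that depends on sensor placement. A hedged sketch, assuming an illustrative geometry with the sensors 45° to the left and right of the ball's rear equator (the paper's actual calibration is not given in this caption):

```python
import math

def ball_velocities(x0, y0, x1, y1):
    """Convert (x, y) flow readings from two sensors into forward,
    sideways and yaw rotational velocities of the spherical treadmill.
    Assumes sensors at +/-45 deg around the rear equator -- an
    illustrative configuration, not the calibrated mapping used in
    the actual rig."""
    c = math.cos(math.radians(45.0))
    forward = (y0 + y1) * c    # pitch: vertical flow common to both sensors
    sideways = (y0 - y1) * c   # roll: differential vertical flow
    yaw = (x0 + x1) / 2.0      # yaw: horizontal flow common to both sensors
    return forward, sideways, yaw

# Equal vertical flow on both sensors -> pure forward walking:
fwd, side, yaw = ball_velocities(0.0, 1.0, 0.0, 1.0)
print(fwd, side, yaw)  # ~1.414, 0.0, 0.0
```

In practice the coefficients are determined empirically by rotating the ball about each axis in turn and regressing the sensor outputs against the known rotation.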
