
    Modularity and neural coding from a brainstem synaptic wiring diagram

    Ashwin Vishwanathan, Alexandro D. Ramirez, Jingpeng Wu, Alex Sood, Runzhe Yang, Nico Kemnitz, Dodam Ih, Nicholas Turner, Kisuk Lee, Ignacio Tartavull, William M. Silversmith, Chris S. Jordan, Celia David, Doug Bland, Mark S. Goldman, Emre R. F. Aksay, H. Sebastian Seung, the EyeWirers
    doi: https://doi.org/10.1101/2020.10.28.359620
    This article is a preprint and has not been certified by peer review.

    Abstract

    Neuronal wiring diagrams reconstructed from electron microscopic images are enabling new ways of attacking neuroscience questions. We address two central issues, modularity and neural coding, by reconstructing and analyzing a wiring diagram from a larval zebrafish brainstem. We identified a recurrently connected “center” within the 3000-node graph, and applied graph clustering algorithms to divide the center into two modules with stronger connectivity within than between modules. Outgoing connection patterns and registration to maps of neural activity suggested the modules were specialized for body and eye movements. The eye movement module further subdivided into two submodules corresponding to the control of the two eyes. We constructed a recurrent network model of the eye movement module with connection strengths estimated from synapse numbers. Neural activity in the model replicated the statistics of eye position encoding across multiple populations of neurons as observed by calcium imaging. Our findings show that synapse-level wiring diagrams can be used to extract structural modules with interpretable functions in the vertebrate brain, and can be related to the encoding of computational variables important for behavior. We also show through a potential synapse formalism that these modeling successes require true synaptic connectivity; connectivity inferred from arbor overlap is insufficient.

    Main

    The reconstruction of neural circuits from electron microscopy images has been accelerating, and synapse-resolution wiring diagrams are becoming more readily available1–4. As connectomic information becomes more plentiful, new ways of addressing scientific questions are emerging. Here we reconstruct a wiring diagram from a larval zebrafish brainstem, and apply it to two fundamental challenges in neuroscience: modularity and neural coding.

    A first challenge is to divide the brain into modules that are specialized for distinct behaviors. Division methods assume that modules consist of neurons that share similar neural coding properties5–9, patterns of gene expression10, cell body locations11;12, or axonal projection targets13. Connectomics offers another anatomical approach for finding modules14 based on computational methods of dividing graphs into clusters. This approach has revealed behavioral modules in the C. elegans connectome15;16, but not in vertebrates, as far as we know. Related studies of retina17 and visual cortex18 have not yielded functional interpretations.

    Applying graph clustering algorithms, we find that our brainstem wiring diagram can be partitioned into modules of strongly connected neurons. The modules extend across multiple rhombomeres, the traditional anatomical subdivisions of the hindbrain. The modules are validated by downstream pathways, which also allow functions to be assigned to the modules. One module turns out to be specialized for eye movements, a functional assignment that is further validated by cellular-level registration with brain activity maps. This oculomotor module is in turn subdivided into two submodules, which appear specialized for movements of the two eyes.

    A second challenge is to use a wiring diagram to predict how neural activity encodes sensory, motor, or cognitive variables. Such neural coding properties have been characterized by neurophysiologists, and their explanation using neural network models has occupied theoretical and computational neuroscientists over the past half century. Network models are often based on classical assumptions that the probability or strength of connection between neurons is a function of the distance between them19;20, the similarity between their preferred stimuli21, or their cell types22.

    Connectomics offers an intriguing alternative: build a neural network model based on a wiring diagram obtained via electron microscopic (EM) reconstruction. We apply this approach to the oculomotor module mentioned above. The connection strengths of our network model are directly constrained by synapse counts in our EM reconstruction. An overall scale parameter for the connection matrix is set by the requirement that the network be able to temporally integrate in the sense of Newtonian calculus. The requirement follows from our identification of the oculomotor module as the “neural integrator” for horizontal eye movements, which transforms angular velocity inputs into angular position outputs23. We find that the encoding of eye position by neural activity is consistent at a population level with experimental results from separate calcium imaging and electrophysiological studies.

    Directly applying connection matrices from EM reconstructions to network modeling is an emerging area. As far as we know, we are the first to use the approach to explain the encoding of a specific behavioral variable in neural activity. The approach has been applied to explain orientation and direction selectivity in the Drosophila visual motion detection circuit, though fine-tuning by backpropagation learning was required24. The approach has also been applied to model whitening of odor representations3.

    Neuronal wiring diagram reconstructed from a vertebrate brainstem

    We applied serial section electron microscopy (ssEM) to reconstruct synaptic connections between neurons in a larval zebrafish brainstem (Fig. 1). The imaged dataset targeted a volume that is known to include neurons involved in eye movements25–29. First, the dataset of Ref.29 was extended by additional imaging of the same serial sections. By improving the alignment of ssEM images relative to our original work, we created a 3D image stack that was amenable to automated reconstruction techniques30. We trained a convolutional neural network to detect neuronal boundaries30;31, and used the output to generate an automated segmentation of the volume. To correct errors in the segmentation, we repurposed Eyewire, which was originally developed for proofreading mouse retinal neurons32. Eyewirers proofread ∼3000 objects, which included neurons with cell bodies in the volume as well as “orphan” neurites with no cell body in the volume (Fig. 1a). We will refer to all such objects as “nodes” of the reconstructed network. Convolutional networks were used to automatically detect synaptic clefts and to assign presynaptic and postsynaptic partner nodes to each cleft33 (Fig. 1b, Methods). The reconstructed dataset contained 2824 nodes, 44949 connections between pairs of nodes, and 75163 synapses. Most connections (65%) involved just one synapse, but some involved two (19%), three (7.9%), four (3.7%), or more (4.0%) synapses, up to a maximum of 21 (Extended Data Fig. 1a).

    Figure 1: EM reconstructions of hindbrain neurons.

    a. Cut-section view of the reconstructed volume. The inset above the rendering shows the part of the hindbrain that was reconstructed (black box). Dimensions of the imaged volume are shown below. The large green cell body is the Mauthner neuron, which is in rhombomere 4 (R4).

    b. Pipeline used to automatically detect synapses between neurons in the imaged volume. (Left) A raw electron micrograph is the input to a convolutional neural network (CNN) trained to recognize postsynaptic densities (PSDs); the result of CNN detection is shown in the middle. Another CNN is then used to assign the correct partner for every detected PSD. One representative pre- and postsynaptic pair (blue, yellow) is shown.

    c. Side view of identified abducens motor (ABDM, green) and abducens internuclear (ABDI, magenta) neurons overlaid over ssEM planes. Middle and bottom views are coronal planes showing the location of the neurons at the planes indicated by dotted black line in top sagittal view. Green arrows indicate the axon of an example ABDM neuron that is part of the abducens nerve bundle (black box). Magenta arrows indicate the axons from ABDI neurons crossing the midline (black box).

    d. Reconstructions of ventromedial spinal projection neurons (vSPNs) and descending octaval (DO) neurons overlaid on a representative ssEM plane.

    To identify the neurons, we registered the ssEM volume to a reference brain atlas (Extended Data Fig. 2). Visual inspection of some neurons and neurites allowed identification of several important groups. These included the ventromedially located spinal projection neurons (vSPNs), which are involved in escape behaviors and control movements of the body axis; abducens (ABD) neurons, which control extraocular muscles; and descending octaval (DO) neurons, which mediate optokinetic and vestibular signals34 as part of the sensorimotor transformations needed to control eye movement (Fig. 1c,d; Extended Data Fig. 3,4; Supplementary Info). Other reconstructed neurons could not immediately be classified by visual inspection. We therefore developed a classification strategy based on patterns of connectivity.

    Axial and oculomotor modules in the brainstem

    We next organized the reconstructed neurons into two groups: those in a recurrently connected “center”, where feedback interactions are expected to establish collective dynamics, and those in the “periphery” that project to or collect input from the center. We identified the recurrent center of the graph by removing two kinds of objects. First, we removed 2282 objects with zero in-degree or out-degree. These included the ABD neurons and most vSPNs, including the Mauthner cell and MiV1 and MiV2 cells (Fig. 1c,d). Then we removed 62 objects with vanishing eigenvector centrality (Methods, Extended Data Fig. 1b). The latter removal yielded more robust clusterings, though results are qualitatively the same without it. We also experimented with other values of the centrality threshold, and our findings are robust to such changes, as will be shown below. For a centrality-based visualization of the network, see Extended Data Fig. 1e.
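    The center-extraction procedure can be sketched as follows. This is an illustrative Python reimplementation, not the code used in the paper: the pruning of zero in-degree or out-degree nodes is done iteratively here, eigenvector centrality is computed by simple power iteration, and the centrality tolerance is a placeholder.

    ```python
    # Sketch of "center" extraction, assuming the wiring diagram is a dict
    # mapping (pre, post) node pairs to synapse counts. Graph and tolerance
    # are illustrative placeholders, not the reconstructed dataset.

    def find_center(edges, centrality_tol=1e-6, iters=200):
        nodes = {n for e in edges for n in e}
        # 1) Drop nodes with zero in-degree or zero out-degree, since they
        #    cannot participate in recurrent feedback (done iteratively here).
        keep = set(nodes)
        changed = True
        while changed:
            changed = False
            out_deg = {n: 0 for n in keep}
            in_deg = {n: 0 for n in keep}
            for (u, v), w in edges.items():
                if u in keep and v in keep:
                    out_deg[u] += 1
                    in_deg[v] += 1
            drop = {n for n in keep if in_deg[n] == 0 or out_deg[n] == 0}
            if drop:
                keep -= drop
                changed = True
        # 2) Drop nodes with vanishing eigenvector centrality, computed by
        #    power iteration on the remaining subgraph.
        idx = {n: i for i, n in enumerate(sorted(keep))}
        n = len(idx)
        if n == 0:
            return set()
        v = [1.0 / n] * n
        for _ in range(iters):
            new = [0.0] * n
            for (a, b), w in edges.items():
                if a in idx and b in idx:
                    new[idx[b]] += w * v[idx[a]]
            s = sum(new) or 1.0
            v = [x / s for x in new]
        return {node for node, i in idx.items() if v[i] > centrality_tol}
    ```

    On a toy graph where nodes a, b, c form a cycle and d only receives input, the dangling node d is pruned and the recurrent cycle remains as the center.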

    The remaining 540 nodes were defined as the center of the graph, and we applied a graph clustering algorithm to divide it into two modules, each consisting of strongly connected neurons. As we will see later, this binary division is the first step in a hierarchical clustering that is guided by biological validation. (For comparison with flat clusterings see Extended Data Fig. 5.)

    After neurons are sorted into the two modules, the connection matrix of the center neurons (Fig. 2a) has a block structure. The diagonal blocks of the matrix contain the connections within modules, and the off-diagonal blocks contain connections between modules. The diagonal blocks are denser than the off-diagonal blocks, meaning that within-module connectivity is stronger than between-module connectivity. The two modules are designated modA and modO, for reasons that will be explained later.

    Figure 2: Modularity and functional specialization of brainstem interneurons.

    a. Connectivity matrix of ‘center’ neurons organized into two modules (modA, modO). Neurons in the center were clustered, whereas neurons in the periphery were not; peripheral neurons were organized by known cell types (vSPNs and ABD neurons). Colored dots represent the number of synapses.

    b. Example connected pairs of modA (orange) and modO (blue) neurons. ModO neurons make synapses onto abducens neurons (ABD, magenta).

    c. (Left) Locations of modA and modO somas. Plots to the sides represent the densities along the mediolateral (bottom) and rostrocaudal (right) axes. Black box is the location of DO neurons. (Middle) Presynapses onto modA and modO dendrites. Every 5th synapse was plotted for clarity. (Right) Postsynapses from modA and modO axons. Every 10th synapse was plotted for clarity. R - Rhombomere.

    d. The potential to make synapses was measured by defining a potential synapse (PS) as a presynapse and a postsynapse separated by less than a threshold distance (red circle). (Center-to-center) Ratio of within-module to between-module synapses versus threshold distance for potential synapses; the table lists the actual true synapse (TS) numbers. (Center-to-periphery) Ratio of numbers of synapses from neurons in modA and modO onto peripheral neurons (ABD, vSPNs). Numbers in tables are normalized synapse counts, defined as the sum of all synapses in a block divided by the product of the numbers of elements in each block.

    There were some statistical differences between modA and modO in morphological and ultrastructural properties (Extended Data Fig. 6a, b). We also searched for evidence of spatial organization. Soma locations of both modules are intermingled and highly distributed, extending rostrocaudally over all rhombomeres in the ssEM volume (Fig. 2c). Dendritic and axonal arbors are also intermingled, as are postsynapses and presynapses (Fig. 2c). Therefore we hypothesized that modular wiring specificity was dependent on true synaptic connectivity and could not be inferred indirectly from spatial overlap of neuronal arbors.

    For a quantitative test of the hypothesis, we defined an index of modular wiring specificity as the sum of within-module synapse densities divided by the sum of between-module synapse densities. Here synapse density is defined as the number of synapses normalized by the product of presynaptic and postsynaptic neuron numbers. The wiring specificity index was roughly 6 for the center neurons (Fig. 2d), based on actual synapses. We defined a “potential synapse” as a presynapse and a postsynapse within some threshold distance of each other (Fig. 2d), and computed the index of modular wiring specificity based on potential synapses rather than actual synapses. The index dropped to less than 3 for potential synapses defined by a distance threshold of 5 µm, and to close to 2 at a distance threshold of 10 µm (Fig. 2d, left, center-to-center). Therefore modular wiring specificity is greatly underestimated if connectivity is inferred from arbor overlap rather than from true synapses.
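    The specificity index and its potential-synapse variant can be sketched as follows, assuming each neuron is represented by lists of 3D presynapse and postsynapse coordinates plus a module label. The exact-coincidence stand-in for true synapses and the toy coordinates in the usage below are illustrative assumptions, not the paper's data.

    ```python
    import math

    # Illustrative computation of the modular wiring specificity index for
    # both true and potential synapses. Inputs are hypothetical.

    def specificity_index(pre_sites, post_sites, labels, threshold=None):
        """pre_sites[i], post_sites[i]: lists of (x, y, z) for neuron i.
        With threshold=None, count a synapse when a presynapse and a
        postsynapse coincide exactly (a stand-in for the true-synapse table);
        otherwise count every pre/post pair closer than `threshold` as a
        potential synapse."""
        mods = sorted(set(labels))
        counts = {(a, b): 0 for a in mods for b in mods}
        sizes = {m: labels.count(m) for m in mods}
        for i, pres in enumerate(pre_sites):
            for j, posts in enumerate(post_sites):
                if i == j:
                    continue
                for p in pres:
                    for q in posts:
                        d = math.dist(p, q)
                        if (threshold is None and d == 0) or \
                           (threshold is not None and d < threshold):
                            counts[(labels[i], labels[j])] += 1
        # Synapse density: counts normalized by pre- x post-population size.
        dens = {k: v / (sizes[k[0]] * sizes[k[1]]) for k, v in counts.items()}
        within = sum(dens[(m, m)] for m in mods)
        between = sum(v for k, v in dens.items() if k[0] != k[1])
        return within / between if between else float("inf")
    ```

    In a toy example with perfectly within-module true synapses but intermingled arbors, the true-synapse index is unbounded while the potential-synapse index collapses, mirroring the drop reported above.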

    The modules in Fig. 2 were obtained with the Louvain algorithm for graph clustering35–37. Similar modules are obtained when spectral clustering38 or a degree-corrected stochastic block model39 are used (Extended Data Fig. 7).
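    Louvain-style algorithms greedily optimize a modularity score; a minimal sketch of the directed, weighted modularity of a fixed two-module partition is below. The toy connection matrix in the test is hypothetical, not the reconstructed wiring diagram.

    ```python
    # Hedged sketch of the directed, weighted modularity score that
    # Louvain-style algorithms optimize, evaluated for a fixed partition:
    # Q = (1/m) * sum_ij [A_ij - k_out_i * k_in_j / m] * delta(c_i, c_j).

    def modularity(A, labels):
        n = len(A)
        m = sum(sum(row) for row in A)          # total synapse count
        k_out = [sum(A[i]) for i in range(n)]    # outgoing strength
        k_in = [sum(A[i][j] for i in range(n)) for j in range(n)]  # incoming
        q = 0.0
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    # observed minus expected within-module weight
                    q += A[i][j] - k_out[i] * k_in[j] / m
        return q / m
    ```

    A block-diagonal toy matrix scores high under the correct two-module labeling and negatively under a shuffled labeling, which is the signal the clustering exploits.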

    For biological validation of the modules, we checked their relation to three neuron classes (vSPN, ABD, and DO) that were identified above. Of the 10 vSPNs contained in the center, all were in modA and none were in modO (Extended Data Fig. 8a, black arrows). Furthermore, the 15 vSPNs contained in the periphery received much stronger connectivity from modA than from modO (Fig. 2a, vSPNs; Extended Data Fig. 8). Since the vSPNs are known to be involved in turning or swimming movements40–43, we propose that modA plays a role in movements of the body axis, and will refer to it as the “axial module.”

    All 54 ABD neurons were in the periphery, and received much stronger connectivity from modO than from modA (Fig. 2a). All 34 DO neurons (Figs. 1, 2c) were members of modO; none were in modA. ABD neurons drive extraocular muscles either directly or through a disynaptic pathway. DO neurons are secondary vestibular neurons, which provide input to the vestibulo-ocular reflex. We therefore propose that modO plays a role in eye movements, and will refer to it as the “oculomotor module.”

    To quantify the differences in center-periphery connectivity described qualitatively above, we defined an index of wiring specificity for peripheral populations as the synapse density from the preferred partner module in the center divided by the synapse density from the non-preferred partner module. This wiring specificity index decreases greatly for peripheral vSPNs when potential synapses are considered, but the decrease is more modest for ABD neurons (Fig. 2d, right, center-to-periphery).

    To validate our claims regarding function, we performed calcium imaging throughout the hindbrain in a separate set of animals to generate a map of neurons with activity related to oculomotor behavior. Recordings were obtained while the animals performed spontaneous eye movements in the dark, with activity ranging from neurons that exhibited bursts during saccades to those with perfectly persistent firing during fixations44. The maps from 20 age-matched animals were combined by registering them to a reference atlas45. We extracted a map for eye position signals (Methods) that was complete, in the sense that each hindbrain voxel was covered by at least three fish. This registration showed that, in comparison to cells in the axial module (modA), neurons in the oculomotor module (modO) were more than twice as likely to be neighbors of oculomotor neurons from the functional maps, a difference that disappeared when neuron locations were artificially jittered by more than 10 micrometers (Extended Data Fig. 9).
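    The neighbor-fraction comparison with a jitter control can be illustrated as follows. The coordinates, the neighborhood radius, and the uniform jitter model are placeholder assumptions of this sketch, not the registration analysis itself.

    ```python
    import math
    import random

    # Illustrative version of the registration control: compare the fraction
    # of module neurons lying within a neighborhood radius of functionally
    # identified oculomotor cells, before and after jittering positions.

    def neighbor_fraction(module_xyz, functional_xyz, radius=10.0):
        """Fraction of module somas within `radius` of any functional soma."""
        hits = sum(
            1 for p in module_xyz
            if any(math.dist(p, q) < radius for q in functional_xyz)
        )
        return hits / len(module_xyz)

    def jittered(points, amplitude, rng):
        """Displace each coordinate uniformly within +/- amplitude."""
        return [tuple(c + rng.uniform(-amplitude, amplitude) for c in p)
                for p in points]
    ```

    Jitter amplitudes larger than the neighborhood radius destroy the spatial correspondence, which is the control used to argue the modO/functional-map overlap is not an artifact.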

    Submodules of the Oculomotor network

    There is a rich repertoire of oculomotor behaviors, which vary in speed of movement, patterns of binocular coordination, and other properties. Motivated by this functional diversity, we applied the same graph clustering algorithm employed in Fig. 2a to divide modO into two submodules. From the modO connection matrix, it is apparent that within-submodule connectivity is strong, while between-submodule connectivity is weak (Fig. 3a). The two submodules are designated modOM and modOI, for reasons that will be explained later.

    Figure 3: Submodules specialized for two eyes.

    a. Subclustering of modO reveals further modular structure within the oculomotor block, with one module, modOI, largely projecting to abducens internuclear neurons (ABDI), and the other, modOM, projecting to abducens motor neurons (ABDM).

    b. (Top) Locations of somas, along with densities to the sides, for neurons within modOI and modOM. (Bottom left) Presynapses onto dendrites and (bottom right) postsynapses from axons of neurons in modOI and modOM. Every 5th synaptic site was plotted for clarity.

    c. Potential synapse analysis. (Left, center-to-center) Ratio of within-module (modOI and modOM) to between-module potential synapses as a function of potential synapse distance; the table lists the actual true synapse numbers. (Right, center-to-periphery) Potential synapses from modOI onto ABDI and from modOM onto ABDM as a function of potential synapse distance. Numbers in tables are normalized synapse counts, defined as the sum of all synapses in a block divided by the product of the numbers of elements in the block.

    d. Ocular preference index (OPI) for each neuron within modOI and modOM, after removal of DO vestibular neurons.

    There were some statistical differences between modOM and modOI in morphological and ultrastructural properties (Extended Data Fig. 6d, e). The somas of the two submodules showed some spatial segregation, but the presynapses and postsynapses were more intermingled (Fig. 3b). We again quantified the contribution of spatial organization to wiring specificity using the potential synapse formalism, and found that wiring specificity far exceeded what might be predicted from traditional measures of arbor overlaps (Fig. 3c, center-to-center).

    Submodule functions were suggested by patterns of connection from modO to ABD neurons. The abducens complex is composed of two groups: the motor neurons (ABDM), which directly drive the lateral rectus muscle of the ipsilateral eye, and the abducens internuclear neurons (ABDI), which indirectly drive the medial rectus muscle of the contralateral eye through a disynaptic pathway. Increased activity in both subclasses drives eye movements toward the side of the brain on which the neurons reside (‘ipsiversive’ movements). We found that neurons in modOM preferentially connected to ABDM neurons, while neurons in modOI preferentially connected to ABDI neurons. The potential synapse formalism again showed that wiring specificity far exceeded what might be predicted from arbor overlap (Fig. 3c, center-to-periphery). Many modO cells connect only to ABDM or only to ABDI (Fig. 3d, Ocular Preference Index, OPI ≤ −0.75 or ≥ 0.75). We therefore refer to the submodules as motor-targeting (modOM) and internuclear-targeting (modOI), and suggest they are largely involved in controlling movements of the ipsilateral and contralateral eye, respectively.
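    The exact formula for the ocular preference index is given in the paper's Methods; a plausible form, assumed here for illustration only, contrasts each modO neuron's synapse counts onto the two abducens subpopulations:

    ```python
    # Hypothetical form of the ocular preference index (OPI), assumed for
    # illustration: +1 means synapses only onto ABDM (motor-targeting),
    # -1 means only onto ABDI (internuclear-targeting).

    def ocular_preference_index(n_abdm, n_abdi):
        total = n_abdm + n_abdi
        return (n_abdm - n_abdi) / total if total else 0.0
    ```

    Under this convention, cells with OPI ≥ 0.75 or ≤ −0.75 are strongly biased toward one abducens subpopulation, matching the thresholds quoted above.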

    Connectomic prediction of eye position encoding in neural activity

    We hypothesized that modO contains the “neural integrator” for horizontal eye movements that transforms angular velocity inputs into angular position outputs. The necessity of such a velocity-to-position neural integrator (VPNI) was pointed out in the 1960s because sensory and command inputs to the oculomotor system encode eye velocity, whereas the extraocular motor neurons additionally carry an eye position signal23. The VPNI has been investigated in a wide variety of vertebrates, including primates46, cat47, and teleost fish48.

    The transformation of velocity into position is “integration” in the sense of Newtonian calculus. It has long been hypothesized that oculomotor integration depends on recurrent connections within the VPNI circuit. Many recurrent network models of the VPNI have been proposed over the years49–53. Surprisingly, direct synaptic connections between VPNI cells were demonstrated for the first time only recently29. Given our hypothesis that modO contains the VPNI, the substantial fraction of recurrent synapses in modO is consistent with the idea that VPNI function is supported by recurrent connectivity (Fig. 4a). To extract more detailed predictions from the hypothesis, we generated a recurrent network model of the VPNI based on a synaptic weight matrix derived from our EM reconstruction (see Methods). We then used the model to predict how VPNI neurons encode eye position in their activities, and compared the predictions to calcium imaging data.

    Figure 4: Velocity-to-position neural integrator

    a. Histogram of the fraction of all neurons that are recurrent within the identified modules. (Left) Recurrent fraction within modA and modO (µmodA = 0.06±0.05, µmodO = 0.11±0.08, p = 1.2×10−18, Wilcoxon rank-sum test). Box plots, above, show medians (black line) along with 25th and 75th percentiles (box edges). Red crosses are outliers.

    b. Schematic showing the proposed wiring of modO, with the two submodules modOI and modOM, and DO neurons that synapse onto ABDM and ABDI.

    c. Histograms of the eye position sensitivity (k) from a linear model (black) compared with values from fluorescence calcium imaging experiments (red). k is defined as the slope of the firing rate versus eye position relationship (inset). The bimodal distribution of the ABD neurons in the model corresponds to the ABDM and ABDI populations. Circles represent averages and triangles represent medians. Histograms for experimental data show values inside the 1st and 99th percentiles.

    The elements of the model weight matrix were defined as Wij = c Nij / Σk Nik, where Nij is the number of synapses onto neuron i from neuron j and c is an overall scale factor. The normalization by the sum of incoming synapses was included to compensate for the varying amounts by which dendritic arbors of neurons are truncated by the borders of the EM volume. Linear model neurons50 were used for simplicity, because most VPNI cells have linear responses as a function of eye position for ipsilateral eye positions46;48. All VPNI cells in the unilateral model were excitatory28 (Extended Data Fig. 9c, Supplementary Info), and all DO cells were inhibitory34 (Extended Data Fig. 3c), in line with prior literature. The slope of the rate-position relationship is known as the eye position sensitivity. The elements of the leading eigenvector of the weight matrix are proportional to the eye position sensitivities of the cells in the network51, which enabled us to directly extract predictions from the EM connectivity matrix once we set a single scale factor to translate relative neuronal firing rates into absolute eye position sensitivities that could be compared to the experimental response distributions.
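    The construction just described can be sketched as follows. The synapse-count matrix and the total-input counts are toy values; as an assumption of this sketch, each neuron's incoming weights are divided by its total incoming synapse count (which may include synapses from outside the modeled population, so rows need not sum to one), and the leading eigenvector is found by power iteration.

    ```python
    # Sketch of the weight-matrix construction and eigenvector readout:
    # W[i][j] = N[i][j] / total_in[i], and the leading (right) eigenvector
    # of W gives the relative eye position sensitivities, up to the single
    # overall scale factor described in the text.

    def sensitivities(N, total_in, iters=500):
        """N[i][j]: synapses onto neuron i from neuron j (toy values).
        total_in[i]: total incoming synapse count of neuron i, assumed here
        to include inputs from outside the modeled population."""
        n = len(N)
        W = [[N[i][j] / total_in[i] for j in range(n)] for i in range(n)]
        v = [1.0] * n
        for _ in range(iters):
            # one power-iteration step: v <- W v, renormalized by its max
            v = [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]
            s = max(abs(x) for x in v) or 1.0
            v = [x / s for x in v]
        return v  # relative sensitivities
    ```

    For the toy matrix N = [[2, 2], [1, 1]] with total input counts [4, 4], the readout converges to relative sensitivities [1.0, 0.5]: the neuron receiving stronger recurrent input sits higher on the eigenvector.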

    Within the network model, we distinguished between neuron classes that were identified earlier. VPNI cells were defined as those modO cells that are not DO cells. ABD cells were divided into ABDM and ABDI cells (Fig. 4b). Each of these four populations displayed a characteristic distribution of eye position sensitivities (Fig. 4c) that was robust to changes in the centrality threshold (Extended Data Fig. 10a).

    To test these population-level predictions about eye position sensitivities, we extracted eye position sensitivities from our calcium imaging experiments in larval zebrafish (Methods). The population distributions fit remarkably well with the predictions of the network model (Fig. 4c). Thus, key aspects of the coding properties of neurons in the oculomotor system can be derived from the patterns of synaptic connectivity alone. To test the robustness of our result, we corrupted our reconstructed wiring diagram by simulating errors in the automated synapse detection, and found that the population distributions of eye position sensitivities remained very similar (Extended Data Fig. 10c).

    If our EM reconstruction were not available, one could imagine basing a network model on a weight matrix inferred from the overlap of neuronal arbors reconstructed via light microscopy. Essentially this light microscopic approach, with some additional synaptic pruning rules54, has been used by the Blue Brain Project to construct a simulation of a cortical column55. We simulated the light microscopic approach by estimating Nij as the number of potential synapses onto neuron i from neuron j, and then computing the weight matrix Wij with the normalization described earlier (see Methods).

    When our weight matrix inferred based on potential synapses was substituted into our network model, we obtained population distributions for eye position sensitivities that were qualitatively different from experimental measurements (Extended Data Fig. 10d).

    Discussion

    We reconstructed a neuronal wiring diagram from a vertebrate brainstem, and applied it to two fundamental challenges in neuroscience. The first challenge is the division of the brain into modules specialized for distinct functions. We searched for modular organization in the “center” of the wiring diagram, defined by excluding small neuronal fragments. We proposed a hierarchical division, first dividing the center into modA and modO, and then dividing modO into modOI and modOM. These divisions were validated by comparison with center-periphery connectivity and identified subsets of neurons (vSPN, ABD, and DO). The validation also enabled plausible assignments of biological functions: modA and modO are for axial and eye movements (Fig. 2), and modOI and modOM are for movements of the two eyes (Fig. 3). While we have taken a hierarchical approach, flat clustering also yields modules that are similar to modA, modO, modOI and modOM (Extended Data Fig. 5).

    The VPNI has served as a model system for understanding persistent neural firing56;57. The VPNI also exhibits low-dimensional brain dynamics, which have been found to underlie a wide array of motor51;58;59, navigational60–65, and cognitive functions27;66–68. Our claim that modO (excepting the DO neurons) is essentially the VPNI has several novel implications. Since modO extends across rhombomeres 4 through 8, it follows that the VPNI also extends across these rhombomeres. Previous physiological studies of VPNI cells mainly focused on R7/8 (refs. 26–29; 69). Our proposal that the VPNI should be extended also to R4-6 is consistent with the previous observation that integrator function was not completely abolished by inactivation of R7/8 (ref. 26). There was one previous report of eye position signals in R4-6 neurons that are not abducens neurons44, but anatomical evidence for their inclusion in the VPNI has been lacking.

    The second fundamental challenge is the prediction of neural coding properties by network models. We found that the eye position sensitivities of VPNI neurons, DO neurons, and their downstream ABD targets could be matched at the population level by a recurrent network of model neurons with connection matrix constrained by the EM reconstruction (Fig. 4c).

    Through the potential synapse formalism, we investigated whether the modular organization (Figs. 2d, 3c) and prediction of neural coding properties (Extended Data Fig. 10d) depended on true synaptic connectivity as reconstructed by EM, rather than estimates of connectivity based on overlap of axonal and dendritic arbors as available by light microscopy. In both cases, we found that a connection matrix estimated from potential synapses gave poorer results than the true connection matrix.

    This work demonstrates the power of synapse-resolution connectivity data. It was not a given that a linear network model with connection strengths estimated from anatomy could predict neural coding properties that qualitatively match calcium imaging data at the population level, or that graph clustering algorithms could identify modules specialized for different functions. We ignored many anatomical properties, such as the location of synapses within the dendritic arbor, the size of synapses, and the number of vesicles, all of which could significantly impact synaptic strength. Furthermore, the phosphorylation state of postsynaptic receptors or facilitation in presynaptic terminals could greatly modify synaptic efficacy70. Such additional properties may need to be incorporated in network models to generate quantitative predictions of function that match at the single cell level. Nevertheless, our findings here suggest that key properties of even complex networks generating behavior can be elucidated by patterns of synaptic connectivity.

    Author Contributions

    AV - designed and conceptualized study, collected data, imaged EM data, performed registration, data analysis, and wrote paper. ADR - collected light microscopic data, registered and analyzed LM data. JW - code for data generation and curation in Eyewire, meshing, skeletonization, convnet inference. AS - computational modeling. RY - clustering algorithm comparisons. NK - Eyewire data assembly. DI - EM image assembly. NT - synapse detection and partner assignment. KL - boundary detection. IT - Eyewire algorithms development. WMS - Eyewire algorithms development, data manipulation software. CSJ - Eyewire algorithms development, Eyewire system administration. CD, DB - Eyewire moderation, data curation. MSG - conceptualized computational model, wrote the paper. ERFA - designed and conceptualized study, wrote the paper. HSS - designed and conceptualized study, data analysis, wrote the paper. EyeWirers - neuron reconstruction online.

    Declaration of Interest

    HSS has financial interests in Zetta AI LLC.

    Methods

    Image acquisition and alignment

We acquired a dataset of the larval zebrafish hindbrain that extends 250 µm rostrocaudally and includes rhombomeres 4 through 7/8 (R4 to R7/8). The volume extends 120 µm laterally from the midline and 80 µm ventrally from the plane of the Mauthner cell axon. The ssEM dataset was an extension of the original dataset in ref 29, produced by additional imaging of the same serial sections. Only a few tens of neurons had been manually reconstructed in our original publication on the ssEM dataset29. The dataset was stitched and aligned using a custom package, Alembic (see Code availability). The tiles from each section were first montaged in 2D, and then registered and aligned in 3D as whole sections. Point correspondences were generated by block matching via normalized cross-correlations, both between tiles and across sections. The final set of parameters used is listed in Table 1.

Errors in each step were found by a combination of programmed flags (such as fewer correspondences than expected, a small search radius, a large distribution of norms, or high residuals after mesh relaxation) and visual inspection. They were corrected either by changing the parameters or by manual filtering of points. In most cases, the template and the source were both passed through a band-pass filter. Stitching of tiles (montaging) within a single section was split into a linear translation step (premontage) and a non-linear elastic step (elastic montage). In the premontage step, individual tiles were assembled to have 10% overlap between neighboring tiles, as specified during imaging, with a single tile fixed (anchored) in place. They were then translated row by row and column by column according to the single correspondence found between the overlaps. In the elastic montage step, the locations of the tiles were initialized from the translated locations found previously, and block matches were computed every 100 pixels on a regular triangular mesh (see Table 1 for parameters used).
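The core block-matching operation can be illustrated with a minimal sketch: brute-force normalized cross-correlation of a template patch against a search region. This is only a toy stand-in for Alembic's actual implementation (which is far more elaborate); all names here are illustrative.

```python
import numpy as np

def normxcorr(template, search):
    """Normalized cross-correlation of a template against every
    equally sized window of a larger search patch (valid mode).
    Returns the offset of the best match and its correlation."""
    th, tw = template.shape
    sh, sw = search.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_offset = -np.inf, (0, 0)
    for i in range(sh - th + 1):
        for j in range(sw - tw + 1):
            w = search[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = tnorm * np.sqrt((wc ** 2).sum())
            corr = (t * wc).sum() / denom if denom > 0 else 0.0
            if corr > best:
                best, best_offset = corr, (i, j)
    return best_offset, best

# toy example: a template cut from a larger image at a known offset
rng = np.random.default_rng(0)
img = rng.random((20, 20))
tmpl = img[2:10, 3:11]
offset, r = normxcorr(tmpl, img)
```

The returned offset is the displacement of the best-matching window; in the toy example the search recovers the (2, 3) offset the template was cut from.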

Table 1:

    Parameters used for image alignment

Once the correspondences were found, outliers were filtered by checking the spatial distribution of the cross-correlogram (sigma filter), the height of the correlogram peak (r value), the dynamic range of the source patch contrast, the kurtosis of the source patch, local consensus (average of immediate neighbors), and global consensus (within the section). After the errors had been corrected by filtering bad matches, the linear system was solved using conjugate gradient descent. The mean residual errors were in the range of 0.5 - 1.0 pixels after relaxation. The inter-section alignment was split into a translation step (pre-prealignment), a regularized affine step (prealignment), a fast coarse elastic step (rough alignment), and a slow fine elastic step (fine alignment). In the pre-prealignment step, a central patch of the given montaged section was matched to the previous montaged section to obtain the rough translation between the two montaged sections. In the prealignment step, the montaged images were offset by that translation, a small number of correspondences were found between the two montaged sections, and a least-squares affine transform was fit to them, regularized with 10% (empirically derived) of the identity transformation to keep shear from propagating across multiple sections. Proceeding sequentially brought the entire stack roughly into register. The mean residual errors were in the range of 3.5 pixels after relaxation.
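The regularized affine step can be sketched as follows, assuming (as a simplification of the actual solver) that the 10% regularization amounts to blending the least-squares affine solution with the identity transform; the function and variable names are illustrative.

```python
import numpy as np

def fit_affine_regularized(src, dst, lam=0.1):
    """Least-squares 2D affine transform mapping src points onto dst,
    blended with the identity to damp shear (lam = identity weight)."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])       # homogeneous coords [x, y, 1]
    # solve X @ A = dst for the 3x2 affine matrix A
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    identity = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
    return (1 - lam) * A + lam * identity

rng = np.random.default_rng(1)
src = rng.random((10, 2))
true_A = np.array([[1.1, 0.05], [-0.02, 0.95], [3.0, -1.0]])
dst = np.hstack([src, np.ones((10, 1))]) @ true_A
A = fit_affine_regularized(src, dst, lam=0.0)    # no regularization recovers true_A
A_reg = fit_affine_regularized(src, dst)         # 10% pulled toward identity
```

With lam=0 the fit recovers the generating transform exactly; with the default lam=0.1 the estimate is pulled 10% toward the identity, which damps shear accumulation across sections.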

    Convolutional Net Training

    Dataset

    Four expert brain image analysts (Daan Visser, Kyle Willie, Merlin Moore, and Selden Koolman) manually segmented neuronal cell boundaries from six subvolumes of EM images with VAST71, labeling 194.4 million voxels in total. These labeled subvolumes were used as the ground truth for training convolutional networks to detect neuronal boundaries. We used 187.7 million voxels for training and reserved 6.7 million voxels for validation.

    Network architecture

To detect neuronal boundaries, we used a multiscale 3D convolutional network architecture similar to the boundary detector in ref 72. This architecture resembles U-Net73;74, and although it was originally designed to improve upon U-Net, we found no significant evidence that it performs better at neuronal boundary detection. Thus the particular choice of multiscale 3D convolutional network architecture in our image segmentation pipeline may not be important.

    We augmented the original architecture of 72 with two modifications. First, we added a “lateral” convolution between every pair of horizontally adjacent layers (i.e. between feature maps at the same scale). Second, we used batch normalization75 at every layer (except for the output layer). These two architectural modifications were found to improve boundary detection accuracy and stabilize/speed-up training, respectively. For more details, we refer the reader to the Supplementary Section A.1 and Figure S10 in72.

    Training procedures

We implemented the training and inference of our boundary detectors with the Caffe deep learning framework76. We trained the networks on a single Titan X Pascal GPU. We optimized the binary cross-entropy loss with the Adam optimizer77, initialized with α = 0.001, β1 = 0.9, β2 = 0.999, and ε = 0.01. The step size α was halved when the validation loss plateaued, three times during training at 135K, 145K, and 175K iterations. We used a single training example (minibatch of size 1) to compute gradients for each training iteration. The gradient for target affinities (the degree to which image pixels are grouped together) in each training example was reweighted dynamically to compensate for the high imbalance between target classes (i.e. low and high affinities). Specifically, we weighted each affinity inversely proportionally to its class frequency, computed independently within each of the three affinity maps (x, y, and z) and dynamically in each training example. We augmented training data using (1) random flips and rotations by 90°, (2) brightness and contrast augmentation, (3) random warping by combining five types of linear transformation (continuous rotation, shear, twist, scale and perspective stretch), and (4) misalignment augmentation78 with a maximum displacement of 20 pixels in the x and y dimensions. The training was terminated after 1 million iterations, which took about two weeks. We chose the model with the lowest validation loss at 550K iterations.
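The inverse-class-frequency reweighting can be sketched for a single binary affinity map as follows (a minimal illustration, not the actual Caffe implementation; the weight convention assumed here gives each class a total weight proportional to half the map size).

```python
import numpy as np

def class_rebalance_weights(target):
    """Per-voxel weights inversely proportional to class frequency,
    computed independently within one binary affinity map, so that
    the two classes contribute equally to the loss."""
    target = np.asarray(target, dtype=bool)
    n = target.size
    n_pos = target.sum()
    n_neg = n - n_pos
    w = np.empty(target.shape, dtype=float)
    # each class receives total weight n/2, split among its members
    w[target] = n / (2.0 * n_pos) if n_pos else 0.0
    w[~target] = n / (2.0 * n_neg) if n_neg else 0.0
    return w

aff = np.zeros((4, 4), dtype=bool)
aff[0, :] = True                # 4 positives, 12 negatives
w = class_rebalance_weights(aff)
```

Here each of the 4 rare positive voxels gets weight 2.0 while each of the 12 negatives gets 16/24 ≈ 0.67, so both classes carry equal total weight.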

    Convolutional Net Inference

The above trained net was used to produce an affinity map of the whole dataset using the ChunkFlow.jl package79 (see Code availability). Briefly, the computational tasks were defined in a JSON formatted string and submitted to a queue in Amazon Web Services Simple Queue Service (AWS SQS). We launched 13 computational workers locally, each with an NVIDIA Titan X GPU. The workers fetched tasks from the AWS SQS queue and performed the computation. The workers first cut out a chunk of the image volume using BigArrays.jl (see Code availability) and decomposed it into overlapping patches. The patches were fed into the convolutional network model to perform inference in PyTorch80. The output affinity map patches were blended in a buffer chunk. The output chunk was cropped around the margin to reduce boundary effects. The final affinity map chunk, aligned with the block size in cloud storage, was uploaded using BigArrays.jl. Both image and affinity map volumes were stored in Neuroglancer precomputed format (https://neurodata.io/help/precomputed/). The inference took about 17 days in total and produced about 26 terabytes of affinity map.

    Chunk-wise Segmentation

In order to perform segmentation of the entire volume, we divided the volume into ‘chunks’. Overlapping affinity map chunks were cut out using BigArrays.jl (see Code availability), and a size-dependent watershed algorithm 81 (see Code availability) was applied to agglomerate neighboring voxels into supervoxels. The agglomerated supervoxels were represented as a graph in which the supervoxels are nodes and the mean affinity values between contacting supervoxels are the edge weights. A minimum spanning tree was constructed from the graph by recursively merging the highest weight edges. This over-segmented volume containing all supervoxels and the minimum spanning tree was ingested into Google Cloud Storage for proofreading using the crowd-sourced platform EyeWire.
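The merge-highest-edges-first step can be sketched with a union-find over supervoxel edges sorted by mean affinity; this is a toy stand-in for the actual watershed/agglomeration code, with illustrative names.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[rb] = ra
        return True

def agglomerate(n_supervoxels, edges, threshold):
    """Greedily merge supervoxel pairs in order of decreasing mean
    affinity; edges at or above `threshold` join the same segment.
    `edges` is a list of (mean_affinity, u, v) tuples. The merge
    order is the spanning tree recorded for later proofreading."""
    uf = UnionFind(n_supervoxels)
    tree = []
    for aff, u, v in sorted(edges, reverse=True):
        if aff < threshold:
            break
        if uf.union(u, v):
            tree.append((aff, u, v))
    labels = [uf.find(i) for i in range(n_supervoxels)]
    return labels, tree

edges = [(0.9, 0, 1), (0.8, 1, 2), (0.2, 2, 3), (0.95, 3, 4)]
labels, tree = agglomerate(5, edges, threshold=0.5)
```

Lowering the threshold merges more supervoxels, which is exactly the knob Eyewire players controlled with the agglomeration slider.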

    Semi-automated reconstructions on Eyewire

Following segmentation of the data into supervoxels, the data were made available to players on Eyewire (https://eyewire.org), an online crowd-sourced platform. Neurons for reconstruction were selected based on correspondence of EM images with functional imaging of the same animal to identify VPNI neurons29. After this initial round, all pre- and postsynaptic targets of these neurons were reconstructed. The players were provided the option of agglomerating supervoxels using a slider to change the threshold for agglomeration. To ensure accurate reconstructions, we did two things: (1) only players who met a certain accuracy threshold, determined on a previously published dataset (retina, Supplementary info), were allowed to progress to reconstruct zebrafish neurons, and (2) the reconstructions were performed by two players in two rounds (round 1, round 2) in a Wikipedia-like manner, where the later player could modify the previous player's reconstructions. Finally, once the neurons had undergone two rounds of reconstruction, they were validated by expert in-house image analysts, who each have more than 5000 hrs of experience. The resulting accuracy of the players in the crowd as compared to experts (assuming experts are 100%) was >80% in the first round and ∼95% after the second round of tracing. The validated reconstructions were subsequently skeletonized for analysis purposes.

The accuracy of the players' performance was calculated as an F1 score, F1 = 2TP / (2TP + FP + FN), where TP represents true positives, FP false positives, and FN false negatives. All scores were calculated as sums over voxels. TP was assigned when both the player and the expert agreed the segmentation was correct. FN was assigned when the player missed segments that were added in by the expert. FP was assigned when the player erroneously added segments that did not belong. Two F1 scores were calculated for each player, one for round 1 and one for round 2. No player played the same neuron in both rounds. Typically, at an agglomeration threshold of 0.3, the segmentation had an F1 score of 62%.
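The voxel-wise F1 computation above can be written directly; here is a minimal sketch using sets of voxel ids (the actual evaluation sums over label volumes, so the set representation is an illustrative simplification).

```python
def f1_score(player, expert):
    """Voxel-wise F1 = 2TP / (2TP + FP + FN) of a player's
    segmentation against the expert consensus, with both
    segmentations given as sets of voxel ids."""
    tp = len(player & expert)   # voxels both marked
    fp = len(player - expert)   # voxels the player wrongly added
    fn = len(expert - player)   # voxels the player missed
    return 2 * tp / (2 * tp + fp + fn)

expert = {1, 2, 3, 4, 5}
player = {1, 2, 3, 6}           # misses voxels 4 and 5; adds 6
score = f1_score(player, expert)
```

With 3 true positives, 1 false positive, and 2 false negatives this gives F1 = 6/9 ≈ 0.67.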

    Skeletonization

The neuron segmentation IDs were ingested into an AWS SQS queue and multiple distributed workers were launched in Google Cloud using kubernetes. Each worker fetched the segmentation chunks associated with a neuron ID. The segmentation voxels were extracted as a point cloud and the Distance to Boundary Field (DBF) was computed inside each chunk. Finally, a modified version of the skeletonization algorithm TEASAR was applied82. Briefly, we constructed a weighted undirected graph from the point cloud, where neighboring points are connected with an edge and the edge weight is computed from the DBF. Then, we took the point with the largest DBF as the source, and found the furthest point as the target. The shortest path from source to target in the graph was recorded as skeleton nodes. The surrounding points were labeled as visited, and the closest remaining unvisited point was taken as the new source. We repeated this process until all the points were visited. The skeleton node diameter was set as its DBF. The skeleton nodes were post-processed by removing redundant nodes, removing ‘hairs’ based on diameter, removing branches inside the soma, downsampling the nodes, merging single-child segments, and smoothing the skeleton path. All skeletonization was performed at mip level 4.
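A single TEASAR-style pass (largest-DBF source, farthest target, shortest path) can be sketched on a toy graph; this omits the visited-point bookkeeping, edge weighting from the DBF, and all post-processing, and the names are illustrative.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances and predecessors from `source` in a
    weighted undirected graph given as {node: {neighbor: weight}}."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def skeleton_path(graph, dbf):
    """One TEASAR-style pass: start at the point with the largest
    distance-to-boundary value (DBF), walk to the farthest point."""
    source = max(dbf, key=dbf.get)
    dist, prev = dijkstra(graph, source)
    target = max(dist, key=dist.get)
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1]

# toy point cloud: chain 0-1-2-3 with a side branch 1-4; node 1 is "fattest"
graph = {0: {1: 1}, 1: {0: 1, 2: 1, 4: 1}, 2: {1: 1, 3: 1},
         3: {2: 1}, 4: {1: 1}}
dbf = {0: 1.0, 1: 3.0, 2: 1.0, 3: 1.0, 4: 1.0}
path = skeleton_path(graph, dbf)
```

The pass starts at node 1 (largest DBF) and walks to node 3, the farthest point, yielding the centerline segment [1, 2, 3]; repeated passes over unvisited points would pick up the branches.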

    Synapse detection

Synapses were automatically segmented in this dataset using neural networks to detect clefts and assign the correct partner as previously described83. Briefly, a subset of the imaged data (219 µm³) was selected for annotation. The annotations were performed using the manual annotation tool VAST71. Trained human annotators labeled the voxels that were part of a synaptic cleft, the postsynaptic density (PSD), and presynaptic docked vesicle pools. A convolutional neural network was trained to match these labels, using 107 µm³ as a training set and 36 µm³ as a validation set, leaving the remaining 76 µm³ as an initial test set. All of these sets were compared to the predictions of the model tuned on the validation set to an Fβ score with β = 1.5, in order to bias towards recall, where recall = TP / (TP + FN). Biasing the predictor towards recall reduces false negatives at the cost of more false positives, which are easier to correct. Apparent human errors were corrected, and training was restarted with a new model. We also later expanded the test set by proofreading similar automated results applied to new sections of the datasets (to increase representation of rare structures in the full image volume). The final model used an RS-UNet architecture78 implemented using PyTorch80, and was trained using a manual learning rate schedule, decreasing the rate by a factor of 10 when the smoothed validation error converged. The final network reached 86% precision and 83% recall on the test set after 230k training iterations.

A convolutional network was also trained to assign synaptic partners to each predicted cleft as previously described83. All 361 synapses in the ground truth were labeled with their synaptic partners, and the partner network used 204 synapses as a training set, 73 as a validation set, and the remaining 84 as a test set. The final network was 95% accurate in assigning the correct partners of the test set after 380k training iterations.

The final cleft network was applied across the entire image volume, and discrete predictions of synaptic clefts were formed by running a distributed version of connected components. Each cleft was assigned synaptic partners by applying the partner network to each predicted cleft within non-overlapping regions of the dataset (1024 x 1024 x 1792 voxels each). In cases where a cleft spanned multiple regions, the assignment within the region that contained most of that cleft was accepted, and the others were discarded. Cleft regions whose centroid coordinates were within 1 µm of each other and that were assigned the same synaptic partners were merged together in order to repair artificially split components.

Finally, spurious synapse assignments (i.e., postsynapses on axons and presynapses on dendrites) were cleaned by querying the identities of the 10 nearest synapses to every synapse. If the majority of the 10 nearest neighbors were of the same identity (pre or post), the synapse was considered correctly assigned. If the majority were of the opposing identity, the synapse was considered wrongly assigned and was deleted. This process eliminated 1975 falsely assigned synapses (∼2% of the total).
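The nearest-neighbor majority vote can be sketched as follows (a brute-force NumPy version on toy coordinates; the production pipeline would use a spatial index, and all names are illustrative).

```python
import numpy as np

def clean_assignments(coords, is_pre, k=10):
    """Flag synapse assignments whose identity (pre vs. post)
    disagrees with the majority vote of their k nearest neighbors."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude the synapse itself
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = is_pre[nearest].mean(axis=1)   # fraction of 'pre' neighbors
    majority_pre = votes > 0.5
    return majority_pre != is_pre          # True = likely misassigned

rng = np.random.default_rng(2)
pre = rng.normal(0.0, 1.0, size=(40, 3))    # a presynaptic cluster
post = rng.normal(10.0, 1.0, size=(40, 3))  # a distant postsynaptic cluster
coords = np.vstack([pre, post])
is_pre = np.zeros(80, dtype=bool)
is_pre[:40] = True
is_pre[0] = False                           # one mislabeled synapse
spurious = clean_assignments(coords, is_pre, k=10)
```

Only the deliberately mislabeled synapse sitting inside the presynaptic cluster is flagged; deleting the flagged entries is the analogue of the ∼2% cleanup described above.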

    LM/EM registration

Registration of the EM dataset to the reference atlas (ZBrain, 45) was carried out in two stages. We created an intermediate EM stack from the low resolution (270 nm/pixel) EM images of the entire larval brain tissue. This intermediate stack had the advantage of a similar field of view to the LM reference volume, while also being of the same imaging modality as the high-resolution EM stack. The low-resolution EM stack was registered to the reference brain by fitting an affine transform that maps the entire EM volume onto the LM volume. To do this, we selected corresponding points such as neuronal clusters and fiber tracts using the tool BigWarp84. These corresponding points were used to determine an affine transform using the MATLAB least squares solver (mldivide). Subsequently, the intermediate EM stack, now in the same reference frame as the ZBrain atlas, was used as the template onto which the high-resolution EM stack was registered. This was performed in a similar manner, by selecting corresponding points and fitting an affine transform. The resulting transform maps points from the high-resolution EM space to the reference atlas space and was used to map the reconstructed skeletons into the reference atlas space.
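One stage of this landmark-based registration can be sketched as a least-squares 3D affine fit (the NumPy analogue of MATLAB's mldivide), applied to toy landmark coordinates; the names are illustrative.

```python
import numpy as np

def fit_affine(src, dst):
    """mldivide analogue: least-squares affine transform (4x3 in
    homogeneous form) mapping 3D src landmarks onto dst landmarks."""
    X = np.hstack([src, np.ones((len(src), 1))])   # n x 4 homogeneous
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # 4 x 3 affine matrix
    return A

def apply_affine(A, pts):
    """Map 3D points (e.g. skeleton nodes) through the fitted affine."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A

rng = np.random.default_rng(3)
em_pts = rng.random((12, 3))                       # toy EM-space landmarks
A_true = np.vstack([np.diag([2.0, 2.0, 0.5]), [10.0, -5.0, 0.0]])
lm_pts = apply_affine(A_true, em_pts)              # their atlas-space positions
A = fit_affine(em_pts, lm_pts)
mapped = apply_affine(A, em_pts)
```

With exact correspondences the fit recovers the generating transform; in practice the residuals of the landmark fit indicate registration quality, and the same `apply_affine` maps every skeleton node into atlas space.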

    Functional imaging and functional maps

The complete methods for recording calcium activity used to create the functional maps are reported in44. Briefly, we used two-photon raster-scanning microscopy to image calcium activity from single neurons throughout the hindbrain of 7-8 day old transgenic larvae expressing nuclear-localized GCaMP6f (Tg(HuC:GCaMP6f-H2B), strain cy73-431 from Misha Ahrens' lab). Horizontal eye movements were recorded simultaneously with the calcium signals using a sub-stage CMOS camera. We used the CaImAn-MATLAB software to extract the neuronal locations from fluorescence movies85.

We analyzed saccade-triggered average (STA) activity to determine which neurons were related to eye movements (see44 for complete details). For each cell, we interpolated fluorescence activity that occurred within five seconds before or after saccades to a grid of equally spaced, ⅓ second timepoints and then averaged the interpolated activity across saccades to compute the STA; the STA could be computed around saccades to the left or to the right. We performed a one-way ANOVA on each STA to determine which neurons had significant saccade-triggered changes in average activity (p<0.01 using the Holm-Bonferroni method to correct for multiple comparisons). To determine which of these neurons had activity related to eye position and eye velocity, we first performed a Principal Components Analysis (PCA) on the STAs from neurons with significant saccade-triggered changes. We found that the first and second principal components had post-saccadic activity largely related to position and velocity sensitivity, respectively (see Figure 3A in44). We characterized each STA using a scalar index, called ϕ in44, created from that STA's projections onto the first two principal components, and found that this index characterizes well the average position- and velocity-related activity seen across the population (see Figure 3C in44 for a map of ϕ values and population average STAs). Position and velocity neurons were defined as neurons with an STA whose ϕ value was within a specific range (−83 to 37 and 38 to 68, respectively). We removed any neurons with pre-saccadic activity that was significantly correlated with time until the upcoming saccade. The locations of each neuron were then registered to the Z-Brain atlas using methods similar to those listed in the previous section (see44 for complete details).

    Computing slopes of eye-position dependence

To assess the functional characteristics of various oculomotor cells (Figure 4) we fit a classical model of eye position sensitivity to neuronal firing rates extracted from our fluorescence measurements. We approximated the firing rate, r, of a cell using the deconvolution algorithm with non-negativity constraint described in86. We modeled the dependence of firing rate on eye position using the equation r = [k(E − Eth)]+, where k and Eth are free parameters and the function [x]+ = x if x > 0 and 0 otherwise87. In order to best compare results across animals, we normalized the units of eye position before fitting the model by subtracting the median eye position relative to the null position (measured as the average raw eye position) and then dividing by the 95th percentile of the resulting positions. Since our focus was on a cell's position-dependence, we also removed the eye velocity dependent burst of spiking activity at the saccade time that abducens and VPNI neurons are known to display, by removing samples that occur within 1.5 seconds before or 2 seconds after each saccade. Saccade times were determined in an automated fashion by determining when eye velocity crossed a threshold value44. Finally, since the eye position and fluorescence were recorded at different sampling rates, we linearly interpolated the values of neuronal activity at the eye position sample times. To fit the value of k for each cell, we defined eye movements toward the cell's responsive direction as positive, so that k is positive by construction. We then determined the threshold parameter Eth/k using an iterative estimation technique based on a Taylor series approximation of [x]+ described in ref 88. Using the resulting estimate of Eth/k, we determined k as the slope resulting from a linear regression, with offset, of r onto the regressor [E − Eth/k]+.
Since we did not know the cell's responsive direction a priori, we ran the model twice, once with movements to the left as positive and once with movements to the right as positive, and used the value of k that resulted in the higher R2 value. We fit the model using positions from the contralateral eye for the abducens internuclear neurons and the ipsilateral eye for all other neurons. A neuron was identified as an abducens (ABD), abducens internuclear (ABDI), or descending octavolateral (DO) neuron if its location on the ZBrain atlas was within 5 microns of an ABDM, ABDI, or DO cell determined from EM anatomy (since EM reconstruction was only performed for one hemisphere, we duplicated the locations of the EM-identified neurons and mirrored them onto the opposite hemisphere). As a goodness-of-fit measure we required all neurons, except DO cells, to have an R2 value greater than 0.4. Additionally, non-DO cells were required to have a saccade-triggered average with at least one significant time point (p<0.01 by an ANOVA test using Holm-Bonferroni correction) as defined in ref 44, and to have a ΔF/F (defined as (F − F0)/F0, where F0 is the baseline fluorescence) response that was loosely related to eye position (R2 greater than 0.2 when we ran the above model replacing r with ΔF/F). The eye position sensitivity for fluorescence data was then scaled to average physiological VPNI responses from goldfish48;89.
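The threshold-linear fit can be sketched with a simple grid search over candidate thresholds, regressing the rate on the rectified regressor at each one; this is a stand-in for the Taylor-series iteration of ref 88, with a regression through the origin for brevity and illustrative names throughout.

```python
import numpy as np

def fit_threshold_linear(E, r, n_grid=51):
    """Fit r = [k (E - Eth)]+ by scanning candidate thresholds and
    regressing r (through the origin) on the rectified regressor.
    Returns (R^2, slope k, threshold Eth) of the best candidate."""
    best_r2, best_k, best_Eth = -np.inf, None, None
    ss_tot = ((r - r.mean()) ** 2).sum()
    for Eth in np.linspace(E.min(), E.max(), n_grid):
        x = np.clip(E - Eth, 0.0, None)
        if not np.any(x):
            continue
        k = (x * r).sum() / (x * x).sum()
        r2 = 1.0 - ((r - k * x) ** 2).sum() / ss_tot
        if r2 > best_r2:
            best_r2, best_k, best_Eth = r2, k, Eth
    return best_r2, best_k, best_Eth

rng = np.random.default_rng(4)
E = rng.uniform(-1.0, 1.0, 200)              # normalized eye positions
r_true = np.clip(2.0 * E, 0.0, None)         # synthetic cell: k = 2, Eth = 0
r2, k, Eth = fit_threshold_linear(E, r_true)
```

On the synthetic cell the scan recovers a slope near 2 and a threshold near 0, with R2 close to 1; the same R2 criterion is what the goodness-of-fit cutoffs above are applied to.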

    Modular analysis

We organized the reconstructed neurons into two groups: those in a recurrently connected “center”, where feedback interactions are expected to establish collective dynamics, and those in the “periphery” that project to or collect input from the center. To do so, we first defined a connection matrix Nij as the number of synapses onto node i from node j. We computed the left and right eigenvectors of the connection matrix corresponding to the eigenvalue with maximal real part. The elements of these eigenvectors, which are all non-negative, are regarded as measures of network “centrality”. We define eigen-centrality as the geometric mean of these left and right eigenvectors. This terminology is helpful because nodes with zero out-degree (orphan dendrites) have zero out-centrality, and nodes with zero in-degree (orphan axons) have zero in-centrality. We note for completeness that our definition of centrality is properly termed an eigen-centrality; a similar organization could be generated using the related concept of degree-centrality. Degree-centrality (the geometric mean of in-degree and out-degree) and eigen-centrality are correlated, but not perfectly (Extended Data Fig. 1c).
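The eigen-centrality computation can be sketched directly from this definition (a minimal NumPy version on a toy graph, assuming the convention N[i, j] = synapses onto i from j as above; the names are illustrative).

```python
import numpy as np

def perron_vector(M):
    """Eigenvector of M for the eigenvalue with maximal real part,
    rectified and normalized to sum to one."""
    vals, vecs = np.linalg.eig(M)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

def eigen_centrality(N):
    """Geometric mean of in-centrality (right eigenvector of N) and
    out-centrality (left eigenvector, i.e. right eigenvector of N.T),
    where N[i, j] counts synapses onto node i from node j."""
    in_c = perron_vector(N)
    out_c = perron_vector(N.T)
    return np.sqrt(in_c * out_c)

# toy graph: a 3-cycle "center", an orphan axon (3), an orphan dendrite (4)
N = np.zeros((5, 5))
N[1, 0] = N[2, 1] = N[0, 2] = 1.0   # recurrent cycle 0 -> 1 -> 2 -> 0
N[0, 3] = 1.0                        # node 3 only projects into the cycle
N[4, 0] = 1.0                        # node 4 only receives from the cycle
cen = eigen_centrality(N)
```

The cycle nodes get positive eigen-centrality, while the orphan axon (zero in-centrality) and orphan dendrite (zero out-centrality) both get zero, which is exactly why the geometric mean is a convenient "center" criterion.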

Using the above method, we defined a ‘center’ that contained 540 recurrently connected neurons (eigen-centrality >10−8). We then applied three graph-clustering algorithms, the Louvain algorithm, spectral clustering, and stochastic block model (SBM) inference, to divide the center into modules. The Louvain-based method is presented in the main text; the others are described in the Extended Data.

    Louvain Clustering

Graph clustering was performed using the Louvain clustering algorithm, which identifies ‘communities’ or ‘modules’ in an interconnected network by optimizing the ‘modularity’ of the network, where modularity measures the (weighted) density of connections within a module compared to between modules. Formally, the modularity measure maximized is Qgen = ∑ij Bij δ(ci, cj), where δ(ci, cj) equals 1 if neurons i and j are in the same module and 0 otherwise, and Bij = Wij − γsisj/ω. Here si = ∑c Wic is the sum of weights out of node i, sj = ∑c Wcj is the sum of weights into node j, ω = ∑cd Wcd is the total sum of weights in the network, and the resolution parameter γ determines how much the naively expected weight of connections, γsisj/ω, is subtracted from the connectivity matrix. Potential degeneracy in graph clustering was addressed by computing the consensus of the clustering, similar to 90. Briefly, an association matrix, counting the number of times each pair of nodes (neurons) is assigned to the same module, was constructed by running the Louvain algorithm 200 times. Next, a randomized association matrix was constructed by permuting the module assignments. Reclustering the thresholded association matrix, where the threshold was the maximum element of the randomized association matrix, provided consensus modules. We used the community_louvain.m function from the Brain Connectivity Toolbox package (BCT, https://sites.google.com/site/bctnet/Home). In addition to the Louvain graph-clustering algorithm, we also clustered the ‘center’ with two alternative graph-clustering algorithms, spectral clustering and stochastic block modeling, described below.
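The consensus procedure can be sketched generically over any set of clustering runs (here supplied as precomputed label arrays standing in for 200 Louvain runs); the association matrix, permuted-null threshold, and final connected-components step follow the description above, with illustrative names.

```python
import numpy as np

def consensus_clusters(runs, seed=0):
    """Consensus over repeated clusterings (list of label arrays):
    build a co-assignment (association) matrix, threshold it at the
    maximum off-diagonal co-assignment seen with permuted labels,
    and take connected components of the surviving graph."""
    runs = np.asarray(runs)
    n = runs.shape[1]
    rng = np.random.default_rng(seed)
    assoc = np.zeros((n, n))
    null = np.zeros((n, n))
    for labels in runs:
        assoc += labels[:, None] == labels[None, :]
        perm = rng.permutation(labels)           # randomized assignment
        null += perm[:, None] == perm[None, :]
    np.fill_diagonal(null, 0)
    keep = assoc > null.max()                    # chance-level threshold
    # connected components of the thresholded association graph
    out = -np.ones(n, dtype=int)
    comp = 0
    for s in range(n):
        if out[s] >= 0:
            continue
        stack, out[s] = [s], comp
        while stack:
            u = stack.pop()
            for v in np.nonzero(keep[u])[0]:
                if out[v] < 0:
                    out[v] = comp
                    stack.append(int(v))
        comp += 1
    return out

# 48 clean runs of a 6-node, 2-module toy network plus 2 noisy runs
runs = [[0, 0, 0, 1, 1, 1]] * 48 + [[0, 0, 1, 1, 1, 1], [1, 0, 0, 0, 1, 1]]
modules = consensus_clusters(runs)
```

Pairs co-assigned far more often than the permuted null survive the threshold, so the two planted modules are recovered despite the noisy runs.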

    Spectral Clustering

We employed a generalized spectral clustering algorithm for weighted directed graphs, proposed in38, to bisect the zebrafish ‘center’ subgraph. Given a graph G(V, E) and its weighted adjacency matrix A ∈ ℝn×n, where Aij indicates the number of synapses from neuron i to neuron j, one can construct a Markov chain on the graph with a transition matrix Pα such that [Pα]ij := (1 − α) · Aij/ ∑k Aik + α/n. The coefficient α > 0 ensures that the constructed Markov chain is irreducible, and the Perron-Frobenius theorem guarantees that Pα has a unique positive left eigenvector π with eigenvalue 1, also called the stationary distribution. The normalized symmetric Laplacian of the Markov chain is ℒ = I − (Φ1/2PαΦ−1/2 + Φ−1/2PαTΦ1/2)/2, where Φ = diag(π).

To approximately search for the optimal cut, we utilize the Cheeger inequality for a directed graph38, which bridges the spectral gap of ℒ and the Cheeger constant ϕ∗. Ref91 shows that the eigenvector v corresponding to the second smallest eigenvalue of ℒ, λ2, yields near-optimal clusters. We obtained two clusters by a binary rounding scheme, i.e., S = {i ∈ V | vi ≥ 0} and its complement S̄ = {i ∈ V | vi < 0}.
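The whole pipeline, from the teleporting transition matrix through the symmetric directed Laplacian to the sign-based rounding, can be sketched on a toy two-module graph (a dense NumPy version; the actual analysis used a sparse modification of NetworkX's directed_laplacian_matrix, and the names here are illustrative).

```python
import numpy as np

def directed_spectral_bisect(A, alpha=0.05):
    """Bisect a weighted directed graph (A[i, j] = weight of edge
    i -> j) by signs of the second eigenvector of the symmetric
    Laplacian of its teleporting random walk."""
    n = A.shape[0]
    P = (1 - alpha) * A / A.sum(axis=1, keepdims=True) + alpha / n
    # stationary distribution: dominant left eigenvector of P
    vals, vecs = np.linalg.eig(P.T)
    pi = np.abs(vecs[:, np.argmax(vals.real)].real)
    pi /= pi.sum()
    sq = np.sqrt(pi)
    M = (sq[:, None] * P) / sq[None, :]        # Phi^1/2 P Phi^-1/2
    L = np.eye(n) - 0.5 * (M + M.T)            # symmetric directed Laplacian
    w, V = np.linalg.eigh(L)
    v = V[:, 1]                                 # second-smallest eigenvector
    return v >= 0                               # binary rounding

# toy graph: two directed 3-cycles with weak cross-links
A = np.zeros((6, 6))
A[0, 1] = A[1, 2] = A[2, 0] = 1.0
A[3, 4] = A[4, 5] = A[5, 3] = 1.0
A[2, 3] = A[5, 0] = 0.01
labels = directed_spectral_bisect(A)
```

The weakly coupled cycles are split cleanly into the two sign classes, the same rounding used to bisect the ‘center’ subgraph.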

We modified the directed_laplacian_matrix function in the NetworkX package (https://networkx.github.io) to calculate the symmetric Laplacian for sparse connectivity matrices, with a default α = 0.05. We also computed the spectral gap λ2 for the eigenvector-centrality subgraph and for the partitioned oculomotor (modO) module.

Degree-corrected Stochastic Block Model (SBM)

Unlike the Louvain and spectral clustering algorithms, which assume denser connectivity within clusters than between clusters, stochastic block models can describe more general connectivity patterns. We applied an efficient statistical inference algorithm39 to obtain the stochastic block model (SBM) that best describes the ‘center’ subgraph.

The traditional SBM92 is composed of n vertices divided into B blocks with {nr} vertices in each block, and with the probability prs that an edge exists from block r to block s. Here we use an equivalent parametrization, replacing the probability parameters with the expected edge counts ers = nrnsprs. The degree-corrected stochastic block model93 further specifies the in- and out-degree sequences of the graph as additional parameters.

To infer the best block membership {bi} of the vertices in the observed graph G, we maximize the likelihood P(G|{bi}) = 1/Ω({ers}, {nr}), where Ω({ers}, {nr}) is the total number of different graph realizations with the same in- and out-degree sequences and with {ers} edges among and within blocks of sizes {nr}, corresponding to the block membership {bi}. Maximizing the likelihood is therefore equivalent to minimizing the microcanonical entropy94 S({ers}, {nr}) = ln Ω({ers}, {nr}); here M = ∑rs ers denotes the total number of edges.

    We used the minimize_blockmodel_dl function in the graph-tool package (https://graph-tool.skewed.de) to bisect the central subgraphs by setting Bmin = Bmax = 2 and degcorr = true.

    Network level modeling

A full-scale, one-sided model was built using the reconstructed synapses for the ABD, DO, and VPNI populations. Although the VPNI is a bilateral circuit, previous experiments87;89 have shown that one half of the VPNI is nevertheless capable of maintaining the ipsilateral range of eye positions after the contralateral half of the VPNI is silenced. This may reflect that most neurons in the contralateral half are below their thresholds for transmitting synaptic currents to the opposite side when the eyes are at ipsilateral positions53. Therefore, we built a recurrent network model of one half of the VPNI circuit based on the modO neurons that we had reconstructed from one side of the zebrafish brainstem. For simplicity, we did not include the modA neurons, assuming that the weak connectivity between the axial and oculomotor modules makes their contributions negligible, similar to the manner in which bilateral interactions have been shown to be negligible for the maintenance of persistent activity. We added the ABD neurons and the feedforward connections from modO to ABD to the model, because the ABD neurons are the “read-out” of the oculomotor signals in the VPNI.

The elements of the model weight matrix are Wij = βNij/∑k Nik, where Nij is the number of synapses onto neuron i from neuron j. The overall scale factor β was applied to tune the principal eigenvalue to one, a necessary condition for generating stable temporal integration in the network model. The automated synapse detection identified a small number of synapses (8 synapses from 3 ABD neurons) projecting from the abducens onto integrator and vestibular neurons. Manual inspection of these synapses indicated that they were false positives, so they were neglected when building the model. Projections from the DO population were taken to be inhibitory, while all other connections were taken to be excitatory (supplementary information). The remaining neurons in the oculomotor module were included as part of the VPNI. We did not consider other vestibular populations since only DO neurons in modO received vestibular afferents from the VIIIth nerve; saccadic burst neurons are likely located in R2/344, outside of our reconstructed volume; and the recently discovered pre-saccadic ramp neurons44 are quite sparse in comparison to neurons with position signals.

Directed connection weights between each pair of neurons were set in proportion to the number of synapses from the presynaptic neuron onto the postsynaptic neuron, divided by the total number of synapses onto the postsynaptic neuron. The weights were then scaled to achieve perfect integration in a linear rate model governed by τ dri/dt = −ri + ∑j Wij rj, where ri is the firing rate of the ith neuron, Wij are the connection weights, and τ, the intrinsic time constant, is 100 ms. Position sensitivities were determined for the neurons in each model by simulating the response of the network to a pulse of input along the integrating direction. The position sensitivity for each neuron was then determined by averaging across models and scaled to average physiological VPNI responses from goldfish48;89.
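The weight normalization, eigenvalue tuning, and pulse simulation can be sketched end-to-end on a toy synapse-count matrix (forward-Euler integration of the linear rate equation; all excitatory for simplicity, and the names are illustrative).

```python
import numpy as np

def tune_and_integrate(N_syn, tau=0.1, dt=0.001, t_pulse=0.2, t_total=1.0):
    """Normalize a synapse-count matrix by total input per neuron,
    scale it so the leading eigenvalue is 1, then drive the linear
    rate network  tau * dr/dt = -r + W r + I(t)  with a brief input
    pulse along the integrating (leading eigen-) direction.
    N_syn[i, j] = synapses onto neuron i from neuron j."""
    W = N_syn / N_syn.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(W)
    k = np.argmax(vals.real)
    W = W / vals.real[k]                       # leading eigenvalue -> 1
    mode = np.abs(vecs[:, k].real)             # integrating direction
    r = np.zeros(len(W))
    for step in range(int(t_total / dt)):      # forward Euler
        inp = mode if step * dt < t_pulse else 0.0
        r = r + (dt / tau) * (-r + W @ r + inp)
    return r, mode

rng = np.random.default_rng(5)
N_syn = rng.integers(1, 10, size=(5, 5)).astype(float)  # toy synapse counts
r, mode = tune_and_integrate(N_syn)
```

During the pulse, activity ramps up along the integrating mode; after the pulse it persists rather than decaying, which is the hallmark of integration when the leading eigenvalue is tuned to one. The per-neuron components of the persistent pattern are the model's position sensitivities.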

    As a control, we generated a connectome that accounted for the false positive and false negative rates of synapse detection by the neural nets (“synaptic detection jitter”). We generated 1000 models by randomly varying the identified synapses according to the false positive and false negative rates and calculated the connection weights as described above. The eye position sensitivities with synaptic detection jitter were reported as the average of these 1000 models (Extended Data Fig. 10c).
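    The jitter ensemble can be sketched as follows. The synapse counts and error rates here are hypothetical stand-ins, and the per-neuron “sensitivity” is reduced to the principal-mode loading rather than the full simulation above, for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, n_models = 20, 1000              # 1000 jittered models, as in the text
    fp_rate, fn_rate = 0.05, 0.10       # hypothetical detection error rates

    # Hypothetical detected synapse counts (random stand-in data).
    S = rng.poisson(1.0, size=(n, n))

    sensitivities = []
    for _ in range(n_models):
        # Discard detected synapses at the false-positive rate and add
        # synapses at the false-negative rate (a rough stand-in, since the
        # true number of missed synapses is unknown).
        removed = rng.binomial(S, fp_rate)
        added = rng.binomial(np.maximum(S, 1), fn_rate)
        S_jit = S - removed + added
        W = S_jit / np.maximum(S_jit.sum(axis=1, keepdims=True), 1)
        # Per-neuron "sensitivity" reduced to the principal-mode loading.
        evals, evecs = np.linalg.eig(W)
        sensitivities.append(np.abs(np.real(evecs[:, np.argmax(np.abs(evals))])))

    k_mean = np.mean(sensitivities, axis=0)   # average over the ensemble
    ```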

    Code availability

    • Alembic - https://github.com/seung-lab/Alembic.git

    • BigArrays.jl - https://github.com/seung-lab/BigArrays.jl with Apache License Version 2.0.

    • ChunkFlow.jl - https://github.com/seung-lab/ChunkFlow.jl with Apache License Version 2.0.

    • Watershed - https://github.com/seung-lab/Watershed.jl with GNU General Public License v3.0.

    • Agglomeration - https://github.com/seung-lab/Agglomeration with MIT License.

    • Skeletonization and morphology functions are available at https://github.com/seung-lab/RealNeuralNetworks.jl with Apache License 2.0.

    Extended Data Figures

    Extended Data Figure. 1: Network features

    a. Frequency distribution of the number of synapses per connection on the reconstructed graph

    b. Eigencentrality, defined as the geometric mean of the left and right eigenvectors.

    c. Eigencentrality vs. Degree centrality

    d. Top 10 in-degree neurons, of which 4 are vSPNs and 5 are ABD neurons.

    e. Network diagram of the reconstructed neurons, where dots represent individual nodes and lines represent edges between nodes. The left eigenvector is referred to as out-centrality and the right eigenvector as in-centrality. The network representation shows neurons in the ‘center’ and in the ‘periphery’. Only edges with more than 5 synapses are shown, for clarity.
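    The eigencentrality measure described in this figure can be computed as follows; the adjacency matrix is a hypothetical stand-in, and the convention that A[i, j] is the weight from neuron j onto neuron i is an assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.random((30, 30))            # hypothetical weighted adjacency matrix

    def principal_vector(M):
        """Leading eigenvector of M, normalized to sum to one."""
        evals, evecs = np.linalg.eig(M)
        v = np.abs(np.real(evecs[:, np.argmax(np.real(evals))]))
        return v / v.sum()

    right = principal_vector(A)         # in-centrality
    left = principal_vector(A.T)        # out-centrality

    # Eigencentrality: geometric mean of the left and right eigenvectors.
    eigencentrality = np.sqrt(left * right)
    ```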

    Extended Data Figure. 2: Registration of EM and LM to reference atlas

    a. Maximum intensity projection of the registration of 2P functional imaging (green) onto the Z-brain reference atlas (magenta). Neurons that were responsive to right eye positions are reported. (Bottom) Example response of a neuron (black) to the eye position (blue, yellow). Scale bar 100µm

    b. Single plane views of the registration of the ssEM volume (green) to Z-brain reference atlas (magenta). Scale bar 100µm

    Extended Data Figure. 3: Identification of abducens and DO neurons.

    a. Overlap of abducens neurons with transgenic lines registered to Z-brain atlas. (Left) Location of ABDM neurons as indicated in fluorescence micrographs by two clusters of neurons in R5,6. Corresponding locations of ABDM neurons from EM reconstructions are shown in green. (Right) Location of two clusters of glutamatergic interneurons in fluorescence corresponds with EM reconstructions of ABDI neurons.

    b. (Left) EM reconstruction of DO neurons (red) along with inputs (blue) identified as originating from the vestibulocochlear nerve (VIIIth nerve, black arrows). Side views of the overlap of the axons identified as part of the VIIIth nerve with the isl2 transgenic line: (middle) single plane, (right) maximum intensity projection. SAG - Statoacoustic Ganglia.

    c. Overlap of the DO neurons with transgenic lines that label markers for excitatory and inhibitory neurotransmitters, registered to the Z-brain atlas. DO somata overlap with gad-1b labeled neurons (red circle), but not with Vglut or gly2.1. Arrows indicate the trajectory of the vestibular nerve.

    Extended Data Figure. 4: Identification of vSPNs.

    a. vSPNs were identified based on their overlap with neurons labeled by spinal backfills that were part of the Zbrain resource. Scale bar 50µm.

    Extended Data Figure. 5: Modular structure at different scales.

    a. Number of modules obtained and robustness of clusters as a function of resolution parameter γ.

    b. Connectivity matrices when clustered for different numbers of modules. At γ = 0.7, the emergent modular structure is similar to that obtained when the modules in Fig. 2 (modA, modO) and Fig. 3 (modOM, modOI) are clustered hierarchically.
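    The role of the resolution parameter γ can be illustrated with the Reichardt & Bornholdt modularity objective; this is a sketch on a synthetic two-module graph, not the paper’s exact clustering pipeline.

    ```python
    import numpy as np

    def modularity(A, labels, gamma=1.0):
        """Directed modularity with resolution parameter gamma
        (Reichardt & Bornholdt)."""
        m = A.sum()
        k_out, k_in = A.sum(axis=1), A.sum(axis=0)
        same = labels[:, None] == labels[None, :]
        return ((A - gamma * np.outer(k_out, k_in) / m) * same).sum() / m

    # Synthetic two-module graph: dense within-module blocks, sparse between.
    A = np.kron(np.eye(2), np.ones((5, 5))) * 3 + 0.1
    np.fill_diagonal(A, 0)
    labels = np.repeat([0, 1], 5)

    # Larger gamma penalizes large modules, favoring finer partitions.
    q_low, q_high = modularity(A, labels, 0.5), modularity(A, labels, 2.0)
    ```

    Sweeping γ and re-clustering at each value yields the module counts shown in panel a: small γ merges everything into few modules, while large γ fragments the graph.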

    Extended Data Figure. 6: Synapse size and location distribution

    a. Histogram of total pathlength of neurons in modA (n = 251) and modO (n = 289). P is the significance value based on the Wilcoxon rank-sum test (RS-test). Cohen’s D measures the effect size.

    b. Histogram of synapse sizes (detected PSDs) for within-module (left) and between-module (right) connections between modA and modO. P is the significance value based on the Wilcoxon rank-sum test (RS-test). The table summarizes the means and standard deviations.

    c. Histogram of synapse locations, i.e., distances from the soma to the synaptic site along the neurite, for neurons in modA and modO. The table summarizes the means and standard deviations.

    d. Histogram of synapse sizes (detected PSDs) for within-module (left) and between-module (right) connections for modOI and modOM. The table summarizes the means and standard deviations.

    e. Histogram of synapse locations for neurons within the oculomotor module, modOI and modOM.

    Extended Data Figure. 7: Clustering algorithm comparisons

    a. Spectral clustering (see Methods) into two modules, along with the normalized number of synapses in each block.

    b. Degree-corrected stochastic block model (SBM) clustering (see Methods) into two modules, along with the normalized number of synapses in each block.
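    A generic spectral bisection illustrates the idea behind panel a. This sketch symmetrizes a synthetic directed graph and splits on the sign of the Fiedler vector, rather than using the directed-Laplacian construction referenced in the Methods.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 10

    # Synthetic directed graph with two planted modules.
    A = (rng.random((2 * n, 2 * n)) < 0.05).astype(float)   # sparse background
    A[:n, :n] = rng.random((n, n)) < 0.5                    # dense module 1
    A[n:, n:] = rng.random((n, n)) < 0.5                    # dense module 2
    np.fill_diagonal(A, 0)

    # Symmetrize, build the graph Laplacian, and bisect on the sign of the
    # Fiedler vector (eigenvector of the second-smallest eigenvalue).
    S = A + A.T
    L = np.diag(S.sum(axis=1)) - S
    evals, evecs = np.linalg.eigh(L)
    labels = (evecs[:, 1] > 0).astype(int)

    # Agreement with the planted partition, up to relabeling.
    truth = np.repeat([0, 1], n)
    agreement = max((labels == truth).mean(), (labels == 1 - truth).mean())
    ```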

    Extended Data Figure. 8: Composition of Axial module (modA).

    a. Connectivity matrix (same as in Figure 2) with the addition of small vSPNs in the periphery. (Right) Visualization of the large and small classes of vSPNs. Locations of small vSPNs that are part of the ‘center’ in modA are indicated by arrows above the rows.

    Extended Data Figure. 9: Overlap of modules with functional maps.

    a. Overlap of neurons in modA (orange) and modO (blue) with a maximum intensity projection of the functional map of neurons with eye position signals (background; see Methods). (Top) front views; (bottom) side views.

    b. Quantification of the overlap between modO neurons from EM and neurons with eye position signals from functional imaging. (Top) Number of eye position neurons as a function of jitter radius. Error bars are standard deviations over 10 iterations of jitter. (Bottom) Ratio of eye position neurons as a function of soma jitter. Overlap was counted using a patch of 6×6×8 µm around each modO neuron.

    c. Overlap of modO neurons registered to the Z-brain atlas with transgenic lines that label neurotransmitters in the same reference frame.

    Extended Data Figure. 10: Position sensitivity distributions for model variants

    a. Box plots of eye position sensitivity (k) values for all four cell types (VPNI, DO, ABDM, and ABDI neurons) as a function of eigencentrality rank. Each box on the x-axis represents the k values when the weights in the model are restricted to neurons below a certain eigencentrality rank. Population measures remain consistent even when the candidate VPNI population is shrunk by 50%.

    b. Histogram of the fraction of all neurons that are recurrent within the identified modules. (Left) Recurrent fraction within modOI and modOM. Box plots above show medians along with 25th and 75th percentiles. Red crosses are outliers.

    Supplementary Information

    Abducens neuron identification

    We identified abducens motor neurons (Fig. 1c, Extended Data Fig. 3) by their overlaps with abducens motor neurons (ABDM) labeled by the mnx transgenic line (Extended Data Fig. 3a). The ABDM axons exited R5 and R6 through the abducens (VIth) nerve (Fig. 1c, black box) as reported previously [1].

    Contraversive horizontal movements of the eye are driven by the medial rectus muscle, which is innervated by motor neurons in the oculomotor nucleus, which in turn are driven by internuclear neurons (ABDI) in the contralateral abducens complex (Fig. 1c). ABDI neurons were identified (Extended Data Fig. 3a) by their overlaps with two nuclei in the evx2 transgenic line [2] that labels glutamatergic interneurons. The ABDI neurons were just dorsal and caudal to the ABDM neurons, and their axons crossed the midline [3].

    Identification of DO neurons

    We identified a class of secondary vestibular neurons: Descending Octavolateral (DO) neurons (Fig. 1d, brown). We observed that DO cells received synapses from primary vestibular afferents. The latter were orphan axons in R4, identified as the vestibular branch of the vestibulocochlear nerve (VIIIn) by comparison with the isl-2 line, which labels the major cranial nerves (Extended Data Fig. 3b, blue axons).

    Identification of vSPNs

    Ventromedially located spinal projection neurons (vSPNs) were identified by their stereotypic locations [4] and by comparing the registered ssEM volume to spinal backfilled neurons that are part of the Zbrain resource (Extended Data Fig. 4). The vSPNs were divided into two groups, large and small vSPNs (Fig. 1d). Large vSPNs were the M, Mid2, MiM1, Mid3i and CaD neurons. Small vSPNs were RoV3, MiV1, and MiV2.

    Assigning sign to VPNI neurons

    We first attempted to determine the signs of the neural connections by comparison with molecular maps in the Z-Brain atlas. Integrator neurons in the dorsomedial stripe running from R4 to R7/8 overlap with a region of alx expression (Extended Data Fig. 9c, ZBB-alx-gal4) where most neurons are glutamatergic [5]. More lateral and caudal neurons overlap with markers for glycine and glutamate. Finally, neurons in the oculomotor module do not overlap with GABA expression (Extended Data Fig. 9c, ZBB-gad1b). For these reasons, we assigned neurons in the oculomotor module to be excitatory.

    Gamification of the zebrafish dataset and crowdsourced reconstructions

    In addition to in-house reconstructions by experts, zebrafish cells were made available to citizen scientist gamers through the existing retinal crowdsourcing game “Eyewire.”

    To test out this new dataset, only the most experienced players on Eyewire were allowed to participate. A small group of 4 highly experienced players received an invitation to test the new dataset. These players were given the title of “Mystic,” a new status created to enable gameplay that became the highest achievable rank in the game.

    Subsequent Mystic players had to apply for the status to unlock access to the zebrafish dataset within Eyewire. There was a high threshold of achievement required for a player to gain Mystic status. Each player was required to reach the previous highest status within the game, as well as complete 200+ cubes a month and maintain 95% accuracy when resolving cell errors.

    Once a player was approved by the lab, they were granted access to the zebrafish dataset, and given the option to have a live tutorial session with an Eyewire admin. There was also a pre-recorded tutorial video and written tutorial materials for players who could not attend the live session or who desired a review of the materials. Newly promoted players were also given a special badge, a new chat color, and access to a new chat room exclusive to Mystics. These rewards helped motivate players by showing their elevated status within the game as well as giving them a space to discuss issues specific to the zebrafish dataset.

    Cells were parceled out in batches to players. When a cell went live on Eyewire, it could be claimed by any Mystic. Each cell was reconstructed by only one player at a time. Once this player had finished their reconstruction, a second player would check over the cell for errors. After the second check, a final check was done by an Eyewire Admin. To mitigate confusion when claiming cells, a special GUI was built into the game that allowed players to see the status of each cell currently online. A cell could be at one of five statuses - “Need Player A,” “Player A,” “Need Player B,” “Player B,” and “Need Admin.” These statuses indicated whether a cell needed to be checked, or was in the process of being checked, and whether it needed a first, second, or Admin level check. At each stage the username of the player or Admin who had done a first, second, or final check was also visible. It was made mandatory that the first and second checks were done by two separate players.

    Collaboration and feedback were important parts of the checking process. If a player was unsure about an area of the cell they were working on, they could leave a note with a screenshot and detail of the issue, or create an alert that would notify an Admin. If a “Player B” or an Admin noticed a mistake made earlier in the pipeline, they could inform the player of the issue via a “Review” document, or through an in-game notification (player tagging).

    To differentiate the zebrafish cells from the regular dataset, each cell was labeled with a Mystic tag. This tag helped to identify the cells as separate from the e2198 retinal Eyewire dataset, and also populated them to a menu of active zebrafish cells within Eyewire.

    Players were rewarded for their work in the zebrafish dataset with points. For every edit they made to a cell they received 300 points. Points earned while playing zebrafish were added to a player’s overall points score for all gameplay done on Eyewire, and appeared in the Eyewire leaderboard.

    The following players in Eyewire were the admins who validated crowd sourced neuronal reconstructions:

    Hoodwinked, BenSilverman, Hightower, sarah.morejohn, SeldenK, sorek.m, zorek.m, twisterZ, hjones.jr, devonjones, amy, EinsteintheRapper, zkem, celiad, celiaz, sunreddy, peleaj43, sarahaw

    The following players helped reconstruct neurons on Eyewire, collectively called - Mystic players:

    r3, Atani, Nseraf, susi, eldendaf, Frosty, a5hm0r, hiigaran, kinryuu, Manni_Mammut, aesanta1, LotteryDiscountz, galarun, annkri, dragonturtle, LynneC, Cliodhna, jax123, KrzysztofKruk, Kfay, rinda, crazyman4865, JousterL, randompersonjci, Caffeine, Baraka, ggreminder, TR77, hewhoamareismyself, nagilooh, Oppen_heimer, Gruenewitwe, cognaso, twotwos, hawaiisunfun, danielag, lemongrab, zope, MysticM, kondor, frankenmsty, zfishman.

    Acknowledgements

    Merlin Moore, Kyle Wille, Ryan Willie, Selden Koolman, Sarah Morejohn, Ben Silverman, Doug Bland, Celia David, Sujata Reddy, Anthony Pelegrino, Sarah Williams, Dan Visser - Manual Annotation, Validation. Amy Robinson - EyeWire management. We thank Will Wong for help with image data transformation for Eyewire and Alex Bae for help with skeletonization. ERFA, MSG, AV and HSS acknowledge support from R01 NS104926, R01 EY027036. ERFA and MSG acknowledge support from R01 EY021581, Simons Foundation Global Brain Initiative. ADR received support from K99 EY027017. HSS acknowledges support from NIH/NCI UH2 CA203710, ARO W911NF-12-1-0594, and the Mathers Foundation, as well as assistance from Google, Amazon, and Intel. HSS is grateful for support from the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC0005. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.

    References

    1. Dorkenwald, S. et al. Binary and analog variation of synapses between cortical pyramidal neurons. bioRxiv (2019).
    2. Motta, A. et al. Dense connectomic reconstruction in layer 4 of the somatosensory cortex. Science 366 (2019).
    3. Wanner, A. A. & Friedrich, R. W. Whitening of odor representations by the wiring diagram of the olfactory bulb. Nat. Neurosci. 23, 433–442 (2020).
    4. Xu, C. S. et al. A connectome of the adult Drosophila central brain. bioRxiv (2020).
    5. Sharma, J., Angelucci, A. & Sur, M. Induction of visual orientation modules in auditory cortex. Nature 404, 841–847 (2000).
    6. Conway, B. R., Moeller, S. & Tsao, D. Y. Specialized color modules in macaque extrastriate cortex. Neuron 56, 560–573 (2007).
    7. Roh, J., Cheung, V. C. K. & Bizzi, E. Modules in the brain stem and spinal cord underlying motor behaviors. J. Neurophysiol. 106, 1363–1378 (2011).
    8. Hira, R., Terada, S.-I., Kondo, M. & Matsuzaki, M. Distinct functional modules for discrete and rhythmic forelimb movements in the mouse motor cortex. J. Neurosci. 35, 13311–13322 (2015).
    9. Gu, Y. et al. A map-like micro-organization of grid cells in the medial entorhinal cortex. Cell 175, 736–750.e30 (2018).
    10. Masullo, L. et al. Genetically defined functional modules for spatial orienting in the mouse superior colliculus. Curr. Biol. 29, 2892–2904.e8 (2019).
    11. Peters, A. & Sethares, C. The organization of double bouquet cells in monkey striate cortex. J. Neurocytol. 26, 779–797 (1997).
    12. Wang, Y., Brzozowska-Prechtl, A. & Karten, H. J. Laminar and columnar auditory cortex in avian brain. Proc. Natl. Acad. Sci. U. S. A. 107, 12676–12681 (2010).
    13. Passingham, R. E., Stephan, K. E. & Kötter, R. The anatomical basis of functional localization in the cortex. Nat. Rev. Neurosci. 3, 606–616 (2002).
    14. Seung, S. Connectome: How the Brain’s Wiring Makes Us Who We Are (HMH, 2012).
    15. Jarrell, T. A. et al. The connectome of a decision-making neural network. Science 337, 437–444 (2012).
    16. Pavlovic, D. M., Vértes, P. E., Bullmore, E. T., Schafer, W. R. & Nichols, T. E. Stochastic blockmodeling of the modules and core of the Caenorhabditis elegans connectome. PLoS One 9, e97584 (2014).
    17. Jonas, E. & Kording, K. Automatic discovery of cell types and microcircuitry from neural connectomics. Elife 4, e04250 (2015).
    18. Lee, W.-C. A. et al. Anatomy and function of an excitatory network in the visual cortex. Nature 532, 370–374 (2016).
    19. von der Malsburg, C. Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 14, 85–100 (1973).
    20. Wilson, H. R. & Cowan, J. D. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik 13, 55–80 (1973).
    21. Ben-Yishai, R., Bar-Or, R. L. & Sompolinsky, H. Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. U. S. A. 92, 3844–3848 (1995).
    22. Marr, D. A theory of cerebellar cortex. J. Physiol. 202, 437–470 (1969).
    23. Robinson, D. A. Eye movement control in primates. Science 161, 1219–1224 (1968).
    24. Tschopp, F. D., Reiser, M. B. & Turaga, S. C. A connectome based hexagonal lattice convolutional network model of the Drosophila visual system (2018). arXiv:1806.04793.
    25. Schoonheim, P. J., Arrenberg, A. B., Del Bene, F. & Baier, H. Optogenetic localization and genetic perturbation of saccade-generating neurons in zebrafish. J. Neurosci. 30, 7111–7120 (2010).
    26. Miri, A. et al. Spatial gradients and multidimensional dynamics in a neural integrator circuit. Nat. Neurosci. 14, 1150–1159 (2011).
    27. Daie, K., Goldman, M. S. & Aksay, E. R. F. Spatial patterns of persistent neural activity vary with the behavioral context of short-term memory. Neuron 85, 847–860 (2015).
    28. Lee, M. M., Arrenberg, A. B. & Aksay, E. R. F. A structural and genotypic scaffold underlying temporal integration. J. Neurosci. 35, 7903–7920 (2015).
    29. Vishwanathan, A. et al. Electron microscopic reconstruction of functionally identified cells in a neural integrator. Curr. Biol. 27, 2137–2147.e3 (2017).
    30. Lee, K. et al. Convolutional nets for reconstructing neural circuits from brain images acquired by serial section electron microscopy. Curr. Opin. Neurobiol. 55, 188–198 (2019).
    31. Lee, K., Zlateski, A., Ashwin, V. & Seung, H. S. Recursive training of 2D-3D convolutional networks for neuronal boundary prediction, 3573–3581 (2015).
    32. Kim, J. S. et al. Space-time wiring specificity supports direction selectivity in the retina. Nature 509, 331–336 (2014).
    33. Turner, N. L. et al. Synaptic partner assignment using attentional voxel association networks. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), 1–5 (2020).
    34. Pastor, A. M., Calvo, P. M., de la Cruz, R. R., Baker, R. & Straka, H. Discharge properties of morphologically identified vestibular neurons recorded during horizontal eye movements in the goldfish. J. Neurophysiol. 121, 1865–1878 (2019).
    35. Reichardt, J. & Bornholdt, S. Statistical mechanics of community detection. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 74, 016110 (2006).
    36. Blondel, V. D., Guillaume, J.-L., Lambiotte, R. & Lefebvre, E. Fast unfolding of communities in large networks. J. Stat. Mech. 2008, P10008 (2008).
    37. Rubinov, M. & Sporns, O. Complex network measures of brain connectivity: uses and interpretations. Neuroimage 52, 1059–1069 (2010).
    38. Chung, F. Laplacians and the Cheeger inequality for directed graphs. Ann. Comb. 9, 1–19 (2005).
    39. Peixoto, T. P. Hierarchical block structures and high-resolution model selection in large networks. Phys. Rev. X 4, 011047 (2014).
    40. Gahtan, E., Sankrithi, N., Campos, J. B. & O’Malley, D. M. Evidence for a widespread brain stem escape network in larval zebrafish. J. Neurophysiol. 87, 608–614 (2002).
    41. Orger, M. B., Kampff, A. R., Severi, K. E., Bollmann, J. H. & Engert, F. Control of visually guided behavior by distinct populations of spinal projection neurons. Nat. Neurosci. 11, 327–333 (2008).
    42. Huang, K.-H., Ahrens, M. B., Dunn, T. W. & Engert, F. Spinal projection neurons control turning behaviors in zebrafish. Curr. Biol. 23, 1566–1573 (2013).
    43. Bhattacharyya, K., McLean, D. L. & MacIver, M. A. Visual threat assessment and reticulospinal encoding of calibrated responses in larval zebrafish. Curr. Biol. 27, 2751–2762.e6 (2017).
    44. Ramirez, A. D. & Aksay, E. R. F. Pre-saccadic rise neurons in the hindbrain control the timing of spontaneous saccades (2018).
    45. Randlett, O. et al. Whole-brain activity mapping onto a zebrafish brain atlas. Nat. Methods 12, 1039 (2015).
    46. McFarland, J. L. & Fuchs, A. F. Discharge patterns in nucleus prepositus hypoglossi and adjacent medial vestibular nucleus during horizontal eye movement in behaving macaques. J. Neurophysiol. 68, 319–332 (1992).
    47. Chéron, G. & Godaux, E. Disabling of the oculomotor neural integrator by kainic acid injections in the prepositus-vestibular complex of the cat. J. Physiol. 394, 267–290 (1987).
    48. Aksay, E., Baker, R., Seung, H. S. & Tank, D. W. Anatomy and discharge properties of pre-motor neurons in the goldfish medulla that have eye-position signals during fixations. J. Neurophysiol. 84, 1035–1049 (2000).
    49. Kamath, B. Y. & Keller, E. L. A neurological integrator for the oculomotor control system. Math. Biosci. 30, 341–352 (1976).
    50. Cannon, S. C., Robinson, D. A. & Shamma, S. A proposed neural network for the integrator of the oculomotor system. Biol. Cybern. 49, 127–136 (1983).
    51. Seung, H. S. How the brain keeps the eyes still. Proc. Natl. Acad. Sci. U. S. A. 93, 13339–13344 (1996).
    52. Seung, H. S., Lee, D. D., Reis, B. Y. & Tank, D. W. Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron 26, 259–271 (2000).
    53. Fisher, D., Olasagasti, I., Tank, D. W., Aksay, E. R. F. & Goldman, M. S. A modeling framework for deriving the structural and functional architecture of a short-term memory microcircuit. Neuron 79, 987–1000 (2013).
    54. Reimann, M. W., King, J. G., Muller, E. B., Ramaswamy, S. & Markram, H. An algorithm to predict the connectome of neural microcircuits. Front. Comput. Neurosci. 9, 120 (2015).
    55. Markram, H. et al. Reconstruction and simulation of neocortical microcircuitry. Cell 163, 456–492 (2015).
    56. Major, G. & Tank, D. Persistent neural activity: prevalence and mechanisms. Curr. Opin. Neurobiol. 14, 675–684 (2004).
    57. Joshua, M. & Lisberger, S. G. A tale of two species: neural integration in zebrafish and monkeys. Neuroscience 296, 80–91 (2015).
    58. Cannon, S. C. & Robinson, D. A. Loss of the neural integrator of the oculomotor system from brain stem lesions in monkey. J. Neurophysiol. 57, 1383–1409 (1987).
    59. Aksay, E., Gamkrelidze, G., Seung, H. S., Baker, R. & Tank, D. W. In vivo intracellular recording and perturbation of persistent activity in a neural integrator. Nat. Neurosci. 4, 184–193 (2001).
    60. Blair, H. T. & Sharp, P. E. Anticipatory head direction signals in anterior thalamus: evidence for a thalamocortical circuit that integrates angular head motion to compute head direction. J. Neurosci. 15, 6260–6270 (1995).
    61. Samsonovich, A. & McNaughton, B. L. Path integration and cognitive mapping in a continuous attractor neural network model. J. Neurosci. 17, 5900–5920 (1997).
    62. Burak, Y. & Fiete, I. Do we understand the emergent dynamics of grid cell activity? J. Neurosci. 26, 9352–4; discussion 9354 (2006).
    63. Burak, Y. & Fiete, I. R. Accurate path integration in continuous attractor network models of grid cells. PLoS Comput. Biol. 5, e1000291 (2009).
    64. Yoon, K. et al. Specific evidence of low-dimensional continuous attractor dynamics in grid cells. Nat. Neurosci. 16, 1077–1084 (2013).
    65. Kim, S. S., Rouault, H., Druckmann, S. & Jayaraman, V. Ring attractor dynamics in the Drosophila central brain. Science 356, 849–853 (2017).
    66. Romo, R., Brody, C. D., Hernández, A. & Lemus, L. Neuronal correlates of parametric working memory in the prefrontal cortex. Nature 399, 470–473 (1999).
    67. Miller, P., Brody, C. D., Romo, R. & Wang, X.-J. A recurrent network model of somatosensory parametric working memory in the prefrontal cortex (2005).
    68. Inagaki, H. K., Fontolan, L., Romani, S. & Svoboda, K. Discrete attractor dynamics underlies persistent activity in the frontal cortex. Nature 566, 212–217 (2019).
    69. Brysch, C., Leyden, C. & Arrenberg, A. B. Functional architecture underlying binocular coordination of eye position and velocity in the larval zebrafish hindbrain. BMC Biol. 17, 110 (2019).
    70. Koulakov, A. A., Raghavachari, S., Kepecs, A. & Lisman, J. E. Model for a robust neural integrator. Nat. Neurosci. 5, 775–782 (2002).
    71. Berger, D. R., Seung, H. S. & Lichtman, J. W. VAST (Volume Annotation and Segmentation Tool): efficient manual and semi-automatic labeling of large 3D image stacks. Front. Neural Circuits 12, 88 (2018).
    72. Zung, J., Tartavull, I., Lee, K. & Seung, H. S. An error detection and correction framework for connectomics. Advances in Neural Information Processing Systems 30 (NIPS 2017) (2017). arXiv:1708.02599.
    73. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation (2015). arXiv:1505.04597.
    74. Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T. & Ronneberger, O. 3D U-Net: learning dense volumetric segmentation from sparse annotation (2016). arXiv:1606.06650.
    75. Ioffe, S. & Szegedy, C. Batch normalization: accelerating deep network training by reducing internal covariate shift (2015). arXiv:1502.03167.
    76. Jia, Y. et al. Caffe: convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, MM ’14, 675–678 (ACM, New York, NY, USA, 2014).
    77. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization (2014). arXiv:1412.6980.
    78. Lee, K., Zung, J., Li, P., Jain, V. & Seung, H. S. Superhuman accuracy on the SNEMI3D connectomics challenge. arXiv preprint arXiv:1706.00120 (2017).
    79. Wu, J., Silversmith, W. M. & Seung, H. S. Chunkflow: distributed hybrid cloud processing of large 3D images by convolutional nets. arXiv preprint arXiv:1904.10489 (2019).
    80. Paszke, A. et al. Automatic differentiation in PyTorch (2017).
    81. Zlateski, A. & Seung, H. S. Image segmentation by size-dependent single linkage clustering of a watershed basin graph (2015). arXiv:1505.00249.
    82. Sato, M., Bitter, I., Bender, M. A., Kaufman, A. E. & Nakajima, M. TEASAR: tree-structure extraction algorithm for accurate and robust skeletons. In Proceedings of the Eighth Pacific Conference on Computer Graphics and Applications, 281–449 (2000).
    83. Turner, N. et al. Synaptic partner assignment using attentional voxel association networks (2019). arXiv:1904.09947.
    84. Bogovic, J. A., Hanslovsky, P., Wong, A. & Saalfeld, S. Robust registration of calcium images by learned contrast synthesis. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), 1123–1126 (2016).
    85. Giovannucci, A. et al. CaImAn: an open source tool for scalable calcium imaging data analysis. Elife 8 (2019).
    86. Pnevmatikakis, E. A. et al. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron 89, 285–299 (2016).
    87. Aksay, E. et al. Functional dissection of circuitry in a neural integrator. Nat. Neurosci. 10, 494–504 (2007).
    88. Muggeo, V. M. R. Estimating regression models with unknown break-points. Stat. Med. 22, 3055–3071 (2003).
    89. Debowy, O. & Baker, R. Encoding of eye position in the goldfish horizontal oculomotor neural integrator. J. Neurophysiol. 105, 896–909 (2011).
      CrossRefPubMedWeb of ScienceGoogle Scholar
    90. [90].↵
      Sporns, O. & Betzel, R. F. Modular brain networks. Annu. Rev. Psychol. 67, 613–640 (2016).
      CrossRefPubMedGoogle Scholar
    91. [91].↵
      Gleich, D. Hierarchical directed spectral graph partitioning. Information Networks (2006).
      Google Scholar
    92. [92].↵
      Holland, P. W., Laskey, K. B. & Leinhardt, S. Stochastic blockmodels: First steps. Soc. Networks 5, 109–137 (1983).
      CrossRefWeb of ScienceGoogle Scholar
    93. [93].↵
      Karrer, B. & Newman, M. E. J. Stochastic blockmodels and community structure in networks. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 83, 016107 (2011).
      CrossRefPubMedGoogle Scholar
    94. [94].↵
      Bianconi, G. Entropy of network ensembles. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 79, 036114 (2009).
      PubMedGoogle Scholar

    References

    1. [1].↵
      Ashwin Vishwanathan, Kayvon Daie, Alexandro D Ramirez, Jeff W Lichtman, Emre R F Aksay, and H Sebastian Seung. Electron Microscopic Reconstruction of Functionally Identified Cells in a Neural Integrator. Curr. Biol., 27(14):2137–2147.e3, July 2017.
      CrossRefPubMedGoogle Scholar
    2. [2].↵
      José L Juárez-Morales, Claus J Schulte, Sofia A Pezoa, Grace K Vallejo, William C Hilinski, Samantha J England, Sarah de Jager, and Katharine E Lewis. Evx1 and evx2 specify excitatory neurotransmitter fates and suppress inhibitory fates through a pax2-independent mechanism. Neural Dev., 11:5, February 2016.
      Google Scholar
    3. [3].↵
      B Cabrera, B Torres, R Pásaro, A M Pastor, and J M Delgado-Garcia. A morphological study of abducens nucleus motoneurons and internuclear neurons in the goldfish (Carassius auratus). Brain Res. Bull., 28(1):137–144, January 1992.
      CrossRefPubMedWeb of ScienceGoogle Scholar
    4. [4].↵
      W K Metcalfe, B Mendelson, and C B Kimmel. Segmental homologies among reticulospinal neurons in the hindbrain of the zebrafish larva. J. Comp. Neurol., 251(2):147–159, September 1986.
      CrossRefPubMedWeb of ScienceGoogle Scholar
    5. [5].↵
      Yukiko Kimura, Yasushi Okamura, and Shin-Ichi Higashijima. alx, a zebrafish homolog of Chx10, marks ipsilateral descending excitatory interneurons that participate in the regulation of spinal locomotor circuits. J. Neurosci., 26(21):5684–5697, May 2006.
      Abstract/FREE Full TextGoogle Scholar
    Posted October 28, 2020.
    Subject Area

    • Neuroscience