Extracting neuronal functional network dynamics via adaptive Granger causality analysis
Edited by Terrence J. Sejnowski, Salk Institute for Biological Studies, La Jolla, CA, and approved March 13, 2018 (received for review October 18, 2017)
Significance
Probing functional interactions among the nodes in a network is crucial to understanding how complex systems work. Existing methodologies widely assume static network structures or Gaussian statistics or do not take account of likely sparse interactions. They are therefore not well-suited to neuronal spiking data with rapid task-dependent dynamics, binary statistics, and sparse functional dependencies. We develop an inference framework for extracting functional network dynamics from neuronal data by integrating techniques from adaptive filtering, compressed sensing, point processes, and high-dimensional statistics. We derive efficient estimation algorithms and precise statistical inference procedures. We apply our proposed techniques to experimentally recorded neuronal data to probe the neuronal functional networks underlying attentive behavior. Our techniques provide substantial gains in computation, resolution, and statistical robustness.
Abstract
Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.
Converging lines of evidence in neuroscience, from neuronal network models and neurophysiology (1–8) to resting-state imaging (9–11), suggest that sophisticated brain function results from the emergence of distributed, dynamic, and sparse functional networks underlying the brain activity. These networks are highly dynamic and task-dependent, which allows the brain to rapidly adapt to abrupt changes in the environment, resulting in robust function. To exploit modern-day neuronal recordings to gain insight into the mechanisms of these complex dynamic functional networks, computationally efficient time series analysis techniques capable of simultaneously capturing their dynamicity, sparsity, and statistical characteristics are required.
Historically, various techniques such as cross-correlogram (12) and joint peristimulus time histogram (13) analyses have been used for inferring the statistical relationship between pairs of spike trains (12–14). Despite being widely used, these methods are unable to provide reliable estimates of the underlying directional patterns of causal interactions among an ensemble of interacting neurons, due to their intrinsic deficiencies in identifying directionality, low sensitivity to inhibitory interactions (15), and susceptibility to indirect interactions and latent common inputs.
Methods based on Granger causality (GC) analysis have shown promise in addressing these shortcomings and have thus been used for inferring functional interactions from neural data of different modalities (16–19). The rationale behind GC analysis is based on two principles: the temporal precedence of cause over effect and the unique information of cause about the effect. Given two time series, the second is said to G-cause the first if including the past of the second, in addition to the past of the first, significantly improves the prediction of the first.
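In its classical linear form, this prediction-improvement principle reduces to comparing the residual variances of two nested autoregressive fits. The sketch below is a minimal illustration of that idea only (the model order and the simulated coupling are our own choices, not taken from the paper):

```python
import numpy as np

def granger_bivariate(x, y, p=2):
    """Classical bivariate GC from y to x: log ratio of the residual sum of
    squares of the reduced (x-history only) and full (x- and y-history)
    least-squares AR(p) models. Nonnegative; larger means stronger GC."""
    n = len(x)
    target = x[p:]
    # Lagged regressors aligned with target: column k holds the k-step lag.
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    X_red = np.hstack([ones, lags_x])           # reduced model: own past only
    X_full = np.hstack([ones, lags_x, lags_y])  # full model: adds y's past

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        r = target - X @ beta
        return r @ r

    return np.log(rss(X_red) / rss(X_full))
```

Simulating x as driven by the past of y yields a large GC value in the y-to-x direction and a near-zero value in the reverse direction, consistent with the temporal-precedence principle.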
Numerous efforts have been dedicated to extending the bivariate GC measure to more general settings, such as the conditional form of GC in ref. 20 for the multivariate setting and several frequency-domain variants of GC (21–23). Despite significant advances in time series analysis using GC and its variants, the existing methods exhibit several drawbacks when applied to neuronal data.
First, most existing methods for causality inference provide static estimates of the causal influences associated with the entire data duration. Although suitable for the analysis of stationary neural data, they are not able to capture the rapid task-dependent changes in the underlying neural dynamics. To address this challenge, several time-varying measures of causality have been proposed in the literature based on Bayesian filtering and wavelets (24–30). Second, very few causal inference approaches take into account the sparsity of the functional networks (31–33). As an example, the authors of ref. 31 introduced a method for sparse identification of functional connectivity patterns from large-scale functional imaging data. Despite their success in inferring sparse connectivity patterns, these techniques assume static connectivity structures.
Third, most existing approaches are tailored for continuous-time data, such as electroencephalography (EEG) and local field potential recordings, which limits their utility when applied to binary neuronal spike recordings. These methods are generally based on multivariate autoregressive (MVAR) modeling, with a few nonparametric exceptions (30, 34). Some efforts have been made to adapt the MVAR modeling to neuronal spike trains (17, 35, 36). For instance, the binary spikes were preprocessed in refs. 17 and 35 via a smoothing kernel, which significantly distorts the temporal details of the neuronal dynamics. In addition, the frequency-domain GC analysis techniques implicitly assume that the data have rich oscillatory dynamics. Although this assumption is valid for steady-state EEG responses or resting-state recordings, spike trains recorded from cortical neuronal ensembles often do not exhibit any oscillatory behavior.
To address the third challenge, point process modeling and estimation have been successfully used in capturing the stochastic dynamics of binary neuronal spiking data (37, 38). This framework has been particularly used for inferring functional interactions in neuronal ensembles from spike recordings (32, 38–42). A maximum likelihood (ML)-based approach was introduced in ref. 38 based on a network likelihood formulation of the point process model; a model-based Bayesian approach based on point process likelihood models with sparse priors on the connectivity pattern was introduced in ref. 32. Among the more recent results, an information-theoretic measure of causality is proposed in ref. 41; a static GC measure based on point process likelihoods is proposed in ref. 40. However, a modeling and estimation framework that simultaneously takes into account the dynamicity and sparsity of the G-causal influences as well as the statistical properties of binary neuronal spiking data is lacking.
In this paper, we close this gap by developing a dynamic measure of GC by integrating the forgetting-factor mechanism of recursive least squares (RLS), point process modeling, and sparse estimation. To this end, we first exploit the prevalent parsimony of neurophysiological time constants manifested in neuronal spiking dynamics, such as those in sensory neurons with sharp tunings, as well as the potential low-dimensional structure of the underlying functional networks. These features can be captured by point process models in which the cross-history dependence of the neurons is described by sparse vectors. We then use an exponentially weighted log-likelihood framework (43) to recursively estimate the model parameters via sparse adaptive filtering, thereby defining a dynamic measure of GC, which we call the adaptive GC (AGC) measure.
The significance of sparsity in our approach is twofold. First, while the functional networks may not be truly sparse, they can often be parsimoniously described by a sparse set of significant functional links. Our models can indeed capture these significant links through sparse cross-history dependence. Second, sparsity enables stable estimation in the face of limited data. This is particularly important for adaptive estimation, where the goal is to reliably estimate a large number of cross-history parameters using short, effective observation windows.
We next develop a statistical inference framework for the proposed AGC measure by extending classical results on the analysis of deviance to our sparse dynamic point process setting. We provide simulation studies to evaluate the identification and tracking capabilities of our proposed methodology, which reveal remarkable performance gains compared with existing techniques, in both detecting the existing G-causal links and avoiding false alarms, while capturing the dynamics of the G-causal interactions in a neuronal ensemble. We finally apply our techniques to two experimentally recorded datasets: two-photon imaging data from the mouse auditory cortex under spontaneous activity and simultaneous single-unit recordings from the ferret primary auditory (A1) and prefrontal cortices (PFC) under a tone-detection task. Our analyses reveal the temporal details of the functional interactions between A1 and PFC under attentive behavior as well as among the auditory neurons under spontaneous activity at unprecedented spatiotemporal resolutions. In addition to their utility in analyzing neuronal data, our techniques have potential application in extracting functional network dynamics in other domains beyond neuroscience, such as social networks or gene regulatory networks, thanks to the plug-and-play nature of the algorithms used in our inference framework.
Theory and Algorithms
Preliminaries and Notations.
We use point process modeling to capture neuronal spiking statistics. A point process is a stochastic sequence of discrete events occurring at random points in continuous time. When adapted to the discrete time domain, point process models have proven to be successful in capturing the statistics of neuronal spiking (37, 44–46). Our analysis in this paper is based on discrete point process models, in which the observation interval T is discretized into small bins, each carrying a binary spike/no-spike observation.
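As a minimal illustration of such a discrete point process, the snippet below generates binary spike indicators by partitioning an observation interval into small bins and drawing at most one spike per bin with probability λΔ (the constant rate and bin width here are illustrative assumptions, not values from the paper):

```python
import numpy as np

def discretize_point_process(rate_hz, T=1.0, dt=0.001, seed=0):
    """Discrete-time point process: partition [0, T] into K = T/dt bins and
    draw at most one spike per bin, with spike probability lambda_t * dt.
    A constant rate is used purely for illustration."""
    rng = np.random.default_rng(seed)
    K = int(round(T / dt))
    lam = np.full(K, rate_hz)                    # lambda_t, here constant
    return (rng.random(K) < lam * dt).astype(int)
```

For small bins, the expected spike count over the interval is approximately rate × T, which is a quick sanity check on the discretization.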
Suppose that at time bin t the effective neural covariates, comprising the ensemble spiking history and any extrinsic stimuli, are collected in a vector; the conditional intensity function of each neuron is then modeled as a generalized linear model (GLM) of these covariates with an associated modulation parameter vector.
To capture the adaptivity manifested in the spiking dynamics, we use the forgetting factor mechanism of RLS algorithms (47) and combine the data log-likelihoods up to time k using an exponential weighting scheme (43), in which the log-likelihood of the observation at bin t contributes with weight β^(k−t), for a forgetting factor 0 < β ≤ 1.
The parameter vectors are then estimated recursively by maximizing this exponentially weighted log-likelihood under an ℓ1-norm sparsity constraint, using the ℓ1-regularized point process filter (ℓ1-PPF) of ref. 43.
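The appeal of the exponentially weighted log-likelihood is that it can be maintained with a one-line recursive update, L_k = β·L_{k−1} + log p(n_k | λ_k), which is what makes adaptive estimation cheap. A minimal sketch for Bernoulli-approximated point process observations follows (the rate, bin width, and β are illustrative assumptions):

```python
import numpy as np

def ewll(spikes, rates, dt=0.001, beta=0.999):
    """Exponentially weighted point-process log-likelihood, computed by the
    recursion L_k = beta * L_{k-1} + log p(n_k | lambda_k), so that past
    observations are geometrically discounted by the forgetting factor."""
    L = 0.0
    for n_k, lam in zip(spikes, rates):
        p = min(max(lam * dt, 1e-12), 1 - 1e-12)  # Bernoulli spike probability
        L = beta * L + (np.log(p) if n_k else np.log1p(-p))
    return L
```

With β close to 1 the recursion behaves like an ordinary log-likelihood over an effective window of roughly 1/(1−β) bins, while β < 1 lets the estimate track nonstationary dynamics.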
The AGC Measure.
Consider simultaneous spike recordings from an ensemble of C neurons indexed by c = 1, …, C.
An example of the neuronal ensemble model for
To assess the G-causal influences, a likelihood-based GC measure has been proposed in ref. 40 for point process models. Consider a target neuron and a putative causal neuron in the ensemble: the GC measure compares the log-likelihood of a full model, which includes the cross-history of the putative cause among the covariates of the target, with that of a reduced model, which excludes it.
Most existing formulations of GC leverage the MVAR modeling framework (20–29, 31, 35), which pertains to data with linear Gaussian statistics. The GC measure in Eq. 4, however, benefits from the likelihood-based inference methodology and covers a wide range of complex statistical models. Both the MVAR-based GC measure and its log-likelihood-based point process variant of ref. 40 assume that the underlying time series are stationary (i.e., the modulation parameters are all static). In many scenarios of interest, however, the underlying dynamics exhibit nonstationarity. An example of such a scenario is the task-dependent receptive field plasticity phenomenon (43, 48, 49). In addition, the ML estimation used by these techniques does not capture the underlying sparsity of the parameters and often exhibits poor performance when the data length is short or the number of neurons C is large.
To account for possible changes in the ensemble parameters and their underlying sparsity, we introduce the AGC measure, which is capable of capturing the dynamics of G-causal influences in the ensemble. To this end, we make two major modifications to the classical GC measure. First, we leverage the exponentially weighted log-likelihood formulation of Eq. 2 to induce adaptivity into the GC measure. Second, we exploit the possible sparsity of the ensemble parameters. Replacing the standard data log-likelihoods in Eq. 4 by their sparse adaptive counterparts given in Eqs. 2 and 3, we define the AGC measure from one neuron to another at time k as the difference between the exponentially weighted log-likelihoods of the full model and of the reduced model that omits the cross-history of the putative causal neuron.
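Conceptually, the AGC statistic at time k is twice the difference of the exponentially weighted log-likelihoods of the full and reduced models. The sketch below illustrates this idea with scikit-learn's batch L1-penalized logistic regression and exponential sample weights standing in for the paper's recursive ℓ1-PPF estimator; all simulation parameters and helper names are our own illustrative choices:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def agc_deviance(n_target, H_own, H_cross, beta=0.995):
    """Sketch of an AGC-style statistic: twice the difference of the
    exponentially weighted log-likelihoods of the full model (own history +
    cross-history of the putative cause) and the reduced model (own history
    only). L1-penalized batch refits stand in for recursive filtering."""
    k = len(n_target)
    w = beta ** np.arange(k - 1, -1, -1.0)   # forgetting-factor weights beta^(k-t)
    y = np.asarray(n_target)

    def weighted_ll(X):
        m = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
        m.fit(X, y, sample_weight=w)
        p = np.clip(m.predict_proba(X)[:, 1], 1e-12, 1 - 1e-12)
        return np.sum(w * (y * np.log(p) + (1 - y) * np.log1p(-p)))

    X_red = np.asarray(H_own)
    X_full = np.hstack([H_own, H_cross])     # full model adds the cross-history
    return 2.0 * (weighted_ll(X_full) - weighted_ll(X_red))
```

When the target neuron's spiking genuinely depends on the putative cause's history, this statistic is large; for an irrelevant covariate, the L1 penalty drives the corresponding coefficient toward zero and the statistic stays near zero.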
Statistical Inference of the AGC Measure.
Due to the stochastic and often biased nature of GC estimates, nonzero values of GC do not necessarily imply existence of G-causal influences. Hence, a statistical inference framework is required to assess the significance of the extracted G-causal interactions.
Consider two nested GLM models, referred to as the full and reduced models, with parameter vectors that differ only in the cross-history coefficients of the putative causal neuron, which are excluded from the reduced model. Testing for a GC link then corresponds to a hypothesis test on the deviance difference between the two models.
To perform the foregoing hypothesis test, the distributions of the deviance difference under the two hypotheses need to be characterized. Although these distributions are known for the classical GC measure (50–52), they cannot be readily extended to our AGC measure for two main reasons. First, the log-likelihoods are replaced by their exponentially weighted counterparts, which suppresses their dependence on the data length N due to the forgetting factor mechanism. Second, unlike ML estimates, which are asymptotically unbiased, the ℓ1-regularized adaptive estimates are biased due to the sparsity-inducing penalty.
To address these challenges, inspired by recent results in high-dimensional regression (53, 54), we define the adaptive de-biased deviance as the deviance difference computed from suitably bias-corrected versions of the ℓ1-regularized adaptive estimates (SI Appendix, section 3).
There are four major challenges in inferring a GC influence from the proposed AGC measure, which we address through the following four-stage inference procedure (Fig. 2):
Schematic depiction of the inference procedure for the AGC measure.
(i) Recursive computation of the AGC.
The computation of the adaptive de-biased deviance differences
(ii) Asymptotic distributional analysis of the AGC.
Let β denote the forgetting factor and M_d the number of cross-history parameters being tested. Then, asymptotically:

i) in the absence of a GC link from (c̃) to (c), D_{k,β}^{(c̃↦c)} → χ²(M_d), and

ii) in the presence of a GC link from (c̃) to (c), if the corresponding cross-history coefficients scale at least as O(√((1−β)/(1+β))), then D_{k,β}^{(c̃↦c)} → χ²(M_d, ν_k^{(c̃↦c)}),

where χ²(M_d) and χ²(M_d, ν_k^{(c̃↦c)}) denote the central and noncentral χ² distributions with M_d degrees of freedom and noncentrality parameter ν_k^{(c̃↦c)}, respectively.
Theorem S1 has two main implications. First, it establishes that our proposed adaptive de-biased deviance difference statistic admits a simple asymptotic distributional characterization. Given that these asymptotic distributions form the main ingredients of the forthcoming inference procedure, the second block in Fig. 2 serves to highlight the significance of adaptive de-biasing. As shown in SI Appendix, section 3, the bias of the ℓ1-regularized estimates, if left uncorrected, would perturb these asymptotic distributions and invalidate the subsequent inference.
Second, part ii characterizes the minimal scaling of the cross-history coefficients, in terms of the forgetting factor β, required for a GC link to be detectable, thereby quantifying the trade-off between adaptivity and detection sensitivity.
The output of the second block in Fig. 2 is the de-biased deviance differences corresponding to all pairs of neurons (shown in 2D as deviance difference maps). In the next two subsections we will show how to translate the deviance differences to statistically interpretable AGC links.
(iii) FDR control.
First, we use part i of the result of Theorem S1 to control the FDR in a multiple hypothesis testing framework. To this end, we use the Benjamini–Yekutieli (BY) procedure (55), which controls the FDR, i.e., the expected proportion of incorrectly rejected null hypotheses ("false discoveries") among all rejections, at a desired rate α.
To identify significant GC interactions while avoiding spurious false positives, we conduct multiple hypothesis tests on the set of C(C−1) possible GC links at each time instance.
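The BY step-up rule itself is simple to implement. A minimal sketch follows (in our setting, the p-values would come from applying the χ² survival function to the de-biased deviance differences; here they are supplied directly):

```python
import numpy as np

def benjamini_yekutieli(pvals, alpha=0.05):
    """BY step-up procedure: controls the FDR at level alpha under arbitrary
    dependence among the tests. Returns a boolean rejection mask aligned
    with the input p-values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    c_m = np.sum(1.0 / np.arange(1, m + 1))          # harmonic-sum correction
    thresh = alpha * np.arange(1, m + 1) / (m * c_m)  # per-rank thresholds
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])              # largest passing rank
        reject[order[:k + 1]] = True                  # reject all up to rank k
    return reject
```

The harmonic-sum factor c_m is what distinguishes BY from the Benjamini–Hochberg procedure and is what buys validity under the arbitrary dependence present among the ensemble of GC tests.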
(iv) Test strength characterization via noncentral χ2 filtering and smoothing algorithm.
Next, we use part ii of the result of Theorem S1 to assess the significance of the tests for the detected GC links. Under the alternative hypothesis, Theorem S1 implies that the de-biased deviance difference is asymptotically distributed as a noncentral χ², whose noncentrality parameter quantifies the strength of the corresponding GC link.
It remains to estimate the unknown noncentrality parameters from the observed deviance sequences; to this end, we develop a noncentral χ² filtering and smoothing algorithm (Algorithm S2) that exploits the entire observed data to obtain reliable estimates.
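As a crude stand-in for such an algorithm, note that a noncentral χ² variable with M_d degrees of freedom and noncentrality ν has mean M_d + ν, so a moment-based estimate can be obtained by exponentially smoothing the deviance sequence and subtracting the degrees of freedom. This sketch only conveys the idea; the paper's Algorithm S2 is a proper filtering and smoothing procedure:

```python
import numpy as np

def estimate_noncentrality(devs, df, eta=0.9):
    """Moment-based noncentrality tracking: since E[D] = df + nu for
    D ~ chi2(df, nu), exponentially smooth the deviance sequence and
    subtract the degrees of freedom, clipping at zero."""
    nu = np.zeros(len(devs))
    d_bar = float(df)                        # initialize at the null mean
    for k, d in enumerate(devs):
        d_bar = eta * d_bar + (1 - eta) * d  # exponential smoothing
        nu[k] = max(d_bar - df, 0.0)
    return nu
```

The smoothing constant plays the same role as the forgetting factor: it trades tracking speed against the variance of the noncentrality estimates.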
Algorithm 1: AGC Inference from Ensemble Neuronal Spiking

Input: Ensemble spike trains and algorithm parameters Θ.

1. for c, c̃ = 1, …, C, c̃ ≠ c do
2. Recursively estimate the sparse time-varying modulation parameter vectors {ω̂_k^{(c)}}_{k=1}^{K} and {ω̂_k^{(c∖c̃)}}_{k=1}^{K} corresponding to the full and reduced GLMs using ℓ1-PPF (43);
3. Recursively compute the adaptive de-biased deviance differences {D_{k,β}^{(c̃↦c)}}_{k=1}^{K} (Algorithm S3);
4. Perform noncentral χ² filtering and smoothing to estimate the noncentrality parameters {ν̂_k^{(c̃↦c)}}_{k=1}^{K} from {D_{k,β}^{(c̃↦c)}}_{k=1}^{K} (Algorithm S2);
5. for k = 1, …, K do
6. Apply the BY rejection rule to the ensemble set of GC tests to control the FDR at rate α (Algorithm S1);
7. Compute the AGC maps Φ̂_k ∈ [−1, 1]^{C×C} based on the J-statistics as (Φ̂_k)_{c,c̃} ≔ s_k(ω̂_k^{(c,c̃)}) J_k^{(c̃↦c)} (Algorithm S1).

Output: AGC maps {Φ̂_k}_{k=1}^{K}.
Summary of Advantages over Existing Work.
Algorithm 1 summarizes the overall AGC inference procedure. Choices of the parameters Θ involved in Algorithm 1 and its computational complexity are discussed in SI Appendix, sections 4–6. Before presenting applications to synthetic and real data, we summarize the advantages of our methodology over existing work:
i) Sparse dynamic GLM modeling provides more accurate estimates of the parameters (43), and hence more reliable detection of the GC links, as compared with existing static methods based on ML. We examine this aspect of our methodology in SI Appendix, section 8, using an illustrative simulation study;
ii) Relating the noncentrality parameters to the test strengths of the detected GC links is not used by existing techniques. In light of Theorem S1 and the need for estimating the noncentrality parameters, we devised a noncentral χ² filtering and smoothing algorithm to exploit the entire observed data for obtaining reliable estimates;

iii) Exponential weighting of the log-likelihoods admits construction of adaptive filters for estimating the network parameters in a recursive fashion, which significantly reduces the computational complexity of our inference procedure; and
iv) Characterization of AGC via the J-statistic as a normalized measure of hypothesis test strength for each detected GC link can be further used for graph-theoretic analysis of the inferred functional networks. By viewing the J-statistic as a surrogate for link strength, the AGC networks can be refined by thresholding the J-statistics, and access to the distribution of the J-statistics in a network allows one to perform further hypothesis tests regarding the network function (56).
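As an illustration of such graph-level post-processing, the sketch below thresholds a signed AGC map on the magnitude of the J-statistics and summarizes each neuron's outgoing influence by its out-degree (the threshold value and the map itself are hypothetical):

```python
import numpy as np

def refine_agc_network(phi, j_thresh=0.5):
    """Threshold a signed AGC map (entries in [-1, 1]; |entry| = J-statistic,
    sign = excitatory/inhibitory) into a directed adjacency matrix. With the
    convention phi[i, j] = link from neuron j to neuron i, column sums give
    each neuron's out-degree."""
    adj = np.abs(phi) >= j_thresh       # keep links with strong test support
    np.fill_diagonal(adj, False)        # discard self-links
    out_degree = adj.sum(axis=0)        # column j: outgoing links of neuron j
    return adj, out_degree
```

Quantities such as out-degree distributions of the thresholded network can then feed standard graph-theoretic hypothesis tests on network function.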
In the next section, we illustrate these advantages by comparing our methodology with two representative techniques for inferring functional network dynamics.
Applications
A Simulated Example.
We consider a simulated network of neurons whose G-causal interactions evolve through three distinct functional states (Fig. 3A).
Functional network dynamics inference from simulated spikes. (A) Three states of the functional network evolution, where neurons (vertices) are interacting through static (solid edges) or dynamic (dashed edges) causal links of inhibitory (open circles) or excitatory (filled circles) nature. (B) One realization of simulated spikes within windows of 1 s selected at representative time instances from each segment.
An observation period of
In Fig. 3C, the estimates of
The top row in Fig. 3E shows the ground truth G-causal maps plotted at nine time instances (three per segment). Each map Φ represents a C×C array of G-causal links, where entry (i, j) corresponds to the link from neuron j to neuron i.
We compare the AGC maps with two other methods: the static GC method of ref. 40 (third row) and the functional connectivity analysis of ref. 38 (final row). To adapt these methods to the time-varying setting, we used nonoverlapping window segments whose length is chosen to match the effective window length of the forgetting factor mechanism.
To quantify the foregoing performance comparison, we repeated the simulation for
Performance comparison of AGC inference with the methods of refs. 40 and 38 in terms of TDR (green) and FAR (red) for the three segments of the simulation period. Boxes represent the mean and
Application to Real Data: Spontaneous Activity in the Mouse Auditory Cortex.
In this section, we apply our proposed method to experimentally recorded neuronal population data from the mouse auditory cortex. We imaged the spontaneous activity in the auditory cortex of an awake mouse with in vivo two-photon calcium imaging (see SI Appendix, section 13 for details of the experimental procedures). Within an imaged field of view, the activity of
Adaptive G-causal interactions among ensemble of neurons in mouse auditory cortex under spontaneous activity. The time course of estimated GC changes for four selected GC links obtained through (A) noncentrality parameter
The detected G-causal maps are considerably sparse (maximum
This media file demonstrates the AGC map estimates for the entire data duration of ∼22 min (1,333 s) in color-coded 20×20 array format, where each entry Φ(i,j) corresponds to a possible G-causal link from neuron (j) to neuron (i) (four snapshots shown in Fig. 5C). The magnitude of each entry represents the estimated J-statistics for the corresponding link, and the sign of each entry reflects the nature of the link, where the excitatory, inhibitory, and no-GC link conditions are shown in red, blue, and green, respectively. The horizontal cursor at the bottom represents time in seconds.
This media file shows the network maps for the entire data duration of ∼22 min (1,333 s) overlaid on the slice (four snapshots shown in Fig. 5D). Neurons are shown with red circles and the selected 20 neurons are highlighted in yellow, scattered across the whole slice. The white directed arrows represent the detected G-causal links, where the width of each arrow at each time reflects its significance level (i.e., its corresponding J-statistics). The horizontal cursor at the bottom represents time in seconds.
Application to Real Data: Ferret Cortical Activity During Active Behavior.
Studies of the PFC have revealed its association with high-level executive functions such as decision making and attention (60–62). In particular, recent findings suggest that PFC is engaged in cognitive control of auditory behavior (62), through a top-down feedback to sensory cortical areas, resulting in enhancement of goal-directed behavior. It is conjectured in ref. 63 that the top-down feedback from PFC triggers adaptive changes in the receptive field properties of A1 neurons during active attentive behavior, to facilitate the processing of task-specific stimulus features. This conjecture has been examined in the context of visual processing, where top-down influences exerted on the visual cortical pathways have been shown to alter the functional properties of cortical neurons (64, 65).
To examine this conjecture at a single-unit level, we apply our proposed AGC inference method to single-unit spiking activities from an ensemble of neurons simultaneously recorded from two cortical regions of A1 and PFC in ferrets during a series of passive listening and active auditory task conditions. In this application, we sought to reveal the significant task-specific changes in the G-causal interactions within or between PFC and A1 regions at the single-unit level during active behavior. We used the spike data recordings from a large set of experiments (more than 35) conducted on three ferrets for GC inference analysis (data from the Neural Systems Laboratory, Institute for Systems Research, University of Maryland, College Park, MD). During each trial in an auditory discrimination task, the ferrets were presented with a random sequence of broadband noise-like acoustic stimuli known as temporally orthogonal ripple combinations (TORCs) along with randomized presentations of the target tone. Ferrets were trained to attend to the spectrotemporal features of the presented sounds and discriminate the tonal target from the background reference stimuli (see ref. 63 for details of the experimental procedures). Due to their broadband noise-like features, the TORCs and the corresponding neural responses admit efficient estimation of the spectrotemporal tuning of the primary auditory neurons via sparse regression (43, 66).
Fig. 6 shows our results on a selected experiment in which an ensemble of single units was simultaneously recorded from A1 and PFC.
Dynamic inference of G-causal influences between single units in ferret PFC and A1 during an auditory task. (A) Spike trains corresponding to the first repetition of each block. (B) Time course of significant GC changes through J-statistics for selected single units (FDR controlled at a fixed rate).
Three major task-specific dynamic effects can be inferred from Fig. 6: (i) a significant bottom-up GC link from the target-tuned A1 unit during active behavior, (ii) a persistent task-relevant top-down GC link, and (iii) task-relevant plasticity and rapid tuning changes within A1. First, unit 4 in A1 shows strong frequency selectivity to the target around
This media file exhibits the STRF estimates of all of the five A1 units for the entire experiment duration (three snapshots shown in Fig. 6D). The excitatory and inhibitory effects are shown with red and blue colors, respectively. Each STRF panel is a 50×50 array, with I=50 lag bins uniformly spanning time lags in the range of [0,50] ms on the horizontal axis, and J=50 frequency bins spanning the frequencies in the range of [500,16,000] Hz in logarithmic scale on the vertical axis. The horizontal cursor at the bottom represents time in seconds.
The second effect appears as a strong top-down GC link (green link,
In addition to these interregion GC links, multiple instances of GC links within A1 (e.g.,
To validate our results in the absence of ground truth, we assess their reliability using surrogate data obtained by random shuffling and network subsampling in SI Appendix, section 11 and verify the robustness of the inferred task-dependent functional network dynamics against the aforementioned adversarial perturbations. In conclusion, our methodology enabled the extraction of the top-down and bottom-up network-level dynamics that were previously conjectured in ref. 63 to be involved in active attentive behavior, at the neuronal scale with high spatiotemporal resolution. In SI Appendix, section 12 we present our analysis of another experiment, which further corroborates our findings.
Discussion and Concluding Remarks
Summary and Extensions of Our Contributions.
Most widely adopted time series analysis techniques for quantifying functional causal relations among the nodes in a network assume static functional structures or otherwise enforce dynamics using sliding windows. While they have proven successful in analyzing stationary Gaussian time series, when applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics they hinder a precise statistical characterization of the sparse dynamic neuronal functional networks underlying adaptive behavior.
To address these shortcomings, we developed a dynamic inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We proposed a measure of time-varying GC, namely AGC, and demonstrated its utility through theoretical analysis, algorithm development, and application to synthetic and real data. Our analysis of the mouse auditory cortical data revealed unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatial resolution. Application of our techniques to simultaneous recordings from the ferret auditory and prefrontal cortical areas suggested evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.
The plug-and-play nature of the algorithms used in our framework enables it to be generalized for application to various other domains beyond neuroscience, such as the analysis of social networks or gene regulatory networks. As an example, the GLM models can be generalized to account for m-ary data, the forgetting factor mechanism for inducing adaptivity can be extended to state-space models governing the coefficient dynamics, and the FDR correction can be replaced by more recent techniques such as knockoff filters (67). To ease reproducibility and aid the adoption of our method, we have archived a MATLAB implementation on GitHub (https://github.com/Arsha89/AGC_Analysis).
Limitations of Our Approach.
In closing, it is worth discussing two potential limitations of our proposed paradigm.
Confounding effects due to network subsampling.
A common criticism of statistical causality measures, such as the GC, directed information, or transfer entropy, is susceptibility to latent confounding causal effects arising from network subsampling. In practice, these methods are typically applied to a small subnetwork of the circuits involved in neuronal processing. Given that each neuron may receive thousands of synaptic inputs, lack of access to a large number of latent confounding inputs can affect the validity of the causal inference results obtained by these methods.
We have evaluated the robustness of our method against such confounding effects using comprehensive numerical studies in SI Appendix, section 9. These studies involve scenarios with deterministic and stochastic latent common inputs as well as confounding effects due to network subsampling and suggest that our techniques indeed exhibit a degree of immunity to such confounding effects. We argue that this performance is due to explicit modeling of the dynamics of the Granger causal effects in the GLM framework, invoking the sparsity hypothesis, and using sharp statistical inference procedures (see SI Appendix, section 9 for further discussion).
Biological interpretation.
The functional network characterization provided by our framework should not be readily interpreted as direct or synaptic connections that result in causal effects. Our analysis results in a sparse set of GC interactions between neurons that can appear and vanish over time in a task-specific fashion. While it is possible that these connections reflect synaptic contacts between neurons, as changes in synaptic strengths can be induced rapidly within minutes (68), the observed GC dynamics could also be due to other underlying mechanisms such as desynchronization of inputs, altered shunting, or dendritic filtering. Thus, these plasticity effects remain to be tested with ground truth experiments. An alternative and inclusive view is that these links reflect a measure of information transferred from one neuron to another.
The relatively rapid switching of these links, however, must be interpreted with caution: While some of the rapid fluctuations are due to the use of the FDR control procedure (as discussed in the Applications), sudden emergence or disappearance of a link does not necessarily imply sudden changes in the causal structure or information transfer in the network. A sudden disappearance of a steady link most likely reflects the fact that given the amount of currently available data, there is not enough evidence to maintain the existence of the link at the group level with the desired statistical confidence; similarly, a sudden emergence of a link most likely implies that enough evidence has just been accumulated to justify its presence with statistical confidence. The gradual effects of these interactions are indeed reflected in the dynamics of the noncentrality parameters estimated by our methods.
As demonstrated by the applications of our inference procedures, our framework provides a robust characterization of the dynamic statistical dependencies in the network in the sense of Granger at high temporal resolution. This characterization can be readily used at a phenomenological level to describe the dynamic network-level functional correlates of behavior, as demonstrated by our real data applications. More importantly, this characterization can serve as a guideline in forming hypotheses for further testing of the direct causal effects using experimental procedures such as lesion studies, microstimulation, or optogenetics in animal models.
Acknowledgments
This work was supported in part by National Science Foundation Grant 1552946 and National Institutes of Health Grants R01-DC009607 and U01-NS090569.
Footnotes
- ↵1To whom correspondence should be addressed. Email: behtash@umd.edu.
Author contributions: J.B.F., S.A.S., P.O.K., and B.B. designed research; A.S., S.M., J.L., J.B.F., S.A.S., P.O.K., and B.B. performed research; A.S. and B.B. analyzed data; and A.S. and B.B. wrote the paper.
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
Data deposition: The experimental data used in this paper have been deposited on the Digital Repository at the University of Maryland at hdl.handle.net/1903/20546, and the MATLAB implementation of the algorithms is archived on GitHub at https://github.com/Arsha89/AGC_Analysis.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1718154115/-/DCSupplemental.
Published under the PNAS license.
References
1. Olshausen BA, Field DJ
2. Olshausen BA, Field DJ
3. Sporns O, Zwi JD
4. Song S, Sjöström PJ, Reigl M, Nelson S, Chklovskii DB
5. Rehn M, Sommer FT
6. Druckmann S, Hu T, Chklovskii DB (in Bartlett P, Pereira F, Burges CJC, Bottou L, Weinberger KZ, eds)
7. Ganguli S, Sompolinsky H
8. Babadi B, Sompolinsky H
9. Greicius MD, Krasnow B, Reiss AL, Menon V
10. Damoiseaux J, et al.
11. Hagmann P, et al.
12. Perkel DH, Gerstein GL, Moore GP
13. Gerstein GL, Perkel DH
14. Brody CD
15. Aertsen AM, Gerstein GL
16. Bernasconi C, König P
17. Kamiński M, Ding M, Truccolo WA, Bressler SL
18. Goebel R, Roebroeck A, Kim DS, Formisano E
19. Brovelli A, et al.
20. Geweke JF
21. Geweke J
22. Kaminski M, Blinowska KJ
23. Baccalá LA, Sameshima K
24. Sommerlade L, et al.
25. Milde T, et al.
26. Havlicek M, Jan J, Brazdil M, Calhoun VD
27. Möller E, Schack B, Arnold M, Witte H
28. Hesse W, Möller E, Arnold M, Schack B
29. Astolfi L, et al.
30. Sato JR, et al.
31. Valdés-Sosa PA, et al.
32. Stevenson IH, et al.
33. Zhou Z, et al.
34. Dhamala M, Rangarajan G, Ding M
35. Sameshima K, Baccalá LA
36. Krumin M, Shoham S
37. Truccolo W, Eden UT, Fellows MR, Donoghue JP, Brown EN
38. Okatan M, Wilson MA, Brown EN
39. Nedungadi AG, Rangarajan G, Jain N, Ding M
40. Kim S, Putrino D, Ghosh S, Brown EN
41. Quinn CJ, Coleman TP, Kiyavash N, Hatsopoulos NG
42. Kim S, Quinn CJ, Kiyavash N, Coleman TP
43. Sheikhattar A, Fritz JB, Shamma SA, Babadi B
44. Smith A, Brown EN
45. Paninski L
46. Paninski L, Pillow J, Lewi J
47. Haykin SS
48. Brown EN, Nguyen DP, Frank LM, Wilson MA, Solo V
49. Fritz J, Shamma S, Elhilali M, Klein D
50. Wilks SS
51. Davidson RR, Lever WE
52. Peers H
53. Van de Geer S, Bühlmann P, Ritov Y, Dezeure R
54. Javanmard A, Montanari A
55. Benjamini Y, Yekutieli D
56. Francis et al.
57. Pnevmatikakis EA, et al.
58. Hromádka T, Zador AM
59. Watkins PV, Kao JP, Kanold PO
60. Miller EK, Cohen JD
61. Gold JI, Shadlen MN
62. Buschman TJ, Miller EK
63. Fritz JB, David SV, Radtke-Schuller S, Yin P, Shamma SA
64. Gilbert CD, Li W
65. Piëch V, Li W, Reeke GN, Gilbert CD
66. Klein DJ, Simon JZ, Depireux DA, Shamma SA
67. Barber RF, Candès EJ
68. Cooke SF, Bear MF

Article Classifications
- Physical Sciences
- Applied Mathematics
- Biological Sciences
- Neuroscience