I am concerned about the assumption, made by G-causality analysis, that the time series can be described by a linear (multivariate autoregressive) model. My initial exploration of the G-causality toolbox suggests that, while the analysis seems to recover directed edges in the neural network for my initial (small) toy models, the linear-model assumption may be invalid for my application. In particular, the toolbox complains that my residuals after the model fit are not white (by the Durbin-Watson test), which suggests that the model does not describe the neural traces well.
That said, it does seem to "work". Here, the analysis correctly reproduced three directed edges in a set of five neurons:
Actually, the Durbin-Watson test is not implemented in the toolbox; it always throws a warning... Nevertheless, I still want a sanity check on the validity of the MVAR fit to the neural traces... When I "turn on" a true DW test for residual whiteness, the neural data supposedly passes.
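As an independent sanity check, one could fit the MVAR model and run the Durbin-Watson test on the residuals outside the toolbox. Here is a minimal sketch in Python with statsmodels (the `traces` array, its file name, and the choice of 10 lags are my assumptions, not the toolbox's defaults):

```python
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.stats.stattools import durbin_watson

# Hypothetical input: `traces` is an (n_samples, n_neurons) array,
# one column per neural trace.
traces = np.load("traces.npy")  # placeholder for the actual data

# Fit an MVAR model with up to 10 lags.
results = VAR(traces).fit(maxlags=10)

# Durbin-Watson statistic per channel: values near 2 mean no
# first-order autocorrelation in the residuals (approximately white);
# values near 0 or 4 indicate positive/negative autocorrelation.
for i, d in enumerate(durbin_watson(results.resid)):
    print(f"neuron {i}: DW = {d:.2f}")
```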
But clearly, the problem here is that I'm using the toolbox superficially, without knowing its inner workings... Now, to dig in.
The first question to consider is how well the multivariate autoregressive (MVAR) model assumed by G-causality describes the data. As it turns out, the description is quite good, as long as one is careful about the sampling rate of the signal:
(I worry about overfitting here...)
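One way to quantify the fit, and to address the overfitting worry directly, is to train the MVAR model on one segment of the recording and score one-step-ahead predictions on a held-out segment. A sketch under the same assumptions as above (`traces`, order 10):

```python
import numpy as np
from statsmodels.tsa.api import VAR

traces = np.load("traces.npy")  # hypothetical (n_samples, n_neurons) array
p = 10                          # assumed model order (lags)

# Hold out the last 20% of the recording to guard against overfitting.
split = int(0.8 * len(traces))
train, test = traces[:split], traces[split:]

results = VAR(train).fit(p)

# One-step-ahead predictions on the held-out segment.
pred = np.array([
    results.forecast(test[t - p:t], steps=1)[0]
    for t in range(p, len(test))
])
actual = test[p:]

# Per-neuron R^2 of the one-step prediction: near 1 means the linear
# MVAR model captures most of the variance in the held-out traces.
ss_res = ((actual - pred) ** 2).sum(axis=0)
ss_tot = ((actual - actual.mean(axis=0)) ** 2).sum(axis=0)
print("held-out R^2 per neuron:", 1 - ss_res / ss_tot)
```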
Next, we want to know the maximum lag (the "model order") to use in the G-causality analysis. Based on the delays in the cross-correlations, we should use up to 10 ms of lag:
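Rather than reading the order off the cross-correlations alone, one could also let an information criterion choose it. A sketch, assuming a 1 kHz sampling rate so that 10 ms corresponds to 10 lags (the sampling rate, like the variable names, is an assumption):

```python
from statsmodels.tsa.api import VAR

fs = 1000.0                 # assumed sampling rate (Hz): 1 sample = 1 ms
max_lags = int(0.020 * fs)  # search a bit past the 10 ms suggested above

# `traces` as in the earlier sketches.
order = VAR(traces).select_order(maxlags=max_lags)
print(order.summary())      # AIC/BIC/HQIC/FPE per candidate order
print("BIC-selected order:", order.bic)
```

If the criterion lands near the ~10 lags suggested by the cross-correlation delays, that is reassuring.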
Some more inference instances:
It is quite interesting that the Granger analysis can infer directed edges! (Q: So far, I have been using exclusively excitatory synapses. Can the method also pick up inhibitory synapses?)
I find that the "FDR level" knob seems to understate the rate of false positives in the G-causality inference. I am not sure how the toolbox calculates the $p$-values associated with each Granger edge, which might be the cause of this discrepancy.
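One way to test that suspicion is to compute the edge $p$-values and the FDR correction by hand and compare against the toolbox output. A sketch using pairwise (not conditional) Granger tests and Benjamini-Hochberg correction, under the same assumed data layout:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests
from statsmodels.stats.multitest import multipletests

p = 10               # assumed lag order, as above
n = traces.shape[1]  # number of neurons; `traces` as in earlier sketches
pvals, edges = [], []

# Pairwise test: does neuron j's history improve prediction of neuron i
# beyond i's own history? (grangercausalitytests tests column 2 -> column 1.)
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        res = grangercausalitytests(traces[:, [i, j]], maxlag=p, verbose=False)
        pvals.append(res[p][0]["ssr_ftest"][1])  # F-test p-value at lag p
        edges.append((j, i))

# Benjamini-Hochberg FDR control over all candidate edges.
reject, _, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("edges surviving FDR:", [e for e, r in zip(edges, reject) if r])
```

If this hand-rolled correction disagrees with the toolbox's "FDR level" knob, the discrepancy lies in how the toolbox computes or corrects its $p$-values.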