Replies: 1 comment
-
Thanks for your question. I have converted it to a discussion topic, since it is not an issue and is more appropriate in the forum. In short: gllvm does not do anything for covariates that are collinear in the model from your example. Only for constrained and concurrent ordinations are covariates dropped if the design matrix is not of full column rank. Having said that, a lot of thought has gone into making the estimation process as robust as possible, so it is nice to know it performs well even for collinear predictors.
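For reference, the full-column-rank check described here can be sketched with base R's pivoted QR (a generic illustration, not gllvm's actual code):

```r
## Generic sketch (not gllvm's actual code): use base R's pivoted QR
## to detect a rank-deficient design matrix and drop redundant columns.
X <- cbind(x1 = rnorm(100), x2 = rnorm(100))
X <- cbind(X, x3 = X[, "x1"] + X[, "x2"])   # x3 is an exact linear combination

qrX <- qr(X)
qrX$rank < ncol(X)                           # TRUE: not of full column rank

## keep only the pivoted columns that span the column space
X_reduced <- X[, qrX$pivot[seq_len(qrX$rank)], drop = FALSE]
```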
-
Hi, this is not really an issue but more about understanding the internals of gllvm. Does the package do something like a QR decomposition by default to make optimisation more efficient? I couldn't find it easily in the codebase, but it seems to be quite robust in the face of strong multicollinearity. For example, a fit like the following on simulated data without multicollinearity gave pretty good estimates as a benchmark.
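Roughly along these lines (a simplified sketch, not my exact script; the Poisson family, the dimensions, the true coefficients `beta`, and the use of mvtnorm are all just placeholder choices):

```r
## Simplified sketch of the benchmark: simulate multivariate counts
## with known coefficients, fit gllvm, compare the estimates.
library(gllvm)
library(mvtnorm)

set.seed(1)
n <- 200                                       # sites
m <- 4                                         # predictors
p <- 10                                        # species

S <- diag(m)                                   # identity: no multicollinearity
X <- rmvnorm(n, sigma = S)
colnames(X) <- paste0("x", 1:m)

beta <- matrix(rnorm(p * m, sd = 0.5), p, m)   # true species-specific coefficients
y <- matrix(rpois(n * p, exp(X %*% t(beta))), n, p)

fit <- gllvm(y, X = as.data.frame(X), formula = ~ x1 + x2 + x3 + x4,
             family = poisson(), num.lv = 2)
coef(fit)$Xcoef                                # compare against beta
```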
When I simulated predictors that are very strongly correlated (by changing to `S <- matrix(rep(0.9, m * m), m, m)`), it doesn't look as bad as I would've thought. Admittedly the estimates start to drift from the true values (and display variance inflation), but still... not bad.
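For what it's worth, the degree of multicollinearity can be quantified with standard diagnostics (nothing gllvm-specific):

```r
## Standard diagnostics for the degree of multicollinearity in X:
## condition number and variance inflation factors.
kappa(scale(X), exact = TRUE)   # condition number; large values signal trouble

## VIF per predictor: 1 / (1 - R^2) from regressing it on the others
vif <- sapply(seq_len(ncol(X)), function(j) {
  r2 <- summary(lm(X[, j] ~ X[, -j]))$r.squared
  1 / (1 - r2)
})
vif
```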
I'm writing up a short supplementary section for a project to say something about when these sorts of models break with increasing multicollinearity, so I'm curious whether gllvm does something clever internally. Thanks!