Is maximum likelihood estimation with robust Huber-White standard errors and a scaled test statistic — which is asymptotically equal to the Yuan-Bentler test statistic — appropriate for data with a substantial number of outliers? And with respect to outliers, how does it perform compared to bootstrapping? (I'm using the MLR implementation in the lavaan package.)
1 Answer
It depends on what you mean by "outlier". Is it an idiosyncratic observation that does not belong to the population you intended to sample from (and to which you intend to generalize)? Or are you simply talking about a distribution with long tail(s), which is more likely than a normal distribution to yield random observations many SDs from the mean?
The "MLR" adjustments for nonnormality still assume the observations come from the same population (whatever its distribution, which might have long/heavy tails), and only the SEs and test statistics are adjusted, not the ML point estimates. If you are instead referring to idiosyncratic observations from a different distribution — which also bias the ML point estimates — then regularized SEM has been developed to obtain point estimates that are less sensitive to influential outliers. There is an R package:
https://cran.r-project.org/package=regsem
The associated literature is listed in its documentation.
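To make the distinction concrete, here is a minimal sketch of the two approaches discussed above, fit to lavaan's built-in HolzingerSwineford1939 dataset with a hypothetical one-factor model (the model specification is illustrative, not from the question):

```r
library(lavaan)

# Hypothetical one-factor model using three indicators from the
# built-in HolzingerSwineford1939 dataset
model <- ' f1 =~ x1 + x2 + x3 '

# MLR: ordinary ML point estimates, but Huber-White robust SEs
# and a Yuan-Bentler-type scaled test statistic
fit_mlr <- sem(model, data = HolzingerSwineford1939,
               estimator = "MLR")

# Alternative: same ML point estimates, with SEs (and optionally
# the test statistic) obtained by nonparametric bootstrapping
fit_boot <- sem(model, data = HolzingerSwineford1939,
                se = "bootstrap", bootstrap = 500)

summary(fit_mlr, fit.measures = TRUE)
summary(fit_boot)
```

Note that in both fits the point estimates are identical plain-ML estimates; only the SEs and tests differ. Neither approach downweights influential observations, which is what regularized or robust-estimation approaches such as regsem address.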