“*What is very apparent from this study is that there is no single ‘best’ method of dimension reduction for ABC.*”

**M**ichael Blum, Matt Nunes, Dennis Prangle and Scott Sisson just posted on arXiv a rather long review of dimension reduction methods in ABC, along with a comparison on three specific models. Given that the choice of the vector of summary statistics is presumably the single most important step in an ABC algorithm, and that selecting too large a vector is bound to fall victim to the curse of dimensionality, this is a fairly relevant review! Therein, the authors compare regression adjustments *à la* Beaumont et al. (2002), subset selection methods, as in Joyce and Marjoram (2008), and projection techniques, as in Fearnhead and Prangle (2012). They add to this impressive battery of methods the potential use of AIC and BIC. *(Last year after ABC in London I reported here on the use of the alternative DIC by Francois and Laval, but the paper is not in the bibliography; I wonder why.)* An argument (page 22) for using AIC/BIC is that either provides indirect information about the approximation of *p(θ|y)* by *p(θ|s)*; this does not seem obvious to me.
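For readers less familiar with the regression adjustment of Beaumont et al. (2002), here is a minimal toy sketch (mine, not the authors’; all settings are illustrative): a normal model with the sample mean as single summary statistic, where the accepted draws from a rejection step are linearly corrected back towards the observed summary.

```python
import random
import statistics

random.seed(1)
n_obs = 50
y_obs = [random.gauss(1.0, 1.0) for _ in range(n_obs)]   # "observed" data
s_obs = statistics.mean(y_obs)                           # observed summary

# Rejection step: draw theta from the N(0,1) prior, simulate data,
# keep the 500 draws whose summary lands closest to s_obs.
sims = []
for _ in range(5000):
    theta = random.gauss(0.0, 1.0)
    s = statistics.mean(random.gauss(theta, 1.0) for _ in range(n_obs))
    sims.append((abs(s - s_obs), theta, s))
sims.sort()
accepted = sims[:500]

# Regression adjustment: regress theta on s among the accepted draws,
# then project each theta back to s_obs: theta* = theta - beta (s - s_obs).
thetas = [t for _, t, _ in accepted]
ss = [s for _, _, s in accepted]
mt, ms = statistics.mean(thetas), statistics.mean(ss)
beta = sum((s - ms) * (t - mt) for s, t in zip(ss, thetas)) \
       / sum((s - ms) ** 2 for s in ss)
adjusted = [t - beta * (s - s_obs) for t, s in zip(thetas, ss)]

print(statistics.mean(adjusted))   # ABC estimate of the posterior mean
```

The adjustment removes the part of the posterior spread explained by the mismatch between simulated and observed summaries, which is why the adjusted sample is never more dispersed than the raw accepted sample.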

**T**he paper also suggests a further regularisation of Beaumont et al. (2002) by ridge regression, although an *L*_{1} penalty *à la* Lasso would be more appropriate in my opinion for removing extraneous summary statistics. (I must acknowledge never being a big fan of ridge regression, especially in the *ad hoc* version *à la* Hoerl and Kennard, i.e. in a non-decision-theoretic approach where the hyperparameter *λ* is derived from the data by cross-validation, since it then sounds like a poor man’s Bayes/Stein estimate, just as BIC is a first-order approximation to regular Bayes factors… *Why pay for the copy when you can afford the original?!*) Unsurprisingly, ridge regression does better than plain regression in the comparison experiment when there are many almost collinear summary statistics, but an alternative conclusion could be that regression analysis is not that appropriate with many summary statistics. Indeed, summary statistics are not quantities of interest but data-summarising tools towards a better approximation of the posterior at a given computational cost… (I do not get the final comment, page 36, about the relevance of summary statistics for MCMC or SMC algorithms: the criterion should be the best approximation of *p(θ|y)*, which does not depend on the type of algorithm.)
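To make the ridge-versus-Lasso point concrete, here is a toy sketch (my own, with illustrative settings, not the paper’s experiment): only the first summary statistic is informative and the second is an almost collinear copy of it. A hand-rolled coordinate-descent Lasso zeroes out the extraneous summaries, while ridge regression keeps every one of them with a small non-zero weight.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
S = rng.normal(size=(n, p))                       # candidate summary statistics
S[:, 1] = S[:, 0] + 0.01 * rng.normal(size=n)     # near-collinear duplicate
theta = 2.0 * S[:, 0] + 0.1 * rng.normal(size=n)  # only s_0 is informative

def soft(x, lam):
    """Soft-thresholding operator used by the Lasso update."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def lasso(S, y, lam, iters=200):
    """Cyclic coordinate descent for (1/2n)||y - S b||^2 + lam ||b||_1."""
    m = S.shape[0]
    b = np.zeros(S.shape[1])
    col_ss = (S ** 2).mean(axis=0)
    for _ in range(iters):
        for j in range(S.shape[1]):
            r = y - S @ b + S[:, j] * b[j]        # partial residual
            b[j] = soft((S[:, j] @ r) / m, lam) / col_ss[j]
    return b

def ridge(S, y, lam):
    """Closed-form ridge estimate (S'S + lam I)^{-1} S'y."""
    return np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ y)

b_lasso = lasso(S, theta, lam=0.05)
b_ridge = ridge(S, theta, lam=1.0)
print((b_lasso != 0).sum())            # Lasso: sparse, extraneous summaries dropped
print((np.abs(b_ridge) > 1e-6).sum())  # ridge: all p coefficients survive, shrunken
```

The design choice here is the whole point: the *L*_{1} penalty performs selection (exact zeros) rather than mere shrinkage, so the uninformative summaries are discarded instead of being carried along with small weights.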

**I** find it quite exciting to see the development of a new range of ABC papers like this review dedicated to a better derivation of summary statistics in ABC, each with different perspectives and desiderata, as it will help us understand where ABC works and where it fails, and how we could get beyond ABC…
