Bayesian analysis is used here to answer the question: "when looking at resampling results, are the differences between models 'real'?" To answer this, a model can be created where the outcome is the resampling statistics (e.g., accuracy or RMSE). These values are explained by the model types. In doing this, we can get parameter estimates for each model's effect on performance and make statistical (and practical) comparisons between models.
perf_mod(object, ...)

# S3 method for rset
perf_mod(object, transform = no_trans, hetero_var = FALSE, formula = NULL, ...)

# S3 method for vfold_cv
perf_mod(object, transform = no_trans, hetero_var = FALSE, ...)

# S3 method for resamples
perf_mod(
  object,
  transform = no_trans,
  hetero_var = FALSE,
  metric = object$metrics,
  ...
)

# S3 method for data.frame
perf_mod(object, transform = no_trans, hetero_var = FALSE, formula = NULL, ...)
object: A data frame of matched resampling statistics or a resampling object (e.g., an rset or vfold_cv object, or a caret resamples object), as shown in the Usage section above.
...: Additional arguments to pass to the underlying rstanarm model fit (e.g., rstanarm::stan_glmer()), such as the prior or seed.
transform: A named list of transformation and inverse transformation functions. See no_trans, the default, for the required structure.
hetero_var: A logical; if TRUE, a different variance is estimated for each model group (see Details below).
formula: An optional model formula to use for the Bayesian hierarchical model (see Details below).
metric: A single character value for the statistic from the resamples object that should be analyzed.
An object of class perf_mod.
These functions can be used to process and analyze matched resampling statistics from different models using a Bayesian generalized linear model with effects for the model and the resamples.
By default, a generalized linear model with Gaussian error and an identity link is fit to the data and has terms for the predictive model grouping variable. In this way, the performance metrics can be compared between models.
Additionally, random effect terms are used. For most resampling methods (except repeated V-fold cross-validation), a simple random intercept model is used with an exchangeable (i.e., compound-symmetric) variance structure. In the case of repeated cross-validation, two random intercept terms are used: one for the repeat and another for the fold within repeat. These also have exchangeable correlation structures.
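A minimal sketch of the default workflow (the data frame accuracy_results and its column names are illustrative; the data frame method assumes one resample-label column, here id, plus one numeric column of matched statistics per model):

```r
library(tidyposterior)

# accuracy_results is assumed to look like:
#   id      glm   cart
#   Fold01  0.81  0.78
#   Fold02  0.84  0.80
#   ...

# Fit the default Bayesian hierarchical model with Gaussian errors,
# a fixed effect for the model and a random intercept per resample:
bayes_mod <- perf_mod(accuracy_results, seed = 4344)

# Summarize the posterior distributions of each model's mean performance:
tidy(bayes_mod)
```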
The above model specification assumes that the variance in the performance
metrics is the same across models. However, this is unlikely to be true in
some cases. For example, for simple binomial accuracy, it is well known that the
variance is highest when the accuracy is near 50 percent. When the argument
hetero_var = TRUE, the variance structure uses random intercepts for each
model term. This may produce more realistic posterior distributions but may
take more time to converge.
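Continuing the sketch above (accuracy_results is the same illustrative data frame), per-model variances are requested with a single flag:

```r
# Estimate a separate variance for each model group; this relaxes the
# equal-variance assumption at the cost of longer sampling times:
bayes_mod_hetero <- perf_mod(accuracy_results, hetero_var = TRUE, seed = 4344)
```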
Examples of the default formulas are:
# One ID field and common variance:
statistic ~ model + (model | id)

# One ID field and heterogeneous variance:
statistic ~ model + (model + 0 | id)

# Repeated CV (id = repeat, id2 = fold within repeat)
# with a common variance:
statistic ~ model + (model | id2/id)

# Repeated CV (id = repeat, id2 = fold within repeat)
# with a heterogeneous variance:
statistic ~ model + (model + 0 | id2/id)

# Default for unknown resampling method and
# multiple ID fields:
statistic ~ model + (model | idN/../id)
Custom formulas should use statistic as the outcome variable and model as the factor variable with the model names.
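For example, a hypothetical custom formula that drops the random slope for model and keeps only a random intercept per resample could be passed like this (accuracy_results as in the earlier sketch):

```r
bayes_mod_custom <- perf_mod(
  accuracy_results,
  formula = statistic ~ model + (1 | id),
  seed = 4344
)
```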
Also, as shown in the package vignettes, the Gaussian assumption may be
unrealistic. In this case, there are at least two approaches that can be
used. First, the outcome statistics can be transformed prior to fitting the
model. For example, for accuracy, the logit transformation can be used to
convert the outcome values to be on the real line and a model is fit to
these data. Once the posterior distributions are computed, the inverse
transformation can be used to put them back into the original units. The
transform argument can be used to do this.
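For accuracy in [0, 1], this might look like the following sketch (logit_trans is assumed here to be a transformation list with the same structure as no_trans; the model is fit on the logit scale and the posteriors are inverse-transformed back to the accuracy scale):

```r
bayes_mod_logit <- perf_mod(
  accuracy_results,
  transform = logit_trans,
  seed = 4344
)
```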
The second approach would be to use a different error distribution from the
exponential family. For RMSE values, the Gamma distribution may produce
better results at the expense of model computational complexity. This can be
achieved by passing the
family argument to
perf_mod, which forwards it to the underlying rstanarm model fit.
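A hedged sketch for strictly positive RMSE values (rmse_results is an illustrative data frame with the same layout as the earlier accuracy example; the log link is an assumption here, since the text only specifies the Gamma distribution):

```r
bayes_mod_gamma <- perf_mod(
  rmse_results,
  family = Gamma(link = "log"),
  seed = 4344
)
```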