Bayesian analysis is used here to answer the question: "when looking at resampling results, are the differences between models 'real'?" To answer this, a model can be created where the outcome is the resampling statistics (e.g., accuracy or RMSE). These values are explained by the model types. In doing this, we can get parameter estimates for each model's effect on performance and make statistical (and practical) comparisons between models.
Usage
perf_mod(object, ...)
# S3 method for rset
perf_mod(object, transform = no_trans, hetero_var = FALSE, formula = NULL, ...)
# S3 method for resamples
perf_mod(
  object,
  transform = no_trans,
  hetero_var = FALSE,
  metric = object$metrics[1],
  ...
)
# S3 method for data.frame
perf_mod(object, transform = no_trans, hetero_var = FALSE, formula = NULL, ...)
# S3 method for tune_results
perf_mod(
  object,
  metric = NULL,
  transform = no_trans,
  hetero_var = FALSE,
  formula = NULL,
  filter = NULL,
  ...
)
# S3 method for workflow_set
perf_mod(
  object,
  metric = NULL,
  transform = no_trans,
  hetero_var = FALSE,
  formula = NULL,
  ...
)
Arguments
- object
Depending on the context (see Details below):
  - A data frame with id columns for the resampling groups and metric results in all of the other columns.
  - An rset object (such as rsample::vfold_cv()) containing the id column(s) and at least two numeric columns of model performance statistics (e.g., accuracy).
  - An object from caret::resamples.
  - An object with class tune_results, which could be produced by tune::tune_grid(), tune::tune_bayes(), or similar.
  - A workflow set where all results contain the metric value given in the metric argument.
- ...
Additional arguments to pass to rstanarm::stan_glmer(), such as verbose, prior, seed, refresh, family, etc.
- transform
A named list of transformation and inverse transformation functions. See logit_trans() as an example, or the sketch after this list for the expected structure.
- hetero_var
A logical; if TRUE, then different variances are estimated for each model group. Otherwise, the same variance is used for each group. Estimating heterogeneous variances may slow or prevent convergence.
- formula
An optional model formula to use for the Bayesian hierarchical model (see Details below).
- metric
A single character value for the statistic from the resamples object that should be analyzed.
- filter
A conditional logic statement that can be used to filter the statistics generated by tune_results using the tuning parameter values or the .config column.
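As a point of reference for the transform argument, a transformation is a two-element named list. Below is a minimal sketch of a custom transformation; sqrt_trans is a hypothetical name, and the func/inv element names follow the convention used by no_trans and logit_trans:
# `func` maps the statistic onto the modeling scale; `inv` maps the
# posterior samples back to the original units.
sqrt_trans <- list(
  func = function(x) sqrt(x),
  inv  = function(x) x^2
)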
Value
An object of class perf_mod. If a workflow set is given in object, there is an extra class of "perf_mod_workflow_set".
Details
These functions can be used to process and analyze matched resampling statistics from different models using a Bayesian generalized linear model with effects for the model and the resamples.
Bayesian Model formula
By default, a generalized linear model with Gaussian error and an identity link is fit to the data and has terms for the predictive model grouping variable. In this way, the performance metrics can be compared between models.
Additionally, random effect terms are used. For most resampling methods (except repeated V-fold cross-validation), a simple random intercept model is used with an exchangeable (i.e., compound-symmetric) variance structure. In the case of repeated cross-validation, two random intercept terms are used; one for the repeat and another for the fold within repeat. These also have exchangeable correlation structures.
The above model specification assumes that the variance in the performance metrics is the same across models. However, this is unlikely to be true in some cases. For example, for simple binomial accuracy, it is well known that the variance is highest when the accuracy is near 50 percent. When the argument hetero_var = TRUE, the variance structure uses random intercepts for each model term. This may produce more realistic posterior distributions but may take more time to converge.
Examples of the default formulas are:
# One ID field and common variance:
statistic ~ model + (model | id)
# One ID field and heterogeneous variance:
statistic ~ model + (model + 0 | id)
# Repeated CV (id = repeat, id2 = fold within repeat)
# with a common variance:
statistic ~ model + (model | id2/id)
# Repeated CV (id = repeat, id2 = fold within repeat)
# with a heterogeneous variance:
statistic ~ model + (model + 0 | id2/id)
# Default for unknown resampling method and
# multiple ID fields:
statistic ~ model + (model | idN/../id)
Custom formulas should use statistic as the outcome variable and model as the factor variable with the model names.
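For example, a minimal sketch of supplying a custom formula through the formula argument (resamples_df is the data frame created in the Input formats section below; the simplified random effect structure here is purely illustrative):
set.seed(101)
# A random intercept per resample only, instead of the default structure
custom_mod <- perf_mod(
  resamples_df,
  formula = statistic ~ model + (1 | id),
  refresh = 0
)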
Also, as shown in the package vignettes, the Gaussian assumption may be unrealistic. In this case, there are at least two approaches that can be used. First, the outcome statistics can be transformed prior to fitting the model. For example, for accuracy, the logit transformation can be used to convert the outcome values to be on the real line, and a model is fit to these data. Once the posterior distributions are computed, the inverse transformation can be used to put them back into the original units. The transform argument can be used to do this.
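A minimal sketch of this approach, using the built-in logit_trans object and assuming a hypothetical data frame accuracy_df of matched accuracy values with the same layout as resamples_df below:
set.seed(101)
# Fit on the logit scale; the posterior samples are converted back to
# the accuracy scale with the inverse transformation.
acc_mod <- perf_mod(accuracy_df, transform = logit_trans, refresh = 0)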
The second approach would be to use a different error distribution from the
exponential family. For RMSE values, the Gamma distribution may produce
better results at the expense of model computational complexity. This can be
achieved by passing the family
argument to perf_mod
as one might with
the glm
function.
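A minimal sketch, assuming a hypothetical data frame rmse_df of matched RMSE values; the family value is passed through to rstanarm::stan_glmer():
set.seed(101)
# Gamma errors with a log link for strictly positive RMSE values
rmse_mod <- perf_mod(rmse_df, family = Gamma(link = "log"), refresh = 0)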
Input formats
There are several ways to give resampling results to the perf_mod()
function. To
illustrate, here are some example objects using 10-fold cross-validation for a
simple two-class problem:
library(tidymodels)
library(tidyposterior)
library(workflowsets)
data(two_class_dat, package = "modeldata")
set.seed(100)
folds <- vfold_cv(two_class_dat)
We can define two different models (for simplicity, with no tuning parameters).
logistic_reg_glm_spec <-
logistic_reg() %>%
set_engine('glm')
mars_earth_spec <-
mars(prod_degree = 1) %>%
set_engine('earth') %>%
set_mode('classification')
For tidymodels, the tune::fit_resamples()
function can be used to estimate
performance for each model/resample:
rs_ctrl <- control_resamples(save_workflow = TRUE)
logistic_reg_glm_res <-
logistic_reg_glm_spec %>%
fit_resamples(Class ~ ., resamples = folds, control = rs_ctrl)
mars_earth_res <-
mars_earth_spec %>%
fit_resamples(Class ~ ., resamples = folds, control = rs_ctrl)
From these, there are several ways to pass the results to perf_mod()
.
Data Frame as Input
The most general approach is to have a data frame with the resampling labels (i.e., one or more id columns) as well as columns for each model that you would like to compare.
For the model results above, tune::collect_metrics()
can be used along with some
basic data manipulation steps:
logistic_roc <-
collect_metrics(logistic_reg_glm_res, summarize = FALSE) %>%
dplyr::filter(.metric == "roc_auc") %>%
dplyr::select(id, logistic = .estimate)
mars_roc <-
collect_metrics(mars_earth_res, summarize = FALSE) %>%
dplyr::filter(.metric == "roc_auc") %>%
dplyr::select(id, mars = .estimate)
resamples_df <- full_join(logistic_roc, mars_roc, by = "id")
resamples_df
## # A tibble: 10 x 3
## id logistic mars
## <chr> <dbl> <dbl>
## 1 Fold01 0.908 0.875
## 2 Fold02 0.904 0.917
## 3 Fold03 0.924 0.938
## 4 Fold04 0.881 0.881
## 5 Fold05 0.863 0.864
## 6 Fold06 0.893 0.889
## # … with 4 more rows
We can then give this directly to perf_mod()
:
set.seed(101)
roc_model_via_df <- perf_mod(resamples_df, refresh = 0)
tidy(roc_model_via_df) %>% summary()
## # A tibble: 2 x 4
## model mean lower upper
## <chr> <dbl> <dbl> <dbl>
## 1 logistic 0.892 0.879 0.906
## 2 mars 0.888 0.875 0.902
rsample Object as Input
Alternatively, the result columns can be merged back into the original rsample
object. The up-side to using this method is that perf_mod()
will know exactly
which model formula to use for the Bayesian model:
resamples_rset <-
full_join(folds, logistic_roc, by = "id") %>%
full_join(mars_roc, by = "id")
set.seed(101)
roc_model_via_rset <- perf_mod(resamples_rset, refresh = 0)
tidy(roc_model_via_rset) %>% summary()
## # A tibble: 2 x 4
## model mean lower upper
## <chr> <dbl> <dbl> <dbl>
## 1 logistic 0.892 0.879 0.906
## 2 mars 0.888 0.875 0.902
Workflow Set Object as Input
Finally, for tidymodels, a workflow set object can be used. This is a collection of
models/preprocessing combinations in one object. We can emulate a workflow set using
the existing example results then pass that to perf_mod()
:
example_wset <-
as_workflow_set(logistic = logistic_reg_glm_res, mars = mars_earth_res)
set.seed(101)
roc_model_via_wflowset <- perf_mod(example_wset, refresh = 0)
tidy(roc_model_via_wflowset) %>% summary()
## # A tibble: 2 x 4
## model mean lower upper
## <chr> <dbl> <dbl> <dbl>
## 1 logistic 0.892 0.879 0.906
## 2 mars 0.888 0.875 0.902
caret resamples Object as Input
The caret
package can also be used. An equivalent set of models are created:
library(caret)
set.seed(102)
logistic_caret <- train(Class ~ ., data = two_class_dat, method = "glm",
trControl = trainControl(method = "cv"))
set.seed(102)
mars_caret <- train(Class ~ ., data = two_class_dat, method = "gcvEarth",
tuneGrid = data.frame(degree = 1),
trControl = trainControl(method = "cv"))
Note that these two models use the same resamples as one another due to setting the
seed prior to calling train()
. However, these are different from the tidymodels
results used above (so the final results will be different).
caret
has a resamples()
function that can collect and collate the resamples.
This can also be given to perf_mod()
:
caret_resamples <- resamples(list(logistic = logistic_caret, mars = mars_caret))
set.seed(101)
roc_model_via_caret <- perf_mod(caret_resamples, refresh = 0)
tidy(roc_model_via_caret) %>% summary()
## # A tibble: 2 x 4
## model mean lower upper
## <chr> <dbl> <dbl> <dbl>
## 1 logistic 0.821 0.801 0.842
## 2 mars 0.822 0.802 0.842
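Regardless of the input format, the fitted perf_mod object can then be passed to functions such as contrast_models() to estimate posterior distributions for the differences between models. A short sketch using the caret-based fit from above:
# Posterior of the ROC AUC difference (all pairwise contrasts by default);
# `size` sets a region of practical equivalence for the summary
roc_diff <- contrast_models(roc_model_via_caret, seed = 103)
summary(roc_diff, size = 0.02)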
References
Kuhn and Silge (2021) Tidy Models with R, Chapter 11, https://www.tmwr.org/compare.html