Compare Two Models via Posterior Predictive Performance Metrics
Source: R/model_metrics.R
compare_models_ppc.Rd

Side-by-side comparison of two posterior predictive distributions on RMSE, MAE, and predictive variance gap. Lower is better for all three metrics; the signed difference (Model 1 − Model 2) is also returned.
Usage
compare_models_ppc(
y_obs,
y_rep1,
y_rep2,
model_names = c("Model 1", "Model 2")
)

Arguments
- y_obs
Numeric vector of length \(n\) containing the observed outcomes.
- y_rep1
Numeric matrix \(S_1 \times n\). Posterior predictive draws for Model 1 (e.g. from
simulate_ppc()).
- y_rep2
Numeric matrix \(S_2 \times n\). Posterior predictive draws for Model 2. \(S_1\) and \(S_2\) need not be equal.
- model_names
Character vector of length 2 giving display names for the two models. Defaults to
c("Model 1", "Model 2").
Value
A data.frame with four columns:
- metric
Name of the performance metric.
- model1
Value for Model 1.
- model2
Value for Model 2.
- diff_m1_minus_m2
Signed difference (Model 1 − Model 2). Negative values indicate Model 1 performed better on that metric.
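The authoritative metric definitions live in R/model_metrics.R; as a sketch (an assumption, not stated on this page), each error metric is plausibly computed against the pointwise posterior predictive mean \(\bar{y}^{\mathrm{rep}}_i = \frac{1}{S}\sum_{s=1}^{S} y^{\mathrm{rep}}_{s,i}\):

```latex
% Assumed definitions -- check R/model_metrics.R for the exact estimators.
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \bar{y}^{\mathrm{rep}}_i\bigr)^2},
\qquad
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\bigl|y_i - \bar{y}^{\mathrm{rep}}_i\bigr|
```

The predictive variance gap then presumably compares predictive spread to observed spread, e.g. \(\bigl|\frac{1}{n}\sum_i \mathrm{Var}_s\!\bigl(y^{\mathrm{rep}}_{s,i}\bigr) - \mathrm{Var}(y)\bigr|\), so that a well-calibrated model's draws are neither over- nor under-dispersed relative to the data.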
See also
ppc_diagnostics(), simulate_ppc()
Other ppc-workflow:
plot_ppc_overlay(),
plot_ppc_stat(),
ppc_diagnostics(),
print.ppc_diagnostics(),
simulate_ppc(),
theme_ppc()
Examples
set.seed(42)
n <- 60
S <- 150
y <- rnorm(n, mean = 3)
# Model 1: well-specified
draws1 <- matrix(rnorm(S * n, mean = 3), nrow = S, ncol = n)
y_rep1 <- simulate_ppc(draws1)
# Model 2: slightly mis-specified mean
draws2 <- matrix(rnorm(S * n, mean = 5), nrow = S, ncol = n)
y_rep2 <- simulate_ppc(draws2)
compare_models_ppc(y, y_rep1, y_rep2, model_names = c("Correct", "Shifted"))
#> metric Correct Shifted diff_m1_minus_m2
#> 1 RMSE 1.122712 2.347675 -1.224963
#> 2 MAE 0.865520 2.047945 -1.182424
#> 3 Pred. Variance Gap 0.719815 0.767053 -0.047237