Estimates the leave-one-out (LOO) information criterion for dynamite
models using Pareto smoothed importance sampling with the loo package.
Arguments
- x [dynamitefit]
  The model fit object.
- separate_channels [logical(1)]
  If TRUE, computes LOO separately for each channel. This can be useful in diagnosing where the model fails. Default is FALSE, in which case the likelihoods of the different channels are combined, i.e., all channels are left out simultaneously.
- thin [integer(1)]
  Use only every thin-th posterior sample when computing LOO. This can be beneficial when the model object contains a large number of samples. Default is 1, meaning that all samples are used.
- ...
  Ignored.
Value
An output from loo::loo(), or a list of such outputs (if separate_channels was TRUE).
References
Aki Vehtari, Andrew Gelman, and Jonah Gabry (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413–1432.
See also
Model diagnostics: hmc_diagnostics(), lfo(), mcmc_diagnostics()
Examples
data.table::setDTthreads(1) # For CRAN
# \donttest{
# Please update your rstan and StanHeaders installation before running
# on Windows
if (!identical(.Platform$OS.type, "windows")) {
# this gives warnings due to the small number of iterations
suppressWarnings(loo(gaussian_example_fit))
suppressWarnings(loo(gaussian_example_fit, separate_channels = TRUE))
}
#> $y_loglik
#>
#> Computed from 200 by 1450 log-likelihood matrix.
#>
#> Estimate SE
#> elpd_loo 243.2 27.0
#> p_loo 89.4 3.4
#> looic -486.5 54.0
#> ------
#> MCSE of elpd_loo is NA.
#> MCSE and ESS estimates assume MCMC draws (r_eff in [0.1, 1.7]).
#>
#> Pareto k diagnostic values:
#> Count Pct. Min. ESS
#> (-Inf, 0.57] (good) 1442 99.4% 23
#> (0.57, 1] (bad) 8 0.6% <NA>
#> (1, Inf) (very bad) 0 0.0% <NA>
#> See help('pareto-k-diagnostic') for details.
#>
# }
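As a sketch of the thin argument (reusing the gaussian_example_fit object from above; the thinning value 4 is an arbitrary choice for illustration), thinning trades some accuracy of the LOO estimates for reduced computation time on fits with many posterior draws:

```r
# Compute LOO using only every 4th posterior draw.
# Useful when the fit object contains a large number of samples;
# estimates are noisier than with thin = 1 (the default).
if (!identical(.Platform$OS.type, "windows")) {
  suppressWarnings(loo(gaussian_example_fit, thin = 4))
}
```

Because this example fit was run with few iterations, warnings are again suppressed; in practice, inspect them along with the Pareto k diagnostics.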