This function summarizes the posterior distributions of specified parameters from a fitted joint model. The summary includes the posterior mean, standard deviation (sd), and the requested quantiles, as well as the effective sample size (n_eff) and Rhat for estimated parameters. See more examples in the Package Vignette.

Usage

joint_summarize(model_fit, par = "all", probs = c(0.025, 0.975), digits = 3)

Arguments

model_fit

An object of class stanfit.

par

A character vector of parameter names. The default is 'all'.

probs

A numeric vector of quantiles of interest. The default is c(0.025, 0.975).

digits

An integer indicating the number of decimal places used when rounding values in the summary table. The default value is 3.

Value

A summary table of parameter estimates.
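Because par accepts a character vector and probs accepts arbitrary quantiles, a subset of parameters can be summarized with a custom credible interval. A minimal sketch, assuming model_fit$model is a fitted stanfit object as produced in the Examples below (p10 and phi are parameters of that fit):

```r
# Summarize two parameters with a 50% credible interval,
# rounding the table to 2 decimal places
joint_summarize(model_fit$model, par = c("p10", "phi"),
                probs = c(0.25, 0.75), digits = 2)
```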

Note

Before summarizing the model fit, this function checks to ensure that the summary is possible given the inputs. These checks include:

  • Input model fit is an object of class 'stanfit'.

  • Input probs is a numeric vector.

  • Input par is a character vector.

  • All parameters in par are present in the fitted model.

  • Input model fit has converged (i.e. no divergent transitions after warm-up).

If any of these checks fail, the function returns an error message.
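For example, requesting a parameter name that is not in the fitted model trips one of these checks and stops with an error rather than returning a partial table. A hypothetical sketch, assuming model_fit$model is a fitted stanfit object as in the Examples below ("not_a_parameter" is an intentionally invalid name):

```r
# The failed input check surfaces as a regular R error,
# so it can be caught with tryCatch if needed
tryCatch(
  joint_summarize(model_fit$model, par = "not_a_parameter"),
  error = function(e) message("Check failed: ", conditionMessage(e))
)
```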

Examples

# \donttest{
data(green_crab_data)

# Fit a model
model_fit <- joint_model(data = green_crab_data, family = "negbin", q = TRUE,
                         multicore = FALSE)
#> 
#> SAMPLING FOR MODEL 'joint_count' NOW (CHAIN 1).
#> Chain 1: 
#> Chain 1: Gradient evaluation took 0.000414 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 4.14 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1: 
#> Chain 1: 
#> Chain 1: Iteration:    1 / 3000 [  0%]  (Warmup)
#> Chain 1: Iteration:  500 / 3000 [ 16%]  (Warmup)
#> Chain 1: Iteration:  501 / 3000 [ 16%]  (Sampling)
#> Chain 1: Iteration: 1000 / 3000 [ 33%]  (Sampling)
#> Chain 1: Iteration: 1500 / 3000 [ 50%]  (Sampling)
#> Chain 1: Iteration: 2000 / 3000 [ 66%]  (Sampling)
#> Chain 1: Iteration: 2500 / 3000 [ 83%]  (Sampling)
#> Chain 1: Iteration: 3000 / 3000 [100%]  (Sampling)
#> Chain 1: 
#> Chain 1:  Elapsed Time: 4.816 seconds (Warm-up)
#> Chain 1:                13.379 seconds (Sampling)
#> Chain 1:                18.195 seconds (Total)
#> Chain 1: 
#> 
#> SAMPLING FOR MODEL 'joint_count' NOW (CHAIN 2).
#> Chain 2: 
#> Chain 2: Gradient evaluation took 0.000432 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 4.32 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2: 
#> Chain 2: 
#> Chain 2: Iteration:    1 / 3000 [  0%]  (Warmup)
#> Chain 2: Iteration:  500 / 3000 [ 16%]  (Warmup)
#> Chain 2: Iteration:  501 / 3000 [ 16%]  (Sampling)
#> Chain 2: Iteration: 1000 / 3000 [ 33%]  (Sampling)
#> Chain 2: Iteration: 1500 / 3000 [ 50%]  (Sampling)
#> Chain 2: Iteration: 2000 / 3000 [ 66%]  (Sampling)
#> Chain 2: Iteration: 2500 / 3000 [ 83%]  (Sampling)
#> Chain 2: Iteration: 3000 / 3000 [100%]  (Sampling)
#> Chain 2: 
#> Chain 2:  Elapsed Time: 4.79 seconds (Warm-up)
#> Chain 2:                16.165 seconds (Sampling)
#> Chain 2:                20.955 seconds (Total)
#> Chain 2: 
#> 
#> SAMPLING FOR MODEL 'joint_count' NOW (CHAIN 3).
#> Chain 3: 
#> Chain 3: Gradient evaluation took 0.000522 seconds
#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 5.22 seconds.
#> Chain 3: Adjust your expectations accordingly!
#> Chain 3: 
#> Chain 3: 
#> Chain 3: Iteration:    1 / 3000 [  0%]  (Warmup)
#> Chain 3: Iteration:  500 / 3000 [ 16%]  (Warmup)
#> Chain 3: Iteration:  501 / 3000 [ 16%]  (Sampling)
#> Chain 3: Iteration: 1000 / 3000 [ 33%]  (Sampling)
#> Chain 3: Iteration: 1500 / 3000 [ 50%]  (Sampling)
#> Chain 3: Iteration: 2000 / 3000 [ 66%]  (Sampling)
#> Chain 3: Iteration: 2500 / 3000 [ 83%]  (Sampling)
#> Chain 3: Iteration: 3000 / 3000 [100%]  (Sampling)
#> Chain 3: 
#> Chain 3:  Elapsed Time: 5.208 seconds (Warm-up)
#> Chain 3:                14.258 seconds (Sampling)
#> Chain 3:                19.466 seconds (Total)
#> Chain 3: 
#> 
#> SAMPLING FOR MODEL 'joint_count' NOW (CHAIN 4).
#> Chain 4: 
#> Chain 4: Gradient evaluation took 0.000401 seconds
#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 4.01 seconds.
#> Chain 4: Adjust your expectations accordingly!
#> Chain 4: 
#> Chain 4: 
#> Chain 4: Iteration:    1 / 3000 [  0%]  (Warmup)
#> Chain 4: Iteration:  500 / 3000 [ 16%]  (Warmup)
#> Chain 4: Iteration:  501 / 3000 [ 16%]  (Sampling)
#> Chain 4: Iteration: 1000 / 3000 [ 33%]  (Sampling)
#> Chain 4: Iteration: 1500 / 3000 [ 50%]  (Sampling)
#> Chain 4: Iteration: 2000 / 3000 [ 66%]  (Sampling)
#> Chain 4: Iteration: 2500 / 3000 [ 83%]  (Sampling)
#> Chain 4: Iteration: 3000 / 3000 [100%]  (Sampling)
#> Chain 4: 
#> Chain 4:  Elapsed Time: 4.339 seconds (Warm-up)
#> Chain 4:                12.113 seconds (Sampling)
#> Chain 4:                16.452 seconds (Total)
#> Chain 4: 
#> Refer to the eDNAjoint guide for visualization tips:  https://ednajoint.netlify.app/tips#visualization-tips 

# Create summary table of all parameters
joint_summarize(model_fit$model)
#>           mean se_mean    sd  2.5%  97.5%     n_eff Rhat
#> mu[1,1]  0.107   0.000 0.028 0.060  0.168 11249.775    1
#> mu[1,2]  0.083   0.000 0.022 0.045  0.131 12030.000    1
#> mu[2,1]  0.032   0.000 0.035 0.001  0.126 13003.137    1
#> mu[2,2]  0.025   0.000 0.027 0.000  0.098 13096.576    1
#> mu[3,1]  0.017   0.000 0.018 0.000  0.065 12483.927    1
#> mu[3,2]  0.013   0.000 0.014 0.000  0.051 12801.222    1
#> mu[4,1]  0.684   0.001 0.109 0.496  0.919 10242.573    1
#> mu[4,2]  0.528   0.001 0.089 0.374  0.725 13242.074    1
#> mu[5,1]  0.101   0.001 0.108 0.002  0.395 13408.103    1
#> mu[5,2]  0.078   0.001 0.084 0.002  0.306 13187.346    1
#> mu[6,1]  0.120   0.001 0.133 0.003  0.471 12798.324    1
#> mu[6,2]  0.093   0.001 0.102 0.002  0.368 13251.577    1
#> mu[7,1]  0.013   0.000 0.013 0.000  0.047 13401.229    1
#> mu[7,2]  0.010   0.000 0.010 0.000  0.036 13469.004    1
#> mu[8,1]  0.307   0.003 0.297 0.010  1.099 11825.458    1
#> mu[8,2]  0.237   0.002 0.230 0.008  0.845 12050.187    1
#> mu[9,1]  0.034   0.000 0.033 0.001  0.120 13660.964    1
#> mu[9,2]  0.026   0.000 0.025 0.001  0.092 13616.333    1
#> mu[10,1] 1.056   0.002 0.247 0.648  1.616 11421.770    1
#> mu[10,2] 0.813   0.002 0.186 0.502  1.233 14735.481    1
#> mu[11,1] 0.302   0.002 0.286 0.013  1.065 13696.951    1
#> mu[11,2] 0.233   0.002 0.222 0.010  0.823 13680.319    1
#> mu[12,1] 0.021   0.000 0.022 0.000  0.082 14162.059    1
#> mu[12,2] 0.016   0.000 0.017 0.000  0.064 14067.922    1
#> mu[13,1] 7.791   0.014 1.362 5.525 10.920  8947.389    1
#> mu[13,2] 5.995   0.008 0.966 4.357  8.142 14352.034    1
#> mu[14,1] 0.120   0.000 0.020 0.084  0.163 11034.311    1
#> mu[14,2] 0.093   0.000 0.016 0.065  0.127 13081.598    1
#> mu[15,1] 0.769   0.006 0.543 0.125  2.138  8432.038    1
#> mu[15,2] 0.595   0.004 0.424 0.095  1.691  9372.344    1
#> mu[16,1] 3.854   0.007 0.664 2.722  5.320  8224.005    1
#> mu[16,2] 2.965   0.004 0.463 2.161  3.975 15936.736    1
#> mu[17,1] 0.163   0.002 0.177 0.004  0.652 12200.077    1
#> mu[17,2] 0.126   0.001 0.138 0.003  0.502 12179.628    1
#> mu[18,1] 3.345   0.012 1.102 1.774  5.965  9135.664    1
#> mu[18,2] 2.582   0.008 0.854 1.354  4.644 11189.034    1
#> mu[19,1] 3.975   0.007 0.717 2.785  5.606 10077.127    1
#> mu[19,2] 3.068   0.005 0.569 2.128  4.371 12332.396    1
#> mu[20,1] 0.119   0.001 0.071 0.025  0.294 13578.177    1
#> mu[20,2] 0.092   0.000 0.055 0.019  0.231 14010.271    1
#> q[1]     0.778   0.001 0.100 0.598  0.988  6848.915    1
#> p10      0.018   0.000 0.010 0.005  0.042  9514.083    1
#> beta[1]  1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[2]  1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[3]  1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[4]  1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[5]  1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[6]  1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[7]  1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[8]  1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[9]  1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[10] 1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[11] 1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[12] 1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[13] 1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[14] 1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[15] 1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[16] 1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[17] 1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[18] 1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[19] 1.271   0.003 0.247 0.796  1.773  8621.853    1
#> beta[20] 1.271   0.003 0.247 0.796  1.773  8621.853    1
#> alpha[1] 1.271   0.003 0.247 0.796  1.773  8621.853    1
#> phi      0.917   0.001 0.129 0.693  1.196 10776.960    1

# Summarize just 'p10' parameter
joint_summarize(model_fit$model, par = "p10", probs = c(0.025, 0.975),
                digits = 3)
#>      mean se_mean   sd  2.5% 97.5%    n_eff Rhat
#> p10 0.018       0 0.01 0.005 0.042 9514.083    1
# }