From 099efc0fd538d7e049b3ba31c8a672ca60ee8418 Mon Sep 17 00:00:00 2001
From: mattansb <35330040+mattansb@users.noreply.github.com>
Date: Tue, 28 Jan 2025 07:55:50 +0000
Subject: [PATCH] Deploying to gh-pages from @ easystats/bayestestR@00183b2344ba9f55efe3f4faf68dca4218f77d96 🚀
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 CODE_OF_CONDUCT.html                          |   2 +-
 CONTRIBUTING.html                             |   4 +-
 PULL_REQUEST_TEMPLATE.html                    |   2 +-
 SUPPORT.html                                  |   2 +-
 articles/bayes_factors.html                   |  14 +-
 .../figure-html/unnamed-chunk-13-1.png        | Bin 116419 -> 104392 bytes
 articles/bayestestR.html                      |   4 +-
 articles/credible_interval.html               |   2 +-
 articles/example1.html                        |   2 +-
 articles/example2.html                        |   2 +-
 articles/example3.html                        |   2 +-
 articles/guidelines.html                      |   2 +-
 articles/indicesExistenceComparison.html      |   2 +-
 articles/mediation.html                       |   2 +-
 articles/overview_of_vignettes.html           |   2 +-
 articles/probability_of_direction.html        |   2 +-
 articles/region_of_practical_equivalence.html |   2 +-
 .../web_only/indicesEstimationComparison.html |   2 +-
 authors.html                                  |   2 +-
 news/index.html                               |   2 +-
 pkgdown.yml                                   |   4 +-
 reference/area_under_curve.html               |   2 +-
 reference/as.data.frame.density.html          |   2 +-
 reference/as.numeric.p_direction.html         |   2 +-
 reference/bayesfactor.html                    |  36 ++--
 reference/bayesfactor_inclusion.html          |   2 +-
 reference/bayesfactor_models.html             | 162 +++++++++---------
 reference/bayesfactor_parameters.html         |   2 +-
 reference/bayesfactor_restricted.html         |  28 +--
 reference/bayestestR-package.html             |   2 +-
 reference/bci.html                            |   2 +-
 reference/bic_to_bf.html                      |   2 +-
 reference/check_prior-1.png                   | Bin 91334 -> 84190 bytes
 reference/check_prior.html                    |   2 +-
 reference/ci.html                             |   2 +-
 reference/contr.equalprior.html               |   2 +-
 .../convert_bayesian_as_frequentist.html      |   2 +-
 reference/density_at.html                     |   2 +-
 reference/describe_posterior.html             |   2 +-
 reference/describe_prior.html                 |  38 ++--
 reference/diagnostic_draws.html               |   2 +-
 reference/diagnostic_posterior.html           |  26 +--
 reference/disgust.html                        |   2 +-
 reference/distribution.html                   |   2 +-
 reference/dot-extract_priors_rstanarm.html    |   2 +-
 reference/dot-prior_new_location.html         |   2 +-
 reference/dot-select_nums.html                |   2 +-
 reference/effective_sample.html               |   2 +-
 reference/equivalence_test.html               |  54 +++---
 reference/estimate_density.html               |  10 +-
 reference/eti.html                            |  18 +-
 reference/hdi.html                            |  10 +-
 reference/map_estimate.html                   |  32 ++--
 reference/mcse.html                           |   2 +-
 reference/mediation.html                      |   2 +-
 reference/model_to_priors.html                |   2 +-
 reference/overlap.html                        |   2 +-
 reference/p_direction.html                    |  14 +-
 reference/p_map.html                          |  14 +-
 reference/p_rope.html                         |   2 +-
 reference/p_significance.html                 |   2 +-
 reference/p_to_bf.html                        |   2 +-
 reference/pd_to_p.html                        |   2 +-
 reference/point_estimate.html                 |  46 ++---
 reference/reexports.html                      |   2 +-
 reference/reshape_iterations.html             |   2 +-
 reference/rope.html                           |   2 +-
 reference/rope_range.html                     |  22 +--
 reference/sensitivity_to_prior.html           |  48 +++---
 reference/sexit.html                          |   2 +-
 reference/sexit_thresholds.html               |  12 +-
 reference/si.html                             |  34 ++--
 reference/simulate_correlation.html           |   2 +-
 reference/simulate_prior.html                 |   2 +-
 reference/simulate_simpson.html               |   2 +-
 reference/spi.html                            |   2 +-
 reference/unupdate.html                       |   2 +-
 reference/weighted_posteriors.html            |   2 +-
 search.json                                   |   2 +-
 79 files changed, 371 insertions(+), 371 deletions(-)

diff --git a/CODE_OF_CONDUCT.html b/CODE_OF_CONDUCT.html
index d0d5632f1..ac4454c19 100644
--- a/CODE_OF_CONDUCT.html
+++ b/CODE_OF_CONDUCT.html
@@ -67,7 +67,7 @@
diff --git a/CONTRIBUTING.html b/CONTRIBUTING.html
index cf2fc06d9..96fa99723 100644
--- a/CONTRIBUTING.html
+++ b/CONTRIBUTING.html
@@ -67,7 +67,7 @@
@@ -102,7 +102,7 @@
 Checks to do before submission
-devtools::check(): General checks
+devtools::check(): General checks
diff --git a/PULL_REQUEST_TEMPLATE.html b/PULL_REQUEST_TEMPLATE.html
index d2417811b..92815c754 100644
--- a/PULL_REQUEST_TEMPLATE.html
+++ b/PULL_REQUEST_TEMPLATE.html
@@ -67,7 +67,7 @@
diff --git a/SUPPORT.html b/SUPPORT.html
index a1b27a5dc..da2d27c89 100644
--- a/SUPPORT.html
+++ b/SUPPORT.html
@@ -67,7 +67,7 @@
diff --git a/articles/bayes_factors.html b/articles/bayes_factors.html
index be2be2481..5ba7fe65c 100644
--- a/articles/bayes_factors.html
+++ b/articles/bayes_factors.html
@@ -103,7 +103,7 @@
 Bayes Factors
-  Source: vignettes/bayes_factors.Rmd
+  Source: vignettes/bayes_factors.Rmd
 bayes_factors.Rmd
@@ -1290,8 +1290,8 @@
-Order restrictions
+Order restrictions
diff --git a/articles/bayestestR.html b/articles/bayestestR.html
 Get Started with Bayesian Analysis
-  Source: vignettes/bayestestR.Rmd
+  Source: vignettes/bayestestR.Rmd
 bayestestR.Rmd
@@ -242,7 +242,7 @@
 suite by running the following:
 install.packages("remotes")
-remotes::install_github("easystats/easystats")
+remotes::install_github("easystats/easystats")
diff --git a/articles/credible_interval.html b/articles/credible_interval.html
index 98e8f43d8..4d797b0ef 100644
--- a/articles/credible_interval.html
+++ b/articles/credible_interval.html
@@ -103,7 +103,7 @@
 Credible Intervals (CI)
-  Source: vignettes/credible_interval.Rmd
+  Source: vignettes/credible_interval.Rmd
 credible_interval.Rmd
diff --git a/articles/example1.html b/articles/example1.html
index c8319e5d3..681300dbb 100644
--- a/articles/example1.html
+++ b/articles/example1.html
@@ -103,7 +103,7 @@
 1. Initiation to Bayesian models
-  Source: vignettes/example1.Rmd
+  Source: vignettes/example1.Rmd
 example1.Rmd
diff --git a/articles/example2.html b/articles/example2.html
index a5e5f1174..0e4d61535 100644
--- a/articles/example2.html
+++ b/articles/example2.html
@@ -103,7 +103,7 @@
 2. Confirmation of Bayesian skills
-  Source: vignettes/example2.Rmd
+  Source: vignettes/example2.Rmd
 example2.Rmd
diff --git a/articles/example3.html b/articles/example3.html
index b030713ac..bc3a1d5a8 100644
--- a/articles/example3.html
+++ b/articles/example3.html
@@ -103,7 +103,7 @@
 3. Become a Bayesian master
-  Source: vignettes/example3.Rmd
+  Source: vignettes/example3.Rmd
 example3.Rmd
diff --git a/articles/guidelines.html b/articles/guidelines.html
index ad8d3b902..b463ad67b 100644
--- a/articles/guidelines.html
+++ b/articles/guidelines.html
@@ -103,7 +103,7 @@
 Reporting Guidelines
-  Source: vignettes/guidelines.Rmd
+  Source: vignettes/guidelines.Rmd
 guidelines.Rmd
diff --git a/articles/indicesExistenceComparison.html b/articles/indicesExistenceComparison.html
index 9a7c163ce..41ce589e6 100644
--- a/articles/indicesExistenceComparison.html
+++ b/articles/indicesExistenceComparison.html
@@ -103,7 +103,7 @@
 In-Depth 2: Comparison of Indices of Effect Existence and Significance
-  Source: vignettes/indicesExistenceComparison.Rmd
+  Source: vignettes/indicesExistenceComparison.Rmd
 indicesExistenceComparison.Rmd
diff --git a/articles/mediation.html b/articles/mediation.html
index b9fe7f4fc..43935aa47 100644
--- a/articles/mediation.html
+++ b/articles/mediation.html
@@ -103,7 +103,7 @@
 Mediation Analysis using Bayesian Regression Models
-  Source: vignettes/mediation.Rmd
+  Source: vignettes/mediation.Rmd
 mediation.Rmd
diff --git a/articles/overview_of_vignettes.html b/articles/overview_of_vignettes.html
index 294727695..78e3a0c87 100644
--- a/articles/overview_of_vignettes.html
+++ b/articles/overview_of_vignettes.html
@@ -103,7 +103,7 @@
 Overview of Vignettes
-  Source: vignettes/overview_of_vignettes.Rmd
+  Source: vignettes/overview_of_vignettes.Rmd
 overview_of_vignettes.Rmd
diff --git a/articles/probability_of_direction.html b/articles/probability_of_direction.html
index da0736430..bbcd4811e 100644
--- a/articles/probability_of_direction.html
+++ b/articles/probability_of_direction.html
@@ -103,7 +103,7 @@
 Probability of Direction (pd)
-  Source: vignettes/probability_of_direction.Rmd
+  Source: vignettes/probability_of_direction.Rmd
 probability_of_direction.Rmd
diff --git a/articles/region_of_practical_equivalence.html b/articles/region_of_practical_equivalence.html
index 1c539e74b..873a5bcfd 100644
--- a/articles/region_of_practical_equivalence.html
+++ b/articles/region_of_practical_equivalence.html
@@ -103,7 +103,7 @@
 Region of Practical Equivalence (ROPE)
-  Source: vignettes/region_of_practical_equivalence.Rmd
+  Source: vignettes/region_of_practical_equivalence.Rmd
 region_of_practical_equivalence.Rmd
diff --git a/articles/web_only/indicesEstimationComparison.html b/articles/web_only/indicesEstimationComparison.html
index 28e9c8176..611af1edf 100644
--- a/articles/web_only/indicesEstimationComparison.html
+++ b/articles/web_only/indicesEstimationComparison.html
@@ -103,7 +103,7 @@
 In-Depth 1: Comparison of Point-Estimates
-  Source: vignettes/web_only/indicesEstimationComparison.Rmd
+  Source: vignettes/web_only/indicesEstimationComparison.Rmd
 indicesEstimationComparison.Rmd
diff --git a/authors.html b/authors.html
index 453c35ba8..90d1cae6b 100644
--- a/authors.html
+++ b/authors.html
@@ -120,7 +120,7 @@
 Authors
 Citation
-  Source: inst/CITATION
+  Source: inst/CITATION
 Makowski, D., Ben-Shachar, M., & Lüdecke, D. (2019). bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework. Journal of Open Source Software, 4(40), 1541. doi:10.21105/joss.01541
 @Article{,
diff --git a/news/index.html b/news/index.html
index 9e3b7b35e..b67c2269d 100644
--- a/news/index.html
+++ b/news/index.html
@@ -67,7 +67,7 @@
diff --git a/pkgdown.yml b/pkgdown.yml
index a397e3415..edaa48777 100644
--- a/pkgdown.yml
+++ b/pkgdown.yml
@@ -1,6 +1,6 @@
 pandoc: 3.6.2
 pkgdown: 2.1.1.9000
-pkgdown_sha: 6615322cb2ce15b1effcecf894123c88fa10b9c9
+pkgdown_sha: 74fda8cdb8bbbcd215faf2a2079f4eb98db586c6
 articles:
   bayes_factors: bayes_factors.html
   bayestestR: bayestestR.html
@@ -15,7 +15,7 @@ articles:
   overview_of_vignettes: overview_of_vignettes.html
   probability_of_direction: probability_of_direction.html
   region_of_practical_equivalence: region_of_practical_equivalence.html
-last_built: 2025-01-17T16:35Z
+last_built: 2025-01-28T07:28Z
 urls:
   reference: https://easystats.github.io/bayestestR/reference
   article: https://easystats.github.io/bayestestR/articles
diff --git a/reference/area_under_curve.html b/reference/area_under_curve.html
index 56e81c647..6e26baa4e 100644
--- a/reference/area_under_curve.html
+++ b/reference/area_under_curve.html
@@ -73,7 +73,7 @@
diff --git a/reference/as.data.frame.density.html b/reference/as.data.frame.density.html
index e6e4292b6..8cdcedf2b 100644
--- a/reference/as.data.frame.density.html
+++ b/reference/as.data.frame.density.html
@@ -67,7 +67,7 @@
diff --git a/reference/as.numeric.p_direction.html b/reference/as.numeric.p_direction.html
index 88325abf5..8be804020 100644
--- a/reference/as.numeric.p_direction.html
+++ b/reference/as.numeric.p_direction.html
@@ -67,7 +67,7 @@
diff --git a/reference/bayesfactor.html b/reference/bayesfactor.html
index 81f0db4bc..1bce0d278 100644
--- a/reference/bayesfactor.html
+++ b/reference/bayesfactor.html
@@ -79,7 +79,7 @@
@@ -195,8 +195,8 @@
 Examples
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1).
 #> Chain 1:
-#> Chain 1: Gradient evaluation took 4.3e-05 seconds
-#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.43 seconds.
+#> Chain 1: Gradient evaluation took 4.2e-05 seconds
+#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.42 seconds.
 #> Chain 1: Adjust your expectations accordingly!
 #> Chain 1:
 #> Chain 1:
@@ -213,15 +213,15 @@
 Examples
 #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 1:
-#> Chain 1: Elapsed Time: 0.198 seconds (Warm-up)
-#> Chain 1: 0.3 seconds (Sampling)
-#> Chain 1: 0.498 seconds (Total)
+#> Chain 1: Elapsed Time: 0.199 seconds (Warm-up)
+#> Chain 1: 0.298 seconds (Sampling)
+#> Chain 1: 0.497 seconds (Total)
 #> Chain 1:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2).
 #> Chain 2:
-#> Chain 2: Gradient evaluation took 1.4e-05 seconds
-#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.14 seconds.
+#> Chain 2: Gradient evaluation took 1.5e-05 seconds
+#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.15 seconds.
 #> Chain 2: Adjust your expectations accordingly!
 #> Chain 2:
 #> Chain 2:
@@ -238,15 +238,15 @@
 Examples
 #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 2:
-#> Chain 2: Elapsed Time: 0.175 seconds (Warm-up)
-#> Chain 2: 0.177 seconds (Sampling)
-#> Chain 2: 0.352 seconds (Total)
+#> Chain 2: Elapsed Time: 0.177 seconds (Warm-up)
+#> Chain 2: 0.178 seconds (Sampling)
+#> Chain 2: 0.355 seconds (Total)
 #> Chain 2:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3).
 #> Chain 3:
-#> Chain 3: Gradient evaluation took 1.4e-05 seconds
-#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.14 seconds.
+#> Chain 3: Gradient evaluation took 1.5e-05 seconds
+#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.15 seconds.
 #> Chain 3: Adjust your expectations accordingly!
 #> Chain 3:
 #> Chain 3:
@@ -263,15 +263,15 @@
 Examples
 #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 3:
-#> Chain 3: Elapsed Time: 0.193 seconds (Warm-up)
-#> Chain 3: 0.122 seconds (Sampling)
-#> Chain 3: 0.315 seconds (Total)
+#> Chain 3: Elapsed Time: 0.2 seconds (Warm-up)
+#> Chain 3: 0.127 seconds (Sampling)
+#> Chain 3: 0.327 seconds (Total)
 #> Chain 3:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4).
 #> Chain 4:
-#> Chain 4: Gradient evaluation took 1.2e-05 seconds
-#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
+#> Chain 4: Gradient evaluation took 1.4e-05 seconds
+#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.14 seconds.
 #> Chain 4: Adjust your expectations accordingly!
 #> Chain 4:
 #> Chain 4:
diff --git a/reference/bayesfactor_inclusion.html b/reference/bayesfactor_inclusion.html
index 6e926aea2..5d1115381 100644
--- a/reference/bayesfactor_inclusion.html
+++ b/reference/bayesfactor_inclusion.html
@@ -71,7 +71,7 @@
diff --git a/reference/bayesfactor_models.html b/reference/bayesfactor_models.html
index b30de5cb1..2ca81da40 100644
--- a/reference/bayesfactor_models.html
+++ b/reference/bayesfactor_models.html
@@ -71,7 +71,7 @@
@@ -288,8 +288,8 @@
 Examples
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1).
 #> Chain 1:
-#> Chain 1: Gradient evaluation took 3.6e-05 seconds
-#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.36 seconds.
+#> Chain 1: Gradient evaluation took 1.9e-05 seconds
+#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.19 seconds.
 #> Chain 1: Adjust your expectations accordingly!
 #> Chain 1:
 #> Chain 1:
@@ -306,8 +306,8 @@
 Examples
 #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 1:
-#> Chain 1: Elapsed Time: 0.017 seconds (Warm-up)
-#> Chain 1: 0.037 seconds (Sampling)
+#> Chain 1: Elapsed Time: 0.018 seconds (Warm-up)
+#> Chain 1: 0.036 seconds (Sampling)
 #> Chain 1: 0.054 seconds (Total)
 #> Chain 1:
 #>
@@ -356,9 +356,9 @@
 Examples
 #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 3:
-#> Chain 3: Elapsed Time: 0.019 seconds (Warm-up)
+#> Chain 3: Elapsed Time: 0.018 seconds (Warm-up)
 #> Chain 3: 0.036 seconds (Sampling)
-#> Chain 3: 0.055 seconds (Total)
+#> Chain 3: 0.054 seconds (Total)
 #> Chain 3:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4).
@@ -382,8 +382,8 @@
 Examples
 #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 4:
 #> Chain 4: Elapsed Time: 0.018 seconds (Warm-up)
-#> Chain 4: 0.036 seconds (Sampling)
-#> Chain 4: 0.054 seconds (Total)
+#> Chain 4: 0.037 seconds (Sampling)
+#> Chain 4: 0.055 seconds (Total)
 #> Chain 4:
 stan_m1 <- suppressWarnings(rstanarm::stan_glm(Sepal.Length ~ Species, data = iris,
@@ -393,8 +393,8 @@
 Examples
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1).
 #> Chain 1:
-#> Chain 1: Gradient evaluation took 1.9e-05 seconds
-#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.19 seconds.
+#> Chain 1: Gradient evaluation took 2.2e-05 seconds
+#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.22 seconds.
 #> Chain 1: Adjust your expectations accordingly!
 #> Chain 1:
 #> Chain 1:
@@ -411,15 +411,15 @@
 Examples
 #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 1:
-#> Chain 1: Elapsed Time: 0.028 seconds (Warm-up)
+#> Chain 1: Elapsed Time: 0.029 seconds (Warm-up)
 #> Chain 1: 0.046 seconds (Sampling)
-#> Chain 1: 0.074 seconds (Total)
+#> Chain 1: 0.075 seconds (Total)
 #> Chain 1:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2).
 #> Chain 2:
-#> Chain 2: Gradient evaluation took 9e-06 seconds
-#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
+#> Chain 2: Gradient evaluation took 1.1e-05 seconds
+#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
 #> Chain 2: Adjust your expectations accordingly!
 #> Chain 2:
 #> Chain 2:
@@ -436,15 +436,15 @@
 Examples
 #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 2:
-#> Chain 2: Elapsed Time: 0.03 seconds (Warm-up)
-#> Chain 2: 0.047 seconds (Sampling)
-#> Chain 2: 0.077 seconds (Total)
+#> Chain 2: Elapsed Time: 0.031 seconds (Warm-up)
+#> Chain 2: 0.048 seconds (Sampling)
+#> Chain 2: 0.079 seconds (Total)
 #> Chain 2:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3).
 #> Chain 3:
-#> Chain 3: Gradient evaluation took 9e-06 seconds
-#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
+#> Chain 3: Gradient evaluation took 1.2e-05 seconds
+#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
 #> Chain 3: Adjust your expectations accordingly!
 #> Chain 3:
 #> Chain 3:
@@ -461,15 +461,15 @@
 Examples
 #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 3:
-#> Chain 3: Elapsed Time: 0.028 seconds (Warm-up)
-#> Chain 3: 0.047 seconds (Sampling)
-#> Chain 3: 0.075 seconds (Total)
+#> Chain 3: Elapsed Time: 0.029 seconds (Warm-up)
+#> Chain 3: 0.048 seconds (Sampling)
+#> Chain 3: 0.077 seconds (Total)
 #> Chain 3:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4).
 #> Chain 4:
-#> Chain 4: Gradient evaluation took 9e-06 seconds
-#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
+#> Chain 4: Gradient evaluation took 1.2e-05 seconds
+#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
 #> Chain 4: Adjust your expectations accordingly!
 #> Chain 4:
 #> Chain 4:
@@ -498,8 +498,8 @@
 Examples
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1).
 #> Chain 1:
-#> Chain 1: Gradient evaluation took 2.1e-05 seconds
-#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.21 seconds.
+#> Chain 1: Gradient evaluation took 2.2e-05 seconds
+#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.22 seconds.
 #> Chain 1: Adjust your expectations accordingly!
 #> Chain 1:
 #> Chain 1:
@@ -517,14 +517,14 @@
 Examples
 #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 1:
 #> Chain 1: Elapsed Time: 0.095 seconds (Warm-up)
-#> Chain 1: 0.109 seconds (Sampling)
-#> Chain 1: 0.204 seconds (Total)
+#> Chain 1: 0.111 seconds (Sampling)
+#> Chain 1: 0.206 seconds (Total)
 #> Chain 1:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2).
 #> Chain 2:
-#> Chain 2: Gradient evaluation took 9e-06 seconds
-#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
+#> Chain 2: Gradient evaluation took 1.2e-05 seconds
+#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
 #> Chain 2: Adjust your expectations accordingly!
 #> Chain 2:
 #> Chain 2:
@@ -541,15 +541,15 @@
 Examples
 #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 2:
-#> Chain 2: Elapsed Time: 0.084 seconds (Warm-up)
-#> Chain 2: 0.106 seconds (Sampling)
-#> Chain 2: 0.19 seconds (Total)
+#> Chain 2: Elapsed Time: 0.086 seconds (Warm-up)
+#> Chain 2: 0.108 seconds (Sampling)
+#> Chain 2: 0.194 seconds (Total)
 #> Chain 2:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3).
 #> Chain 3:
-#> Chain 3: Gradient evaluation took 9e-06 seconds
-#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
+#> Chain 3: Gradient evaluation took 1.2e-05 seconds
+#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
 #> Chain 3: Adjust your expectations accordingly!
 #> Chain 3:
 #> Chain 3:
@@ -566,15 +566,15 @@
 Examples
 #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 3:
-#> Chain 3: Elapsed Time: 0.086 seconds (Warm-up)
-#> Chain 3: 0.109 seconds (Sampling)
-#> Chain 3: 0.195 seconds (Total)
+#> Chain 3: Elapsed Time: 0.088 seconds (Warm-up)
+#> Chain 3: 0.111 seconds (Sampling)
+#> Chain 3: 0.199 seconds (Total)
 #> Chain 3:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4).
 #> Chain 4:
-#> Chain 4: Gradient evaluation took 9e-06 seconds
-#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
+#> Chain 4: Gradient evaluation took 1.1e-05 seconds
+#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
 #> Chain 4: Adjust your expectations accordingly!
 #> Chain 4:
 #> Chain 4:
@@ -591,9 +591,9 @@
 Examples
 #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 4:
-#> Chain 4: Elapsed Time: 0.079 seconds (Warm-up)
-#> Chain 4: 0.097 seconds (Sampling)
-#> Chain 4: 0.176 seconds (Total)
+#> Chain 4: Elapsed Time: 0.081 seconds (Warm-up)
+#> Chain 4: 0.1 seconds (Sampling)
+#> Chain 4: 0.181 seconds (Total)
 #> Chain 4:
 bayesfactor_models(stan_m1, stan_m2, denominator = stan_m0, verbose = FALSE)
 #> Bayes Factors for Model Comparison
@@ -615,8 +615,8 @@
 Examples
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
 #> Chain 1:
-#> Chain 1: Gradient evaluation took 2.1e-05 seconds
-#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.21 seconds.
+#> Chain 1: Gradient evaluation took 2.2e-05 seconds
+#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.22 seconds.
 #> Chain 1: Adjust your expectations accordingly!
 #> Chain 1:
 #> Chain 1:
@@ -633,15 +633,15 @@
 Examples
 #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 1:
-#> Chain 1: Elapsed Time: 0.029 seconds (Warm-up)
-#> Chain 1: 0.028 seconds (Sampling)
-#> Chain 1: 0.057 seconds (Total)
+#> Chain 1: Elapsed Time: 0.03 seconds (Warm-up)
+#> Chain 1: 0.032 seconds (Sampling)
+#> Chain 1: 0.062 seconds (Total)
 #> Chain 1:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
 #> Chain 2:
-#> Chain 2: Gradient evaluation took 7e-06 seconds
-#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
+#> Chain 2: Gradient evaluation took 8e-06 seconds
+#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds.
 #> Chain 2: Adjust your expectations accordingly!
 #> Chain 2:
 #> Chain 2:
@@ -658,9 +658,9 @@
 Examples
 #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 2:
-#> Chain 2: Elapsed Time: 0.028 seconds (Warm-up)
-#> Chain 2: 0.025 seconds (Sampling)
-#> Chain 2: 0.053 seconds (Total)
+#> Chain 2: Elapsed Time: 0.029 seconds (Warm-up)
+#> Chain 2: 0.026 seconds (Sampling)
+#> Chain 2: 0.055 seconds (Total)
 #> Chain 2:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
@@ -684,8 +684,8 @@
 Examples
 #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 3:
 #> Chain 3: Elapsed Time: 0.03 seconds (Warm-up)
-#> Chain 3: 0.035 seconds (Sampling)
-#> Chain 3: 0.065 seconds (Total)
+#> Chain 3: 0.033 seconds (Sampling)
+#> Chain 3: 0.063 seconds (Total)
 #> Chain 3:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
@@ -708,9 +708,9 @@
 Examples
 #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 4:
-#> Chain 4: Elapsed Time: 0.031 seconds (Warm-up)
-#> Chain 4: 0.033 seconds (Sampling)
-#> Chain 4: 0.064 seconds (Total)
+#> Chain 4: Elapsed Time: 0.032 seconds (Warm-up)
+#> Chain 4: 0.034 seconds (Sampling)
+#> Chain 4: 0.066 seconds (Total)
 #> Chain 4:
 brm2 <- brms::brm(Sepal.Length ~ Species, data = iris, save_pars = save_pars(all = TRUE))
 #> Compiling Stan program...
@@ -718,8 +718,8 @@
 Examples
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
 #> Chain 1:
-#> Chain 1: Gradient evaluation took 1e-05 seconds
-#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds.
+#> Chain 1: Gradient evaluation took 1.1e-05 seconds
+#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
 #> Chain 1: Adjust your expectations accordingly!
 #> Chain 1:
 #> Chain 1:
@@ -743,8 +743,8 @@
 Examples
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
 #> Chain 2:
-#> Chain 2: Gradient evaluation took 3e-06 seconds
-#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds.
+#> Chain 2: Gradient evaluation took 4e-06 seconds
+#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds.
 #> Chain 2: Adjust your expectations accordingly!
 #> Chain 2:
 #> Chain 2:
@@ -768,8 +768,8 @@
 Examples
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
 #> Chain 3:
-#> Chain 3: Gradient evaluation took 3e-06 seconds
-#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds.
+#> Chain 3: Gradient evaluation took 4e-06 seconds
+#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds.
 #> Chain 3: Adjust your expectations accordingly!
 #> Chain 3:
 #> Chain 3:
@@ -793,8 +793,8 @@
 Examples
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
 #> Chain 4:
-#> Chain 4: Gradient evaluation took 3e-06 seconds
-#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds.
+#> Chain 4: Gradient evaluation took 4e-06 seconds
+#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds.
 #> Chain 4: Adjust your expectations accordingly!
 #> Chain 4:
 #> Chain 4:
@@ -825,8 +825,8 @@
 Examples
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
 #> Chain 1:
-#> Chain 1: Gradient evaluation took 7e-06 seconds
-#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
+#> Chain 1: Gradient evaluation took 6e-06 seconds
+#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds.
 #> Chain 1: Adjust your expectations accordingly!
 #> Chain 1:
 #> Chain 1:
@@ -843,9 +843,9 @@
 Examples
 #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 1:
-#> Chain 1: Elapsed Time: 0.048 seconds (Warm-up)
-#> Chain 1: 0.057 seconds (Sampling)
-#> Chain 1: 0.105 seconds (Total)
+#> Chain 1: Elapsed Time: 0.049 seconds (Warm-up)
+#> Chain 1: 0.058 seconds (Sampling)
+#> Chain 1: 0.107 seconds (Total)
 #> Chain 1:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
@@ -868,9 +868,9 @@
 Examples
 #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 2:
-#> Chain 2: Elapsed Time: 0.049 seconds (Warm-up)
-#> Chain 2: 0.052 seconds (Sampling)
-#> Chain 2: 0.101 seconds (Total)
+#> Chain 2: Elapsed Time: 0.05 seconds (Warm-up)
+#> Chain 2: 0.053 seconds (Sampling)
+#> Chain 2: 0.103 seconds (Total)
 #> Chain 2:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
@@ -893,9 +893,9 @@
 Examples
 #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 3:
-#> Chain 3: Elapsed Time: 0.046 seconds (Warm-up)
-#> Chain 3: 0.052 seconds (Sampling)
-#> Chain 3: 0.098 seconds (Total)
+#> Chain 3: Elapsed Time: 0.047 seconds (Warm-up)
+#> Chain 3: 0.055 seconds (Sampling)
+#> Chain 3: 0.102 seconds (Total)
 #> Chain 3:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
@@ -918,9 +918,9 @@
 Examples
 #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 4:
-#> Chain 4: Elapsed Time: 0.055 seconds (Warm-up)
-#> Chain 4: 0.052 seconds (Sampling)
-#> Chain 4: 0.107 seconds (Total)
+#> Chain 4: Elapsed Time: 0.052 seconds (Warm-up)
+#> Chain 4: 0.053 seconds (Sampling)
+#> Chain 4: 0.105 seconds (Total)
 #> Chain 4:
 bayesfactor_models(brm1, brm2, brm3, denominator = 1, verbose = FALSE)
diff --git a/reference/bayesfactor_parameters.html b/reference/bayesfactor_parameters.html
index 08a433e19..6982019ac 100644
--- a/reference/bayesfactor_parameters.html
+++ b/reference/bayesfactor_parameters.html
@@ -119,7 +119,7 @@
diff --git a/reference/bayesfactor_restricted.html b/reference/bayesfactor_restricted.html
index 00102b77f..ef6120d7e 100644
--- a/reference/bayesfactor_restricted.html
+++ b/reference/bayesfactor_restricted.html
@@ -79,7 +79,7 @@
@@ -357,8 +357,8 @@
 Examples
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1).
 #> Chain 1:
-#> Chain 1: Gradient evaluation took 2.1e-05 seconds
-#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.21 seconds.
+#> Chain 1: Gradient evaluation took 2e-05 seconds
+#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.2 seconds.
 #> Chain 1: Adjust your expectations accordingly!
 #> Chain 1:
 #> Chain 1:
@@ -376,8 +376,8 @@
 Examples
 #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 1:
 #> Chain 1: Elapsed Time: 0.03 seconds (Warm-up)
-#> Chain 1: 0.038 seconds (Sampling)
-#> Chain 1: 0.068 seconds (Total)
+#> Chain 1: 0.039 seconds (Sampling)
+#> Chain 1: 0.069 seconds (Total)
 #> Chain 1:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2).
@@ -401,14 +401,14 @@
 Examples
 #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 2:
 #> Chain 2: Elapsed Time: 0.03 seconds (Warm-up)
-#> Chain 2: 0.039 seconds (Sampling)
-#> Chain 2: 0.069 seconds (Total)
+#> Chain 2: 0.04 seconds (Sampling)
+#> Chain 2: 0.07 seconds (Total)
 #> Chain 2:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3).
 #> Chain 3:
-#> Chain 3: Gradient evaluation took 1.2e-05 seconds
-#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
+#> Chain 3: Gradient evaluation took 1.1e-05 seconds
+#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
 #> Chain 3: Adjust your expectations accordingly!
 #> Chain 3:
 #> Chain 3:
@@ -425,9 +425,9 @@
 Examples
 #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 3:
-#> Chain 3: Elapsed Time: 0.029 seconds (Warm-up)
-#> Chain 3: 0.038 seconds (Sampling)
-#> Chain 3: 0.067 seconds (Total)
+#> Chain 3: Elapsed Time: 0.03 seconds (Warm-up)
+#> Chain 3: 0.039 seconds (Sampling)
+#> Chain 3: 0.069 seconds (Total)
 #> Chain 3:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4).
@@ -451,8 +451,8 @@
 Examples
 #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 4:
 #> Chain 4: Elapsed Time: 0.028 seconds (Warm-up)
-#> Chain 4: 0.039 seconds (Sampling)
-#> Chain 4: 0.067 seconds (Total)
+#> Chain 4: 0.04 seconds (Sampling)
+#> Chain 4: 0.068 seconds (Total)
 #> Chain 4:
 em_condition <- emmeans::emmeans(fit_model, ~condition, data = disgust)
diff --git a/reference/bayestestR-package.html b/reference/bayestestR-package.html
index 3d8534fdc..4238e03a8 100644
--- a/reference/bayestestR-package.html
+++ b/reference/bayestestR-package.html
@@ -89,7 +89,7 @@
diff --git a/reference/bci.html b/reference/bci.html
index 99c20f390..9a05579ac 100644
--- a/reference/bci.html
+++ b/reference/bci.html
@@ -69,7 +69,7 @@
diff --git a/reference/bic_to_bf.html b/reference/bic_to_bf.html
index b4c8c6100..2c88e5886 100644
--- a/reference/bic_to_bf.html
+++ b/reference/bic_to_bf.html
@@ -73,7 +73,7 @@
diff --git a/reference/check_prior-1.png b/reference/check_prior-1.png
index 724ca1b70073620c93acc00fd0f23ab54cb82043..3167d3b0e93b582f77d8e13714beaf142702a707 100644
GIT binary patch
literal 84190
(binary PNG image data omitted)
zMY*jw2}w!KT)%hkt27L1=H7W9H|neh=T9nCUeP9gl>d=-cKcU)fB*900uGXznmXZ~ z@%P(OvcC7zSDCJq;53pFoe{XWxbm7bJ<*;(zG2I z1Vo8*Q~g|tT@=2>KRS>-`Ou8X_-xeq6MKmCEpDrwe1PC4CZ9}(4XIRW|NGZsX4h;I z<(9sOt6-w0)l#HXRHNT##Lkb;u-c#Oa1C4UoUXApJjtJ2^vt6fH1=1YWWW2Hn>c61 zxzjuXj>780;SVVH_eae6-%V@s|15vg^q)DtGP8G=kdc)&tkc4Yl7NL?^YdDhuSvLU z9zk0F{XJIP=O}pT62o`I4I43~Y4HQbt!{#$k@YCfvvjdg-5?h&%c8P{K(ft?W+bB0 z8U-=Lk)kgFr$-V}r@g(c)rDvi*W8g*Q6++Ed;L!W0#3*rmtzaoa)hz!2|fDgqk8M(`QwEJX5{JiQN;RnV^bFqT`%33RYg}yWbGu&r5N_s3R`0nx< zZg04<>l>&9M(LzAw6}0Z1c-@*Z6jUafL=FI7Ey2P0Mn+POXC7;jvAnWPZBQV5d0uE zZ5L|28+Pfg>tRLZoGzBI@sLnZ#l1nYavf}8<70pIQC+X=Y;(vC@W$Eh*Av1|`Z!nd zt!XE<>~*AidLs9yYBNGA5dovN&ySjuTP8!zZ9-=vhL&g<*ZGJpZz~sq+`Ia!D-Bl# zJDNNZfL_S2Ft9vGoFZQF(qCe1wp%EPH<{hh=>K2`EiO9TJ7vEAO+ujcXgAAg_*}VaXJ@Da}-SML=~dwczDy zp3>tewNKKQ7;tA4plQiKA%ns~wXCOzjN@u91RJo zJ2EqbLBE|8)RM;7bs-XFx3-d2GZ=vpnAad>8w1?O=Q%8(Jx-5rfra!x!yIdv#kQ}g zAUA5ZA+D$|x(+OK)v}bzd-2xUC8ZxjJXmce)*=)*!lLomz;jWcEnC2J+e2PbAT_B; z>F4O_k{2uxt{B5_V8`V@ABgS>_Wx*X^}Ln>g@E=wp>jwP8ys_h>DI{kod6V`c7U<@DRsv z#5>Jg4owZwRxWE*Lnx)q1aBW;1 z4$RvVz1?cSo^4^9Sp%?o#0(e8z|^#LcYlI1V{K7mqZkMwkJ=%*Av{^&lkrCXefjt| z@@kA_fn74g+6UX++L<_Zj1SdYALL~xqHPQMa$olyU*D6$x>zyUZ52DQdJaEg?bNTUv{wpf74p~j*(-l#X zqoaLE4d}x08`Zu6*$8DV*`<0-fEsQeAL&x&T1^u7qUhy&AyW-|HuQEeo#XZ7eewy$ z?B(#}8q2ct1iJSe=SeFpMtb!ZpuvSmA%PAv$sUF{i!ak7L8t{XRm5DS1UpU`yTMXo zDp2%fG%eMS7CPY)*^c$(mwzZV`=|6`phq5l&)wgjMjCKE< ze?cEkgg8!^ShjrL@}b;oJbZWS0I0$oMf-E-N;VyF?s*%wsC^`;I{AHqHQ)bXmCEge zM4dJw7?Siu)S@5m_+^~jA0iB`AFIUO}UjfI;IehN2ra zY5BTNh^UxDsM^ZouUqvWKd2*rh5}X1K3{q*RR!D{vly&6i$o8=$4xzcxFvrxNMqdq zvMyBUm^BS!yG56_!eXu+F&D8esE#GPw*hba1pv4b6nIhO2TH}}x`QA7z_3PC`#v2B zbjGCKb}B2@K|GJrB7W3*$JM1NF5u6Mw|3sAIq_RpFoI6@c0NImBJlObk!nOI*^3q$ z9v7|;%f8cz3^hLr=c1;atn{KckgEu{X)0b#cxLxB!%4RVCo+x13gyLI)F5_)^~I>Toj? z*A@SKbkcx7+SXaV!U7~U$36QVmR2hXFLDr_;^LzDd@Xpex^`NH%N94RW!(!N zYZ8N~r4_D%rdHvK)iA}-y~p9@V#~$*BCfU@6g#A1{db5{BoQ0lJ{`~_R>ju@#UBPO z>Hfmecx~yL1&ar%F)O^s#u+v{{cXC|#|*T?XLHB-eb3t2D5Qu!_Gf_SUT1 zn>=i^OKTg=KyJ*iO-Biu-@snRF;(^tJ9_^@o~3lF7Xd8aj5}kjxLAJU9Y1Ei(Ln8t z5)}%d=Av$AqN1-ZRYB+f_s2xs&8$fOk-!8<{m4&LVZ&0vl~hK%1WH$LY_Ycw{`JXe zL0`9Bor$H)KF9#AT~pOf{;Qb@*{!pNZBhTfE#{GQmbH1WEsi^r0FKn8*33jRMtng^ zf88D;%i=obF^HM_v~@!Yj1)RS8YVD);?Hln2|@;c!-kZ8{0^)^VT1X7DToaN;qm$s zH)r$;X3S?2n3$yhcNgjU=$LDO6tK3KJlGaHxf=i+K&u6Y^l)aqq<()fu{jf$W+g4X zsJ*=pvsPQi9~5X&YlBVK5}O6;bX#FC$tb`HmX(+nmBI;~HDm1hY2tKq{UQSWclCYezW1)B zLZ8%NdfNg2UT>?s1=O3hn6x39S!$FS>ZURmUIyvZ5{rmr_}w4p)Rd`a=<2U)2W2K3 z@uJ@BV$X0X7RadDxurB}Zn^siAwAl3gjrGgJ1rOp;$QUww=YlH*+zGhQ9Qr+b6q(r zw(mq6$Osj&)=Hd4*S^6aeZ1bZ?HzNa_xHtRJ?I+2Eyg}zAuk)au~+{?$fumyuYh0)2;n4&N z1xcK&e+BPZ7T=^&X4g>-;q$2-%^_oV(`@{U39W=s^LAD|>fH>i5AhO7A!k)*tPJ=+ z)Mn5TTz-Y`R{_HPFxP{eFT)=6;VH_t0rSq(Fw=NMzuDGL|Ee;_*-g1 z#difF7Pdin5&5-Al+xE)UQy7am>ir|e%4g``>VmY-*MvAGn7Z;_#CegRiv)pi>G9d z{{AJEn&c>{>A73Yt`6>I1wENWOSKM|Q()2xr>!s`YWRd32sRDSBO#B!-4g=f_|8!t zJ~neHW~6caWX8JnhjOHQ3H4#!CcvqUwyLoJ<<0$L;}KXIS>|-D26wrQV#VBsI)SPP zj?TyeLG-FwS5ECT%retINMP$_R-0fTVoS{+9-_J%D@~38m>@Mr1q7Q1UaF=5sCu~h zZek*5hXfhw)ev0l@+D{oN@mC`_tbnO9p7k!Wz!n;`WdL zly2AOqmpOaTlg&`f#50BwszV>r?5~{8CUkeE@5UsHo^`QvnLzLfxVRj4s8VAlD2#( zXiNPUPQJ(8$4AQ&rgab>ZbL7856MP)LgUa2yl{Gg zp^xM9Vq%WMt(uKANUeB?agq}-s`=Ix{>l??rNzFpwB#J3wzpchWEGhtx> zN<_kRDvWT{^pz?7*1dwfiXC?V>^>;+-dh=@uZWCX-^~GJtaE%5D7}otzIMH>A0C&B z_RQc=j>iz}FXj%NVM%M%I(euYojoA64|7$cmswOn2df3zOf7MJnG=&maPammP$W;T zh3{rwfegXm%ZK&bieVsdy|O5Ej$8`HYGI?#dTv4d16YiazT{{3M?pnkYlVd8z^b;p zh+^SLRmPEKrKY1pXH)N1!R+fj>`4LQE%WjR7yqok2hPrj1Eo_Q-z1i8w-iZucNWgV z>N|URi_RD!)1Vb7!NFxcz%)m*M0|qugTU@v`fg*tcKG|gKMBrKlTbzd=P(frMg8wp z5&|vSIQr^Nd7Rq1_z=LnHc`pxzv5lA; 
zgoA9hrAUw@d1>ckPG-NH;-?jW!W;5jI+TkUU*U^(Z{2vID43+@qiWLK1S`*xi!aE4 ziTHmCLm6Z(|Lx#&squ{%4m9+Pz{HPq*TfP-@h z+5mJa^p+QB6~Z9FTuhu~*oQpQ1R+TsP z7#z&nJ=BSb{hO8Ft|-O(=Bamn8Mk@;KU@IHq9txZO_it3>3SS87BUFD;z<<43I9fE z_^2!>cMIrH9XZ*)hkmH)q?Cp7vN3BX(1VfO2z*-d8y_U;BUK9#q7ev}etyXk2!}73 zN_AXvFTqz=bw4c{P5;|TYSmxxS;u#`79r-!!{SW4SUfSiC7uQ zr@w>VtFr_b^)Dobp=OB3DIL=uL@>;gd!u=c+Dtau|4wSH2R$sN-PSq_RUP)^BUo0!|>N=H_+}^r9%QDO4<@%e+^0C}QvF(^A zo~oT*|NE35(Pfh{eIQkXGo*s>v7(r1xL$&Lu=NvECps)c^XmuN%m=NT{M4dNN{7*@AIr2)I@ zo7P@r(LpvdQ9?Y|;WFm;u19ZH>oQuZ8D8L_!a+a@25Soc+GsnkR@DAHuf3kn=L&L{ zXBOSmZae!5`FXjy@b%V4uCD7o%gWxWDk`m2Xf>ce37@Nlil-pCf7fc!3g6V!RAH;B z#mZAT2q(#%qE__A$-jT>FFpT_nYVd6yneRKw^R_MJ>M5FHH|n_S@U zvaiOD+QN&<-S^K{$vgldz}5M$J4Cp#9w+KNDf%vi)Wv9z8nDGqos7-eYWfQPqB5z3fUezDM`$(RJ&5e<5S7cikHhb=?0LJ0JzrJuS-&PUY2|# zC?o#*>w?SQ)-Og&Poc-6`4L(A>{l6;ZKNCE{yd&^6)tSRX;hl8lI7Z`4uU>2%E6o(N3DM!O-^Taf2 zKC{c)d2_iL4o`L++Q`4MF3hRuMsh*`WHE33@1qHJq@HbM%0y`;``2$(X_)t=k`S$D zoDILH`w(|-G>M6cZ|(@rPMSV8+RG#}y1U6LEfbV z*(Jy!HBEel0CSrW%lY}`r1)S3O`0Wg00qfVJ|ixf1N^U5WD{Co1wHbm2=NBXx-Y}V zUQy%bsQ!A7t8u7k1NgzZ!NYD(?Rf)y_IC;U1>dUriNoo86I5H9^VOq@ZnXya#<%1F zTlMXjs2WYkT!+REI+CFW*Zp9pn5(pR78Rzi~S9cKtA>>BUwgolOB`01P6*Y?`T za$R_HMY*dEQ){hd`OgF)JkLhmmQHsoO@~JoctM!%yv2W7$#L;<$=4!!*Gv}DP8=bO zgP}7-d~N75`pH`me9^V*;fsqEw@9b&SQ#U|s0l=x;tREL2ke&GEQhrpOMV-La8c&I zCptJ3+xJU`&0++J!AJrGXKRFm@Ws3RcdRHkV}7sy{YeodBgasGZ1it=GUji}!KWDz zhtqKqz)-}ZY^`b#+#V40k6<9#dDDDqs7u_+i@p7Aujk^3v320)rX3PmM1+eVuw%s& z3EBiwq3Ril6vH-z7cO&Ij7L+^2ADH*1)m1ez2P=n;$_gVqI-ALx{XZ>*SBI?EC&Ut+$rVd(aw4&2l%?~!-_>bYx2+p3O=4U zqoQ#A5ECCgEtIpc4Vajtcr3-T#rS>Y4Yic2YG6svV>K0#t7LpWGD=E{S44pvCdJQJ zJC~bzm@VbTRXlT>Wr%K7Z9Dc^-hwdhH2QuAgDnH=D2p5{RFPh}&8Tc%H-{nn^&T^+ zf+PwoKB9}@ilK|`u{F-N$P+iT?RTP1)Wmg*ewLWNmJ7TFTZTmdmx!CWe_Xk(#MM@w z`SO)%1f6oa0|O=W=gpZD<4E~!99++3SqPX$_O-x;aw|}w(i03}5WTE)u+*C#qejuM z2tzgUn0Wf9wLyq^AEXZ4!XB1ZV(a3Zx5VP0mkj4w^AcWu(8v}EGVgpU+5z;3%P}s_ zM)6u8dZEJis+y4Gz`-EU9pxgM$MUBMI0wp#%1#*D#jVDhJM7;c33k6A zX?bt1uy;ttH(j!~TB(oVQ2ODImZ>S}IGlU%W^>-!JWmOY@Jr zSeG*OlgbjZk0#HX9DDOrwF5`g{>;~C(FR0?{9{=8H|AiC?0DU*9g0LPIeBQJNf`}w zKnbHh93OEFMi`a%IK9pANyA492CPSC?FE+;rfd7tKP;ec@ZI!dLW^cU6pVCPl#KKt z66*Lu;S|Ip38Qm6)7kWiCY^tp%@RYt;zGUT@#yS3d=FUU7U$>w?>{7fQ;=mUd-_2Y zUq3Q5rZYNdVi_sFj+*PaGz$Uaw_A5GC!NvqlAl}VXrp*y4lch&z;^%puia}6jX(LT zKEE&wlvzJMtU2iZFo+Yh`8=RMOxCY4>XaTKV(l2MbC1>A;0BlOvaJTvowtD8wcE() zt>I|;i%1JT(0NiTD>Cr)RQ_gPvNm7rH{}(5Alh#p?1sYJIXHm11?I@MLQ3!$Zfu9M zYa0N*aj+auO;l)ib)+v_ILYmK&C;wL^WHVPVRjcACxhLgi&A*dIxCx<%x^KdW5rBR zx|{36i|$!JIRc@B^NwAPuT6nv>&QryU!`fnP2!1=iSIlAVgSr;wV zrq3{LvP?mx!!{H9;YyTW`SkljQ8CA}!6Ay#bZYe7mWFMoFPnalyMi}=o}N_*c)3z1 z4DW*@6T9iXSKb-%$BviTb1Xqswm-y_GLNUxAwW~nWcYh`R6jCWTWj zU7kv;2`GPsrS4OV7r8V%6=~4{mRbvjRjMGV7o~i!M#hT-1kyVzbhX%Y)ShETFpAw&CM=Y zpB;HCEHl*gf*k@x3=b}&ZAhJ;(g6=xMee`lEsiY*UE*l5Jwh8xg{#M~j*vZGN^w|g zh^;%22kY&2)OXI|*p=&ACkbk>V4U6fpCS*YWQ!Xnw z=oga(UFD@;-82VHv)fwvj)&SBs^RH}{&deaND{Ha_KNyd`{KNx;N)P~H;B!b zD2nHVTlH_2ZmXg1n)CKqwzJe?Il@Bw^Nx~4XRjvCT`%V@cCxHmq^ME)Jx{|htgoD< zOuf;{aeZxn^sgq{YjaRpljz$VLijka|G(|e81uQpTCZ|_L)5~r6V?ZJ%2ZetFNZLH zNf`9n^6m|{puWg7H6?LOq#SBk4&1M+s~(n@E0xS1^NzE*nftr!KiY0q`WCW_uiwjg z@@}TAJ@TFr`g8@B6{>)EZ=2b315y}ltOy%SKLN+g(W%kE(z2yqL?#~?|38AC90>8O zbirc@F@Cz$E#4hl??6IDf0g)6qpc$se(29S^OKIL<)Ea4pY*j)aK^8D?q$A-l@wXu z+(J@@g;ghkYVjRmz{^kUnON`W5S_L;4ZrI&K8h|TFkT9iQZoKp?RfD0HI5QWP2s4& zvRcx4bE4cEw~LRh6|ZDFqv~Go`0Ho))JIv)^Uf6mha&vJ-q~OLEq`Vun;%p)D%t(% za1yP){%*ol%AbPMndPzNbh>{T-#DsHhTHS-Y7-_Syxy zKQ*~NB^9TgodP$gRuoW|D-4J;!Dk{9|Qd8DFA zQOvcqqGKs=6q^0=POtHb*bpGX9(GT}BDB|7N-fw>8(4twc|_-Sr#HMMnryIT{K}*! 
z9uHNvMi&$*2~Ip*ElzUQz)S<`yv;{L^l#A=_IE084b_ zd0rLAfHQ7vHk^rd-MZArCqDxfFV!MiNR(UKZnK&KR{_lP#1qp z80;UUy=a$hy9_j-(?zA_K3W4O4$en(NoB+D`p+YG7f@>=2qA(Rntq#b9W2h;W?Xv1 z#%R@m`aUl8uvIR0l9%oC)3lKsMRe`B_{Tbfgcmy$pmeMWPMWvg14I}*9McDg>$nH4*AbzP3|+*x2bWzwu(W&rV@j|A2HM!>;aVY` zLM~Tpea2cuw_=T`vbxcqw<4O&Uj20C-*%FWGi1#X6A15M) zXN)u&4BPGYX?N5-guinGFfqOS2SK8)l_32HL^Q?c8pGkb6?aYiY&M)Hcum0u_ z$5uB5Sfu-sPj~kr>H=1(Od5xoAZb=6cu+*a%`Pro(@|eg%S(WhCRu~jKDVq|2{sxU zJpg;K<5XJ^pcY@H)&#TMU8K?wtJG!leEr+@_UZ_yKeBIV70=ZRII`B!#og&XsVoK7 zE2!V_eEK(gf^wcSVSYq2`nMz$Dj6J%Ru4pFeE!Rx&R#`91{@X*@6GIbXX*Y;#qnHE(et za6GK&FIL(=Gx<4HMz+H`>|YMRv3uh>PUXaIx!TNmdOD7z+Bfq)&VM_bQ~*6+T*r|; zx%|Ko@?{Q7Cf?_j6HTI;KtC=kUD8%PY#=j20wNas8a*8v&&=n zJ$2%7OMQJdKn}-y-;bsuY)`2_r^dB)(;YO!I1g1!IWeRwJX5(0^(R@d~qsbmB+hQcuo|SY#V3ne(x}kKZ{t0Pq2D8Ys=xroCl>TabM12ssza)@U zGGNex_c|{~|5R^={9Sx#^$|Rx_P_Hu7CouYP+=DKU-cu;N->fQzaVE-0Cjki_^WBa zl^<5{1Xpjp0c%$ZhA#-M!fF>reeOW#+Sdrkn53OpfSs@WdoDb8^`K_79~9<-wuS42 zjcdUqXXyeUxU;j8DL5GwXzQMR*MXH)RLDE(04WDe1s1Rw0@(Zr>E~+0kcuQY9kbc z0S%7k0f%IQvQl0VL>boncq{pC9Z3on)F~7AHbL#WW`H6>kx4RS(xL+niHm4-1aidI zkX%-@<%ixfB23fj#wxF0qXj@Cy?R%Ts5f8-1+z3D5W!zLSev~=gM%j{F6~bsmEM2d z6&*kZl(?e3@gIv;buf#wt~}nw-j|DVa*X^k#JwD#<*OWdNRy?1!oJn>Asw@%SO&W? zr>(JSb-$yc8h#ropnDb&TkMXrv%#U_j1whV`MCh0NgFC&5Na5M31$eN6M{mg_O{P7o>Ie1&!iP> zyjv};s6d90@;ZKf_}{#(QooWIPBbHo)bNV_)|7*?!+A`%Y#~?xl3W4LZ;mAbupeH` zHvhwY@_sNZBlM{J2x!joxjKx>50*i%4}&m-{m_1X4ENXGd+%@4D!f>InzF@dtoa$- z20!f_Tdb()fcvX9y+WH?BKd@wl8@&%C>*#b;ZLc?A}K{^TmT{RNY(on}c{I9~4A{k!z^ zULhIfd-Bn7G1)sjH~Q|l#UA5iwT*|F{;=%vr6U}hX%t5vPR0|^Ya%#E-+7B*!{zft zW1cOx(5KfxKg&4-ioX)xGS_4gnNNkX-cc7VSL$0mpMQaGC`J_)!DY{|M3Y&ZMxDgk z{}GZ}sE;^fnOIzm1O+K7L?vIobO9;fwv%AjYxS-Xair7x7w@}cyg2^De|w{+dlp}L zud=#$B7y(ASyrI+C!PvfIH4bGD@f+4uiWG~_v{3sOr(R$qYf_>RgHyd=R^%u#6U|T zzdu_(LaItR^0_N%)eIs@k~ddnw!R#eoK_+$E*3p(V_V-nhI}U9T%XlSSdl5b*QShn za@%mJphJYRQEmP?n`i8+6GHBjwc8MdkLCNBWvbOK8WlpNt;-?mz17Q$U#JML1AuKa z=by(V@0Slr3#^(95b?~(8A;%WVK9P?`a}up8HgwSO$LD6=;cM$T}Aw8kUl&E6DO|l zCs;mBQxtCck&T9jGoGiHYWztL51Z0jb=5|K2iB_8{NI!$;G7sI9$+pH4jQ#Y0TRDX zg>?QqNHl&xM4j?kZUj^JZea`iwYn+*y!7)ogl``RGk`bQUI-b;3UCNbtEFm7BJ)#RSIi}~K*q|n52&cE-R?z_ zdqTc=!TKy+0HOK2;IA7sP_(d6A1C)x(ew4V-J6^-5~t6gS2$+Jt!-|EuITXkzj*`l@j}o|;Avl;Jx*Gxl>lRz=r*rif30Pm* zkpOlc=2_B)cB|`*=#~F6++Eyz$Rhd8IF}HXg~nT{Bx>X2ER+xd9sxs50;^j*a)s!Y z*LBuuUMdzUpJ$eZJdY!nPO-x3IhyV~qBs8?pE6ZCBD0>GK*A(lZm3%Wv9-e zD}`^Dj77!8?s>`L+S+L+sl#bp2$1Pg+CRuJ2gdd6CltP8?~t&61S@H634a+q;;nmg z_Te&ZWXd@~_ZR*P+b^&HDtP0(OQ%u;pe(ilq*q%iKk(x|q;E~c?`*gIZ8WPqDf|FL z^*!Ajd!`;v0t7Zh$A0~b3M!W1^>c^;;Gt%20amn%oL4~*b6u!5(UduHc|{cJ z0`<=f6A|ryKee5G+D8DWomt--!2oo$!Ego^9}(UoEAuoi*Z6>!vW;aqNSjP$MCBHT zoh2!yubM#U*bB2PkpD;1Rfk2@ece$y1eHcwx}-Y5!KE4!pnbFQ3O}yxx1xK0DT0d!L4#>Or7J&ho#oxx1^T?`@19F&ONB*ApFV zlXf@aYHQ0Gtxx3NYJw-|W()(JO*7lI;JYw;9vn==*j9VgkOUo{mBU*rKg%ZnDgX7f^m0QBj?`~C^O$e6^Rw5`NU=>!hg`(wgP$E zKeaIxZE#E)c3}v|l_>t@Vd4EBnH~%OlFyGf$Nz$nQ%2`jPy`4m6|J6Qn>=ENaWO4x z-Vjf4XA>AK+PWVF2mOWmpi}H!SpWgZ%vwOP^7st*Q`nv_t$I|A6+(kkCmCxDfLVCD~h6{fuQqCM>>Ca_NCXA zEq#QYf$-n);**687&VCBIe8vn0$@Cy!T}hhg1YZD;`2C4ntG56j3)>Kjn+@^`uo7( z7L|Wi2FWKqx6uyaJEuZmNjQ0}9vS{#po>(3^B8KFY%9@3BVGS1q{2W2!~1c}|D`Uh zf(3CEk^9iNgZ*-VEtA;~5vr+R5=T zzAy^&2U*$=n#>^Zf6trRF_V*r*Sra*-p?M{1Q^;W?S#qR3QcDg}h|WeBW)e%7YnC zOHIrw#g!DwcD@{@Y|JGFVM%yJOZA*IBHD&qfhpFKWNocC^aE+=drjt~l4S7Eax6M@=tF>T$ra4y%GjrY3b=%{_H zqQjUY*#*`Jmbo^Rylk^Zq(A7RH4QVCR!);UCTSs`^7-;RYLW<*u%p=&?cM|~l8tnJ z?_?QGHAe%#NB9!lsN+rt{RP^!Bv+DPda82d9aj1E>6tuz1fux|gCyalUxSF@=G~tC zc2SWUAZ?_69qQD>yi}zN0O%Kj{^B432G%~Q1N5*y$%BW1lqQj(AypLPoeXLEaZAGz zfMI)ZPpHW2PJyH!Bp|zUVd8DHOUX^7j1&Y~lh>6mZET+Q`~($MV%Xj5f82QWbH>F? 
zOAEt%_GkoB`Do^}Q>HVifl8M?5~`5@w{8mO+YbPIKflN7ih}*>OVM5wa(7PiR@w5{ zmN0n9b~$aN|0pNfvBv*1hmy4_(FkrpIhZz&F@qN{{PN#Ox`#c)`nY})lm=7hR}#&# zG>a;{et1ycMErKMiNk-r{jvW%VqZ@Tcp>lm!i`z&*;BbIgL=>Dg_#b>f`Q(W5{*Oc z+rB@c_!u)Fg!Z+pF%tH=$0`=YDixZgHtuqRQh`O7G>BJY?wK*72}7M`Nss-3UXM86 z1kLK9*T13YclM5(b$3NuAz*pFKEFpvag88`9-jpp6|JtFsA6}G~yT`Qieep0{td3r>6*hMw#6N zT)4b!y4i9mG_zB;wPbOS^DmMu^@cD2wD*8vt>&B~nfvfXIuYF&RBHrv}z}Bca!Ft>BhEQaZ0}^eebPE^{ zLXo=4v3xBG<|#|}tiT>5ymy;@I-L{CQBWE1Fy6*HO(nBnnR44Mr_r~-KfexXgyF{- z3aAJoM1X_hdI8WA$ zRQ1zEhtWL&M0Wd_@T`uqmeu!Mmb(cBNd?r?hxb&nbw+b>E0)jra&Ci8ZwaRoT%=!$ zg7m}wKdWF)K;!{mqH=i0y3(3&VhVJ8;rL^4i=S6^w#z9-iD7P}>ML0=!|}JOfG`i) z1Q`GVtM`2JsKDtjRkF5CzthVXyVB|I*?dDze4Y#txHvsJUSA?DzmqxKi= zN|z%*jP&jB>(_a(I~oL_i51VA)vxmNdC0_GZv-$!Ym6Q1pSg5g2ZH^}a*k-PXoKJK)k2aR1wY zSU`}!Q4qc`^4^*^Gzj|k)iXZAd6lZxNI002g)pefG>*aRjzH|!g*}AXm4-4<<;R0p ze?@>IZeR3Tl`x?DvSz{m%10%;7fhv5tE#%=26=F<8wznp=zicv1D^Yh8^@yDT}t3P zBE}SX7knpY0)HYffZ)rMOe>^ni}fW(pQgOkqR$_v_P_pZJNDVp!drGZ4H+x{+-3N0 zI5;+(wC@r#Uz)or+zLjkasE!Hl7mkL{x?K+Iza^R1~)&tVypGcM4FwWwjYxM$M&f` zda^yE?Ip2>ep=4%>>nrT-ZEya#f#aGc2`?d$|8ap2OQvO%=Gs`q9^XbV5gPAtQm$o z@;^F^`YHHJyd7F*WWNRer&-~T z#WdK42LU)*UHk~4ZWB`jUg6FE4ABwdTU>`Uq$fpuE3bq+?lzQQI02&1e6tL3O|BYo zgvuE$?%mH_Kn|By=cw7SQnULq;P6F7p%LcpcpL|NDK>_=9HJ*Irq(o|K%f!u!yfn1 zft1026cG~^1_B?}2MI%cYX87N{`^9$3m27$ATi47QusrPM|W4@k>X~A%M%w?;`glP zP;ZQg>LvOEbhEOXMt}hXw=+FLF~_`_lR{){viTKy0U~<(td5#U)SI zmZypNy_X`?Vy4sc?FnMqBjsa}tZm^?*D^2B;@60UXxWL@TV{gLJOA5b)A`wZgQ_;g z!z*KybHI)*Bl+B~(-Sz`YnilBs6K@i$h3|B3#5IhY*^Vgwn5a>|8gZecTad3bwtkl zV65j^zR2p5EfLcKP|lqurjQX3qJ#03g7$E~fR=hp6?IRQc;IZ-E?yVjbh^X}BSe+A zh_h*zvPo?@HQ*tCyY!GK1;!#DM7%Zz=J4@8!=Q177>>`rr*x17 zLS@{Ac7E0}!9A*_yWeCUW5FienMo_8>P5q--S*qn+}JUueLN)99LDn$CbuHErrFh+ zD=MYnY zP<0DE!l$2Tt2gl#gM!=v*fS8_W6Em89x3<9>^#dKn^FBqoirHNqOqmA>wIXXz!efh z4t+gv`rQPu5(GHe6CiShTS(D-a_|Vh+?%pvepeRdIp(1Q#L@fz3PU($G-5@!$pqMC z#B8pfk&<b#}+y-qKngso&M3E0GI4TH|;B+Y{%1Z%!Lu zJx$@SqtOV?aRkOrd>?!XB*M&=xrdpJ#azJI?f*XLu22 zl$3l}zdIl5v=}`Dyx*j#&2$ls64E3F$MhcXhvK8GK~C z8u|S^*s#7cfXi!tzrf*F4r%n%TROUn17_}81d-fR=;ra3U1!r$vGEu&HQwW?mJjCC zOpqq?H^+fk>D*pH7<+FO2peqfm&I!%21)At^?o1+p90(|h{(@_9PRshcIem`Ew;fD zGjZwgLqqo~;5|pv0o!iIuFDuH*S}gtQ)%O!F`0fBL45qM!w1QGf~1^$n;N5b0n;K+ zeD9YjW|HMs(g6NlrWk)Pe?f|Di28Lk;rsRLqRQ~ghe8j;BWs3(a!XL|4u$ns4#I4l z(h6f@ji%;BpDnd^nCS49(t0wQlbv6bUEJAHj+t*p5F9+5@V|#|@C-d+(EhDS)Wu`O z;z==_Mg9x+oiUo}DVJ3y76UOT;J|9*g|l|)))2afcy6RQA9EZNWt|BW*EfA>!LFa^ zHjIGWWj@oR+IiCnz!9wemp7P_^meQ%m2YtT=;a0}%2LZ=Cy`cu!SinvetmR|9lEWA z%C!0iA|ThE_;K0%%l}ww2hoiCX=(YyjgC_W9%tl3L1skMQzt5%vpcf{_SQ{}@uS0} zv=Kt$j%t^Qfl=xGnC}6m;QgOr#v(1wPARvna%n={QYo_|Gzmm-0r55_b@q_G**Vg3 z(2Q3Zd?6_GwwC?ot;Jg?ea?2JXgVJ5d#)zxP+h{@0$JH}w$ii0yjQ>U>a|V^vq&~H zPWdtHyL_S*^UOmDN?x-uzWe9@<>OznkB^sdkpP+t#(+TGcvUe)ek$&5hun^$J?MJe zGvtM3I*f-KcGuN0T4lE#tJf$^=1O*+<&7_I5K%98out4#;AEHSqp~p6kIybV7x^yX z&ONE$p98h`B0TuL^}J(hHfva0&K&V5 zVq6+10XTKfe5qwwkf8yexg0pi*~9(N-;gq*!L?}+mnq@>e%$4k<4+OI>Ut#A>xSsP zCe!3Uz|nGz%q{7w5XE)RKgTi_qzJps@u8**HxMy;da!iMJC&>dBn3kq78Ju|br4IlyF`?ZTOADu?$Igo^TY}>HA57V&vlKvw_}91U+oM9e~O^_t&X(90nsN z2+;0bg!wAu+}np}@7Ytz%1A~`|7q0-)d!dJ@~J$&xWCaFYos!AD;^NF;-N>xe7utY znh9E3_GS1yAOp&&6g0HOfN$ro7mCwDv-$Z<|Dcy=+*FmK0XaqB5|;zuZ(6$VnNH&Q zg_)px1=&@6aDZ)*+dM`B(I&nxFp2zT1!8v|f>gKluHv^&ceUi_j6+O|VLxHAY=t)W_VKV8{46LPd!|;}V+K0F|B+e<9*P%of7!CA<9VuTYE8xfJcPS#u?YzmH z5$I@1NLm%x2{#6}5qZuEDcOS=QUx}hJVS2^CF$rNqeCEMy%Iv1?9O}%@O__)gH$OQ z(`Mff{sr{mrCXTwcG_E;A6!R_0ufdpD>}X}lD{lFWqV;)f-09BJM50J@xH6T4^0`SLy;l3OeNkiw2H@>yV-k8s{Cp#IAt$GXKMfng z#USN06 z*8Zn=Y($j+e`+4LO>KRB?h7+19ONe@WY<;i1nWuO)k(790G|m64^8zEOHXlxe7=2cND+ETGtosyiG#ola0(3P%bXg*Fh)!L@@A 
z!a|NzbBWGcFTCFm|9L@0N~f1h-RgeSwa!k;CKpdNcnF}qUh(}HZzb?irG+vg_+j+0 z|KXYOekInfG;w`Z4>O^;Bl-Kx1jE^=p^mTZn(ds z$4L(U+uF271tFES?xwfs!sgF-2yO+Tn98{4qCa=anORuO5PZu{3zxq(07D|7r3HBj z#-Q`zNNP#9XxEMM2!JWSB&Y}KJh-o77 z2$Ha1_YAYUb2%S-_zPReW{!S8Xf7E$A%$+gJrOxb zWb*(Y0v=^jVCaN2zXs{XXaB<-eB118qXovfcpS%f1CyY3^uo_K^#PBGC~Z66V~1se zaPaWni#~)Q%}p$X2n)^o)1a9+UAbQr*5KdYO!cXmIUcO4? zzju`6RzScrTA+R9-|pwpPnY8iQvJZ?3ZONw!MX|Ce=uePAN<*^5GQKR`$aLDX$yQ? zLNaYfMl&R}?s-6Q>FzYk9`E4$w~A&L#7_f4u41M+fH7l5Eu#n zse0_vYET#a8ZcD4@hafH-?)O)ZOv6>Hn=^QX$OOEQyIQE_!3r(&*5XFYFTv%KtF9O zQ-B<=!5a~6dK;}Y{ml#s^#ToWr(S&(*&lZP5yecqnAq4zE{@z{<>wA{{dDesdhmBk z{;Hu0$E>?`)sZp1n=UNvAR$W(Za@hvUhw{tA9w{vItU2w{@y$vvhF9Pcq{XYr|P~& zNLaX*>hj!QqIXSC-sXk((3~sgVg6g-FC`O1zsu7eX2VVyTBerZ0a!SHmD%kW0eV@U*ZqJ{`R_`^3miO3j zus0AZdwCfH?Fc}}^BFmm#gS4ctvX0@P(F-}1y_sDW32U2<0jy(Qp=s&^F#(Xlu0SN zWm7%|IQr!5dljC#9IN0L;4(f&A{hsecIQpl>7CoV2xpPI@7Ej94W9c;u>nlf%gBI& zH27xK1=W;4RN7BA>GuQ)g*R;dc~_`kCIl2w(&pJDc2b<=9e)bJkT) zLtizaN4G1uT^VhUXLz?4t<&n|0HR72c$>wCQtxrPI1_w1yc`y~Je{bxeA-0X(pEyV zUswoe0h+!0#jx^Wj>5tvBd=-@HX1+Jw$V5=m~$g5x-<^G9ciO%Y}h`cb{R(DAYy&v zl+vRZPYWhu$t6?&_oJgPg_ zS3~Gi-Kw)XSTw903U&VeucsgC@=4Eg<)sdcEMkCJ1kFc6WCuX`DXrsDrf3+Tn zp7G?-;A|#9TH2}~wq=T;C`c2xqe5c3@pPMS(Qk4cAbcu0kDoN&gXi=iVuPLHWYF%D zU8BBe4y_9*E$XYH931NIR+%|#CK=r42Zh2pWTGO=Zht<6^ksQGx!eVr?dXQ=(=t7m zSg%WAvyf8h7R~u)wD-*cpjV(srvH=DB+pfC6$M}nRb>Rsxa<*k$~|_`xnNP7vqpfY zCy?B^Po5=F;f!{{%b3L%f6d@_zOjx9P;7Y5SJk5lLJ9f8C7%n=ni%$4yt zJs*n5e3n2F!emgCSUWp0m&Ud(KK+a(y|?t@QSZ|2Aku;8U-Y~@pTQDI4O$F9Z!GbT zlXB?5Wm-Gx~e+YupK>%4p9_Esda`h~Fs9X7USDcjHIp6oK;0DEeokLl_=*v}dvR zIn~2}EAk0Z;rBFzO6wP{S8w%^#3+kH`6SI|oymafrnK4>kGu?BHie_od^Nv6gBUh; zTp?IEWF0Zhm(nf&OrEiJITY}jMmww;J!epmr(FzKesWG3;Q7O-S(TPpA?3Nxa0r9% zIf#!mq-@OvsHq-?n0CW0Pbw3tQSVM{zzF3R*IKVcBX%Gh`2_qPd;*nKOu}Koz3WF| z`s~5?t|8A<=(CR>W7{@4feTrA3BJFzf&u{5zdv>f@W^#0joSJamAnbAaL6@9sG7_? z^4q4DorxJ{?E2SnlTv7);Ucu^ z-1W19PP?Fl!PR!n zS$WqN#G?dN zuks3Uf2WHgt?S?yP9i^vOS!k`_+v?~5C@otnGdsHr`-#jq?K7+UglnXQ{YU)aDkZo z`Ksqta0+4+J(8f*Os5LYPM|^)_v1a%WnACvK}aV$oQF^tsUhwiV}x&GCxrI0z8(&Hq08ukFj$r;TI4SuV#V2%;Lq}Iwi_EK92Nko%@aU zdnM*&c|~aB)lc&fXtBHU4m_gE$fQo_x{VFs)%`A^)(U+ z^V9!YTID6%Y?fq1{mx%Ih(|tg|J<_IN`!o1X zRw?fB31FydHs4B1W9p#R(9k9q;nk%$6=o(sKh>;rtlAQe>cL@w(|lNOd69N9%7Mvv z;o?n>fd8m)-~wLlwl{NDyT)H+c2?Zw*)()*Cz%&dl&J2~q0t7ydi$%gdP?!lU8%Wh z70+u!qh7cjjqX!o#2$8jJ;-Z_V#xJGFti1*UAj}hJ+I=tpYyDVACFw>L|S&iqzbf_ zr-$rR2_%1G`jD;<#E%3r%3rkWw&4G{dhN(FlpFj2bPecPq5x&~%J6&YE0_bM7|$Nr z^NqlJhEO~KVck$W^GRJDis1SodVrF}+>r>RDe*9H3QkvkdtZ(^M7KOgf&k6h$m`O- z9RrUOIgf)iyqj{MK2HWlyBOSBSI#Zo`Y3py0;v92f<&^uuQiaV>;>kYwAofXK7V}Sf3!gyYiwLJQ= z_x;$Y?|>}X|7!topuJNG@46R@xPJbNFdY(q6g?250epIzE}2nSqczh{!J{v`J}3k4 zTVsF5sc6WhV%L(tkU-Qu`V*slR3!8&`0J zejc&OiSW&)&=;>d&t)5)T3(H1eU!(}mIeOQaD$WWZe!cFw#AFJu_2Cc^&lqU+U6B1 z;&rc~&*aC=tcB|(HiiBk@IV4tJJL9@dTbPy5~KV)LH8xFi$6zQx=m!D=UCOm4`rak zFE$d-7sEjqY>FyV<8sE z&$3bH!*aW2QT5mlB%}2D*!Q=rc-)j#54Esm8DAT?4A90JeXE-T|J?R?>qaMsVj7Jp zU34k?ZkU9~Eq2DxGt;3gEYNhP8~2%^+1u*E?|k7{+LsmoN5Cm)UrLxBML6i6t9iV* zsVgMfFF{`d6(+F699|ObP7wZ6;3=? 
zqSy($IAkxX+zYWFX1HE2V~XNXF~Afiqc}+e7Wi+>f5Y;)j^8W{xlF%~l$V!mNAC6F zQI}y0{!~lI`R&fUSE8X?=It%Kwf?kZp}EF^)(nI5CDmiHS3PE*e(Gq`nUKdxyo^Q) z$|W&t=it5&C02bx?nZ{Mhl>1)g9g<TU%`gH;-n`2)j4|7)B@ATt%Yr!K{Mr(4w`G3 zGe`MhJX`4o^_z1gn0`40@|5{HF9OYi!Mk$gMdqb6#W&SADk%sFC7#INz7%4A{Hwc8 z>q65qMDiNjeK+|}CkOU^8w*R)R`2`dxlbC2!iUzWTU-8_co2x`4<|A&4IB;kL;M*f z1cB$RkzhJs;wfFQ)oR(T>hJ}wrKE(HMgfz@v}@xEz3lL>lSFbnY-Wo6iDIE_X(u{-ga1QyIP_=3y;k7cO^7h%u>ao-@oRL4PE%oxlKBX3& zW{nAj%mLFpUMuu{*lN=Tn*vRFF#YV(;xS6$rejqwg!_e=p_$wTNBX!&J?~~SB8`Wi zaPn$P-mj!uvJQI*@&y~+^8qBNfNp>5Z?v-L#8`n&{PhN}fvYdi*EaANlkPXw3P0W0 z;7K)0U6KxQQQNd00UB0213y*BDbXD*8EUo35mX)10FH1fN;@~=DIHCF6IZ4>C$=nr4i zU8ixBryH2OI6KH)jm~DAac_%3gFxE-1)}FZC9ElD7;a2nE|H>4mB&qLUC=gb^A)zI zO=6bmu?fQFs#YSP6!ewV&Z>iTwB*mCHm0p#A&x32pmuQcUGmwi-?s;k;uHC$RXp4` zH;36#*c2#ZdXMXqF#M{ya?~Cy6G5cHJcuDL+n|H%C%w_^*8#ia-W4n}l}>KWQ5dMM z#+D(&l_jc57_HCB43&p_+?>S>pIdba>CqXB!P7BNSCxbm|Jlg&nfn?+Ab~+WV3`9} ziPx-#<6EJn54Xh=Tb(k?3~iQ*Y5PxB?dPbhR!}$BZj*Gh=o~sN))*8M`Q1PKqzWy1 z^gza>q;%A>Zqi#Yb!>lD=vZB+WYxG;(EM1_DkW=cXAI8IZ>!8&Qp568sFbGSIgJ^; zH4;bBoyA;Q(v}ZcoD`noT<7E}R5EDL-Pkm<$Rvvq#%x|~VP@h-?RfYamDWfNhq_^w znXp(L_4wnq{TeE*)bKy#@k+o+z&JFagg{=-=q9&}nkSKqYESLpSvo8*TFs|*KE}D; z;W4eR>wo8?@t(ke(EBW!^tr>_tF_*!6gY>s>VR8W=kH3&h~9}rj}x%#S=Rd#Vb={G ze#&~YrH+Z_80s^!H?bp%A7hF5mhg z+T7HtscL{h>aXA0+C^rRcq+OTEX_`(8A{JADot1l^Zx~o*npE3rljS6sF}*_z5NgZ z!RDub94l-SJ}0MPVaZipZ=YD>pP2!@#`RhAc81v}*I3qk7^KC+9#H(u7?sZ0Tw!c% z6vWfc>9ma$A~*A0)u7yCoJ4eVG)ElfQ9QJ5!Vnxald)&>Zc=op98*8}&kq_QwkPo3 z70#h(Y5a{|+AavyotcMM_iWTz$d7o?pGfMcJLHWyUkYZubI^j7*Dx<%AO*@!8 zMaGTK8g4XJIR)*Mr&~OtlI43~u+Shkppr^cY4$Vl_T!zYcbI{1(kST92Uz3|!TkW{)q_&J(pDcxp~tS*Qx z7xI3Y$@ya!8a{Z#fL>N)1q^M}*8 z4~-$jIqF%X+x>~g%Je7lT>0=Ob$MK>UrBkv(9K#;Lo021p2ErnwCH!p)<*Ll7ICxqpADa|J0K9qq6be;qgo!uWZEwKP{Z)vGaPm{tN9B*wT1y;xNP$LwnTR3MAC($2 zj8zIw^ubqX*tDj~v+%dS)QM+?8WjFm5o)C=Xsp^`7sY1ff8le zh>PrvE}CiZ^F@?EDOHz+NfULOYr)LXYz-SWmHb*uh!o|{J9~~V8Y^m<$sf0;WvdD+ z*8AT<Mt$V@c+Z9PS0kAn0h1gp}e}O33aMvyWiG6aelgio7HJYvd;icRS#1( zhPJk(38h-QW@6*H;LowUP;UKLYNsD_{-Yq`Kl1U< z6?&z+F05G#+T?%mOjZ$C+r{x!B#hS@xsUVB3@IUzir2q*rq2? 
z^-a*qqF$;!%ZC11TFGA4({qIaEV^-;^M-Z{{d#q)cY&2g@*Mv*?p zi+FS)2tLday_z-kmE^l+$HsA-lt{HXmo?}Dt;Zm}9-_XV!W8$nk#dUi;7p6$Y>Q2b zX?JiJ*Ca2O)}%8_sbxv2C3ESTmiCF^d+j9wa3U(kS(drtf#cOrQ{1YnNL-N3pY(>9 zpY5@4le8M`K*kK}RyO(^Wo4v0fhCSD<-`_*Ns&@whvT#b1^_46&JlkYIrX+iKVfJy zNKv|!Td#ch&#^y!DIc7C%q6CdZHP~r!LrHKdC)y-W|X$M?zJ{Mzr{LFPGHN$=%I8} zGQVwPOUag~w+MvzGYrPZW&l(4QYk4SFCW#2X7!K*&Rujcp1CydRY)}+1Ce%qe04gE zs6qq*o@E)v>ftENrkfgB)>E#&qP;8jIs8-anwF+~lGY(J^P0A%JV8>Ls#@{iB!Z?d zCF;^+P4>zTdFmG~T)AtTiLiCFeV3P|+gp@HG<_Me`?uN#=}y}rq&KYMmPCKf@0a!fmKf9 zNRS*pSs}=#B$VS+(^H^vDf;eZgmSo2#pQJk6&qE*&sA z;NNF!#l0W>&Jph`4RzcNO3VI`JL{)e*75Yev!nd=1A6(!&8>fPD<1YP?m5$RGhc7A zoMxY<=2oAlqzvuTwCtkTPu+EOCF%iAAH2EKC)10|7DmX3{?!A1);;uvl)Xn z8tjEijV*j@hqJLmHs)f5JCCuU-^8^VGsMPbO3V@ZZuUMAZ{1)eZ;HAs5b*|jbd$V{ zCU7bZlNWd@zodwOy>DBohjj52-6e?3`-rN%64i^ADa_ z)YZvBp+5HM>Mh`Ir$La?{=oCDFOhbjXRhNSp2X}O719eL`?r?}zQ_u*qgv>U><^8- z9FunSaeTtbxxh<-#4v?11z>S zHlv_b$?Z?7)mH7ea>5F>edtV%{j)&v)hM-^?GmPaTLUqZmPXs31p7x`(EhbLx@?US z?4{kk5@)@k9s6;)a$#3RpV9&bVXcHg+rFFS1WayH9Zh`t6vkn8ds|zJ45{Td;9Mb* zk9#m=-5xn_lX^>+Ab%?bsqWg zrD-2aG;o_-=H!9QqxXTfWa%8l8w|e#4vp(6bR|%jV_;kgKoHsKcz#EVh$TpeL`I2du`K_n=8dX=NKe1GHxYyL^6~uZnEU4 z3+_48ueFZTt~4%3ylYkhKj`{q`>g%FQCxUh)lIldILFN&VLLO$v`@%C?+ICat(hQ7ijL(GlT*Xz>iiBAEeIe7sK4)kFpq8QWGa< z=$msDw5`(1qH*+$GOH6Z(b%r)qDH4(rOn4lGRMYhYu+ggOv7d$O7Nmd*<#DoF_BKr zS{p_!eFlG3hxWntQRoB(HnT@;m+A9j!Tl1!eOov2Xe`}ikPRh8moxXO<(rp|xog@O zl(33bd?TkLGa!JK2SZtw@b`U0amIa!#a6ci&#HPU61JO!z@1}m{d6hEWt7}+BkwK- z^1C_8hJ)*h%b*CD%|ww}R{?ZU71UX&%WP|21H`$taJv~|IwO_h<)9PRQXlX_^V{WS9y90s}@6GWnbFt4d~-y%32qlb04oCa#WmiGnkU!XB zp-w(QFPUdTZbx*QL8E`vijrx&Q02i)RWsiQ#w))hL1^DlVdIV(oc;11s$gOAXy2Vq z#;*tKCgaqr{e!sMz8OBG{u4$}$NCzopoY+%s3u@ymtie+WD&zT6JfW`(P-e8 zCRt&!%Zpqmpd@;yXP)>MbI>b0q!Q`L(P7E(AIDqpL zx!?Fjys^?%WsODXyICxjoxG}E{ly)|WFky46LEq-e5uprn9@gzy4@0GY4Nd9292Hu z-c&L7fkD~ht zW+WjNO!fqn2Fa4lil#)h`QPkOpiHy|5(xFa_=m7uRj1p)T}BH7Oo8Z!@gI`&zgV)E z5+wUf&4jE4S`xf#P0sXV6;93%)N1gc6|7#>4e{GqcK^C{cnEDr%+8i5799NHGV2T$ zJ4+Szs7Pkj`>JE&S7$=KqY4vMZ=Fb*P%vU5kt+}oC{kOxrLDAz(wZ}#F7zQDU5IHl z9;`0wG}jRL?lUt!+^(5CS!3An+e>VOm}D^C3KQz?fM`_7@vEGEJJRaC4=4Xg>Xkf+ zH|EV;k)UO$?UcRTs!fE1coQq^Ts>;mo2+IjR#;-FJlR-g>nsR-N=d(^R{BtpP>O~T zdv24xwA)PQ#TOEaj&j;m)tB^7lRow2=^7f-GUG+`|5)Z4^`vfJ>jLF`Sa3mFTn*xvtsu zW5)T+y7=x^Ea6LTs&AU;-dOJ|Er_H{d#f$dms_>a!=NNH$y&NFcM7=6HUSwW0nR4c zipM@?%=|w-aNU4;7;W+Q@5aN%-q}&)!rJk*Z->&%#!qDd@RZ`;(Aqd>x%imi#87M1 zMSk@NxLdkIL9NxJlD>7V)R9yD;bN{74XwMC(@yHX-dQ@OEHVcL@?qx_CbGD%YN?K| z=%1?P6)1;y=gDdsWyN7!oI(N50BL{!4Vr3Q_NzbF+QCnWjsHbHcQqFiQ$idU$xLc( znWD5d-*d~FkXV5f7Ua&EvVl<~rX4TRbU7P)daXlF;1+zGS{Cozgt996`@q?t5VuU_ zm%wKTJq*ctE|l>l{{2g**4#Y2SpL3i`X+9&Z`>04xRAA;O`pbp(2Ha?%7k&gDz`_mdr_)%Tt{qW|;M z$h~np!K6E^c=K-aLAjOQ0Fgrlw7-V64h<=uZbrH~Qtg)~ruW-}o*$1wEJC`}1!z&E z)M@p7$>Ip2W#hi#N6RmL!~Y~3_Zc61&uzy5vCt@{40kwAu#`)-WQr?d^k5SBIJW6J z;!O2em6dScrLBN%yY6pm{lz54z`*&@_JUDGa(TMKEWfc)X;;j(bw9My z@S?W5cuWjCI!naJypT+9?E8$OI*W-{!KzRJxl25=Evfu$y%$2dJYf9=UA^DW;v)-0 z{`JiFzt{DXwO&SYmp#lSu0Q@#vNT3(4Q6RH)zswqJ)>71Jul|=&?CZ-xtQb69@*R_ z%4ThhX5x;~al2l)Ro*X$%H&RczxM8j2CP;Nt5OlO+x%9tTNpikl<#(mF0+#6G-i@w z*y}TBqF&u<8FTXQYIrc${O|NfZ*g4YhtIcuM|7C!#X}J-?j;vhm7MBgI_Un~o1K1q zY?jsP^v@HN6lO9KhrpDGYG=VzL+t@e%j%iVo1iPf?z+4dOBO4HGn+z1-4Wgwy1T^4 zPwaW~xgh(77iRK-*KnVD4{UM`d4d3)aT8zicV}yEBA)7)dm=V+Vl{bw#`2JRBGW>l zkEk*N#s$kw)MywXU*!Z%Kn90ilY3%uOB_}N;v?{h39?$33%7ANM#-y zTVBTf5B5!AnTqsvU@VSUy;l~4Pu|a0BkfB!7PUuGe*3o)9=~P$cK9LG2< zuiDjG1ls*N-FXDFN0Vi153=hyF-di(M;KR|Fq^Z!_GnGG5e%*q2*obpvnpVs(J^Q? 
z;7aGIetY~YR5HYrk1UyhITW@eOI;fkY#D<2zMS9_xDA6x-M{w%wt|BMke8r1I}60``2#UlhJ?$wgOBUa#ql_-Cjc zu-AJu`WVa3@DtQQu72a%zg9GdKF5((f9(72b{-oYjJ)ddOt^-x?QFg2GD(%>Q%S{} z;4p1SD5ZIx@At!3hUB_?>C@w!1({Md>%zx#urp?6puK|EAUk%KHBqP*x=%%CR$-bi3VAVgX&vt-$x4)JMz zY;P6uzyk(%9)=~>kgc&Y*$La?gdD$XIli^??dq$andvA))IV^bQe|xQDVzJ=(2DwF zZzMtJGswuwRT(GNVXBO@+>-PKC2yZH1XSmJgRbD$Ahf4p1e;lpYR?7q4T-9- zw!cQtbDLY&Hi-Ta2$_+NKrBb+dRCsyQ@=A6H$nIO6gwy0#)^Ve{qR9Q2GNHpQ&mi4 z3zQ&rDqYW(OJ|N+?$3{Qo61x%h_$453`t9O6E>?A2o z{|!QWTbpfP2EpAs$xO@&6**CIkg3i20b>bz4-B9ZnLJZPrHb z5oaS`POf~JaJ=oP-L2t^8N1Ua^xezHy<3Q1;}%cd+ScZRf&wJ5DV6VSeqrxy z$Zl@6n*|;>K;@D*d1cSvZD$(C>@Gu=LL@qx;U^S^zUyTP^GKmGk~y92=RMoJNR~O7 zS*gry3NTL0S$8MOlQLXFt8du&XmtBoB;#sMp+EQZxOR;Plgmw=+EktRhE%J{J6pYz zr9{_F>3=R2R8%M@TlOD`@bYt4@Ln)b#ITUz9L-9!b+7!*S}F#?EotcOnG5o_gcX63 z;$k&WkB=ldzhr7M5mwJRQj(F`Y^{1AS5i_EzwXuI{>uRja2L(Ib>Ep*aj_iue=R^2 zchQlEUKT@RMdbP^tAQ~!WN+pVq1gVQO9WzQcsEN&1oln2+bx;1nXQ?tb;>78ZOU_i zgvm<8GAfZ(gae^dqkNq99SDAp)FYlfG!_8eOqBaXGg zs*F6?QX!SwETuPU3q-tpuj(u8H6#!kGj2UO& zaJ4;g^6A3^5uXYQlMi(DCNk}RE{62qBftp7mSE|tRnd3)pD;hO4e1*sCEd%|h%;^y&1*c~nR5GkI8*vS6*v7*5$8;OS^$1MgL`s9wKbp)H1)Fs&MgETj5<`kFCaHJ z_wb(k@vRP}ffNVr`SFgr49!H1SRJMH9|+8nmf!z%`nJ2xtzWl7)xP~fy7tIudNi<2 zZc06tH*{?jU6!RUh-R*h$EL?%^QhnNJRrLLeKCql7#?tXW|Klnv^NWa?!+r2_c&Ig zSixPT-`~qAgeSSSy^=dB>CnR4pv0hd5=ZXtrINRpNjsRpxJ!f z5mC!*R=yBDsI4<1+ApAs%LD9FK(a%VfQ~jzQU1%v*aJ>G_vD!o>5CP=WS-6CW|v#H zdJvhX$INkt+Nn$SzFRW-^ydMu0jBa$$z zMZ%i6shsli@_~t3$FZgNA4kOtd8ihOye3jdpYfLx2sAdU9q}JMk9T)ms=LV zm1X)+hao0Z(hg^P%H>rdk}WkUk+k3 zj<`V8<^+*Aeyf1d`(XVpFV6iYen3iEdIlG~_>`Z=^V9G0v;psCi~YBMABSIO^(Kf| z@MC%{y&r!Z^?++vMf4V7XflF-JXMR=I%8yF0+Q@Vidl6?c_=hN#l$K>A)f<3f0I26 z&EJEDf|--<%<$WswU;Q+Jly5;qrX=^%aUq@?&7S?F)9+szU?a;Gy75UA*CDpYec_19>C{osjvkM%P_!)+ zwu=(<{~dL^gWHY@6_!+KP$1HpW>y*S?ze5#bu8jOQ4$;VGS#Vv&l#)79*FphH(=z> zd6OL@dL2VU1(9Oa0eJc=pAHNW3P$d-MzMW^rr17=mELAbCAv%-uVf`N98wGxz%9hG z8l$^~8j<8s-h3On-fSbZf7f)b4dacD;?h->lfO@ZT02x}1|o`}e*{VuDTR0m>OX#k zhBk#zCa+{*0wufMEFr@{M@|L5_DMj>txpP!sJYC|zSyjaxiLpv>9HtI`OMofK9eVOJkteiC)+nxkWuX!(x=j1`-$QqlJ#zgb~dy5V|{ugDAx zZ}bpum?az$v(^%`MzPK#_J4%~oo=9uHiEfb5bcj`Ojp`kCpG(8F~&G+Y-JVMG_{I$ zqLh|@)^TXYidO!-{1Ix!QJ_tw-UnMf6YIbX;btLZ;bo6UE9mH7|MEOKIvRovwS`d< zXp>VQfriB_yff8u#AD0SKOWYq^|%TV4yEf+?eNUg7~+ zN5ok`ytQ^0!KGUWbN}L*qmL8HkD@|L?HbrI`dBQ(`aux4f*;(tHA$K^Uo=G-5+P~@ z%q=WVA7=gxn9a$e#^E>99WCUOdpdQM^Rw`w=3o%1qE*z6^G3Tz^!%Vf^SNXl*AI7h zOo;BN-Idtt;5(O3(1@*GepVQdJgR4CNYvT!VSnPH*i(1~%U=5ycQyrBqaz0g6Zc8W z$jk{SuSc{gZ-rR-w5w=9;>V5iL5t;6RQ%RD-W<#ydB0X1x}xPUi)=h(vT5ilGfE1o zH(X$!c!mt)^Qk~Ye}cFiOrZknx0q{ixr8oD?noY?~$bO{f?V* zUK}VVLM=GMYuEm0E;+-1f@?#O`yE4wVWs1nYEiWW#-*KsK4?bv%{z6kwHV`iy=FOeTRZ^w5|EuuEmQ+qFJ?Td3(a^}^Oshl<; z*J?H=#DeSZHcfa1K~T)6fyZAxRKN`4-NuJd+gzdH|pf6@rkgF{sImLRASjOa} z1LAQUlYJ9(lo>t+Cu-lSa#;W4pMMJ0!?gW<;7|Vh6d8$skHf$1;s2o#?1`^H#y>?Y zzpy&}r}vh@;`xow`Ft6l>}^E-xnn2 zSbOM|WbU-ws1LB7_n>GYHK%fy^7Bv5^ftpMXQNwTk5HPZ`%J(LrO`tNeplfTLgsq4S4l=?~BVVr|Zw*B= z!nT6J+iFK15$SV$A{)4TgWK*#RJ%mOaqs=-j>{_dT0gqN7=Ca^0U*qwJ@>o5eKYNm z2M3#_cQzd$bbKj0n`meO3KbuXyk7kLo7{rg-3%l@UKipiK=gwqo|*Uz`t~+@zQQp$ zL0^C;((UuOYV{hYD0P4C;s`$=C$;3g16&ot#8Pfl)mdOb2mm+@FO&p9!D>n}IUCRb zo-ywjyjs56=UVEw(09*D2${2}UC5_w^-dqyjfRT-@&LbrdIo(uwXus8p^Ob2re-LF z(yBy55ZZ!!SS|^>=nXXZ&BswF2KPoPc=e`L;KZ{HaP5T!1qC?9v17+Tt&sv(iu09p zY#Xq<`yK$sq8IMhoc5PE$MHKu&0+KR9;ps(uX3EE-f&AgP;*xY^8sptSok$w(BM!^ zgYR;qz?>QsxfNX@F9NL~%f-wQ={IZWvFpvE6*m)CSeIW>X~P7+1#`w+4Ko)&Ji^

`l zFP5vl&~k`@n)e&si7+uv9qAY%f^~qaX#lD;y71B{IJknjIi)-YO%1++qs|SaOfGm? ziYCTQ1)^inq<(nU!Mcv2(g0jmFuBUaAWNtFwMt}(*Mvq!=ZqC|A3YaKDNZT4FM=9sRQJ1n(}R^l&niTO<4~Kjd5dObu<{XKwK7R zNjk_)wPgm=K?4(dq4fr)Raa>}9l__)Qc}}T)p)xvN0Z%K%^14U0L$P6MjxJUMS`NU}c2ovZs9#RS z%rDDNxB6P4sb8;b{18yi>FWbs+LOn-MS{3UoTEY|W$HYw&$UqS&>rWRj*Qs!rSS&x zvdRajzBc4YcducT6%c2Iw3FqH+MNvf*1xDx^WCwUt*xzeEIWYcEGT)e3NK}J_D}~< zxbEWN>UtOSld04$O0*A4;Nyhl<79U>>(DY-`-jf@!`?IURVs+;gxwtH>!pyC#YhzD z2Ufmm$V(7i53q}ZPNHmXgGAq5x%2%Cgy90H>p|5-6Dj6qTU5KlF-3r z@c5~UA(up(&~gCVIP79v^8}D0&Y9jCW~nG~`ZacuJ`Cue2o!`qG)*k($YIIzapG!t zded=BVb#FjG;A14n@Msl_-@~C7lY1h;0@Ly3PbJomfPMd*==kQ5Q_o|fyq~Nr3Lp0Bu{QPI>R(HdC zeQ9f6DKd6Z)y$Q`xw8IS{q~GUhtHM#`FW3sd5OAM_1f#nmKU8<-X2>F+8p5SA$tol zWnm%mAxj|dhWAkQ55=?umJmFb&~?p6-|r0jQfPYQMfVIOx?Z;&ogNlnTbFTL8Nua>D%y6@ z2b*0pO`-D#uk6-y%~YBlxz~8h(j0)ONKwSX^E3{45XI zyy$3lWC;=>7Z5R-oOkc8MO^(eX1owqzE#jn2627zrF0N=IyQ`x<3}LZbf9v}1oi$( zoom2TUH-9+FuSASc??)jew0qq5!CI`s6UPZt#~3go1GSvpuG9BE|jM~*s^bt<0tSL z2#OR5gE)oo72sO$LN_|`i=LH(xeZBJR8{X#DBaB}Wa;ypf&$)nj;-cIv0hM)dJZ0W zaQTm9kB_^}%Wl4imYd|lbimuFxj_|=+Nr*mca#Jw=pd=J7jHswXy?(7_g$Qvp1Aez z+5Ve6Jb$Hsm}!iReEQ|W3UJx(c(?G&e^kNtoTMDaygjFIsO6QMK@DTUUD|Kn6viH0 z2e2PYWbXK9Ty5xmqI8)H7iKwY*6yc&)Z61d)n;^J|C4c9`PFZBD;sl#&7Q{@!0GRGnzIXVT|4|!53PycUXC+iyuq2$zB0bBN2?rV3|C2M~Q<4O;+z3 z?sanCytxgv7HDm9!}0%Wo?tMm6CAkSA=mEVKiC{dDaPx!%a=Q$4E=d{j-VGG-A2%CSA?NS%Mk5zHe zhp>qI&}$RM5Iy*ArP}vF^Wc@D`9;w6$h1HXDzaeIUNfszgU22BqH0G@AVm3!L-!ja zbTezmsa0=KdyNY$b2P=O4Bac2!mBsB3ui$WF~(x6X6NQs02x_;CUeiMh~Ig0ZRuX$ zP&y`VwS2J54PoZ7h=*dEEq~Coy3ik-pt)PqfrJ|@t}GlHfc=T(-2DFH!lg$IkB4sz0U~p#>|BuyY;oqt)D%c zxoAtGRN^qj0Ah7-l@c%oms9m>NWm$jx{D-*11mJ?zGKQ309RS#^0uLF|51FSF%~!B z+?mOvcMNp?uY>->`AGYtzPH!!A+M>tIm&@#?f5{igjs0T(1j>xHCyx=zPzH~@Nh2S z49h)=T#0)*N(d8rLcT4-STx6(qXx|B>IcNLoP^NnGlj8xK9~aj#P<{HIu|2YGm<~_ zR0ow4%E~liLCEqmQx4q>7zC8`OKmpFHHJv+>MlU$z$PM3{1eWzdt}hhvClq73XeBFjWp%pjNyU=u4q-NFRqHdz{CTL>?%mv4 z7z}gjUg2$6>ZOH{+gv#vd-hP+1vme{V z;;zY8?cZmfYz`*ol;EJj)EKjTK&^wcUdqkI0p{bR;kE!BY z{xlV!QR?M@E~Q<2ObgEm z3B^Bs`n1DU_BcD}#b$-SJPS>m*3tuk{%r&`(VM=$Cw|9m6T|=pW8lU=-rp-9#@u}3 zs^O<~vfuPClM=En?76G+-%sVpyyBj{X{`>5VMt4G_rQQKMo3UF!o0d^oqEJut6Gg2$079f0i-iFK98)5}c9{nK!Olw6wiJI+87Q#+Bye6&y(G-( zjg*ws)s<7xU4w)9pdp(T7FypB_L?Lw4p!LVpand#fpV2Z8%AS{jT4TD&L?YEz{vB+ zdR#nZ_!WiPwG(sol3~g)*44!&kNF0IK_2?Oj9j^pwYSFk=JqR!>^tv4@Yh&H&VN5u1<*M6yb>JU#h^z?j3BocifAm$46x5rbgX0s?MDamJXK+f0S zUvrkR|HS@XrnLEjip57n;{eZmz-=a0R#p;od=PlJ;64PSC@kCsLFyD`=2~6f(D&~- z$AkfEdQav zhCU{M&H6!tL2T$2!~a=A!kg@D_5A$&jg?-T{I@h{LMI-O{|4TXKYPt@{ss!U88we~^TDy07Lbto(zVnI@vxHwhAVH|~ zA&<3B5eJDN=XZ5J2}wzV9Xoa)4mnNfE$05!`fW>^ zKe)Az(7=^C{S6Yz+=oR8{ceoEuB3p^bb(9(nDDs6N*W!R+Al#Mj_Z=HVQ#pJ*i>7N z*FTP{8+~$}YdJs>nO$Hsf;s+&X>@Xp^6dL-X{u&bM`5<@;Jq8g*>YE}Ivf^w1N09x z0{Bx?TU)|)E-(3H2%mWE9Z5Ml zgOEPEhUlEiO3T2&K(C5H^P(FO)oWvWtAK7x#4H6UF4GyXC=(&92-&}?uw6|9R5O7$ zu4b%S*gHC^@lbj!+>rnW$TIEwWCm;*R>RO=Xn8$E9^4B4e$k#idw$U9z7V<-l94en zDX_d|Vq$_fqlI731|*lIldSLSTZ%-NUS#IHdUYJ(WPthro)jX1gmjiaZtboC_KE6zN>34q(7DVlZVoS`I0ho5E298lb)t#Bt+HXU%osC z&qoLr*iT=vV@zAJLYAOG<|oAlL^gmpPBxeqSh!cYC8j(6Y@K)sm<5kgh$%vz&S*V9 z(J3VjUIsFza$v*Ijf_wiP1K~*$T3Oz3D2Ho0RGpl@F({SRRvG@urGMi4Pzpllim~- z_Re%>m3oe=B8)WC8R}nZW@KdaL0Vbp`UEG0l0nLP&Xsz{7!g5H9qHpEp@HVXoC$M8 zw}QKbjOs*3A+8T8MIrlQ)6b1>W#%3PEBv@ zgEOw{rfhpo@%&F0{-@9U_ouIBM3H-PMSWl8E3c^Nv{Jo^HtDe1rh5BIqWp?`X;wwl zbK;d$pG3GQ{C9-Oqf`|ebf+fPWQ}0wGN`|j)*-xf_yLJur+8wP;TwR zE_8eDRzkkIKey*@`+)doizhu=57`I(pf?P|!z{hDCf87?pIi02?mK-!ej9}xexDi$ pf1km9kp1e{@$b|Bc^tM_6wVS~L|l-9u0bdiMn&gp!DZXK{{dSDDjNU* literal 91334 zcmce;by!qu_%6&ATR>6i+QJ|L(%qsGLpMkxB@Ep)V1kk|lz^1d-JK$glynXu(lK=B 
z_pV`o=lY#L&Y$OSUHgI!Yt8$v=Y8US?&n^9Qc;p2yh(Wz4-bzJF8f>!5ATWx9v=S7 z>sP^VFwrR+;M>)g@-olCckp%iti}&~yWt@F!Wj>bCII^HpRl=2b3D9%@!-!TH9Wqq zPPn`1&n291b|-wB49c@;`=#hq;g~{cQ=1;}f$q-H!*m$Tq4)t)OChz>$eq4l4`F6Y zHkagYn7=V@e0_`mPhbdZwC!Vo7sWQC%jUd1`}+akI(}(U4CeIK8ZC4R?Nf+tkFO_C zbR``A=9xlsD)!&U?F?*SR&bT*zn}4Xcwip?T>Q_!&%^&*x%lz*9jUiu|6TI*{td(b ze!Q;wfA`{}>DzvpA3t7~dj9;?QeWC>aadsB&6d_y7JB-peuo`KN9J@-jJOol`_mQG zpFe-kUVreirWxA(YohQ!G*6e8mv?@PtS^R&to3xp@ zcXu~1BxDfWHfhq(xL_m`^Xr%F+}zw!aR+<<_b~D7zkmNG&s?`S31z1Z%ivfGlR7YPPgdghqIjwiKS>gJ5^s_-}vLd5z68eg&s#+C#0(@@h0S6t3~L6@$uaKDL@O@XZX;=7&?#A;CXeg>@K@E94DO3NSr+rPg;*`5a8!u9w{;%D>e@@^v3e79DTS& zwj67jNbyUBnEu4KAuc|iu}rk#^(5{JaSlPpj)U|tF2FP9eNSRW$Mpw zGm>nNT+th=aWD9)WXTbF9rB7JS{{#QUx!x)Jr@!(q=HDxj;G0<_5b*A%QjX@;Pa1Z z@@Du-@WjZ7j)D7xE3l~F$qZc_r^#DzR!8PO1i4L4zTUIpGO#kAl%69-ocj?xgn2w0 zzwNt$kK7xz)bj32g?p{!Rk+OlBIXki=vd$HSlG@gDpCb!Fji_6Vdyb^g-yRw1rCSv zONUNwc-5VrIeXQ7Xy)<{3ZW9Po~U+3r5>+0H1vd>(qXiDdUjP(enj-b)}rCA+d~`;8{^oQ5+v4I-1=eo64?IlB40I z7`9~ZcN~9qvLL}KE?$qG2`D)@@W4*iVTQ=B_8)dlJ1kCmuMbYRPvvcGIk6d3YuqAz zyx5O~Huc1QqmBPP@?;Y|F#UK;9c)C9o1M5l@(`yd6y}pek_fMHR z+pDn6XJuho-aa+{Lr}hr;TzUBwLJ$bSFqvo#jD$+jd#jDw=EFIC)?fYL*0r082?+l z)y_J!x8HDn>Wfj<($b3G16DY;HCpt0hdtrB3cq845OS`k zy}i9Gj~?~h_B(iHJ5kM-`Q;cf9>K@Qx0F{g*#~@%&!UrDN=Am2kx`1wz$M)Gbj4@| z1fp2zFo>nP{mFjFOh0}+{V%1g3>U<7FO(cfoS$qD2`(i0fe@|R=t{&e`{{z;Sq6LU#?!Ra z)JMX?x~Aw*C}ZTO$;Xr*N-VEAy6t&Pc+7-oom(b)KFmax#tGObx?(COy>gm;!7^0* zHa*}-3WS{JA8~Q1LVh6V_>GiZKhu5EQzj%NWMa(S=(s=W%*^)MP&*dkZ&GR{BO|kq zsqZ;I+CCXlhfXgK+Y&^EA-@5cxKxU8@{X2{&MVsjgBq=X+jQdQqR?ui&%%O^qF|Dp70zS!l6^%xEBpqdt>@$6^trDYcg`U)Ibd4mL~- zkh_DrbzZy7ewaG%3b*yiyo@AY-O~_1q%nq@YY-hW7)u2E$}7frJ3!+9dMG{g~wx04=a2`W@&^;Ktny%JY zkgfk9I{y%wFm?NjobL4RP6yEKJ>o*dfV0R+PW}@KTWy1~bypmpJZ%`d-0>XB=i2&? zQYt+3Mj&^Qkdj^%+XiT0Y7iuMjN%ZSjVJqA6Lw!#lR1FiR(B$2GXKeqp!sO)9 zYDqWm%by?JB0w@@X6}t-Rxh;~Elx(?p5KfU^De)8_bv?H($X?khR{|qtzf_Zu7l2Gg$L0pLUDw>#H2E%NzM@ekc>OpR8547~M;2{4lnTQTxGh|GH`?PF;Mfg zGfvTAbOmIF9+Jmc+f4`~yoYjM zAjN&v)g#JnM!!L)biOTdZfbD8)xoik;I69JTOIAUYYU$=gz_5(`3h;lnK%W`eQe;8 zxX;1HYb+Jyl)~opsUS7RZZC8TFuHod6&3mXwjz?g#0%IHS?f1UP0jBCWMeRyUS3|_ zWh>0>zBbN(+8Rb@KQuJtHgZnZa3B?bc8!RdyvI?DHY_$e+F}=SZ;?6)TECM`{+asy zNpHy(?>5GmF90#ogi|NRXEMHC)@SYD2r}Bk3TNr0g+$PaBADXuK87eL zJ2LcIN~76ZY2!LN_1y;ETyEz+^up5hNaxT;e0w>kp!|qKzmjuthG2ektii8gP=cQR z$Bu_+!roM^XUU-7NTEsk)Vh7eR9~8W7YVDD%fWu*OwCeyd@dKA&qfo`@g#D8LaX7n z#7SYl<2w?Msd|tfB&XOn4w>3qY9=!D$Gj6dy1NAg?8Zl!W-8p5SZmy{%UgUkOfg1h zTQNvdQ<+Qq!*m5=Z;n<@h7yJ=BD%hCHCJP0=#T>u&6j{>UyGv&Chw3D6Q|r3KWJou z_k;tKdO9!A00kp2^2mjRjO?~?WWq6sgOrhx5u$V7^}7A?SgXSUb!EcE$|T<-t9Z5B z>$TfT8Mg$H3nEj!j4}Ez%b@MEo4VFa00YI$I(=|^v07TnE4KObo|B-kuhrpTfv>qO z2M7w~4(0nKBul{6q6VCjpo2&d*CGmd=68&l%H_V^Eq-jH!YKF_S?lENoVV}=L`t@1 zjxhH8U?xltBXK<1E&TO^u>GWH0?1_sztvm?sgKsr&+KzfV;WA<`p=MTg^ybbO`5$# zl@O;hjU83c&t_WAXg9%Uk>tp0#Ur`-X_)a2eXALL(4Ri9TEK9Aq=0 zz-2yPZ2;SI&p&|F1)^lPt+GiFR60fZVXe;t5%*?D=IQc@qHIgIiF zQcvX0aRLbgRHq8&KGLR#MHbzOwzhO4Zn+(om5f3$!VDTHsM22LTYG=w-dO2v8sPz} z^IIIXoSBZ-YK)h2g{g_s($ZdF>lFmAK6Jfgsp>u#%=5djyhit5rRB*P*TcCOJ}6yo{WGGj*!Lj1O_>`@Xd zRDfm(Au_T+4*9{30I-0*oLt~6vdkaqaVjOA z4iCOmMe^la#f=r21`MuQYdNq6H22OkwH)uSub29{xw)y#5VH{tP)MS;UOxCp40<(4!J4IQ@d!bHP1OQGcY`> z7cy?MmK(x(ji0K>4brmoO!y+34Xd1(gF6F1uV6caLRZf<%3>+@Ol!B(o7 zl6=&B8>WW7yfq>Br>CdSb<-{pKeir^hzN6Zbd(HXg)c2F4Y~bMXx+QqdZ)cQ`vxWn zv?F}@TGnv0q2b}2`g%hE_^fSgBGEHNxW_wxV{9?!y1!X{yjGKkh0;{UNnLI2cy{-S zi!h(x*@+Um=KSn<7c?I3Pa?|1h;<4|v$L~}_M<+3?nw{b1Mq4rV@`szl5pS<`zxGb z$c=N(3);T%zn}jQ@_^IukW9?{KEv{Hd?R62RUK@ z_0@rm3xoIqJ=fg!;q3k~CX)2Jf)`;bK!ti3d{v_PPM&U?{>fTT=Z5tXCQ#v=;P*ngD2e4f|l{7vLE3uuzXS 
zweP+~aNKA%ZrrCUQCYy@c1CGhxY%}>CvvHQSJ2Z4XKJ#$wI++R!X?-|X*+W1_@lQV z?-+#YX=n^!bqqb{qLPpDj2b4#l$4aTTwHcuhj{-MTF}AJ3@VF_w%u}_o1MMvT9bG^ zpe5<8SgYsu{0j^KA*%%rUa&%uL9oRthT0L19}; zMwtUPqoGU;0KjB2x?5VtecuW!GYLt>gjzu(iy7McCKXhjJ1F|OxVV@XG4vSDw);j8 zu_k_YFr$@Ze7F2fna#VbD<_~O*n^FWbJ>pZBD_b7%}>wFe=ZN^U`j#5f+;v&$>K6? z-FfBasU4F~x^U}SQ{t2t{f2xeNNm5bZu_GnkjZMfIq7(YR~|3TG}Uhat5K^_%?4zkx~}A}4^5>hUL}%H0DUrf~91 zqW4;roT8#FkDXP@yy; z>A3zPhDgX|@v|=gVJ;%S*RCrxif#R3u(r1D1FVo0M2`%+3Q7MV<9L%sJGKqrKiTZ6l;6h!Ra(+dMw?-@J;N+S)5%<=(Uy!?AhGzr@7E z3BM0+p5WEEu4r{7Nf?2KQ0rR-r~o2vt33dL)4BAOiGj+hCMo#_A}IX*ufo}aW_)%B zwINQ!74#B1NHWl|o}8TgLoXj{A9s|AKlT7H z*Y@rll5rzrz2R)PpbQ`kehC`dn_f=A{w9l{d5&)G-Rj!|0RIsWk2=Js02n{~kZXO@ zLj%w?zy^8%RHs0Z30n7hKce|0_Uc7?)^hhLq&5WqcF%c1YUDS9xy(YqvmAm)%isii zo8Dc^JIVp5b*#!c!>(>u3p6jubfMq8*Xob$5Z*0;BuL@%srBI^Q}hl1-H4sVURzQp z?~SHwA_loc@AUkVAZHVG7&VXCUApiz)`w4r=*~8-sjCH$fikuks% zzFsSOwD{e2j3!pZJ%4Zks)I`|!tW^k#w#5yo;_=R9fnz!kvLNA+5Z6J;o<4e&CI;c zGT9X`;9hkOQBD>b8X7fWRTwP)gk6OY)cl4}EPi52ENrEg@b$?+dOJ~5P*4onhSG|S z*(FU<&w%Dn5wL!FXn9m@czACHdNdJqt9cVhju*wSOpJG(p)(-WB4!r5fK zj;Sb^a0KANE~h(y&Ua*HMr0Jgn>@#E(e2J;yL$&G5oThOZ(@EMPO(@1xdh<+6#${S z*By2n@o#^1N=iD3qdaIJ_t-o>t()}o1JSs35EB!lS0My?25+PfE*xXNG>N=-E&Xu>tvO}_$WO-@eErG-TMa$N&3h~mmn-il*w)6v;}gI~KVh=4c|cRQ5|cj1I1fR#Zx zPh~$ogMhthn3x43ucxz90r7AXN<*E+gjy>zG#*y#yIo5z%P9f0eV}#BjqO{cNa}WP zNBTO&CN`e8G;5v4+y6rzzJyT=wx2GMfz%OysEvci_nYCu0pS%;jDV7z^DMCML=|^J31DD z7LeC{P2a$6)Vvz7W+YTpm^|H2Z*Jba2~npIk)~VgQ54Olmk#=sx%v6!Z$a#RH16Xj zgPLXD;}Gu?W8fA$>A65tce;nst#V2uq7hF0@`W6tE}R#B4+6r+Y>)=49AoH}1uDu8 zAOR78OCGCsMX20r{D7LIHudtVp7hyP1kLjD`PusU&L)};0d5DZP&fvwWnz+I(fK6> zdKQ3$4SXqjv;pz{EjXAUDk|#5Fvx%g6LLQuGBK@ydeZ|su2kS5N;z72YF1WR0Bg7c zD7yky9R?R>bar;03BI@UUQ7vqSO8fR2yNhP84}nCN7I)tHGt8q`qP&{01wxCZr>*( z8)h6bPHwwNrG@vT!5efxFgSp5 zUZ4fC3B7uKGR${}0j}oehGWyls0pEqzXo3Bz)PM|Ob{N-UO<8bN}!Xcl`JrnZ;%7R zKn`_wG~CcW_pO)&T>H8y09U_X?edCK_5GyJcXmcJ zHKPENE4dpI7_8=^prC@R@is!Yv~*zIt_@azo@sa2RWd9gGV*aV=i6mHBkhcb%IYe} z1fq<)ckjIm{a&3dC*Hu3SwB%#aO&HToz3ztl!2dL2_zHA-CYmS?3^6z#=2d=eURIB z&i}>3BNqm>IT>6BW*Hcot5;P(yA&4}r=d-Da_!Okipv`F&m^wkQ}l#HN1uWZadQZCKg{W71OzEgF&frDm`qt97WPIiGft;W=9QACu)VQ2D@SG<_+>5#J| zTGeCzQkr=Zsm4@b^Kscs$-W4d$9vDF>#GpY;%UIQ%1S{HYVg0##MMmV;)b9>(bR16 zoBpsd?;;%a_3PuTIr;MCW$So`rQ2)>-?Kd{j+O zU&FUILb1@Xe#=SA%4UP-ok6(jM76|U)Z251@GsGWc}0XW4khl4%+3Rs(=0-Oefw#~ zACG96rYEr`2~_?x*0AyEnGyc_+irrc2%aUmuP941v^EkYBkz*=%DHC->?{QeWS?&X*frvB|)Q!o+Uiw(uv66FVjVH$b0EH28Qd~^~t z2n_rbgJG7EcC#N61b?FZ|NkV{MVU2_8y?5m$*3(LXAJG`9kjbA{!G}h|uQ`Xn>~d^O7mk`DJ7H;n{&MwNz{G#!WIQ}Cf80YtzkmPe zE{-VzKi>Ivu`u}Yf6F{jF1+~qKV9qu#-h+yvL#E0yvv`CkRz5KDkhR|Zr-|uB?4KT zfD^Zf$zxVlc^=CyHO~a2BDj@X0Df{j2o??3ox*Hb$gj2{5o(Y=@c+9OV9REI%U60; z!)q$ghwtF={a+i1gR&1T(c?^%#NcEPO2CB{K)VPX_9%V%3q9rXtX+0YO_7rS6h}Yj ztg}oz6)~f-!duF}5t=by*_Bbs5~q|HlX@9ncUD-d7px%=g$nw~IK1 zp(siws@^@`pTu7MInuVE)k$E$dZZppS#DpW(5y1Jp&u2JkRLc+#7&r83B2O{U7XDu zjm_ia*e7y`^|V=2PK;{|e27)E;S~8tvo42FuKA#u7vvr*mMohFRQp~IGry$yOnQ#XfQH2`sAuo5|F?u`oE*zUM!239 z%i7|!`k8)wT*{1)sYXI#Ui!#;yWxIUe(y+!(&sm1h#qCRBWPtw-X0@)6dyWUDnvM#pNp{}3B5t}L^CQzsIfzVTe4j^gfoOKYqPu%oF;Cg7s?ruj-|I%#LyJVpRCHP;Vd(yge+e5Yh)aPt^ zc^^B>ZyZLJ&D}G%J$vpyWoa-)eXoGwO2*2;ucxu0(MOfY(}(rK_FuENTba|hGW*=^ zH2PA+P`Va7p?bUN#cm7M>q1j1rcdRPuqC^vHb$cB|4KctYuu3xWq+%)k^|DG{@n|RZ4&|zju~2cx@ln2DThi= zaL;mTRVWquwsGGzS3^kNP;EfMvPTZZ!7Yc*_?nm98(~hg;oXqAbvMmkCqFZgdquc` z^=XxylGH-n%|AV>m)1u6yx3AKg=6DG!ZjEszJGn<+9^^X7HmL7i#1Ax{gTXHgiyV{E*Fur|1fTu={rNF~*qESYc7Jw1$N z(T6Mj8uP!qqxKJFoDS*}%k;R4RT%GCbdV7You+BYY@ z65TYHyLc5R$-e6;vH#QTe67hE=_mHNKnP#`I&RXzSbg+9y>?$ee%7w#BNx!E_+4hrnXi-C+VtEV-R?(t~M$TS6iD`zOSj5 
zH(_b`xc|?!xA=J{jkF|rM(N5hsE6UP{9P&n0~>4nFIfT(=@D{-lO$<_`Qd}nH3$FF zuUG`H(~MJ&K}=-6n590q`rI_PVnG)aQ@2hG-h z@S8bdvFWRMLm(Dbe`>FSM>ly4d~S49NRBc(@8cbLUVqd$TT(fEE%BGIV$JVU>0gZQ zda?c~s2i4xjgiiIeFRz8+^tTE8#HYc+xvYI z^MI>=chPD_)92_4_j&BBA`7PAhP+XD|4NY%H*}xm#eKKav&Oq}B5Mxstzp#BqAd zf}Ls-cAr|JD$KQ<)Rx3U^md~O`t9wFp~rlUtJythZZ}(vwRugg8kce2sNf_%SVT=; zcEH@L^;N&3cD_{F@wB2O_Gpvh6TmP&@w;d@c2WlLYj~|Eu4&akYq5V3N`>z7U$CB9 zBGW;~8P+OhAnOO_Gr>as{J6CKgbw~lBGoae)~ZW4?-z~8tmRtbs>km~*d%VWwI#C3 zFjguFLrx4x%pU0rJK%F^l2A6BtUDd}fb>_gmgHrq1D=S6>P0f@MoF&Q+0pGNACH0% zVvyen3~;6HwV@Lq|10P)=B!Y#LB_Nkk0GDf&<$jb0l!x^i_FjfL4cjzW7V|?x@87jL;u)CvUp5xxV-maigXtD>DE-8TAx3A} z`XPk;*Z50+GT!e0vgGVCdANX>{S^+-3Lc)#6KMDU25V|)WCCf%+_8IBN_qR5!_z*i zBo6hLqrPwyC^iMZD|IG+y^8-%Nv#m@%&1E@IsJUUBN&dAy9nT(76Tkfq} znu5M>`}+H_MAz`}MsSA)z+HuXQZ(DCtkgn|>Rc-TteD z>swO}mpHtIhrMu22s8X>uE&mw0pt^?^?}mL#OxMt_Me&ah~|X|Si-M%?WK_{ei^<> z;AfgkBUW18olNJSYmVMuoC@K^sjP7-hMca_AXLjh&|{N)>&?jbT1hU+-LlN)&vWf9 zlMeAX_cK30^2>nSacAtml|Gj_zf&VBk0jd=F&Go^e&fSl(WUy~YCmVG`@9c=xcUpj zHNKYHYld6=TO4UvvBq?o%<`6zdA-8WY*RglKhc-Q$yEA4T9ykb)~! zcy}&m41a$cWv|P41?0D7k2Rh6?>80stVUUAL?*1m4C-|9)34%8aQ#0Y`}b?PN!B-O zUD7LvVEMi#=7kYz)mN;nUsBR{qP=R@{B=VNkL%6iF81Yh@h}FmLX~qu&%j%aCs~Y8;?8@iurNAxQ?%lID zef~__*({hyNgQJhBw67AI@$&UHsfEH3wtP8#8H;nebVm=L8UHFGq^l<|Jo%PO?)CC$~wDGgY0ag z28xcKz@F+UXCgHCz2xMZdHK91ytx$`>Lj5o2(SnQcHvOuX3xKeWj%NxB_pj^4ltb- zzg@K`*Cr~pa%Qf5{IIGEmo}AfH{99#X07zd5@V@#43)II{}cvHE%m)&UeLjgv*pa% zWtMT#g96pmcHt4P^Ayy-5BWU`=w^av|Dv*F+q&Wn#lQKyl^hFObVq;cNl1sL%if$@ z0AD7|GPo!BRzp@Vvrj(`{BNqPCe;4fX#|V(}mc2Z5!k(R&YHCx1-cApco4i?6 z0Ss>F{+=i#$rf-ZE)IRMTOzviX{qZI=?C8#`DJ)3E62kFLzUIkoQl7F`?mT&Gb0#y ze5pTi3TO{>&n|-U?$xjTnA{UWI$dn@jAaKEg{|px!C0n;hBB>6MPr965|Jix{L+B_ z(Y;vum25`1D||_qA3D*VL=ZW1pFTyvTC^iskn8;MxQr-@3+;a&OATtV zc%#Rn!Q|u5fU`NJf1PBC0qB7k@BJ0-jBXtAsr2dI<`LrL50Fl z6AL?wvKqOb>z}ZHOVJX-Xn|jWGWZ8qNbpRUK;G`#TaSbtezl{al;{WhVQlVcf?K}m z0x6Z`1(~nwcW~%R7gf`wh4ZxC=zRBwwfU8P(2s$YFXyl`8<2R;z~Dql%3wc*0{?=; zl)R|_KiUfm%9+;=h~aALHnr_Cip4_oE;4NgFSbMsP^mkAX+T1?aE6redpK`=T{ykt z#k&<0mgm;?o@ds*(kA%_`rcYsuMu;ED@5eogS5Q({Zp+Yxd@CI}=l{F*0%qmtm z+%pdJo=l;^vGn%OBVo)?BPbta3(1ii#V=fu6oh?LiD}57IPd1;Hcs&IVsp+_nz|0v zgS1tSRl%GIEqX@CrvZWdD?y6!8oIjH?tl?#yo$T~hmL|mu>Wnk{Da78Cn@Ps8l{GI zOtBhKk~Ydk)G~M()n^lWz0T2P)`PF*p)Bg9awMzmlv_MQv5bM{hro*b-nhY}B_JRB zFN-#D0Yh;DC1z&P%+1ogafJJ$Q>{Q{^1ArIy4bL`g;OT@LQA9_YIFP%_W+u%rYE{DJf&Z8PHPVfP1+AM1e9a}hzPX?VS>W2bH zB0f9+ z^kv$hxIU+Py2%JR@sS?#Wt@`PoIL7l7ww#lktxpi@Gm0A|Le4ottJkjiF0Y9OSXaM z;HQ*sxGW`oI>}9-nzhR<5rQ(+nG6m>J_~GK&|HgF&fNpYlbcHCUk(s~=4^2Ez?SH* z(fXnJk4H%+g0rE)Y8o1mptqZt28TtpffGzWlWSe;3d}4BhYh9?Xb%<=g9mw|0g|ZN zZEU!LIzX!{J?}@YLj%i1a(yGBBMM5t`Y4jmqdJ<%K*4*@fTQT&7ujOUfg;RsU_stc zwPhOV7SN9X-pEjnv?!CmY02#^(8=4%xa86Jcyg6y1)MV|MSeKLUpG}$RLmB2e7egQ zwZ-SXN|Z4SkUFX+MGK`7AD0zZ`Z%89EsY#nvEuGA*i6UZ2q2un_D?U^N&m|M-8saV zVI6Gq2c9o=ROys6AOyT4aeT&}lEvmFKowpfApiHB-$I5~C%{Kqtov6LJND;dA!Baw zj*$bVgj>2L z79l`Hl>*c$D@JE?MlN6|*0$v0hRB_tw=nR=mPzi`fq^9?eyymo8_d=`I$ighTEhD7 z<``Llq3%Q=G*7`gANaJ7Zn|P8`d%l^;-fTTxE1)8Mgt{u>D^5e0JO>NbIicd0Oar_ z>BiFKbpV=nOOj3%fOx3d4roQk>X9|jG*0YfqZufjR)@ZbCvjx@0%^I}oNzyHO>Lc3 z+(aNi^msH>Ju`X!5-W+I*dU~f*H4>oTr zfP?^&(1H>7pteySqM%MIJUm?3Y4)Qpc*(}PJ5daJo&xYT#}%=340Z@(-!L&f)zQ)6 zim|O|<0Mdw=g({Q1>(UB@CMFbJDS^I3)Se2W8Rl5mQV}mf=jvYG!?DWmr((%dq0V# zhBR60JOZH7Ad)Mt3Yb_T?ASMaV*l02`dZ^iTr`WN_cg_;r;3afIB}rH>R8UT*r9l zg5$Fn-<#DmFGWN3CpYJ!lM@z0&wu26Md9Dxs#F#yBr^1ioW3DF5U=q1pf~*G~g8I3t4>oN!J3G4wq)UBP5f-fC zLKABaZ!Ny~hK%g(u1*wRvRMD-97W-2eXtNi&J~4@R4WFn16v(Q%RcmAWn~>47=UHX zJ%f7#B_JDIx5mvjX|)9OdtjylL?)VshK5POpsjqJ2p3UFU0j|)rTV|WuSv9UN>Oc0 
z&O$}Q8RWiVZS}G9FF^X!Y%s)zmWGBbs;{qa)}_2b@TrAG$0aJD_`pUW>vxCrP^r*} zSDLKkS@h!<-+*XTIe>^pO-&8;P~l7R;r0UJ@6S&LlUgXCMU#`47di0;s)GP@NJLGwnB|yKnMatvMe#f@^-3kkF(s8pc$ zaT)dNPAu?S1NO>?UV#C#+lYsXefYOUd#p|kv*r#UEf3N884%yPR{x^%PX{A5;8>$H zSD&i5xRmb3UBApKBvfi$^v&JJSN#TINF6PD+TqcI2PwDdm-SET>01(Cy|7FXdhlY> zF@nSr2<9|Z!^73gW?y3qiXHs>B#a)w+c8y=*QAxn3z7Q)aj%_yTwGjq>U_Zzag^rL zQx(C}u;lg13A;p*#qu{EM&m57d$mBBfJ*(PGgFcst?|Gcy>r0mjh~ys$Eybbew9UVhOG6W#g%$fJgl>I9lk)!_EL6 z6IK;umr2Vb#jK~@v1Fz8bdyLb@sYOr7H174US2Hvtgun7&t_6FG;JFOUVbU}K8~o< zHuRK2xeRJ*yN=nn22*6{lTlISYl^po+UHGp1bVPn$- zuSq>r`Z_WLRR2!ArmY0FBZZRS4TM$RCMq(r44_CTZEn!k)O=yia)%bg&MeR(X=_`$ zKe|o~geODYq*PQ?dBe|>S%I1qOrz&T?^yt!a&;9u??FXI!m=2fFjEWJp z6p)Y(E!0$33!_}G>-zZAQWK#+pVLWBIf9oSUR*60?r^>)>3lH~H83!!s3(_|mtWZw zf3~%61Jn+vRB)nhf~=4T6qO2uT83Ce8;TK?3SI@Q9sUq56GF)dhhAwYXaz;_x(t-H zyk1qn+Ezk)ifS)G^VTitmP~s0Tv*%mKCAP>e|5VGc-2Ej#uxq3;6I(lJy3CN7XVUWlQJyXiUZg(hTvho%0mITM9FX6wnd@4Mz;D?r1p&$X-)YB!kE}qG8Y*cw&>?KodN->lKxdkO%Z2 zv|$WokrA&Hg~{%7VgR^J76DBdG@*jEtvh7lQ0jvLzE;L8Zm8Ga;LW9@hYqVFT;MewB#;z=7b2x=%Iso* zmUdi zK9p>LG7FM4QBqMwg@>nny2oS<#7~qGzO@9+i;fDwZqtvPhaU>nfh0OJdT1T^sX~aJ z0K8G~{rf}Ky=`+;>MF-D4gx=5N(E;b85frgP>cb<7v4P;u@Zju^W8~PR!P9^$FsMo za)AQL07!7F!Mlt#uD}EEO1r1GnGp+D0q-6+TC=E!E|FpOMpIT%Ek~*Ns%(;^Wy7%( zN6?El_{?Sqh}AS+l*d#U0Ok&^rl=cz(Iq|$S#ki)(YL|h<3*D=&{`daW6ZBI^=teN zb=Bf=kbIoXLDNCS(NHpn4p};G(0Gmv!SzCy|GP46wz(xROOyp}9jvT&(L$tvu`uWl z9Z|66oeJh#5Ud-yXta_e>r6w+kY)q-Kebr*h5@`)nrI3rL&Fh((TodmeA8=5rZtZ$ zFgpc_H@sIkQDh82B3x8hd*s8~=B)Y(>-2E#^uK~QIL^6f=2tZ|#@?OusG-^1l=nIr zpo5Cbd4j}aK%f2!*x>CmT&wg>?m05AFVI~Ub)u9J8cJ;sm=`iWs~6U?veB>>4O;cS zaC{t*)x&qu=V@>U@&DpHMWup=*^-tXr6&FX3LRvy&atY=R^zwMJ3ZP!Fmk?FWtlhx zRGvWY1oi1BL(uFzIE0c<4&K%_X0)WOua-SA{R@m)=kpb{YJ3LFsJ=@I3WxJ(jbet| z2Uk_wQ^X(#HPuH=n7DTG&e__bv;qFD5TOKi!T+z%N==1@PojeUR4oiaVRPyL|I+*SZG`)-{{5#rt%uPkKtR=&zF9F zR0Tb!gLXu8n*VzY_XeNVR=|0zoc(K2RKeRh@uRnSkV4? 
ziTgrR`1tPWxslj}^J4w_@lYcF-cjR0IhTwe{)4R4OHyf0Pm)xK=g*PXwl=F}!`|{^&seRrYjOl973AX< zzPD?NEhnzU*wTotA9fxaxH+A(Tmw^(%79DI7*XH6t1J7!DNG%_$Mj^vE#}@ox1_V< z&AgGIfH`N(CM7xuK~&e^BPa?RHO&nI=8Bq8sl8iQKj1I1GhHGbN!M;3>;l2!sL|&- zG6yxd+TRUV*rm%Je__ z89qD_sMMvCyNU~>RbQ{`uzZQiY5Q>#juH^edpmiLok^F4r$S}7W-7O4a*Gu75mhvx zf>pj=g2a2wHZCq2&zc!1DJ!C)v|)Z;7HS1_BKo$;Chlqlgt>#pr$pw2yKq#y@alKUoV;n(7y`=*+(4T*Jd3o)0awHCI)^%I^Y&=yfND%5PSbejN zN>$OVmhP~Q@Io5rwYFQ6$muN-CWtv0Qhdh;M!>_ zLQP&3x;YNojke+EK$^tQ`r}f-^#3m9{>i=B?IMe!r&WLFiqTiPoNHHTp|_*>@mnI% zi;cKHOXXKUNf`fw zGbJo)`niVJmhsM}@hLl5KORCrr-q^0z-{aGtRT+hI!|!>gh%=1%bzXXfcLo=eoA+~ zWdL^2r6HMRQL+t#FU>T!(~7P;(SuFggmgu+#amzg(R{Znq9t^9#F*gOGd`~7A>rHk zPMlzWPXBE)HH?rY<2)F~ysF$UEA_7C&Fd9jwTax1R&dl!Z{>EXuWpbd$-3T$`0vlW z=0$bxh>DI)2a&)o#?@V`)M8bX*>jsHmkMzFczCJ8Ynjk>_q85n{!RSw@{1`sWiDpk ziQX9Lzf6kZjdY_$P5&({b9iOgKhsP}OFPur8H^ADN9m38)PI=+YcS^i&rqpFTV5Y7 zQ2?_ISqfJ?5~k+l=6Cq)^iRWClCZF_eg{}^+_Va(f?BIOoR2TF zJtm@|D>je2QOoB-9o0ghO=Vn3$zPQ+M$;~MLIuB-;~~?r0&Iter$)jGJ(t=SoXnuO zR-EF-YCybmmbTSB00l=^eH7;SSc~Z{ItD!4DT|X5(X>a5mAru!)0tLcll7 zJ691{vjYg18JmtiS@Q^3n*%)yR`@S;9G|l<9xeq$3yUCz8D7j*Ae|it{O2cHr8Bt^ zmEX+$^<$6~IF{^hPz=6)&mvZ)b3mD89c}9gHu@8-;PxhcJBSeu4u8(j!F~i}Xjy}c z^*N<~YK>+!_oCnJVhj}Ue@+==|FTRLs|SE%2Tsd%v9`(0->&TCC}?rT=>6!&n!X#f-oZp{`%*~F9dqP@z?&_lq?;U#_u#kxSHVS z{N7?+cl>{D6q}9c?h{jl&b$OW`fPo%)`NCoGVOJu0lSEq>7jdi{U{TJ`V8)q;N}a>XHswPFQ%#&&_&tID=IPy^+jg5s@ zJp^M~9o!13g+O_|)ER3?*d?u@F`As)k#omhV|r<(czyu}pE2jVnP&cTuC`H{Pj@uA z{d>W$(r1v}?HzPep8RcCuRD8Pd-ggoi0uU}!}KtCK&snG5#XDw2|hkPV0;RXgu|@e zoF3aPu&s!aPtfL05LawpqKdNh0-yKD%49?r?j8>X`;+Ws+;LU5Q`Tp8hWo<%^u3}V z2C{&Yg`vSy8w4=oObh$(GM11Rbs`!S2HXcDf@;k!N+}k_U~q1A?|VCi;0n%DZeCc> zF85G~BtiTk+)0#{aIm!c>vdWBW}WK5itUd> znkf#3m*81gVrfoG{iWM;#E&uOkoS%$U|1l)B>7|n7tSY1<*@vdPvsE+&$?tUE- z0@cGOH*a0O)ZX2#?d4XlP~eATa-^QCSPv36tEBuF6g*0fyeL>~iYLiVqU8=+XQ&S)(wnA>hQQ(|-6ZGyE zcr>Q-YxI8uiHH%NW+21cQ`iBOPGi=WSD>nFC+mBR=*6Z~V+pkJ+2bE&Y5huKhqu!T zW|X!OgBp$ZI`1Q{d=GU|ebsVv4}dYh^yE_;1@qcQwNYt6tazDlCW_e!RSvEvyCiPsuG zZ$gdPeYcYX$J31fiJlara~%dU@8O-VBZD>1$}o&tgqIN9`h&2At#0|=SX1ui2N%cq z`ointV`F>&G`m!grB^leK#P zG)NqcaF)(;6=rwj-F)1Eihb&rF`pfOU;G4>`=`{2Jwem%m6r49p@7e1FoBWW&YzGv z^o`4xYT19pIflJkArYzU<5jG8FSO7$^vg&~V|Ma1dHzlH&%}H1Sypj&=qGn6{2%o5 zGWVHuy|kA;NO-g2IOmO44re$2Nh<%iUtetl`91iS<$ZE06s&8Gltn|InRR|Y#Du&! 
zH!A7&V>Gv-P~yOm2%l=hvqSVA92Hm{y4Xh(HohSQ=s&sp9;WM#KH^E#Xh$wW{(H@w zX5?4*1$D^6(9Qcq`55c{!A8T&4C`w97*qMGiz0BwVYq z9}#En`bu^u+0)3qvGr% zaq|1nud(fjNIfR||M|>>Ab0NNGF+wCKa~E^ZZGKvwS?dW5m!&Nm9hbU?sacXrQBVz zJs>^s2gg|h4aEXIV+B1(V>everdodDItdZRT8xD514KbDQGM=@vIh{k_bej+JU_bt z90Us0Mi40c6(`;M@4a2czkEP3+OeZ}Zk7RG-TNozPqZ>i0|Ct|99hu-eo6G6sm@O*2jwm?A zP3C&XLKHzB5`on{OlCoa6sTc}K zuK6~&}FFqF+5nV$Y6@XN>b9u%t&t&an_tp)qP{TI}+Q{cL3jL0k%kX)i*181tb zDetS?K$bBif?4D;nu+~ixK{o5d9NrmnF^c>)R$dq4gY&J_#RXPM4sLE2TEzC2*k+O zy#bRnA=Gr}Eh`Mb8H<*jUeX=rId=UQ~P3-EZ~%4hB`ea{5R zUQRg)zJ$jtFe%uiiGUS^jDiTY%51M8^&~U2?gyPDb_})sdOEa1zPHgvEdv|9fShsYxEvagQP(y{xdEG z1Oyux9isdlP6sJIQbcBphK#ExU@ZMR(O_BsuqdfA!qrs#BfWPsq4po|U3*CAecT1S zqu9Z28zKlInJ3HMup@hY0&3=^m1ow)E?^f@Hts9z$@Tl~|KOLIAa0x<=5K~y{HWG2 zx1OtBb7pK;4oG5dCdijVbaLz#Wb|0k) z@$aMbdiK7CgL|&uc!N&?+UK^i0+)uc`@-#im6)7H0#ViEvxjp5U85$38NiC^sP|K> zViyE8B)}fAGv7yCYAbgasfm@HvLJ5XMt%UiciFgGcEG4G}4Tpx3+E2`Apr#ElboT^jIFj6TyA*XW3A!1V z)-*PKc62vsL2x~2rO4h5+CYwc{Yt!Mt^^sZ0j!{< z3Qd{4_ka%SLA%cPhK1<91f8V%_ug@nr)X)bJ7A|um8uzkm5mZWB0zz4I~DnFLmvLy z5IqU$o=q+ah@A)jlp{L$r&1+xzamFc;5)z?ivIho7uHw0)UU$Hr_cxN=n&cMgc&qb zO;`IvM3nnR;q@2*b%0F~-UAmt3JaWvB$^gOFRaq|P)g%DNy0!~%OGH*;F0}#W6Cvu zs!XJ2)~lhnuWuF|zylhs5dco{!O{imIW!qSP9335)$mJ3US6IcHEy_FP|!4<>+sR2 zhSXCXwl+mcnq9!=!*(VfuiJ5^d7JZrCkRw*Z7YCG--2FUsYM z4lh_sb(c!M(jIQNX%5~%D(sK|6;{O&1*ovz{$Ew{m;Ik*w8TB@<#n}K_+m;kGYq}N zM6!_ZOIz5CS#MiRal&mwn8p8sOlfFM-xGfd$ilQ6msc}5KZP(+#S8z9M?^U{sdsMwKpbKp z_51y@(Dk$IbBFF6qI^!vRstn?KrkSuR;Fu37;wJ4D85cYLGp>ZKkhouDx{KsGV#zt z%4zdg42*zve1NuBo9+`M3jhBh52e1Ya9|T{^l%Lc=&e`P%G3kl!x$y=PUJ!t-1Y+^7NDEcf`|3-_Aw5Y1w+Rf&>V z-S`x(I=DGGz%2yGg#LkvR;Bj=!sovL!DVx8x_!UISnHu-&_cJFK#)ul(tYd}8IAEF zssD29&t91V!P7nt0m_E6%HNLYR$bs6LfV-@eDR_0-sGd#xBC=84{M9~)N z5e6KrJ!v%zr7Mn|m8Scn05tW9`h+lS9a>u=S2#{E55ijg9{i+R;Dg&RA+QY*78WOa;N-d{>0FrwCiW1PNT)WnAht}hQdWl?d<{9+p=;(l- z0tC2;ZkQ8QJ+ajk1HK<{VB}*mvbGLzxUHZa5pcgqJ_dpAUuuBtMCwJqF_1LE-IE9Y zMh9EiL~G(8)b}w-*zb^m-ABkj6c zXmD&7N((~#J zc@NMhwfmgw>h8cbdI?a!kp>scmlHKjSs-h6(MBW+vGk@xEjDbgDGXL{3ofc3AOHRQ zJP%y*r}=f7nB#02`2Vo!Uil7u_sF8wU(dxrN>(neyqpv_F>s01Kn||1MN`k)DWf7I zGeB#32Dj6;3X~B65s&hI{&uKw%UK;6PtlNWCpkojhbIFc$*8xnmCyR7`OgBbw&YM=l5XuFUk?19+^8ScHXFhX^l-)7@s(M zpE-+3Yo7=F1-i7FJCkLf5z-*iUWb@|#7K=~=?Bya8jHxt-IqzXULo>K330*#_kAgz#naA1p2i>e2_A94r~cp2Nf-QA<1 zq!dX7LX(a_Vn9#?eEwvDh5B4*Un4vE=Y>_v0@CL?;?I;}Vz)lzJ)1k>Ol%ugufYe) zJZ0S1(p8Niab13ikcQmc+8RTM*?}sB8V_x=tcyh6)wOQ!w7O;k2-6i=z@?Pvz>$$0 zg3nUxJAooLR=KyQAK}N8U3pR2Sm`ZzLZMwM9nV0m8dHgET?jZPo~taWvc|?a)z;cX zNQXef4k0Td3(B4pm&K>|Yf<>c@mR-!XXf{&CQ`&KbTYdYZmaOW($JHm&bYt)pXObM`#eczO_YzEV>>%%$?dW6=nrW^E_Q zWZ)Dn{&V^mPFCj{f#df+A=V`DF?hV_O60xC$+)JQ9qR%pR0AO_*mQTZCmsPLAdW+< zJw9h-WVju4-~x(&){yrtxHAwXPZziu6BtKmHN2yJ2ck{(qPf5;&3IfZvuJ%aMBIFX z=l8p;ZKC*i1DK(@)Fs;xJa1)Kmn+E?KIReLOPnyG!_~skW}27rz;ISHs(L9%_ld#fI9z}uM1EiLT7r+^~&tGXFlqob1 zt7aRVx4;=3Nveg*D$g@&O&0t;@6X$u9ot%x)-7;2`i=FKC^B*%@BZh#-q9F04-YBA4L!6(M}O z&K7WlXF!7|)XIBXpL9(7W?Vb3uFh3+?J~IeJQ({7@les0aD?F<1F0j<%ahSjIZr?W zoFKg!AvFckm;}Ii*xDXAYV%ZEwyRNaV}dh+4dOCXlk)!y6i6KX0}`fo4l)z9dB4AU z0BsS5DQnoDz5_SI@1M%+c?|mpyCLGv)bt=CeXo743K$5Z2p0m_sT**hTw5R95OF+o zcRe)Xt;ZFbflyG3I8J-+xt_lURFg%N=bMg4#kZPnPbLGv>bHP2pOZ$@iq;Us9@V9s z%(?DzIzSTaPK|l<9A(-^(%8*I#dun4tiWA+q|Sb=c)EER{g6O1{zB}4Jlj5;ipd7?X%#awWX`a=0*wGXl;K<9-HI9>&KQ3_Q6**9M<%# z9SLb`Q>e|990H{rS}Zk}FQpU6dnf$HdU~Hm^;#Xc06-Uk6O>hLseMLzTU=bce%jLF z^XSo|d({|NvF4?~-P#=h{wV0iz{Ob8zAs~;ON#dm%v;ko`h7J6P|+DbBQC}9YWBXs zb#I{o>anw-K8oMr&-Uc>$n?lc99S>a_Us8j$)^VC|=-bhM)mz*-$ z5o2r5iJI|P{JWUs(mf$ z(BhM~p2{&C+8eG7NwoB&nOS|CC5A&OFTPJPftY_kBY;oYlOE_0BJ|V%5$5C|%v%E? 
[git binary patch for reference/check_prior-1.png omitted: base85-encoded PNG data, not human-readable]
diff --git a/reference/check_prior.html b/reference/check_prior.html
index 499aee4bb..4ed22eaba 100644
--- a/reference/check_prior.html
+++ b/reference/check_prior.html
@@ -71,7 +71,7 @@

diff --git a/reference/ci.html b/reference/ci.html
index 870f507e1..0895b7960 100644
--- a/reference/ci.html
+++ b/reference/ci.html
@@ -71,7 +71,7 @@
diff --git a/reference/contr.equalprior.html b/reference/contr.equalprior.html
index a9dc32371..39e9a0c8e 100644
--- a/reference/contr.equalprior.html
+++ b/reference/contr.equalprior.html
@@ -77,7 +77,7 @@
diff --git a/reference/convert_bayesian_as_frequentist.html b/reference/convert_bayesian_as_frequentist.html
index a74596a2c..558a19849 100644
--- a/reference/convert_bayesian_as_frequentist.html
+++ b/reference/convert_bayesian_as_frequentist.html
@@ -67,7 +67,7 @@
diff --git a/reference/density_at.html b/reference/density_at.html
index fd699ddba..9c724be20 100644
--- a/reference/density_at.html
+++ b/reference/density_at.html
@@ -69,7 +69,7 @@
diff --git a/reference/describe_posterior.html b/reference/describe_posterior.html
index d30554cc4..3b1fd5958 100644
--- a/reference/describe_posterior.html
+++ b/reference/describe_posterior.html
@@ -67,7 +67,7 @@
diff --git a/reference/describe_prior.html b/reference/describe_prior.html
index 01ca2ed66..a4ce7372f 100644
--- a/reference/describe_prior.html
+++ b/reference/describe_prior.html
@@ -67,7 +67,7 @@
@@ -154,9 +154,9 @@
 Examples#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 1:
-#> Chain 1: Elapsed Time: 0.048 seconds (Warm-up)
-#> Chain 1: 0.045 seconds (Sampling)
-#> Chain 1: 0.093 seconds (Total)
+#> Chain 1: Elapsed Time: 0.049 seconds (Warm-up)
+#> Chain 1: 0.047 seconds (Sampling)
+#> Chain 1: 0.096 seconds (Total)
 #> Chain 1:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2).
@@ -179,9 +179,9 @@
 Examples#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 2:
-#> Chain 2: Elapsed Time: 0.049 seconds (Warm-up)
+#> Chain 2: Elapsed Time: 0.048 seconds (Warm-up)
 #> Chain 2: 0.045 seconds (Sampling)
-#> Chain 2: 0.094 seconds (Total)
+#> Chain 2: 0.093 seconds (Total)
 #> Chain 2:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3).
@@ -204,9 +204,9 @@
 Examples#> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 3:
-#> Chain 3: Elapsed Time: 0.045 seconds (Warm-up)
+#> Chain 3: Elapsed Time: 0.046 seconds (Warm-up)
 #> Chain 3: 0.044 seconds (Sampling)
-#> Chain 3: 0.089 seconds (Total)
+#> Chain 3: 0.09 seconds (Total)
 #> Chain 3:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4).
@@ -229,9 +229,9 @@
 Examples#> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 4:
-#> Chain 4: Elapsed Time: 0.058 seconds (Warm-up)
-#> Chain 4: 0.04 seconds (Sampling)
-#> Chain 4: 0.098 seconds (Total)
+#> Chain 4: Elapsed Time: 0.059 seconds (Warm-up)
+#> Chain 4: 0.041 seconds (Sampling)
+#> Chain 4: 0.1 seconds (Total)
 #> Chain 4:
 #> Parameter Prior_Distribution Prior_Location Prior_Scale
 #> 1 (Intercept) normal 20.09062 15.067370
@@ -249,8 +249,8 @@
 Examples#>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
 #> Chain 1:
-#> Chain 1: Gradient evaluation took 6e-06 seconds
-#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds.
+#> Chain 1: Gradient evaluation took 9e-06 seconds
+#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
 #> Chain 1: Adjust your expectations accordingly!
 #> Chain 1:
 #> Chain 1:
@@ -299,8 +299,8 @@
 Examples#>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
 #> Chain 3:
-#> Chain 3: Gradient evaluation took 3e-06 seconds
-#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds.
+#> Chain 3: Gradient evaluation took 4e-06 seconds
+#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds.
 #> Chain 3: Adjust your expectations accordingly!
 #> Chain 3:
 #> Chain 3:
@@ -317,9 +317,9 @@
 Examples#> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 3:
-#> Chain 3: Elapsed Time: 0.018 seconds (Warm-up)
+#> Chain 3: Elapsed Time: 0.019 seconds (Warm-up)
 #> Chain 3: 0.016 seconds (Sampling)
-#> Chain 3: 0.034 seconds (Total)
+#> Chain 3: 0.035 seconds (Total)
 #> Chain 3:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
@@ -342,9 +342,9 @@
 Examples#> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 4:
-#> Chain 4: Elapsed Time: 0.019 seconds (Warm-up)
+#> Chain 4: Elapsed Time: 0.02 seconds (Warm-up)
 #> Chain 4: 0.018 seconds (Sampling)
-#> Chain 4: 0.037 seconds (Total)
+#> Chain 4: 0.038 seconds (Total)
 #> Chain 4:
 #> Parameter Prior_Distribution Prior_Location Prior_Scale Prior_df
 #> 1 b_Intercept student_t 19.2 5.4 3
diff --git a/reference/diagnostic_draws.html b/reference/diagnostic_draws.html
index 94de8f8fa..e7d1ca8f6 100644
--- a/reference/diagnostic_draws.html
+++ b/reference/diagnostic_draws.html
@@ -67,7 +67,7 @@
diff --git a/reference/diagnostic_posterior.html b/reference/diagnostic_posterior.html
index 4a3c50f4d..ebbc41086 100644
--- a/reference/diagnostic_posterior.html
+++ b/reference/diagnostic_posterior.html
@@ -69,7 +69,7 @@
@@ -207,8 +207,8 @@

 Examples#>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
 #> Chain 1:
-#> Chain 1: Gradient evaluation took 2.6e-05 seconds
-#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.26 seconds.
+#> Chain 1: Gradient evaluation took 1.1e-05 seconds
+#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
 #> Chain 1: Adjust your expectations accordingly!
 #> Chain 1:
 #> Chain 1:
@@ -226,8 +226,8 @@
 Examples#> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 1:
 #> Chain 1: Elapsed Time: 0.02 seconds (Warm-up)
-#> Chain 1: 0.016 seconds (Sampling)
-#> Chain 1: 0.036 seconds (Total)
+#> Chain 1: 0.017 seconds (Sampling)
+#> Chain 1: 0.037 seconds (Total)
 #> Chain 1:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
@@ -250,9 +250,9 @@
 Examples#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 2:
-#> Chain 2: Elapsed Time: 0.02 seconds (Warm-up)
+#> Chain 2: Elapsed Time: 0.021 seconds (Warm-up)
 #> Chain 2: 0.019 seconds (Sampling)
-#> Chain 2: 0.039 seconds (Total)
+#> Chain 2: 0.04 seconds (Total)
 #> Chain 2:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
@@ -275,9 +275,9 @@
 Examples#> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 3:
-#> Chain 3: Elapsed Time: 0.02 seconds (Warm-up)
-#> Chain 3: 0.018 seconds (Sampling)
-#> Chain 3: 0.038 seconds (Total)
+#> Chain 3: Elapsed Time: 0.021 seconds (Warm-up)
+#> Chain 3: 0.019 seconds (Sampling)
+#> Chain 3: 0.04 seconds (Total)
 #> Chain 3:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
@@ -300,9 +300,9 @@
 Examples#> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 4:
-#> Chain 4: Elapsed Time: 0.018 seconds (Warm-up)
-#> Chain 4: 0.02 seconds (Sampling)
-#> Chain 4: 0.038 seconds (Total)
+#> Chain 4: Elapsed Time: 0.019 seconds (Warm-up)
+#> Chain 4: 0.021 seconds (Sampling)
+#> Chain 4: 0.04 seconds (Total)
 #> Chain 4:
 diagnostic_posterior(model)
 #> Parameter Rhat ESS MCSE
diff --git a/reference/disgust.html b/reference/disgust.html
index 8f451114e..921950c40 100644
--- a/reference/disgust.html
+++ b/reference/disgust.html
@@ -67,7 +67,7 @@
diff --git a/reference/distribution.html b/reference/distribution.html
index 809713ca1..6aff15adc 100644
--- a/reference/distribution.html
+++ b/reference/distribution.html
@@ -69,7 +69,7 @@
diff --git a/reference/dot-extract_priors_rstanarm.html b/reference/dot-extract_priors_rstanarm.html
index c6ed63e69..8dd957dfa 100644
--- a/reference/dot-extract_priors_rstanarm.html
+++ b/reference/dot-extract_priors_rstanarm.html
@@ -67,7 +67,7 @@
diff --git a/reference/dot-prior_new_location.html b/reference/dot-prior_new_location.html
index 54a922503..e9aa95982 100644
--- a/reference/dot-prior_new_location.html
+++ b/reference/dot-prior_new_location.html
@@ -67,7 +67,7 @@
diff --git a/reference/dot-select_nums.html b/reference/dot-select_nums.html
index 9d3bed353..2b211bafb 100644
--- a/reference/dot-select_nums.html
+++ b/reference/dot-select_nums.html
@@ -67,7 +67,7 @@
diff --git a/reference/effective_sample.html b/reference/effective_sample.html
index cb533ff15..739222e44 100644
--- a/reference/effective_sample.html
+++ b/reference/effective_sample.html
@@ -67,7 +67,7 @@
diff --git a/reference/equivalence_test.html b/reference/equivalence_test.html
index c10c489b7..7ac3eb11b 100644
--- a/reference/equivalence_test.html
+++ b/reference/equivalence_test.html
@@ -67,7 +67,7 @@
@@ -319,8 +319,8 @@

 Examples#>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1).
 #> Chain 1:
-#> Chain 1: Gradient evaluation took 2e-05 seconds
-#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.2 seconds.
+#> Chain 1: Gradient evaluation took 2.3e-05 seconds
+#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.23 seconds.
 #> Chain 1: Adjust your expectations accordingly!
 #> Chain 1:
 #> Chain 1:
@@ -337,15 +337,15 @@
 Examples#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 1:
-#> Chain 1: Elapsed Time: 0.045 seconds (Warm-up)
-#> Chain 1: 0.046 seconds (Sampling)
-#> Chain 1: 0.091 seconds (Total)
+#> Chain 1: Elapsed Time: 0.048 seconds (Warm-up)
+#> Chain 1: 0.048 seconds (Sampling)
+#> Chain 1: 0.096 seconds (Total)
 #> Chain 1:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2).
 #> Chain 2:
-#> Chain 2: Gradient evaluation took 1e-05 seconds
-#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds.
+#> Chain 2: Gradient evaluation took 1.2e-05 seconds
+#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
 #> Chain 2: Adjust your expectations accordingly!
 #> Chain 2:
 #> Chain 2:
@@ -362,15 +362,15 @@
 Examples#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 2:
-#> Chain 2: Elapsed Time: 0.047 seconds (Warm-up)
-#> Chain 2: 0.042 seconds (Sampling)
-#> Chain 2: 0.089 seconds (Total)
+#> Chain 2: Elapsed Time: 0.049 seconds (Warm-up)
+#> Chain 2: 0.043 seconds (Sampling)
+#> Chain 2: 0.092 seconds (Total)
 #> Chain 2:
 #>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3).
 #> Chain 3:
-#> Chain 3: Gradient evaluation took 8e-06 seconds
-#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds.
+#> Chain 3: Gradient evaluation took 1.1e-05 seconds
+#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
 #> Chain 3: Adjust your expectations accordingly!
 #> Chain 3:
 #> Chain 3:
@@ -394,8 +394,8 @@
 Examples#>
 #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4).
 #> Chain 4:
-#> Chain 4: Gradient evaluation took 8e-06 seconds
-#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds.
+#> Chain 4: Gradient evaluation took 1.1e-05 seconds
+#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
 #> Chain 4: Adjust your expectations accordingly!
 #> Chain 4:
 #> Chain 4:
@@ -412,9 +412,9 @@
 Examples#> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 4:
-#> Chain 4: Elapsed Time: 0.048 seconds (Warm-up)
-#> Chain 4: 0.042 seconds (Sampling)
-#> Chain 4: 0.09 seconds (Total)
+#> Chain 4: Elapsed Time: 0.049 seconds (Warm-up)
+#> Chain 4: 0.043 seconds (Sampling)
+#> Chain 4: 0.092 seconds (Total)
 #> Chain 4:
 equivalence_test(model)
 #> Possible multicollinearity between cyl and wt (r = 0.78). This might
@@ -482,8 +482,8 @@
 Examples#>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
 #> Chain 1:
-#> Chain 1: Gradient evaluation took 9e-06 seconds
-#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
+#> Chain 1: Gradient evaluation took 6e-06 seconds
+#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds.
 #> Chain 1: Adjust your expectations accordingly!
 #> Chain 1:
 #> Chain 1:
@@ -501,8 +501,8 @@
 Examples#> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 1:
 #> Chain 1: Elapsed Time: 0.019 seconds (Warm-up)
-#> Chain 1: 0.019 seconds (Sampling)
-#> Chain 1: 0.038 seconds (Total)
+#> Chain 1: 0.02 seconds (Sampling)
+#> Chain 1: 0.039 seconds (Total)
 #> Chain 1:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
@@ -525,9 +525,9 @@
 Examples#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 2:
-#> Chain 2: Elapsed Time: 0.02 seconds (Warm-up)
-#> Chain 2: 0.02 seconds (Sampling)
-#> Chain 2: 0.04 seconds (Total)
+#> Chain 2: Elapsed Time: 0.021 seconds (Warm-up)
+#> Chain 2: 0.021 seconds (Sampling)
+#> Chain 2: 0.042 seconds (Total)
 #> Chain 2:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
@@ -550,9 +550,9 @@
 Examples#> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
 #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
 #> Chain 3:
-#> Chain 3: Elapsed Time: 0.018 seconds (Warm-up)
+#> Chain 3: Elapsed Time: 0.019 seconds (Warm-up)
 #> Chain 3: 0.014 seconds (Sampling)
-#> Chain 3: 0.032 seconds (Total)
+#> Chain 3: 0.033 seconds (Total)
 #> Chain 3:
 #>
 #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
diff --git a/reference/estimate_density.html b/reference/estimate_density.html
index 9c33f3c18..f7d72b980 100644
--- a/reference/estimate_density.html
+++ b/reference/estimate_density.html
@@ -75,7 +75,7 @@
@@ -294,8 +294,8 @@

Examples#> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: -#> Chain 1: Gradient evaluation took 1e-05 seconds -#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds. +#> Chain 1: Gradient evaluation took 7e-06 seconds +#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: @@ -312,9 +312,9 @@

Examples#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: -#> Chain 1: Elapsed Time: 0.018 seconds (Warm-up) +#> Chain 1: Elapsed Time: 0.019 seconds (Warm-up) #> Chain 1: 0.014 seconds (Sampling) -#> Chain 1: 0.032 seconds (Total) +#> Chain 1: 0.033 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2). diff --git a/reference/eti.html b/reference/eti.html index 3cae16181..f3a9e944c 100644 --- a/reference/eti.html +++ b/reference/eti.html @@ -75,7 +75,7 @@
@@ -350,9 +350,9 @@

Examples#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: -#> Chain 2: Elapsed Time: 0.019 seconds (Warm-up) -#> Chain 2: 0.016 seconds (Sampling) -#> Chain 2: 0.035 seconds (Total) +#> Chain 2: Elapsed Time: 0.02 seconds (Warm-up) +#> Chain 2: 0.017 seconds (Sampling) +#> Chain 2: 0.037 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). @@ -375,9 +375,9 @@

Examples#> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: -#> Chain 3: Elapsed Time: 0.02 seconds (Warm-up) +#> Chain 3: Elapsed Time: 0.021 seconds (Warm-up) #> Chain 3: 0.018 seconds (Sampling) -#> Chain 3: 0.038 seconds (Total) +#> Chain 3: 0.039 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4). @@ -400,9 +400,9 @@

Examples#> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: -#> Chain 4: Elapsed Time: 0.02 seconds (Warm-up) -#> Chain 4: 0.017 seconds (Sampling) -#> Chain 4: 0.037 seconds (Total) +#> Chain 4: Elapsed Time: 0.021 seconds (Warm-up) +#> Chain 4: 0.018 seconds (Sampling) +#> Chain 4: 0.039 seconds (Total) #> Chain 4: eti(model) #> Equal-Tailed Interval diff --git a/reference/hdi.html b/reference/hdi.html index b3c49d57c..9ca0209ce 100644 --- a/reference/hdi.html +++ b/reference/hdi.html @@ -73,7 +73,7 @@
@@ -329,8 +329,8 @@

Examples#> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: -#> Chain 1: Gradient evaluation took 6e-06 seconds -#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds. +#> Chain 1: Gradient evaluation took 9e-06 seconds +#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: @@ -423,8 +423,8 @@

Examples#> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.019 seconds (Warm-up) -#> Chain 4: 0.018 seconds (Sampling) -#> Chain 4: 0.037 seconds (Total) +#> Chain 4: 0.017 seconds (Sampling) +#> Chain 4: 0.036 seconds (Total) #> Chain 4: bayestestR::hdi(model) #> Highest Density Interval diff --git a/reference/map_estimate.html b/reference/map_estimate.html index 87193896f..62c753297 100644 --- a/reference/map_estimate.html +++ b/reference/map_estimate.html @@ -79,7 +79,7 @@
@@ -226,8 +226,8 @@

Examples#> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1). #> Chain 1: -#> Chain 1: Gradient evaluation took 2.1e-05 seconds -#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.21 seconds. +#> Chain 1: Gradient evaluation took 2e-05 seconds +#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.2 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: @@ -269,15 +269,15 @@

Examples#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: -#> Chain 2: Elapsed Time: 0.049 seconds (Warm-up) -#> Chain 2: 0.05 seconds (Sampling) -#> Chain 2: 0.099 seconds (Total) +#> Chain 2: Elapsed Time: 0.05 seconds (Warm-up) +#> Chain 2: 0.047 seconds (Sampling) +#> Chain 2: 0.097 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). #> Chain 3: -#> Chain 3: Gradient evaluation took 1e-05 seconds -#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds. +#> Chain 3: Gradient evaluation took 8e-06 seconds +#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: @@ -294,9 +294,9 @@

Examples#> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: -#> Chain 3: Elapsed Time: 0.046 seconds (Warm-up) +#> Chain 3: Elapsed Time: 0.047 seconds (Warm-up) #> Chain 3: 0.045 seconds (Sampling) -#> Chain 3: 0.091 seconds (Total) +#> Chain 3: 0.092 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). @@ -320,8 +320,8 @@

Examples#> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.045 seconds (Warm-up) -#> Chain 4: 0.047 seconds (Sampling) -#> Chain 4: 0.092 seconds (Total) +#> Chain 4: 0.048 seconds (Sampling) +#> Chain 4: 0.093 seconds (Total) #> Chain 4: map_estimate(model) #> MAP Estimate @@ -338,8 +338,8 @@

Examples#> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: -#> Chain 1: Gradient evaluation took 6e-06 seconds -#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds. +#> Chain 1: Gradient evaluation took 1e-05 seconds +#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: @@ -356,9 +356,9 @@

Examples#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: -#> Chain 1: Elapsed Time: 0.019 seconds (Warm-up) +#> Chain 1: Elapsed Time: 0.018 seconds (Warm-up) #> Chain 1: 0.017 seconds (Sampling) -#> Chain 1: 0.036 seconds (Total) +#> Chain 1: 0.035 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2). diff --git a/reference/mcse.html b/reference/mcse.html index befcb9cfd..0e9b8fcdc 100644 --- a/reference/mcse.html +++ b/reference/mcse.html @@ -67,7 +67,7 @@
diff --git a/reference/mediation.html b/reference/mediation.html index ccd10a8bf..bcdc53228 100644 --- a/reference/mediation.html +++ b/reference/mediation.html @@ -71,7 +71,7 @@
diff --git a/reference/model_to_priors.html b/reference/model_to_priors.html index 0205f84bd..7c0fcc3d6 100644 --- a/reference/model_to_priors.html +++ b/reference/model_to_priors.html @@ -67,7 +67,7 @@
diff --git a/reference/overlap.html b/reference/overlap.html index aed5fc37e..37a42d16f 100644 --- a/reference/overlap.html +++ b/reference/overlap.html @@ -71,7 +71,7 @@
diff --git a/reference/p_direction.html b/reference/p_direction.html index bd2b5c222..4821c2e55 100644 --- a/reference/p_direction.html +++ b/reference/p_direction.html @@ -77,7 +77,7 @@
@@ -448,8 +448,8 @@

Examples#> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: -#> Chain 1: Gradient evaluation took 6e-06 seconds -#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds. +#> Chain 1: Gradient evaluation took 5e-06 seconds +#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.05 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: @@ -466,9 +466,9 @@

Examples#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: -#> Chain 1: Elapsed Time: 0.021 seconds (Warm-up) +#> Chain 1: Elapsed Time: 0.022 seconds (Warm-up) #> Chain 1: 0.021 seconds (Sampling) -#> Chain 1: 0.042 seconds (Total) +#> Chain 1: 0.043 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2). @@ -541,9 +541,9 @@

Examples#> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: -#> Chain 4: Elapsed Time: 0.019 seconds (Warm-up) +#> Chain 4: Elapsed Time: 0.02 seconds (Warm-up) #> Chain 4: 0.017 seconds (Sampling) -#> Chain 4: 0.036 seconds (Total) +#> Chain 4: 0.037 seconds (Total) #> Chain 4: p_direction(model) #> Probability of Direction diff --git a/reference/p_map.html b/reference/p_map.html index f57cb1c2c..e48d1e4be 100644 --- a/reference/p_map.html +++ b/reference/p_map.html @@ -75,7 +75,7 @@
@@ -271,8 +271,8 @@

Examples#> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: -#> Chain 1: Gradient evaluation took 9e-06 seconds -#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds. +#> Chain 1: Gradient evaluation took 1e-05 seconds +#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: @@ -315,8 +315,8 @@

Examples#> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.019 seconds (Warm-up) -#> Chain 2: 0.017 seconds (Sampling) -#> Chain 2: 0.036 seconds (Total) +#> Chain 2: 0.018 seconds (Sampling) +#> Chain 2: 0.037 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). @@ -339,9 +339,9 @@

Examples#> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: -#> Chain 3: Elapsed Time: 0.02 seconds (Warm-up) +#> Chain 3: Elapsed Time: 0.021 seconds (Warm-up) #> Chain 3: 0.02 seconds (Sampling) -#> Chain 3: 0.04 seconds (Total) +#> Chain 3: 0.041 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4). diff --git a/reference/p_rope.html b/reference/p_rope.html index 1ba400186..8fd1bac8e 100644 --- a/reference/p_rope.html +++ b/reference/p_rope.html @@ -67,7 +67,7 @@
diff --git a/reference/p_significance.html b/reference/p_significance.html index 417c84c86..c19628a45 100644 --- a/reference/p_significance.html +++ b/reference/p_significance.html @@ -77,7 +77,7 @@
diff --git a/reference/p_to_bf.html b/reference/p_to_bf.html index 990b13740..d3c2a4d9e 100644 --- a/reference/p_to_bf.html +++ b/reference/p_to_bf.html @@ -73,7 +73,7 @@
diff --git a/reference/pd_to_p.html b/reference/pd_to_p.html index 7ae1d377e..c659a5ad4 100644 --- a/reference/pd_to_p.html +++ b/reference/pd_to_p.html @@ -67,7 +67,7 @@
diff --git a/reference/point_estimate.html b/reference/point_estimate.html index 1af746af2..54161b7d8 100644 --- a/reference/point_estimate.html +++ b/reference/point_estimate.html @@ -67,7 +67,7 @@
@@ -277,15 +277,15 @@

Examples#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: -#> Chain 1: Elapsed Time: 0.043 seconds (Warm-up) -#> Chain 1: 0.04 seconds (Sampling) -#> Chain 1: 0.083 seconds (Total) +#> Chain 1: Elapsed Time: 0.044 seconds (Warm-up) +#> Chain 1: 0.041 seconds (Sampling) +#> Chain 1: 0.085 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2). #> Chain 2: -#> Chain 2: Gradient evaluation took 1.1e-05 seconds -#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. +#> Chain 2: Gradient evaluation took 9e-06 seconds +#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: @@ -302,15 +302,15 @@

Examples#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: -#> Chain 2: Elapsed Time: 0.044 seconds (Warm-up) -#> Chain 2: 0.039 seconds (Sampling) -#> Chain 2: 0.083 seconds (Total) +#> Chain 2: Elapsed Time: 0.045 seconds (Warm-up) +#> Chain 2: 0.04 seconds (Sampling) +#> Chain 2: 0.085 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). #> Chain 3: -#> Chain 3: Gradient evaluation took 1.1e-05 seconds -#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. +#> Chain 3: Gradient evaluation took 9e-06 seconds +#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: @@ -327,15 +327,15 @@

Examples#> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: -#> Chain 3: Elapsed Time: 0.043 seconds (Warm-up) -#> Chain 3: 0.048 seconds (Sampling) -#> Chain 3: 0.091 seconds (Total) +#> Chain 3: Elapsed Time: 0.044 seconds (Warm-up) +#> Chain 3: 0.049 seconds (Sampling) +#> Chain 3: 0.093 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). #> Chain 4: -#> Chain 4: Gradient evaluation took 1.1e-05 seconds -#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. +#> Chain 4: Gradient evaluation took 9e-06 seconds +#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds. #> Chain 4: Adjust your expectations accordingly! #> Chain 4: #> Chain 4: @@ -394,8 +394,8 @@

Examples#> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: -#> Chain 1: Gradient evaluation took 6e-06 seconds -#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds. +#> Chain 1: Gradient evaluation took 8e-06 seconds +#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: @@ -437,9 +437,9 @@

Examples#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: -#> Chain 2: Elapsed Time: 0.019 seconds (Warm-up) +#> Chain 2: Elapsed Time: 0.02 seconds (Warm-up) #> Chain 2: 0.017 seconds (Sampling) -#> Chain 2: 0.036 seconds (Total) +#> Chain 2: 0.037 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). @@ -487,9 +487,9 @@

Examples#> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: -#> Chain 4: Elapsed Time: 0.019 seconds (Warm-up) -#> Chain 4: 0.018 seconds (Sampling) -#> Chain 4: 0.037 seconds (Total) +#> Chain 4: Elapsed Time: 0.02 seconds (Warm-up) +#> Chain 4: 0.019 seconds (Sampling) +#> Chain 4: 0.039 seconds (Total) #> Chain 4: point_estimate(model, centrality = "all", dispersion = TRUE) #> Point Estimate diff --git a/reference/reexports.html b/reference/reexports.html index b4d1da5cf..2baa943ad 100644 --- a/reference/reexports.html +++ b/reference/reexports.html @@ -81,7 +81,7 @@
diff --git a/reference/reshape_iterations.html b/reference/reshape_iterations.html index cc9d9a42b..817f2afd6 100644 --- a/reference/reshape_iterations.html +++ b/reference/reshape_iterations.html @@ -75,7 +75,7 @@
diff --git a/reference/rope.html b/reference/rope.html index ad0a29028..243988db6 100644 --- a/reference/rope.html +++ b/reference/rope.html @@ -69,7 +69,7 @@
diff --git a/reference/rope_range.html b/reference/rope_range.html index e6aa546be..90ea8b71d 100644 --- a/reference/rope_range.html +++ b/reference/rope_range.html @@ -69,7 +69,7 @@
@@ -159,8 +159,8 @@

Examples#> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: -#> Chain 1: Gradient evaluation took 9e-06 seconds -#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds. +#> Chain 1: Gradient evaluation took 6e-06 seconds +#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: @@ -184,8 +184,8 @@

Examples#> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2). #> Chain 2: -#> Chain 2: Gradient evaluation took 4e-06 seconds -#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds. +#> Chain 2: Gradient evaluation took 3e-06 seconds +#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: @@ -209,8 +209,8 @@

Examples#> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). #> Chain 3: -#> Chain 3: Gradient evaluation took 3e-06 seconds -#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. +#> Chain 3: Gradient evaluation took 4e-06 seconds +#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: @@ -228,8 +228,8 @@

Examples#> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.019 seconds (Warm-up) -#> Chain 3: 0.02 seconds (Sampling) -#> Chain 3: 0.039 seconds (Total) +#> Chain 3: 0.019 seconds (Sampling) +#> Chain 3: 0.038 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4). @@ -253,8 +253,8 @@

Examples#> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.019 seconds (Warm-up) -#> Chain 4: 0.017 seconds (Sampling) -#> Chain 4: 0.036 seconds (Total) +#> Chain 4: 0.016 seconds (Sampling) +#> Chain 4: 0.035 seconds (Total) #> Chain 4: rope_range(model) #> [1] -0.6026948 0.6026948 diff --git a/reference/sensitivity_to_prior.html b/reference/sensitivity_to_prior.html index 61a0d3362..8b5cacfe2 100644 --- a/reference/sensitivity_to_prior.html +++ b/reference/sensitivity_to_prior.html @@ -73,7 +73,7 @@
@@ -158,8 +158,8 @@

Examples#> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2). #> Chain 2: -#> Chain 2: Gradient evaluation took 1e-05 seconds -#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds. +#> Chain 2: Gradient evaluation took 8e-06 seconds +#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: @@ -177,14 +177,14 @@

Examples#> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.037 seconds (Warm-up) -#> Chain 2: 0.027 seconds (Sampling) -#> Chain 2: 0.064 seconds (Total) +#> Chain 2: 0.028 seconds (Sampling) +#> Chain 2: 0.065 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). #> Chain 3: -#> Chain 3: Gradient evaluation took 1e-05 seconds -#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds. +#> Chain 3: Gradient evaluation took 8e-06 seconds +#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: @@ -202,14 +202,14 @@

Examples#> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.026 seconds (Warm-up) -#> Chain 3: 0.027 seconds (Sampling) -#> Chain 3: 0.053 seconds (Total) +#> Chain 3: 0.028 seconds (Sampling) +#> Chain 3: 0.054 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). #> Chain 4: -#> Chain 4: Gradient evaluation took 1e-05 seconds -#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds. +#> Chain 4: Gradient evaluation took 2.8e-05 seconds +#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.28 seconds. #> Chain 4: Adjust your expectations accordingly! #> Chain 4: #> Chain 4: @@ -226,9 +226,9 @@

Examples#> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: -#> Chain 4: Elapsed Time: 0.028 seconds (Warm-up) -#> Chain 4: 0.031 seconds (Sampling) -#> Chain 4: 0.059 seconds (Total) +#> Chain 4: Elapsed Time: 0.029 seconds (Warm-up) +#> Chain 4: 0.032 seconds (Sampling) +#> Chain 4: 0.061 seconds (Total) #> Chain 4: sensitivity_to_prior(model) #> Parameter Sensitivity_Median @@ -263,8 +263,8 @@

Examples#> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2). #> Chain 2: -#> Chain 2: Gradient evaluation took 1.1e-05 seconds -#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. +#> Chain 2: Gradient evaluation took 1e-05 seconds +#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: @@ -281,15 +281,15 @@

Examples#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: -#> Chain 2: Elapsed Time: 0.044 seconds (Warm-up) +#> Chain 2: Elapsed Time: 0.048 seconds (Warm-up) #> Chain 2: 0.04 seconds (Sampling) -#> Chain 2: 0.084 seconds (Total) +#> Chain 2: 0.088 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). #> Chain 3: -#> Chain 3: Gradient evaluation took 1.1e-05 seconds -#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. +#> Chain 3: Gradient evaluation took 9e-06 seconds +#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: @@ -307,14 +307,14 @@

Examples#> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.045 seconds (Warm-up) -#> Chain 3: 0.043 seconds (Sampling) -#> Chain 3: 0.088 seconds (Total) +#> Chain 3: 0.042 seconds (Sampling) +#> Chain 3: 0.087 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). #> Chain 4: -#> Chain 4: Gradient evaluation took 1.1e-05 seconds -#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. +#> Chain 4: Gradient evaluation took 1e-05 seconds +#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds. #> Chain 4: Adjust your expectations accordingly! #> Chain 4: #> Chain 4: diff --git a/reference/sexit.html b/reference/sexit.html index 4d0e614da..f94747476 100644 --- a/reference/sexit.html +++ b/reference/sexit.html @@ -115,7 +115,7 @@
diff --git a/reference/sexit_thresholds.html b/reference/sexit_thresholds.html index a7352bb0a..3eb22374e 100644 --- a/reference/sexit_thresholds.html +++ b/reference/sexit_thresholds.html @@ -75,7 +75,7 @@
@@ -186,8 +186,8 @@

Examples#> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.02 seconds (Warm-up) -#> Chain 2: 0.015 seconds (Sampling) -#> Chain 2: 0.035 seconds (Total) +#> Chain 2: 0.014 seconds (Sampling) +#> Chain 2: 0.034 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). @@ -210,9 +210,9 @@

Examples#> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: -#> Chain 3: Elapsed Time: 0.02 seconds (Warm-up) -#> Chain 3: 0.017 seconds (Sampling) -#> Chain 3: 0.037 seconds (Total) +#> Chain 3: Elapsed Time: 0.019 seconds (Warm-up) +#> Chain 3: 0.016 seconds (Sampling) +#> Chain 3: 0.035 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4). diff --git a/reference/si.html b/reference/si.html index 69b1d6dc3..2f00604ea 100644 --- a/reference/si.html +++ b/reference/si.html @@ -75,7 +75,7 @@
@@ -304,8 +304,8 @@

Examples#> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1). #> Chain 1: -#> Chain 1: Gradient evaluation took 2.9e-05 seconds -#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.29 seconds. +#> Chain 1: Gradient evaluation took 2.8e-05 seconds +#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.28 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: @@ -322,15 +322,15 @@

Examples#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: -#> Chain 1: Elapsed Time: 0.188 seconds (Warm-up) -#> Chain 1: 0.192 seconds (Sampling) -#> Chain 1: 0.38 seconds (Total) +#> Chain 1: Elapsed Time: 0.193 seconds (Warm-up) +#> Chain 1: 0.197 seconds (Sampling) +#> Chain 1: 0.39 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2). #> Chain 2: -#> Chain 2: Gradient evaluation took 1.5e-05 seconds -#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.15 seconds. +#> Chain 2: Gradient evaluation took 1.6e-05 seconds +#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.16 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: @@ -347,9 +347,9 @@

Examples#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: -#> Chain 2: Elapsed Time: 0.191 seconds (Warm-up) -#> Chain 2: 0.171 seconds (Sampling) -#> Chain 2: 0.362 seconds (Total) +#> Chain 2: Elapsed Time: 0.197 seconds (Warm-up) +#> Chain 2: 0.174 seconds (Sampling) +#> Chain 2: 0.371 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). @@ -372,9 +372,9 @@

Examples#> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: -#> Chain 3: Elapsed Time: 0.168 seconds (Warm-up) -#> Chain 3: 0.251 seconds (Sampling) -#> Chain 3: 0.419 seconds (Total) +#> Chain 3: Elapsed Time: 0.17 seconds (Warm-up) +#> Chain 3: 0.254 seconds (Sampling) +#> Chain 3: 0.424 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). @@ -397,9 +397,9 @@

Examples#> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: -#> Chain 4: Elapsed Time: 0.167 seconds (Warm-up) -#> Chain 4: 0.181 seconds (Sampling) -#> Chain 4: 0.348 seconds (Total) +#> Chain 4: Elapsed Time: 0.176 seconds (Warm-up) +#> Chain 4: 0.185 seconds (Sampling) +#> Chain 4: 0.361 seconds (Total) #> Chain 4: si(stan_model, verbose = FALSE) #> Support Interval diff --git a/reference/simulate_correlation.html b/reference/simulate_correlation.html index 9c8823a82..e223698e2 100644 --- a/reference/simulate_correlation.html +++ b/reference/simulate_correlation.html @@ -67,7 +67,7 @@
diff --git a/reference/simulate_prior.html b/reference/simulate_prior.html index 7efc6d569..1b4083be2 100644 --- a/reference/simulate_prior.html +++ b/reference/simulate_prior.html @@ -67,7 +67,7 @@
diff --git a/reference/simulate_simpson.html b/reference/simulate_simpson.html index ae2304ae3..d3601352f 100644 --- a/reference/simulate_simpson.html +++ b/reference/simulate_simpson.html @@ -71,7 +71,7 @@
diff --git a/reference/spi.html b/reference/spi.html index 7109b87a7..d0b0e0e1c 100644 --- a/reference/spi.html +++ b/reference/spi.html @@ -71,7 +71,7 @@
diff --git a/reference/unupdate.html b/reference/unupdate.html index cb0c48f6d..31a7fe6f6 100644 --- a/reference/unupdate.html +++ b/reference/unupdate.html @@ -73,7 +73,7 @@
diff --git a/reference/weighted_posteriors.html b/reference/weighted_posteriors.html index 90b186aa9..3ac3d1a12 100644 --- a/reference/weighted_posteriors.html +++ b/reference/weighted_posteriors.html @@ -69,7 +69,7 @@
diff --git a/search.json b/search.json index f6aa2c979..06cb7f27c 100644 --- a/search.json +++ b/search.json @@ -1 +1 @@ -[{"path":[]},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"our-pledge","dir":"","previous_headings":"","what":"Our Pledge","title":"Contributor Covenant Code of Conduct","text":"members, contributors, leaders pledge make participation community harassment-free experience everyone, regardless age, body size, visible invisible disability, ethnicity, sex characteristics, gender identity expression, level experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, sexual identity orientation. pledge act interact ways contribute open, welcoming, diverse, inclusive, healthy community.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"our-standards","dir":"","previous_headings":"","what":"Our Standards","title":"Contributor Covenant Code of Conduct","text":"Examples behavior contributes positive environment community include: Demonstrating empathy kindness toward people respectful differing opinions, viewpoints, experiences Giving gracefully accepting constructive feedback Accepting responsibility apologizing affected mistakes, learning experience Focusing best just us individuals, overall community Examples unacceptable behavior include: use sexualized language imagery, sexual attention advances kind Trolling, insulting derogatory comments, personal political attacks Public private harassment Publishing others’ private information, physical email address, without explicit permission conduct reasonably considered inappropriate professional setting","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"enforcement-responsibilities","dir":"","previous_headings":"","what":"Enforcement Responsibilities","title":"Contributor Covenant Code of Conduct","text":"Community leaders responsible clarifying enforcing standards acceptable behavior take appropriate fair corrective action response behavior deem inappropriate, threatening, offensive, harmful. Community leaders right responsibility remove, edit, reject comments, commits, code, wiki edits, issues, contributions aligned Code Conduct, communicate reasons moderation decisions appropriate.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"scope","dir":"","previous_headings":"","what":"Scope","title":"Contributor Covenant Code of Conduct","text":"Code Conduct applies within community spaces, also applies individual officially representing community public spaces. Examples representing community include using official e-mail address, posting via official social media account, acting appointed representative online offline event.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"enforcement","dir":"","previous_headings":"","what":"Enforcement","title":"Contributor Covenant Code of Conduct","text":"Instances abusive, harassing, otherwise unacceptable behavior may reported community leaders responsible enforcement dom.makowski@gmail.com. complaints reviewed investigated promptly fairly. 
community leaders obligated respect privacy security reporter incident.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"enforcement-guidelines","dir":"","previous_headings":"","what":"Enforcement Guidelines","title":"Contributor Covenant Code of Conduct","text":"Community leaders follow Community Impact Guidelines determining consequences action deem violation Code Conduct:","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"id_1-correction","dir":"","previous_headings":"Enforcement Guidelines","what":"1. Correction","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Use inappropriate language behavior deemed unprofessional unwelcome community. Consequence: private, written warning community leaders, providing clarity around nature violation explanation behavior inappropriate. public apology may requested.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"id_2-warning","dir":"","previous_headings":"Enforcement Guidelines","what":"2. Warning","title":"Contributor Covenant Code of Conduct","text":"Community Impact: violation single incident series actions. Consequence: warning consequences continued behavior. interaction people involved, including unsolicited interaction enforcing Code Conduct, specified period time. includes avoiding interactions community spaces well external channels like social media. Violating terms may lead temporary permanent ban.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"id_3-temporary-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"3. Temporary Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: serious violation community standards, including sustained inappropriate behavior. Consequence: temporary ban sort interaction public communication community specified period time. public private interaction people involved, including unsolicited interaction enforcing Code Conduct, allowed period. Violating terms may lead permanent ban.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"id_4-permanent-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"4. Permanent Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Demonstrating pattern violation community standards, including sustained inappropriate behavior, harassment individual, aggression toward disparagement classes individuals. Consequence: permanent ban sort public interaction within community.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"attribution","dir":"","previous_headings":"","what":"Attribution","title":"Contributor Covenant Code of Conduct","text":"Code Conduct adapted Contributor Covenant, version 2.1, available https://www.contributor-covenant.org/version/2/1/code_of_conduct.html. Community Impact Guidelines inspired [Mozilla’s code conduct enforcement ladder][https://github.com/mozilla/inclusion]. answers common questions code conduct, see FAQ https://www.contributor-covenant.org/faq. Translations available https://www.contributor-covenant.org/translations.","code":""},{"path":"https://easystats.github.io/bayestestR/CONTRIBUTING.html","id":null,"dir":"","previous_headings":"","what":"Contribution Guidelines","title":"Contribution Guidelines","text":"easystats guidelines 0.1.0 people much welcome contribute code, documentation, testing suggestions. 
package aims beginner-friendly. Even ’re new open-source way life, new coding github stuff, encourage try submitting pull requests (PRs). “’d like help, ’m good enough programming yet” ’s alright, don’t worry! can always dig code, documentation tests. always typos fix, docs improve, details add, code lines document, tests add… Even smaller PRs appreciated. “’d like help, don’t know start” can look around issue section find features / ideas / bugs start working . can also open new issue just say ’re , interested helping . might ideas adapted skills. “’m sure suggestion idea worthwhile” Enough impostor syndrome! suggestions opinions good, even ’s just thought , ’s always good receive feedback. “waste time ? get credit?” Software contributions getting valued academic world, good time collaborate us! Authors substantial contributions added within authors list. ’re also keen including eventual academic publications. Anyway, starting important! enter whole new world, new fantastic point view… fork repo, changes submit . work together make best :)","code":""},{"path":"https://easystats.github.io/bayestestR/CONTRIBUTING.html","id":"code","dir":"","previous_headings":"","what":"Code","title":"Contribution Guidelines","text":"Please document comment code, purpose step (code line) stated clear understandable way. submitting change, please read R style guide particular easystats convention code-style keep consistency code formatting. Regarding style guide, note exception: put readability clarity everything. Thus, like underscores full names (prefer model_performance modelperf interpret_odds_logistic intoddslog). start code, make sure ’re dev branch (“advanced”). , can create new branch named feature (e.g., feature_lightsaber) changes. Finally, submit branch merged dev branch. , every now , dev branch merge master, new package version.","code":""},{"path":"https://easystats.github.io/bayestestR/CONTRIBUTING.html","id":"checks-to-do-before-submission","dir":"","previous_headings":"","what":"Checks to do before submission","title":"Contribution Guidelines","text":"Make sure documentation (roxygen) good Make sure add tests new functions Run: styler::style_pkg(): Automatic style formatting lintr::lint_package(): Style checks devtools::check(): General checks","code":""},{"path":"https://easystats.github.io/bayestestR/CONTRIBUTING.html","id":"useful-materials","dir":"","previous_headings":"","what":"Useful Materials","title":"Contribution Guidelines","text":"Understanding GitHub flow","code":""},{"path":"https://easystats.github.io/bayestestR/PULL_REQUEST_TEMPLATE.html","id":null,"dir":"","previous_headings":"","what":"Description","title":"Description","text":"PR aims adding feature…","code":""},{"path":"https://easystats.github.io/bayestestR/PULL_REQUEST_TEMPLATE.html","id":"proposed-changes","dir":"","previous_headings":"","what":"Proposed Changes","title":"Description","text":"changed foo function …","code":""},{"path":"https://easystats.github.io/bayestestR/SUPPORT.html","id":null,"dir":"","previous_headings":"","what":"Getting help with {bayestestR}","title":"Getting help with {bayestestR}","text":"Thanks using bayestestR. filing issue, places explore pieces put together make process smooth possible. Start making minimal reproducible example using reprex package. haven’t heard used reprex , ’re treat! Seriously, reprex make R-question-asking endeavors easier (pretty insane ROI five ten minutes ’ll take learn ’s ). additional reprex pointers, check Get help! resource used tidyverse team. 
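For instance, a minimal sketch of what such a reprex might look like; the p_direction() call here is a hypothetical stand-in for whatever code triggers the problem:

# Render a minimal reproducible example with {reprex}; the bayestestR
# call below is a hypothetical placeholder, not a known bug.
library(reprex)
reprex({
  library(bayestestR)
  posterior <- rnorm(1000, mean = 0.4, sd = 0.2)
  p_direction(posterior)
})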
Armed reprex, next step figure ask: ’s question: start StackOverflow. people answer questions. ’s bug: ’re right place, file issue. ’re sure: let’s discuss try figure ! problem bug feature request, can easily return report . opening new issue, sure search issues pull requests make sure bug hasn’t reported /already fixed development version. default, search pre-populated :issue :open. can edit qualifiers (e.g. :pr, :closed) needed. example, ’d simply remove :open search issues repo, open closed. Thanks help!","code":""},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"the-bayes-factor","dir":"Articles","previous_headings":"","what":"The Bayes Factor","title":"Bayes Factors","text":"Bayes Factors (BFs) indices relative evidence one “model” another. role hypothesis testing index, Bayesian framework p-value classical/frequentist framework. significance-based testing, p-values used assess unlikely observed data null hypothesis true, Bayesian model selection framework, Bayes factors assess evidence different models, model corresponding specific hypothesis. According Bayes’ theorem, can update prior probabilities model M (P(M)) posterior probabilities (P(M|D)) observing datum D accounting probability observing datum given model (P(D|M), also known likelihood): P(M|D) = \\frac{P(D|M)\\times P(M)}{P(D)} Using equation, can compare probability-odds two models: \\underbrace{\\frac{P(M_1|D)}{P(M_2|D)}}_{\\text{Posterior Odds}} = \\underbrace{\\frac{P(D|M_1)}{P(D|M_2)}}_{\\text{Likelihood Ratio}} \\times \\underbrace{\\frac{P(M_1)}{P(M_2)}}_{\\text{Prior Odds}} likelihood ratio (middle term) Bayes factor - factor prior odds updated observing data posterior odds. Thus, Bayes factors can calculated two ways: ratio quantifying relative probability observed data two models. (contexts, probabilities also called marginal likelihoods.) BF_{12}=\\frac{P(D|M_1)}{P(D|M_2)} degree shift prior beliefs relative credibility two models (since can computed dividing posterior odds prior odds). BF_{12}=\\frac{Posterior~Odds_{12}}{Prior~Odds_{12}} provide functions computing Bayes factors two different contexts: testing single parameters (coefficients) within model comparing statistical models ","code":""},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"bayesfactor_parameters","dir":"Articles","previous_headings":"","what":"1. Testing Models’ Parameters with Bayes Factors","title":"Bayes Factors","text":"Bayes factor single parameter can used answer question: “Given observed data, null hypothesis absence effect become less credible?” Bayesian analysis Students’ (1908) Sleep data set. Let’s use Students’ (1908) Sleep data set (data(\"sleep\")). data comes study participants administered drug researchers assessed extra hours sleep participants slept afterwards. try answering following research question using Bayes factors: Given observed data, hypothesis drug (effect group) effect numbers hours extra sleep (variable extra) become less credible? boxplot suggests second group higher number hours extra sleep. much? Let’s fit simple Bayesian linear model, prior b_{group} \\sim N(0, 3) (.e. 
prior follows Gaussian/normal distribution mean = 0 SD = 3), using rstanarm package:","code":"set.seed(123) library(rstanarm) model <- stan_glm( formula = extra ~ group, data = sleep, prior = normal(0, 3, autoscale = FALSE), chains = 10, iter = 5000, warmup = 1000 )"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"testing-against-a-null-region","dir":"Articles","previous_headings":"1. Testing Models’ Parameters with Bayes Factors","what":"Testing against a null-region","title":"Bayes Factors","text":"One way operationalizing null-hypothesis setting null region, effect falls within interval practically equivalent null (Kruschke, 2010). case, means defining range effects consider equal drug effect . can compute prior probability drug’s effect falling outside null-region, prior probability drug’s effect falling within null-region get prior odds. Say effect smaller hour extra sleep practically equivalent effect , define prior odds : \frac {P(b_{drug} \notin [-1, 1])} {P(b_{drug} \in [-1, 1])} Given prior normal distribution centered 0 hours scale (SD) 3 hours, priors look like : prior odds 2.8. looking posterior distribution, can now compute posterior probability drug’s effect falling outside null-region, posterior probability drug’s effect falling within null-region get posterior odds: \frac {P(b_{drug} \notin [-1,1] | Data)} {P(b_{drug} \in [-1,1] | Data)} can see center posterior distribution shifted away 0 (~1.5). Likewise, posterior odds 2.5, seems favor effect non-null. , mean data support alternative null? Hard say, since even data observed, priors already favored alternative - need take priors account ! Let’s compute Bayes factor change prior odds posterior odds: BF_{10} = Odds_{posterior} / Odds_{prior} = 0.9! BF indicates data provide 1/0.9 = 1.1 times evidence effect drug practically nothing drug clinically significant effect. Thus, although center distribution shifted away 0, posterior distribution seems favor non-null effect drug, seems given observed data, probability mass overall shifted closer null interval, making values null interval probable! (see Non-overlapping Hypotheses Morey & Rouder, 2011) can achieved function bayesfactor_parameters(), computes Bayes factor model’s parameters: can also plot using see package: Note interpretation guides Bayes factors can found effectsize package:","code":"My_first_BF <- bayesfactor_parameters(model, null = c(-1, 1)) My_first_BF > Bayes Factor (Null-Interval) > > Parameter | BF > ------------------- > (Intercept) | 0.102 > group2 | 0.883 > > * Evidence Against The Null: [-1.000, 1.000] library(see) plot(My_first_BF) effectsize::interpret_bf(exp(My_first_BF$log_BF[2]), include_value = TRUE) > [1] \"anecdotal evidence (BF = 1/1.13) against\" > (Rules: jeffreys1961)"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"testing-against-the-point-null-0","dir":"Articles","previous_headings":"1. Testing Models’ Parameters with Bayes Factors","what":"Testing against the point-null (0)","title":"Bayes Factors","text":"don’t know region practically equivalent 0? just want null exactly zero? 
problem - width null region shrinks point, change prior probability posterior probability null can estimated comparing density null value two distributions.1 ratio called Savage-Dickey ratio, added benefit also approximation Bayes factor comparing estimated model model parameter interest restricted point-null: “[…] Bayes factor H_0 versus H_1 obtained analytically integrating model parameter \\theta. However, Bayes factor may likewise obtained considering H_1, dividing height posterior \\theta height prior \\theta, point interest.” (Wagenmakers, Lodewyckx, Kuriyal, & Grasman, 2010)","code":"My_second_BF <- bayesfactor_parameters(model, null = 0) My_second_BF > Bayes Factor (Savage-Dickey density ratio) > > Parameter | BF > ---------------- > group2 | 1.24 > > * Evidence Against The Null: 0 plot(My_second_BF)"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"directional-hypotheses","dir":"Articles","previous_headings":"1. Testing Models’ Parameters with Bayes Factors","what":"Directional hypotheses","title":"Bayes Factors","text":"can also compute Bayes factors directional hypotheses (“one sided”), prior hypotheses direction effect. can done setting order restriction prior distribution (results order restriction posterior distribution) alternative (Morey & Wagenmakers, 2014). example, prior hypothesis drug positive effect number sleep hours, alternative restricted region right null (point interval): can see, given priori assumption direction effect (effect positive), presence effect 2.8 times likely absence effect, given observed data (data 2.8 time probable H_1 H_0). indicates , given observed data, priori hypothesis, posterior mass shifted away null value, giving evidence null (note Bayes factor 2.8 still considered quite weak evidence). Thanks flexibility Bayesian framework, also possible compute Bayes factor dividing hypotheses - , null alternative complementary, opposing one-sided hypotheses (Morey & Wagenmakers, 2014). example, compared alternative H_A: drug positive effects null H_0: drug effect. can also compare instead alternative complementary hypothesis: H_{-}: drug negative effects. can see test produces even stronger (conclusive) evidence one-sided vs. point-null test! indeed, rule thumb, specific two hypotheses , distinct one another, power Bayes factor ! 2 Thanks transitivity Bayes factors, can also use bayesfactor_parameters() compare even types hypotheses, trickery. example: \\underbrace{BF_{0\") test_group2_right > Bayes Factor (Savage-Dickey density ratio) > > Parameter | BF > ---------------- > group2 | 2.37 > > * Evidence Against The Null: 0 > * Direction: Right-Sided test plot(test_group2_right) test_group2_dividing <- bayesfactor_parameters(model, null = c(-Inf, 0)) test_group2_dividing > Bayes Factor (Null-Interval) > > Parameter | BF > ----------------- > group2 | 20.53 > > * Evidence Against The Null: [-Inf, 0.000] plot(test_group2_dividing)"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"si","dir":"Articles","previous_headings":"1. Testing Models’ Parameters with Bayes Factors","what":"Support intervals","title":"Bayes Factors","text":"far ’ve seen Bayes factors quantify relative support competing hypotheses. However, can also ask: Upon observing data, credibility parameter’s values increased (decreased)? example, ’ve seen point null become somewhat less credible observing data, might also ask values gained credibility given observed data?. 
resulting range values called support interval indicates values supported data (Wagenmakers, Gronau, Dablander, & Etz, 2018). can comparing prior posterior distributions checking posterior densities higher prior densities. bayestestR, can achieved si() function: argument BF = 1 indicates want interval contain values gained support factor least 1 (, support ). Visually, can see credibility values within interval increased (likewise credibility values outside interval decreased): can also see support interval (just barely) excludes point null (0) - whose credibility ’ve already seen decreased observed data. emphasizes relationship support interval Bayes factor: “interpretation intervals analogous frequentist confidence interval contains parameter values rejected tested level \\alpha. instance, BF = 1/3 support interval encloses values theta updating factor stronger 3 .” (Wagenmakers et al., 2018) Thus, choice BF (level support interval indicate) depends want interval represent: BF = 1 contains values whose credibility merely decreased observing data. BF > 1 contains values received impressive support data. BF < 1 contains values whose credibility impressively decreased observing data. Testing values outside interval produce Bayes factor larger 1/BF support alternative.","code":"my_first_si <- si(model, BF = 1) print(my_first_si) > Support Interval > > Parameter | BF = 1 SI | Effects | Component > --------------------------------------------------- > (Intercept) | [-0.44, 1.99] | fixed | conditional > group2 | [ 0.16, 3.04] | fixed | conditional plot(my_first_si)"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"bayesfactor_models","dir":"Articles","previous_headings":"","what":"2. Comparing Models using Bayes Factors","title":"Bayes Factors","text":"Bayes factors can also used compare statistical models. statistical context, answer following question: model observed data probable? words, model likely produced observed data? usually done comparing marginal likelihoods two models. case, Bayes factor measure relative evidence one model . Let’s use Bayes factors model comparison find model best describes length iris’ sepal using iris data set.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"for-bayesian-models-brms-and-rstanarm","dir":"Articles","previous_headings":"2. Comparing Models using Bayes Factors","what":"For Bayesian models (brms and rstanarm)","title":"Bayes Factors","text":"Note: order compute Bayes factors Bayesian models, non-default arguments must added upon fitting: brmsfit models must fitted save_pars = save_pars(= TRUE) stanreg models must fitted defined diagnostic_file. Let’s first fit 5 Bayesian regressions brms predict Sepal.Length: can now compare models bayesfactor_models() function, using denominator argument specify model rest models compared (case, intercept-model): can see Species + Petal.Length model best model - BF=2\\times 10^{53} compared null (intercept ). Due transitive property Bayes factors, can easily change reference model full Species * Petal.Length model: can see, Species + Petal.Length model also favored compared Species * Petal.Length model, though several orders magnitude less - supported 23.38 times !) can also change reference model Species model: Notice , Bayesian framework compared models need nested models, happened compared Petal.Length-model Species-model (something done frequentist framework, compared models must nested one another). 
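Because every model here is compared against the same denominator, re-referencing a comparison is plain division; a small arithmetic sketch using the marginal-likelihood BFs reported in the output below:

# Transitivity of Bayes factors: BF(m1 vs m2) = BF(m1 vs m0) / BF(m2 vs m0).
BF_m1_m0 <- 1.27e44 # [1] Petal.Length vs the intercept-only model
BF_m2_m0 <- 8.34e27 # [2] Species vs the intercept-only model
BF_m1_m0 / BF_m2_m0 # ~1.5e+16, cf. the 1.53e+16 shown when re-referencing to Species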
can also get matrix Bayes factors pairwise model comparisons: NOTE: order correctly precisely estimate Bayes Factors, always need 4 P’s: Proper Priors 3, Plentiful Posterior 4.","code":"library(brms) # intercept only model m0 <- brm(Sepal.Length ~ 1, data = iris, prior = set_prior(\"student_t(3, 6, 6)\", class = \"Intercept\") + set_prior(\"student_t(3, 0, 6)\", class = \"sigma\"), chains = 10, iter = 5000, warmup = 1000, save_pars = save_pars(all = TRUE) ) # Petal.Length only m1 <- brm(Sepal.Length ~ Petal.Length, data = iris, prior = set_prior(\"student_t(3, 6, 6)\", class = \"Intercept\") + set_prior(\"student_t(3, 0, 6)\", class = \"sigma\") + set_prior(\"normal(0, 1)\", coef = \"Petal.Length\"), chains = 10, iter = 5000, warmup = 1000, save_pars = save_pars(all = TRUE) ) # Species only m2 <- brm(Sepal.Length ~ Species, data = iris, prior = set_prior(\"student_t(3, 6, 6)\", class = \"Intercept\") + set_prior(\"student_t(3, 0, 6)\", class = \"sigma\") + set_prior(\"normal(0, 3)\", coef = c(\"Speciesversicolor\", \"Speciesvirginica\")), chains = 10, iter = 5000, warmup = 1000, save_pars = save_pars(all = TRUE) ) # Species + Petal.Length model m3 <- brm(Sepal.Length ~ Species + Petal.Length, data = iris, prior = set_prior(\"student_t(3, 6, 6)\", class = \"Intercept\") + set_prior(\"student_t(3, 0, 6)\", class = \"sigma\") + set_prior(\"normal(0, 1)\", coef = \"Petal.Length\") + set_prior(\"normal(0, 3)\", coef = c(\"Speciesversicolor\", \"Speciesvirginica\")), chains = 10, iter = 5000, warmup = 1000, save_pars = save_pars(all = TRUE) ) # full interactive model m4 <- brm(Sepal.Length ~ Species * Petal.Length, data = iris, prior = set_prior(\"student_t(3, 6, 6)\", class = \"Intercept\") + set_prior(\"student_t(3, 0, 6)\", class = \"sigma\") + set_prior(\"normal(0, 1)\", coef = \"Petal.Length\") + set_prior(\"normal(0, 3)\", coef = c(\"Speciesversicolor\", \"Speciesvirginica\")) + set_prior(\"normal(0, 2)\", coef = c(\"Speciesversicolor:Petal.Length\", \"Speciesvirginica:Petal.Length\")), chains = 10, iter = 5000, warmup = 1000, save_pars = save_pars(all = TRUE) ) library(bayestestR) comparison <- bayesfactor_models(m1, m2, m3, m4, denominator = m0) comparison > Bayes Factors for Model Comparison > > Model BF > [1] Petal.Length 1.27e+44 > [2] Species 8.34e+27 > [3] Species + Petal.Length 2.29e+53 > [4] Species * Petal.Length 9.79e+51 > > * Against Denominator: [5] (Intercept only) > * Bayes Factor Type: marginal likelihoods (bridgesampling) update(comparison, reference = 4) > Bayes Factors for Model Comparison > > Model BF > [1] Petal.Length 1.30e-08 > [2] Species 8.52e-25 > [3] Species + Petal.Length 23.38 > [5] (Intercept only) 1.02e-52 > > * Against Denominator: [4] Species * Petal.Length > * Bayes Factor Type: marginal likelihoods (bridgesampling) update(comparison, reference = 2) > Bayes Factors for Model Comparison > > Model BF > [1] Petal.Length 1.53e+16 > [3] Species + Petal.Length 2.74e+25 > [4] Species * Petal.Length 1.17e+24 > [5] (Intercept only) 1.20e-28 > > * Against Denominator: [2] Species > * Bayes Factor Type: marginal likelihoods (bridgesampling) as.matrix(comparison) > # Bayes Factors for Model Comparison > > Numerator > Denominator > > | [1] | [2] | [3] | [4] | [5] > --------------------------------------------------------------------------------- > [1] Petal.Length | 1 | 6.54e-17 | 1.80e+09 | 7.68e+07 | 7.85e-45 > [2] Species | 1.53e+16 | 1 | 2.74e+25 | 1.17e+24 | 1.20e-28 > [3] Species + Petal.Length | 5.57e-10 | 3.64e-26 | 1 | 0.043 | 4.37e-54 > [4] Species * 
Petal.Length | 1.30e-08 | 8.52e-25 | 23.38 | 1 | 1.02e-52 > [5] (Intercept only) | 1.27e+44 | 8.34e+27 | 2.29e+53 | 9.79e+51 | 1"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"for-frequentist-models-via-the-bic-approximation","dir":"Articles","previous_headings":"2. Comparing Models using Bayes Factors","what":"For Frequentist models via the BIC approximation","title":"Bayes Factors","text":"It is also possible to compute Bayes factors for the comparison of frequentist models. This is done by comparing BIC measures, allowing a Bayesian comparison of nested as well as non-nested frequentist models (Wagenmakers, 2007). Let's try it out on some linear mixed-effects models:","code":"library(lme4) # define models with increasing complexity m0 <- lmer(Sepal.Length ~ (1 | Species), data = iris) m1 <- lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris) m2 <- lmer(Sepal.Length ~ Petal.Length + (Petal.Length | Species), data = iris) m3 <- lmer(Sepal.Length ~ Petal.Length + Petal.Width + (Petal.Length | Species), data = iris) m4 <- lmer(Sepal.Length ~ Petal.Length * Petal.Width + (Petal.Length | Species), data = iris) # model comparison bayesfactor_models(m1, m2, m3, m4, denominator = m0) > Bayes Factors for Model Comparison > > Model BF > [m1] Petal.Length + (1 | Species) 3.82e+25 > [m2] Petal.Length + (Petal.Length | Species) 4.96e+24 > [m3] Petal.Length + Petal.Width + (Petal.Length | Species) 4.03e+23 > [m4] Petal.Length * Petal.Width + (Petal.Length | Species) 9.06e+22 > > * Against Denominator: [m0] 1 + (1 | Species) > * Bayes Factor Type: BIC approximation"}
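The BIC approximation behind these Bayes factors is simple enough to check by hand, since BF10 ≈ exp((BIC0 - BIC1) / 2). A minimal sketch, using two plain lm() models for brevity (not the mixed models above); bayestestR also exposes this conversion as bic_to_bf():

m0 <- lm(Sepal.Length ~ 1, data = iris)
m1 <- lm(Sepal.Length ~ Petal.Length, data = iris)

# BF of m1 over m0 from the BIC difference
exp((BIC(m0) - BIC(m1)) / 2)

# The same conversion via bayestestR
bayestestR::bic_to_bf(BIC(m1), denominator = BIC(m0))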
,{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"bayesfactor_restricted","dir":"Articles","previous_headings":"2. Comparing Models using Bayes Factors","what":"Order restricted models","title":"Bayes Factors","text":"As stated above when discussing one-sided hypothesis tests, we can create new models by imposing order restrictions on a given model. For example, consider the following model, in which we predict the length of an iris' sepal from the length of its petal, as well as from its species, with these priors: - b_{petal} \\sim N(0,2) - b_{versicolors}\\ \\&\\ b_{virginica} \\sim N(0,1.2) These priors are unrestricted - that is, all values between -\\infty and \\infty of all parameters in the model have some non-zero credibility (no matter how small; this is true for both the prior and the posterior distribution). Subsequently, a priori the parameters relating to the iris species can have any ordering: a priori setosa can have larger sepals than virginica, but it is also possible for virginica to have larger sepals than setosa! Does it make sense to let the priors cover all these possibilities? That depends on our prior knowledge and hypotheses. For example, even a novice botanist would assume that it is unlikely that petal length is negatively associated with sepal length - an iris with longer petals is likely a larger flower, and thus should also have a longer sepal. An expert botanist might perhaps assume that setosas have smaller sepals than both versicolors and virginica. Such hypotheses can be formulated as restricted priors (Morey, 2015; Morey & Rouder, 2011): For the novice botanist: b_{petal} > 0 For the expert botanist: b_{versicolors} > 0\\ \\&\\ b_{virginica} > 0 By testing these restrictions on the prior and posterior samples, we can see how the probabilities of the restricted distributions change after observing the data. This can be achieved with bayesfactor_restricted(), which computes the Bayes factor of a restricted model vs. the unrestricted model. Let's first specify the restrictions as logical conditions: And now test the hypotheses: We can see that the novice botanist's hypothesis gets a Bayes factor of ~2, indicating that the data provide twice as much evidence for a model in which petal length is restricted to be positively associated with sepal length as for a model without this restriction. And the expert botanist? He seems to have failed miserably, with a BF favoring the unrestricted model many, many times over. How is this possible? It seems that when controlling for petal length, versicolor and virginica actually have shorter sepals! Note that these Bayes factors compare the restricted model to the unrestricted model. If we wanted to compare the restricted model to the null model, we would use the transitive property of Bayes factors like so: BF_{\\text{restricted vs. NULL}} = \\frac {BF_{\\text{restricted vs. un-restricted}}} {BF_{\\text{NULL vs. un-restricted}}} Because these restrictions are on the prior distribution, they are appropriate only for testing pre-planned (a priori) hypotheses, and should not be used for post hoc comparisons (Morey, 2015). NOTE: See the Specifying Correct Priors for Factors with More Than 2 Levels appendix below.","code":"iris_model <- stan_glm(Sepal.Length ~ Species + Petal.Length, data = iris, prior = normal(0, c(2, 1.2, 1.2), autoscale = FALSE), chains = 10, iter = 5000, warmup = 1000 ) botanist_hypotheses <- c( \"Petal.Length > 0\", \"(Speciesversicolor > 0) & (Speciesvirginica > 0)\" ) model_prior <- unupdate(iris_model) botanist_BFs <- bayesfactor_restricted( posterior = iris_model, prior = model_prior, hypothesis = botanist_hypotheses ) print(botanist_BFs) > Bayes Factor (Order-Restriction) > > Hypothesis P(Prior) P(Posterior) BF > Petal.Length > 0 0.50 1 1.99 > (Speciesversicolor > 0) & (Speciesvirginica > 0) 0.25 0 0.00e+00 > > * Bayes factors for the restricted model vs. the un-restricted model."},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"bayesian-model-averaging","dir":"Articles","previous_headings":"","what":"3. Bayesian Model Averaging","title":"Bayes Factors","text":"In the previous section, we discussed the direct comparison of two models to determine whether an effect is supported by the data. However, in many cases there are too many models to consider, or perhaps it is not straightforward which models should be compared to determine whether the effect is supported by the data. For such cases, we can use Bayesian model averaging (BMA) to determine the support provided by the data for a parameter or term across many models.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"bayesfactor_inclusion","dir":"Articles","previous_headings":"3. Bayesian Model Averaging","what":"Inclusion Bayes factors","title":"Bayes Factors","text":"Inclusion Bayes factors answer the question: Are the observed data more probable under models with a particular predictor than under models without that particular predictor? In other words, on average, are models with predictor X more likely to have produced the observed data than models without predictor X? Since each model has a prior probability, it is possible to sum the prior probabilities of all models that include the predictor of interest (the prior inclusion probability) and of all models that do not include it (the prior exclusion probability). Once the data are observed and each model is assigned a posterior probability, we can similarly sum the posterior models' probabilities to obtain the posterior inclusion probability and the posterior exclusion probability. The change from the prior inclusion odds to the posterior inclusion odds is the Inclusion Bayes factor ["BF_{Inclusion}"; Clyde, Ghosh, & Littman (2011)]. Let's use the brms example from above: If we examine the interaction term's inclusion Bayes factor, we can see that across all 5 models, a model with the term is on average (1/0.171 =) 5.84 times less supported than a model without the term. Note that Species, a factor that is represented in the model by several parameters, gets a single Bayes factor - inclusion Bayes factors are given per predictor!
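The arithmetic behind an inclusion Bayes factor can be sketched by hand: sum the probabilities of the models with and without the predictor, form the inclusion odds, and take their ratio. A minimal illustration with made-up model probabilities (not the values from the comparison above):

# Four models with uniform prior probabilities
prior_prob <- c(null = 0.25, A = 0.25, B = 0.25, A_B = 0.25)
post_prob <- c(null = 0.05, A = 0.40, B = 0.10, A_B = 0.45)  # hypothetical

# Inclusion probability of predictor A = total probability of models containing A
prior_incl <- sum(prior_prob[c("A", "A_B")])
post_incl <- sum(post_prob[c("A", "A_B")])

# Inclusion BF = posterior inclusion odds / prior inclusion odds
(post_incl / (1 - post_incl)) / (prior_incl / (1 - prior_incl))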
We can also compare only matched models - such that the averaging is done only across models that (1) do not include any interactions with the predictor of interest, and (2) for an interaction predictor, only across models that contain the main effects of the terms the interaction is comprised of (see here for an explanation of why you might want to do this).","code":"bayesfactor_inclusion(comparison) > Inclusion Bayes Factors (Model Averaged) > > P(prior) P(posterior) Inclusion BF > Petal.Length 0.60 1.00 1.91e+25 > Species 0.60 1.00 1.25e+09 > Petal.Length:Species 0.20 0.04 0.171 > > * Compared among: all models > * Priors odds: uniform-equal bayesfactor_inclusion(comparison, match_models = TRUE) > Inclusion Bayes Factors (Model Averaged) > > P(prior) P(posterior) Inclusion BF > Petal.Length 0.40 0.96 2.74e+25 > Species 0.40 0.96 1.80e+09 > Petal.Length:Species 0.20 0.04 0.043 > > * Compared among: matched models only > * Priors odds: uniform-equal"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"comparison-with-jasp","dir":"Articles","previous_headings":"3. Bayesian Model Averaging","what":"Comparison with JASP","title":"Bayes Factors","text":"bayesfactor_inclusion() is meant to provide inclusion Bayes factors per predictor, similar to JASP's Effects option. Let's compare the two. Note that for this comparison we use the BayesFactor package, which is what JASP uses under the hood. (Note that this package uses a different model-parameterization and different default prior-specifications compared to the Stan-based packages.) Across all models: Across matched models: With Nuisance Effects: We'll add dose to the null model in JASP, and do the same in R:","code":"library(BayesFactor) data(ToothGrowth) ToothGrowth$dose <- as.factor(ToothGrowth$dose) BF_ToothGrowth <- anovaBF(len ~ dose * supp, ToothGrowth, progress = FALSE) bayesfactor_inclusion(BF_ToothGrowth) > Inclusion Bayes Factors (Model Averaged) > > P(prior) P(posterior) Inclusion BF > supp 0.60 1.00 141.02 > dose 0.60 1.00 3.21e+14 > dose:supp 0.20 0.71 10.00 > > * Compared among: all models > * Priors odds: uniform-equal bayesfactor_inclusion(BF_ToothGrowth, match_models = TRUE) > Inclusion Bayes Factors (Model Averaged) > > P(prior) P(posterior) Inclusion BF > supp 0.40 0.28 59.71 > dose 0.40 0.29 1.38e+14 > dose:supp 0.20 0.71 2.54 > > * Compared among: matched models only > * Priors odds: uniform-equal BF_ToothGrowth_against_dose <- BF_ToothGrowth[3:4] / BF_ToothGrowth[2] # OR: # update(bayesfactor_models(BF_ToothGrowth), # subset = c(4, 5), # reference = 3) BF_ToothGrowth_against_dose > Bayes factor analysis > -------------- > [1] supp + dose : 60 ±4.6% > [2] supp + dose + supp:dose : 152 ±1.1% > > Against denominator: > len ~ dose > --- > Bayes factor type: BFlinearModel, JZS bayesfactor_inclusion(BF_ToothGrowth_against_dose) > Inclusion Bayes Factors (Model Averaged) > > P(prior) P(posterior) Inclusion BF > dose 1.00 1.00 > supp 0.67 1.00 105.77 > dose:supp 0.33 0.71 5.00 > > * Compared among: all models > * Priors odds: uniform-equal"}
,{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"weighted_posteriors","dir":"Articles","previous_headings":"3. Bayesian Model Averaging","what":"Averaging posteriors","title":"Bayes Factors","text":"Similar to how we can average the evidence for a predictor across models, we can also average the posterior estimate across models. This is useful in situations where Bayes factors seem to support a null effect, yet the HDI of the alternative excludes the null value (also see si() described above). For example, looking at the Motor Trend Car Road Tests data (data(mtcars)), we would naturally predict miles/gallon (mpg) from transmission type (am) and weight (wt), but what about the number of carburetors (carb)? Is it a good predictor? We can determine this by comparing the following models: It seems that the model without the carb predictor is 1/BF = 1.2 times more likely than the model with the carb predictor. We might then assume that in the latter model, the HDI would include the point-null value of a 0 effect, to also indicate the credibility of the null in the posterior. However, this is not the case: How can this be? By estimating the HDI of the effect of carb in the full model, we are acting under the assumption that this model is correct. However, as we've just seen, the two models are practically tied, so why limit the estimation of the effect to just one model? (Bergh, Haaf, Ly, Rouder, & Wagenmakers, 2019). Using Bayesian model averaging, we can combine the posterior samples from several models, weighted by the models' marginal likelihoods (as done via the bayesfactor_models() function). If a parameter is part of some of the models but is missing from others, it is assumed to be fixed at 0 (this can also be seen as the method applying shrinkage to the estimates). The result is a posterior distribution across several models, which we can now treat like any posterior distribution and, for example, estimate its HDI. In bayestestR, this can be done with the weighted_posteriors() function: We can see that, across the models under consideration, the posterior of the carb effect is almost equally weighted between the alternative model and the null model - represented by about half the posterior mass being concentrated at 0 - which makes sense, as both models were almost equally supported by the data. We can also see that, across both models, the HDI does now contain 0. We have thus resolved the conflict between the Bayes factor and the HDI (Rouder, Haaf, & Vandekerckhove, 2018)! Note: Parameters might play different roles across different models. For example, the parameter A plays a different role in the model Y ~ A + B (where it is a main effect) than in the model Y ~ A + B + A:B (where it is a simple effect). In many cases, centering the predictors (mean-subtracting continuous variables, using orthogonal coding for factors) can reduce this issue.","code":"mod <- stan_glm(mpg ~ wt + am, data = mtcars, prior = normal(0, c(10, 10), autoscale = FALSE), chains = 10, iter = 5000, warmup = 1000, diagnostic_file = file.path(tempdir(), \"df1.csv\"), refresh = 0 ) mod_carb <- stan_glm(mpg ~ wt + am + carb, data = mtcars, prior = normal(0, c(10, 10, 20), autoscale = FALSE), chains = 10, iter = 5000, warmup = 1000, diagnostic_file = file.path(tempdir(), \"df0.csv\"), refresh = 0 ) BF_carb <- bayesfactor_models(mod_carb, denominator = mod, verbose = FALSE) BF_carb > Bayes Factors for Model Comparison > > Model BF > [1] wt + am + carb 0.820 > > * Against Denominator: [2] wt + am > * Bayes Factor Type: marginal likelihoods (bridgesampling) hdi(mod_carb, ci = 0.95) > Highest Density Interval > > Parameter | 95% HDI > ---------------------------- > (Intercept) | [27.95, 40.05] > wt | [-5.49, -1.72] > am | [-0.75, 6.00] > carb | [-2.04, -0.31] BMA_draws <- weighted_posteriors(mod, mod_carb, verbose = FALSE) BMA_hdi <- hdi(BMA_draws, ci = 0.95) BMA_hdi plot(BMA_hdi) > Highest Density Interval > > Parameter | 95% HDI > ---------------------------- > (Intercept) | [28.77, 42.61] > wt | [-6.77, -2.18] > am | [-2.59, 5.47] > carb | [-1.69, 0.00]"}
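For intuition about what weighted_posteriors() does, here is a rough, self-contained sketch of the underlying idea: draws from each model are mixed in proportion to the models' posterior probabilities, with the parameter fixed at 0 for the model that omits it. All numbers are made up for illustration:

set.seed(1)

# Hypothetical posterior model probabilities for two nearly-tied models
w <- c(with_carb = 0.45, without_carb = 0.55)

# Hypothetical draws of the carb effect under the model that includes it;
# under the other model the parameter is fixed at 0
draws_with <- rnorm(4000, mean = -1.2, sd = 0.45)
draws_without <- rep(0, 4000)

# Mix the draws according to the model weights
pick <- sample(c(TRUE, FALSE), 4000, replace = TRUE, prob = w)
bma_draws <- ifelse(pick, draws_with, draws_without)

quantile(bma_draws, c(0.025, 0.975))  # the interval now has mass at 0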
,{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"testing-contrasts-with-emmeans-modelbased","dir":"Articles","previous_headings":"Appendices","what":"Testing contrasts (with emmeans / modelbased)","title":"Bayes Factors","text":"Besides testing parameters, bayesfactor_parameters() can be used to test any estimate based on the prior and posterior distribution of that estimate. One way to achieve this is to mix bayesfactor_parameters() + emmeans to test Bayesian contrasts. For example, in the sleep example from above, we can estimate the group means and the difference between them: That is, there is moderate evidence that the mean of group 1 is 0, strong evidence that the mean of group 2 is not 0, but hardly any evidence that the difference between them is not 0. Conflict? Uncertainty? That is the Bayesian way! We can also use the easystats modelbased package to compute Bayes factors for contrasts: NOTE: See the Specifying Correct Priors for Factors with More Than 2 Levels section below.","code":"library(emmeans) (group_diff <- emmeans(model, pairwise ~ group, data = sleep)) # pass the original model via prior bayesfactor_parameters(group_diff, prior = model) > $emmeans > group emmean lower.HPD upper.HPD > 1 0.79 -0.48 2.0 > 2 2.28 1.00 3.5 > > Point estimate displayed: median > HPD interval probability: 0.95 > > $contrasts > contrast estimate lower.HPD upper.HPD > group1 - group2 -1.47 -3.2 0.223 > > Point estimate displayed: median > HPD interval probability: 0.95 > Bayes Factor (Savage-Dickey density ratio) > > group | contrast | BF > ------------------------------- > 1 | | 0.286 > 2 | | 21.18 > | group1 - group2 | 1.26 > > * Evidence Against The Null: 0 library(modelbased) estimate_contrasts(model, test = \"bf\", bf_prior = model)"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"contr_bayes","dir":"Articles","previous_headings":"Appendices","what":"Specifying correct priors for factors","title":"Bayes Factors","text":"This section introduces the biased priors obtained when using the common effects factor coding (contr.sum) or dummy factor coding (contr.treatment), and the solution of using orthonormal factor coding (contr.equalprior) (as outlined in Rouder, Morey, Speckman, & Province, 2012, sec. 7.2). Special care should be taken when working with factors that have 3 or more levels.","code":""}
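As an aside, the Savage-Dickey density ratio that bayesfactor_parameters() reports can be approximated by hand: the BF against a point null is the prior density at the null divided by the posterior density at the null. A minimal sketch with simulated prior and posterior draws (the means and SDs are arbitrary illustration values, not taken from the sleep model above):

library(bayestestR)

prior <- distribution_normal(10000, mean = 0, sd = 1)
posterior <- distribution_normal(10000, mean = 0.8, sd = 0.4)

# Savage-Dickey: BF10 = prior density at 0 / posterior density at 0
density_at(prior, 0) / density_at(posterior, 0)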
,{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"contrasts-and-marginal-means","dir":"Articles","previous_headings":"Appendices > Specifying correct priors for factors","what":"Contrasts (and marginal means)","title":"Bayes Factors","text":"The effects factor coding commonly used in factorial analysis carries a hidden bias when applied to Bayesian priors. For example, if we want to test all pairwise differences between the 3 levels of a factor, we would expect all such differences to have the same a priori distribution, but... Let's examine the prior pairwise differences between the 3 species in the iris dataset. Notice that, although the prior estimates of all 3 pairwise contrasts are ~0, the scale of the HDI is much narrower for the prior of the setosa - versicolor contrast! What happened??? This is caused by an inherent bias in the priors introduced by the effects coding (and it's even worse with the default treatment coding, because the prior for the intercept is usually drastically different from that of the effects' parameters). Since it affects the priors, this bias will also bias the Bayes factors - over- or understating the evidence for some contrasts relative to others! The solution is to use an equal-prior factor coding, a la the contr.equalprior* family, which can either be specified per-factor: or set globally: Let's estimate the prior differences again: We can see that with the contr.equalprior_pairs coding scheme, we get equal priors on all the pairwise contrasts, with widths corresponding to the normal(0, c(1, 1), autoscale = FALSE) prior we set! There are other solutions to this problem of priors - you can read about them in Solomon Kurz's blog post.","code":"df <- iris contrasts(df$Species) <- contr.sum fit_sum <- stan_glm(Sepal.Length ~ Species, data = df, prior = normal(0, c(1, 1), autoscale = FALSE), prior_PD = TRUE, # sample priors family = gaussian(), chains = 10, iter = 5000, warmup = 1000, refresh = 0 ) (pairs_sum <- pairs(emmeans(fit_sum, ~Species))) ggplot(stack(insight::get_parameters(pairs_sum)), aes(x = values, fill = ind)) + geom_density(linewidth = 1) + facet_grid(ind ~ .) + labs(x = \"prior difference values\") + theme(legend.position = \"none\") > contrast estimate lower.HPD upper.HPD > setosa - versicolor -0.017 -2.8 2.7 > setosa - virginica -0.027 -4.0 4.6 > versicolor - virginica 0.001 -4.2 4.5 > > Point estimate displayed: median > HPD interval probability: 0.95 contrasts(df$Species) <- contr.equalprior_pairs options(contrasts = c(\"contr.equalprior_pairs\", \"contr.poly\")) fit_bayes <- stan_glm(Sepal.Length ~ Species, data = df, prior = normal(0, c(1, 1), autoscale = FALSE), prior_PD = TRUE, # sample priors family = gaussian(), chains = 10, iter = 5000, warmup = 1000, refresh = 0 ) (pairs_bayes <- pairs(emmeans(fit_bayes, ~Species))) ggplot(stack(insight::get_parameters(pairs_bayes)), aes(x = values, fill = ind)) + geom_density(linewidth = 1) + facet_grid(ind ~ .) + labs(x = \"prior difference values\") + theme(legend.position = \"none\") > contrast estimate lower.HPD upper.HPD > setosa - versicolor 0.0000 -2.10 1.89 > setosa - virginica 0.0228 -1.93 1.99 > versicolor - virginica 0.0021 -2.06 1.89 > > Point estimate displayed: median > HPD interval probability: 0.95"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"order-restrictions","dir":"Articles","previous_headings":"Appendices > Specifying correct priors for factors","what":"Order restrictions","title":"Bayes Factors","text":"This bias also affects order restrictions involving 3 or more levels. For example, if we want to test an order restriction among A, B, and C, the a priori probability of obtaining the order A > C > B is 1/6 (reach back to intro-to-stats year 1), but... For example, consider the following order restrictions in the iris dataset (each line is a separate restriction): With the default factor coding, this looks like: What happened??? The comparisons of 2 levels each have a prior of ~0.5, as expected, but the comparisons of 3 (or more) levels have different priors, depending on the order restriction - i.e., some orders are a priori more likely than others!!! Again, this is solved by using the equal-prior factor coding (from above).","code":"hyp <- c( # comparing 2 levels \"setosa < versicolor\", \"setosa < virginica\", \"versicolor < virginica\", # comparing 3 (or more) levels \"setosa < virginica & virginica < versicolor\", \"virginica < setosa & setosa < versicolor\", \"setosa < versicolor & versicolor < virginica\" ) contrasts(df$Species) <- contr.sum fit_sum <- stan_glm(Sepal.Length ~ Species, data = df, prior = normal(0, c(1, 1), autoscale = FALSE), family = gaussian(), chains = 10, iter = 5000, warmup = 1000 ) em_sum <- emmeans(fit_sum, ~Species) # the posterior marginal means bayesfactor_restricted(em_sum, fit_sum, hypothesis = hyp) > [Stan sampling progress output for chains 1-4 omitted]
> Bayes Factor (Order-Restriction) > > Hypothesis P(Prior) P(Posterior) BF > setosa < versicolor 0.51 1 1.97 > setosa < virginica 0.49 1 2.02 > versicolor < virginica 0.49 1 2.03 > setosa < virginica & virginica < versicolor 0.11 0 0.00e+00 > virginica < setosa & setosa < versicolor 0.20 0 0.00e+00 > setosa < versicolor & versicolor < virginica 0.20 1 5.09 > > * Bayes factors for the restricted model vs. the un-restricted model. contrasts(df$Species) <- contr.equalprior_pairs fit_bayes <- stan_glm(Sepal.Length ~ Species, data = df, prior = normal(0, c(1, 1), autoscale = FALSE), family = gaussian(), chains = 10, iter = 5000, warmup = 1000 ) em_bayes <- emmeans(fit_bayes, ~Species) # the posterior marginal means bayesfactor_restricted(em_bayes, fit_bayes, hypothesis = hyp) > Bayes Factor (Order-Restriction) > > Hypothesis P(Prior) P(Posterior) BF > setosa < versicolor 0.49 1 2.06 > setosa < virginica 0.49 1 2.03 > versicolor < virginica 0.51 1 1.96 > setosa < virginica & virginica < versicolor 0.17 0 0.00e+00 > virginica < setosa & setosa < versicolor 0.16 0 0.00e+00 > setosa < versicolor & versicolor < virginica 0.16 1 6.11 > > * Bayes factors for the restricted model vs. the un-restricted model."},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"conclusion","dir":"Articles","previous_headings":"Appendices > Specifying correct priors for factors","what":"Conclusion","title":"Bayes Factors","text":"Comparing the results from the two factor coding schemes, we find: 1. In both cases, the estimated (posterior) means are quite similar (if not identical). 2. The priors and the Bayes factors differ between the two schemes. 3. With contr.equalprior*, the prior distribution of the difference and of any order of the 3 (or more) means is balanced. Read more about equal-prior contrasts in the contr.equalprior docs!","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/bayestestR.html","id":"why-use-the-bayesian-framework","dir":"Articles","previous_headings":"","what":"Why use the Bayesian Framework?","title":"Get Started with Bayesian Analysis","text":"The Bayesian framework for statistics is quickly gaining popularity among scientists, in association with the general shift towards open and honest science. Reasons to prefer this approach are: its reliability (Etz & Vandekerckhove, 2016); its accuracy (in noisy data and small samples) (Kruschke, Aguinis, & Joo, 2012); the possibility of introducing prior knowledge into the analysis (Andrews & Baguley, 2013; Kruschke et al., 2012); and, critically, the intuitive nature of its results and their straightforward interpretation (Kruschke, 2010; Wagenmakers et al., 2018). In general, the frequentist approach has been associated with a focus on null hypothesis testing, and the misuse of p-values has been shown to critically contribute to the reproducibility crisis in the social and psychological sciences (Chambers, Feredoes, Muthukumaraswamy, & Etchells, 2014; Szucs & Ioannidis, 2016).
There is an emerging consensus that the generalization of the Bayesian approach is one way of overcoming these issues (Benjamin et al., 2018; Etz & Vandekerckhove, 2016). Once we agree that the Bayesian framework is the right way to go, you might wonder what exactly this framework is. What's all the fuss about?","code":""},{"path":"https://easystats.github.io/bayestestR/articles/bayestestR.html","id":"what-is-the-bayesian-framework","dir":"Articles","previous_headings":"","what":"What is the Bayesian Framework?","title":"Get Started with Bayesian Analysis","text":"Adopting the Bayesian framework is more of a shift in paradigm than a change in methodology. Indeed, all the common statistical procedures (t-tests, correlations, ANOVAs, regressions, etc.) can be achieved using the Bayesian framework. The key difference is that in the frequentist framework (the "classical" approach to statistics, with its p and t values, as well as some weird degrees of freedom), the effects are fixed (but unknown) and the data are random. In other words, it assumes that the unknown parameter has a unique, true value that we are trying to estimate or guess using our sample data. In the Bayesian framework, on the other hand, instead of estimating the "true effect", the probability of different effects given the observed data is computed, resulting in a distribution of possible values for the parameters, called the posterior distribution. The uncertainty in Bayesian inference can be summarized, for instance, by the median of the distribution, as well as a range of values of the posterior distribution that includes the 95% most probable values (the 95% credible interval). Cum grano salis, these can be considered the counterparts of the point-estimate and confidence interval of the frequentist framework. To illustrate the difference in interpretation, the Bayesian framework allows us to say "given the observed data, the effect has a 95% probability of falling within this range", while the frequentist (less intuitive) alternative would be "when repeatedly computing confidence intervals from data of this sort, there is a 95% probability that the effect falls within a given range". In essence, Bayesian sampling algorithms (such as MCMC sampling) return a probability distribution (the posterior) of an effect that is compatible with the observed data. Thus, an effect can be described by characterizing its posterior distribution in relation to its centrality (point-estimates), uncertainty, as well as its existence and significance. In other words, putting the maths behind it aside for a moment, we can say that: The frequentist approach tries to estimate the real effect - for instance, the "real" value of the correlation between x and y. Hence, frequentist models return a point-estimate (i.e., a single value, not a distribution) of the "real" correlation (e.g., r = 0.42), estimated under a number of obscure assumptions (at a minimum, considering that the data are sampled at random from a "parent", usually normal, distribution). The Bayesian framework assumes no such thing. The data are what they are. Based on the observed data (and a prior belief about the result), the Bayesian sampling algorithm (MCMC sampling is one example) returns a probability distribution (called the posterior) of the effect that is compatible with the observed data. For the correlation between x and y, it will return a distribution that says, for example, "the most probable effect is 0.42, but the data are also compatible with correlations of 0.12 or 0.74, each with a certain probability". To characterize the statistical significance of our effects, we do not need p-values or any other such indices. We simply describe the posterior distribution of the effect - for example, by reporting the median and the 89% credible interval, among other indices. (Accurate depiction of a regular Bayesian user estimating a credible interval.) Note: Although the purpose of this package is to advocate the use of Bayesian statistics, please note that there are serious arguments supporting frequentist indices (see for instance this thread). As always, the world is not black and white (p < .001). So... how does it work?","code":""}
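To make the median-plus-credible-interval summary concrete, here is a minimal sketch with a simulated posterior (the mean and SD below simply echo the r = 0.42 example and do not come from a fitted model):

library(bayestestR)

# A toy posterior distribution for a correlation-like effect
posterior <- distribution_normal(4000, mean = 0.42, sd = 0.15)

median(posterior)  # the point-estimate
ci(posterior, ci = 0.89, method = "HDI")  # the 89% credible interval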
,{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/bayestestR.html","id":"bayestestr-installation","dir":"Articles","previous_headings":"A simple example","what":"bayestestR installation","title":"Get Started with Bayesian Analysis","text":"You can install bayestestR along with the whole easystats suite by running the following: Let's also install and load the rstanarm package, which allows fitting Bayesian models, as well as bayestestR, to describe them.","code":"install.packages(\"remotes\") remotes::install_github(\"easystats/easystats\") install.packages(\"rstanarm\") library(rstanarm)"},{"path":"https://easystats.github.io/bayestestR/articles/bayestestR.html","id":"traditional-linear-regression","dir":"Articles","previous_headings":"A simple example","what":"Traditional linear regression","title":"Get Started with Bayesian Analysis","text":"Let's start by fitting a simple frequentist linear regression (the lm() function stands for linear model) between two numeric variables, Sepal.Length and Petal.Length, from the famous iris dataset, included by default in R. This analysis suggests that there is a statistically significant (whatever that means) and positive (with a coefficient of 0.41) linear relationship between the two variables. Fitting and interpreting frequentist models is so easy that it is obvious why people use this approach instead of the Bayesian framework... right? Not anymore.","code":"model <- lm(Sepal.Length ~ Petal.Length, data = iris) summary(model) Call: lm(formula = Sepal.Length ~ Petal.Length, data = iris) Residuals: Min 1Q Median 3Q Max -1.2468 -0.2966 -0.0152 0.2768 1.0027 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 4.3066 0.0784 54.9 <2e-16 *** Petal.Length 0.4089 0.0189 21.6 <2e-16 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Residual standard error: 0.41 on 148 degrees of freedom Multiple R-squared: 0.76, Adjusted R-squared: 0.758 F-statistic: 469 on 1 and 148 DF, p-value: <2e-16"},{"path":"https://easystats.github.io/bayestestR/articles/bayestestR.html","id":"bayesian-linear-regression","dir":"Articles","previous_headings":"A simple example","what":"Bayesian linear regression","title":"Get Started with Bayesian Analysis","text":"Summary of Posterior Distribution That's it! We just fitted a Bayesian version of the model by simply using the stan_glm() function instead of lm(), and described the posterior distributions of its parameters! The conclusion we draw, for this example, is very similar. The effect (the median of the effect's posterior distribution) is about 0.41, and it can also be considered significant in the Bayesian sense (more on that later). So, ready to learn more? Check out the next tutorial! And if you want even more, you can check out the other articles describing all the functionality the package has to offer! https://easystats.github.io/bayestestR/articles/","code":"model <- stan_glm(Sepal.Length ~ Petal.Length, data = iris) posteriors <- describe_posterior(model) # for a nicer table print_md(posteriors, digits = 2)"}
,{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/credible_interval.html","id":"what-is-a-credible-interval","dir":"Articles","previous_headings":"","what":"What is a Credible Interval?","title":"Credible Intervals (CI)","text":"Credible intervals are an important concept in Bayesian statistics. Their core purpose is to describe and summarise the uncertainty related to the unknown parameters we are trying to estimate. In this regard, they appear quite similar to the frequentist confidence intervals. However, while their goal is similar, their statistical definition and meaning are very different. Indeed, while the latter is obtained through a complex algorithm full of rarely-tested assumptions and approximations, credible intervals are fairly straightforward to compute. Because Bayesian inference returns a distribution of possible effect values (the posterior), the credible interval is just the range containing a particular percentage of probable values. For instance, the 95% credible interval is simply the central portion of the posterior distribution that contains 95% of the values. Note how this drastically improves the interpretability of the Bayesian interval compared to the frequentist one. Indeed, the Bayesian framework allows us to say "given the observed data, the effect has a 95% probability of falling within this range", while the less straightforward frequentist alternative (the 95% Confidence Interval) would be "there is a 95% probability that, when computing a confidence interval from data of this sort, the effect falls within this range".","code":""},{"path":"https://easystats.github.io/bayestestR/articles/credible_interval.html","id":"vs--95-ci","dir":"Articles","previous_headings":"","what":"89% vs. 95% CI","title":"Credible Intervals (CI)","text":"Using 89% is another popular choice, and it was used as the default for a long time (read here about the story of this change). How did it start? Naturally, when it came to choosing the CI level to report by default, people started using 95%, the arbitrary convention used in the frequentist world. However, some authors have suggested that 95% might not be the most appropriate level for Bayesian posterior distributions, as it can potentially lack stability if not enough posterior samples are drawn (Kruschke, 2014). The proposition was to use 90% instead of 95%. However, more recently, McElreath (2014, 2018) suggested that if we are using arbitrary thresholds in the first place, why not use 89%? Moreover, 89 is the highest prime number that does not exceed the already unstable 95% threshold. What does that have to do with anything? Nothing, but it reminds us of the total arbitrariness of these conventions (McElreath, 2018). Thus, CIs computed as 89% intervals (ci = 0.89) are deemed to be more stable than, for instance, 95% intervals (Kruschke, 2014). An effective sample size (ESS; see below) of at least 10,000 is recommended if one wants to compute precise 95% intervals (Kruschke, 2014, p. 183ff). Unfortunately, the default number of posterior samples in most Bayes packages (e.g., rstanarm or brms) is 4,000 (thus, you might want to increase it when fitting your model). However, 95% has some advantages too. For instance, it shares (in the case of a normal posterior distribution) an intuitive relationship with the standard deviation, and it conveys a more accurate image of the (artificial) bounds of the distribution. Also, because it is wider, it makes analyses more conservative (i.e., the probability of covering 0 is larger for the 95% CI than for lower ranges such as 89%), which is a good thing in the context of the reproducibility crisis. To add to the mess, some software use yet another default, for instance 90%. Ultimately, the user should make an informed decision, based on their needs and goals, and justify that choice.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/credible_interval.html","id":"different-types-of-cis","dir":"Articles","previous_headings":"","what":"Different types of CIs","title":"Credible Intervals (CI)","text":"The reader might have noticed that bayestestR provides two methods to compute credible intervals, the Highest Density Interval (HDI) (hdi()) and the Equal-tailed Interval (ETI) (eti()). These methods can also be changed via the method argument of the ci() function. What is the difference? Let's see: Here they are exactly the same... but is this also the case for other types of distributions? No - the difference can be a strong one. Contrary to the HDI, for which all points within the interval have a higher probability density than points outside the interval, the ETI is equal-tailed. This means that a 90% interval has 5% of the distribution on either side of its limits; it indicates the 5th percentile and the 95th percentile. For symmetric distributions, the two methods of computing credible intervals, the ETI and the HDI, return similar results. This is not the case for skewed distributions. Indeed, it is possible that parameter values in the ETI have lower credibility (are less probable) than parameter values outside the ETI. This property seems undesirable as a summary of the credible values in a distribution. On the other hand, the ETI range does change consistently with transformations applied to the distribution (for instance, a log-odds to probabilities transformation): the lower and higher bounds of the transformed distribution will correspond to the transformed lower and higher bounds of the original distribution. On the contrary, applying transformations to the distribution will change the resulting HDI.
Thus, if, for instance, exponentiated credible intervals are required, calculating the ETI is recommended.","code":"library(bayestestR) library(ggplot2) # Generate a normal distribution posterior <- distribution_normal(1000) # Compute HDI and ETI ci_hdi <- ci(posterior, method = \"HDI\") ci_eti <- ci(posterior, method = \"ETI\") # Plot the distribution and add the limits of the two CIs out <- estimate_density(posterior, extend = TRUE) ggplot(out, aes(x = x, y = y)) + geom_area(fill = \"orange\") + theme_classic() + # HDI in blue geom_vline(xintercept = ci_hdi$CI_low, color = \"royalblue\", linewidth = 3) + geom_vline(xintercept = ci_hdi$CI_high, color = \"royalblue\", linewidth = 3) + # Quantile in red geom_vline(xintercept = ci_eti$CI_low, color = \"red\", linewidth = 1) + geom_vline(xintercept = ci_eti$CI_high, color = \"red\", linewidth = 1) # Generate a beta distribution posterior <- distribution_beta(1000, 6, 2) # Compute HDI and Quantile CI ci_hdi <- ci(posterior, method = \"HDI\") ci_eti <- ci(posterior, method = \"ETI\") # Plot the distribution and add the limits of the two CIs out <- estimate_density(posterior, extend = TRUE) ggplot(out, aes(x = x, y = y)) + geom_area(fill = \"orange\") + theme_classic() + # HDI in blue geom_vline(xintercept = ci_hdi$CI_low, color = \"royalblue\", linewidth = 3) + geom_vline(xintercept = ci_hdi$CI_high, color = \"royalblue\", linewidth = 3) + # ETI in red geom_vline(xintercept = ci_eti$CI_low, color = \"red\", linewidth = 1) + geom_vline(xintercept = ci_eti$CI_high, color = \"red\", linewidth = 1)"},{"path":"https://easystats.github.io/bayestestR/articles/credible_interval.html","id":"the-support-interval","dir":"Articles","previous_headings":"","what":"The Support Interval","title":"Credible Intervals (CI)","text":"Unlike the HDI and the ETI, which look only at the posterior distribution, the Support Interval (SI) provides information regarding the change in the credibility of values from the prior to the posterior - in other words, it indicates which values of a parameter have gained support from the observed data by a factor greater than or equal to k (Wagenmakers, Gronau, Dablander, & Etz, 2018). Here, the blue lines mark values that have received any support from the data (the BF = 1 SI), and the red lines mark values that have received at least moderate support (the BF = 3 SI) from the data. From the perspective of the Savage-Dickey Bayes factor, testing a point null hypothesis against any value within the support interval will yield a Bayes factor smaller than 1/BF.","code":"prior <- distribution_normal(40000, mean = 0, sd = 1) posterior <- distribution_normal(40000, mean = 0.5, sd = 0.3) si_1 <- si(posterior, prior, BF = 1) si_3 <- si(posterior, prior, BF = 3) ggplot(mapping = aes(x = x, y = y)) + theme_classic() + # The posterior geom_area( fill = \"orange\", data = estimate_density(posterior, extend = TRUE) ) + # The prior geom_area( color = \"black\", fill = NA, linewidth = 1, linetype = \"dashed\", data = estimate_density(prior, extend = TRUE) ) + # BF = 1 SI in blue geom_vline(xintercept = si_1$CI_low, color = \"royalblue\", linewidth = 1) + geom_vline(xintercept = si_1$CI_high, color = \"royalblue\", linewidth = 1) + # BF = 3 SI in red geom_vline(xintercept = si_3$CI_low, color = \"red\", linewidth = 1) + geom_vline(xintercept = si_3$CI_high, color = \"red\", linewidth = 1)"}
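A quick way to convince yourself of the ETI's definition: a 90% ETI is just the 5th and 95th percentiles of the draws, so quantile() reproduces it. A minimal sketch with the same kind of skewed beta distribution as above:

library(bayestestR)

posterior <- distribution_beta(1000, 6, 2)  # a skewed "posterior"

quantile(posterior, c(0.05, 0.95))  # the 90% ETI, by hand
eti(posterior, ci = 0.90)  # the same interval via bayestestR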
,{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"loading-the-packages","dir":"Articles","previous_headings":"","what":"Loading the packages","title":"1. Initiation to Bayesian models","text":"Once you've installed the necessary packages, we can load rstanarm (to fit the models), bayestestR (to compute useful indices), and insight (to access the parameters).","code":"library(rstanarm) library(bayestestR) library(insight)"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"simple-linear-regression-model","dir":"Articles","previous_headings":"","what":"Simple linear (regression) model","title":"1. Initiation to Bayesian models","text":"We will begin by conducting a simple linear regression to test the relationship between Petal.Length (our predictor, or independent, variable) and Sepal.Length (our response, or dependent, variable) from the iris dataset, included by default in R.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"fitting-the-model","dir":"Articles","previous_headings":"Simple linear (regression) model","what":"Fitting the model","title":"1. Initiation to Bayesian models","text":"Let's start by fitting the frequentist version of the model, just to have a reference point: We can also zoom in on the parameters of interest to us: In this model, the linear relationship between Petal.Length and Sepal.Length is positive and significant (\\beta = 0.41, t(148) = 21.6, p < .001). This means that for a one-unit increase in Petal.Length (the predictor), you can expect Sepal.Length (the response) to increase by 0.41. This effect can be visualized by plotting the predictor values on the x axis and the response values on the y axis, using the ggplot2 package: Now let's fit a Bayesian version of the model, using the stan_glm function from the rstanarm package: You can see the sampling algorithm being run.","code":"model <- lm(Sepal.Length ~ Petal.Length, data = iris) summary(model) > > Call: > lm(formula = Sepal.Length ~ Petal.Length, data = iris) > > Residuals: > Min 1Q Median 3Q Max > -1.2468 -0.2966 -0.0152 0.2768 1.0027 > > Coefficients: > Estimate Std. Error t value Pr(>|t|) > (Intercept) 4.3066 0.0784 54.9 <2e-16 *** > Petal.Length 0.4089 0.0189 21.6 <2e-16 *** > --- > Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 > > Residual standard error: 0.41 on 148 degrees of freedom > Multiple R-squared: 0.76, Adjusted R-squared: 0.758 > F-statistic: 469 on 1 and 148 DF, p-value: <2e-16 get_parameters(model) > Parameter Estimate > 1 (Intercept) 4.31 > 2 Petal.Length 0.41 library(ggplot2) # Load the package # The ggplot function takes the data as argument, and then the variables # related to aesthetic features such as the x and y axes. ggplot(iris, aes(x = Petal.Length, y = Sepal.Length)) + geom_point() + # This adds the points geom_smooth(method = \"lm\") # This adds a regression line model <- stan_glm(Sepal.Length ~ Petal.Length, data = iris)"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"extracting-the-posterior","dir":"Articles","previous_headings":"Simple linear (regression) model","what":"Extracting the posterior","title":"1. Initiation to Bayesian models","text":"Once it is done, let us extract the parameters (i.e., the coefficients) of the model. As we can see, the parameters take the form of a lengthy dataframe with two columns, corresponding to the intercept and the effect of Petal.Length. These columns contain the posterior distributions of these two parameters. In simple terms, the posterior distribution is a set of different plausible values for each parameter. Contrast this with the result we saw from the frequentist linear regression using lm: the results were single values for each effect of the model, not distributions of values.
This is one of the most important differences between these two frameworks.","code":"posteriors <- get_parameters(model) head(posteriors) # Show the first 6 rows > (Intercept) Petal.Length > 1 4.4 0.39 > 2 4.4 0.40 > 3 4.3 0.41 > 4 4.3 0.40 > 5 4.3 0.40 > 6 4.3 0.41"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"about-posterior-draws","dir":"Articles","previous_headings":"Simple linear (regression) model > Extracting the posterior","what":"About posterior draws","title":"1. Initiation to Bayesian models","text":"Let's look at the length of the posteriors. Is its size 4000, or less? First of all, these observations (rows) are usually referred to as posterior draws. The underlying idea is that the Bayesian sampling algorithm (e.g., Markov Chain Monte Carlo - MCMC) will draw from the hidden true posterior distribution. Thus, it is through these posterior draws that we can estimate the underlying true posterior distribution. Therefore, the more draws you have, the better your estimation of the posterior distribution. However, more draws also means longer computation time. If we look at the documentation (?sampling) for rstanarm's \"sampling\" algorithm used by default in the model above, we can see several parameters that influence the number of posterior draws. By default, there are 4 chains (you can see them as distinct sampling runs), each of which creates 2000 iter (draws). However, only half of these iterations are kept, as the other half are used for the warm-up (the convergence of the algorithm). Thus, the total number of posterior draws equals 4 chains * (2000 iterations - 1000 warm-up) = 4000. We can change that, for instance: In this case, as expected, we have 2 chains * (1000 iterations - 250 warm-up) = 1500 posterior draws. But let's keep our first model with its default setup (as more draws are better).","code":"nrow(posteriors) # Size (number of rows) > [1] 4000 model <- stan_glm(Sepal.Length ~ Petal.Length, data = iris, chains = 2, iter = 1000, warmup = 250) nrow(get_parameters(model)) # Size (number of rows) [1] 1500"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"visualizing-the-posterior-distribution","dir":"Articles","previous_headings":"Simple linear (regression) model > Extracting the posterior","what":"Visualizing the posterior distribution","title":"1. Initiation to Bayesian models","text":"Now that we've understood where these values come from, let's look at them. We will start by visualizing the posterior distribution of our parameter of interest, the effect of Petal.Length. This distribution represents the probability (the y axis) of different effect values (the x axis). The central values are more probable than the extreme values. As you can see, the distribution ranges from about 0.35 to 0.50, with the bulk of it being around 0.41. Congrats! You've just described your first posterior distribution. And this is the heart of Bayesian analysis. We don't need p-values, t-values, or degrees of freedom. Everything we need is contained within this posterior distribution. Our description is consistent with the values obtained from the frequentist regression (which resulted in a \\beta of 0.41). This is reassuring! Indeed, in most cases, a Bayesian analysis does not drastically differ from the frequentist results or their interpretation. Rather, it makes the results more interpretable and intuitive, and thus easier to understand and describe. We can now go ahead and precisely characterize this posterior distribution.","code":"ggplot(posteriors, aes(x = Petal.Length)) + geom_density(fill = \"orange\")"}
,{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"describing-the-posterior","dir":"Articles","previous_headings":"Simple linear (regression) model","what":"Describing the Posterior","title":"1. Initiation to Bayesian models","text":"Unfortunately, it is often not practical to report whole posterior distributions as graphs. We need to find a concise way to summarize them. We recommend describing the posterior distribution with 3 elements: A point-estimate, which is a one-value summary (similar to the beta in frequentist regressions). A credible interval representing the associated uncertainty. Some indices of significance, giving information about the relative importance of this effect.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"point-estimate","dir":"Articles","previous_headings":"Simple linear (regression) model > Describing the Posterior","what":"Point-estimate","title":"1. Initiation to Bayesian models","text":"What single value can best represent my posterior distribution? Centrality indices, such as the mean, the median, or the mode, are usually used as point-estimates. But what's the difference between them? Let's answer this by first inspecting the mean: This is close to the frequentist \\beta. But, as we know, the mean is quite sensitive to outliers and extreme values. Maybe the median could be more robust? Well, this is very close to the mean (and identical when rounding the values). Maybe we could take the mode, that is, the peak of the posterior distribution? In the Bayesian framework, this value is called the Maximum A Posteriori (MAP) estimate. Let's see: They are all very close! Let's visualize these values on the posterior distribution: Well, all these values give very similar results. Thus, we will choose the median, as this value has a direct meaning from a probabilistic perspective: there is a 50% chance that the true effect is higher and a 50% chance that it is lower (as it divides the distribution into two equal parts).","code":"mean(posteriors$Petal.Length) > [1] 0.41 median(posteriors$Petal.Length) > [1] 0.41 map_estimate(posteriors$Petal.Length) > MAP Estimate > > Parameter | MAP_Estimate > ------------------------ > x | 0.41 ggplot(posteriors, aes(x = Petal.Length)) + geom_density(fill = \"orange\") + # The mean in blue geom_vline(xintercept = mean(posteriors$Petal.Length), color = \"blue\", linewidth = 1) + # The median in red geom_vline(xintercept = median(posteriors$Petal.Length), color = \"red\", linewidth = 1) + # The MAP in purple geom_vline(xintercept = as.numeric(map_estimate(posteriors$Petal.Length)), color = \"purple\", linewidth = 1)"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"uncertainty","dir":"Articles","previous_headings":"Simple linear (regression) model > Describing the Posterior","what":"Uncertainty","title":"1. Initiation to Bayesian models","text":"Now that we have a point-estimate, we have to describe the uncertainty. We could compute the range: But does it make sense to include all these extreme values? Probably not. Thus, we will compute a credible interval. Long story short, it's kind of similar to a frequentist confidence interval, but easier to interpret, easier to compute - and it makes more sense. We will compute this credible interval based on the Highest Density Interval (HDI). It will give us the range containing the 89% most probable effect values. Note that we use 89% CIs instead of 95% CIs (as in the frequentist framework), as the 89% level gives more stable results (Kruschke, 2014) and reminds us about the arbitrariness of such conventions (McElreath, 2018). Nice, so we can conclude that the effect has an 89% chance of falling within the [0.38, 0.44] range. We have just computed the first two pieces of information needed to describe our effects.","code":"range(posteriors$Petal.Length) > [1] 0.33 0.48 hdi(posteriors$Petal.Length, ci = 0.89) > 89% HDI: [0.38, 0.44]"}
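For intuition about the MAP estimate used above: it is simply the peak of a density estimate of the posterior draws. A minimal sketch with a simulated posterior (the values mimic the Petal.Length effect, but these are not the actual draws):

library(bayestestR)

posterior <- distribution_normal(4000, mean = 0.41, sd = 0.02)

d <- density(posterior)
d$x[which.max(d$y)]  # the mode of the density, i.e., the MAP - by hand
map_estimate(posterior)  # the same idea via bayestestR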
,{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"effect-significance","dir":"Articles","previous_headings":"Simple linear (regression) model > Describing the Posterior","what":"Effect significance","title":"1. Initiation to Bayesian models","text":"However, in many scientific fields it is not sufficient to simply describe effects. Scientists also want to know whether an effect has significance in practical or statistical terms - in other words, whether the effect is important. For instance, is the effect different from 0? How do we assess the significance of an effect? Well, in this particular case, it is very eloquent: all possible effect values (i.e., the whole posterior distribution) are positive and greater than 0.35, which is already substantial evidence that the effect is not zero. But still, we might want some objective decision criterion to say whether the effect is 'significant'. One approach, similar to the frequentist framework, would be to see whether the credible interval contains 0: if it does not, the effect would be 'significant'. But this index is not very fine-grained, is it? Can we do better? Yes!","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"a-linear-model-with-a-categorical-predictor","dir":"Articles","previous_headings":"","what":"A linear model with a categorical predictor","title":"1. Initiation to Bayesian models","text":"Imagine for a moment you are interested in how the weight of chickens varies depending on two different feed types. For this example, we will start by selecting from the chickwts dataset (available in base R) the two feed types of interest to us (for our peculiar interests): meat meals and sunflowers.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"data-preparation-and-model-fitting","dir":"Articles","previous_headings":"A linear model with a categorical predictor","what":"Data preparation and model fitting","title":"1. Initiation to Bayesian models","text":"Let's run another Bayesian regression to predict the weight from these two types of feed.","code":"library(datawizard) # We keep only rows for which feed is meatmeal or sunflower data <- data_filter(chickwts, feed %in% c(\"meatmeal\", \"sunflower\")) model <- stan_glm(weight ~ feed, data = data)"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"posterior-description","dir":"Articles","previous_headings":"A linear model with a categorical predictor","what":"Posterior description","title":"1. Initiation to Bayesian models","text":"This represents the posterior distribution of the difference between meatmeal and sunflowers. It seems that the difference is positive (since the values are concentrated on the right side of 0). Eating sunflowers makes you fatter (at least, if you're a chicken). But by how much? Let us compute the median and the CI: It makes you fatter by around 51 grams (the median). However, the uncertainty is quite high: there is an 89% chance that the difference between the two feed types is between 14 and 91. Is this effect different from 0?","code":"posteriors <- get_parameters(model) ggplot(posteriors, aes(x = feedsunflower)) + geom_density(fill = \"red\") median(posteriors$feedsunflower) > [1] 52 hdi(posteriors$feedsunflower) > 95% HDI: [2.76, 101.93]"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"rope-percentage","dir":"Articles","previous_headings":"A linear model with a categorical predictor","what":"ROPE Percentage","title":"1. Initiation to Bayesian models","text":"Testing whether this distribution is different from 0 doesn't make sense, as 0 is a single value (and the probability that any distribution is different from a single value is infinite). However, one way to assess significance could be to define an area around 0 which will be considered as practically equivalent to zero (i.e., as the absence of an effect, or a negligible one). This is called the Region of Practical Equivalence (ROPE), and it is one way of testing the significance of parameters. How can we define this region? Driiiing driiiing - the easystats team speaking. How can we help? - Hi, this is Prof. Sanders, an expert in chicks... I mean chickens. I'm just calling to let you know that, based on my expert knowledge, an effect between -20 and 20 is negligible. Bye. Well, that's convenient. Now we know that we can define the ROPE as the [-20, 20] range. All effects within this range are considered as null (negligible). We can now compute the proportion of the 89% most probable values (the 89% CI) that are not null, i.e., that are outside this range. About 5% of the 89% CI lies inside the ROPE and can be considered as null. Is that a lot? Based on our guidelines, yes, it is too much. Based on this particular definition of the ROPE, we conclude that the effect is not significant (the probability of it being negligible is too high). That said, to be honest, we have some doubts about this Prof. Sanders: we don't really trust his definition of the ROPE. Is there a more objective way of defining it? (Prof. Sanders giving us default values to define the Region of Practical Equivalence.) Yes! One common practice is, for instance, to use a tenth (1/10 = 0.1) of the standard deviation (SD) of the response variable, which can be considered as a "negligible" effect size (Cohen, 1988).
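Before applying that convention, it may help to unpack what a call like rope(..., range = c(-20, 20), ci = 0.89) computes: the share of draws inside the chosen credible interval that falls within the ROPE bounds. A rough sketch with simulated draws (note that bayestestR uses the HDI for the interval; plain quantiles are used here only for simplicity):

set.seed(42)
posterior <- rnorm(4000, mean = 52, sd = 25)  # stand-in for the feed-effect draws

# Keep the central 89% of the draws, then count the share inside [-20, 20]
lims <- quantile(posterior, c(0.055, 0.945))
inside_ci <- posterior[posterior >= lims[1] & posterior <= lims[2]]
mean(inside_ci > -20 & inside_ci < 20) * 100  # percentage "in ROPE"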
Let's redefine our ROPE as the region within the [-6.2, 6.2] range. Note that this can be directly obtained with the rope_range function :) Let's recompute the percentage in ROPE: With this reasonable definition of the ROPE, we observe that the 89% of the posterior distribution of the effect does not overlap with the ROPE. Thus, we can conclude that the effect is significant (in the sense of being important enough to be noted).","code":"rope(posteriors$feedsunflower, range = c(-20, 20), ci = 0.89) > # Proportion of samples inside the ROPE [-20.00, 20.00]: > > inside ROPE > ----------- > 4.95 % rope_value <- 0.1 * sd(data$weight) rope_range <- c(-rope_value, rope_value) rope_range > [1] -6.2 6.2 rope_value <- rope_range(model) rope_value > [1] -6.2 6.2 rope(posteriors$feedsunflower, range = rope_range, ci = 0.89) > # Proportion of samples inside the ROPE [-6.17, 6.17]: > > inside ROPE > ----------- > 0.00 %"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"probability-of-direction-pd","dir":"Articles","previous_headings":"A linear model with a categorical predictor","what":"Probability of Direction (pd)","title":"1. Initiation to Bayesian models","text":"Maybe we are not interested in whether the effect is non-negligible. Maybe we just want to know whether the effect is positive or negative. In this case, we can simply compute the proportion of the posterior that is positive, no matter the "size" of the effect. We can conclude that the effect is positive with a probability of 98%. We call this index the Probability of Direction (pd). It can, in fact, be computed easily with the following: Interestingly, it so happens that this index is usually highly correlated with the frequentist p-value. We could almost roughly infer the corresponding p-value with a simple transformation: If we ran our model in the frequentist framework, we should approximately observe an effect with a p-value of 0.04. Is that true?","code":"# select only positive values n_positive <- nrow(data_filter(posteriors, feedsunflower > 0)) n_positive / nrow(posteriors) * 100 > [1] 98 p_direction(posteriors$feedsunflower) > Probability of Direction > > Parameter | pd > ------------------ > Posterior | 98.09% pd <- 98.09 onesided_p <- 1 - pd / 100 twosided_p <- onesided_p * 2 twosided_p > [1] 0.038"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"comparison-to-frequentist","dir":"Articles","previous_headings":"A linear model with a categorical predictor > Probability of Direction (pd)","what":"Comparison to frequentist","title":"1. Initiation to Bayesian models","text":"The frequentist model tells us that the difference is positive and significant (\\beta = 52, p = 0.04). Although we arrived at a similar conclusion, the Bayesian framework allowed us to develop a more profound and intuitive understanding of our effect, and of the uncertainty of its estimation.","code":"summary(lm(weight ~ feed, data = data)) > > Call: > lm(formula = weight ~ feed, data = data) > > Residuals: > Min 1Q Median 3Q Max > -123.91 -25.91 -6.92 32.09 103.09 > > Coefficients: > Estimate Std. Error t value Pr(>|t|) > (Intercept) 276.9 17.2 16.10 2.7e-13 *** > feedsunflower 52.0 23.8 2.18 0.04 * > --- > Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 > > Residual standard error: 57 on 21 degrees of freedom > Multiple R-squared: 0.185, Adjusted R-squared: 0.146 > F-statistic: 4.77 on 1 and 21 DF, p-value: 0.0405"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"all-with-one-function","dir":"Articles","previous_headings":"","what":"All with one function","title":"1. Initiation to Bayesian models","text":"And yet, we agree: it is a bit tedious to extract and compute all these indices one by one. What if we told you that we can do all of this, and more, with only one function? Behold, describe_posterior! This function computes all of the adored indices mentioned above, and can be run directly on the model: Tada! There it is! The median, the CI, the pd and the ROPE percentage! Understanding and describing posterior distributions is just one aspect of Bayesian modelling. Are you ready for more?!
Click here to see the next example.","code":"describe_posterior(model, test = c(\"p_direction\", \"rope\", \"bayesfactor\")) > Summary of Posterior Distribution > > Parameter | Median | 95% CI | pd | ROPE | % in ROPE > ------------------------------------------------------------------------------ > (Intercept) | 277.13 | [240.57, 312.75] | 100% | [-6.17, 6.17] | 0% > feedsunflower | 51.69 | [ 2.81, 102.04] | 98.09% | [-6.17, 6.17] | 1.01% > > Parameter | BF | Rhat | ESS > ------------------------------------------- > (Intercept) | 1.77e+13 | 1.000 | 32904.00 > feedsunflower | 0.770 | 1.000 | 32751.00"},{"path":[]},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"frequentist-version","dir":"Articles","previous_headings":"Correlations","what":"Frequentist version","title":"2. Confirmation of Bayesian skills","text":"So, let us begin with a frequentist correlation between two continuous variables, the width and the length of the sepals of some flowers. The data are available in R as the iris dataset (the same that was used in the previous tutorial). We will compute a Pearson's correlation test, store the results in an object called result, and then display it: As you can see in the output, the test actually compared two hypotheses: the null hypothesis (h0; no correlation) and the alternative hypothesis (h1; a non-null correlation). Based on the p-value, the null hypothesis cannot be rejected: the correlation between the two variables is negative but non-significant (r = -.12, p > .05).","code":"result <- cor.test(iris$Sepal.Width, iris$Sepal.Length) result > > Pearson's product-moment correlation > > data: iris$Sepal.Width and iris$Sepal.Length > t = -1, df = 148, p-value = 0.2 > alternative hypothesis: true correlation is not equal to 0 > 95 percent confidence interval: > -0.273 0.044 > sample estimates: > cor > -0.12"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"bayesian-correlation","dir":"Articles","previous_headings":"Correlations","what":"Bayesian correlation","title":"2. Confirmation of Bayesian skills","text":"To compute a Bayesian correlation test, we will need the BayesFactor package (which you can install by running install.packages(\"BayesFactor\")). We can then load this package, compute the correlation using the correlationBF() function, and store the result. Now, let us run our describe_posterior() function on that: We see again many things here, but the important index for now is the median of the posterior distribution, -.11. This is (again) quite close to the frequentist correlation. We could, as previously, describe the credible interval, the pd or the ROPE percentage, but we will focus here on another index provided by the Bayesian framework, the Bayes factor (BF).","code":"library(BayesFactor) result <- correlationBF(iris$Sepal.Width, iris$Sepal.Length) describe_posterior(result) > Summary of Posterior Distribution > > Parameter | Median | 95% CI | pd | ROPE | % in ROPE | BF | Prior > ----------------------------------------------------------------------------------------------- > rho | -0.11 | [-0.27, 0.04] | 92.25% | [-0.05, 0.05] | 20.42% | 0.509 | Beta (3 +- 3)"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"bayes-factor-bf","dir":"Articles","previous_headings":"Correlations","what":"Bayes Factor (BF)","title":"2. Confirmation of Bayesian skills","text":"We said previously that a correlation test actually compares two hypotheses: a null (absence of effect) and an alternative one (presence of an effect). The Bayes factor (BF) allows the same comparison and determines under which of the two models the observed data are more probable: a model with the effect of interest, or a null model without the effect of interest. So, in the context of our correlation example, the null hypothesis would be no correlation between the two variables (h0: \\rho = 0; where \\rho stands for the Bayesian correlation coefficient), while the alternative hypothesis would be that the correlation is different from 0 - positive or negative (h1: \\rho \\neq 0).
We can use bayesfactor_models() to specifically compute the Bayes factor comparing those models: We got a BF of 0.51. What does that mean? Bayes factors are continuous measures of relative evidence, with a Bayes factor greater than 1 giving evidence in favour of one of the models (often referred to as the numerator), and a Bayes factor smaller than 1 giving evidence in favour of the other model (the denominator). Yes, you heard that right, evidence in favour of the null! That’s one of the reasons why the Bayesian framework is sometimes considered superior to the frequentist framework. Remember from your stats lessons that the p-value can only be used to reject h0, not to accept it. With the Bayes factor, we can measure evidence against - and in favour of - the null. In other words, in the frequentist framework, if the p-value is not significant, we can only conclude that evidence for the effect is absent, not that there is evidence for the absence of the effect. In the Bayesian framework, we can do the latter. This is important, since sometimes our hypotheses are about no effect. BFs representing evidence for the alternative against the null can be reversed using BF_{01}=1/BF_{10} (01 and 10 correspond to h0 against h1 and h1 against h0, respectively) to provide evidence of the null against the alternative. This improves human readability in cases where the BF of the alternative against the null is smaller than 1 (i.e., in support of the null). In our case, BF = 1/0.51 = 2 indicates that the data are 2 times more probable under the null compared to the alternative hypothesis, which, though favouring the null, is considered only anecdotal evidence for the null. We can thus conclude that there is anecdotal evidence in favour of an absence of correlation between the two variables (rmedian = -0.11, BF = 0.51), which is a much more informative statement than what we can say with frequentist statistics. And that’s not all!","code":"bayesfactor_models(result) > Bayes Factors for Model Comparison > > Model BF > [2] (rho != 0) 0.509 > > * Against Denominator: [1] (rho = 0) > * Bayes Factor Type: JZS (BayesFactor)"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"visualise-the-bayes-factor","dir":"Articles","previous_headings":"Correlations","what":"Visualise the Bayes factor","title":"2. Confirmation of Bayesian skills","text":"In general, pie charts are an absolute no-go in data visualisation, as our brain’s perceptive system heavily distorts the information presented in such a way. Nevertheless, there is one exception: pizza charts. An intuitive way of interpreting the strength of evidence provided by BFs is as an amount of surprise, following Wagenmakers’ pizza poking analogy, described in this great blog post. Such “pizza plots” can be directly created through the see visualisation companion package for easystats (which you can install by running install.packages(\"see\")): So, seeing this pizza, how much would you be surprised by the outcome of a blinded poke?","code":"library(see) plot(bayesfactor_models(result)) + scale_fill_pizza()"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"t-tests","dir":"Articles","previous_headings":"","what":"t-tests","title":"2. Confirmation of Bayesian skills","text":"“I know that I know nothing, and especially not whether versicolor and virginica differ in terms of their Sepal.Width” - Socrates. Time to finally answer this crucial question!","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"versicolor-vs--virginica","dir":"Articles","previous_headings":"t-tests","what":"Versicolor vs. virginica","title":"2. Confirmation of Bayesian skills","text":"Bayesian t-tests can be performed in a very similar way to correlations. As we are particularly interested in two levels of the Species factor, versicolor and virginica, we will start by filtering out of iris the non-relevant observations corresponding to the setosa species, and then we will visualise the observations and the distribution of the Sepal.Width variable. It seems (visually) that virginica flowers have, on average, a slightly higher width of sepals. 
Let’s assess this difference statistically by using the ttestBF() function of the BayesFactor package.","code":"library(datawizard) library(ggplot2) # Select only two relevant species data <- droplevels(data_filter(iris, Species != \"setosa\")) # Visualise distributions and observations ggplot(data, aes(x = Species, y = Sepal.Width, fill = Species)) + geom_violindot(fill_dots = \"black\", size_dots = 1) + scale_fill_material() + theme_modern()"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"compute-the-bayesian-t-test","dir":"Articles","previous_headings":"t-tests","what":"Compute the Bayesian t-test","title":"2. Confirmation of Bayesian skills","text":"From the indices, we can say that the difference of Sepal.Width between virginica and versicolor has a probability of about 100% of being negative [from the pd and the sign of the median] (median = -0.19, 89% CI [-0.29, -0.092]). The data provides strong evidence against the null hypothesis (BF = 18). Keep that in mind, as we will see another way of investigating this question.","code":"result <- BayesFactor::ttestBF(formula = Sepal.Width ~ Species, data = data) describe_posterior(result) > Summary of Posterior Distribution > > Parameter | Median | 95% CI | pd | ROPE | % in ROPE > ------------------------------------------------------------------------- > Difference | -0.19 | [-0.32, -0.06] | 99.75% | [-0.03, 0.03] | 0% > > Parameter | BF | Prior > --------------------------------------- > Difference | 17.72 | Cauchy (0 +- 0.71)"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"logistic-model","dir":"Articles","previous_headings":"","what":"Logistic Model","title":"2. Confirmation of Bayesian skills","text":"A hypothesis for which one uses a t-test can also be tested using a binomial model (e.g., a logistic model). Indeed, it is possible to reformulate the hypothesis “there is an important difference in this variable between the two groups” with the hypothesis “this variable is able to discriminate between (classify) the two groups”. However, these models are much more powerful than a t-test. In the case of the difference of Sepal.Width between virginica and versicolor, the question becomes: how well can we classify the two species using only Sepal.Width?","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"fit-the-model","dir":"Articles","previous_headings":"Logistic Model","what":"Fit the model","title":"2. Confirmation of Bayesian skills","text":"","code":"library(rstanarm) model <- stan_glm(Species ~ Sepal.Width, data = data, family = \"binomial\", chains = 10, iter = 5000, warmup = 1000, refresh = 0 )"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"visualise-the-model","dir":"Articles","previous_headings":"Logistic Model","what":"Visualise the model","title":"2. Confirmation of Bayesian skills","text":"Using the modelbased package.","code":"library(modelbased) vizdata <- estimate_relation(model) ggplot(vizdata, aes(x = Sepal.Width, y = Predicted)) + geom_ribbon(aes(ymin = CI_low, ymax = CI_high), alpha = 0.5) + geom_line() + ylab(\"Probability of being virginica\") + theme_modern()"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"performance-and-parameters","dir":"Articles","previous_headings":"Logistic Model","what":"Performance and Parameters","title":"2. 
Confirmation of Bayesian skills","text":"Now we can extract the indices of interest from the posterior distribution by using our old pal describe_posterior().","code":"describe_posterior(model, test = c(\"pd\", \"ROPE\", \"BF\")) > Summary of Posterior Distribution > > Parameter | Median | 95% CI | pd | ROPE | % in ROPE > --------------------------------------------------------------------------- > (Intercept) | -6.12 | [-10.45, -2.25] | 99.92% | [-0.18, 0.18] | 0% > Sepal.Width | 2.13 | [ 0.79, 3.63] | 99.94% | [-0.18, 0.18] | 0% > > Parameter | BF | Rhat | ESS > -------------------------------------- > (Intercept) | 13.38 | 1.000 | 26540.00 > Sepal.Width | 11.97 | 1.000 | 26693.00 library(performance) model_performance(model) > # Indices of model performance > > ELPD | ELPD_SE | LOOIC | LOOIC_SE | WAIC | R2 | RMSE | Sigma > ------------------------------------------------------------------------ > -66.284 | 3.052 | 132.568 | 6.104 | 132.562 | 0.099 | 0.477 | 1.000 > > ELPD | Log_loss | Score_log | Score_spherical > ------------------------------------------------ > -66.284 | 0.643 | -35.436 | 0.014"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"visualise-the-indices","dir":"Articles","previous_headings":"Logistic Model","what":"Visualise the indices","title":"2. Confirmation of Bayesian skills","text":".","code":"library(see) plot(rope(result))"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"diagnostic-indices","dir":"Articles","previous_headings":"Logistic Model","what":"Diagnostic Indices","title":"2. Confirmation of Bayesian skills","text":"About diagnostic indices such as Rhat and ESS.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example3.html","id":"mixed-models","dir":"Articles","previous_headings":"","what":"Mixed Models","title":"3. Become a Bayesian master","text":"TO BE CONTINUED.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example3.html","id":"priors","dir":"Articles","previous_headings":"Mixed Models","what":"Priors","title":"3. Become a Bayesian master","text":"TO BE CONTINUED.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example3.html","id":"whats-next","dir":"Articles","previous_headings":"","what":"What’s next?","title":"3. Become a Bayesian master","text":"The journey to become a true Bayesian master is not yet over. It is merely the beginning. It is now time to leave the bayestestR universe and apply the Bayesian framework in a variety of other statistical contexts: Marginal means; Contrast analysis; Testing Contrasts from Bayesian Models with ‘emmeans’ and ‘bayestestR’","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"how-to-describe-and-report-the-parameters-of-a-model","dir":"Articles","previous_headings":"Reporting Guidelines","what":"How to describe and report the parameters of a model","title":"Reporting Guidelines","text":"A Bayesian analysis returns a posterior distribution for each parameter (or effect). To minimally describe these distributions, we recommend reporting a point-estimate of centrality as well as information characterizing the estimation uncertainty (the dispersion). Additionally, one can also report indices of effect existence and/or significance. 
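(In practice, all of these recommended quantities can be obtained in one call. A minimal sketch, where model stands for any hypothetical fitted Bayesian model:)
library(bayestestR)
# median + 89% HDI + pd + % in ROPE in a single call
describe_posterior(model, centrality = "median", ci = 0.89,
                   ci_method = "hdi", test = c("p_direction", "rope"))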
Based on a previous comparison of point-estimates and of indices of effect existence, we can draw the following recommendations.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"centrality","dir":"Articles","previous_headings":"Reporting Guidelines > How to describe and report the parameters of a model","what":"Centrality","title":"Reporting Guidelines","text":"We suggest reporting the median as an index of centrality, as it is more robust compared to the mean or the MAP estimate. However, in the case of a severely skewed posterior distribution, the MAP estimate could be a good alternative.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"uncertainty","dir":"Articles","previous_headings":"Reporting Guidelines > How to describe and report the parameters of a model","what":"Uncertainty","title":"Reporting Guidelines","text":"The 95% or the 89% Credible Intervals (CI) are two reasonable ranges to characterize the uncertainty related to the estimation (see here for a discussion about the differences between these two values). We also recommend computing the CIs based on the HDI rather than on quantiles, favouring probable over central values. Note that a CI based on quantiles (an equal-tailed interval) might be more appropriate in the case of transformations (for instance when transforming log-odds to probabilities). Otherwise, intervals that originally cover the null might not cover it anymore after the transformation (see here).","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"existence","dir":"Articles","previous_headings":"Reporting Guidelines > How to describe and report the parameters of a model","what":"Existence","title":"Reporting Guidelines","text":"Reviewer 2 (circa a long time ago in a galaxy far away). The Bayesian framework can neatly delineate and quantify different aspects of hypothesis testing, such as effect existence and significance. The most straightforward index to describe the existence of an effect is the Probability of Direction (pd), representing the certainty associated with the most probable direction (positive or negative) of the effect. This index is easy to understand, simple to interpret, straightforward to compute, robust to model characteristics, and independent from the scale of the data. Moreover, it is strongly correlated with the frequentist p-value, and can thus be used to draw parallels and give some reference to readers non-familiar with Bayesian statistics. A two-sided p-value of respectively .1, .05, .01 and .001 corresponds approximately to a pd of 95%, 97.5%, 99.5% and 99.95%. Thus, for convenience, we suggest the following reference values as interpretation helpers: pd <= 95% ~ p > .1: uncertain; pd > 95% ~ p < .1: possibly existing; pd > 97%: likely existing; pd > 99%: probably existing; pd > 99.9%: certainly existing","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"significance","dir":"Articles","previous_headings":"Reporting Guidelines > How to describe and report the parameters of a model","what":"Significance","title":"Reporting Guidelines","text":"The percentage in ROPE is an index of significance (in its primary meaning), informing us whether a parameter is related to a non-negligible change (in terms of magnitude) in the outcome. We suggest reporting the percentage of the full posterior distribution (the full ROPE) instead of a given proportion of the CI in the ROPE, which appears to be more sensitive (especially to delineate highly significant effects). Rather than using it as a binary, all-or-nothing decision criterion, as suggested by the original equivalence test, we recommend using the percentage as a continuous index of significance. 
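(The full-posterior version recommended above can be requested by setting ci = 1. A minimal sketch, with model again a hypothetical fitted Bayesian model:)
library(bayestestR)
# percentage of the entire posterior distribution inside the ROPE
rope(model, ci = 1)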
However, based on simulation data, we suggest the following reference values as interpretation helpers: > 99% in ROPE: negligible (we can accept the null hypothesis); > 97.5% in ROPE: probably negligible; <= 97.5% & >= 2.5% in ROPE: undecided significance; < 2.5% in ROPE: probably significant; < 1% in ROPE: significant (we can reject the null hypothesis). Note that extra caution is required, as this interpretation highly depends on other parameters such as sample size and ROPE range (see here).","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"template-sentence","dir":"Articles","previous_headings":"Reporting Guidelines > How to describe and report the parameters of a model","what":"Template Sentence","title":"Reporting Guidelines","text":"Based on these suggestions, a template sentence for the minimal reporting of a parameter based on its posterior distribution could be: “the effect of X has a probability of pd of being negative (Median = median, 89% CI [ HDIlow , HDIhigh ] and can be considered as significant [ROPE% in ROPE]).”","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"how-to-compare-different-models","dir":"Articles","previous_headings":"Reporting Guidelines","what":"How to compare different models","title":"Reporting Guidelines","text":"Although it can also be used to assess effect existence and significance, the Bayes factor (BF) is a versatile index that can be used to directly compare different models (or data generation processes). The Bayes factor is a ratio that informs us by how much more (or less) likely the observed data are under each of two compared models - usually a model with versus a model without the effect. Depending on the specification of the null model (whether it is a point-estimate (e.g., 0) or an interval), the Bayes factor can be used in the context of effect existence or significance. In general, a Bayes factor greater than 1 is taken as evidence in favour of one of the models (the numerator), and a Bayes factor smaller than 1 as evidence in favour of the other model (the denominator). Several rules of thumb exist to help with its interpretation (see here), with > 3 being one common threshold to categorize non-anecdotal evidence.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"template-sentence-1","dir":"Articles","previous_headings":"Reporting Guidelines > How to compare different models","what":"Template Sentence","title":"Reporting Guidelines","text":"When reporting Bayes factors (BF), one can use the following sentence: “There is moderate evidence in favour of an absence of effect of x (BF = BF).”","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"suggestions","dir":"Articles","previous_headings":"","what":"Suggestions","title":"Reporting Guidelines","text":"If you have any advice or opinion about these guidelines, we encourage you to let us know by opening a discussion thread or making a pull request.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/indicesExistenceComparison.html","id":"indices-of-effect-existence-and-significance-in-the-bayesian-framework","dir":"Articles","previous_headings":"","what":"Indices of Effect Existence and Significance in the Bayesian Framework","title":"In-Depth 2: Comparison of Indices of Effect Existence and Significance","text":"A comparison of the different Bayesian indices (pd, BFs, ROPE etc.) is accessible here. 
But, in case you don’t wish to read the full article, the following table summarizes its key takeaways!","code":""},{"path":"https://easystats.github.io/bayestestR/articles/indicesExistenceComparison.html","id":"suggestions","dir":"Articles","previous_headings":"","what":"Suggestions","title":"In-Depth 2: Comparison of Indices of Effect Existence and Significance","text":"If you have any advice or opinion about these indices, we encourage you to let us know by opening a discussion thread or making a pull request.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/mediation.html","id":"mediation-analysis-in-brms-and-rstanarm","dir":"Articles","previous_headings":"","what":"Mediation Analysis in brms and rstanarm","title":"Mediation Analysis using Bayesian Regression Models","text":"mediation() is a summary function, especially for mediation analysis, i.e. for multivariate response models with causal mediation effects. In the models m2 and m3, treat is the treatment effect and job_seek is the mediator effect. For the brms model (m2), f1 describes the mediator model and f2 describes the outcome model. It is similar for the rstanarm model. mediation() returns a data frame with information on the direct effect (the median value of the posterior samples from treatment of the outcome model), the mediator effect (the median value of the posterior samples from mediator of the outcome model), the indirect effect (the median value of the multiplication of the posterior samples from mediator of the outcome model and the posterior samples from treatment of the mediation model) and the total effect (the median value of the sums of the posterior samples used for the direct and indirect effects). The proportion mediated is the indirect effect divided by the total effect. The simplest call just needs the model-object. Typically, mediation() finds the treatment and mediator variables automatically. If this does not work, use the treatment and mediator arguments to specify the related variable names. For all values, the 89% credible intervals are calculated by default. Use ci to calculate a different interval.","code":"library(bayestestR) library(mediation) library(brms) library(rstanarm) # load sample data data(jobs) set.seed(123) # linear models, for mediation analysis b1 <- lm(job_seek ~ treat + econ_hard + sex + age, data = jobs) b2 <- lm(depress2 ~ treat + job_seek + econ_hard + sex + age, data = jobs) # mediation analysis, for comparison with brms m1 <- mediate(b1, b2, sims = 1000, treat = \"treat\", mediator = \"job_seek\") # Fit Bayesian mediation model in brms f1 <- bf(job_seek ~ treat + econ_hard + sex + age) f2 <- bf(depress2 ~ treat + job_seek + econ_hard + sex + age) m2 <- brm(f1 + f2 + set_rescor(FALSE), data = jobs, refresh = 0) # Fit Bayesian mediation model in rstanarm m3 <- stan_mvmer( list( job_seek ~ treat + econ_hard + sex + age + (1 | occp), depress2 ~ treat + job_seek + econ_hard + sex + age + (1 | occp) ), data = jobs, refresh = 0 ) # for brms mediation(m2) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.124, 0.046] #> Indirect Effect (ACME) | -0.015 | [-0.041, 0.008] #> Mediator Effect | -0.240 | [-0.294, -0.185] #> Total Effect | -0.055 | [-0.145, 0.034] #> #> Proportion mediated: 28.14% [-181.46%, 237.75%] # for rstanarm mediation(m3) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.129, 0.048] #> Indirect Effect (ACME) | -0.018 | [-0.042, 0.006] #> Mediator Effect | -0.241 | [-0.296, -0.187] #> Total Effect | -0.057 | [-0.151, 0.033] #> #> Proportion mediated: 30.59% [-221.09%, 
282.26%]"},{"path":"https://easystats.github.io/bayestestR/articles/mediation.html","id":"comparison-to-the-mediation-package","dir":"Articles","previous_headings":"","what":"Comparison to the mediation package","title":"Mediation Analysis using Bayesian Regression Models","text":"Here is a comparison with the mediation package. Note that the summary()-output of the mediation package shows the indirect effect first, followed by the direct effect. If you want to calculate the mean instead of the median values from the posterior samples, use the centrality-argument. Furthermore, there is a print()-method, which allows printing more digits. As you can see, the results are similar to what the mediation package produces for non-Bayesian models.","code":"summary(m1) #> #> Causal Mediation Analysis #> #> Quasi-Bayesian Confidence Intervals #> #> Estimate 95% CI Lower 95% CI Upper p-value #> ACME -0.0157 -0.0387 0.01 0.19 #> ADE -0.0438 -0.1315 0.04 0.35 #> Total Effect -0.0595 -0.1530 0.02 0.21 #> Prop. Mediated 0.2137 -2.0277 2.70 0.32 #> #> Sample Size Used: 899 #> #> #> Simulations: 1000 mediation(m2, ci = 0.95) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.124, 0.046] #> Indirect Effect (ACME) | -0.015 | [-0.041, 0.008] #> Mediator Effect | -0.240 | [-0.294, -0.185] #> Total Effect | -0.055 | [-0.145, 0.034] #> #> Proportion mediated: 28.14% [-181.46%, 237.75%] mediation(m3, ci = 0.95) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.129, 0.048] #> Indirect Effect (ACME) | -0.018 | [-0.042, 0.006] #> Mediator Effect | -0.241 | [-0.296, -0.187] #> Total Effect | -0.057 | [-0.151, 0.033] #> #> Proportion mediated: 30.59% [-221.09%, 282.26%] m <- mediation(m2, centrality = \"mean\", ci = 0.95) print(m, digits = 4) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ------------------------------------------------------ #> Direct Effect (ADE) | -0.0395 | [-0.1237, 0.0456] #> Indirect Effect (ACME) | -0.0158 | [-0.0405, 0.0083] #> Mediator Effect | -0.2401 | [-0.2944, -0.1846] #> Total Effect | -0.0553 | [-0.1454, 0.0341] #> #> Proportion mediated: 28.60% [-181.01%, 238.20%]"},{"path":"https://easystats.github.io/bayestestR/articles/mediation.html","id":"comparison-to-sem-from-the-lavaan-package","dir":"Articles","previous_headings":"","what":"Comparison to SEM from the lavaan package","title":"Mediation Analysis using Bayesian Regression Models","text":"Finally, we also compare the results to a SEM model, fitted with the lavaan package. This example should demonstrate how to “translate” the model between different packages and modeling approaches. 
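(Incidentally, the "Proportion mediated" rows shown above are simply the indirect effect divided by the total effect. A back-of-the-envelope check with the rounded point estimates, which therefore only approximately matches the 28.14% computed on the full posterior:)
indirect <- -0.015     # ACME from the mediation(m2) output above
total <- -0.055        # total effect
100 * indirect / total # ~27%, close to the reported 28.14%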
The summary output from lavaan is longer, but we can find the related numbers quite easily: the direct effect of the treatment is treat (c1), which is -0.040; the indirect effect of the treatment is indirect_treat, which is -0.016; the mediator effect of job_seek is job_seek (b), which is -0.240; the total effect is total_treat, which is -0.056.","code":"library(lavaan) data(jobs) set.seed(1234) model <- \" # direct effects depress2 ~ c1*treat + c2*econ_hard + c3*sex + c4*age + b*job_seek # mediation job_seek ~ a1*treat + a2*econ_hard + a3*sex + a4*age # indirect effects (a*b) indirect_treat := a1*b indirect_econ_hard := a2*b indirect_sex := a3*b indirect_age := a4*b # total effects total_treat := c1 + (a1*b) total_econ_hard := c2 + (a2*b) total_sex := c3 + (a3*b) total_age := c4 + (a4*b) \" m4 <- sem(model, data = jobs) summary(m4) #> lavaan 0.6-19 ended normally after 1 iteration #> #> Estimator ML #> Optimization method NLMINB #> Number of model parameters 11 #> #> Number of observations 899 #> #> Model Test User Model: #> #> Test statistic 0.000 #> Degrees of freedom 0 #> #> Parameter Estimates: #> #> Standard errors Standard #> Information Expected #> Information saturated (h1) model Structured #> #> Regressions: #> Estimate Std.Err z-value P(>|z|) #> depress2 ~ #> treat (c1) -0.040 0.043 -0.929 0.353 #> econ_hard (c2) 0.149 0.021 7.156 0.000 #> sex (c3) 0.107 0.041 2.604 0.009 #> age (c4) 0.001 0.002 0.332 0.740 #> job_seek (b) -0.240 0.028 -8.524 0.000 #> job_seek ~ #> treat (a1) 0.066 0.051 1.278 0.201 #> econ_hard (a2) 0.053 0.025 2.167 0.030 #> sex (a3) -0.008 0.049 -0.157 0.875 #> age (a4) 0.005 0.002 1.983 0.047 #> #> Variances: #> Estimate Std.Err z-value P(>|z|) #> .depress2 0.373 0.018 21.201 0.000 #> .job_seek 0.524 0.025 21.201 0.000 #> #> Defined Parameters: #> Estimate Std.Err z-value P(>|z|) #> indirect_treat -0.016 0.012 -1.264 0.206 #> indirct_cn_hrd -0.013 0.006 -2.100 0.036 #> indirect_sex 0.002 0.012 0.157 0.875 #> indirect_age -0.001 0.001 -1.932 0.053 #> total_treat -0.056 0.045 -1.244 0.214 #> total_econ_hrd 0.136 0.022 6.309 0.000 #> total_sex 0.109 0.043 2.548 0.011 #> total_age -0.000 0.002 -0.223 0.824 # just to have the numbers right at hand and you don't need to scroll up mediation(m2, ci = 0.95) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.124, 0.046] #> Indirect Effect (ACME) | -0.015 | [-0.041, 0.008] #> Mediator Effect | -0.240 | [-0.294, -0.185] #> Total Effect | -0.055 | [-0.145, 0.034] #> #> Proportion mediated: 28.14% [-181.46%, 237.75%]"},{"path":"https://easystats.github.io/bayestestR/articles/overview_of_vignettes.html","id":"function-overview","dir":"Articles","previous_headings":"","what":"Function Overview","title":"Overview of Vignettes","text":"Function Reference","code":""},{"path":"https://easystats.github.io/bayestestR/articles/overview_of_vignettes.html","id":"get-started","dir":"Articles","previous_headings":"","what":"Get Started","title":"Overview of Vignettes","text":"Get Started with Bayesian Analysis","code":""},{"path":"https://easystats.github.io/bayestestR/articles/overview_of_vignettes.html","id":"examples","dir":"Articles","previous_headings":"","what":"Examples","title":"Overview of Vignettes","text":"Initiation to Bayesian models; Confirmation of Bayesian skills; Become a Bayesian 
master","code":""},{"path":"https://easystats.github.io/bayestestR/articles/overview_of_vignettes.html","id":"articles","dir":"Articles","previous_headings":"","what":"Articles","title":"Overview of Vignettes","text":"Credible Intervals (CI)) Region Practical Equivalence (ROPE) Probability Direction (pd) Bayes Factors","code":""},{"path":"https://easystats.github.io/bayestestR/articles/overview_of_vignettes.html","id":"in-depths","dir":"Articles","previous_headings":"","what":"In-Depths","title":"Overview of Vignettes","text":"Comparison Point-Estimates Indices Effect Existence Significance Bayesian Framework Mediation Analysis using Bayesian Regression Models","code":""},{"path":"https://easystats.github.io/bayestestR/articles/overview_of_vignettes.html","id":"guidelines","dir":"Articles","previous_headings":"","what":"Guidelines","title":"Overview of Vignettes","text":"Reporting Guidelines","code":""},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"what-is-the-pd","dir":"Articles","previous_headings":"","what":"What is the pd?","title":"Probability of Direction (pd)","text":"Probability Direction (pd) index effect existence, ranging 50% 100%, representing certainty effect goes particular direction (.e., positive negative). Beyond simplicity interpretation, understanding computation, index also presents interesting properties: independent model: solely based posterior distributions require additional information data model. robust scale response variable predictors. strongly correlated frequentist p-value, can thus used draw parallels give reference readers non-familiar Bayesian statistics. However, index relevant assess magnitude importance effect (meaning “significance”), better achieved indices ROPE percentage. fact, indices significance existence totally independent. can effect pd 99.99%, whole posterior distribution concentrated within [0.0001, 0.0002] range. case, effect positive high certainty, also significant (.e., small). Indices effect existence, pd, particularly useful exploratory research clinical studies, focus make sure effect interest opposite direction (clinical studies, treatment harmful). However, effect’s direction confirmed, focus shift toward significance, including precise estimation magnitude, relevance importance.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"relationship-with-the-p-value","dir":"Articles","previous_headings":"","what":"Relationship with the p-value","title":"Probability of Direction (pd)","text":"cases, seems pd direct correspondence frequentist one-sided p-value formula: p_{one-sided} = 1-p_d Similarly, two-sided p-value (commonly reported one) equivalent formula: p_{two-sided} = 2*(1-p_d) Thus, two-sided p-value respectively .1, .05, .01 .001 correspond approximately pd 95%, 97.5%, 99.5% 99.95% . Correlation frequentist p-value probability direction (pd) ’s like p-value, must bad p-value bad [insert reference reproducibility crisis]. fact, aspect reproducibility crisis might misunderstood. Indeed, p-value intrinsically bad wrong. Instead, misuse, misunderstanding misinterpretation fuels decay situation. instance, fact pd highly correlated p-value suggests latter index effect existence significance (.e., “worth interest”). Bayesian version, pd, intuitive meaning makes obvious fact thresholds arbitrary. 
Additionally, the mathematical and interpretative transparency of the pd, and its reconceptualisation as an index of effect existence, offer a valuable insight into the characterization of Bayesian results. Moreover, its concomitant proximity with the frequentist p-value makes it a perfect metric to ease the transition of psychological research to the adoption of the Bayesian framework.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"methods-of-computation","dir":"Articles","previous_headings":"","what":"Methods of computation","title":"Probability of Direction (pd)","text":"The most simple and direct way to compute the pd is to 1) look at the median’s sign, 2) select the portion of the posterior of the same sign and 3) compute the percentage that this portion represents. This “simple” method is the most straightforward, but its precision is directly tied to the number of posterior draws. The second approach relies on density estimation: it starts by estimating the density function (for which many methods are available), and then computing the area under the curve (AUC) of the density curve on the other side of 0. The density-based method could hypothetically be considered as more precise, but it strongly depends on the method used to estimate the density function.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"methods-comparison","dir":"Articles","previous_headings":"","what":"Methods comparison","title":"Probability of Direction (pd)","text":"Let’s compare the 4 available methods, the direct method and 3 density-based methods differing by their density estimation algorithm (see estimate_density).","code":""},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"correlation","dir":"Articles","previous_headings":"Methods comparison","what":"Correlation","title":"Probability of Direction (pd)","text":"Let’s start by testing the proximity and similarity of the results obtained by the different methods. All methods give highly correlated and very similar results. That means that the choice of method is not a drastic game changer and cannot be used to tweak the results too much.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"accuracy","dir":"Articles","previous_headings":"Methods comparison","what":"Accuracy","title":"Probability of Direction (pd)","text":"To test the accuracy of each method, we will start by computing the direct pd from a very dense distribution (with a large amount of observations). This will be our baseline, or “true” pd. Then, we will iteratively draw smaller samples from this parent distribution, and compute the pd with the different methods. The closer this estimate is to the reference one, the better. The “Kernel” based density methods seem to consistently underestimate the pd. 
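(To make the "direct" method concrete, here is a toy sketch on simulated draws; this is an illustration, not bayestestR's actual implementation:)
set.seed(1)
posterior <- rnorm(1000, mean = 1, sd = 1) # stand-in posterior draws
# share of draws on the same side of zero as the median, in %
100 * max(mean(posterior > 0), mean(posterior < 0))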
Interestingly, the “direct” method appears to be more reliable, even in the case of a small number of posterior draws.","code":"data <- data.frame() for (i in 1:25) { the_mean <- runif(1, 0, 4) the_sd <- abs(runif(1, 0.5, 4)) parent_distribution <- rnorm(100000, the_mean, the_sd) true_pd <- as.numeric(pd(parent_distribution)) for (j in 1:25) { sample_size <- round(runif(1, 25, 5000)) subsample <- sample(parent_distribution, sample_size) data <- rbind( data, data.frame( sample_size = sample_size, true = true_pd, direct = as.numeric(pd(subsample)) - true_pd, kernel = as.numeric(pd(subsample, method = \"kernel\")) - true_pd, logspline = as.numeric(pd(subsample, method = \"logspline\")) - true_pd, KernSmooth = as.numeric(pd(subsample, method = \"KernSmooth\")) - true_pd ) ) } } data <- as.data.frame(sapply(data, as.numeric)) library(datawizard) # for reshape_longer data <- reshape_longer(data, select = 3:6, names_to = \"Method\", values_to = \"Distance\") ggplot(data, aes(x = sample_size, y = Distance, color = Method, fill = Method)) + geom_point(alpha = 0.3, stroke = 0, shape = 16) + geom_smooth(alpha = 0.2) + geom_hline(yintercept = 0) + theme_classic() + xlab(\"\\nDistribution Size\")"},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"can-the-pd-be-100","dir":"Articles","previous_headings":"Methods comparison","what":"Can the pd be 100%?","title":"Probability of Direction (pd)","text":"p = 0.000 has been coined as one of the terms to avoid when reporting results (Lilienfeld et al., 2015), even though it is often displayed by statistical software. The rationale is that, for every probability distribution, there is no value with a probability of exactly 0. There is always some infinitesimal probability associated with each data point, and a p = 0.000 returned by software is due to approximations related, among others, to the finite memory of the hardware. One could apply the same rationale to the pd: since all data points have a non-null probability density, the pd (which is a particular portion of the probability density) can never be 100%. While this is an entirely valid point, people using the direct method might argue that their pd is based on the posterior draws, rather than on the theoretical, hidden, true posterior distribution (which is only approximated by the posterior draws). As these posterior draws represent a finite sample, pd = 100% is a valid statement.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"what-is-the-rope","dir":"Articles","previous_headings":"","what":"What is the ROPE?","title":"Region of Practical Equivalence (ROPE)","text":"Unlike the frequentist approach, Bayesian inference is not based on statistical significance, where effects are tested against “zero”. Indeed, the Bayesian framework offers a probabilistic view of the parameters, allowing assessment of the uncertainty related to them. Thus, rather than concluding that an effect is present when it simply differs from zero, we would conclude that the probability of it being outside a specific range that can be considered as “practically no effect” (i.e., of negligible magnitude) is sufficient. This range is called the region of practical equivalence (ROPE). Indeed, statistically, the probability of a posterior distribution being different from 0 does not make much sense (the probability of it being different from a single point being infinite). Therefore, the idea underlying the ROPE is to let the user define an area around the null value enclosing values that are equivalent to the null value for practical purposes (J. Kruschke, 2014; J. K. Kruschke, 2010; J. K. 
Kruschke, Aguinis, & Joo, 2012).","code":""},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"equivalence-test","dir":"Articles","previous_headings":"","what":"Equivalence Test","title":"Region of Practical Equivalence (ROPE)","text":"The ROPE, being a region corresponding to the “null” hypothesis, is used for the equivalence test, to test whether a parameter is significant (in the sense of being important enough to be cared about). This test is usually based on the “HDI+ROPE decision rule” (J. Kruschke, 2014; J. K. Kruschke & Liddell, 2018) to check whether parameter values should be accepted or rejected against an explicitly formulated “null hypothesis” (i.e., a ROPE). In other words, it checks the percentage of the Credible Interval (CI) that lies within the null region (the ROPE). If this percentage is sufficiently low, the null hypothesis is rejected. If this percentage is sufficiently high, the null hypothesis is accepted.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"credible-interval-in-rope-vs-full-posterior-in-rope","dir":"Articles","previous_headings":"","what":"Credible interval in ROPE vs full posterior in ROPE","title":"Region of Practical Equivalence (ROPE)","text":"Using the ROPE and the HDI as the Credible Interval, Kruschke (2018) suggests using the percentage of the 95% HDI that falls within the ROPE as a decision rule. However, as the 89% HDI is considered a better choice (J. Kruschke, 2014; R. McElreath, 2014; Richard McElreath, 2018), bayestestR provides by default the percentage of the 89% HDI that falls within the ROPE. However, simulation studies suggest that using the percentage of the full posterior distribution, instead of the CI, might be more sensitive (especially for delineating highly significant effects). Thus, we recommend that the user considers using the full ROPE percentage (by setting ci = 1), which will return the portion of the entire posterior distribution that lies in the ROPE.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"what-percentage-in-rope-to-accept-or-to-reject","dir":"Articles","previous_headings":"","what":"What percentage in ROPE to accept or to reject?","title":"Region of Practical Equivalence (ROPE)","text":"If the HDI is completely outside the ROPE, the “null hypothesis” for this parameter is “rejected”. If the ROPE completely covers the HDI, i.e., all most credible values of a parameter are inside the region of practical equivalence, the null hypothesis is accepted. Else, it’s unclear whether the null hypothesis should be accepted or rejected. If the full ROPE is used (i.e., 100% of the HDI), the null hypothesis is rejected or accepted if the percentage of the posterior within the ROPE is smaller than 2.5% or greater than 97.5%, respectively. Desirable results are low proportions inside the ROPE (the closer to zero the better).","code":""},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"how-to-define-the-rope-range","dir":"Articles","previous_headings":"","what":"How to define the ROPE range?","title":"Region of Practical Equivalence (ROPE)","text":"Kruschke (2018) suggests that the ROPE could be set, by default, to the range from -0.1 to 0.1 of a standardized parameter (a negligible effect size according to Cohen, 1988). For linear models (lm), this can be generalised to [-0.1*SD_{y}, 0.1*SD_{y}]. For logistic models, the parameters expressed in log odds ratio can be converted to standardized differences through the formula \pi/\sqrt{3} (see the effectsize package), resulting in a range of -0.18 to 0.18. For models with a binary outcome, it is strongly recommended to manually specify the rope argument. Currently, the same default as for logistic models is applied. For t-tests, the standard deviation of the response is used, similarly to linear models (see above). For correlations, -0.05, 0.05 is used, i.e., half of the value of a negligible correlation as suggested by Cohen’s (1988) rules of thumb. 
For all other models, -0.1, 0.1 is used to determine the ROPE limits, but it is strongly advised to specify it manually.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"sensitivity-to-parameters-scale","dir":"Articles","previous_headings":"","what":"Sensitivity to parameter’s scale","title":"Region of Practical Equivalence (ROPE)","text":"It is important to consider the unit (i.e., the scale) of the predictors when using an index based on the ROPE, as the correct interpretation of the ROPE as representing a region of practical equivalence to zero is dependent on the scale of the predictors. Indeed, unlike other indices (such as the pd), the percentage in ROPE depends on the unit of its parameter. In other words, as the ROPE represents a fixed portion of the response’s scale, its proximity with a coefficient depends on the scale of the coefficient itself. For instance, if we consider a simple regression growth ~ time, modelling the development of Wookies babies, a negligible change (the ROPE) is less than 54 cm. If our time variable is expressed in days, we might find a coefficient (representing the growth by day) of 10 cm (the median of the posterior of the coefficient is 10). We would consider this negligible. However, if we decide to express the time variable in years, the coefficient will be scaled by this transformation (as it will now represent the growth by year). The coefficient will now be around 3550 cm (10 * 355), which we would now consider as significant. We can see that the pd and the percentage in ROPE of the linear relationship between Sepal.Length and Sepal.Width are respectively of 92.95% and 15.95%, corresponding to an uncertain and not significant effect. What happens if we scale the predictor? We can see that by simply dividing the predictor by 100, we drastically changed the conclusion related to the percentage in ROPE (which became close to 0): the effect could now be interpreted as being significant. Thus, we recommend paying close attention to the unit of the predictors when selecting the ROPE range (e.g., what coefficient would correspond to a small effect?), and when reporting or reading ROPE results.","code":"library(rstanarm) library(bayestestR) library(see) data <- iris # Use the iris data model <- stan_glm(Sepal.Length ~ Sepal.Width, data = data) # Fit model # Compute indices pd <- p_direction(model) percentage_in_rope <- rope(model, ci = 1) # Visualise the pd plot(pd) pd # Visualise the percentage in ROPE plot(percentage_in_rope) percentage_in_rope > Probability of Direction > > Parameter | pd > -------------------- > (Intercept) | 100% > Sepal.Width | 91.65% > # Proportion of samples inside the ROPE [-0.08, 0.08]: > > Parameter | inside ROPE > ------------------------- > (Intercept) | 0.00 % > Sepal.Width | 16.28 % data$Sepal.Width_scaled <- data$Sepal.Width / 100 # Divide predictor by 100 model <- stan_glm(Sepal.Length ~ Sepal.Width_scaled, data = data) # Fit model # Compute indices pd <- p_direction(model) percentage_in_rope <- rope(model, ci = 1) # Visualise the pd plot(pd) pd # Visualise the percentage in ROPE plot(percentage_in_rope) percentage_in_rope > Probability of Direction > > Parameter | pd > --------------------------- > (Intercept) | 100% > Sepal.Width_scaled | 91.65% > # Proportion of samples inside the ROPE [-0.08, 0.08]: > > Parameter | inside ROPE > -------------------------------- > (Intercept) | 0.00 % > Sepal.Width_scaled | 0.10 %"},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"multicollinearity-non-independent-covariates","dir":"Articles","previous_headings":"","what":"Multicollinearity: Non-independent covariates","title":"Region of Practical Equivalence (ROPE)","text":"When parameters show strong correlations, i.e., when covariates are not independent, the joint parameter distributions may shift towards or away from the ROPE. Collinearity invalidates ROPE and hypothesis testing based on univariate marginals, as the probabilities are conditional on independence. Most problematic are parameters that only have partial overlap with the ROPE region. 
In case of collinearity, the (joint) distributions of these parameters may either get an increased or decreased ROPE, which means that inferences based on the ROPE are inappropriate (J. Kruschke, 2014). The equivalence_test() and rope() functions perform a simple check for pairwise correlations between parameters, but as there can be collinearity between more than two variables, a first step to check the assumptions of this hypothesis testing is to look at different pair plots. An even more sophisticated check is the projection predictive variable selection (Piironen & Vehtari, 2017).","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"introduction","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework","what":"Introduction","title":"In-Depth 1: Comparison of Point-Estimates","text":"One of the main differences between the Bayesian and the frequentist frameworks is that the former returns a probability distribution for each effect (i.e., each model parameter of interest, such as a regression slope) instead of a single value. However, there is still a need and demand - for reporting or use in further analysis - for a single value (a point-estimate) that best characterises the underlying posterior distribution. There are three main indices used in the literature for effect estimation: - the mean - the median - the MAP (Maximum A Posteriori) estimate (roughly corresponding to the mode - the “peak” - of the distribution) Unfortunately, there is no consensus about which one to use, as no systematic comparison has ever been done. In the present work, we will compare these three point-estimates of effect between themselves, as well as with the widely known beta, extracted from a comparable frequentist model. These comparisons can help us draw bridges and relationships between the two influential statistical frameworks.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"methods","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 1: Relationship with Error (Noise) and Sample Size","what":"Methods","title":"In-Depth 1: Comparison of Point-Estimates","text":"
code used generation available (please note takes usually several days/weeks complete).","code":"library(ggplot2) library(datawizard) library(see) library(parameters) df <- read.csv(\"https://raw.github.com/easystats/circus/main/data/bayesSim_study1.csv\")"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"sensitivity-to-noise","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 1: Relationship with Error (Noise) and Sample Size > Results","what":"Sensitivity to Noise","title":"In-Depth 1: Comparison of Point-Estimates","text":"","code":"dat <- df dat <- data_select(dat, select = c(\"error\", \"true_effect\", \"outcome_type\", \"Coefficient\", \"Median\", \"Mean\", \"MAP\")) dat <- reshape_longer( dat, select = -c(\"error\", \"true_effect\", \"outcome_type\"), names_to = \"estimate\", values_to = \"value\" ) dat$temp <- as.factor(cut(dat$error, 10, labels = FALSE)) tmp <- lapply(split(dat, dat$temp), function(x) { x$error_group <- rep(round(mean(x$error), 1), times = nrow(x)) return(x) }) dat <- do.call(rbind, tmp) dat <- data_filter(dat, value < 6) ggplot(dat, aes(x = error_group, y = value, fill = estimate, group = interaction(estimate, error_group))) + # geom_hline(yintercept = 0) + # geom_point(alpha=0.05, size=2, stroke = 0, shape=16) + # geom_smooth(method=\"loess\") + geom_boxplot(outlier.shape = NA) + theme_modern() + scale_fill_manual( values = c(\"Coefficient\" = \"#607D8B\", \"MAP\" = \"#795548\", \"Mean\" = \"#FF9800\", \"Median\" = \"#FFEB3B\"), name = \"Index\" ) + ylab(\"Point-estimate\") + xlab(\"Noise\") + facet_wrap(~ outcome_type * true_effect, scales = \"free\")"},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"sensitivity-to-sample-size","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 1: Relationship with Error (Noise) and Sample Size > Results","what":"Sensitivity to Sample Size","title":"In-Depth 1: Comparison of Point-Estimates","text":"","code":"dat <- df dat <- data_select(dat, select = c(\"sample_size\", \"true_effect\", \"outcome_type\", \"Coefficient\", \"Median\", \"Mean\", \"MAP\")) dat <- reshape_longer( dat, select = -c(\"sample_size\", \"true_effect\", \"outcome_type\"), names_to = \"estimate\", values_to = \"value\" ) dat$temp <- as.factor(cut(dat$sample_size, 10, labels = FALSE)) tmp <- lapply(split(dat, dat$temp), function(x) { x$size_group <- rep(round(mean(x$sample_size), 1), times = nrow(x)) return(x) }) dat <- do.call(rbind, tmp) dat <- data_filter(dat, value < 6) ggplot(dat, aes(x = size_group, y = value, fill = estimate, group = interaction(estimate, size_group))) + # geom_hline(yintercept = 0) + # geom_point(alpha=0.05, size=2, stroke = 0, shape=16) + # geom_smooth(method=\"loess\") + geom_boxplot(outlier.shape = NA) + theme_modern() + scale_fill_manual( values = c(\"Coefficient\" = \"#607D8B\", \"MAP\" = \"#795548\", \"Mean\" = \"#FF9800\", \"Median\" = \"#FFEB3B\"), name = \"Index\" ) + ylab(\"Point-estimate\") + xlab(\"Sample size\") + facet_wrap(~ outcome_type * true_effect, scales = \"free\")"},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"statistical-modelling","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 1: Relationship with Error (Noise) and Sample Size > 
Results","what":"Statistical Modelling","title":"In-Depth 1: Comparison of Point-Estimates","text":"fitted (frequentist) multiple linear regression statistically test predict presence absence effect estimates well interaction noise sample size. suggests , order delineate presence absence effect, compared frequentist’s beta coefficient: linear models, Mean better predictor, closely followed Median, MAP frequentist Coefficient. logistic models, MAP better predictor, followed Median, Mean , behind, frequentist Coefficient. Overall, median appears safe choice, maintaining high performance across different types models.","code":"dat <- df dat <- data_select(dat, select = c(\"sample_size\", \"true_effect\", \"outcome_type\", \"Coefficient\", \"Median\", \"Mean\", \"MAP\")) dat <- reshape_longer( dat, select = -c(\"sample_size\", \"error\", \"true_effect\", \"outcome_type\"), names_to = \"estimate\", values_to = \"value\" ) out <- glm(true_effect ~ outcome_type / estimate / value, data = dat, family = \"binomial\") out <- parameters(out, ci_method = \"wald\") out <- data_select(out, c(\"Parameter\", \"Coefficient\", \"p\")) rows <- grep(\"^outcome_type(.*):value$\", x = out$Parameter) out <- data_filter(out, rows) out <- out[order(out$Coefficient, decreasing = TRUE), ] knitr::kable(out, digits = 2)"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"methods-1","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 2: Relationship with Sampling Characteristics","what":"Methods","title":"In-Depth 1: Comparison of Point-Estimates","text":"carrying another simulation aimed modulating following characteristics: Model type: linear logistic. “True” effect (original regression coefficient data drawn): Can 1 0 (effect). draws: 10 5000 step 5 (1000 iterations). warmup: Ratio warmup iterations. 1/10 9/10 step 0.1 (9 iterations). generated 3 datasets combination characteristics, resulting total 2 * 2 * 8 * 40 * 9 * 3 = 34560 Bayesian frequentist models. 
The code used for the generation is available here (please note that it usually takes several days/weeks to complete).","code":"df <- read.csv(\"https://raw.github.com/easystats/circus/main/data/bayesSim_study2.csv\")"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"sensitivity-to-number-of-iterations","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 2: Relationship with Sampling Characteristics > Results","what":"Sensitivity to number of iterations","title":"In-Depth 1: Comparison of Point-Estimates","text":"","code":"dat <- df dat <- data_select(dat, select = c(\"iterations\", \"true_effect\", \"outcome_type\", \"beta\", \"Median\", \"Mean\", \"MAP\")) dat <- reshape_longer( dat, select = -c(\"iterations\", \"true_effect\", \"outcome_type\"), names_to = \"estimate\", values_to = \"value\" ) dat$temp <- as.factor(cut(dat$iterations, 5, labels = FALSE)) tmp <- lapply(split(dat, dat$temp), function(x) { x$iterations_group <- rep(round(mean(x$iterations), 1), times = nrow(x)) return(x) }) dat <- do.call(rbind, tmp) dat <- data_filter(dat, value < 6) ggplot(dat, aes(x = iterations_group, y = value, fill = estimate, group = interaction(estimate, iterations_group))) + geom_boxplot(outlier.shape = NA) + theme_classic() + scale_fill_manual( values = c(\"beta\" = \"#607D8B\", \"MAP\" = \"#795548\", \"Mean\" = \"#FF9800\", \"Median\" = \"#FFEB3B\"), name = \"Index\" ) + ylab(\"Point-estimate of the true value 0\\n\") + xlab(\"\\nNumber of Iterations\") + facet_wrap(~ outcome_type * true_effect, scales = \"free\")"},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"sensitivity-to-warmup-ratio","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 2: Relationship with Sampling Characteristics > Results","what":"Sensitivity to warmup ratio","title":"In-Depth 1: Comparison of Point-Estimates","text":"","code":"dat <- df dat$warmup <- dat$warmup / dat$iterations dat <- data_select(dat, select = c(\"warmup\", \"true_effect\", \"outcome_type\", \"beta\", \"Median\", \"Mean\", \"MAP\")) dat <- reshape_longer( dat, select = -c(\"warmup\", \"true_effect\", \"outcome_type\"), names_to = \"estimate\", values_to = \"value\" ) dat$temp <- as.factor(cut(dat$warmup, 3, labels = FALSE)) tmp <- lapply(split(dat, dat$temp), function(x) { x$warmup_group <- rep(round(mean(x$warmup), 1), times = nrow(x)) return(x) }) dat <- do.call(rbind, tmp) dat <- data_filter(dat, value < 6) ggplot(dat, aes(x = warmup_group, y = value, fill = estimate, group = interaction(estimate, warmup_group))) + geom_boxplot(outlier.shape = NA) + theme_classic() + scale_fill_manual( values = c(\"beta\" = \"#607D8B\", \"MAP\" = \"#795548\", \"Mean\" = \"#FF9800\", \"Median\" = \"#FFEB3B\"), name = \"Index\" ) + ylab(\"Point-estimate of the true value 0\\n\") + xlab(\"\\nWarmup Ratio\") + facet_wrap(~ outcome_type * true_effect, scales = \"free\")"},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"discussion","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework","what":"Discussion","title":"In-Depth 1: Comparison of Point-Estimates","text":"Conclusions can be found in the guidelines section of this 
article.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"suggestions","dir":"Articles > Web_only","previous_headings":"","what":"Suggestions","title":"In-Depth 1: Comparison of Point-Estimates","text":"advice, opinion , encourage let us know opening discussion thread making pull request.","code":""},{"path":"https://easystats.github.io/bayestestR/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Dominique Makowski. Author, maintainer. Daniel Lüdecke. Author. Mattan S. Ben-Shachar. Author. Indrajeet Patil. Author. Micah K. Wilson. Author. Brenton M. Wiernik. Author. Paul-Christian Bürkner. Reviewer. Tristan Mahr. Reviewer. Henrik Singmann. Contributor. Quentin F. Gronau. Contributor. Sam Crawley. Contributor.","code":""},{"path":"https://easystats.github.io/bayestestR/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Makowski, D., Ben-Shachar, M., & Lüdecke, D. (2019). bayestestR: Describing Effects Uncertainty, Existence Significance within Bayesian Framework. Journal Open Source Software, 4(40), 1541. doi:10.21105/joss.01541","code":"@Article{, title = {bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework.}, author = {Dominique Makowski and Mattan S. Ben-Shachar and Daniel Lüdecke}, journal = {Journal of Open Source Software}, doi = {10.21105/joss.01541}, year = {2019}, number = {40}, volume = {4}, pages = {1541}, url = {https://joss.theoj.org/papers/10.21105/joss.01541}, }"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"bayestestr-","dir":"","previous_headings":"","what":"Understand and Describe Bayesian Models and Posterior Distributions","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"Become Bayesian master ⚠️ changed default CI width! Please make informed decision set explicitly (ci = 0.89, ci = 0.95 anything else decide) ⚠️ Existing R packages allow users easily fit large variety models extract visualize posterior draws. However, packages return limited set indices (e.g., point-estimates CIs). bayestestR provides comprehensive consistent set functions analyze describe posterior distributions generated variety models objects, including popular modeling packages rstanarm, brms BayesFactor. can reference package documentation follows: Makowski, D., Ben-Shachar, M. S., & Lüdecke, D. (2019). bayestestR: Describing Effects Uncertainty, Existence Significance within Bayesian Framework. Journal Open Source Software, 4(40), 1541. 10.21105/joss.01541 Makowski, D., Ben-Shachar, M. S., Chen, S. H. ., & Lüdecke, D. (2019). Indices Effect Existence Significance Bayesian Framework. Frontiers Psychology 2019;10:2767. 10.3389/fpsyg.2019.02767","code":""},{"path":"https://easystats.github.io/bayestestR/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"bayestestR package available CRAN, latest development version available R-universe (rOpenSci). downloaded package, can load using: Tip Instead library(bayestestR), use library(easystats). make features easystats-ecosystem available. 
To stay updated, use easystats::install_latest().","code":"library(\"bayestestR\")"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"documentation","dir":"","previous_headings":"","what":"Documentation","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"Access the package documentation and check out these vignettes:","code":""},{"path":"https://easystats.github.io/bayestestR/index.html","id":"tutorials","dir":"","previous_headings":"Documentation","what":"Tutorials","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"Get Started with Bayesian Analysis; Example 1: Initiation to Bayesian models; Example 2: Confirmation of Bayesian skills; Example 3: Become a Bayesian master","code":""},{"path":"https://easystats.github.io/bayestestR/index.html","id":"articles","dir":"","previous_headings":"Documentation","what":"Articles","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"Credible Intervals (CI); Probability of Direction (pd); Region of Practical Equivalence (ROPE); Bayes Factors (BF); Comparison of Point-Estimates; Comparison of Indices of Effect Existence; Reporting Guidelines","code":""},{"path":"https://easystats.github.io/bayestestR/index.html","id":"features","dir":"","previous_headings":"","what":"Features","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"In the Bayesian framework, the parameters are estimated in a probabilistic fashion as distributions. These distributions can be summarised and described by reporting four types of indices: mean(), median() or map_estimate() for an estimation of the mode. point_estimate() can be used to get them all at once and can be run directly on models. hdi() for Highest Density Intervals (HDI), spi() for Shortest Probability Intervals (SPI) or eti() for Equal-Tailed Intervals (ETI). ci() can be used as a general method for Confidence and Credible Intervals (CI). p_direction() for a Bayesian equivalent of the frequentist p-value (see Makowski et al., 2019). p_pointnull() represents the odds of the null hypothesis (h0 = 0) compared to the most likely hypothesis (the MAP). bf_pointnull() for a classic Bayes Factor (BF) assessing the likelihood of effect presence against its absence (h0 = 0). p_rope() for the probability of the effect falling inside a Region of Practical Equivalence (ROPE). bf_rope() computes the Bayes factor against the null as defined by a region (the ROPE). p_significance() combines a region of equivalence with the probability of direction. describe_posterior() is the master function with which you can compute all of the indices cited above at once. describe_posterior() works for many objects, including more complex brmsfit-models. For a better readability, the output is separated by model components: bayestestR also includes many other features useful for your Bayesian analyses. Below are a few 
examples:","code":"describe_posterior( rnorm(10000), centrality = \"median\", test = c(\"p_direction\", \"p_significance\"), verbose = FALSE ) ## Summary of Posterior Distribution ## ## Parameter | Median | 95% CI | pd | ps ## -------------------------------------------------- ## Posterior | -0.01 | [-1.98, 1.93] | 50.52% | 0.46 zinb <- read.csv(\"http://stats.idre.ucla.edu/stat/data/fish.csv\") set.seed(123) model <- brm( bf( count ~ child + camper + (1 | persons), zi ~ child + camper + (1 | persons) ), data = zinb, family = zero_inflated_poisson(), chains = 1, iter = 500 ) describe_posterior( model, effects = \"all\", component = \"all\", test = c(\"p_direction\", \"p_significance\"), centrality = \"all\" ) ## Summary of Posterior Distribution ## ## Parameter | Median | Mean | MAP | 95% CI | pd | ps | Rhat | ESS ## -------------------------------------------------------------------------------------- ## (Intercept) | 0.96 | 0.96 | 0.96 | [-0.81, 2.51] | 90.00% | 0.88 | 1.011 | 110.00 ## child | -1.16 | -1.16 | -1.16 | [-1.36, -0.94] | 100% | 1.00 | 0.996 | 278.00 ## camper | 0.73 | 0.72 | 0.73 | [ 0.54, 0.91] | 100% | 1.00 | 0.996 | 271.00 ## ## # Fixed effects (zero-inflated) ## ## Parameter | Median | Mean | MAP | 95% CI | pd | ps | Rhat | ESS ## -------------------------------------------------------------------------------------- ## (Intercept) | -0.48 | -0.51 | -0.22 | [-2.03, 0.89] | 78.00% | 0.73 | 0.997 | 138.00 ## child | 1.85 | 1.86 | 1.81 | [ 1.19, 2.54] | 100% | 1.00 | 0.996 | 303.00 ## camper | -0.88 | -0.86 | -0.99 | [-1.61, -0.07] | 98.40% | 0.96 | 0.996 | 292.00 ## ## # Random effects (conditional) Intercept: persons ## ## Parameter | Median | Mean | MAP | 95% CI | pd | ps | Rhat | ESS ## --------------------------------------------------------------------------------------- ## persons.1 | -0.99 | -1.01 | -0.84 | [-2.68, 0.80] | 92.00% | 0.90 | 1.007 | 106.00 ## persons.2 | -4.65e-03 | -0.04 | 0.03 | [-1.63, 1.66] | 50.00% | 0.45 | 1.013 | 109.00 ## persons.3 | 0.69 | 0.66 | 0.69 | [-0.95, 2.34] | 79.60% | 0.78 | 1.010 | 114.00 ## persons.4 | 1.57 | 1.56 | 1.56 | [-0.05, 3.29] | 96.80% | 0.96 | 1.009 | 114.00 ## ## # Random effects (zero-inflated) Intercept: persons ## ## Parameter | Median | Mean | MAP | 95% CI | pd | ps | Rhat | ESS ## ------------------------------------------------------------------------------------ ## persons.1 | 1.10 | 1.11 | 1.08 | [-0.23, 2.72] | 94.80% | 0.93 | 0.997 | 166.00 ## persons.2 | 0.18 | 0.18 | 0.22 | [-0.94, 1.58] | 63.20% | 0.54 | 0.996 | 154.00 ## persons.3 | -0.30 | -0.31 | -0.54 | [-1.79, 1.02] | 64.00% | 0.59 | 0.997 | 154.00 ## persons.4 | -1.45 | -1.46 | -1.44 | [-2.90, -0.10] | 98.00% | 0.97 | 1.000 | 189.00 ## ## # Random effects (conditional) SD/Cor: persons ## ## Parameter | Median | Mean | MAP | 95% CI | pd | ps | Rhat | ESS ## ---------------------------------------------------------------------------------- ## (Intercept) | 1.42 | 1.58 | 1.07 | [ 0.71, 3.58] | 100% | 1.00 | 1.010 | 126.00 ## ## # Random effects (zero-inflated) SD/Cor: persons ## ## Parameter | Median | Mean | MAP | 95% CI | pd | ps | Rhat | ESS ## ---------------------------------------------------------------------------------- ## (Intercept) | 1.30 | 1.49 | 0.99 | [ 0.63, 3.41] | 100% | 1.00 | 0.996 | 129.00"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"point-estimates","dir":"","previous_headings":"","what":"Point-estimates","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"easystats packages, 
plot() methods available see package many functions: median mean available base R functions, map_estimate() bayestestR can used directly find Highest Maximum Posteriori (MAP) estimate posterior, .e., value associated highest probability density (“peak” posterior distribution). words, estimation mode continuous parameters.","code":"library(bayestestR) posterior <- distribution_gamma(10000, 1.5) # Generate a skewed distribution centrality <- point_estimate(posterior) # Get indices of centrality centrality ## Point Estimate ## ## Median | Mean | MAP ## -------------------- ## 1.18 | 1.50 | 0.51"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"uncertainty-ci","dir":"","previous_headings":"","what":"Uncertainty (CI)","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"hdi() computes Highest Density Interval (HDI) posterior distribution, .e., interval contains points within interval higher probability density points outside interval. HDI can used context Bayesian posterior characterization Credible Interval (CI). Unlike equal-tailed intervals (see eti()) typically exclude 2.5% tail distribution, HDI equal-tailed therefore always includes mode(s) posterior distributions.","code":"posterior <- distribution_chisquared(10000, 4) hdi(posterior, ci = 0.89) ## 89% HDI: [0.18, 7.63] eti(posterior, ci = 0.89) ## 89% ETI: [0.75, 9.25]"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/index.html","id":"probability-of-direction-pd","dir":"","previous_headings":"Existence and Significance Testing","what":"Probability of Direction (pd)","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"p_direction() computes Probability Direction (pd, also known Maximum Probability Effect - MPE). varies 50% 100% (.e., 0.5 1) can interpreted probability (expressed percentage) parameter (described posterior distribution) strictly positive negative (whichever probable). mathematically defined proportion posterior distribution median’s sign. Although differently expressed, index fairly similar (.e., strongly correlated) frequentist p-value. Relationship p-value: cases, seems pd corresponds frequentist one-sided p-value formula p-value = (1-pd/100) two-sided p-value (commonly reported) formula p-value = 2*(1-pd/100). Thus, pd 95%, 97.5% 99.5% 99.95% corresponds approximately two-sided p-value respectively .1, .05, .01 .001. See reporting guidelines.","code":"posterior <- distribution_normal(10000, 0.4, 0.2) p_direction(posterior) ## Probability of Direction ## ## Parameter | pd ## ------------------ ## Posterior | 97.72%"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"rope","dir":"","previous_headings":"Existence and Significance Testing","what":"ROPE","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"rope() computes proportion (percentage) HDI (default 89% HDI) posterior distribution lies within region practical equivalence. Statistically, probability posterior distribution different 0 make much sense (probability different single point infinite). Therefore, idea underlining ROPE let user define area around null value enclosing values equivalent null value practical purposes Kruschke (2018). Kruschke suggests null value set, default, -0.1 0.1 range standardized parameter (negligible effect size according Cohen, 1988). generalized: instance, linear models, ROPE set 0 +/- .1 * sd(y). ROPE range can automatically computed models using rope_range function. 
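A hedged sketch of that automatic computation, assuming rope_range() accepts a frequentist lm (values are approximate):

model <- lm(mpg ~ wt, data = mtcars)
rope_range(model)
#> approximately c(-0.60, 0.60), i.e. +/- 0.1 * sd(mtcars$mpg)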
Kruschke suggests using proportion 95% (90%, considered stable) HDI falls within ROPE index “null-hypothesis” testing (understood Bayesian framework, see equivalence_test).","code":"posterior <- distribution_normal(10000, 0.4, 0.2) rope(posterior, range = c(-0.1, 0.1)) ## # Proportion of samples inside the ROPE [-0.10, 0.10]: ## ## inside ROPE ## ----------- ## 4.40 %"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"bayes-factor","dir":"","previous_headings":"Existence and Significance Testing","what":"Bayes Factor","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"bayesfactor_parameters() computes Bayes factors null (either point interval), bases prior posterior samples single parameter. Bayes factor indicates degree mass posterior distribution shifted away closer null value(s) (relative prior distribution), thus indicating null value become less likely given observed data. null interval, Bayes factor computed comparing prior posterior odds parameter falling within outside null; null point, Savage-Dickey density ratio computed, also approximation Bayes factor comparing marginal likelihoods model model tested parameter restricted point null (Wagenmakers, Lodewyckx, Kuriyal, & Grasman, 2010). lollipops represent density point-null prior distribution (blue lollipop dotted distribution) posterior distribution (red lollipop yellow distribution). ratio two - Savage-Dickey ratio - indicates degree mass parameter distribution shifted away closer null. info, see Bayes factors vignette.","code":"prior <- distribution_normal(10000, mean = 0, sd = 1) posterior <- distribution_normal(10000, mean = 1, sd = 0.7) bayesfactor_parameters(posterior, prior, direction = \"two-sided\", null = 0, verbose = FALSE) ## Bayes Factor (Savage-Dickey density ratio) ## ## BF ## ---- ## 1.94 ## ## * Evidence Against The Null: 0"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/index.html","id":"find-ropes-appropriate-range","dir":"","previous_headings":"Utilities","what":"Find ROPE’s appropriate range","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"rope_range(): function attempts automatically finding suitable “default” values Region Practical Equivalence (ROPE). Kruschke (2018) suggests null value set, default, range -0.1 0.1 standardized parameter (negligible effect size according Cohen, 1988), can generalised linear models -0.1 * sd(y), 0.1 * sd(y). logistic models, parameters expressed log odds ratio can converted standardized difference formula sqrt(3)/pi, resulting range -0.05 0.05.","code":"rope_range(model)"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"density-estimation","dir":"","previous_headings":"Utilities","what":"Density Estimation","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"estimate_density(): function wrapper different methods density estimation. default, uses base R density default uses different smoothing bandwidth (\"SJ\") legacy default implemented base R density function (\"nrd0\"). 
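To make the bandwidth contrast concrete before the alternative suggested next, a hedged sketch (assuming the documented bw argument of estimate_density()):

x <- distribution_normal(1000)
dens_sj <- estimate_density(x, method = "kernel", bw = "SJ") # current default
dens_legacy <- estimate_density(x, method = "kernel", bw = "nrd0") # legacy base R default
head(dens_sj) # a data frame of x (grid) and y (density) values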
However, Deng & Wickham suggest method = \"KernSmooth\" fastest accurate.","code":""},{"path":"https://easystats.github.io/bayestestR/index.html","id":"perfect-distributions","dir":"","previous_headings":"Utilities","what":"Perfect Distributions","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"distribution(): Generate sample size n near-perfect distributions.","code":"distribution(n = 10) ## [1] -1.55 -1.00 -0.66 -0.38 -0.12 0.12 0.38 0.66 1.00 1.55"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"probability-of-a-value","dir":"","previous_headings":"Utilities","what":"Probability of a Value","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"density_at(): Compute density given point distribution.","code":"density_at(rnorm(1000, 1, 1), 1) ## [1] 0.41"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"code-of-conduct","dir":"","previous_headings":"","what":"Code of Conduct","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"Please note bayestestR project released Contributor Code Conduct. contributing project, agree abide terms.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/area_under_curve.html","id":null,"dir":"Reference","previous_headings":"","what":"Area under the Curve (AUC) — area_under_curve","title":"Area under the Curve (AUC) — area_under_curve","text":"Based DescTools AUC function. can calculate area curve naive algorithm elaborated spline approach. curve must given vectors xy-coordinates. function can handle unsorted x values (sorting x) ties x values (ignoring duplicates).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/area_under_curve.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Area under the Curve (AUC) — area_under_curve","text":"","code":"area_under_curve(x, y, method = c(\"trapezoid\", \"step\", \"spline\"), ...) auc(x, y, method = c(\"trapezoid\", \"step\", \"spline\"), ...)"},{"path":"https://easystats.github.io/bayestestR/reference/area_under_curve.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Area under the Curve (AUC) — area_under_curve","text":"x Vector x values. y Vector y values. method Method compute Area Curve (AUC). Can \"trapezoid\" (default), \"step\" \"spline\". \"trapezoid\", curve formed connecting points direct line (composite trapezoid rule). \"step\" chosen stepwise connection two points used. calculating area spline interpolation splinefun function used combination integrate. ... 
Arguments passed methods.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/area_under_curve.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Area under the Curve (AUC) — area_under_curve","text":"","code":"library(bayestestR) posterior <- distribution_normal(1000) dens <- estimate_density(posterior) dens <- dens[dens$x > 0, ] x <- dens$x y <- dens$y area_under_curve(x, y, method = \"trapezoid\") #> [1] 0.4980638 area_under_curve(x, y, method = \"step\") #> [1] 0.4992903 area_under_curve(x, y, method = \"spline\") #> [1] 0.4980639"},{"path":"https://easystats.github.io/bayestestR/reference/as.data.frame.density.html","id":null,"dir":"Reference","previous_headings":"","what":"Coerce to a Data Frame — as.data.frame.density","title":"Coerce to a Data Frame — as.data.frame.density","text":"Coerce Data Frame","code":""},{"path":"https://easystats.github.io/bayestestR/reference/as.data.frame.density.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Coerce to a Data Frame — as.data.frame.density","text":"","code":"# S3 method for class 'density' as.data.frame(x, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/as.data.frame.density.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Coerce to a Data Frame — as.data.frame.density","text":"x R object. ... additional arguments passed methods.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/as.numeric.p_direction.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert to Numeric — as.numeric.map_estimate","title":"Convert to Numeric — as.numeric.map_estimate","text":"Convert Numeric","code":""},{"path":"https://easystats.github.io/bayestestR/reference/as.numeric.p_direction.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert to Numeric — as.numeric.map_estimate","text":"","code":"# S3 method for class 'map_estimate' as.numeric(x, ...) # S3 method for class 'p_direction' as.numeric(x, ...) # S3 method for class 'p_map' as.numeric(x, ...) # S3 method for class 'p_significance' as.numeric(x, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/as.numeric.p_direction.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert to Numeric — as.numeric.map_estimate","text":"x object coerced tested. ... arguments passed methods.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor.html","id":null,"dir":"Reference","previous_headings":"","what":"Bayes Factors (BF) — bayesfactor","title":"Bayes Factors (BF) — bayesfactor","text":"function computes Bayes factors (BFs) appropriate input. vectors single models, compute BFs single parameters, hypothesis specified, BFs restricted models. multiple models, return BF corresponding comparison models model comparison passed, compute inclusion BF. 
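Before the formal usage below, the single-parameter case can be sketched by hand: for a point null, the Savage-Dickey ratio is the prior density at the null divided by the posterior density at the null, which density_at() can approximate (an illustration, not the package's internal implementation):

library(bayestestR)
prior <- distribution_normal(1000, mean = 0, sd = 1)
posterior <- distribution_normal(1000, mean = 0.5, sd = 0.3)
# BF (against the point null at 0) ~ prior density at 0 / posterior density at 0
density_at(prior, 0) / density_at(posterior, 0)
#> approximately 1.2, in line with bayesfactor(posterior, prior = prior)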
complete overview functions, read Bayes factor vignette.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bayes Factors (BF) — bayesfactor","text":"","code":"bayesfactor( ..., prior = NULL, direction = \"two-sided\", null = 0, hypothesis = NULL, effects = c(\"fixed\", \"random\", \"all\"), verbose = TRUE, denominator = 1, match_models = FALSE, prior_odds = NULL )"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bayes Factors (BF) — bayesfactor","text":"... numeric vector, model object(s), output bayesfactor_models. prior object representing prior distribution (see 'Details'). direction Test type (see 'Details'). One 0, \"two-sided\" (default, two tailed), -1, \"left\" (left tailed) 1, \"right\" (right tailed). null Value null, either scalar (point-null) range (interval-null). hypothesis character vector specifying restrictions logical conditions (see examples ). effects results fixed effects, random effects returned? applies mixed models. May abbreviated. verbose Toggle warnings. denominator Either integer indicating models use denominator, model used denominator. Ignored BFBayesFactor. match_models See details. prior_odds Optional vector prior odds models. See BayesFactor::priorOdds<-.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Bayes Factors (BF) — bayesfactor","text":"type Bayes factor, depending input. See bayesfactor_parameters(), bayesfactor_models() bayesfactor_inclusion().","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Bayes Factors (BF) — bayesfactor","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bayes Factors (BF) — bayesfactor","text":"","code":"# \\dontrun{ library(bayestestR) prior <- distribution_normal(1000, mean = 0, sd = 1) posterior <- distribution_normal(1000, mean = 0.5, sd = 0.3) bayesfactor(posterior, prior = prior, verbose = FALSE) #> Bayes Factor (Savage-Dickey density ratio) #> #> BF #> ---- #> 1.21 #> #> * Evidence Against The Null: 0 #> # rstanarm models # --------------- model <- suppressWarnings(rstanarm::stan_lmer(extra ~ group + (1 | ID), data = sleep)) #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 4.3e-05 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.43 seconds. #> Chain 1: Adjust your expectations accordingly! 
#> [Stan sampling progress for chains 1-3 omitted] 
#> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.174 seconds (Warm-up) #> Chain 4: 0.202 seconds (Sampling) #> Chain 4: 0.376 seconds (Total) #> Chain 4: bayesfactor(model, verbose = FALSE) #> Bayes Factor (Savage-Dickey density ratio) #> #> Parameter | BF #> ------------------- #> (Intercept) | 0.206 #> group2 | 2.66 #> #> * Evidence Against The Null: 0 #> # Frequentist models # --------------- m0 <- lm(extra ~ 1, data = sleep) m1 <- lm(extra ~ group, data = sleep) m2 <- lm(extra ~ group + ID, data = sleep) comparison <- bayesfactor(m0, m1, m2) comparison #> Bayes Factors for Model Comparison #> #> Model BF #> [..2] group 1.30 #> [..3] group + ID 1.12e+04 #> #> * Against Denominator: [..1] (Intercept only) #> * Bayes Factor Type: BIC approximation bayesfactor(comparison) #> Inclusion Bayes Factors (Model Averaged) #> #> P(prior) P(posterior) Inclusion BF #> group 0.67 1.00 5.61e+03 #> ID 0.33 1.00 9.77e+03 #> #> * Compared among: all models #> * Priors odds: uniform-equal # }"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":null,"dir":"Reference","previous_headings":"","what":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"bf_* function alias main function. info, see Bayes factors vignette.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"","code":"bayesfactor_inclusion(models, match_models = FALSE, prior_odds = NULL, ...) bf_inclusion(models, match_models = FALSE, prior_odds = NULL, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"models object class bayesfactor_models() BFBayesFactor. match_models See details. prior_odds Optional vector prior odds models. See BayesFactor::priorOdds<-. ... 
Arguments passed methods.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"data frame containing prior posterior probabilities, log(BF) effect (Use .numeric() extract non-log Bayes factors; see examples).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"Inclusion Bayes factors answer question: observed data probable models particular effect, models without particular effect? words, average - models effect \(X\) likely produced observed data models without effect \(X\)?","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"match-models","dir":"Reference","previous_headings":"","what":"Match Models","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"match_models=FALSE (default), Inclusion BFs computed comparing models term models without term. TRUE, comparison restricted models (1) include interactions term interest; (2) interaction terms, averaging done across models contain main effect terms interaction term comprised.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"Random effects lmer style converted interaction terms: .e., (X|G) become terms 1:G X:G.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"interpreting-bayes-factors","dir":"Reference","previous_headings":"","what":"Interpreting Bayes Factors","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"Bayes factor greater 1 can interpreted evidence null, one convention Bayes factor greater 3 can considered \"substantial\" evidence null (vice versa, Bayes factor smaller 1/3 indicates substantial evidence favor null-model) (Wetzels et al. 2011).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"Hinne, M., Gronau, Q. F., van den Bergh, D., Wagenmakers, E. (2019, March 25). conceptual introduction Bayesian Model Averaging. doi:10.31234/osf.io/wgb64 Clyde, M. ., Ghosh, J., & Littman, M. L. (2011). Bayesian adaptive sampling variable selection model averaging. Journal Computational Graphical Statistics, 20(1), 80-101. Mathot, S. (2017). Bayes like Baws: Interpreting Bayesian Repeated Measures JASP. Blog post.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"Mattan S. 
Ben-Shachar","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"","code":"library(bayestestR) # Using bayesfactor_models: # ------------------------------ mo0 <- lm(Sepal.Length ~ 1, data = iris) mo1 <- lm(Sepal.Length ~ Species, data = iris) mo2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris) mo3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris) BFmodels <- bayesfactor_models(mo1, mo2, mo3, denominator = mo0) (bf_inc <- bayesfactor_inclusion(BFmodels)) #> Inclusion Bayes Factors (Model Averaged) #> #> P(prior) P(posterior) Inclusion BF #> Species 0.75 1.00 2.02e+55 #> Petal.Length 0.50 1.00 3.58e+26 #> Petal.Length:Species 0.25 0.04 0.113 #> #> * Compared among: all models #> * Priors odds: uniform-equal as.numeric(bf_inc) #> [1] 2.021143e+55 3.575448e+26 1.131202e-01 # \\donttest{ # BayesFactor # ------------------------------- BF <- BayesFactor::generalTestBF(len ~ supp * dose, ToothGrowth, progress = FALSE) bayesfactor_inclusion(BF) #> Inclusion Bayes Factors (Model Averaged) #> #> P(prior) P(posterior) Inclusion BF #> supp 0.60 0.98 35.18 #> dose 0.60 1.00 5.77e+12 #> dose:supp 0.20 0.56 5.08 #> #> * Compared among: all models #> * Priors odds: uniform-equal # compare only matched models: bayesfactor_inclusion(BF, match_models = TRUE) #> Inclusion Bayes Factors (Model Averaged) #> #> P(prior) P(posterior) Inclusion BF #> supp 0.40 0.42 22.68 #> dose 0.40 0.44 3.81e+12 #> dose:supp 0.20 0.56 1.33 #> #> * Compared among: matched models only #> * Priors odds: uniform-equal # }"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":null,"dir":"Reference","previous_headings":"","what":"Bayes Factors (BF) for model comparison — bayesfactor_models","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"function computes extracts Bayes factors fitted models. bf_* function alias main function.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"","code":"bayesfactor_models(..., denominator = 1, verbose = TRUE) bf_models(..., denominator = 1, verbose = TRUE) # Default S3 method bayesfactor_models(..., denominator = 1, verbose = TRUE) # S3 method for class 'bayesfactor_models' update(object, subset = NULL, reference = NULL, ...) # S3 method for class 'bayesfactor_models' as.matrix(x, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"... Fitted models (see details), fit data, single BFBayesFactor object (see 'Details'). Ignored .matrix(), update(). following named arguments present, passed insight::get_loglikelihood() (see details): estimator (defaults \"ML\") check_response (defaults FALSE) denominator Either integer indicating models use denominator, model used denominator. Ignored BFBayesFactor. verbose Toggle warnings. object, x bayesfactor_models() object. subset Vector model indices keep remove. 
reference Index model reference , \"top\" reference best model, \"bottom\" reference worst model.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"data frame containing models' formulas (reconstructed fixed random effects) log(BF)s (Use .numeric() extract non-log Bayes factors; see examples), prints nicely.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"passed models supported insight DV models tested equality (else assumed true), models' terms extracted (allowing follow-analysis bayesfactor_inclusion). brmsfit stanreg models, Bayes factors computed using bridgesampling package. brmsfit models must fitted save_pars = save_pars(= TRUE). stanreg models must fitted defined diagnostic_file. BFBayesFactor, bayesfactor_models() mostly wraparound BayesFactor::extractBF(). model types, Bayes factors computed using BIC approximation. Note BICs extracted using insight::get_loglikelihood, see documentation options dealing transformed responses REML estimation. order correctly precisely estimate Bayes factors, rule thumb 4 P's: Proper Priors Plentiful Posteriors. many? number posterior samples needed testing substantially larger estimation (default 4000 samples may enough many cases). conservative rule thumb obtain 10 times samples required estimation (Gronau, Singmann, & Wagenmakers, 2017). less 40,000 samples detected, bayesfactor_models() gives warning. See also Bayes factors vignette.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"interpreting-bayes-factors","dir":"Reference","previous_headings":"","what":"Interpreting Bayes Factors","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"Bayes factor greater 1 can interpreted evidence null, one convention Bayes factor greater 3 can considered \"substantial\" evidence null (vice versa, Bayes factor smaller 1/3 indicates substantial evidence favor null-model) (Wetzels et al. 2011).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"Gronau, Q. F., Singmann, H., & Wagenmakers, E. J. (2017). Bridgesampling: R package estimating normalizing constants. arXiv preprint arXiv:1710.08162. Kass, R. E., Raftery, . E. (1995). Bayes Factors. Journal American Statistical Association, 90(430), 773-795. Robert, C. P. (2016). expected demise Bayes factor. Journal Mathematical Psychology, 72, 33–37. Wagenmakers, E. J. (2007). practical solution pervasive problems p values. Psychonomic bulletin & review, 14(5), 779-804. Wetzels, R., Matzke, D., Lee, M. D., Rouder, J. N., Iverson, G. J., Wagenmakers, E.-J. (2011). Statistical Evidence Experimental Psychology: Empirical Comparison Using 855 t Tests. 
Perspectives Psychological Science, 6(3), 291–298. doi:10.1177/1745691611406923","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"Mattan S. Ben-Shachar","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"","code":"# With lm objects: # ---------------- lm1 <- lm(mpg ~ 1, data = mtcars) lm2 <- lm(mpg ~ hp, data = mtcars) lm3 <- lm(mpg ~ hp + drat, data = mtcars) lm4 <- lm(mpg ~ hp * drat, data = mtcars) (BFM <- bayesfactor_models(lm1, lm2, lm3, lm4, denominator = 1)) #> Bayes Factors for Model Comparison #> #> Model BF #> [lm2] hp 4.54e+05 #> [lm3] hp + drat 7.70e+07 #> [lm4] hp * drat 1.59e+07 #> #> * Against Denominator: [lm1] (Intercept only) #> * Bayes Factor Type: BIC approximation # bayesfactor_models(lm2, lm3, lm4, denominator = lm1) # same result # bayesfactor_models(lm1, lm2, lm3, lm4, denominator = lm1) # same result update(BFM, reference = \"bottom\") #> Bayes Factors for Model Comparison #> #> Model BF #> [lm2] hp 4.54e+05 #> [lm3] hp + drat 7.70e+07 #> [lm4] hp * drat 1.59e+07 #> #> * Against Denominator: [lm1] (Intercept only) #> * Bayes Factor Type: BIC approximation as.matrix(BFM) #> # Bayes Factors for Model Comparison #> #> Numerator #> Denominator #> #> | [1] | [2] | [3] | [4] #> ---------------------------------------------------------------- #> [1] (Intercept only) | 1 | 4.54e+05 | 7.70e+07 | 1.59e+07 #> [2] hp | 2.20e-06 | 1 | 169.72 | 35.09 #> [3] hp + drat | 1.30e-08 | 0.006 | 1 | 0.207 #> [4] hp * drat | 6.28e-08 | 0.028 | 4.84 | 1 as.numeric(BFM) #> [1] 1.0 453874.3 77029881.3 15925712.4 lm2b <- lm(sqrt(mpg) ~ hp, data = mtcars) # Set check_response = TRUE for transformed responses bayesfactor_models(lm2b, denominator = lm2, check_response = TRUE) #> Bayes Factors for Model Comparison #> #> Model BF #> [lm2b] hp 6.94 #> #> * Against Denominator: [lm2] hp #> * Bayes Factor Type: BIC approximation # \\donttest{ # With lmerMod objects: # --------------------- lmer1 <- lme4::lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris) lmer2 <- lme4::lmer(Sepal.Length ~ Petal.Length + (Petal.Length | Species), data = iris) #> boundary (singular) fit: see help('isSingular') lmer3 <- lme4::lmer( Sepal.Length ~ Petal.Length + (Petal.Length | Species) + (1 | Petal.Width), data = iris ) #> boundary (singular) fit: see help('isSingular') bayesfactor_models(lmer1, lmer2, lmer3, denominator = 1, estimator = \"REML\" ) #> Bayes Factors for Model Comparison #> #> Model BF #> [lmer2] Petal.Length + (Petal.Length | Species) 0.058 #> [lmer3] Petal.Length + (Petal.Length | Species) + (1 | Petal.Width) 0.005 #> #> * Against Denominator: [lmer1] Petal.Length + (1 | Species) #> * Bayes Factor Type: BIC approximation # rstanarm models # --------------------- # (note that a unique diagnostic_file MUST be specified in order to work) stan_m0 <- suppressWarnings(rstanarm::stan_glm(Sepal.Length ~ 1, data = iris, family = gaussian(), diagnostic_file = file.path(tempdir(), \"df0.csv\") )) #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 3.6e-05 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.36 seconds. 
#> [Stan sampling progress for stan_m0 omitted] 
stan_m1 <- suppressWarnings(rstanarm::stan_glm(Sepal.Length ~ Species, data = iris, family = gaussian(), diagnostic_file = file.path(tempdir(), \"df1.csv\") )) #> [Stan sampling progress for stan_m1 omitted] 
stan_m2 <- suppressWarnings(rstanarm::stan_glm(Sepal.Length ~ Species + Petal.Length, data = iris, family = gaussian(), diagnostic_file = file.path(tempdir(), \"df2.csv\") )) #> [Stan sampling progress for stan_m2 omitted] 
bayesfactor_models(stan_m1, stan_m2, denominator = stan_m0, verbose = FALSE) #> Bayes Factors for Model Comparison #> #> Model BF #> [1] Species 6.27e+27 #> [2] Species + Petal.Length 2.25e+53 #> #> * Against Denominator: [3] (Intercept only) #> * Bayes Factor Type: marginal likelihoods (bridgesampling) # brms models # -------------------- # (note the save_pars MUST be set to save_pars(all = TRUE) in order to work) brm1 <- brms::brm(Sepal.Length ~ 1, data = iris, save_pars = save_pars(all = TRUE)) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress for brm1 omitted] 
brm2 <- brms::brm(Sepal.Length ~ Species, data = iris, save_pars = save_pars(all = TRUE)) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress for brm2 omitted] 
#> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.016 seconds (Warm-up) #> Chain 3: 0.014 seconds (Sampling) #> Chain 3: 0.03 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 3e-06 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 4: Adjust your expectations accordingly! #> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.017 seconds (Warm-up) #> Chain 4: 0.015 seconds (Sampling) #> Chain 4: 0.032 seconds (Total) #> Chain 4: brm3 <- brms::brm( Sepal.Length ~ Species + Petal.Length, data = iris, save_pars = save_pars(all = TRUE) ) #> Compiling Stan program... #> Start sampling #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 7e-06 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.048 seconds (Warm-up) #> Chain 1: 0.057 seconds (Sampling) #> Chain 1: 0.105 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 4e-06 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds. #> Chain 2: Adjust your expectations accordingly! 
#> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.049 seconds (Warm-up) #> Chain 2: 0.052 seconds (Sampling) #> Chain 2: 0.101 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 4e-06 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.046 seconds (Warm-up) #> Chain 3: 0.052 seconds (Sampling) #> Chain 3: 0.098 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 4e-06 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds. #> Chain 4: Adjust your expectations accordingly! 
bayesfactor_models(brm1, brm2, brm3, denominator = 1, verbose = FALSE) #> Bayes Factors for Model Comparison #> #> Model BF #> [2] Species 5.86e+29 #> [3] Species + Petal.Length 7.50e+55 #> #> * Against Denominator: [1] (Intercept only) #> * Bayes Factor Type: marginal likelihoods (bridgesampling) # BayesFactor # --------------------------- data(puzzles) BF <- BayesFactor::anovaBF(RT ~ shape * color + ID, data = puzzles, whichRandom = \"ID\", progress = FALSE ) BF #> Bayes factor analysis #> -------------- #> [1] shape + ID : 2.841658 ±0.92% #> [2] color + ID : 2.830879 ±0.86% #> [3] shape + color + ID : 11.75567 ±1.98% #> [4] shape + color + shape:color + ID : 4.371906 ±1.99% #> #> Against denominator: #> RT ~ ID #> --- #> Bayes factor type: BFlinearModel, JZS #> bayesfactor_models(BF) # basically the same #> Bayes Factors for Model Comparison #> #> Model BF #> [2] shape + ID 2.84 #> [3] color + ID 2.83 #> [4] shape + color + ID 11.76 #> [5] shape + color + shape:color + ID 4.37 #> #> * Against Denominator: [1] ID #> * Bayes Factor Type: JZS (BayesFactor) # }"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":null,"dir":"Reference","previous_headings":"","what":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"This method computes Bayes factors against the null (either a point or an interval), based on prior and posterior samples of a single parameter. The Bayes factor indicates the degree by which the mass of the posterior distribution has shifted further away from or closer to the null value(s) (relative to the prior distribution), thus indicating if the null value has become less or more likely given the observed data. When the null is an interval, the Bayes factor is computed by comparing the prior and posterior odds of the parameter falling within or outside the null interval (Morey & Rouder, 2011; Liao et al., 2020); when the null is a point, the Savage-Dickey density ratio is computed, which is also an approximation of a Bayes factor comparing the marginal likelihoods of the full model against a model in which the tested parameter has been restricted to the point null (Wagenmakers et al., 2010; Heck, 2019). Note that the logspline package is used for estimating densities and probabilities, and must be installed for the function to work. bayesfactor_pointnull() and bayesfactor_rope() are wrappers around bayesfactor_parameters() with different defaults for the null to be tested against (a point and a range, respectively). Aliases of the main functions are prefixed with bf_*, like bf_parameters() or bf_pointnull(). For more info, in particular on specifying correct priors for factors with more than 2 levels, see the Bayes factors vignette.","code":""},
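To make the Savage-Dickey computation concrete, here is a minimal sketch of the density ratio computed by hand (assuming the logspline package, which bayesfactor_parameters() itself uses for density estimation; exact values vary between runs):
library(logspline)
prior <- bayestestR::distribution_normal(1000, mean = 0, sd = 1)
posterior <- bayestestR::distribution_normal(1000, mean = 0.5, sd = 0.3)
# Savage-Dickey: prior density at the null divided by posterior density at the null
dlogspline(0, logspline(prior)) / dlogspline(0, logspline(posterior))
# should approximate bayesfactor_parameters(posterior, prior, null = 0)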
{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"","code":"bayesfactor_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, ..., verbose = TRUE ) bayesfactor_pointnull( posterior, prior = NULL, direction = \"two-sided\", null = 0, ..., verbose = TRUE ) bayesfactor_rope( posterior, prior = NULL, direction = \"two-sided\", null = rope_range(posterior, verbose = FALSE), ..., verbose = TRUE ) bf_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, ..., verbose = TRUE ) bf_pointnull( posterior, prior = NULL, direction = \"two-sided\", null = 0, ..., verbose = TRUE ) bf_rope( posterior, prior = NULL, direction = \"two-sided\", null = rope_range(posterior, verbose = FALSE), ..., verbose = TRUE ) # S3 method for class 'numeric' bayesfactor_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, ..., verbose = TRUE ) # S3 method for class 'stanreg' bayesfactor_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"location\", \"smooth_terms\", \"sigma\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ..., verbose = TRUE ) # S3 method for class 'brmsfit' bayesfactor_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"location\", \"smooth_terms\", \"sigma\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ..., verbose = TRUE ) # S3 method for class 'blavaan' bayesfactor_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, ..., verbose = TRUE ) # S3 method for class 'data.frame' bayesfactor_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, rvar_col = NULL, ..., verbose = TRUE )"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"posterior A numerical vector, stanreg / brmsfit object, emmGrid or a data frame - representing the posterior distribution(s) (see 'Details'). prior An object representing a prior distribution (see 'Details'). direction Test type (see 'Details'). One of 0, \"two-sided\" (default, two tailed), -1, \"left\" (left tailed) or 1, \"right\" (right tailed). null Value of the null, either a scalar (for a point-null) or a range (for an interval-null). ... Arguments passed to and from other methods. (Can be used to pass arguments to the internal logspline::logspline().) verbose Toggle warnings. effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in summary() are returned. Use parameters to select specific parameters in the output. rvar_col A single character - the name of an rvar column in the data frame to be processed.
See the example in p_direction().","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"A data frame containing the (log) Bayes factor representing evidence against the null (use as.numeric() to extract the non-log Bayes factors; see examples).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"This method is used to compute Bayes factors based on prior and posterior distributions.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"one-sided-amp-dividing-tests-setting-an-order-restriction-","dir":"Reference","previous_headings":"","what":"One-sided & Dividing Tests (setting an order restriction)","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"One-sided tests (controlled by direction) are conducted by restricting the prior and posterior of the non-null values (the \"alternative\") to one side of the null (Morey & Wagenmakers, 2014). For example, if we have a prior hypothesis that the parameter is positive, the alternative is restricted to the region to the right of the null (point or interval). For example, for a Bayes factor comparing the \"null\" of 0-0.1 to the alternative >0.1, we would set bayesfactor_parameters(null = c(0, 0.1), direction = \">\"). It is also possible to compute a Bayes factor for dividing hypotheses - that is, when the null and the alternative are complementary, opposing one-sided hypotheses (Morey & Wagenmakers, 2014). For example, for a Bayes factor comparing the \"null\" of <0 to the alternative >0, we would set bayesfactor_parameters(null = c(-Inf, 0)).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"There is also a plot()-method implemented in the see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"setting-the-correct-prior","dir":"Reference","previous_headings":"","what":"Setting the correct prior","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"For the computation of Bayes factors, the model priors must be proper priors (at the very least they should not be flat, and it is preferable that they be informative); as the priors for the alternative get wider, the likelihood of the null value(s) increases, to the extreme that for completely flat priors the null is infinitely more favorable than the alternative (this is called the Jeffreys-Lindley-Bartlett paradox). Thus, you should only ever try (or want) to compute a Bayes factor when you have an informed prior. (Note that by default, brms::brm() uses flat priors for fixed effects; see example below.) It is important to provide the correct prior for meaningful results, matching the posterior-type input: A numeric vector - prior should also be a numeric vector, representing the prior estimate. A data frame - prior should also be a data frame, representing the prior estimates, in matching column order; if rvar_col is specified, prior should be the name of an rvar column that represents the prior estimates. A supported Bayesian model (stanreg, brmsfit, etc.) - prior should be a model equivalent to the posterior model but with MCMC samples from the priors only; see unupdate(). If prior is set to NULL, unupdate() is called internally (not supported for brmsfit_multiple models). Output from a {marginaleffects} function - prior should also be an equivalent output from a {marginaleffects} function based on a prior model (see unupdate()). Output from an {emmeans} function - prior should also be an equivalent output from an {emmeans} function based on a prior model (see unupdate()). The prior can also be the original (posterior) model, in which case the function will try to \"unupdate\" the estimates (not supported if the estimates have undergone any transformations – \"log\", \"response\", etc. – or any regridding).","code":""},
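As a sketch of building the prior explicitly, a matching prior model can be obtained with unupdate() and passed along (any supported Bayesian model with proper priors would do; rstanarm's defaults are proper):
model_posterior <- rstanarm::stan_glm(mpg ~ wt, data = mtcars, refresh = 0)
model_prior <- bayestestR::unupdate(model_posterior) # same model, sampled from the priors only
bayestestR::bayesfactor_parameters(model_posterior, prior = model_prior)
# equivalent to leaving prior = NULL, which calls unupdate() internally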
{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"interpreting-bayes-factors","dir":"Reference","previous_headings":"","what":"Interpreting Bayes Factors","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"A Bayes factor greater than 1 can be interpreted as evidence against the null, and one convention is that a Bayes factor greater than 3 can be considered \"substantial\" evidence against the null (and vice versa, a Bayes factor smaller than 1/3 indicates substantial evidence in favor of the null model) (Wetzels et al. 2011).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"Wagenmakers, E. J., Lodewyckx, T., Kuriyal, H., and Grasman, R. (2010). Bayesian hypothesis testing for psychologists: A tutorial on the Savage-Dickey method. Cognitive Psychology, 60(3), 158-189. Heck, D. W. (2019). A caveat on the Savage-Dickey density ratio: The case of computing Bayes factors for regression parameters. British Journal of Mathematical and Statistical Psychology, 72(2), 316-333. Morey, R. D., & Wagenmakers, E. J. (2014). Simple relation between Bayesian order-restricted and point-null hypothesis tests. Statistics & Probability Letters, 92, 121-124. Morey, R. D., & Rouder, J. N. (2011). Bayes factor approaches for testing interval null hypotheses. Psychological Methods, 16(4), 406. Liao, J. G., Midya, V., & Berg, A. (2020). Connecting and contrasting the Bayes factor and a modified ROPE procedure for testing interval null hypotheses. The American Statistician, 1-19. Wetzels, R., Matzke, D., Lee, M. D., Rouder, J. N., Iverson, G. J., and Wagenmakers, E.-J. (2011). Statistical Evidence in Experimental Psychology: An Empirical Comparison Using 855 t Tests. Perspectives on Psychological Science, 6(3), 291–298. doi:10.1177/1745691611406923","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"Mattan S. Ben-Shachar","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"","code":"library(bayestestR) prior <- distribution_normal(1000, mean = 0, sd = 1) posterior <- distribution_normal(1000, mean = .5, sd = .3) (BF_pars <- bayesfactor_parameters(posterior, prior, verbose = FALSE)) #> Bayes Factor (Savage-Dickey density ratio) #> #> BF #> ---- #> 1.21 #> #> * Evidence Against The Null: 0 #> as.numeric(BF_pars) #> [1] 1.212843 # \donttest{ # rstanarm models # --------------- contrasts(sleep$group) <- contr.equalprior_pairs # see vignette stan_model <- suppressWarnings(stan_lmer( extra ~ group + (1 | ID), data = sleep, refresh = 0 )) bayesfactor_parameters(stan_model, verbose = FALSE) #> Bayes Factor (Savage-Dickey density ratio) #> #> Parameter | BF #> ------------------ #> (Intercept) | 4.55 #> group1 | 3.74 #> #> * Evidence Against The Null: 0 #> bayesfactor_parameters(stan_model, null = rope_range(stan_model)) #> Sampling priors, please wait...
#> Bayes Factor (Null-Interval) #> #> Parameter | BF #> ------------------ #> (Intercept) | 4.17 #> group1 | 3.36 #> #> * Evidence Against The Null: [-0.202, 0.202] #> # emmGrid objects # --------------- group_diff <- pairs(emmeans(stan_model, ~group, data = sleep)) bayesfactor_parameters(group_diff, prior = stan_model, verbose = FALSE) #> Bayes Factor (Savage-Dickey density ratio) #> #> contrast | BF #> ---------------------- #> group1 - group2 | 3.81 #> #> * Evidence Against The Null: 0 #> # Or # group_diff_prior <- pairs(emmeans(unupdate(stan_model), ~group)) # bayesfactor_parameters(group_diff, prior = group_diff_prior, verbose = FALSE) # } # brms models # ----------- # \dontrun{ contrasts(sleep$group) <- contr.equalprior_pairs # see vignette my_custom_priors <- set_prior(\"student_t(3, 0, 1)\", class = \"b\") + set_prior(\"student_t(3, 0, 1)\", class = \"sd\", group = \"ID\") brms_model <- suppressWarnings(brm(extra ~ group + (1 | ID), data = sleep, prior = my_custom_priors, refresh = 0 )) #> Compiling Stan program... #> Start sampling bayesfactor_parameters(brms_model, verbose = FALSE) #> Bayes Factor (Savage-Dickey density ratio) #> #> Parameter | BF #> ------------------- #> (Intercept) | 6.58 #> group1 | 11.41 #> #> * Evidence Against The Null: 0 #> # }"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":null,"dir":"Reference","previous_headings":"","what":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"This method computes Bayes factors for comparing a model with order restrictions on its parameters with the fully unrestricted model. Note that this method should only be used for confirmatory analyses. The bf_* function is an alias of the main function. For more info, in particular on specifying correct priors for factors with more than 2 levels, see the Bayes factors vignette.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"","code":"bayesfactor_restricted(posterior, ...) bf_restricted(posterior, ...) # S3 method for class 'stanreg' bayesfactor_restricted( posterior, hypothesis, prior = NULL, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), ... ) # S3 method for class 'brmsfit' bayesfactor_restricted( posterior, hypothesis, prior = NULL, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), ... ) # S3 method for class 'blavaan' bayesfactor_restricted( posterior, hypothesis, prior = NULL, verbose = TRUE, ... ) # S3 method for class 'emmGrid' bayesfactor_restricted( posterior, hypothesis, prior = NULL, verbose = TRUE, ... ) # S3 method for class 'data.frame' bayesfactor_restricted( posterior, hypothesis, prior = NULL, rvar_col = NULL, ... ) # S3 method for class 'bayesfactor_restricted' as.logical(x, which = c(\"posterior\", \"prior\"), ...)"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"posterior A stanreg / brmsfit object, emmGrid or a data frame - representing the posterior distribution(s) (see Details). ... Currently not used.
hypothesis A character vector specifying the restrictions as logical conditions (see examples below). prior An object representing a prior distribution (see Details). verbose Toggle warnings. effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. rvar_col A single character - the name of an rvar column in the data frame to be processed. See the example in p_direction(). x An object of class bayesfactor_restricted. which Should the logical matrix be of the posterior or the prior distribution(s)?","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"A data frame containing the (log) Bayes factor representing evidence against the unrestricted model (use as.numeric() to extract the non-log Bayes factors; see examples). (A bool_results attribute contains the results for each sample, indicating whether it is included in the hypothesized restriction.)","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"This method is used to compute Bayes factors for order-restricted models vs. unrestricted models by setting an order restriction on the prior and posterior distributions (Morey & Wagenmakers, 2013). (Though it is possible to use bayesfactor_restricted() to test interval restrictions, it is more suitable for testing order restrictions; see examples).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"setting-the-correct-prior","dir":"Reference","previous_headings":"","what":"Setting the correct prior","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"For the computation of Bayes factors, the model priors must be proper priors (at the very least they should not be flat, and it is preferable that they be informative); as the priors for the alternative get wider, the likelihood of the null value(s) increases, to the extreme that for completely flat priors the null is infinitely more favorable than the alternative (this is called the Jeffreys-Lindley-Bartlett paradox). Thus, you should only ever try (or want) to compute a Bayes factor when you have an informed prior. (Note that by default, brms::brm() uses flat priors for fixed effects; see example below.) It is important to provide the correct prior for meaningful results, matching the posterior-type input: A numeric vector - prior should also be a numeric vector, representing the prior estimate. A data frame - prior should also be a data frame, representing the prior estimates, in matching column order; if rvar_col is specified, prior should be the name of an rvar column that represents the prior estimates. A supported Bayesian model (stanreg, brmsfit, etc.) - prior should be a model equivalent to the posterior model but with MCMC samples from the priors only; see unupdate(). If prior is set to NULL, unupdate() is called internally (not supported for brmsfit_multiple models). Output from a {marginaleffects} function - prior should also be an equivalent output from a {marginaleffects} function based on a prior model (see unupdate()). Output from an {emmeans} function - prior should also be an equivalent output from an {emmeans} function based on a prior model (see unupdate()). The prior can also be the original (posterior) model, in which case the function will try to \"unupdate\" the estimates (not supported if the estimates have undergone any transformations – \"log\", \"response\", etc.
– or any regridding).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"interpreting-bayes-factors","dir":"Reference","previous_headings":"","what":"Interpreting Bayes Factors","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"A Bayes factor greater than 1 can be interpreted as evidence against the null, and one convention is that a Bayes factor greater than 3 can be considered \"substantial\" evidence against the null (and vice versa, a Bayes factor smaller than 1/3 indicates substantial evidence in favor of the null model) (Wetzels et al. 2011).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"Morey, R. D., & Wagenmakers, E. J. (2014). Simple relation between Bayesian order-restricted and point-null hypothesis tests. Statistics & Probability Letters, 92, 121-124. Morey, R. D., & Rouder, J. N. (2011). Bayes factor approaches for testing interval null hypotheses. Psychological Methods, 16(4), 406. Morey, R. D. (Jan, 2015). Multiple Comparisons with BayesFactor, Part 2 – order restrictions. Retrieved from https://richarddmorey.org/category/order-restrictions/.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"","code":"set.seed(444) library(bayestestR) prior <- data.frame( A = rnorm(500), B = rnorm(500), C = rnorm(500) ) posterior <- data.frame( A = rnorm(500, .4, 0.7), B = rnorm(500, -.2, 0.4), C = rnorm(500, 0, 0.5) ) hyps <- c( \"A > B & B > C\", \"A > B & A > C\", \"C > A\" ) (b <- bayesfactor_restricted(posterior, hypothesis = hyps, prior = prior)) #> Bayes Factor (Order-Restriction) #> #> Hypothesis P(Prior) P(Posterior) BF #> A > B & B > C 0.16 0.23 1.39 #> A > B & A > C 0.36 0.59 1.61 #> C > A 0.46 0.34 0.742 #> #> * Bayes factors for the restricted model vs. the un-restricted model. bool <- as.logical(b, which = \"posterior\") head(bool) #> A > B & B > C A > B & A > C C > A #> [1,] TRUE TRUE FALSE #> [2,] TRUE TRUE FALSE #> [3,] TRUE TRUE FALSE #> [4,] FALSE TRUE FALSE #> [5,] FALSE FALSE TRUE #> [6,] FALSE TRUE FALSE see::plots( plot(estimate_density(posterior)), # distribution **conditional** on the restrictions plot(estimate_density(posterior[bool[, hyps[1]], ])) + ggplot2::ggtitle(hyps[1]), plot(estimate_density(posterior[bool[, hyps[2]], ])) + ggplot2::ggtitle(hyps[2]), plot(estimate_density(posterior[bool[, hyps[3]], ])) + ggplot2::ggtitle(hyps[3]), guides = \"collect\" ) # \donttest{ # rstanarm models # --------------- data(\"mtcars\") fit_stan <- rstanarm::stan_glm(mpg ~ wt + cyl + am, data = mtcars, refresh = 0 ) hyps <- c( \"am > 0 & cyl < 0\", \"cyl < 0\", \"wt - cyl > 0\" ) bayesfactor_restricted(fit_stan, hypothesis = hyps) #> Sampling priors, please wait... #> Bayes Factor (Order-Restriction) #> #> Hypothesis P(Prior) P(Posterior) BF #> am > 0 & cyl < 0 0.25 0.56 2.25 #> cyl < 0 0.50 1.00 1.99 #> wt - cyl > 0 0.50 0.10 0.197 #> #> * Bayes factors for the restricted model vs. the un-restricted model.
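# For intuition, the order-restricted BF is simply the ratio of the posterior
# to the prior probability of satisfying the restriction -- a minimal sketch,
# reusing the `prior` and `posterior` data frames from the first example above:
mean(with(posterior, A > B & B > C)) / mean(with(prior, A > B & B > C))
# should match the BF reported for "A > B & B > C" (~1.39), up to rounding
# of the displayed probabilities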
# } # \donttest{ # emmGrid objects # --------------- # replicating http://bayesfactor.blogspot.com/2015/01/multiple-comparisons-with-bayesfactor-2.html data(\"disgust\") contrasts(disgust$condition) <- contr.equalprior_pairs # see vignette fit_model <- rstanarm::stan_glm(score ~ condition, data = disgust, family = gaussian()) #> Chains 1-4: (sampling progress output omitted)
em_condition <- emmeans::emmeans(fit_model, ~condition, data = disgust) hyps <- c(\"lemon < control & control < sulfur\") bayesfactor_restricted(em_condition, prior = fit_model, hypothesis = hyps) #> Sampling priors, please wait... #> Bayes Factor (Order-Restriction) #> #> Hypothesis P(Prior) P(Posterior) BF #> lemon < control & control < sulfur 0.17 0.75 4.28 #> #> * Bayes factors for the restricted model vs. the un-restricted model. # > # Bayes Factor (Order-Restriction) # > # > Hypothesis P(Prior) P(Posterior) BF # > lemon < control & control < sulfur 0.17 0.75 4.49 # > --- # > Bayes factors for the restricted model vs. the un-restricted model. # }"},{"path":"https://easystats.github.io/bayestestR/reference/bayestestR-package.html","id":null,"dir":"Reference","previous_headings":"","what":"bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework — bayestestR-package","title":"bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework — bayestestR-package","text":"Existing R packages allow users to easily fit a large variety of models and to extract and visualize the posterior draws. However, most of these packages only return a limited set of indices (e.g., point-estimates and CIs). bayestestR provides a comprehensive and consistent set of functions to analyze and describe posterior distributions generated by a variety of model objects, including popular modeling packages such as rstanarm, brms or BayesFactor. References: Makowski et al. (2019) doi:10.21105/joss.01541 Makowski et al. (2019) doi:10.3389/fpsyg.2019.02767","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayestestR-package.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework — bayestestR-package","text":"bayestestR","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/bayestestR-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework — bayestestR-package","text":"Maintainer: Dominique Makowski dom.makowski@gmail.com (ORCID) Authors: Daniel Lüdecke d.luedecke@uke.de (ORCID) Mattan S. Ben-Shachar matanshm@post.bgu.ac.il (ORCID) Indrajeet Patil patilindrajeet.science@gmail.com (ORCID) Micah K. Wilson micah.k.wilson@curtin.edu.au (ORCID) Brenton M.
Wiernik brenton@wiernik.org (ORCID) Other contributors: Paul-Christian Bürkner paul.buerkner@gmail.com [reviewer] Tristan Mahr tristan.mahr@wisc.edu (ORCID) [reviewer] Henrik Singmann singmann@gmail.com (ORCID) [contributor] Quentin F. Gronau (ORCID) [contributor] Sam Crawley sam@crawley.nz (ORCID) [contributor]","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":null,"dir":"Reference","previous_headings":"","what":"Bias Corrected and Accelerated Interval (BCa) — bci","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"Compute the Bias Corrected and Accelerated Interval (BCa) of posterior distributions.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"","code":"bci(x, ...) bcai(x, ...) # S3 method for class 'numeric' bci(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'data.frame' bci(x, ci = 0.95, rvar_col = NULL, verbose = TRUE, ...) # S3 method for class 'MCMCglmm' bci(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'sim.merMod' bci( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'sim' bci(x, ci = 0.95, parameters = NULL, verbose = TRUE, ...) # S3 method for class 'emmGrid' bci(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'slopes' bci(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'stanreg' bci( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' bci( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'BFBayesFactor' bci(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'get_predicted' bci(x, ci = 0.95, use_iterations = FALSE, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"x A vector representing a posterior distribution, or a data frame of such vectors. Can also be a Bayesian model. bayestestR supports a wide range of models (see, for example, methods(\"hdi\")); not all of them are documented in the 'Usage' section, because methods for other classes mostly resemble the arguments of the .numeric or .data.frame methods. ... Currently not used. ci Value or vector of probability of the (credible) interval - CI (between 0 and 1) to be estimated. Default to .95 (95%). verbose Toggle warnings. rvar_col A single character - the name of an rvar column in the data frame to be processed. See the example in p_direction(). effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in summary() are returned. Use parameters to select specific parameters in the output. component Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. use_iterations Logical, if TRUE and x is a get_predicted object (returned by insight::get_predicted()), the function is applied to the iterations instead of the predictions. This only applies to models that return iterations for predicted values (e.g., brmsfit models).
","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"A data frame with the following columns: Parameter The model parameter(s), if x is a model object. If x is a vector, this column is missing. CI The probability of the credible interval. CI_low, CI_high The lower and upper credible interval limits for the parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"Unlike equal-tailed intervals (see eti()) that typically exclude 2.5% from each tail of the distribution and always include the median, the HDI is not equal-tailed and therefore always includes the mode(s) of posterior distributions. While this can be useful to better represent the credibility mass of a distribution, the HDI also has some limitations. See spi() for details. The 95% or 89% Credible Intervals (CI) are two reasonable ranges to characterize the uncertainty related to the estimation (see here for a discussion about the differences between these two values). The 89% intervals (ci = 0.89) are deemed to be more stable than, for instance, 95% intervals (Kruschke, 2014). An effective sample size of at least 10.000 is recommended if one wants to estimate 95% intervals with high precision (Kruschke, 2014, p. 183ff). Unfortunately, the default number of posterior samples for most Bayes packages (e.g., rstanarm or brms) is only 4.000 (thus, you might want to increase it when fitting your model). Moreover, 89 indicates the arbitrariness of interval limits - its only remarkable property is being the highest prime number that does not exceed the already unstable 95% threshold (McElreath, 2015). However, 95% has some advantages too. For instance, it shares (in the case of a normal posterior distribution) an intuitive relationship with the standard deviation and it conveys a more accurate image of the (artificial) bounds of the distribution. Also, because it is wider, it makes analyses more conservative (i.e., the probability of covering 0 is larger for the 95% CI than for lower ranges such as 89%), which is a good thing in the context of the reproducibility crisis. A 95% equal-tailed interval (ETI) has 2.5% of the distribution on either side of its limits. It indicates the 2.5th percentile and the 97.5th percentile. In symmetric distributions, the two methods of computing credible intervals, the ETI and the HDI, return similar results. This is not the case for skewed distributions. Indeed, it is possible that parameter values in the ETI have lower credibility (are less probable) than parameter values outside the ETI. This property seems undesirable as a summary of the credible values in a distribution. On the other hand, the ETI range does change when transformations are applied to the distribution (for instance, for a log odds scale to probabilities): the lower and higher bounds of the transformed distribution will correspond to the transformed lower and higher bounds of the original distribution. On the contrary, applying transformations to the distribution will change the resulting HDI.","code":""},
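To see the ETI/HDI contrast described above on a skewed distribution, a minimal sketch (the gamma \"posterior\" here is an arbitrary illustration, not taken from these docs):
set.seed(1)
x <- rgamma(10000, shape = 2, rate = 1) # a right-skewed \"posterior\"
bayestestR::eti(x) # equal-tailed: cuts 2.5% from each tail
bayestestR::hdi(x) # highest density: shifts toward the mode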
{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"DiCiccio, T. J. and B. Efron. (1996). Bootstrap Confidence Intervals. Statistical Science. 11(3): 189–212. doi:10.1214/ss/1032280214","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"","code":"posterior <- rnorm(1000) bci(posterior) #> 95% ETI: [-1.78, 2.11] bci(posterior, ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> 80% ETI | 89% ETI | 95% ETI #> --------------------------------------------- #> [-1.17, 1.34] | [-1.52, 1.70] | [-1.78, 2.11]"},{"path":"https://easystats.github.io/bayestestR/reference/bic_to_bf.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","title":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","text":"The difference between two Bayesian information criterion (BIC) indices of two models can be used to approximate Bayes factors via: $$BF_{10} = e^{(BIC_0 - BIC_1)/2}$$","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bic_to_bf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","text":"","code":"bic_to_bf(bic, denominator, log = FALSE)"},{"path":"https://easystats.github.io/bayestestR/reference/bic_to_bf.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","text":"bic A vector of BIC values. denominator The BIC value to use as a denominator (to test against). log If TRUE, return the log(BF).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bic_to_bf.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","text":"The Bayes Factors corresponding to the BIC values against the denominator.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bic_to_bf.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","text":"Wagenmakers, E. J. (2007). A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14(5), 779-804","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bic_to_bf.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","text":"","code":"bic1 <- BIC(lm(Sepal.Length ~ 1, data = iris)) bic2 <- BIC(lm(Sepal.Length ~ Species, data = iris)) bic3 <- BIC(lm(Sepal.Length ~ Species + Petal.Length, data = iris)) bic4 <- BIC(lm(Sepal.Length ~ Species * Petal.Length, data = iris)) bic_to_bf(c(bic1, bic2, bic3, bic4), denominator = bic1) #> [1] 1.000000e+00 1.695852e+29 5.843105e+55 2.203243e+54"},
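The approximation is easy to verify by hand; a minimal sketch reusing the BIC values from the example above:
exp((bic1 - bic2) / 2) # evidence for the Species model against the intercept-only model
# reproduces the second element returned by bic_to_bf() above (~1.7e+29)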
2017.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/check_prior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check if Prior is Informative — check_prior","text":"","code":"check_prior(model, method = \"gelman\", simulate_priors = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/check_prior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check if Prior is Informative — check_prior","text":"model stanreg, stanfit, brmsfit, blavaan, MCMCglmm object. method Can \"gelman\" \"lakeland\". \"gelman\" method, SD posterior 0.1 times SD prior, prior considered informative. \"lakeland\" method, prior considered informative posterior falls within 95% HDI prior. simulate_priors prior distributions simulated using simulate_prior() (default; faster) sampled via unupdate() (slower, accurate). ... Currently used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/check_prior.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check if Prior is Informative — check_prior","text":"data frame two columns: parameter names quality prior (might \"informative\", \"uninformative\") \"determinable\" prior distribution determined).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/check_prior.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check if Prior is Informative — check_prior","text":"Gelman, ., Simpson, D., Betancourt, M. (2017). Prior Can Often Understood Context Likelihood. Entropy, 19(10), 555. doi:10.3390/e19100555","code":""},{"path":"https://easystats.github.io/bayestestR/reference/check_prior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check if Prior is Informative — check_prior","text":"","code":"# \\donttest{ library(bayestestR) model <- rstanarm::stan_glm(mpg ~ wt + am, data = mtcars, chains = 1, refresh = 0) check_prior(model, method = \"gelman\") #> Parameter Prior_Quality #> 1 (Intercept) informative #> 2 wt uninformative #> 3 am uninformative check_prior(model, method = \"lakeland\") #> Parameter Prior_Quality #> 1 (Intercept) informative #> 2 wt informative #> 3 am informative # An extreme example where both methods diverge: model <- rstanarm::stan_glm(mpg ~ wt, data = mtcars[1:3, ], prior = normal(-3.3, 1, FALSE), prior_intercept = normal(0, 1000, FALSE), refresh = 0 ) check_prior(model, method = \"gelman\") #> Parameter Prior_Quality #> 1 (Intercept) uninformative #> 2 wt informative check_prior(model, method = \"lakeland\") #> Parameter Prior_Quality #> 1 (Intercept) informative #> 2 wt misinformative # can provide visual confirmation to the Lakeland method plot(si(model, verbose = FALSE)) # }"},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":null,"dir":"Reference","previous_headings":"","what":"Confidence/Credible/Compatibility Interval (CI) — ci","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"Compute Confidence/Credible/Compatibility Intervals (CI) Support Intervals (SI) Bayesian frequentist models. Documentation accessible :","code":""},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"","code":"ci(x, ...) 
{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":null,"dir":"Reference","previous_headings":"","what":"Confidence/Credible/Compatibility Interval (CI) — ci","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"Compute Confidence/Credible/Compatibility Intervals (CI) or Support Intervals (SI) for Bayesian and frequentist models. Documentation is accessible for Bayesian models and for frequentist models.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"","code":"ci(x, ...) # S3 method for class 'numeric' ci(x, ci = 0.95, method = \"ETI\", verbose = TRUE, BF = 1, ...) # S3 method for class 'data.frame' ci(x, ci = 0.95, method = \"ETI\", BF = 1, rvar_col = NULL, verbose = TRUE, ...) # S3 method for class 'sim.merMod' ci( x, ci = 0.95, method = \"ETI\", effects = c(\"fixed\", \"random\", \"all\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'sim' ci(x, ci = 0.95, method = \"ETI\", parameters = NULL, verbose = TRUE, ...) # S3 method for class 'stanreg' ci( x, ci = 0.95, method = \"ETI\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, BF = 1, ... ) # S3 method for class 'brmsfit' ci( x, ci = 0.95, method = \"ETI\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, BF = 1, ... ) # S3 method for class 'BFBayesFactor' ci(x, ci = 0.95, method = \"ETI\", verbose = TRUE, BF = 1, ...) # S3 method for class 'MCMCglmm' ci(x, ci = 0.95, method = \"ETI\", verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"x A stanreg or brmsfit model, or a vector representing a posterior distribution. ... Currently not used. ci Value or vector of probability of the CI (between 0 and 1) to be estimated. Default to 0.95 (95%). method Can be \"ETI\" (default), \"HDI\", \"BCI\", \"SPI\" or \"SI\". verbose Toggle warnings. BF The amount of support required to be included in the support interval. rvar_col A single character - the name of an rvar column in the data frame to be processed. See the example in p_direction(). effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in summary() are returned. Use parameters to select specific parameters in the output. component Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"A data frame with the following columns: Parameter The model parameter(s), if x is a model object. If x is a vector, this column is missing. CI The probability of the credible interval. CI_low, CI_high The lower and upper credible interval limits for the parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"See the documentation for Bayesian models and for frequentist models.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"When it comes to interpretation, we recommend thinking of the CI in terms of an \"uncertainty\" or \"compatibility\" interval, the latter being defined as \"Given any value in the interval and the background assumptions, the data should not seem very surprising\" (Gelman & Greenland 2019).
There is also a plot()-method implemented in the see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"Gelman A, Greenland S. Are confidence intervals better termed \"uncertainty intervals\"? BMJ 2019;l5381. doi:10.1136/bmj.l5381","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"","code":"library(bayestestR) posterior <- rnorm(1000) ci(posterior, method = \"ETI\") #> 95% ETI: [-2.00, 1.96] ci(posterior, method = \"HDI\") #> 95% HDI: [-1.91, 2.03] df <- data.frame(replicate(4, rnorm(100))) ci(df, method = \"ETI\", ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> Parameter | 80% ETI | 89% ETI | 95% ETI #> --------------------------------------------------------- #> X1 | [-1.46, 1.35] | [-1.70, 1.63] | [-1.94, 1.94] #> X2 | [-1.21, 1.34] | [-1.51, 1.71] | [-1.81, 2.08] #> X3 | [-1.20, 1.19] | [-1.54, 1.48] | [-2.02, 1.71] #> X4 | [-1.22, 1.51] | [-1.88, 1.61] | [-2.20, 1.82] ci(df, method = \"HDI\", ci = c(0.80, 0.89, 0.95)) #> Highest Density Interval #> #> Parameter | 80% HDI | 89% HDI | 95% HDI #> --------------------------------------------------------- #> X1 | [-1.38, 1.37] | [-1.95, 1.37] | [-1.77, 2.17] #> X2 | [-1.20, 1.35] | [-1.64, 1.52] | [-2.15, 1.80] #> X3 | [-1.21, 1.17] | [-1.46, 1.56] | [-2.07, 1.72] #> X4 | [-1.03, 1.52] | [-1.45, 1.74] | [-2.34, 1.71] model <- suppressWarnings( stan_glm(mpg ~ wt, data = mtcars, chains = 2, iter = 200, refresh = 0) ) ci(model, method = \"ETI\", ci = c(0.80, 0.89)) #> Equal-Tailed Interval #> #> Parameter | 80% ETI | 89% ETI | Effects | Component #> --------------------------------------------------------------------- #> (Intercept) | [34.59, 39.93] | [34.12, 40.56] | fixed | conditional #> wt | [-6.10, -4.52] | [-6.27, -4.33] | fixed | conditional ci(model, method = \"HDI\", ci = c(0.80, 0.89)) #> Highest Density Interval #> #> Parameter | 80% HDI | 89% HDI #> --------------------------------------------- #> (Intercept) | [34.36, 39.67] | [34.20, 40.60] #> wt | [-6.09, -4.51] | [-6.37, -4.47] bf <- ttestBF(x = rnorm(100, 1, 1)) ci(bf, method = \"ETI\") #> Equal-Tailed Interval #> #> Parameter | 95% ETI #> ------------------------- #> Difference | [0.80, 1.23] ci(bf, method = \"HDI\") #> Highest Density Interval #> #> Parameter | 95% HDI #> ------------------------- #> Difference | [0.81, 1.24] model <- emtrends(model, ~1, \"wt\", data = mtcars) ci(model, method = \"ETI\") #> Equal-Tailed Interval #> #> X1 | 95% ETI #> ------------------------ #> overall | [-6.37, -4.20] ci(model, method = \"HDI\") #> Highest Density Interval #> #> X1 | 95% HDI #> ------------------------ #> overall | [-6.37, -4.18]"},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":null,"dir":"Reference","previous_headings":"","what":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"Build contrasts for factors with equal marginal priors on all levels. The 3 functions give the same orthogonal contrasts, but scaled differently to allow different prior specifications (see 'Details').
The implementation is based on Singmann & Gronau's bfrms, following the description in Rouder, Morey, Speckman, & Province (2012, p. 363).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"","code":"contr.equalprior(n, contrasts = TRUE, sparse = FALSE) contr.equalprior_pairs(n, contrasts = TRUE, sparse = FALSE) contr.equalprior_deviations(n, contrasts = TRUE, sparse = FALSE)"},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"n A vector of levels for a factor, or the number of levels. contrasts A logical indicating whether contrasts should be computed. sparse A logical indicating if the result should be sparse (of class dgCMatrix), using the package Matrix.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"A matrix with n rows and k columns, with k=n-1 if contrasts is TRUE and k=n if contrasts is FALSE.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"When using stats::contr.treatment, each dummy variable is the difference between each level and the reference level. While this is useful if setting different priors for each coefficient, it should not be used if one is trying to set a general prior for differences between means, as it (as well as stats::contr.sum and others) results in unequal marginal priors on the means and on the differences between them. In the example below, we can see that the priors on the means are not all the same (A has a narrower prior), and likewise for the pairwise differences (priors on differences from A are narrower). The solution is to use one of the methods provided here, which all result in marginally equal priors on the means and on the differences between them. Though this obscures the interpretation of the parameters, setting equal priors on means and differences is important and useful for specifying equal priors on the means of a factor or on their differences, and for correct estimation of Bayes factors for contrasts and order restrictions of multi-level factors (k>2). See more info on specifying correct priors for factors with more than 2 levels in the Bayes factors vignette. NOTE: When setting priors on these dummy variables, always: Use priors that are centered on 0! Other location/centered priors are meaningless! Use identically-scaled priors on all the dummy variables of a single factor! contr.equalprior returns the original orthogonal-normal contrasts as described in Rouder, Morey, Speckman, & Province (2012, p. 363).
Setting contrasts = FALSE returns \\(I_{n} - \\frac{1}{n}\\) matrix.","code":"library(brms) data <- data.frame( group = factor(rep(LETTERS[1:4], each = 3)), y = rnorm(12) ) contrasts(data$group) # R's default contr.treatment #> B C D #> A 0 0 0 #> B 1 0 0 #> C 0 1 0 #> D 0 0 1 model_prior <- brm( y ~ group, data = data, sample_prior = \"only\", # Set the same priors on the 3 dummy variable # (Using an arbitrary scale) prior = set_prior(\"normal(0, 10)\", coef = c(\"groupB\", \"groupC\", \"groupD\")) ) est <- emmeans::emmeans(model_prior, pairwise ~ group) point_estimate(est, centr = \"mean\", disp = TRUE) #> Point Estimate #> #> Parameter | Mean | SD #> ------------------------- #> A | -0.01 | 6.35 #> B | -0.10 | 9.59 #> C | 0.11 | 9.55 #> D | -0.16 | 9.52 #> A - B | 0.10 | 9.94 #> A - C | -0.12 | 9.96 #> A - D | 0.15 | 9.87 #> B - C | -0.22 | 14.38 #> B - D | 0.05 | 14.14 #> C - D | 0.27 | 14.00"},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"contr-equalprior-pairs","dir":"Reference","previous_headings":"","what":"contr.equalprior_pairs","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"Useful setting priors terms pairwise differences means - scales priors defines prior distribution pair-wise differences pairwise differences (e.g., - B, B - C, etc.). means prior distribution, distribution differences matches prior set \"normal(0, 10)\". Success!","code":"contrasts(data$group) <- contr.equalprior_pairs contrasts(data$group) #> [,1] [,2] [,3] #> A 0.0000000 0.6123724 0.0000000 #> B -0.1893048 -0.2041241 0.5454329 #> C -0.3777063 -0.2041241 -0.4366592 #> D 0.5670111 -0.2041241 -0.1087736 model_prior <- brm( y ~ group, data = data, sample_prior = \"only\", # Set the same priors on the 3 dummy variable # (Using an arbitrary scale) prior = set_prior(\"normal(0, 10)\", coef = c(\"group1\", \"group2\", \"group3\")) ) est <- emmeans(model_prior, pairwise ~ group) point_estimate(est, centr = \"mean\", disp = TRUE) #> Point Estimate #> #> Parameter | Mean | SD #> ------------------------- #> A | -0.31 | 7.46 #> B | -0.24 | 7.47 #> C | -0.34 | 7.50 #> D | -0.30 | 7.25 #> A - B | -0.08 | 10.00 #> A - C | 0.03 | 10.03 #> A - D | -0.01 | 9.85 #> B - C | 0.10 | 10.28 #> B - D | 0.06 | 9.94 #> C - D | -0.04 | 10.18"},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"contr-equalprior-deviations","dir":"Reference","previous_headings":"","what":"contr.equalprior_deviations","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"Useful setting priors terms deviations mean grand mean - scales priors defines prior distribution distance (, ) mean one levels might overall mean. (See examples.)","code":""},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors ANOVA designs. Journal Mathematical Psychology, 56(5), 356-374. 
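A small base-R sketch (an addition, not part of the original reference page) generalizing the decomposition check shown in the Examples below: for contr.equalprior(k), Q %*% t(Q) should equal I_k - 1/k, i.e. (k-1)/k on the diagonal and -1/k off the diagonal.

library(bayestestR)

# Verify Q %*% t(Q) == I_k - 1/k for several factor sizes
for (k in 2:5) {
  Q <- contr.equalprior(k)
  target <- diag(k) - 1 / k
  stopifnot(isTRUE(all.equal(Q %*% t(Q), target, check.attributes = FALSE)))
}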
{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"","code":"contr.equalprior(2) # Q_2 in Rouder et al. (2012, p. 363) #> [,1] #> [1,] -0.7071068 #> [2,] 0.7071068 contr.equalprior(5) # equivalent to Q_5 in Rouder et al. (2012, p. 363) #> [,1] [,2] [,3] [,4] #> [1,] 0.000000e+00 0.0000000 0.0000000 0.8944272 #> [2,] -4.163336e-17 0.0000000 0.8660254 -0.2236068 #> [3,] -5.773503e-01 -0.5773503 -0.2886751 -0.2236068 #> [4,] -2.113249e-01 0.7886751 -0.2886751 -0.2236068 #> [5,] 7.886751e-01 -0.2113249 -0.2886751 -0.2236068 ## check decomposition Q3 <- contr.equalprior(3) Q3 %*% t(Q3) ## 2/3 on diagonal and -1/3 on off-diagonal elements #> [,1] [,2] [,3] #> [1,] 0.6666667 -0.3333333 -0.3333333 #> [2,] -0.3333333 0.6666667 -0.3333333 #> [3,] -0.3333333 -0.3333333 0.6666667"},{"path":"https://easystats.github.io/bayestestR/reference/convert_bayesian_as_frequentist.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert (refit) a Bayesian model to frequentist — convert_bayesian_as_frequentist","title":"Convert (refit) a Bayesian model to frequentist — convert_bayesian_as_frequentist","text":"Refit a Bayesian model as frequentist. Can be useful for comparisons.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/convert_bayesian_as_frequentist.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert (refit) a Bayesian model to frequentist — convert_bayesian_as_frequentist","text":"","code":"convert_bayesian_as_frequentist(model, data = NULL, REML = TRUE) bayesian_as_frequentist(model, data = NULL, REML = TRUE)"},{"path":"https://easystats.github.io/bayestestR/reference/convert_bayesian_as_frequentist.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert (refit) a Bayesian model to frequentist — convert_bayesian_as_frequentist","text":"model: A Bayesian model. data: Data used by the model. If NULL, will try to extract it from the model. REML: For mixed effects, should the models be estimated using restricted maximum likelihood (REML) (TRUE, default) or maximum likelihood (FALSE)?","code":""},
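As a minimal usage sketch (illustrative, using the package's default priors, not from the rendered page), the refitted frequentist coefficients can be placed next to the Bayesian point estimates for a quick comparison:

library(bayestestR)
library(rstanarm)

model <- stan_glm(Sepal.Length ~ Species, data = iris, chains = 2, refresh = 0)
freq <- bayesian_as_frequentist(model)

# Posterior medians vs. maximum-likelihood estimates; with the weakly
# informative default priors these should be close on this example
cbind(bayesian = coef(model), frequentist = coef(freq))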
{"path":"https://easystats.github.io/bayestestR/reference/convert_bayesian_as_frequentist.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Convert (refit) a Bayesian model to frequentist — convert_bayesian_as_frequentist","text":"","code":"# \\donttest{ # Rstanarm ---------------------- # Simple regressions model <- rstanarm::stan_glm(Sepal.Length ~ Species, data = iris, chains = 2, refresh = 0 ) bayesian_as_frequentist(model) #> #> Call: #> stats::lm(formula = formula$conditional, data = data) #> #> Coefficients: #> (Intercept) Speciesversicolor Speciesvirginica #> 5.006 0.930 1.582 #> model <- rstanarm::stan_glm(vs ~ mpg, family = \"binomial\", data = mtcars, chains = 2, refresh = 0 ) bayesian_as_frequentist(model) #> #> Call: stats::glm(formula = formula$conditional, family = family, data = data) #> #> Coefficients: #> (Intercept) mpg #> -8.8331 0.4304 #> #> Degrees of Freedom: 31 Total (i.e. Null); 30 Residual #> Null Deviance:\t 43.86 #> Residual Deviance: 25.53 \tAIC: 29.53 # Mixed models model <- rstanarm::stan_glmer( Sepal.Length ~ Petal.Length + (1 | Species), data = iris, chains = 2, refresh = 0 ) bayesian_as_frequentist(model) #> Linear mixed model fit by REML ['lmerMod'] #> Formula: Sepal.Length ~ Petal.Length + (1 | Species) #> Data: data #> REML criterion at convergence: 119.793 #> Random effects: #> Groups Name Std.Dev. #> Species (Intercept) 1.0778 #> Residual 0.3381 #> Number of obs: 150, groups: Species, 3 #> Fixed Effects: #> (Intercept) Petal.Length #> 2.5045 0.8885 model <- rstanarm::stan_glmer(vs ~ mpg + (1 | cyl), family = \"binomial\", data = mtcars, chains = 2, refresh = 0 ) bayesian_as_frequentist(model) #> Generalized linear mixed model fit by maximum likelihood (Laplace #> Approximation) [glmerMod] #> Family: binomial ( logit ) #> Formula: vs ~ mpg + (1 | cyl) #> Data: data #> AIC BIC logLik deviance df.resid #> 31.1738 35.5710 -12.5869 25.1738 29 #> Random effects: #> Groups Name Std.Dev. #> cyl (Intercept) 1.925 #> Number of obs: 32, groups: cyl, 3 #> Fixed Effects: #> (Intercept) mpg #> -3.9227 0.1723 # }"},{"path":"https://easystats.github.io/bayestestR/reference/density_at.html","id":null,"dir":"Reference","previous_headings":"","what":"Density Probability at a Given Value — density_at","title":"Density Probability at a Given Value — density_at","text":"Compute the density value at a given point of a distribution (i.e., the value of the y axis at a given value x of the distribution).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/density_at.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Density Probability at a Given Value — density_at","text":"","code":"density_at(posterior, x, precision = 2^10, method = \"kernel\", ...)"},{"path":"https://easystats.github.io/bayestestR/reference/density_at.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Density Probability at a Given Value — density_at","text":"posterior: Vector representing a posterior distribution. x: The value of which to get the approximate probability. precision: Number of points of density data. See the n parameter in density. method: Density estimation method. Can be \"kernel\" (default), \"logspline\" or \"KernSmooth\". ...: Currently not used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/density_at.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Density Probability at a Given Value — density_at","text":"","code":"library(bayestestR) posterior <- distribution_normal(n = 10) density_at(posterior, 0) #> [1] 0.3206131 density_at(posterior, c(0, 1)) #> [1] 0.3206131 0.2374056"},
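A short sanity-check sketch (not from the rendered page): for a near-perfect normal sample, density_at() should approximate the theoretical density dnorm():

library(bayestestR)

posterior <- distribution_normal(n = 1000)
# Kernel estimate at 0 vs. the exact standard-normal density 1/sqrt(2*pi)
density_at(posterior, 0)
dnorm(0)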
{"path":"https://easystats.github.io/bayestestR/reference/describe_posterior.html","id":null,"dir":"Reference","previous_headings":"","what":"Describe Posterior Distributions — describe_posterior","title":"Describe Posterior Distributions — describe_posterior","text":"Compute indices relevant to describe and characterize the posterior distributions.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/describe_posterior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Describe Posterior Distributions — describe_posterior","text":"","code":"describe_posterior(posterior, ...) # S3 method for class 'numeric' describe_posterior( posterior, centrality = \"median\", dispersion = FALSE, ci = 0.95, ci_method = \"eti\", test = c(\"p_direction\", \"rope\"), rope_range = \"default\", rope_ci = 0.95, keep_iterations = FALSE, bf_prior = NULL, BF = 1, verbose = TRUE, ... ) # S3 method for class 'data.frame' describe_posterior( posterior, centrality = \"median\", dispersion = FALSE, ci = 0.95, ci_method = \"eti\", test = c(\"p_direction\", \"rope\"), rope_range = \"default\", rope_ci = 0.95, keep_iterations = FALSE, bf_prior = NULL, BF = 1, rvar_col = NULL, verbose = TRUE, ... ) # S3 method for class 'stanreg' describe_posterior( posterior, centrality = \"median\", dispersion = FALSE, ci = 0.95, ci_method = \"eti\", test = c(\"p_direction\", \"rope\"), rope_range = \"default\", rope_ci = 0.95, keep_iterations = FALSE, bf_prior = NULL, diagnostic = c(\"ESS\", \"Rhat\"), priors = FALSE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, BF = 1, verbose = TRUE, ... ) # S3 method for class 'brmsfit' describe_posterior( posterior, centrality = \"median\", dispersion = FALSE, ci = 0.95, ci_method = \"eti\", test = c(\"p_direction\", \"rope\"), rope_range = \"default\", rope_ci = 0.95, keep_iterations = FALSE, bf_prior = NULL, diagnostic = c(\"ESS\", \"Rhat\"), effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\", \"location\", \"distributional\", \"auxiliary\"), parameters = NULL, BF = 1, priors = FALSE, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/describe_posterior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Describe Posterior Distributions — describe_posterior","text":"posterior: A vector, data frame or model of posterior draws. bayestestR supports a wide range of models (see methods(\"describe_posterior\")); not all of those are documented in the 'Usage' section, because methods for other classes mostly resemble the arguments of the .numeric method. ...: Additional arguments passed to methods. centrality: The point-estimates (centrality indices) to compute. Character (vector) or list with one or more of these options: \"median\", \"mean\", \"MAP\" (see map_estimate()), \"trimmed\" (which is just mean(x, trim = threshold)), \"mode\" or \"all\". dispersion: Logical, if TRUE, computes indices of dispersion related to the estimate(s) (SD and MAD for mean and median, respectively). Dispersion is not available for \"MAP\" or \"mode\" centrality indices. ci: Value or vector of probability of the CI (between 0 and 1) to be estimated. Default to 0.95 (95%). ci_method: The type of index used for the Credible Interval. Can be \"ETI\" (default, see eti()), \"HDI\" (see hdi()), \"BCI\" (see bci()), \"SPI\" (see spi()), or \"SI\" (see si()). test: The indices of effect existence to compute. Character (vector) or list with one or more of these options: \"p_direction\" (or \"pd\"), \"rope\", \"p_map\", \"p_significance\" (or \"ps\"), \"p_rope\", \"equivalence_test\" (or \"equitest\"), \"bayesfactor\" (or \"bf\") or \"all\" to compute all tests. For each \"test\", the corresponding bayestestR function is called (e.g. rope() or p_direction()) and its results included in the summary output. rope_range: ROPE's lower and higher bounds. Should be a vector of two values (e.g., c(-0.1, 0.1)), \"default\" or a list of numeric vectors of the same length as the number of parameters. If \"default\", the bounds are set to x +- 0.1*SD(response). rope_ci: The Credible Interval (CI) probability, corresponding to the proportion of HDI, to use for the percentage in ROPE. keep_iterations: If TRUE, will keep all iterations (draws) of bootstrapped or Bayesian models. They will be added as additional columns named iter_1, iter_2, and so on. You can reshape them to a long format by running reshape_iterations(). bf_prior: Distribution representing a prior for the computation of Bayes factors / SI. Used if the input is a posterior, otherwise (in the case of models) ignored. BF: The amount of support required to be included in the support interval. verbose: Toggle off warnings. rvar_col: A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). diagnostic: Diagnostic metrics to compute. Character (vector) or list with one or more of these options: \"ESS\", \"Rhat\", \"MCSE\" or \"all\". priors: Add the prior used for each parameter. effects: Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component: Should results for all parameters, parameters for the conditional model or the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters: Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output.","code":""},
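A brief sketch (with illustrative values, not from the rendered page) of how the centrality, ci and test arguments compose to request only specific indices:

library(bayestestR)

set.seed(1)
x <- rnorm(1000, mean = 0.4)
# Only the median, an 89% HDI, and the probability of direction
describe_posterior(
  x,
  centrality = "median",
  ci = 0.89, ci_method = "HDI",
  test = "p_direction"
)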
{"path":"https://easystats.github.io/bayestestR/reference/describe_posterior.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Describe Posterior Distributions — describe_posterior","text":"One or more components of point estimates (like the posterior mean or median), intervals and tests can be omitted from the summary output by setting the related argument to NULL. For example, test = NULL and centrality = NULL would only return the HDI (or CI).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/describe_posterior.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Describe Posterior Distributions — describe_posterior","text":"Makowski, D., Ben-Shachar, M. S., Chen, S. H. A., and Lüdecke, D. (2019). Indices of Effect Existence and Significance in the Bayesian Framework. Frontiers in Psychology 2019;10:2767. doi:10.3389/fpsyg.2019.02767 . See also: Region of Practical Equivalence (ROPE), Bayes factors.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/describe_posterior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Describe Posterior Distributions — describe_posterior","text":"","code":"library(bayestestR) if (require(\"logspline\")) { x <- rnorm(1000) describe_posterior(x, verbose = FALSE) describe_posterior(x, centrality = \"all\", dispersion = TRUE, test = \"all\", verbose = FALSE ) describe_posterior(x, ci = c(0.80, 0.90), verbose = FALSE) df <- data.frame(replicate(4, rnorm(100))) describe_posterior(df, verbose = FALSE) describe_posterior( df, centrality = \"all\", dispersion = TRUE, test = \"all\", verbose = FALSE ) describe_posterior(df, ci = c(0.80, 0.90), verbose = FALSE) df <- data.frame(replicate(4, rnorm(20))) head(reshape_iterations( describe_posterior(df, keep_iterations = TRUE, verbose = FALSE) )) } #> Summary of Posterior Distribution #> #> Parameter | Median | 95% CI | pd | ROPE | % in ROPE #> ----------------------------------------------------------------------- #> X1 | -0.21 | [-1.70, 1.42] | 60.00% | [-0.10, 0.10] | 5.56% #> X2 | -0.21 | [-2.38, 2.41] | 55.00% | [-0.10, 0.10] | 11.11% #> X3 | 0.22 | [-1.96, 2.61] | 55.00% | [-0.10, 0.10] | 5.56% #> X4 | -0.20 | [-1.58, 0.61] | 65.00% | [-0.10, 0.10] | 16.67% #> X1 | -0.21 | [-1.70, 1.42] | 60.00% | [-0.10, 0.10] | 5.56% #> X2 | -0.21 | [-2.38, 2.41] | 55.00% | [-0.10, 0.10] | 11.11% #> #> Parameter | iter_index | iter_group | iter_value #> ------------------------------------------------ #> X1 | 1.00 | 1.00 | 0.51 #> X2 | 2.00 | 1.00 | -0.36 #> X3 | 3.00 | 1.00 | 1.32 #> X4 | 4.00 | 1.00 | 0.34 #> X1 | 1.00 | 2.00 | -0.14 #> X2 | 2.00 | 2.00 | -0.50 # \\donttest{ # rstanarm models # ----------------------------------------------- if (require(\"rstanarm\") && require(\"emmeans\")) { model <- suppressWarnings( stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) describe_posterior(model) describe_posterior(model, centrality = \"all\", dispersion = TRUE, test = \"all\") describe_posterior(model, ci = c(0.80, 0.90)) describe_posterior(model, rope_range = list(c(-10, 5), c(-0.2, 0.2), \"default\")) # emmeans estimates # ----------------------------------------------- describe_posterior(emtrends(model, ~1, \"wt\")) } #> Warning: Bayes factors might not be precise. #> For precise Bayes factors, sampling at least 40,000 posterior samples is #> recommended. #> Summary of Posterior Distribution #> #> X1 | Median | 95% CI | pd | ROPE | % in ROPE #> -------------------------------------------------------------------- #> overall | -5.37 | [-6.57, -4.25] | 100% | [-0.10, 0.10] | 0% # BayesFactor objects # ----------------------------------------------- if (require(\"BayesFactor\")) { bf <- ttestBF(x = rnorm(100, 1, 1)) describe_posterior(bf) describe_posterior(bf, centrality = \"all\", dispersion = TRUE, test = \"all\") describe_posterior(bf, ci = c(0.80, 0.90)) } #> Summary of Posterior Distribution #> #> Parameter | Median | 80% CI | 90% CI | pd | ROPE #> ------------------------------------------------------------------------ #> Difference | 0.97 | [0.84, 1.09] | [0.81, 1.12] | 100% | [-0.09, 0.09] #> #> Parameter | % in ROPE | BF | Prior #> ------------------------------------------------------ #> Difference | 0% | 1.27e+15 | Cauchy (0 +- 0.71) # }"},{"path":"https://easystats.github.io/bayestestR/reference/describe_prior.html","id":null,"dir":"Reference","previous_headings":"","what":"Describe Priors — describe_prior","title":"Describe Priors — describe_prior","text":"Returns a summary of the priors used in the model.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/describe_prior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Describe Priors — describe_prior","text":"","code":"describe_prior(model, ...) # S3 method for class 'brmsfit' describe_prior( model, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\", \"location\", \"distributional\", \"auxiliary\"), parameters = NULL, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/describe_prior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Describe Priors — describe_prior","text":"model: A Bayesian model. ...: Currently not used. effects: Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component: Should results for all parameters, parameters for the conditional model or the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters: Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/describe_prior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Describe Priors — describe_prior","text":"","code":"# \\donttest{ library(bayestestR) # rstanarm models # ----------------------------------------------- if (require(\"rstanarm\")) { model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars) describe_prior(model) } #> [Stan sampling progress for chains 1-4 omitted] #> Parameter Prior_Distribution Prior_Location Prior_Scale #> 1 (Intercept) normal 20.09062 15.067370 #> 2 wt normal 0.00000 15.399106 #> 3 cyl normal 0.00000 8.436748 # brms models # ----------------------------------------------- if (require(\"brms\")) { model <- brms::brm(mpg ~ wt + cyl, data = mtcars) describe_prior(model) } #> Compiling Stan program... #> Start sampling #> [Stan sampling progress for chains 1-4 omitted] #> Parameter Prior_Distribution Prior_Location Prior_Scale Prior_df #> 1 b_Intercept student_t 19.2 5.4 3 #> 2 b_wt uniform NA NA NA #> 3 b_cyl uniform NA NA NA #> 4 sigma student_t 0.0 5.4 3 # BayesFactor objects # ----------------------------------------------- if (require(\"BayesFactor\")) { bf <- ttestBF(x = rnorm(100, 1, 1)) describe_prior(bf) } #> Parameter Prior_Distribution Prior_Location Prior_Scale #> 1 Difference cauchy 0 0.7071068 # }"},
{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_draws.html","id":null,"dir":"Reference","previous_headings":"","what":"Diagnostic values for each iteration — diagnostic_draws","title":"Diagnostic values for each iteration — diagnostic_draws","text":"Returns the accumulated log-posterior, the average Metropolis acceptance rate, divergent transitions, and whether each iteration hit the maximum treedepth rather than terminating its evolution normally.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_draws.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Diagnostic values for each iteration — diagnostic_draws","text":"","code":"diagnostic_draws(posterior, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_draws.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Diagnostic values for each iteration — diagnostic_draws","text":"posterior: A stanreg, stanfit, brmsfit, or blavaan object. ...: Currently not used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_draws.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Diagnostic values for each iteration — diagnostic_draws","text":"","code":"# \\donttest{ set.seed(333) if (require(\"brms\", quietly = TRUE)) { model <- suppressWarnings(brm(mpg ~ wt * cyl * vs, data = mtcars, iter = 100, control = list(adapt_delta = 0.80), refresh = 0 )) diagnostic_draws(model) } #> Compiling Stan program... #> Start sampling #> Chain Iteration Acceptance_Rate Step_Size Tree_Depth n_Leapfrog Divergent #> 1 1 1 0.8635300 0.03468417 6 63 0 #> 2 1 2 0.9136471 0.03468417 7 255 0 #> [remaining 198 rows, one per iteration and chain, omitted] #> Energy LogPosterior #> 1 81.68979 -78.53575 #> 2 81.30190 -79.44012 #> [remaining 198 rows omitted] # }"},
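Because diagnostic_draws() returns one row per chain and iteration, a common follow-up is to aggregate; a minimal sketch (assuming the brmsfit model from the example above and the column names shown in its output):

# Mean acceptance rate and divergence rate per chain
dd <- diagnostic_draws(model)
aggregate(cbind(Acceptance_Rate, Divergent) ~ Chain, data = dd, FUN = mean)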
{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_posterior.html","id":null,"dir":"Reference","previous_headings":"","what":"Posteriors Sampling Diagnostic — diagnostic_posterior","title":"Posteriors Sampling Diagnostic — diagnostic_posterior","text":"Extract diagnostic metrics (Effective Sample Size (ESS), Rhat and Monte Carlo Standard Error MCSE).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_posterior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Posteriors Sampling Diagnostic — diagnostic_posterior","text":"","code":"diagnostic_posterior(posterior, ...) # Default S3 method diagnostic_posterior(posterior, diagnostic = c(\"ESS\", \"Rhat\"), ...) # S3 method for class 'stanreg' diagnostic_posterior( posterior, diagnostic = \"all\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, ... ) # S3 method for class 'brmsfit' diagnostic_posterior( posterior, diagnostic = \"all\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_posterior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Posteriors Sampling Diagnostic — diagnostic_posterior","text":"posterior: A stanreg, stanfit, brmsfit, or blavaan object. ...: Currently not used. diagnostic: Diagnostic metrics to compute. Character (vector) or list with one or more of these options: \"ESS\", \"Rhat\", \"MCSE\" or \"all\". effects: Should variables for fixed effects (\"fixed\"), random effects (\"random\") or both (\"all\") be returned? Only applies to mixed models. May be abbreviated. component: Which type of parameters to return: parameters for the conditional model, the zero-inflated part of the model, the dispersion term, the instrumental variables or marginal effects? Applies to models with zero-inflated and/or dispersion formula, to models with instrumental variables (so-called fixed-effects regressions), or to models with marginal effects (mfx). See details in the section Model Components. May be abbreviated. Note that the conditional component also refers to the count or mean component - names may differ, depending on the modeling package. There are three convenient shortcuts (not applicable to all model classes): component = \"all\" returns all possible parameters. If component = \"location\", location parameters such as conditional, zero_inflated, smooth_terms, or instruments are returned (everything that is a fixed or random effect - depending on the effects argument - but no auxiliary parameters). For component = \"distributional\" (or \"auxiliary\"), components like sigma, dispersion, beta or precision (and other auxiliary parameters) are returned. parameters: Regular expression pattern that describes the parameters that should be returned.","code":""},
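A small usage sketch (assuming a fitted stanreg or brmsfit model object): request only some diagnostics and flag parameters against the usual thresholds discussed below.

diag <- diagnostic_posterior(model, diagnostic = c("ESS", "Rhat"))
# Parameters with potentially unstable estimates
diag[diag$ESS < 1000 | diag$Rhat > 1.01, ]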
{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_posterior.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Posteriors Sampling Diagnostic — diagnostic_posterior","text":"The Effective Sample Size (ESS) should be as large as possible, although for most applications an effective sample size greater than 1,000 is sufficient for stable estimates (Bürkner, 2017). The ESS corresponds to the number of independent samples with the same estimation power as the N autocorrelated samples. It is a measure of \"how much independent information there is in autocorrelated chains\" (Kruschke 2015, p182-3). Rhat should be as close to 1 as possible. It should not be larger than 1.1 (Gelman and Rubin, 1992) or 1.01 (Vehtari et al., 2019). The split Rhat statistic quantifies the consistency of an ensemble of Markov chains. The Monte Carlo Standard Error (MCSE) is another measure of the accuracy of the chains. It is defined as the standard deviation of the chains divided by their effective sample size (the formula for mcse() is from Kruschke 2015, p. 187). The MCSE \"provides a quantitative suggestion of how big the estimation noise is\".","code":""},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_posterior.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Posteriors Sampling Diagnostic — diagnostic_posterior","text":"Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7(4), 457-472. Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., and Bürkner, P. C. (2019). Rank-normalization, folding, and localization: An improved Rhat for assessing convergence of MCMC. arXiv preprint arXiv:1903.08008. Kruschke, J. (2014). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. Academic Press.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_posterior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Posteriors Sampling Diagnostic — diagnostic_posterior","text":"","code":"# \\donttest{ # rstanarm models # ----------------------------------------------- model <- suppressWarnings( rstanarm::stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) diagnostic_posterior(model) #> Parameter Rhat ESS MCSE #> 1 (Intercept) 0.9980336 182.6025 0.36283152 #> 2 gear 0.9917174 206.3058 0.06519599 #> 3 wt 0.9978902 186.7773 0.04770867 # brms models # ----------------------------------------------- model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress for chains 1-4 omitted] diagnostic_posterior(model) #> Parameter Rhat ESS MCSE #> 1 b_Intercept 1.000297 4618.359 0.02593799 #> 2 b_cyl 1.003470 1868.916 0.01002953 #> 3 b_wt 1.003250 1697.213 0.01926801 # }"},{"path":"https://easystats.github.io/bayestestR/reference/disgust.html","id":null,"dir":"Reference","previous_headings":"","what":"Moral Disgust Judgment — disgust","title":"Moral Disgust Judgment — disgust","text":"A sample (simulated) dataset, used in tests and some examples.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/disgust.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Moral Disgust Judgment — disgust","text":"A data frame with 500 rows and 2 variables: score: Score on the questionnaire, which ranges from 0 to 50, with higher scores representing harsher moral judgment. condition: one of three conditions, differing in the odor present in the room: a pleasant scent associated with cleanliness (lemon), a disgusting scent (sulfur), or a control condition in which no unusual odor is present.","code":"data(\"disgust\") head(disgust, n = 5) #> score condition #> 1 13 control #> 2 26 control #> 3 30 control #> 4 23 control #> 5 34 control"},{"path":"https://easystats.github.io/bayestestR/reference/disgust.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Moral Disgust Judgment — disgust","text":"Richard D. Morey","code":""},
{"path":"https://easystats.github.io/bayestestR/reference/distribution.html","id":null,"dir":"Reference","previous_headings":"","what":"Empirical Distributions — distribution","title":"Empirical Distributions — distribution","text":"Generate a sequence of n-quantiles, i.e., a sample of size n of a near-perfect distribution.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/distribution.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Empirical Distributions — distribution","text":"","code":"distribution(type = \"normal\", ...) distribution_custom(n, type = \"norm\", ..., random = FALSE) distribution_beta(n, shape1, shape2, ncp = 0, random = FALSE, ...) distribution_binomial(n, size = 1, prob = 0.5, random = FALSE, ...) distribution_binom(n, size = 1, prob = 0.5, random = FALSE, ...) distribution_cauchy(n, location = 0, scale = 1, random = FALSE, ...) distribution_chisquared(n, df, ncp = 0, random = FALSE, ...) distribution_chisq(n, df, ncp = 0, random = FALSE, ...) distribution_gamma(n, shape, scale = 1, random = FALSE, ...) distribution_mixture_normal(n, mean = c(-3, 3), sd = 1, random = FALSE, ...) distribution_normal(n, mean = 0, sd = 1, random = FALSE, ...) distribution_gaussian(n, mean = 0, sd = 1, random = FALSE, ...) distribution_nbinom(n, size, prob, mu, phi, random = FALSE, ...) distribution_poisson(n, lambda = 1, random = FALSE, ...) distribution_student(n, df, ncp, random = FALSE, ...) distribution_t(n, df, ncp, random = FALSE, ...) distribution_student_t(n, df, ncp, random = FALSE, ...) distribution_tweedie(n, xi = NULL, mu, phi, power = NULL, random = FALSE, ...) distribution_uniform(n, min = 0, max = 1, random = FALSE, ...) rnorm_perfect(n, mean = 0, sd = 1)"},{"path":"https://easystats.github.io/bayestestR/reference/distribution.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Empirical Distributions — distribution","text":"type: Can be any of the names from base R's Distributions, like \"cauchy\", \"pois\" or \"beta\". ...: Arguments passed to or from other methods. n: the number of observations. random: Generate near-perfect or random (simple wrappers around the base R r* functions) distributions. shape1, shape2: non-negative parameters of the Beta distribution. ncp: non-centrality parameter. size: number of trials (zero or more). prob: probability of success on each trial. location, scale: location and scale parameters. df: degrees of freedom (non-negative, but can be non-integer). shape: Shape parameter. mean: vector of means. sd: vector of standard deviations. mu: the mean. phi: Corresponding to glmmTMB's implementation of the nbinom distribution, where size=mu/phi. lambda: vector of (non-negative) means. xi: For tweedie distributions, the value of xi such that the variance is var(Y) = phi * mu^xi. power: Alias for xi. min, max: lower and upper limits of the distribution. Must be finite.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/distribution.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Empirical Distributions — distribution","text":"When random = FALSE, the function will return q*(ppoints(n), ...).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/distribution.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Empirical Distributions — distribution","text":"","code":"library(bayestestR) x <- distribution(n = 10) plot(density(x)) x <- distribution(type = \"gamma\", n = 100, shape = 2) plot(density(x))"},{"path":"https://easystats.github.io/bayestestR/reference/dot-extract_priors_rstanarm.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract and Returns the priors formatted for rstanarm — .extract_priors_rstanarm","title":"Extract and Returns the priors formatted for rstanarm — .extract_priors_rstanarm","text":"Extract and return the priors formatted for rstanarm.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/dot-extract_priors_rstanarm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract and Returns the priors formatted for rstanarm — .extract_priors_rstanarm","text":"","code":".extract_priors_rstanarm(model, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/dot-prior_new_location.html","id":null,"dir":"Reference","previous_headings":"","what":"Set a new location for a prior — .prior_new_location","title":"Set a new location for a prior — .prior_new_location","text":"Set a new location for a prior.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/dot-prior_new_location.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set a new location for a prior — .prior_new_location","text":"","code":".prior_new_location(prior, sign, magnitude = 10)"},
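A short sketch contrasting the two modes (per the Details above, the near-perfect variant is just the corresponding q* function evaluated at ppoints(n)):

library(bayestestR)

x_perfect <- distribution_normal(n = 100) # deterministic n-quantiles
x_random <- distribution_normal(n = 100, random = TRUE) # plain rnorm() draws
# The perfect sample reproduces qnorm(ppoints(100))
all.equal(x_perfect, qnorm(ppoints(100)))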
columns","code":""},{"path":"https://easystats.github.io/bayestestR/reference/dot-select_nums.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"select numerics columns — .select_nums","text":"","code":".select_nums(x)"},{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":null,"dir":"Reference","previous_headings":"","what":"Effective Sample Size (ESS) — effective_sample","title":"Effective Sample Size (ESS) — effective_sample","text":"function returns effective sample size (ESS).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Effective Sample Size (ESS) — effective_sample","text":"","code":"effective_sample(model, ...) # S3 method for class 'brmsfit' effective_sample( model, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ... ) # S3 method for class 'stanreg' effective_sample( model, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Effective Sample Size (ESS) — effective_sample","text":"model stanreg, stanfit, brmsfit, blavaan, MCMCglmm object. ... Currently used. effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Effective Sample Size (ESS) — effective_sample","text":"data frame two columns: Parameter name effective sample size (ESS).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Effective Sample Size (ESS) — effective_sample","text":"Effective Sample (ESS) large possible, altough applications, effective sample size greater 1,000 sufficient stable estimates (Bürkner, 2017). ESS corresponds number independent samples estimation power N autocorrelated samples. measure “much independent information autocorrelated chains” (Kruschke 2015, p182-3).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Effective Sample Size (ESS) — effective_sample","text":"Kruschke, J. (2014). Bayesian data analysis: tutorial R, JAGS, Stan. Academic Press. Bürkner, P. C. (2017). brms: R package Bayesian multilevel models using Stan. 
{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Effective Sample Size (ESS) — effective_sample","text":"","code":"# \\donttest{ library(rstanarm) model <- suppressWarnings( stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) effective_sample(model) #> Parameter ESS #> 1 (Intercept) 172 #> 2 wt 181 #> 3 gear 175 # }"},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":null,"dir":"Reference","previous_headings":"","what":"Test for Practical Equivalence — equivalence_test","title":"Test for Practical Equivalence — equivalence_test","text":"Perform a Test for Practical Equivalence for Bayesian and frequentist models.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Test for Practical Equivalence — equivalence_test","text":"","code":"equivalence_test(x, ...) # Default S3 method equivalence_test(x, ...) # S3 method for class 'data.frame' equivalence_test( x, range = \"default\", ci = 0.95, rvar_col = NULL, verbose = TRUE, ... ) # S3 method for class 'stanreg' equivalence_test( x, range = \"default\", ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' equivalence_test( x, range = \"default\", ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Test for Practical Equivalence — equivalence_test","text":"x A vector representing a posterior distribution. Can also be a stanreg or brmsfit model. ... Currently not used. range The ROPE's lower and higher bounds. Should be \"default\" or, depending on the number of outcome variables, a vector or a list. For models with one response, range can be: a vector of length two (e.g., c(-0.1, 0.1)), a list of numeric vectors of the same length as the number of parameters (see 'Examples'), or a list of named numeric vectors, where names correspond to parameter names. In this case, all parameters that have no matching name in range will be set to \"default\". For multivariate models, range should be a list with another list (one for each response variable) of numeric vectors. Vector names should correspond to the names of the response variables. If \"default\" and the input is a vector, the range is set to c(-0.1, 0.1). If \"default\" and the input is a Bayesian model, rope_range() is used. See 'Examples'. ci The Credible Interval (CI) probability, corresponding to the proportion of the HDI, to use for the percentage in ROPE. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). verbose Toggle off warnings. effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, only parameters for the conditional model, or only the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned.
Use parameters to select specific parameters for the output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Test for Practical Equivalence — equivalence_test","text":"A data frame with the following columns: Parameter The model parameter(s), if x was a model-object. If x was a vector, this column is missing. CI The probability of the HDI. ROPE_low, ROPE_high The limits of the ROPE. These values are identical for all parameters. ROPE_Percentage The proportion of the HDI that lies inside the ROPE. ROPE_Equivalence The \"test result\", as character. Either \"rejected\", \"accepted\" or \"undecided\". HDI_low, HDI_high The lower and upper HDI limits for the parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Test for Practical Equivalence — equivalence_test","text":"Documentation is accessible for: Bayesian models, Frequentist models. For Bayesian models, the Test for Practical Equivalence is based on the \"HDI+ROPE decision rule\" (Kruschke, 2014, 2018) to check whether parameter values should be accepted or rejected against an explicitly formulated \"null hypothesis\" (i.e., a ROPE). In other words, it checks the percentage of the 89% HDI that lies inside the null region (the ROPE). If this percentage is sufficiently low, the null hypothesis is rejected. If this percentage is sufficiently high, the null hypothesis is accepted. Using the ROPE and the HDI, Kruschke (2018) suggests using the percentage of the 95% (or 89%, considered more stable) HDI that falls within the ROPE as a decision rule. If the HDI is completely outside the ROPE, the \"null hypothesis\" for this parameter is \"rejected\". If the ROPE completely covers the HDI, i.e., all most credible values of a parameter are inside the region of practical equivalence, the null hypothesis is accepted. Else, it is undecided whether to accept or reject the null hypothesis. If the full ROPE is used (i.e., 100% of the HDI), the null hypothesis is rejected or accepted if the percentage of the posterior within the ROPE is smaller than 2.5% or greater than 97.5%. Desirable results are low proportions inside the ROPE (the closer to zero the better). Some attention is required for finding suitable values for the ROPE limits (argument range). See 'Details' in rope_range() for further information. Multicollinearity - Non-independent covariates: When parameters show strong correlations, i.e. when covariates are not independent, the joint parameter distributions may shift towards or away from the ROPE. In such cases, the test for practical equivalence may have inappropriate results. Collinearity invalidates ROPE and hypothesis testing based on univariate marginals, as the probabilities are conditional on independence. Most problematic are results of \"undecided\" parameters, which may either move towards \"rejection\" or away from it (Kruschke 2014, 340f). equivalence_test() performs a simple check for pairwise correlations between parameters, but as there can be collinearity between more than two variables, a first step to check the assumptions of this hypothesis testing is to look at different pair plots. An even more sophisticated check is the projection predictive variable selection (Piironen and Vehtari 2017).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Test for Practical Equivalence — equivalence_test","text":"There is a print()-method with a digits-argument to control the amount of digits in the output, and a plot()-method to visualize the results of the equivalence-test (for models only).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Test for Practical Equivalence — equivalence_test","text":"Kruschke, J. K. (2018). Rejecting or accepting parameter values in Bayesian estimation. Advances in Methods and Practices in Psychological Science, 1(2), 270-280. doi:10.1177/2515245918771304 Kruschke, J. K. (2014). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. Academic Press. Piironen, J., & Vehtari, A. (2017). Comparison of Bayesian predictive methods for model selection. Statistics and Computing, 27(3), 711–735. doi:10.1007/s11222-016-9649-y","code":""},
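To clarify the HDI+ROPE decision rule described in the 'Details' entry, here is a minimal sketch (a hypothetical helper, not the package's implementation) for a single posterior vector:

# Share of the HDI inside the ROPE: 0 % -> "Rejected", 100 % -> "Accepted",
# anything in between -> "Undecided".
library(bayestestR)
rope_decision <- function(posterior, rope = c(-0.1, 0.1), ci = 0.95) {
  h <- hdi(posterior, ci = ci)
  in_hdi <- posterior[posterior >= h$CI_low & posterior <= h$CI_high]
  p_in_rope <- mean(in_hdi >= rope[1] & in_hdi <= rope[2])
  if (p_in_rope == 0) "Rejected" else if (p_in_rope == 1) "Accepted" else "Undecided"
}
set.seed(1)
rope_decision(rnorm(1000, 0, 0.01)) # "Accepted" (HDI fully inside the ROPE)
rope_decision(rnorm(1000, 1, 0.10)) # "Rejected" (HDI fully outside the ROPE)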
{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Test for Practical Equivalence — equivalence_test","text":"","code":"library(bayestestR) equivalence_test(x = rnorm(1000, 0, 0.01), range = c(-0.1, 0.1)) #> # Test for Practical Equivalence #> #> ROPE: [-0.10 0.10] #> #> H0 | inside ROPE | 95% HDI #> -------------------------------------- #> Accepted | 100.00 % | [-0.02, 0.02] #> #> equivalence_test(x = rnorm(1000, 0, 1), range = c(-0.1, 0.1)) #> # Test for Practical Equivalence #> #> ROPE: [-0.10 0.10] #> #> H0 | inside ROPE | 95% HDI #> --------------------------------------- #> Undecided | 8.11 % | [-2.00, 1.97] #> #> equivalence_test(x = rnorm(1000, 1, 0.01), range = c(-0.1, 0.1)) #> # Test for Practical Equivalence #> #> ROPE: [-0.10 0.10] #> #> H0 | inside ROPE | 95% HDI #> ------------------------------------- #> Rejected | 0.00 % | [0.98, 1.02] #> #> equivalence_test(x = rnorm(1000, 1, 1), ci = c(.50, .99)) #> # Test for Practical Equivalence #> #> ROPE: [-0.10 0.10] #> #> H0 | inside ROPE | 50% HDI #> ------------------------------------- #> Rejected | 0.00 % | [0.31, 1.64] #> #> #> H0 | inside ROPE | 99% HDI #> --------------------------------------- #> Undecided | 5.05 % | [-1.58, 3.65] #> #> # print more digits test <- equivalence_test(x = rnorm(1000, 1, 1), ci = c(.50, .99)) print(test, digits = 4) #> # Test for Practical Equivalence #> #> ROPE: [-0.1000 0.1000] #> #> H0 | inside ROPE | 50% HDI #> ----------------------------------------- #> Rejected | 0.0000 % | [0.3115, 1.7148] #> #> #> H0 | inside ROPE | 99% HDI #> ------------------------------------------- #> Undecided | 4.9495 % | [-1.7070, 3.7015] #> #> # \\donttest{ model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars) #> [Stan sampling progress and timing for chains 1-4 omitted]
equivalence_test(model) #> Possible multicollinearity between cyl and wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?equivalence_test'. #> # Test for Practical Equivalence #> #> ROPE: [-0.60 0.60] #> #> Parameter | H0 | inside ROPE | 95% HDI #> ----------------------------------------------------- #> (Intercept) | Rejected | 0.00 % | [36.21, 43.06] #> wt | Rejected | 0.00 % | [-4.74, -1.62] #> cyl | Rejected | 0.00 % | [-2.36, -0.70] #> #> # multiple ROPE ranges - asymmetric, symmetric, default equivalence_test(model, range = list(c(10, 40), c(-5, -4), \"default\")) #> Possible multicollinearity between cyl and wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?equivalence_test'. #> # Test for Practical Equivalence #> #> Parameter | H0 | inside ROPE | 95% HDI | ROPE #> ----------------------------------------------------------------------- #> (Intercept) | Undecided | 58.39 % | [36.21, 43.06] | [10.00, 40.00] #> wt | Undecided | 12.05 % | [-4.74, -1.62] | [-5.00, -4.00] #> cyl | Rejected | 0.00 % | [-2.36, -0.70] | [-0.10, 0.10] #> #> # named ROPE ranges equivalence_test(model, range = list(wt = c(-5, -4), `(Intercept)` = c(10, 40))) #> Possible multicollinearity between cyl and wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?equivalence_test'. #> # Test for Practical Equivalence #> #> Parameter | H0 | inside ROPE | 95% HDI | ROPE #> ----------------------------------------------------------------------- #> (Intercept) | Undecided | 58.39 % | [36.21, 43.06] | [10.00, 40.00] #> wt | Undecided | 12.05 % | [-4.74, -1.62] | [-5.00, -4.00] #> cyl | Rejected | 0.00 % | [-2.36, -0.70] | [-0.10, 0.10] #> #> # plot result test <- equivalence_test(model) #> Possible multicollinearity between cyl and wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?equivalence_test'. plot(test) #> Picking joint bandwidth of 0.0895 equivalence_test(emmeans::emtrends(model, ~1, \"wt\", data = mtcars)) #> # Test for Practical Equivalence #> #> ROPE: [-0.10 0.10] #> #> X1 | H0 | inside ROPE | 95% HDI #> ------------------------------------------------- #> overall | Rejected | 0.00 % | [-4.74, -1.62] #> #> model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress and timing for chains 1-4 omitted]
equivalence_test(model) #> Possible multicollinearity between b_cyl and b_wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?equivalence_test'.
#> # Test for Practical Equivalence #> #> ROPE: [-0.60 0.60] #> #> Parameter | H0 | inside ROPE | 95% HDI #> --------------------------------------------------- #> Intercept | Rejected | 0.00 % | [36.19, 43.16] #> wt | Rejected | 0.00 % | [-4.71, -1.64] #> cyl | Rejected | 0.00 % | [-2.36, -0.68] #> #> bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1)) # equivalence_test(bf) # }"},{"path":"https://easystats.github.io/bayestestR/reference/estimate_density.html","id":null,"dir":"Reference","previous_headings":"","what":"Density Estimation — estimate_density","title":"Density Estimation — estimate_density","text":"This function is a wrapper over different methods of density estimation. By default, it uses the base R density function, but by default uses a different smoothing bandwidth (\"SJ\") than the legacy default implemented in the base R density function (\"nrd0\"). However, Deng and Wickham suggest that method = \"KernSmooth\" is the fastest and the most accurate.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/estimate_density.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Density Estimation — estimate_density","text":"","code":"estimate_density(x, ...) # S3 method for class 'data.frame' estimate_density( x, method = \"kernel\", precision = 2^10, extend = FALSE, extend_scale = 0.1, bw = \"SJ\", ci = NULL, select = NULL, by = NULL, at = NULL, rvar_col = NULL, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/estimate_density.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Density Estimation — estimate_density","text":"x A vector representing a posterior distribution, or a data frame of such vectors. Can also be a Bayesian model. bayestestR supports a wide range of models (see, for example, methods(\"hdi\")); not all of them are documented in the 'Usage' section, because methods for other classes mostly resemble the arguments of the .numeric or .data.frame methods. ... Currently not used. method Density estimation method. Can be \"kernel\" (default), \"logspline\" or \"KernSmooth\". precision Number of points of density data. See the n parameter in density. extend Extend the range of the x axis by a factor of extend_scale. extend_scale Ratio of range by which to extend the x axis. A value of 0.1 means that the x axis is extended by 1/10 of the range of the data. bw See the eponymous argument in density. Here, the default has been changed to \"SJ\", which is recommended. ci The confidence interval threshold. Only used when method = \"kernel\". This feature is experimental, use with caution. select Character vector of column names. If NULL (the default), all numeric variables are selected. Other arguments from datawizard::extract_column_names() (such as exclude) can also be used. by Optional character vector. If not NULL and the input is a data frame, density estimation is performed for each group (subsets) indicated by by. See examples. at Deprecated in favour of by. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction().","code":""},{"path":"https://easystats.github.io/bayestestR/reference/estimate_density.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Density Estimation — estimate_density","text":"There is also a plot()-method implemented in the see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/estimate_density.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Density Estimation — estimate_density","text":"Deng, H., & Wickham, H. (2011). Density estimation in R. Electronic publication.","code":""},
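The description above turns on the choice of smoothing bandwidth; a small base-R comparison (illustrative only, not from the package docs) of the legacy "nrd0" rule against the "SJ" rule that estimate_density() uses by default:

# The two selectors generally pick different bandwidths for the same data,
# which changes how smooth the resulting density curve looks.
set.seed(1)
x <- rnorm(250, mean = 1)
bw.nrd0(x) # legacy default used by density()
bw.SJ(x)   # Sheather-Jones rule, the default in estimate_density()
plot(density(x, bw = "SJ"), main = "Bandwidth: SJ (solid) vs nrd0 (dashed)")
lines(density(x, bw = "nrd0"), lty = 2)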
{"path":"https://easystats.github.io/bayestestR/reference/estimate_density.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Density Estimation — estimate_density","text":"","code":"library(bayestestR) set.seed(1) x <- rnorm(250, mean = 1) # Basic usage density_kernel <- estimate_density(x) # default method is \"kernel\" hist(x, prob = TRUE) lines(density_kernel$x, density_kernel$y, col = \"black\", lwd = 2) lines(density_kernel$x, density_kernel$CI_low, col = \"gray\", lty = 2) lines(density_kernel$x, density_kernel$CI_high, col = \"gray\", lty = 2) legend(\"topright\", legend = c(\"Estimate\", \"95% CI\"), col = c(\"black\", \"gray\"), lwd = 2, lty = c(1, 2) ) # Other Methods density_logspline <- estimate_density(x, method = \"logspline\") density_KernSmooth <- estimate_density(x, method = \"KernSmooth\") density_mixture <- estimate_density(x, method = \"mixture\") hist(x, prob = TRUE) lines(density_kernel$x, density_kernel$y, col = \"black\", lwd = 2) lines(density_logspline$x, density_logspline$y, col = \"red\", lwd = 2) lines(density_KernSmooth$x, density_KernSmooth$y, col = \"blue\", lwd = 2) lines(density_mixture$x, density_mixture$y, col = \"green\", lwd = 2) # Extension density_extended <- estimate_density(x, extend = TRUE) density_default <- estimate_density(x, extend = FALSE) hist(x, prob = TRUE) lines(density_extended$x, density_extended$y, col = \"red\", lwd = 3) lines(density_default$x, density_default$y, col = \"black\", lwd = 3) # Multiple columns head(estimate_density(iris)) #> Parameter x y #> 1 Sepal.Length 4.300000 0.09643086 #> 2 Sepal.Length 4.303519 0.09759152 #> 3 Sepal.Length 4.307038 0.09875679 #> 4 Sepal.Length 4.310557 0.09993469 #> 5 Sepal.Length 4.314076 0.10111692 #> 6 Sepal.Length 4.317595 0.10230788 head(estimate_density(iris, select = \"Sepal.Width\")) #> Parameter x y #> 1 Sepal.Width 2.000000 0.04647877 #> 2 Sepal.Width 2.002346 0.04729167 #> 3 Sepal.Width 2.004692 0.04811925 #> 4 Sepal.Width 2.007038 0.04895638 #> 5 Sepal.Width 2.009384 0.04980346 #> 6 Sepal.Width 2.011730 0.05066768 # Grouped data head(estimate_density(iris, by = \"Species\")) #> Parameter x y Species #> 1 Sepal.Length 4.300000 0.2354858 setosa #> 2 Sepal.Length 4.301466 0.2374750 setosa #> 3 Sepal.Length 4.302933 0.2394634 setosa #> 4 Sepal.Length 4.304399 0.2414506 setosa #> 5 Sepal.Length 4.305865 0.2434373 setosa #> 6 Sepal.Length 4.307331 0.2454216 setosa head(estimate_density(iris$Petal.Width, by = iris$Species)) #> x y Group #> 1 0.1000000 9.011849 setosa #> 2 0.1004888 8.955321 setosa #> 3 0.1009775 8.792006 setosa #> 4 0.1014663 8.527789 setosa #> 5 0.1019550 8.171922 setosa #> 6 0.1024438 7.736494 setosa # \\donttest{ # rstanarm models # ----------------------------------------------- library(rstanarm) model <- suppressWarnings( stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) head(estimate_density(model)) #> Parameter x y #> 1 (Intercept) 24.19242 0.002028542 #> 2 (Intercept) 24.22067 0.002051099 #> 3 (Intercept) 24.24892 0.002073742 #> 4 (Intercept) 24.27717 0.002096471 #> 5 (Intercept) 24.30542 0.002119371 #> 6 (Intercept) 24.33367 0.002142358 library(emmeans) head(estimate_density(emtrends(model, ~1, \"wt\", data = mtcars))) #> X1 x y #> 1 overall -7.810281 0.01753665 #> 2 overall -7.806283 0.01762645 #> 3 overall -7.802285 0.01771463 #> 4 overall -7.798287 0.01780030 #> 5 overall -7.794289 0.01788463 #> 6 overall -7.790292 0.01796733 # brms models #
----------------------------------------------- library(brms) model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress and timing for chains 1-4 omitted]
estimate_density(model) #> Parameter x y #> 1 b_Intercept 31.677511642 2.750315e-04 #> 2 b_Intercept 31.692604903 2.747610e-04 #> 3 b_Intercept 31.707698163 2.740329e-04 #> 4 b_Intercept 31.722791424 2.728507e-04 #> 5 b_Intercept 31.737884685 2.712198e-04 #> 6 b_Intercept 31.752977946 2.691474e-04 #> [remaining rows omitted: the full output contains one row per density grid point (precision = 2^10) for each parameter]
1.612527e-01 #> 617 b_Intercept 40.974960223 1.603241e-01 #> 618 b_Intercept 40.990053484 1.594069e-01 #> 619 b_Intercept 41.005146744 1.585014e-01 #> 620 b_Intercept 41.020240005 1.576076e-01 #> 621 b_Intercept 41.035333266 1.567284e-01 #> 622 b_Intercept 41.050426526 1.558608e-01 #> 623 b_Intercept 41.065519787 1.550047e-01 #> 624 b_Intercept 41.080613048 1.541600e-01 #> 625 b_Intercept 41.095706308 1.533262e-01 #> 626 b_Intercept 41.110799569 1.525032e-01 #> 627 b_Intercept 41.125892830 1.516919e-01 #> 628 b_Intercept 41.140986090 1.508909e-01 #> 629 b_Intercept 41.156079351 1.500987e-01 #> 630 b_Intercept 41.171172612 1.493149e-01 #> 631 b_Intercept 41.186265872 1.485386e-01 #> 632 b_Intercept 41.201359133 1.477694e-01 #> 633 b_Intercept 41.216452394 1.470067e-01 #> 634 b_Intercept 41.231545654 1.462500e-01 #> 635 b_Intercept 41.246638915 1.454974e-01 #> 636 b_Intercept 41.261732176 1.447482e-01 #> 637 b_Intercept 41.276825437 1.440015e-01 #> 638 b_Intercept 41.291918697 1.432565e-01 #> 639 b_Intercept 41.307011958 1.425123e-01 #> 640 b_Intercept 41.322105219 1.417679e-01 #> 641 b_Intercept 41.337198479 1.410221e-01 #> 642 b_Intercept 41.352291740 1.402742e-01 #> 643 b_Intercept 41.367385001 1.395234e-01 #> 644 b_Intercept 41.382478261 1.387689e-01 #> 645 b_Intercept 41.397571522 1.380101e-01 #> 646 b_Intercept 41.412664783 1.372451e-01 #> 647 b_Intercept 41.427758043 1.364737e-01 #> 648 b_Intercept 41.442851304 1.356956e-01 #> 649 b_Intercept 41.457944565 1.349103e-01 #> 650 b_Intercept 41.473037825 1.341172e-01 #> 651 b_Intercept 41.488131086 1.333159e-01 #> 652 b_Intercept 41.503224347 1.325053e-01 #> 653 b_Intercept 41.518317607 1.316842e-01 #> 654 b_Intercept 41.533410868 1.308538e-01 #> 655 b_Intercept 41.548504129 1.300138e-01 #> 656 b_Intercept 41.563597390 1.291640e-01 #> 657 b_Intercept 41.578690650 1.283044e-01 #> 658 b_Intercept 41.593783911 1.274348e-01 #> 659 b_Intercept 41.608877172 1.265531e-01 #> 660 b_Intercept 41.623970432 1.256616e-01 #> 661 b_Intercept 41.639063693 1.247604e-01 #> 662 b_Intercept 41.654156954 1.238497e-01 #> 663 b_Intercept 41.669250214 1.229297e-01 #> 664 b_Intercept 41.684343475 1.220006e-01 #> 665 b_Intercept 41.699436736 1.210612e-01 #> 666 b_Intercept 41.714529996 1.201130e-01 #> 667 b_Intercept 41.729623257 1.191568e-01 #> 668 b_Intercept 41.744716518 1.181930e-01 #> 669 b_Intercept 41.759809778 1.172219e-01 #> 670 b_Intercept 41.774903039 1.162441e-01 #> 671 b_Intercept 41.789996300 1.152593e-01 #> 672 b_Intercept 41.805089560 1.142679e-01 #> 673 b_Intercept 41.820182821 1.132712e-01 #> 674 b_Intercept 41.835276082 1.122696e-01 #> 675 b_Intercept 41.850369343 1.112636e-01 #> 676 b_Intercept 41.865462603 1.102536e-01 #> 677 b_Intercept 41.880555864 1.092399e-01 #> 678 b_Intercept 41.895649125 1.082226e-01 #> 679 b_Intercept 41.910742385 1.072027e-01 #> 680 b_Intercept 41.925835646 1.061806e-01 #> 681 b_Intercept 41.940928907 1.051568e-01 #> 682 b_Intercept 41.956022167 1.041314e-01 #> 683 b_Intercept 41.971115428 1.031050e-01 #> 684 b_Intercept 41.986208689 1.020778e-01 #> 685 b_Intercept 42.001301949 1.010502e-01 #> 686 b_Intercept 42.016395210 1.000225e-01 #> 687 b_Intercept 42.031488471 9.899513e-02 #> 688 b_Intercept 42.046581731 9.796819e-02 #> 689 b_Intercept 42.061674992 9.694197e-02 #> 690 b_Intercept 42.076768253 9.591680e-02 #> 691 b_Intercept 42.091861513 9.489301e-02 #> 692 b_Intercept 42.106954774 9.387067e-02 #> 693 b_Intercept 42.122048035 9.284994e-02 #> 694 b_Intercept 42.137141295 9.183101e-02 #> 695 b_Intercept 42.152234556 
9.081403e-02 #> 696 b_Intercept 42.167327817 8.979924e-02 #> 697 b_Intercept 42.182421078 8.878724e-02 #> 698 b_Intercept 42.197514338 8.777776e-02 #> 699 b_Intercept 42.212607599 8.677095e-02 #> 700 b_Intercept 42.227700860 8.576696e-02 #> 701 b_Intercept 42.242794120 8.476595e-02 #> 702 b_Intercept 42.257887381 8.376806e-02 #> 703 b_Intercept 42.272980642 8.277412e-02 #> 704 b_Intercept 42.288073902 8.178389e-02 #> 705 b_Intercept 42.303167163 8.079738e-02 #> 706 b_Intercept 42.318260424 7.981478e-02 #> 707 b_Intercept 42.333353684 7.883627e-02 #> 708 b_Intercept 42.348446945 7.786202e-02 #> 709 b_Intercept 42.363540206 7.689276e-02 #> 710 b_Intercept 42.378633466 7.592884e-02 #> 711 b_Intercept 42.393726727 7.496991e-02 #> 712 b_Intercept 42.408819988 7.401619e-02 #> 713 b_Intercept 42.423913248 7.306788e-02 #> 714 b_Intercept 42.439006509 7.212521e-02 #> 715 b_Intercept 42.454099770 7.118860e-02 #> 716 b_Intercept 42.469193031 7.025935e-02 #> 717 b_Intercept 42.484286291 6.933652e-02 #> 718 b_Intercept 42.499379552 6.842033e-02 #> 719 b_Intercept 42.514472813 6.751100e-02 #> 720 b_Intercept 42.529566073 6.660874e-02 #> 721 b_Intercept 42.544659334 6.571375e-02 #> 722 b_Intercept 42.559752595 6.482771e-02 #> 723 b_Intercept 42.574845855 6.394978e-02 #> 724 b_Intercept 42.589939116 6.307982e-02 #> 725 b_Intercept 42.605032377 6.221800e-02 #> 726 b_Intercept 42.620125637 6.136449e-02 #> 727 b_Intercept 42.635218898 6.051945e-02 #> 728 b_Intercept 42.650312159 5.968405e-02 #> 729 b_Intercept 42.665405419 5.885848e-02 #> 730 b_Intercept 42.680498680 5.804187e-02 #> 731 b_Intercept 42.695591941 5.723431e-02 #> 732 b_Intercept 42.710685201 5.643591e-02 #> 733 b_Intercept 42.725778462 5.564676e-02 #> 734 b_Intercept 42.740871723 5.486731e-02 #> 735 b_Intercept 42.755964983 5.409907e-02 #> 736 b_Intercept 42.771058244 5.334030e-02 #> 737 b_Intercept 42.786151505 5.259104e-02 #> 738 b_Intercept 42.801244766 5.185134e-02 #> 739 b_Intercept 42.816338026 5.112120e-02 #> 740 b_Intercept 42.831431287 5.040065e-02 #> 741 b_Intercept 42.846524548 4.969158e-02 #> 742 b_Intercept 42.861617808 4.899244e-02 #> 743 b_Intercept 42.876711069 4.830290e-02 #> 744 b_Intercept 42.891804330 4.762292e-02 #> 745 b_Intercept 42.906897590 4.695251e-02 #> 746 b_Intercept 42.921990851 4.629164e-02 #> 747 b_Intercept 42.937084112 4.564144e-02 #> 748 b_Intercept 42.952177372 4.500174e-02 #> 749 b_Intercept 42.967270633 4.437149e-02 #> 750 b_Intercept 42.982363894 4.375063e-02 #> 751 b_Intercept 42.997457154 4.313914e-02 #> 752 b_Intercept 43.012550415 4.253697e-02 #> 753 b_Intercept 43.027643676 4.194452e-02 #> 754 b_Intercept 43.042736936 4.136299e-02 #> 755 b_Intercept 43.057830197 4.079062e-02 #> 756 b_Intercept 43.072923458 4.022736e-02 #> 757 b_Intercept 43.088016719 3.967315e-02 #> 758 b_Intercept 43.103109979 3.912794e-02 #> 759 b_Intercept 43.118203240 3.859165e-02 #> 760 b_Intercept 43.133296501 3.806600e-02 #> 761 b_Intercept 43.148389761 3.754934e-02 #> 762 b_Intercept 43.163483022 3.704132e-02 #> 763 b_Intercept 43.178576283 3.654187e-02 #> 764 b_Intercept 43.193669543 3.605087e-02 #> 765 b_Intercept 43.208762804 3.556822e-02 #> 766 b_Intercept 43.223856065 3.509484e-02 #> 767 b_Intercept 43.238949325 3.463033e-02 #> 768 b_Intercept 43.254042586 3.417369e-02 #> 769 b_Intercept 43.269135847 3.372478e-02 #> 770 b_Intercept 43.284229107 3.328343e-02 #> 771 b_Intercept 43.299322368 3.284945e-02 #> 772 b_Intercept 43.314415629 3.242304e-02 #> 773 b_Intercept 43.329508889 3.200477e-02 #> 774 b_Intercept 43.344602150 
3.159315e-02 #> 775 b_Intercept 43.359695411 3.118794e-02 #> 776 b_Intercept 43.374788671 3.078890e-02 #> 777 b_Intercept 43.389881932 3.039577e-02 #> 778 b_Intercept 43.404975193 3.000830e-02 #> 779 b_Intercept 43.420068454 2.962725e-02 #> 780 b_Intercept 43.435161714 2.925124e-02 #> 781 b_Intercept 43.450254975 2.887987e-02 #> 782 b_Intercept 43.465348236 2.851282e-02 #> 783 b_Intercept 43.480441496 2.814981e-02 #> 784 b_Intercept 43.495534757 2.779051e-02 #> 785 b_Intercept 43.510628018 2.743501e-02 #> 786 b_Intercept 43.525721278 2.708275e-02 #> 787 b_Intercept 43.540814539 2.673307e-02 #> 788 b_Intercept 43.555907800 2.638568e-02 #> 789 b_Intercept 43.571001060 2.604027e-02 #> 790 b_Intercept 43.586094321 2.569654e-02 #> 791 b_Intercept 43.601187582 2.535425e-02 #> 792 b_Intercept 43.616280842 2.501313e-02 #> 793 b_Intercept 43.631374103 2.467267e-02 #> 794 b_Intercept 43.646467364 2.433264e-02 #> 795 b_Intercept 43.661560624 2.399278e-02 #> 796 b_Intercept 43.676653885 2.365288e-02 #> 797 b_Intercept 43.691747146 2.331272e-02 #> 798 b_Intercept 43.706840407 2.297196e-02 #> 799 b_Intercept 43.721933667 2.263050e-02 #> 800 b_Intercept 43.737026928 2.228819e-02 #> 801 b_Intercept 43.752120189 2.194494e-02 #> 802 b_Intercept 43.767213449 2.160064e-02 #> 803 b_Intercept 43.782306710 2.125523e-02 #> 804 b_Intercept 43.797399971 2.090849e-02 #> 805 b_Intercept 43.812493231 2.056046e-02 #> 806 b_Intercept 43.827586492 2.021126e-02 #> 807 b_Intercept 43.842679753 1.986094e-02 #> 808 b_Intercept 43.857773013 1.950957e-02 #> 809 b_Intercept 43.872866274 1.915722e-02 #> 810 b_Intercept 43.887959535 1.880396e-02 #> 811 b_Intercept 43.903052795 1.844992e-02 #> 812 b_Intercept 43.918146056 1.809540e-02 #> 813 b_Intercept 43.933239317 1.774059e-02 #> 814 b_Intercept 43.948332577 1.738569e-02 #> 815 b_Intercept 43.963425838 1.703092e-02 #> 816 b_Intercept 43.978519099 1.667652e-02 #> 817 b_Intercept 43.993612360 1.632296e-02 #> 818 b_Intercept 44.008705620 1.597044e-02 #> 819 b_Intercept 44.023798881 1.561925e-02 #> 820 b_Intercept 44.038892142 1.526965e-02 #> 821 b_Intercept 44.053985402 1.492195e-02 #> 822 b_Intercept 44.069078663 1.457643e-02 #> 823 b_Intercept 44.084171924 1.423385e-02 #> 824 b_Intercept 44.099265184 1.389444e-02 #> 825 b_Intercept 44.114358445 1.355829e-02 #> 826 b_Intercept 44.129451706 1.322569e-02 #> 827 b_Intercept 44.144544966 1.289693e-02 #> 828 b_Intercept 44.159638227 1.257229e-02 #> 829 b_Intercept 44.174731488 1.225238e-02 #> 830 b_Intercept 44.189824748 1.193797e-02 #> 831 b_Intercept 44.204918009 1.162858e-02 #> 832 b_Intercept 44.220011270 1.132445e-02 #> 833 b_Intercept 44.235104530 1.102577e-02 #> 834 b_Intercept 44.250197791 1.073274e-02 #> 835 b_Intercept 44.265291052 1.044555e-02 #> 836 b_Intercept 44.280384312 1.016576e-02 #> 837 b_Intercept 44.295477573 9.892174e-03 #> 838 b_Intercept 44.310570834 9.624881e-03 #> 839 b_Intercept 44.325664095 9.363970e-03 #> 840 b_Intercept 44.340757355 9.109505e-03 #> 841 b_Intercept 44.355850616 8.861536e-03 #> 842 b_Intercept 44.370943877 8.621112e-03 #> 843 b_Intercept 44.386037137 8.387710e-03 #> 844 b_Intercept 44.401130398 8.160811e-03 #> 845 b_Intercept 44.416223659 7.940373e-03 #> 846 b_Intercept 44.431316919 7.726340e-03 #> 847 b_Intercept 44.446410180 7.518639e-03 #> 848 b_Intercept 44.461503441 7.317676e-03 #> 849 b_Intercept 44.476596701 7.123723e-03 #> 850 b_Intercept 44.491689962 6.935716e-03 #> 851 b_Intercept 44.506783223 6.753517e-03 #> 852 b_Intercept 44.521876483 6.576978e-03 #> 853 b_Intercept 44.536969744 
6.405944e-03 #> 854 b_Intercept 44.552063005 6.240283e-03 #> 855 b_Intercept 44.567156265 6.080890e-03 #> 856 b_Intercept 44.582249526 5.926382e-03 #> 857 b_Intercept 44.597342787 5.776575e-03 #> 858 b_Intercept 44.612436048 5.631283e-03 #> 859 b_Intercept 44.627529308 5.490321e-03 #> 860 b_Intercept 44.642622569 5.353501e-03 #> 861 b_Intercept 44.657715830 5.221234e-03 #> 862 b_Intercept 44.672809090 5.092898e-03 #> 863 b_Intercept 44.687902351 4.968045e-03 #> 864 b_Intercept 44.702995612 4.846500e-03 #> 865 b_Intercept 44.718088872 4.728094e-03 #> 866 b_Intercept 44.733182133 4.612662e-03 #> 867 b_Intercept 44.748275394 4.500274e-03 #> 868 b_Intercept 44.763368654 4.390862e-03 #> 869 b_Intercept 44.778461915 4.283890e-03 #> 870 b_Intercept 44.793555176 4.179225e-03 #> 871 b_Intercept 44.808648436 4.076745e-03 #> 872 b_Intercept 44.823741697 3.976337e-03 #> 873 b_Intercept 44.838834958 3.877915e-03 #> 874 b_Intercept 44.853928218 3.781731e-03 #> 875 b_Intercept 44.869021479 3.687276e-03 #> 876 b_Intercept 44.884114740 3.594476e-03 #> 877 b_Intercept 44.899208000 3.503264e-03 #> 878 b_Intercept 44.914301261 3.413582e-03 #> 879 b_Intercept 44.929394522 3.325379e-03 #> 880 b_Intercept 44.944487783 3.238845e-03 #> 881 b_Intercept 44.959581043 3.153781e-03 #> 882 b_Intercept 44.974674304 3.070077e-03 #> 883 b_Intercept 44.989767565 2.987714e-03 #> 884 b_Intercept 45.004860825 2.906682e-03 #> 885 b_Intercept 45.019954086 2.826973e-03 #> 886 b_Intercept 45.035047347 2.748712e-03 #> 887 b_Intercept 45.050140607 2.671962e-03 #> 888 b_Intercept 45.065233868 2.596555e-03 #> 889 b_Intercept 45.080327129 2.522504e-03 #> 890 b_Intercept 45.095420389 2.449828e-03 #> 891 b_Intercept 45.110513650 2.378547e-03 #> 892 b_Intercept 45.125606911 2.308712e-03 #> 893 b_Intercept 45.140700171 2.240635e-03 #> 894 b_Intercept 45.155793432 2.174045e-03 #> 895 b_Intercept 45.170886693 2.108968e-03 #> 896 b_Intercept 45.185979953 2.045434e-03 #> 897 b_Intercept 45.201073214 1.983472e-03 #> 898 b_Intercept 45.216166475 1.923112e-03 #> 899 b_Intercept 45.231259736 1.864677e-03 #> 900 b_Intercept 45.246352996 1.808007e-03 #> 901 b_Intercept 45.261446257 1.753039e-03 #> 902 b_Intercept 45.276539518 1.699798e-03 #> 903 b_Intercept 45.291632778 1.648307e-03 #> 904 b_Intercept 45.306726039 1.598589e-03 #> 905 b_Intercept 45.321819300 1.550849e-03 #> 906 b_Intercept 45.336912560 1.505163e-03 #> 907 b_Intercept 45.352005821 1.461311e-03 #> 908 b_Intercept 45.367099082 1.419306e-03 #> 909 b_Intercept 45.382192342 1.379155e-03 #> 910 b_Intercept 45.397285603 1.340865e-03 #> 911 b_Intercept 45.412378864 1.304490e-03 #> 912 b_Intercept 45.427472124 1.270361e-03 #> 913 b_Intercept 45.442565385 1.238090e-03 #> 914 b_Intercept 45.457658646 1.207671e-03 #> 915 b_Intercept 45.472751906 1.179091e-03 #> 916 b_Intercept 45.487845167 1.152336e-03 #> 917 b_Intercept 45.502938428 1.127389e-03 #> 918 b_Intercept 45.518031688 1.104550e-03 #> 919 b_Intercept 45.533124949 1.083544e-03 #> 920 b_Intercept 45.548218210 1.064254e-03 #> 921 b_Intercept 45.563311471 1.046649e-03 #> 922 b_Intercept 45.578404731 1.030691e-03 #> 923 b_Intercept 45.593497992 1.016342e-03 #> 924 b_Intercept 45.608591253 1.003722e-03 #> 925 b_Intercept 45.623684513 9.927868e-04 #> 926 b_Intercept 45.638777774 9.832930e-04 #> 927 b_Intercept 45.653871035 9.751872e-04 #> 928 b_Intercept 45.668964295 9.684133e-04 #> 929 b_Intercept 45.684057556 9.629128e-04 #> 930 b_Intercept 45.699150817 9.586621e-04 #> 931 b_Intercept 45.714244077 9.557610e-04 #> 932 b_Intercept 45.729337338 
9.539022e-04 #> 933 b_Intercept 45.744430599 9.530162e-04 #> 934 b_Intercept 45.759523859 9.530321e-04 #> 935 b_Intercept 45.774617120 9.538776e-04 #> 936 b_Intercept 45.789710381 9.554791e-04 #> 937 b_Intercept 45.804803641 9.578687e-04 #> 938 b_Intercept 45.819896902 9.608453e-04 #> 939 b_Intercept 45.834990163 9.643046e-04 #> 940 b_Intercept 45.850083424 9.681698e-04 #> 941 b_Intercept 45.865176684 9.723643e-04 #> 942 b_Intercept 45.880269945 9.768116e-04 #> 943 b_Intercept 45.895363206 9.814439e-04 #> 944 b_Intercept 45.910456466 9.861536e-04 #> 945 b_Intercept 45.925549727 9.908492e-04 #> 946 b_Intercept 45.940642988 9.954604e-04 #> 947 b_Intercept 45.955736248 9.999190e-04 #> 948 b_Intercept 45.970829509 1.004159e-03 #> 949 b_Intercept 45.985922770 1.008101e-03 #> 950 b_Intercept 46.001016030 1.011611e-03 #> 951 b_Intercept 46.016109291 1.014690e-03 #> 952 b_Intercept 46.031202552 1.017285e-03 #> 953 b_Intercept 46.046295812 1.019352e-03 #> 954 b_Intercept 46.061389073 1.020848e-03 #> 955 b_Intercept 46.076482334 1.021733e-03 #> 956 b_Intercept 46.091575594 1.021838e-03 #> 957 b_Intercept 46.106668855 1.021233e-03 #> 958 b_Intercept 46.121762116 1.019920e-03 #> 959 b_Intercept 46.136855376 1.017886e-03 #> 960 b_Intercept 46.151948637 1.015118e-03 #> 961 b_Intercept 46.167041898 1.011613e-03 #> 962 b_Intercept 46.182135159 1.007279e-03 #> 963 b_Intercept 46.197228419 1.002137e-03 #> 964 b_Intercept 46.212321680 9.962804e-04 #> 965 b_Intercept 46.227414941 9.897290e-04 #> 966 b_Intercept 46.242508201 9.825055e-04 #> 967 b_Intercept 46.257601462 9.746368e-04 #> 968 b_Intercept 46.272694723 9.661276e-04 #> 969 b_Intercept 46.287787983 9.569492e-04 #> 970 b_Intercept 46.302881244 9.472568e-04 #> 971 b_Intercept 46.317974505 9.370950e-04 #> 972 b_Intercept 46.333067765 9.265107e-04 #> 973 b_Intercept 46.348161026 9.155530e-04 #> 974 b_Intercept 46.363254287 9.042728e-04 #> 975 b_Intercept 46.378347547 8.926850e-04 #> 976 b_Intercept 46.393440808 8.809063e-04 #> 977 b_Intercept 46.408534069 8.689985e-04 #> 978 b_Intercept 46.423627329 8.570156e-04 #> 979 b_Intercept 46.438720590 8.450111e-04 #> 980 b_Intercept 46.453813851 8.330378e-04 #> 981 b_Intercept 46.468907112 8.211670e-04 #> 982 b_Intercept 46.484000372 8.094661e-04 #> 983 b_Intercept 46.499093633 7.979708e-04 #> 984 b_Intercept 46.514186894 7.867233e-04 #> 985 b_Intercept 46.529280154 7.757631e-04 #> 986 b_Intercept 46.544373415 7.651263e-04 #> 987 b_Intercept 46.559466676 7.548669e-04 #> 988 b_Intercept 46.574559936 7.450713e-04 #> 989 b_Intercept 46.589653197 7.356966e-04 #> 990 b_Intercept 46.604746458 7.267591e-04 #> 991 b_Intercept 46.619839718 7.182713e-04 #> 992 b_Intercept 46.634932979 7.102415e-04 #> 993 b_Intercept 46.650026240 7.026738e-04 #> 994 b_Intercept 46.665119500 6.956628e-04 #> 995 b_Intercept 46.680212761 6.891119e-04 #> 996 b_Intercept 46.695306022 6.829982e-04 #> 997 b_Intercept 46.710399282 6.773049e-04 #> 998 b_Intercept 46.725492543 6.720118e-04 #> 999 b_Intercept 46.740585804 6.670952e-04 #> 1000 b_Intercept 46.755679065 6.625690e-04 #> 1001 b_Intercept 46.770772325 6.583749e-04 #> 1002 b_Intercept 46.785865586 6.544444e-04 #> 1003 b_Intercept 46.800958847 6.507399e-04 #> 1004 b_Intercept 46.816052107 6.472224e-04 #> 1005 b_Intercept 46.831145368 6.438514e-04 #> 1006 b_Intercept 46.846238629 6.405879e-04 #> 1007 b_Intercept 46.861331889 6.373781e-04 #> 1008 b_Intercept 46.876425150 6.341615e-04 #> 1009 b_Intercept 46.891518411 6.308952e-04 #> 1010 b_Intercept 46.906611671 6.275372e-04 #> 1011 b_Intercept 
46.921704932 6.240460e-04 #> 1012 b_Intercept 46.936798193 6.203815e-04 #> 1013 b_Intercept 46.951891453 6.164465e-04 #> 1014 b_Intercept 46.966984714 6.122388e-04 #> 1015 b_Intercept 46.982077975 6.077294e-04 #> 1016 b_Intercept 46.997171235 6.028879e-04 #> 1017 b_Intercept 47.012264496 5.976871e-04 #> 1018 b_Intercept 47.027357757 5.921020e-04 #> 1019 b_Intercept 47.042451017 5.860494e-04 #> 1020 b_Intercept 47.057544278 5.795267e-04 #> 1021 b_Intercept 47.072637539 5.725586e-04 #> 1022 b_Intercept 47.087730800 5.651363e-04 #> 1023 b_Intercept 47.102824060 5.572541e-04 #> 1024 b_Intercept 47.117917321 5.489094e-04 #> 1025 b_wt -6.882027746 7.178788e-04 #> 1026 b_wt -6.875683507 7.244908e-04 #> 1027 b_wt -6.869339268 7.304031e-04 #> 1028 b_wt -6.862995028 7.357482e-04 #> 1029 b_wt -6.856650789 7.405577e-04 #> 1030 b_wt -6.850306549 7.448527e-04 #> 1031 b_wt -6.843962310 7.486560e-04 #> 1032 b_wt -6.837618071 7.519925e-04 #> 1033 b_wt -6.831273831 7.548166e-04 #> 1034 b_wt -6.824929592 7.572221e-04 #> 1035 b_wt -6.818585352 7.592649e-04 #> 1036 b_wt -6.812241113 7.609759e-04 #> 1037 b_wt -6.805896873 7.623862e-04 #> 1038 b_wt -6.799552634 7.635272e-04 #> 1039 b_wt -6.793208395 7.644019e-04 #> 1040 b_wt -6.786864155 7.650659e-04 #> 1041 b_wt -6.780519916 7.655699e-04 #> 1042 b_wt -6.774175676 7.659411e-04 #> 1043 b_wt -6.767831437 7.662052e-04 #> 1044 b_wt -6.761487198 7.663864e-04 #> 1045 b_wt -6.755142958 7.665032e-04 #> 1046 b_wt -6.748798719 7.665820e-04 #> 1047 b_wt -6.742454479 7.666448e-04 #> 1048 b_wt -6.736110240 7.667035e-04 #> 1049 b_wt -6.729766000 7.667674e-04 #> 1050 b_wt -6.723421761 7.668429e-04 #> 1051 b_wt -6.717077522 7.669349e-04 #> 1052 b_wt -6.710733282 7.670438e-04 #> 1053 b_wt -6.704389043 7.671615e-04 #> 1054 b_wt -6.698044803 7.672799e-04 #> 1055 b_wt -6.691700564 7.673884e-04 #> 1056 b_wt -6.685356325 7.674739e-04 #> 1057 b_wt -6.679012085 7.675187e-04 #> 1058 b_wt -6.672667846 7.674871e-04 #> 1059 b_wt -6.666323606 7.673652e-04 #> 1060 b_wt -6.659979367 7.671300e-04 #> 1061 b_wt -6.653635128 7.667574e-04 #> 1062 b_wt -6.647290888 7.662221e-04 #> 1063 b_wt -6.640946649 7.654980e-04 #> 1064 b_wt -6.634602409 7.644974e-04 #> 1065 b_wt -6.628258170 7.632392e-04 #> 1066 b_wt -6.621913930 7.616979e-04 #> 1067 b_wt -6.615569691 7.598489e-04 #> 1068 b_wt -6.609225452 7.576688e-04 #> 1069 b_wt -6.602881212 7.551353e-04 #> 1070 b_wt -6.596536973 7.521447e-04 #> 1071 b_wt -6.590192733 7.487394e-04 #> 1072 b_wt -6.583848494 7.449190e-04 #> 1073 b_wt -6.577504255 7.406726e-04 #> 1074 b_wt -6.571160015 7.359922e-04 #> 1075 b_wt -6.564815776 7.308726e-04 #> 1076 b_wt -6.558471536 7.252373e-04 #> 1077 b_wt -6.552127297 7.191355e-04 #> 1078 b_wt -6.545783057 7.126072e-04 #> 1079 b_wt -6.539438818 7.056642e-04 #> 1080 b_wt -6.533094579 6.983223e-04 #> 1081 b_wt -6.526750339 6.906000e-04 #> 1082 b_wt -6.520406100 6.824768e-04 #> 1083 b_wt -6.514061860 6.740029e-04 #> 1084 b_wt -6.507717621 6.652481e-04 #> 1085 b_wt -6.501373382 6.562478e-04 #> 1086 b_wt -6.495029142 6.470397e-04 #> 1087 b_wt -6.488684903 6.376641e-04 #> 1088 b_wt -6.482340663 6.281575e-04 #> 1089 b_wt -6.475996424 6.185835e-04 #> 1090 b_wt -6.469652184 6.090074e-04 #> 1091 b_wt -6.463307945 5.994785e-04 #> 1092 b_wt -6.456963706 5.900470e-04 #> 1093 b_wt -6.450619466 5.807638e-04 #> 1094 b_wt -6.444275227 5.716970e-04 #> 1095 b_wt -6.437930987 5.629494e-04 #> 1096 b_wt -6.431586748 5.545349e-04 #> 1097 b_wt -6.425242509 5.465027e-04 #> 1098 b_wt -6.418898269 5.389005e-04 #> 1099 b_wt -6.412554030 5.317750e-04 #> 
1100 b_wt -6.406209790 5.251862e-04 #> 1101 b_wt -6.399865551 5.192987e-04 #> 1102 b_wt -6.393521311 5.140377e-04 #> 1103 b_wt -6.387177072 5.094376e-04 #> 1104 b_wt -6.380832833 5.055297e-04 #> 1105 b_wt -6.374488593 5.023425e-04 #> 1106 b_wt -6.368144354 4.999012e-04 #> 1107 b_wt -6.361800114 4.984114e-04 #> 1108 b_wt -6.355455875 4.977214e-04 #> 1109 b_wt -6.349111636 4.978340e-04 #> 1110 b_wt -6.342767396 4.987557e-04 #> 1111 b_wt -6.336423157 5.004892e-04 #> 1112 b_wt -6.330078917 5.030336e-04 #> 1113 b_wt -6.323734678 5.065392e-04 #> 1114 b_wt -6.317390438 5.108699e-04 #> 1115 b_wt -6.311046199 5.159736e-04 #> 1116 b_wt -6.304701960 5.218314e-04 #> 1117 b_wt -6.298357720 5.284212e-04 #> 1118 b_wt -6.292013481 5.357184e-04 #> 1119 b_wt -6.285669241 5.437965e-04 #> 1120 b_wt -6.279325002 5.525577e-04 #> 1121 b_wt -6.272980763 5.619127e-04 #> 1122 b_wt -6.266636523 5.718253e-04 #> 1123 b_wt -6.260292284 5.822579e-04 #> 1124 b_wt -6.253948044 5.931720e-04 #> 1125 b_wt -6.247603805 6.045753e-04 #> 1126 b_wt -6.241259565 6.164057e-04 #> 1127 b_wt -6.234915326 6.285720e-04 #> 1128 b_wt -6.228571087 6.410343e-04 #> 1129 b_wt -6.222226847 6.537537e-04 #> 1130 b_wt -6.215882608 6.666919e-04 #> 1131 b_wt -6.209538368 6.798236e-04 #> 1132 b_wt -6.203194129 6.931079e-04 #> 1133 b_wt -6.196849890 7.064864e-04 #> 1134 b_wt -6.190505650 7.199302e-04 #> 1135 b_wt -6.184161411 7.334126e-04 #> 1136 b_wt -6.177817171 7.469098e-04 #> 1137 b_wt -6.171472932 7.603989e-04 #> 1138 b_wt -6.165128692 7.738524e-04 #> 1139 b_wt -6.158784453 7.872606e-04 #> 1140 b_wt -6.152440214 8.006152e-04 #> 1141 b_wt -6.146095974 8.139115e-04 #> 1142 b_wt -6.139751735 8.271476e-04 #> 1143 b_wt -6.133407495 8.403243e-04 #> 1144 b_wt -6.127063256 8.534391e-04 #> 1145 b_wt -6.120719017 8.665173e-04 #> 1146 b_wt -6.114374777 8.795738e-04 #> 1147 b_wt -6.108030538 8.926262e-04 #> 1148 b_wt -6.101686298 9.056948e-04 #> 1149 b_wt -6.095342059 9.188027e-04 #> 1150 b_wt -6.088997819 9.319999e-04 #> 1151 b_wt -6.082653580 9.453112e-04 #> 1152 b_wt -6.076309341 9.587672e-04 #> 1153 b_wt -6.069965101 9.724016e-04 #> 1154 b_wt -6.063620862 9.862495e-04 #> 1155 b_wt -6.057276622 1.000347e-03 #> 1156 b_wt -6.050932383 1.014796e-03 #> 1157 b_wt -6.044588144 1.029612e-03 #> 1158 b_wt -6.038243904 1.044817e-03 #> 1159 b_wt -6.031899665 1.060452e-03 #> 1160 b_wt -6.025555425 1.076555e-03 #> 1161 b_wt -6.019211186 1.093168e-03 #> 1162 b_wt -6.012866947 1.110418e-03 #> 1163 b_wt -6.006522707 1.128331e-03 #> 1164 b_wt -6.000178468 1.146895e-03 #> 1165 b_wt -5.993834228 1.166147e-03 #> 1166 b_wt -5.987489989 1.186125e-03 #> 1167 b_wt -5.981145749 1.206867e-03 #> 1168 b_wt -5.974801510 1.228505e-03 #> 1169 b_wt -5.968457271 1.251108e-03 #> 1170 b_wt -5.962113031 1.274608e-03 #> 1171 b_wt -5.955768792 1.299042e-03 #> 1172 b_wt -5.949424552 1.324450e-03 #> 1173 b_wt -5.943080313 1.350868e-03 #> 1174 b_wt -5.936736074 1.378422e-03 #> 1175 b_wt -5.930391834 1.407268e-03 #> 1176 b_wt -5.924047595 1.437277e-03 #> 1177 b_wt -5.917703355 1.468496e-03 #> 1178 b_wt -5.911359116 1.500974e-03 #> 1179 b_wt -5.905014876 1.534765e-03 #> 1180 b_wt -5.898670637 1.569982e-03 #> 1181 b_wt -5.892326398 1.606942e-03 #> 1182 b_wt -5.885982158 1.645439e-03 #> 1183 b_wt -5.879637919 1.685547e-03 #> 1184 b_wt -5.873293679 1.727345e-03 #> 1185 b_wt -5.866949440 1.770917e-03 #> 1186 b_wt -5.860605201 1.816362e-03 #> 1187 b_wt -5.854260961 1.864271e-03 #> 1188 b_wt -5.847916722 1.914316e-03 #> 1189 b_wt -5.841572482 1.966611e-03 #> 1190 b_wt -5.835228243 2.021281e-03 #> 1191 
b_wt -5.828884003 2.078455e-03 #> 1192 b_wt -5.822539764 2.138270e-03 #> 1193 b_wt -5.816195525 2.201504e-03 #> 1194 b_wt -5.809851285 2.267863e-03 #> 1195 b_wt -5.803507046 2.337424e-03 #> 1196 b_wt -5.797162806 2.410358e-03 #> 1197 b_wt -5.790818567 2.486839e-03 #> 1198 b_wt -5.784474328 2.567049e-03 #> 1199 b_wt -5.778130088 2.651906e-03 #> 1200 b_wt -5.771785849 2.741262e-03 #> 1201 b_wt -5.765441609 2.835047e-03 #> 1202 b_wt -5.759097370 2.933463e-03 #> 1203 b_wt -5.752753130 3.036712e-03 #> 1204 b_wt -5.746408891 3.144997e-03 #> 1205 b_wt -5.740064652 3.259284e-03 #> 1206 b_wt -5.733720412 3.379684e-03 #> 1207 b_wt -5.727376173 3.505850e-03 #> 1208 b_wt -5.721031933 3.637974e-03 #> 1209 b_wt -5.714687694 3.776243e-03 #> 1210 b_wt -5.708343455 3.920840e-03 #> 1211 b_wt -5.701999215 4.072622e-03 #> 1212 b_wt -5.695654976 4.232079e-03 #> 1213 b_wt -5.689310736 4.398445e-03 #> 1214 b_wt -5.682966497 4.571851e-03 #> 1215 b_wt -5.676622257 4.752415e-03 #> 1216 b_wt -5.670278018 4.940244e-03 #> 1217 b_wt -5.663933779 5.135915e-03 #> 1218 b_wt -5.657589539 5.340342e-03 #> 1219 b_wt -5.651245300 5.552267e-03 #> 1220 b_wt -5.644901060 5.771709e-03 #> 1221 b_wt -5.638556821 5.998665e-03 #> 1222 b_wt -5.632212582 6.233116e-03 #> 1223 b_wt -5.625868342 6.475238e-03 #> 1224 b_wt -5.619524103 6.726273e-03 #> 1225 b_wt -5.613179863 6.984538e-03 #> 1226 b_wt -5.606835624 7.249901e-03 #> 1227 b_wt -5.600491384 7.522212e-03 #> 1228 b_wt -5.594147145 7.801302e-03 #> 1229 b_wt -5.587802906 8.086977e-03 #> 1230 b_wt -5.581458666 8.380429e-03 #> 1231 b_wt -5.575114427 8.679895e-03 #> 1232 b_wt -5.568770187 8.985060e-03 #> 1233 b_wt -5.562425948 9.295636e-03 #> 1234 b_wt -5.556081709 9.611319e-03 #> 1235 b_wt -5.549737469 9.931790e-03 #> 1236 b_wt -5.543393230 1.025749e-02 #> 1237 b_wt -5.537048990 1.058726e-02 #> 1238 b_wt -5.530704751 1.092054e-02 #> 1239 b_wt -5.524360511 1.125697e-02 #> 1240 b_wt -5.518016272 1.159616e-02 #> 1241 b_wt -5.511672033 1.193775e-02 #> 1242 b_wt -5.505327793 1.228158e-02 #> 1243 b_wt -5.498983554 1.262695e-02 #> 1244 b_wt -5.492639314 1.297334e-02 #> 1245 b_wt -5.486295075 1.332040e-02 #> 1246 b_wt -5.479950836 1.366776e-02 #> 1247 b_wt -5.473606596 1.401509e-02 #> 1248 b_wt -5.467262357 1.436195e-02 #> 1249 b_wt -5.460918117 1.470791e-02 #> 1250 b_wt -5.454573878 1.505273e-02 #> 1251 b_wt -5.448229638 1.539617e-02 #> 1252 b_wt -5.441885399 1.573800e-02 #> 1253 b_wt -5.435541160 1.607800e-02 #> 1254 b_wt -5.429196920 1.641579e-02 #> 1255 b_wt -5.422852681 1.675104e-02 #> 1256 b_wt -5.416508441 1.708396e-02 #> 1257 b_wt -5.410164202 1.741449e-02 #> 1258 b_wt -5.403819963 1.774258e-02 #> 1259 b_wt -5.397475723 1.806823e-02 #> 1260 b_wt -5.391131484 1.839134e-02 #> 1261 b_wt -5.384787244 1.871169e-02 #> 1262 b_wt -5.378443005 1.902985e-02 #> 1263 b_wt -5.372098765 1.934598e-02 #> 1264 b_wt -5.365754526 1.966024e-02 #> 1265 b_wt -5.359410287 1.997283e-02 #> 1266 b_wt -5.353066047 2.028398e-02 #> 1267 b_wt -5.346721808 2.059380e-02 #> 1268 b_wt -5.340377568 2.090293e-02 #> 1269 b_wt -5.334033329 2.121169e-02 #> 1270 b_wt -5.327689090 2.152042e-02 #> 1271 b_wt -5.321344850 2.182948e-02 #> 1272 b_wt -5.315000611 2.213925e-02 #> 1273 b_wt -5.308656371 2.245050e-02 #> 1274 b_wt -5.302312132 2.276354e-02 #> 1275 b_wt -5.295967893 2.307874e-02 #> 1276 b_wt -5.289623653 2.339654e-02 #> 1277 b_wt -5.283279414 2.371734e-02 #> 1278 b_wt -5.276935174 2.404158e-02 #> 1279 b_wt -5.270590935 2.437051e-02 #> 1280 b_wt -5.264246695 2.470418e-02 #> 1281 b_wt -5.257902456 2.504276e-02 #> 1282 b_wt 
-5.251558217 2.538666e-02 #> 1283 b_wt -5.245213977 2.573624e-02 #> 1284 b_wt -5.238869738 2.609186e-02 #> 1285 b_wt -5.232525498 2.645493e-02 #> 1286 b_wt -5.226181259 2.682550e-02 #> 1287 b_wt -5.219837020 2.720329e-02 #> 1288 b_wt -5.213492780 2.758857e-02 #> 1289 b_wt -5.207148541 2.798160e-02 #> 1290 b_wt -5.200804301 2.838262e-02 #> 1291 b_wt -5.194460062 2.879282e-02 #> 1292 b_wt -5.188115822 2.921256e-02 #> 1293 b_wt -5.181771583 2.964097e-02 #> 1294 b_wt -5.175427344 3.007818e-02 #> 1295 b_wt -5.169083104 3.052429e-02 #> 1296 b_wt -5.162738865 3.097943e-02 #> 1297 b_wt -5.156394625 3.144439e-02 #> 1298 b_wt -5.150050386 3.192001e-02 #> 1299 b_wt -5.143706147 3.240487e-02 #> 1300 b_wt -5.137361907 3.289899e-02 #> 1301 b_wt -5.131017668 3.340238e-02 #> 1302 b_wt -5.124673428 3.391504e-02 #> 1303 b_wt -5.118329189 3.443739e-02 #> 1304 b_wt -5.111984949 3.497084e-02 #> 1305 b_wt -5.105640710 3.551353e-02 #> 1306 b_wt -5.099296471 3.606545e-02 #> 1307 b_wt -5.092952231 3.662659e-02 #> 1308 b_wt -5.086607992 3.719693e-02 #> 1309 b_wt -5.080263752 3.777651e-02 #> 1310 b_wt -5.073919513 3.836741e-02 #> 1311 b_wt -5.067575274 3.896747e-02 #> 1312 b_wt -5.061231034 3.957670e-02 #> 1313 b_wt -5.054886795 4.019511e-02 #> 1314 b_wt -5.048542555 4.082272e-02 #> 1315 b_wt -5.042198316 4.145955e-02 #> 1316 b_wt -5.035854076 4.210761e-02 #> 1317 b_wt -5.029509837 4.276525e-02 #> 1318 b_wt -5.023165598 4.343225e-02 #> 1319 b_wt -5.016821358 4.410867e-02 #> 1320 b_wt -5.010477119 4.479456e-02 #> 1321 b_wt -5.004132879 4.549000e-02 #> 1322 b_wt -4.997788640 4.619674e-02 #> 1323 b_wt -4.991444401 4.691382e-02 #> 1324 b_wt -4.985100161 4.764067e-02 #> 1325 b_wt -4.978755922 4.837735e-02 #> 1326 b_wt -4.972411682 4.912392e-02 #> 1327 b_wt -4.966067443 4.988043e-02 #> 1328 b_wt -4.959723203 5.064834e-02 #> 1329 b_wt -4.953378964 5.142730e-02 #> 1330 b_wt -4.947034725 5.221632e-02 #> 1331 b_wt -4.940690485 5.301539e-02 #> 1332 b_wt -4.934346246 5.382451e-02 #> 1333 b_wt -4.928002006 5.464365e-02 #> 1334 b_wt -4.921657767 5.547381e-02 #> 1335 b_wt -4.915313528 5.631523e-02 #> 1336 b_wt -4.908969288 5.716644e-02 #> 1337 b_wt -4.902625049 5.802735e-02 #> 1338 b_wt -4.896280809 5.889783e-02 #> 1339 b_wt -4.889936570 5.977774e-02 #> 1340 b_wt -4.883592330 6.066755e-02 #> 1341 b_wt -4.877248091 6.156796e-02 #> 1342 b_wt -4.870903852 6.247717e-02 #> 1343 b_wt -4.864559612 6.339498e-02 #> 1344 b_wt -4.858215373 6.432115e-02 #> 1345 b_wt -4.851871133 6.525547e-02 #> 1346 b_wt -4.845526894 6.619794e-02 #> 1347 b_wt -4.839182655 6.714958e-02 #> 1348 b_wt -4.832838415 6.810849e-02 #> 1349 b_wt -4.826494176 6.907443e-02 #> 1350 b_wt -4.820149936 7.004716e-02 #> 1351 b_wt -4.813805697 7.102644e-02 #> 1352 b_wt -4.807461457 7.201207e-02 #> 1353 b_wt -4.801117218 7.300519e-02 #> 1354 b_wt -4.794772979 7.400414e-02 #> 1355 b_wt -4.788428739 7.500874e-02 #> 1356 b_wt -4.782084500 7.601885e-02 #> 1357 b_wt -4.775740260 7.703434e-02 #> 1358 b_wt -4.769396021 7.805512e-02 #> 1359 b_wt -4.763051782 7.908215e-02 #> 1360 b_wt -4.756707542 8.011456e-02 #> 1361 b_wt -4.750363303 8.115217e-02 #> 1362 b_wt -4.744019063 8.219502e-02 #> 1363 b_wt -4.737674824 8.324320e-02 #> 1364 b_wt -4.731330584 8.429682e-02 #> 1365 b_wt -4.724986345 8.535701e-02 #> 1366 b_wt -4.718642106 8.642355e-02 #> 1367 b_wt -4.712297866 8.749626e-02 #> 1368 b_wt -4.705953627 8.857544e-02 #> 1369 b_wt -4.699609387 8.966139e-02 #> 1370 b_wt -4.693265148 9.075445e-02 #> 1371 b_wt -4.686920909 9.185604e-02 #> 1372 b_wt -4.680576669 9.296665e-02 #> 1373 b_wt 
-4.674232430 9.408587e-02 #> 1374 b_wt -4.667888190 9.521417e-02 #> 1375 b_wt -4.661543951 9.635204e-02 #> 1376 b_wt -4.655199712 9.749999e-02 #> 1377 b_wt -4.648855472 9.865958e-02 #> 1378 b_wt -4.642511233 9.983222e-02 #> 1379 b_wt -4.636166993 1.010169e-01 #> 1380 b_wt -4.629822754 1.022140e-01 #> 1381 b_wt -4.623478514 1.034243e-01 #> 1382 b_wt -4.617134275 1.046482e-01 #> 1383 b_wt -4.610790036 1.058871e-01 #> 1384 b_wt -4.604445796 1.071436e-01 #> 1385 b_wt -4.598101557 1.084157e-01 #> 1386 b_wt -4.591757317 1.097037e-01 #> 1387 b_wt -4.585413078 1.110082e-01 #> 1388 b_wt -4.579068839 1.123295e-01 #> 1389 b_wt -4.572724599 1.136686e-01 #> 1390 b_wt -4.566380360 1.150293e-01 #> 1391 b_wt -4.560036120 1.164084e-01 #> 1392 b_wt -4.553691881 1.178061e-01 #> 1393 b_wt -4.547347641 1.192226e-01 #> 1394 b_wt -4.541003402 1.206582e-01 #> 1395 b_wt -4.534659163 1.221132e-01 #> 1396 b_wt -4.528314923 1.235922e-01 #> 1397 b_wt -4.521970684 1.250912e-01 #> 1398 b_wt -4.515626444 1.266102e-01 #> 1399 b_wt -4.509282205 1.281490e-01 #> 1400 b_wt -4.502937966 1.297078e-01 #> 1401 b_wt -4.496593726 1.312866e-01 #> 1402 b_wt -4.490249487 1.328891e-01 #> 1403 b_wt -4.483905247 1.345125e-01 #> 1404 b_wt -4.477561008 1.361557e-01 #> 1405 b_wt -4.471216768 1.378185e-01 #> 1406 b_wt -4.464872529 1.395008e-01 #> 1407 b_wt -4.458528290 1.412024e-01 #> 1408 b_wt -4.452184050 1.429261e-01 #> 1409 b_wt -4.445839811 1.446702e-01 #> 1410 b_wt -4.439495571 1.464329e-01 #> 1411 b_wt -4.433151332 1.482138e-01 #> 1412 b_wt -4.426807093 1.500128e-01 #> 1413 b_wt -4.420462853 1.518295e-01 #> 1414 b_wt -4.414118614 1.536656e-01 #> 1415 b_wt -4.407774374 1.555207e-01 #> 1416 b_wt -4.401430135 1.573924e-01 #> 1417 b_wt -4.395085895 1.592803e-01 #> 1418 b_wt -4.388741656 1.611841e-01 #> 1419 b_wt -4.382397417 1.631033e-01 #> 1420 b_wt -4.376053177 1.650388e-01 #> 1421 b_wt -4.369708938 1.669911e-01 #> 1422 b_wt -4.363364698 1.689574e-01 #> 1423 b_wt -4.357020459 1.709372e-01 #> 1424 b_wt -4.350676220 1.729302e-01 #> 1425 b_wt -4.344331980 1.749359e-01 #> 1426 b_wt -4.337987741 1.769543e-01 #> 1427 b_wt -4.331643501 1.789866e-01 #> 1428 b_wt -4.325299262 1.810297e-01 #> 1429 b_wt -4.318955022 1.830833e-01 #> 1430 b_wt -4.312610783 1.851468e-01 #> 1431 b_wt -4.306266544 1.872197e-01 #> 1432 b_wt -4.299922304 1.893016e-01 #> 1433 b_wt -4.293578065 1.913934e-01 #> 1434 b_wt -4.287233825 1.934927e-01 #> 1435 b_wt -4.280889586 1.955988e-01 #> 1436 b_wt -4.274545347 1.977112e-01 #> 1437 b_wt -4.268201107 1.998294e-01 #> 1438 b_wt -4.261856868 2.019527e-01 #> 1439 b_wt -4.255512628 2.040815e-01 #> 1440 b_wt -4.249168389 2.062140e-01 #> 1441 b_wt -4.242824149 2.083498e-01 #> 1442 b_wt -4.236479910 2.104882e-01 #> 1443 b_wt -4.230135671 2.126287e-01 #> 1444 b_wt -4.223791431 2.147709e-01 #> 1445 b_wt -4.217447192 2.169143e-01 #> 1446 b_wt -4.211102952 2.190581e-01 #> 1447 b_wt -4.204758713 2.212021e-01 #> 1448 b_wt -4.198414474 2.233456e-01 #> 1449 b_wt -4.192070234 2.254885e-01 #> 1450 b_wt -4.185725995 2.276304e-01 #> 1451 b_wt -4.179381755 2.297707e-01 #> 1452 b_wt -4.173037516 2.319093e-01 #> 1453 b_wt -4.166693276 2.340461e-01 #> 1454 b_wt -4.160349037 2.361811e-01 #> 1455 b_wt -4.154004798 2.383141e-01 #> 1456 b_wt -4.147660558 2.404451e-01 #> 1457 b_wt -4.141316319 2.425739e-01 #> 1458 b_wt -4.134972079 2.447008e-01 #> 1459 b_wt -4.128627840 2.468260e-01 #> 1460 b_wt -4.122283601 2.489499e-01 #> 1461 b_wt -4.115939361 2.510726e-01 #> 1462 b_wt -4.109595122 2.531944e-01 #> 1463 b_wt -4.103250882 2.553158e-01 #> 1464 b_wt 
-4.096906643 2.574373e-01 #> 1465 b_wt -4.090562403 2.595595e-01 #> 1466 b_wt -4.084218164 2.616827e-01 #> 1467 b_wt -4.077873925 2.638076e-01 #> 1468 b_wt -4.071529685 2.659347e-01 #> 1469 b_wt -4.065185446 2.680647e-01 #> 1470 b_wt -4.058841206 2.701990e-01 #> 1471 b_wt -4.052496967 2.723375e-01 #> 1472 b_wt -4.046152728 2.744811e-01 #> 1473 b_wt -4.039808488 2.766301e-01 #> 1474 b_wt -4.033464249 2.787854e-01 #> 1475 b_wt -4.027120009 2.809473e-01 #> 1476 b_wt -4.020775770 2.831186e-01 #> 1477 b_wt -4.014431530 2.852981e-01 #> 1478 b_wt -4.008087291 2.874864e-01 #> 1479 b_wt -4.001743052 2.896838e-01 #> 1480 b_wt -3.995398812 2.918910e-01 #> 1481 b_wt -3.989054573 2.941082e-01 #> 1482 b_wt -3.982710333 2.963380e-01 #> 1483 b_wt -3.976366094 2.985792e-01 #> 1484 b_wt -3.970021855 3.008314e-01 #> 1485 b_wt -3.963677615 3.030949e-01 #> 1486 b_wt -3.957333376 3.053697e-01 #> 1487 b_wt -3.950989136 3.076558e-01 #> 1488 b_wt -3.944644897 3.099552e-01 #> 1489 b_wt -3.938300658 3.122665e-01 #> 1490 b_wt -3.931956418 3.145887e-01 #> 1491 b_wt -3.925612179 3.169216e-01 #> 1492 b_wt -3.919267939 3.192647e-01 #> 1493 b_wt -3.912923700 3.216177e-01 #> 1494 b_wt -3.906579460 3.239812e-01 #> 1495 b_wt -3.900235221 3.263543e-01 #> 1496 b_wt -3.893890982 3.287352e-01 #> 1497 b_wt -3.887546742 3.311234e-01 #> 1498 b_wt -3.881202503 3.335180e-01 #> 1499 b_wt -3.874858263 3.359183e-01 #> 1500 b_wt -3.868514024 3.383239e-01 #> 1501 b_wt -3.862169785 3.407338e-01 #> 1502 b_wt -3.855825545 3.431462e-01 #> 1503 b_wt -3.849481306 3.455604e-01 #> 1504 b_wt -3.843137066 3.479753e-01 #> 1505 b_wt -3.836792827 3.503901e-01 #> 1506 b_wt -3.830448587 3.528037e-01 #> 1507 b_wt -3.824104348 3.552145e-01 #> 1508 b_wt -3.817760109 3.576218e-01 #> 1509 b_wt -3.811415869 3.600247e-01 #> 1510 b_wt -3.805071630 3.624222e-01 #> 1511 b_wt -3.798727390 3.648136e-01 #> 1512 b_wt -3.792383151 3.671979e-01 #> 1513 b_wt -3.786038912 3.695723e-01 #> 1514 b_wt -3.779694672 3.719377e-01 #> 1515 b_wt -3.773350433 3.742936e-01 #> 1516 b_wt -3.767006193 3.766391e-01 #> 1517 b_wt -3.760661954 3.789738e-01 #> 1518 b_wt -3.754317714 3.812972e-01 #> 1519 b_wt -3.747973475 3.836058e-01 #> 1520 b_wt -3.741629236 3.859019e-01 #> 1521 b_wt -3.735284996 3.881853e-01 #> 1522 b_wt -3.728940757 3.904556e-01 #> 1523 b_wt -3.722596517 3.927129e-01 #> 1524 b_wt -3.716252278 3.949568e-01 #> 1525 b_wt -3.709908039 3.971848e-01 #> 1526 b_wt -3.703563799 3.993988e-01 #> 1527 b_wt -3.697219560 4.015996e-01 #> 1528 b_wt -3.690875320 4.037873e-01 #> 1529 b_wt -3.684531081 4.059620e-01 #> 1530 b_wt -3.678186841 4.081240e-01 #> 1531 b_wt -3.671842602 4.102716e-01 #> 1532 b_wt -3.665498363 4.124061e-01 #> 1533 b_wt -3.659154123 4.145290e-01 #> 1534 b_wt -3.652809884 4.166406e-01 #> 1535 b_wt -3.646465644 4.187412e-01 #> 1536 b_wt -3.640121405 4.208313e-01 #> 1537 b_wt -3.633777166 4.229099e-01 #> 1538 b_wt -3.627432926 4.249778e-01 #> 1539 b_wt -3.621088687 4.270364e-01 #> 1540 b_wt -3.614744447 4.290860e-01 #> 1541 b_wt -3.608400208 4.311271e-01 #> 1542 b_wt -3.602055968 4.331597e-01 #> 1543 b_wt -3.595711729 4.351837e-01 #> 1544 b_wt -3.589367490 4.371986e-01 #> 1545 b_wt -3.583023250 4.392058e-01 #> 1546 b_wt -3.576679011 4.412053e-01 #> 1547 b_wt -3.570334771 4.431973e-01 #> 1548 b_wt -3.563990532 4.451816e-01 #> 1549 b_wt -3.557646293 4.471577e-01 #> 1550 b_wt -3.551302053 4.491241e-01 #> 1551 b_wt -3.544957814 4.510822e-01 #> 1552 b_wt -3.538613574 4.530313e-01 #> 1553 b_wt -3.532269335 4.549712e-01 #> 1554 b_wt -3.525925095 4.569012e-01 #> 1555 b_wt 
-3.519580856 4.588207e-01 #> 1556 b_wt -3.513236617 4.607262e-01 #> 1557 b_wt -3.506892377 4.626193e-01 #> 1558 b_wt -3.500548138 4.644991e-01 #> 1559 b_wt -3.494203898 4.663647e-01 #> 1560 b_wt -3.487859659 4.682151e-01 #> 1561 b_wt -3.481515420 4.700493e-01 #> 1562 b_wt -3.475171180 4.718619e-01 #> 1563 b_wt -3.468826941 4.736548e-01 #> 1564 b_wt -3.462482701 4.754272e-01 #> 1565 b_wt -3.456138462 4.771777e-01 #> 1566 b_wt -3.449794222 4.789051e-01 #> 1567 b_wt -3.443449983 4.806081e-01 #> 1568 b_wt -3.437105744 4.822802e-01 #> 1569 b_wt -3.430761504 4.839227e-01 #> 1570 b_wt -3.424417265 4.855359e-01 #> 1571 b_wt -3.418073025 4.871183e-01 #> 1572 b_wt -3.411728786 4.886686e-01 #> 1573 b_wt -3.405384547 4.901855e-01 #> 1574 b_wt -3.399040307 4.916622e-01 #> 1575 b_wt -3.392696068 4.930988e-01 #> 1576 b_wt -3.386351828 4.944973e-01 #> 1577 b_wt -3.380007589 4.958565e-01 #> 1578 b_wt -3.373663349 4.971754e-01 #> 1579 b_wt -3.367319110 4.984527e-01 #> 1580 b_wt -3.360974871 4.996826e-01 #> 1581 b_wt -3.354630631 5.008627e-01 #> 1582 b_wt -3.348286392 5.019978e-01 #> 1583 b_wt -3.341942152 5.030870e-01 #> 1584 b_wt -3.335597913 5.041294e-01 #> 1585 b_wt -3.329253674 5.051245e-01 #> 1586 b_wt -3.322909434 5.060678e-01 #> 1587 b_wt -3.316565195 5.069541e-01 #> 1588 b_wt -3.310220955 5.077908e-01 #> 1589 b_wt -3.303876716 5.085775e-01 #> 1590 b_wt -3.297532477 5.093138e-01 #> 1591 b_wt -3.291188237 5.099994e-01 #> 1592 b_wt -3.284843998 5.106320e-01 #> 1593 b_wt -3.278499758 5.112029e-01 #> 1594 b_wt -3.272155519 5.117222e-01 #> 1595 b_wt -3.265811279 5.121898e-01 #> 1596 b_wt -3.259467040 5.126057e-01 #> 1597 b_wt -3.253122801 5.129698e-01 #> 1598 b_wt -3.246778561 5.132821e-01 #> 1599 b_wt -3.240434322 5.135302e-01 #> 1600 b_wt -3.234090082 5.137266e-01 #> 1601 b_wt -3.227745843 5.138715e-01 #> 1602 b_wt -3.221401604 5.139649e-01 #> 1603 b_wt -3.215057364 5.140071e-01 #> 1604 b_wt -3.208713125 5.139981e-01 #> 1605 b_wt -3.202368885 5.139278e-01 #> 1606 b_wt -3.196024646 5.138052e-01 #> 1607 b_wt -3.189680406 5.136323e-01 #> 1608 b_wt -3.183336167 5.134094e-01 #> 1609 b_wt -3.176991928 5.131369e-01 #> 1610 b_wt -3.170647688 5.128149e-01 #> 1611 b_wt -3.164303449 5.124357e-01 #> 1612 b_wt -3.157959209 5.120045e-01 #> 1613 b_wt -3.151614970 5.115254e-01 #> 1614 b_wt -3.145270731 5.109988e-01 #> 1615 b_wt -3.138926491 5.104251e-01 #> 1616 b_wt -3.132582252 5.098049e-01 #> 1617 b_wt -3.126238012 5.091327e-01 #> 1618 b_wt -3.119893773 5.084106e-01 #> 1619 b_wt -3.113549533 5.076441e-01 #> 1620 b_wt -3.107205294 5.068340e-01 #> 1621 b_wt -3.100861055 5.059811e-01 #> 1622 b_wt -3.094516815 5.050860e-01 #> 1623 b_wt -3.088172576 5.041458e-01 #> 1624 b_wt -3.081828336 5.031599e-01 #> 1625 b_wt -3.075484097 5.021353e-01 #> 1626 b_wt -3.069139858 5.010728e-01 #> 1627 b_wt -3.062795618 4.999736e-01 #> 1628 b_wt -3.056451379 4.988388e-01 #> 1629 b_wt -3.050107139 4.976675e-01 #> 1630 b_wt -3.043762900 4.964576e-01 #> 1631 b_wt -3.037418660 4.952164e-01 #> 1632 b_wt -3.031074421 4.939453e-01 #> 1633 b_wt -3.024730182 4.926457e-01 #> 1634 b_wt -3.018385942 4.913188e-01 #> 1635 b_wt -3.012041703 4.899656e-01 #> 1636 b_wt -3.005697463 4.885832e-01 #> 1637 b_wt -2.999353224 4.871788e-01 #> 1638 b_wt -2.993008985 4.857539e-01 #> 1639 b_wt -2.986664745 4.843098e-01 #> 1640 b_wt -2.980320506 4.828479e-01 #> 1641 b_wt -2.973976266 4.813699e-01 #> 1642 b_wt -2.967632027 4.798741e-01 #> 1643 b_wt -2.961287787 4.783657e-01 #> 1644 b_wt -2.954943548 4.768461e-01 #> 1645 b_wt -2.948599309 4.753167e-01 #> 1646 b_wt 
-2.942255069 4.737787e-01 #> 1647 b_wt -2.935910830 4.722335e-01 #> 1648 b_wt -2.929566590 4.706813e-01 #> 1649 b_wt -2.923222351 4.691247e-01 #> 1650 b_wt -2.916878112 4.675648e-01 #> 1651 b_wt -2.910533872 4.660026e-01 #> 1652 b_wt -2.904189633 4.644390e-01 #> 1653 b_wt -2.897845393 4.628749e-01 #> 1654 b_wt -2.891501154 4.613112e-01 #> 1655 b_wt -2.885156914 4.597486e-01 #> 1656 b_wt -2.878812675 4.581878e-01 #> 1657 b_wt -2.872468436 4.566291e-01 #> 1658 b_wt -2.866124196 4.550729e-01 #> 1659 b_wt -2.859779957 4.535193e-01 #> 1660 b_wt -2.853435717 4.519690e-01 #> 1661 b_wt -2.847091478 4.504220e-01 #> 1662 b_wt -2.840747239 4.488778e-01 #> 1663 b_wt -2.834402999 4.473362e-01 #> 1664 b_wt -2.828058760 4.457970e-01 #> 1665 b_wt -2.821714520 4.442599e-01 #> 1666 b_wt -2.815370281 4.427246e-01 #> 1667 b_wt -2.809026041 4.411903e-01 #> 1668 b_wt -2.802681802 4.396564e-01 #> 1669 b_wt -2.796337563 4.381219e-01 #> 1670 b_wt -2.789993323 4.365864e-01 #> 1671 b_wt -2.783649084 4.350489e-01 #> 1672 b_wt -2.777304844 4.335084e-01 #> 1673 b_wt -2.770960605 4.319633e-01 #> 1674 b_wt -2.764616366 4.304131e-01 #> 1675 b_wt -2.758272126 4.288568e-01 #> 1676 b_wt -2.751927887 4.272935e-01 #> 1677 b_wt -2.745583647 4.257221e-01 #> 1678 b_wt -2.739239408 4.241416e-01 #> 1679 b_wt -2.732895168 4.225483e-01 #> 1680 b_wt -2.726550929 4.209434e-01 #> 1681 b_wt -2.720206690 4.193259e-01 #> 1682 b_wt -2.713862450 4.176946e-01 #> 1683 b_wt -2.707518211 4.160488e-01 #> 1684 b_wt -2.701173971 4.143875e-01 #> 1685 b_wt -2.694829732 4.127058e-01 #> 1686 b_wt -2.688485493 4.110058e-01 #> 1687 b_wt -2.682141253 4.092873e-01 #> 1688 b_wt -2.675797014 4.075493e-01 #> 1689 b_wt -2.669452774 4.057912e-01 #> 1690 b_wt -2.663108535 4.040122e-01 #> 1691 b_wt -2.656764295 4.022077e-01 #> 1692 b_wt -2.650420056 4.003796e-01 #> 1693 b_wt -2.644075817 3.985288e-01 #> 1694 b_wt -2.637731577 3.966546e-01 #> 1695 b_wt -2.631387338 3.947569e-01 #> 1696 b_wt -2.625043098 3.928352e-01 #> 1697 b_wt -2.618698859 3.908857e-01 #> 1698 b_wt -2.612354620 3.889094e-01 #> 1699 b_wt -2.606010380 3.869086e-01 #> 1700 b_wt -2.599666141 3.848831e-01 #> 1701 b_wt -2.593321901 3.828329e-01 #> 1702 b_wt -2.586977662 3.807582e-01 #> 1703 b_wt -2.580633423 3.786563e-01 #> 1704 b_wt -2.574289183 3.765270e-01 #> 1705 b_wt -2.567944944 3.743737e-01 #> 1706 b_wt -2.561600704 3.721968e-01 #> 1707 b_wt -2.555256465 3.699966e-01 #> 1708 b_wt -2.548912225 3.677733e-01 #> 1709 b_wt -2.542567986 3.655259e-01 #> 1710 b_wt -2.536223747 3.632530e-01 #> 1711 b_wt -2.529879507 3.609589e-01 #> 1712 b_wt -2.523535268 3.586442e-01 #> 1713 b_wt -2.517191028 3.563094e-01 #> 1714 b_wt -2.510846789 3.539552e-01 #> 1715 b_wt -2.504502550 3.515816e-01 #> 1716 b_wt -2.498158310 3.491866e-01 #> 1717 b_wt -2.491814071 3.467750e-01 #> 1718 b_wt -2.485469831 3.443473e-01 #> 1719 b_wt -2.479125592 3.419045e-01 #> 1720 b_wt -2.472781352 3.394474e-01 #> 1721 b_wt -2.466437113 3.369769e-01 #> 1722 b_wt -2.460092874 3.344912e-01 #> 1723 b_wt -2.453748634 3.319945e-01 #> 1724 b_wt -2.447404395 3.294878e-01 #> 1725 b_wt -2.441060155 3.269719e-01 #> 1726 b_wt -2.434715916 3.244480e-01 #> 1727 b_wt -2.428371677 3.219169e-01 #> 1728 b_wt -2.422027437 3.193786e-01 #> 1729 b_wt -2.415683198 3.168357e-01 #> 1730 b_wt -2.409338958 3.142891e-01 #> 1731 b_wt -2.402994719 3.117400e-01 #> 1732 b_wt -2.396650479 3.091893e-01 #> 1733 b_wt -2.390306240 3.066379e-01 #> 1734 b_wt -2.383962001 3.040870e-01 #> 1735 b_wt -2.377617761 3.015378e-01 #> 1736 b_wt -2.371273522 2.989911e-01 #> 1737 b_wt 
-2.364929282 2.964479e-01 #> 1738 b_wt -2.358585043 2.939087e-01 #> 1739 b_wt -2.352240804 2.913743e-01 #> 1740 b_wt -2.345896564 2.888464e-01 #> 1741 b_wt -2.339552325 2.863255e-01 #> 1742 b_wt -2.333208085 2.838117e-01 #> 1743 b_wt -2.326863846 2.813053e-01 #> 1744 b_wt -2.320519606 2.788068e-01 #> 1745 b_wt -2.314175367 2.763165e-01 #> 1746 b_wt -2.307831128 2.738356e-01 #> 1747 b_wt -2.301486888 2.713646e-01 #> 1748 b_wt -2.295142649 2.689025e-01 #> 1749 b_wt -2.288798409 2.664492e-01 #> 1750 b_wt -2.282454170 2.640047e-01 #> 1751 b_wt -2.276109931 2.615689e-01 #> 1752 b_wt -2.269765691 2.591420e-01 #> 1753 b_wt -2.263421452 2.567247e-01 #> 1754 b_wt -2.257077212 2.543150e-01 #> 1755 b_wt -2.250732973 2.519126e-01 #> 1756 b_wt -2.244388733 2.495169e-01 #> 1757 b_wt -2.238044494 2.471276e-01 #> 1758 b_wt -2.231700255 2.447441e-01 #> 1759 b_wt -2.225356015 2.423666e-01 #> 1760 b_wt -2.219011776 2.399932e-01 #> 1761 b_wt -2.212667536 2.376233e-01 #> 1762 b_wt -2.206323297 2.352562e-01 #> 1763 b_wt -2.199979058 2.328913e-01 #> 1764 b_wt -2.193634818 2.305279e-01 #> 1765 b_wt -2.187290579 2.281652e-01 #> 1766 b_wt -2.180946339 2.258023e-01 #> 1767 b_wt -2.174602100 2.234385e-01 #> 1768 b_wt -2.168257860 2.210733e-01 #> 1769 b_wt -2.161913621 2.187062e-01 #> 1770 b_wt -2.155569382 2.163365e-01 #> 1771 b_wt -2.149225142 2.139631e-01 #> 1772 b_wt -2.142880903 2.115859e-01 #> 1773 b_wt -2.136536663 2.092047e-01 #> 1774 b_wt -2.130192424 2.068194e-01 #> 1775 b_wt -2.123848185 2.044297e-01 #> 1776 b_wt -2.117503945 2.020353e-01 #> 1777 b_wt -2.111159706 1.996357e-01 #> 1778 b_wt -2.104815466 1.972311e-01 #> 1779 b_wt -2.098471227 1.948220e-01 #> 1780 b_wt -2.092126987 1.924088e-01 #> 1781 b_wt -2.085782748 1.899916e-01 #> 1782 b_wt -2.079438509 1.875709e-01 #> 1783 b_wt -2.073094269 1.851466e-01 #> 1784 b_wt -2.066750030 1.827197e-01 #> 1785 b_wt -2.060405790 1.802911e-01 #> 1786 b_wt -2.054061551 1.778615e-01 #> 1787 b_wt -2.047717312 1.754316e-01 #> 1788 b_wt -2.041373072 1.730023e-01 #> 1789 b_wt -2.035028833 1.705745e-01 #> 1790 b_wt -2.028684593 1.681499e-01 #> 1791 b_wt -2.022340354 1.657292e-01 #> 1792 b_wt -2.015996114 1.633135e-01 #> 1793 b_wt -2.009651875 1.609037e-01 #> 1794 b_wt -2.003307636 1.585010e-01 #> 1795 b_wt -1.996963396 1.561071e-01 #> 1796 b_wt -1.990619157 1.537249e-01 #> 1797 b_wt -1.984274917 1.513539e-01 #> 1798 b_wt -1.977930678 1.489954e-01 #> 1799 b_wt -1.971586439 1.466506e-01 #> 1800 b_wt -1.965242199 1.443206e-01 #> 1801 b_wt -1.958897960 1.420070e-01 #> 1802 b_wt -1.952553720 1.397150e-01 #> 1803 b_wt -1.946209481 1.374420e-01 #> 1804 b_wt -1.939865242 1.351892e-01 #> 1805 b_wt -1.933521002 1.329577e-01 #> 1806 b_wt -1.927176763 1.307485e-01 #> 1807 b_wt -1.920832523 1.285627e-01 #> 1808 b_wt -1.914488284 1.264070e-01 #> 1809 b_wt -1.908144044 1.242776e-01 #> 1810 b_wt -1.901799805 1.221750e-01 #> 1811 b_wt -1.895455566 1.200999e-01 #> 1812 b_wt -1.889111326 1.180530e-01 #> 1813 b_wt -1.882767087 1.160352e-01 #> 1814 b_wt -1.876422847 1.140527e-01 #> 1815 b_wt -1.870078608 1.121023e-01 #> 1816 b_wt -1.863734369 1.101830e-01 #> 1817 b_wt -1.857390129 1.082952e-01 #> 1818 b_wt -1.851045890 1.064392e-01 #> 1819 b_wt -1.844701650 1.046153e-01 #> 1820 b_wt -1.838357411 1.028287e-01 #> 1821 b_wt -1.832013171 1.010777e-01 #> 1822 b_wt -1.825668932 9.935938e-02 #> 1823 b_wt -1.819324693 9.767385e-02 #> 1824 b_wt -1.812980453 9.602105e-02 #> 1825 b_wt -1.806636214 9.440092e-02 #> 1826 b_wt -1.800291974 9.281703e-02 #> 1827 b_wt -1.793947735 9.126951e-02 #> 1828 b_wt 
#> [ remaining rows of density estimates for b_wt and b_cyl omitted ] # }"},{"path":"https://easystats.github.io/bayestestR/reference/eti.html","id":null,"dir":"Reference","previous_headings":"","what":"Equal-Tailed Interval (ETI) — eti","title":"Equal-Tailed Interval (ETI) — eti","text":"Compute the Equal-Tailed Interval (ETI) of posterior distributions using the quantiles method. The probability of being below this interval is equal to the probability of being above it. The ETI can be used in the context of uncertainty characterisation of posterior distributions as Credible Interval (CI).","code":""},
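As a minimal sketch of the quantiles method named above (illustrative only; draws, ci and manual_eti are hypothetical names, and this is not necessarily the package's internal implementation):

# An ETI is simply a pair of symmetric quantiles of the posterior draws.
set.seed(1)
draws <- rnorm(1000)                # stand-in for posterior samples
ci <- 0.95
alpha <- (1 - ci) / 2               # 0.025: probability mass cut from each tail
manual_eti <- quantile(draws, probs = c(alpha, 1 - alpha))
manual_eti                          # should closely match bayestestR::eti(draws)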
{"path":"https://easystats.github.io/bayestestR/reference/eti.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Equal-Tailed Interval (ETI) — eti","text":"","code":"eti(x, ...) # S3 method for class 'numeric' eti(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'data.frame' eti(x, ci = 0.95, rvar_col = NULL, verbose = TRUE, ...) # S3 method for class 'stanreg' eti( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' eti( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'get_predicted' eti(x, ci = 0.95, use_iterations = FALSE, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/eti.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Equal-Tailed Interval (ETI) — eti","text":"x: Vector representing a posterior distribution, or a data frame of such vectors. Can also be a Bayesian model. bayestestR supports a wide range of models (see, for example, methods(\"hdi\")); not all of them are documented in the 'Usage' section, because methods for other classes mostly resemble the arguments of the .numeric or .data.frame methods. ...: Currently not used. ci: Value or vector of probability of the (credible) interval - CI (between 0 and 1) to be estimated. Defaults to .95 (95%). verbose: Toggle off warnings. rvar_col: A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). effects: Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component: Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters: Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output. use_iterations: Logical, if TRUE and x is a get_predicted object (returned by insight::get_predicted()), the function is applied to the iterations instead of the predictions. This only applies to models that return iterations for predicted values (e.g., brmsfit models).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/eti.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Equal-Tailed Interval (ETI) — eti","text":"A data frame with the following columns: Parameter - the model parameter(s), if x is a model object; if x is a vector, this column is missing. CI - the probability of the credible interval. CI_low, CI_high - the lower and upper credible interval limits for the parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/eti.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Equal-Tailed Interval (ETI) — eti","text":"Unlike equal-tailed intervals (see eti()), which typically exclude 2.5% from each tail of the distribution and always include the median, the HDI is not equal-tailed and therefore always includes the mode(s) of posterior distributions. While this can be useful to better represent the credibility mass of a distribution, the HDI also has some limitations. See spi() for details. The 95% or 89% Credible Intervals (CI) are two reasonable ranges to characterize the uncertainty related to the estimation (see here for a discussion about the differences between these two values).
The 89% intervals (ci = 0.89) are deemed to be more stable than, for instance, 95% intervals (Kruschke, 2014). An effective sample size of at least 10.000 is recommended if one wants to estimate 95% intervals with high precision (Kruschke, 2014, p. 183ff). Unfortunately, the default number of posterior samples in most Bayes packages (e.g., rstanarm or brms) is only 4.000 (thus, you might want to increase it when fitting your model). Moreover, 89 indicates the arbitrariness of interval limits - its only remarkable property is being the highest prime number that does not exceed the already unstable 95% threshold (McElreath, 2015). However, 95% also has some advantages. For instance, it shares (in the case of a normal posterior distribution) an intuitive relationship with the standard deviation, and it conveys a more accurate image of the (artificial) bounds of the distribution. Also, because it is wider, it makes analyses more conservative (i.e., the probability of covering 0 is larger for the 95% CI than for lower ranges such as 89%), which is a good thing in the context of the reproducibility crisis. A 95% equal-tailed interval (ETI) has 2.5% of the distribution on either side of its limits: it ranges from the 2.5th percentile to the 97.5th percentile. For symmetric distributions, the two methods of computing credible intervals, the ETI and the HDI, return similar results. This is not the case for skewed distributions. Indeed, it is possible that parameter values within the ETI have lower credibility (are less probable) than parameter values outside the ETI. This property seems undesirable as a summary of the credible values in a distribution. On the other hand, the ETI range does change when transformations are applied to the distribution (for instance, from a log odds scale to probabilities): the lower and higher bounds of the transformed distribution will correspond to the transformed lower and higher bounds of the original distribution. On the contrary, applying transformations to the distribution will change the resulting HDI.","code":""},{"path":[]},
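The transformation claim above is easy to check empirically. A minimal sketch, assuming hypothetical posterior draws on a log-odds scale (all object names here are illustrative, not part of the documented API):

# ETI bounds commute with a monotone transformation; HDI bounds generally do not.
set.seed(42)
log_odds <- rnorm(5000, mean = 0.5, sd = 1.2)   # hypothetical posterior draws
probs <- plogis(log_odds)                       # the same draws on the probability scale

eti_lo <- bayestestR::eti(log_odds)
plogis(c(eti_lo$CI_low, eti_lo$CI_high))        # transformed ETI bounds...
bayestestR::eti(probs)                          # ...match the ETI of the transformed draws

hdi_lo <- bayestestR::hdi(log_odds)
plogis(c(hdi_lo$CI_low, hdi_lo$CI_high))        # transformed HDI bounds...
bayestestR::hdi(probs)                          # ...will generally differ from this HDI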
{"path":"https://easystats.github.io/bayestestR/reference/eti.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Equal-Tailed Interval (ETI) — eti","text":"","code":"library(bayestestR) posterior <- rnorm(1000) eti(posterior) #> 95% ETI: [-1.93, 1.84] eti(posterior, ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> 80% ETI | 89% ETI | 95% ETI #> --------------------------------------------- #> [-1.29, 1.25] | [-1.58, 1.53] | [-1.93, 1.84] df <- data.frame(replicate(4, rnorm(100))) eti(df) #> Equal-Tailed Interval #> #> Parameter | 95% ETI #> ------------------------- #> X1 | [-1.93, 2.19] #> X2 | [-1.70, 1.96] #> X3 | [-1.91, 1.63] #> X4 | [-1.87, 1.87] eti(df, ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> Parameter | 80% ETI | 89% ETI | 95% ETI #> --------------------------------------------------------- #> X1 | [-1.16, 1.28] | [-1.74, 1.78] | [-1.93, 2.19] #> X2 | [-0.96, 1.62] | [-1.40, 1.70] | [-1.70, 1.96] #> X3 | [-1.07, 0.88] | [-1.52, 1.15] | [-1.91, 1.63] #> X4 | [-1.20, 1.18] | [-1.67, 1.47] | [-1.87, 1.87] # \donttest{ model <- suppressWarnings( rstanarm::stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) eti(model) #> Equal-Tailed Interval #> #> Parameter | 95% ETI | Effects | Component #> ---------------------------------------------------- #> (Intercept) | [29.80, 50.38] | fixed | conditional #> wt | [-6.99, -3.94] | fixed | conditional #> gear | [-2.21, 1.24] | fixed | conditional eti(model, ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> Parameter | 80% ETI | 89% ETI | 95% ETI | Effects | Component #> -------------------------------------------------------------------------------------- #> (Intercept) | [32.43, 45.66] | [31.18, 47.98] | [29.80, 50.38] | fixed | conditional #> wt | [-6.44, -4.62] | [-6.66, -4.35] | [-6.99, -3.94] | fixed | conditional #> gear | [-1.52, 0.79] | [-1.88, 0.99] | [-2.21, 1.24] | fixed | conditional eti(emmeans::emtrends(model, ~1, \"wt\", data = mtcars)) #> Equal-Tailed Interval #> #> X1 | 95% ETI #> ------------------------ #> overall | [-6.99, -3.94] model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [ sampling progress output for 4 chains omitted ]
eti(model) #> Equal-Tailed Interval #> #> Parameter | 95% ETI | Effects | Component #> ---------------------------------------------------- #> b_Intercept | [36.12, 43.25] | fixed | conditional #> b_wt | [-4.78, -1.64] | fixed | conditional #> b_cyl | [-2.38, -0.66] | fixed | conditional eti(model, ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> Parameter | 80% ETI | 89% ETI | 95% ETI | Effects | Component #> -------------------------------------------------------------------------------------- #> b_Intercept | [37.42, 41.98] | [36.82, 42.64] | [36.12, 43.25] | fixed | conditional #> b_wt | [-4.19, -2.20] | [-4.45, -1.95] | [-4.78, -1.64] | fixed | conditional #> b_cyl | [-2.06, -0.96] | [-2.19, -0.82] | [-2.38, -0.66] | fixed | conditional bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1)) eti(bf) #> Equal-Tailed Interval #> #> Parameter | 95% ETI #> ------------------------- #> Difference | [0.85, 1.26] eti(bf, ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> Parameter | 80% ETI | 89% ETI | 95% ETI #> ------------------------------------------------------- #> Difference | [0.92, 1.19] | [0.89, 1.22] | [0.85, 1.26] # }"},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":null,"dir":"Reference","previous_headings":"","what":"Highest Density Interval (HDI) — hdi","title":"Highest Density Interval (HDI) — hdi","text":"Compute the Highest Density Interval (HDI) of posterior distributions. All points within this interval have a higher probability density than points outside the interval. The HDI can be used in the context of uncertainty characterisation of posterior distributions as Credible Interval (CI).","code":""},
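As a rough sketch of how such an interval can be found from raw draws, the classic shortest-interval scan over the sorted sample (illustrative naming and code, not necessarily the package's internal algorithm):

# Shortest-interval approximation of a 95% HDI from posterior draws.
hdi_sketch <- function(draws, ci = 0.95) {
  sorted <- sort(draws)
  n <- length(sorted)
  k <- ceiling(ci * n)                          # number of draws the interval must cover
  widths <- sorted[k:n] - sorted[seq_len(n - k + 1)]
  i <- which.min(widths)                        # start of the narrowest window
  c(CI_low = sorted[i], CI_high = sorted[i + k - 1])
}
set.seed(7)
hdi_sketch(rchisq(5000, df = 3))                # compare with bayestestR::hdi()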
{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Highest Density Interval (HDI) — hdi","text":"","code":"hdi(x, ...) # S3 method for class 'numeric' hdi(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'data.frame' hdi(x, ci = 0.95, rvar_col = NULL, verbose = TRUE, ...) # S3 method for class 'stanreg' hdi( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' hdi( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'get_predicted' hdi(x, ci = 0.95, use_iterations = FALSE, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Highest Density Interval (HDI) — hdi","text":"x: Vector representing a posterior distribution, or a data frame of such vectors. Can also be a Bayesian model. bayestestR supports a wide range of models (see, for example, methods(\"hdi\")); not all of them are documented in the 'Usage' section, because methods for other classes mostly resemble the arguments of the .numeric or .data.frame methods. ...: Currently not used. ci: Value or vector of probability of the (credible) interval - CI (between 0 and 1) to be estimated. Defaults to .95 (95%). verbose: Toggle off warnings. rvar_col: A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). effects: Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component: Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters: Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output. use_iterations: Logical, if TRUE and x is a get_predicted object (returned by insight::get_predicted()), the function is applied to the iterations instead of the predictions. This only applies to models that return iterations for predicted values (e.g., brmsfit models).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Highest Density Interval (HDI) — hdi","text":"A data frame with the following columns: Parameter - the model parameter(s), if x is a model object; if x is a vector, this column is missing. CI - the probability of the credible interval. CI_low, CI_high - the lower and upper credible interval limits for the parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Highest Density Interval (HDI) — hdi","text":"Unlike equal-tailed intervals (see eti()), which typically exclude 2.5% from each tail of the distribution and always include the median, the HDI is not equal-tailed and therefore always includes the mode(s) of posterior distributions. While this can be useful to better represent the credibility mass of a distribution, the HDI also has some limitations. See spi() for details. The 95% or 89% Credible Intervals (CI) are two reasonable ranges to characterize the uncertainty related to the estimation (see here for a discussion about the differences between these two values). The 89% intervals (ci = 0.89) are deemed to be more stable than, for instance, 95% intervals (Kruschke, 2014). An effective sample size of at least 10.000 is recommended if one wants to estimate 95% intervals with high precision (Kruschke, 2014, p. 183ff). Unfortunately, the default number of posterior samples in most Bayes packages (e.g., rstanarm or brms) is only 4.000 (thus, you might want to increase it when fitting your model). Moreover, 89 indicates the arbitrariness of interval limits - its only remarkable property is being the highest prime number that does not exceed the already unstable 95% threshold (McElreath, 2015). However, 95% also has some advantages. For instance, it shares (in the case of a normal posterior distribution) an intuitive relationship with the standard deviation, and it conveys a more accurate image of the (artificial) bounds of the distribution. Also, because it is wider, it makes analyses more conservative (i.e., the probability of covering 0 is larger for the 95% CI than for lower ranges such as 89%), which is a good thing in the context of the reproducibility crisis. A 95% equal-tailed interval (ETI) has 2.5% of the distribution on either side of its limits: it ranges from the 2.5th percentile to the 97.5th percentile. For symmetric distributions, the two methods of computing credible intervals, the ETI and the HDI, return similar results. This is not the case for skewed distributions.
{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Highest Density Interval (HDI) — hdi","text":"There is also a plot()-method implemented in the see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Highest Density Interval (HDI) — hdi","text":"Kruschke, J. (2014). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. Academic Press. McElreath, R. (2015). Statistical rethinking: A Bayesian course with examples in R and Stan. Chapman and Hall/CRC.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Highest Density Interval (HDI) — hdi","text":"Credits go to ggdistribute and HDInterval.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Highest Density Interval (HDI) — hdi","text":"","code":"library(bayestestR) posterior <- rnorm(1000) hdi(posterior, ci = 0.89) #> 89% HDI: [-1.46, 1.79] hdi(posterior, ci = c(0.80, 0.90, 0.95)) #> Highest Density Interval #> #> 80% HDI | 90% HDI | 95% HDI #> --------------------------------------------- #> [-1.39, 1.24] | [-1.59, 1.77] | [-2.10, 1.80] bayestestR::hdi(iris[1:4]) #> Identical densities found along different segments of the distribution, #> choosing rightmost. #> Highest Density Interval #> #> Parameter | 95% HDI #> --------------------------- #> Sepal.Length | [4.60, 7.70] #> Sepal.Width | [2.20, 3.90] #> Petal.Length | [1.00, 6.10] #> Petal.Width | [0.10, 2.30] bayestestR::hdi(iris[1:4], ci = c(0.80, 0.90, 0.95)) #> Identical densities found along different segments of the distribution, #> choosing rightmost. #> Identical densities found along different segments of the distribution, #> choosing rightmost. #> Identical densities found along different segments of the distribution, #> choosing rightmost.
#> Highest Density Interval #> #> Parameter | 80% HDI | 90% HDI | 95% HDI #> --------------------------------------------------------- #> Sepal.Length | [4.90, 6.90] | [4.40, 6.90] | [4.60, 7.70] #> Sepal.Width | [2.50, 3.60] | [2.40, 3.80] | [2.20, 3.90] #> Petal.Length | [1.30, 5.50] | [1.10, 5.80] | [1.00, 6.10] #> Petal.Width | [0.10, 1.90] | [0.20, 2.30] | [0.10, 2.30] # \\donttest{ model <- suppressWarnings( rstanarm::stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) bayestestR::hdi(model) #> Highest Density Interval #> #> Parameter | 95% HDI #> ---------------------------- #> (Intercept) | [29.21, 49.50] #> wt | [-6.99, -4.05] #> gear | [-2.18, 1.68] bayestestR::hdi(model, ci = c(0.80, 0.90, 0.95)) #> Highest Density Interval #> #> Parameter | 80% HDI | 90% HDI | 95% HDI #> -------------------------------------------------------------- #> (Intercept) | [31.68, 46.67] | [29.21, 47.18] | [29.21, 49.50] #> wt | [-6.30, -4.23] | [-6.70, -4.11] | [-6.99, -4.05] #> gear | [-1.53, 1.08] | [-1.89, 1.41] | [-2.18, 1.68] bayestestR::hdi(emmeans::emtrends(model, ~1, \"wt\", data = mtcars)) #> Highest Density Interval #> #> X1 | 95% HDI #> ------------------------ #> overall | [-6.99, -4.05] model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress output for chains 1-4 omitted] bayestestR::hdi(model) #> Highest Density Interval #> #> Parameter | 95% HDI #> ---------------------------- #> (Intercept) | [36.15, 43.24] #> wt | [-4.81, -1.60] #> cyl | [-2.37, -0.66] bayestestR::hdi(model, ci = c(0.80, 0.90, 0.95)) #> Highest Density Interval #> #> Parameter | 80% HDI | 90% HDI | 95% HDI #> -------------------------------------------------------------- #> (Intercept) | [37.05, 41.78] | [36.79, 42.75] | [36.15, 43.24] #> wt | [-4.22, -2.23] | [-4.53, -1.88] | [-4.81, -1.60] #> cyl | [-2.07, -0.97] | [-2.20, -0.78] | [-2.37, -0.66] bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1)) bayestestR::hdi(bf) #> Highest Density Interval #> #> Parameter | 95% HDI #> ------------------------- #> Difference | [0.77, 1.19] bayestestR::hdi(bf, ci = c(0.80, 0.90, 0.95)) #> Highest Density Interval #> #> Parameter | 80% HDI | 90% HDI | 95% HDI #> ------------------------------------------------------- #> Difference | [0.85, 1.12] | [0.82, 1.17] | [0.78, 1.20] # }"},{"path":"https://easystats.github.io/bayestestR/reference/map_estimate.html","id":null,"dir":"Reference","previous_headings":"","what":"Maximum A Posteriori probability estimate (MAP) — map_estimate","title":"Maximum A Posteriori probability estimate (MAP) — map_estimate","text":"Find the Highest Maximum A Posteriori probability estimate (MAP) of a posterior, i.e., the value associated with the highest probability density (the \"peak\" of the posterior distribution). In other words, it is an estimation of the mode for continuous parameters.
Note that this function relies on estimate_density(), which by default uses a different smoothing bandwidth (\"SJ\") compared to the legacy default implemented in the base R density() function (\"nrd0\").","code":""},{"path":"https://easystats.github.io/bayestestR/reference/map_estimate.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Maximum A Posteriori probability estimate (MAP) — map_estimate","text":"","code":"map_estimate(x, ...) # S3 method for class 'numeric' map_estimate(x, precision = 2^10, method = \"kernel\", ...) # S3 method for class 'stanreg' map_estimate( x, precision = 2^10, method = \"kernel\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, ... ) # S3 method for class 'brmsfit' map_estimate( x, precision = 2^10, method = \"kernel\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ... ) # S3 method for class 'data.frame' map_estimate(x, precision = 2^10, method = \"kernel\", rvar_col = NULL, ...) # S3 method for class 'get_predicted' map_estimate( x, precision = 2^10, method = \"kernel\", use_iterations = FALSE, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/map_estimate.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Maximum A Posteriori probability estimate (MAP) — map_estimate","text":"x A vector representing a posterior distribution, or a data frame of such vectors. Can also be a Bayesian model. bayestestR supports a wide range of models (see, for example, methods(\"hdi\")); not all of them are documented in the 'Usage' section, because methods for other classes mostly resemble the arguments of the .numeric or .data.frame methods. ... Currently not used. precision Number of points of density data. See the n parameter in density. method Density estimation method. Can be \"kernel\" (default), \"logspline\" or \"KernSmooth\". effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). use_iterations Logical, if TRUE and x is a get_predicted object (as returned by insight::get_predicted()), the function is applied to the iterations instead of the predictions. This only applies to models that return iterations for predicted values (e.g., brmsfit models). verbose Toggle off warnings.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/map_estimate.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Maximum A Posteriori probability estimate (MAP) — map_estimate","text":"A numeric value if x is a vector. If x is a model-object, it returns a data frame with the following columns: Parameter: The model parameter(s), if x is a model-object. If x is a vector, this column is missing. MAP_Estimate: The MAP estimate for the posterior or each model parameter.","code":""},
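Since the MAP estimate depends on the density approximation, the method argument can noticeably affect the result. A minimal sketch (the draws and seed are illustrative):

set.seed(1)
x <- rnorm(1000)
map_estimate(x, method = "kernel")     # default kernel density
map_estimate(x, method = "logspline")  # logspline-based density
map_estimate(x, method = "KernSmooth") # KernSmooth-based density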
{"path":"https://easystats.github.io/bayestestR/reference/map_estimate.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Maximum A Posteriori probability estimate (MAP) — map_estimate","text":"","code":"# \\donttest{ library(bayestestR) posterior <- rnorm(10000) map_estimate(posterior) #> MAP Estimate #> #> Parameter | MAP_Estimate #> ------------------------ #> x | 0.06 plot(density(posterior)) abline(v = as.numeric(map_estimate(posterior)), col = \"red\") model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars) #> [Stan sampling progress output for chains 1-4 omitted] map_estimate(model) #> MAP Estimate #> #> Parameter | MAP_Estimate #> -------------------------- #> (Intercept) | 39.51 #> wt | -3.24 #> cyl | -1.39 model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress output for chains 1-4 omitted] map_estimate(model) #> MAP Estimate #> #> Parameter | MAP_Estimate #> -------------------------- #> b_Intercept | 39.67 #> b_wt | -3.06 #> b_cyl | -1.58 # }"},{"path":"https://easystats.github.io/bayestestR/reference/mcse.html","id":null,"dir":"Reference","previous_headings":"","what":"Monte-Carlo Standard Error (MCSE) — mcse","title":"Monte-Carlo Standard Error (MCSE) — mcse","text":"This function returns the Monte Carlo Standard Error (MCSE).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mcse.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Monte-Carlo Standard Error (MCSE) — mcse","text":"","code":"mcse(model, ...)
# S3 method for class 'stanreg' mcse( model, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/mcse.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Monte-Carlo Standard Error (MCSE) — mcse","text":"model A stanreg, stanfit, brmsfit, blavaan, or MCMCglmm object. ... Currently not used. effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mcse.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Monte-Carlo Standard Error (MCSE) — mcse","text":"The Monte Carlo Standard Error (MCSE) is another measure of the accuracy of the chains. It is defined as the standard deviation of the chains divided by the square root of their effective sample size (the formula for mcse() is from Kruschke 2015, p. 187). The MCSE “provides a quantitative suggestion of how big the estimation noise is”.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mcse.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Monte-Carlo Standard Error (MCSE) — mcse","text":"Kruschke, J. (2015). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. Academic Press.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mcse.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Monte-Carlo Standard Error (MCSE) — mcse","text":"","code":"# \\donttest{ library(bayestestR) model <- suppressWarnings( rstanarm::stan_glm(mpg ~ wt + am, data = mtcars, chains = 1, refresh = 0) ) mcse(model) #> Parameter MCSE #> 1 (Intercept) 0.15783056 #> 2 wt 0.04047611 #> 3 am 0.07802692 # }"},
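To make the definition concrete, here is a rough sketch of the computation for a single vector of draws (illustrative only, not bayestestR's internal code; coda::effectiveSize() is assumed to be available for the effective sample size):

draws <- rnorm(4000)              # stand-in for posterior draws of one parameter
ess <- coda::effectiveSize(draws) # effective sample size
sd(draws) / sqrt(ess)             # MCSE = SD / sqrt(ESS)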
{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":null,"dir":"Reference","previous_headings":"","what":"Summary of Bayesian multivariate-response mediation-models — mediation","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"mediation() is a short summary for multivariate-response mediation-models, i.e. this function computes average direct and average causal mediation effects of multivariate response models.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"","code":"mediation(model, ...) # S3 method for class 'brmsfit' mediation( model, treatment, mediator, response = NULL, centrality = \"median\", ci = 0.95, method = \"ETI\", ... ) # S3 method for class 'stanmvreg' mediation( model, treatment, mediator, response = NULL, centrality = \"median\", ci = 0.95, method = \"ETI\", ... )"},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"model A brmsfit or stanmvreg object. ... Not used. treatment Character, the name of the treatment variable (direct effect) in a (multivariate response) mediator-model. If missing, mediation() tries to find the treatment variable automatically; however, this may fail. mediator Character, the name of the mediator variable in a (multivariate response) mediator-model. If missing, mediation() tries to find the mediator variable automatically; however, this may fail. response A named character vector, indicating the names of the response variables to be used for the mediation analysis. Usually this can be NULL, in which case these variables are retrieved automatically. If not NULL, the names should match the names of the model formulas, names(insight::find_response(model, combine = TRUE)). This can be useful if, for instance, the mediator variable used as a predictor has a different name from the mediator variable used as a response. This might occur when the mediator is transformed in one model, but used \"as is\" as the response variable in the other model. Example: The mediator m is used as the response variable, but the centered version m_center is used as the mediator variable. The second response variable (for the treatment model, with the mediator as an additional predictor), y, is not transformed. Then we could use response like this: mediation(model, response = c(m = \"m_center\", y = \"y\")). centrality The point-estimates (centrality indices) to compute. Character (vector) or list with one or more of these options: \"median\", \"mean\", \"MAP\" (see map_estimate()), \"trimmed\" (which is just mean(x, trim = threshold)), \"mode\" or \"all\". ci Value or vector of probability of the CI (between 0 and 1) to be estimated. Default to 0.95 (95%). method Can be \"ETI\" (default), \"HDI\", \"BCI\", \"SPI\" or \"SI\".","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"A data frame with the direct, indirect, mediator and total effect of a multivariate-response mediation-model, as well as the proportion mediated. The effect sizes are median values of the posterior samples (use centrality for other centrality indices).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"mediation() returns a data frame with information on the direct effect (mean value of the posterior samples from treatment of the outcome model), the mediator effect (mean value of the posterior samples from mediator of the outcome model), the indirect effect (mean value of the multiplication of the posterior samples from mediator of the outcome model and the posterior samples from treatment of the mediation model) and the total effect (mean value of the sums of posterior samples used for the direct and indirect effect). The proportion mediated is the indirect effect divided by the total effect. For all values, the 89% credible intervals are calculated by default. Use ci to calculate a different interval. The arguments treatment and mediator do not necessarily need to be specified. If missing, mediation() tries to find the treatment and mediator variable automatically. If this does not work, specify these variables. The direct effect is also called the average direct effect (ADE), and the indirect effect is also called the average causal mediation effect (ACME). See also Tingley et al. 2014 and Imai et al. 2010.","code":""},
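The arithmetic behind these quantities can be sketched with made-up posterior draws (all names and values here are hypothetical, purely to illustrate the Details above):

set.seed(1)
a <- rnorm(4000, 0.5, 0.1)   # posterior draws: treatment -> mediator
b <- rnorm(4000, 0.3, 0.1)   # posterior draws: mediator -> outcome
ade <- rnorm(4000, 0.2, 0.1) # posterior draws: direct effect (ADE)
acme <- a * b                # indirect effect (ACME)
total <- ade + acme          # total effect
median(acme / total)         # proportion mediated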
{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"There is an as.data.frame() method that returns the posterior samples of the effects, which can be used for further processing with the different functions of the bayestestR package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"Imai, K., Keele, L. and Tingley, D. (2010) A General Approach to Causal Mediation Analysis, Psychological Methods, Vol. 15, No. 4 (December), pp. 309-334. Tingley, D., Yamamoto, T., Hirose, K., Imai, K. and Keele, L. (2014). mediation: R Package for Causal Mediation Analysis, Journal of Statistical Software, Vol. 59, No. 5, pp. 1-38.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"","code":"# \\donttest{ library(mediation) library(brms) library(rstanarm) # load sample data data(jobs) set.seed(123) # linear models, for mediation analysis b1 <- lm(job_seek ~ treat + econ_hard + sex + age, data = jobs) b2 <- lm(depress2 ~ treat + job_seek + econ_hard + sex + age, data = jobs) # mediation analysis, for comparison with Stan models m1 <- mediate(b1, b2, sims = 1000, treat = \"treat\", mediator = \"job_seek\") # Fit Bayesian mediation model in brms f1 <- bf(job_seek ~ treat + econ_hard + sex + age) f2 <- bf(depress2 ~ treat + job_seek + econ_hard + sex + age) m2 <- brm(f1 + f2 + set_rescor(FALSE), data = jobs, refresh = 0) #> Compiling Stan program... #> Start sampling # Fit Bayesian mediation model in rstanarm m3 <- suppressWarnings(stan_mvmer( list( job_seek ~ treat + econ_hard + sex + age + (1 | occp), depress2 ~ treat + job_seek + econ_hard + sex + age + (1 | occp) ), data = jobs, refresh = 0 )) #> Fitting a multivariate glmer model. #> #> Please note the warmup may be much slower than later iterations! summary(m1) #> #> Causal Mediation Analysis #> #> Quasi-Bayesian Confidence Intervals #> #> Estimate 95% CI Lower 95% CI Upper p-value #> ACME -0.0157 -0.0387 0.01 0.19 #> ADE -0.0438 -0.1315 0.04 0.35 #> Total Effect -0.0595 -0.1530 0.02 0.21 #> Prop.
Mediated 0.2137 -2.0277 2.70 0.32 #> #> Sample Size Used: 899 #> #> #> Simulations: 1000 #> mediation(m2, centrality = \"mean\", ci = 0.95) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.125, 0.045] #> Indirect Effect (ACME) | -0.016 | [-0.041, 0.009] #> Mediator Effect | -0.240 | [-0.295, -0.183] #> Total Effect | -0.056 | [-0.139, 0.032] #> #> Proportion mediated: 27.87% [-169.41%, 225.15%] #> mediation(m3, centrality = \"mean\", ci = 0.95) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.041 | [-0.124, 0.041] #> Indirect Effect (ACME) | -0.018 | [-0.043, 0.006] #> Mediator Effect | -0.241 | [-0.298, -0.183] #> Total Effect | -0.059 | [-0.145, 0.029] #> #> Proportion mediated: 30.30% [-216.03%, 276.63%] #> # }"},{"path":"https://easystats.github.io/bayestestR/reference/model_to_priors.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert model's posteriors to priors (EXPERIMENTAL) — model_to_priors","title":"Convert model's posteriors to priors (EXPERIMENTAL) — model_to_priors","text":"Convert a model's posteriors to (normal) priors.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/model_to_priors.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert model's posteriors to priors (EXPERIMENTAL) — model_to_priors","text":"","code":"model_to_priors(model, scale_multiply = 3, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/model_to_priors.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert model's posteriors to priors (EXPERIMENTAL) — model_to_priors","text":"model A Bayesian model. scale_multiply The SD of the posterior will be multiplied by this amount before being set as a prior, to avoid overly narrow priors. ... Other arguments for insight::get_prior() or describe_posterior.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/model_to_priors.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Convert model's posteriors to priors (EXPERIMENTAL) — model_to_priors","text":"","code":"# \\donttest{ # brms models # ----------------------------------------------- if (require(\"brms\")) { formula <- brms::brmsformula(mpg ~ wt + cyl, center = FALSE) model <- brms::brm(formula, data = mtcars, refresh = 0) priors <- model_to_priors(model) priors <- brms::validate_prior(priors, formula, data = mtcars) priors model2 <- brms::brm(formula, data = mtcars, prior = priors, refresh = 0) } #> Compiling Stan program... #> Start sampling #> Compiling Stan program...
#> Start sampling # }"},{"path":"https://easystats.github.io/bayestestR/reference/overlap.html","id":null,"dir":"Reference","previous_headings":"","what":"Overlap Coefficient — overlap","title":"Overlap Coefficient — overlap","text":"A method to calculate the overlap coefficient between two empirical distributions (which can be used as a measure of similarity between two samples).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/overlap.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Overlap Coefficient — overlap","text":"","code":"overlap( x, y, method_density = \"kernel\", method_auc = \"trapezoid\", precision = 2^10, extend = TRUE, extend_scale = 0.1, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/overlap.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Overlap Coefficient — overlap","text":"x Vector of x values. y Vector of y values. method_density Density estimation method. See estimate_density(). method_auc Area Under the Curve (AUC) estimation method. See area_under_curve(). precision Number of points of density data. See the n parameter in density. extend Extend the range of the x axis by a factor of extend_scale. extend_scale Ratio of the range by which to extend the x axis. A value of 0.1 means that the x axis will be extended by 1/10 of the range of the data. ... Currently not used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/overlap.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Overlap Coefficient — overlap","text":"","code":"library(bayestestR) x <- distribution_normal(1000, 2, 0.5) y <- distribution_normal(1000, 0, 1) overlap(x, y) #> # Overlap #> #> 18.6% plot(overlap(x, y))"},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":null,"dir":"Reference","previous_headings":"","what":"Probability of Direction (pd) — p_direction","title":"Probability of Direction (pd) — p_direction","text":"Compute the Probability of Direction (pd, also known as the Maximum Probability of Effect - MPE). This can be interpreted as the probability that a parameter (described by its posterior distribution) is strictly positive or negative (whichever is the most probable). Although differently expressed, this index is fairly similar (i.e., strongly correlated) to the frequentist p-value (see details).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Probability of Direction (pd) — p_direction","text":"","code":"p_direction(x, ...) pd(x, ...) # S3 method for class 'numeric' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ... ) # S3 method for class 'data.frame' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, rvar_col = NULL, ... ) # S3 method for class 'MCMCglmm' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ... ) # S3 method for class 'emmGrid' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ... ) # S3 method for class 'slopes' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ... ) # S3 method for class 'stanreg' p_direction( x, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ...
) # S3 method for class 'brmsfit' p_direction( x, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ... ) # S3 method for class 'BFBayesFactor' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ... ) # S3 method for class 'get_predicted' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, use_iterations = FALSE, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Probability of Direction (pd) — p_direction","text":"x A vector representing a posterior distribution, or a data frame of posterior draws (samples by parameter). Can also be a Bayesian model. ... Currently not used. method Can be \"direct\" or one of the methods of estimate_density(), such as \"kernel\", \"logspline\" or \"KernSmooth\". See details. null The value considered as a \"null\" effect. Traditionally 0, but could also be 1 in the case of ratios of change (OR, IRR, ...). as_p If TRUE, the p-direction (pd) values are converted to a frequentist p-value using pd_to_p(). remove_na Should missing values be removed before the computation? Note that Inf (infinity) values are not removed. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output. use_iterations Logical, if TRUE and x is a get_predicted object (as returned by insight::get_predicted()), the function is applied to the iterations instead of the predictions. This only applies to models that return iterations for predicted values (e.g., brmsfit models). verbose Toggle off warnings.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Probability of Direction (pd) — p_direction","text":"Values between 0.5 and 1 or between 0 and 1 (see the 'Possible Range of Values' section) corresponding to the probability of direction (pd).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Probability of Direction (pd) — p_direction","text":"There is also a plot()-method implemented in the see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"what-is-the-pd-","dir":"Reference","previous_headings":"","what":"What is the pd?","title":"Probability of Direction (pd) — p_direction","text":"The Probability of Direction (pd) is an index of effect existence, representing the certainty with which an effect goes in a particular direction (i.e., is positive or negative / has a sign), typically ranging from 0.5 to 1 (but see the next section for cases where it can range between 0 and 1). Beyond its simplicity of interpretation, understanding and computation, this index also presents other interesting properties: Like other posterior-based indices, the pd is solely based on the posterior distributions and does not require any additional information from the data or the model (e.g., priors, as is the case for Bayes factors). It is robust to the scale of both the response variable and the predictors.
It is strongly correlated with the frequentist p-value, and can thus be used to draw parallels and give some reference to readers non-familiar with Bayesian statistics (Makowski et al., 2019).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"relationship-with-the-p-value","dir":"Reference","previous_headings":"","what":"Relationship with the p-value","title":"Probability of Direction (pd) — p_direction","text":"In most cases, it seems that the pd has a direct correspondence with the frequentist one-sided p-value through the formula (for a two-sided p): p = 2 * (1 - pd) Thus, a two-sided p-value of respectively .1, .05, .01 and .001 corresponds approximately to a pd of 95%, 97.5%, 99.5% and 99.95%. See pd_to_p() for details.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"possible-range-of-values","dir":"Reference","previous_headings":"","what":"Possible Range of Values","title":"Probability of Direction (pd) — p_direction","text":"The largest value the pd can take is 1 - the posterior is strictly directional. However, the smallest value the pd can take depends on the parameter space represented by the posterior. For a continuous parameter space, exact values of 0 (or any point null value) are not possible, and so 100% of the posterior has some sign, either positive or negative. Therefore, the smallest the pd can be is 0.5 - with an equal posterior mass of positive and negative values. Values close to 0.5 cannot be used to support the null hypothesis (that the parameter does not have a direction), just as large p-values cannot be used to support the null hypothesis (see pd_to_p(); Makowski et al., 2019). For a discrete parameter space, or a parameter space that is a mixture of discrete and continuous spaces, exact values of 0 (or of any point null value) are possible! Therefore, the smallest the pd can be is 0 - with 100% of the posterior mass at 0. Thus, values close to 0 can be used to support the null hypothesis (see van den Bergh et al., 2021). Examples of posteriors representing a discrete parameter space: When a parameter can only take discrete values. When a mixture prior/posterior is used (such as the spike-and-slab prior; see van den Bergh et al., 2021). When conducting Bayesian model averaging (e.g., weighted_posteriors() or brms::posterior_average).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"methods-of-computation","dir":"Reference","previous_headings":"","what":"Methods of computation","title":"Probability of Direction (pd) — p_direction","text":"The pd is defined as: $$p_d = max({Pr(\\hat{\\theta} < \\theta_{null}), Pr(\\hat{\\theta} > \\theta_{null})})$$ The simplest and most direct way to compute the pd is to compute the proportion of positive (or larger than the null) posterior samples and the proportion of negative (or smaller than the null) posterior samples, and take the larger of the two. This \"simple\" method is the most straightforward, but its precision is directly tied to the number of posterior draws. The second approach relies on density estimation: it starts by estimating a continuous-smooth density function (for which many methods are available), then computes the area under the curve (AUC) of the density curve on either side of the null, and takes the maximum. Note that this approach assumes a continuous density function, so when the posterior represents a (partially) discrete parameter space, the direct method must be used (see above).","code":""},
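Both the direct method and the p-value correspondence are easy to verify by hand. A minimal sketch (draws and seed illustrative):

set.seed(1)
posterior <- rnorm(1000, mean = 1)
pd <- max(mean(posterior > 0), mean(posterior < 0)) # direct method
pd
2 * (1 - pd) # two-sided p-value via the formula above
pd_to_p(pd)  # the same conversion via pd_to_p()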
{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Probability of Direction (pd) — p_direction","text":"Makowski, D., Ben-Shachar, M. S., Chen, S. H. A., & Lüdecke, D. (2019). Indices of effect existence and significance in the Bayesian framework. Frontiers in Psychology, 10, 2767. doi:10.3389/fpsyg.2019.02767 van den Bergh, D., Haaf, J. M., Ly, A., Rouder, J. N., & Wagenmakers, E. J. (2021). A cautionary note on estimating effect size. Advances in Methods and Practices in Psychological Science, 4(1). doi:10.1177/2515245921992035","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Probability of Direction (pd) — p_direction","text":"","code":"library(bayestestR) # Simulate a posterior distribution of mean 1 and SD 1 # ---------------------------------------------------- posterior <- rnorm(1000, mean = 1, sd = 1) p_direction(posterior) #> Probability of Direction #> #> Parameter | pd #> ------------------ #> Posterior | 84.50% p_direction(posterior, method = \"kernel\") #> Probability of Direction #> #> Parameter | pd #> ------------------ #> Posterior | 83.17% # Simulate a dataframe of posterior distributions # ----------------------------------------------- df <- data.frame(replicate(4, rnorm(100))) p_direction(df) #> Probability of Direction #> #> Parameter | pd #> ------------------ #> X1 | 51.00% #> X2 | 52.00% #> X3 | 51.00% #> X4 | 58.00% p_direction(df, method = \"kernel\") #> Probability of Direction #> #> Parameter | pd #> ------------------ #> X1 | 51.24% #> X2 | 51.93% #> X3 | 50.15% #> X4 | 59.86% # \\donttest{ # rstanarm models # ----------------------------------------------- model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars, chains = 2, refresh = 0 ) p_direction(model) #> Probability of Direction #> #> Parameter | pd #> ------------------ #> (Intercept) | 100% #> wt | 100% #> cyl | 100% p_direction(model, method = \"kernel\") #> Probability of Direction #> #> Parameter | pd #> --------------------- #> (Intercept) | 100.00% #> wt | 99.98% #> cyl | 99.97% # emmeans # ----------------------------------------------- p_direction(emmeans::emtrends(model, ~1, \"wt\", data = mtcars)) #> Probability of Direction #> #> X1 | pd #> -------------- #> overall | 100% # brms models # ----------------------------------------------- model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress output for chains 1-4 omitted] p_direction(model) #> Probability of Direction #> #> Parameter | pd #> -------------------- #> (Intercept) | 100% #> wt | 100% #> cyl | 99.98% p_direction(model, method = \"kernel\") #> Probability of Direction #> #> Parameter | pd #> -------------------- #> (Intercept) | 100% #> wt | 99.99% #> cyl | 99.97% # BayesFactor objects # ----------------------------------------------- bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1)) p_direction(bf) #> Probability of Direction #> #> Parameter | pd #> ----------------- #> Difference | 100% p_direction(bf, method = \"kernel\") #> Probability of Direction #> #> Parameter | pd #> ----------------- #> Difference | 100% # } # Using \"rvar_col\" x <- data.frame(mu = c(0, 0.5, 1), sigma = c(1, 0.5, 0.25)) x$my_rvar <- posterior::rvar_rng(rnorm, 3, mean = x$mu, sd = x$sigma) x #> mu sigma my_rvar #> 1 0.0 1.00 -0.01 ± 0.98 #> 2 0.5 0.50 0.49 ± 0.50 #> 3 1.0 0.25 1.00 ± 0.25 p_direction(x, rvar_col = \"my_rvar\") #> Probability of Direction #> #> mu | sigma | pd #> --------------------- #> 0.00 | 1.00 | 50.10% #> 0.50 | 0.50 | 83.90% #> 1.00 | 0.25 | 100%"},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":null,"dir":"Reference","previous_headings":"","what":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"Compute a Bayesian equivalent of the p-value, related to the odds that a parameter (described by its posterior distribution) has against the null hypothesis (h0), using Mills' (2014, 2017) Objective Bayesian Hypothesis Testing framework. It corresponds to the density value at the null (e.g., 0) divided by the density at the Maximum A Posteriori (MAP).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"","code":"p_map(x, ...) p_pointnull(x, ...) # S3 method for class 'numeric' p_map(x, null = 0, precision = 2^10, method = \"kernel\", ...) # S3 method for class 'get_predicted' p_map( x, null = 0, precision = 2^10, method = \"kernel\", use_iterations = FALSE, verbose = TRUE, ... ) # S3 method for class 'data.frame' p_map(x, null = 0, precision = 2^10, method = \"kernel\", rvar_col = NULL, ...) # S3 method for class 'stanreg' p_map( x, null = 0, precision = 2^10, method = \"kernel\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, ...
) # S3 method for class 'brmsfit' p_map( x, null = 0, precision = 2^10, method = \"kernel\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"x A vector representing a posterior distribution, or a data frame of such vectors. Can also be a Bayesian model. bayestestR supports a wide range of models (see, for example, methods(\"hdi\")); not all of them are documented in the 'Usage' section, because methods for other classes mostly resemble the arguments of the .numeric or .data.frame methods. ... Currently not used. null The value considered as a \"null\" effect. Traditionally 0, but could also be 1 in the case of ratios of change (OR, IRR, ...). precision Number of points of density data. See the n parameter in density. method Density estimation method. Can be \"kernel\" (default), \"logspline\" or \"KernSmooth\". use_iterations Logical, if TRUE and x is a get_predicted object (as returned by insight::get_predicted()), the function is applied to the iterations instead of the predictions. This only applies to models that return iterations for predicted values (e.g., brmsfit models). verbose Toggle off warnings. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"Note that this method is sensitive to the density estimation method (see the section on this in the examples below).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":"strengths-and-limitations","dir":"Reference","previous_headings":"","what":"Strengths and Limitations","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"Strengths: Straightforward computation. Objective property of the posterior distribution. Limitations: Provides only limited information favoring the null hypothesis. Relies on a density approximation. Indirect relationship between its mathematical definition and its interpretation. Only suitable for weak / diffused priors.","code":""},
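The underlying ratio can be sketched directly from a density estimate. Base R density() is used here for illustration (p_map() itself relies on estimate_density(), so the values will differ slightly):

set.seed(1)
x <- rnorm(1000, mean = 0.8)
d <- density(x)
d$y[which.min(abs(d$x - 0))] / max(d$y) # density at null / density at MAP
p_map(x)                                # compare with p_map()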
{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"Makowski D, Ben-Shachar MS, Chen SHA, Lüdecke D (2019) Indices of Effect Existence and Significance in the Bayesian Framework. Frontiers in Psychology 2019;10:2767. doi:10.3389/fpsyg.2019.02767 Mills, J. A. (2018). Objective Bayesian Precise Hypothesis Testing. University of Cincinnati.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"","code":"library(bayestestR) p_map(rnorm(1000, 0, 1)) #> MAP-based p-value #> #> Parameter | p (MAP) #> ------------------- #> Posterior | 0.998 p_map(rnorm(1000, 10, 1)) #> MAP-based p-value #> #> Parameter | p (MAP) #> ------------------- #> Posterior | < .001 # \\donttest{ model <- suppressWarnings( rstanarm::stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) p_map(model) #> MAP-based p-value #> #> Parameter | p (MAP) #> --------------------- #> (Intercept) | < .001 #> wt | < .001 #> gear | 0.963 p_map(suppressWarnings( emmeans::emtrends(model, ~1, \"wt\", data = mtcars) )) #> MAP-based p-value #> #> X1 | p (MAP) #> ----------------- #> overall | < .001 model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress output for chains 1-4 omitted] p_map(model) #> MAP-based p-value #> #> Parameter | p (MAP) #> --------------------- #> (Intercept) | < .001 #> wt | 0.002 #> cyl | 0.005 bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1)) p_map(bf) #> MAP-based p-value #> #> Parameter | p (MAP) #> -------------------- #> Difference | < .001 # --------------------------------------- # Robustness to density estimation method set.seed(333) data <- data.frame() for (iteration in 1:250) { x <- rnorm(1000, 1, 1) result <- data.frame( Kernel = as.numeric(p_map(x, method = \"kernel\")), KernSmooth = as.numeric(p_map(x, method = \"KernSmooth\")), logspline = as.numeric(p_map(x, method = \"logspline\")) ) data <- rbind(data, result) } data$KernSmooth <- data$Kernel - data$KernSmooth data$logspline <- data$Kernel - data$logspline summary(data$KernSmooth) #> Min. 1st Qu. Median Mean 3rd Qu. Max. #> -0.039724 -0.007909 -0.003885 -0.005338 -0.001128 0.056325 summary(data$logspline) #> Min. 1st Qu. Median Mean 3rd Qu. Max. #> -0.092243 -0.009008 0.022214 0.026966 0.066303 0.166870 boxplot(data[c(\"KernSmooth\", \"logspline\")]) # }"},{"path":"https://easystats.github.io/bayestestR/reference/p_rope.html","id":null,"dir":"Reference","previous_headings":"","what":"Probability of being in the ROPE — p_rope","title":"Probability of being in the ROPE — p_rope","text":"Compute the proportion of the whole posterior distribution that lies within a region of practical equivalence (ROPE). It is equivalent to running rope(..., ci = 1).","code":""},
# S3 method for class 'numeric' p_rope(x, range = \"default\", verbose = TRUE, ...) # S3 method for class 'data.frame' p_rope(x, range = \"default\", rvar_col = NULL, verbose = TRUE, ...) # S3 method for class 'stanreg' p_rope( x, range = \"default\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' p_rope( x, range = \"default\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/p_rope.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Probability of being in the ROPE — p_rope","text":"x Vector representing posterior distribution. Can also stanreg brmsfit model. ... Currently used. range ROPE's lower higher bounds. \"default\" depending number outcome variables vector list. models one response, range can : vector length two (e.g., c(-0.1, 0.1)), list numeric vector length numbers parameters (see 'Examples'). list named numeric vectors, names correspond parameter names. case, parameters matching name range set \"default\". multivariate models, range list another list (one response variable) numeric vectors . Vector names correspond name response variables. \"default\" input vector, range set c(-0.1, 0.1). \"default\" input Bayesian model, rope_range() used. See 'Examples'. verbose Toggle warnings. rvar_col single character - name rvar column data frame processed. See example p_direction(). effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_rope.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Probability of being in the ROPE — p_rope","text":"","code":"library(bayestestR) p_rope(x = rnorm(1000, 0, 0.01), range = c(-0.1, 0.1)) #> Proportion of samples inside the ROPE [-0.10, 0.10]: > .999 p_rope(x = mtcars, range = c(-0.1, 0.1)) #> Proportion of samples inside the ROPE [-0.10, 0.10] #> #> Parameter | p (ROPE) #> -------------------- #> mpg | < .001 #> cyl | < .001 #> disp | < .001 #> hp | < .001 #> drat | < .001 #> wt | < .001 #> qsec | < .001 #> vs | 0.562 #> am | 0.594 #> gear | < .001 #> carb | < .001"},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":null,"dir":"Reference","previous_headings":"","what":"Practical Significance (ps) — p_significance","title":"Practical Significance (ps) — p_significance","text":"Compute probability Practical Significance (ps), can conceptualized unidirectional equivalence test. returns probability effect given threshold corresponding negligible effect median's direction. 
Mathematically, defined proportion posterior distribution median sign threshold.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Practical Significance (ps) — p_significance","text":"","code":"p_significance(x, ...) # S3 method for class 'numeric' p_significance(x, threshold = \"default\", ...) # S3 method for class 'get_predicted' p_significance( x, threshold = \"default\", use_iterations = FALSE, verbose = TRUE, ... ) # S3 method for class 'data.frame' p_significance(x, threshold = \"default\", rvar_col = NULL, ...) # S3 method for class 'stanreg' p_significance( x, threshold = \"default\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' p_significance( x, threshold = \"default\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Practical Significance (ps) — p_significance","text":"x Vector representing posterior distribution. Can also stanreg brmsfit model. ... Currently used. threshold threshold value separates significant negligible effect, can following possible values: \"default\", case range set 0.1 input vector, based rope_range() (Bayesian) model provided. single numeric value (e.g., 0.1), used range around zero (.e. threshold range set -0.1 0.1, .e. reflects symmetric interval) numeric vector length two (e.g., c(-0.2, 0.1)), useful asymmetric intervals list numeric vectors, vector corresponds parameter list named numeric vectors, names correspond parameter names. case, parameters matching name threshold set \"default\". use_iterations Logical, TRUE x get_predicted object, (returned insight::get_predicted()), function applied iterations instead predictions. applies models return iterations predicted values (e.g., brmsfit models). verbose Toggle warnings. rvar_col single character - name rvar column data frame processed. See example p_direction(). effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Practical Significance (ps) — p_significance","text":"Values 0 1 corresponding probability practical significance (ps).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Practical Significance (ps) — p_significance","text":"p_significance() returns proportion probability distribution (x) outside certain range (negligible effect, ROPE, see argument threshold). 
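To make this definition concrete, here is a minimal hand-computed sketch (toy data; the 0.1 threshold mirrors the default for plain vectors), taking the proportion of samples beyond the threshold in the direction of the median; it should closely agree with p_significance():

set.seed(123)
posterior <- rnorm(1000, mean = 1, sd = 1)
threshold <- 0.1
# proportion of samples beyond the threshold, in the median's direction
ps_manual <- if (median(posterior) >= 0) {
  mean(posterior > threshold)
} else {
  mean(posterior < -threshold)
}
ps_manual
bayestestR::p_significance(posterior, threshold = 0.1)  # should closely agree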
values distribution ROPE, p_significance() returns higher probability value outside ROPE. Typically, value larger 0.5 indicate practical significance. However, range negligible effect rather large compared range probability distribution x, p_significance() less 0.5, indicates clear practical significance.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Practical Significance (ps) — p_significance","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Practical Significance (ps) — p_significance","text":"","code":"library(bayestestR) # Simulate a posterior distribution of mean 1 and SD 1 # ---------------------------------------------------- posterior <- rnorm(1000, mean = 1, sd = 1) p_significance(posterior) #> Practical Significance (threshold: 0.10) #> #> Parameter | ps #> ---------------- #> Posterior | 0.80 # Simulate a dataframe of posterior distributions # ----------------------------------------------- df <- data.frame(replicate(4, rnorm(100))) p_significance(df) #> Practical Significance (threshold: 0.10) #> #> Parameter | ps #> ---------------- #> X1 | 0.51 #> X2 | 0.55 #> X3 | 0.47 #> X4 | 0.47 # \\donttest{ # rstanarm models # ----------------------------------------------- model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars, chains = 2, refresh = 0 ) p_significance(model) #> Practical Significance (threshold: 0.60) #> #> Parameter | ps #> ------------------ #> (Intercept) | 1.00 #> wt | 1.00 #> cyl | 0.98 # multiple thresholds - asymmetric, symmetric, default p_significance(model, threshold = list(c(-10, 5), 0.2, \"default\")) #> Practical Significance #> #> Parameter | ps | ROPE #> ----------------------------------- #> (Intercept) | 1.00 | [-10.00, 5.00] #> wt | 1.00 | [ -0.20, 0.20] #> cyl | 0.98 | [ -0.60, 0.60] # named thresholds p_significance(model, threshold = list(wt = 0.2, `(Intercept)` = c(-10, 5))) #> Practical Significance #> #> Parameter | ps | ROPE #> ----------------------------------- #> (Intercept) | 1.00 | [-10.00, 5.00] #> wt | 1.00 | [ -0.20, 0.20] #> cyl | 0.98 | [ -0.60, 0.60] # }"},{"path":"https://easystats.github.io/bayestestR/reference/p_to_bf.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","title":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","text":"Convert p-values (pseudo) Bayes Factors. transformation suggested Wagenmakers (2022), based vast amount assumptions. might therefore not reliable. Use risks. accurate approximate Bayes factors, use bic_to_bf() instead.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_to_bf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","text":"","code":"p_to_bf(x, ...) # S3 method for class 'numeric' p_to_bf(x, log = FALSE, n_obs = NULL, ...) # Default S3 method p_to_bf(x, log = FALSE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/p_to_bf.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","text":"x (frequentist) model object, (numeric) vector p-values. ... arguments passed (used now). log Whether return log Bayes Factors. 
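As a hand-computed illustration of what p_to_bf() approximates (a sketch of the 3·p·sqrt(n) rule from the Wagenmakers (2022) reference cited below, not the package internals), the pseudo-BF against the null for the mtcars example further down can be reproduced directly:

m1 <- lm(mpg ~ am, data = mtcars)
p <- summary(m1)$coefficients["am", "Pr(>|t|)"]
n <- nrow(mtcars)
# Wagenmakers (2022): BF01 is approximately 3 * p * sqrt(n), so the
# BF against the null is its reciprocal
1 / (3 * p * sqrt(n))  # ~206.7, matching the p_to_bf(m1) output below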
Note: print() method always shows BF - \"log_BF\" column accessible returned data frame. n_obs Number observations. Either length 1, length p.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_to_bf.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","text":"data frame p-values pseudo-Bayes factors (null).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_to_bf.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","text":"Wagenmakers, E.J. (2022). Approximate objective Bayes factors p-values sample size: 3p(sqrt(n)) rule. Preprint available ArXiv: https://psyarxiv.com/egydq","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/p_to_bf.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","text":"","code":"data(iris) model <- lm(Petal.Length ~ Sepal.Length + Species, data = iris) p_to_bf(model) #> Pseudo-BF (against NULL) #> #> Parameter | p | BF #> ------------------------------------- #> (Intercept) | < .001 | 2.71e+09 #> Sepal.Length | < .001 | 2.43e+26 #> Speciesversicolor | < .001 | 2.82e+64 #> Speciesvirginica | < .001 | 5.53e+68 # Examples that demonstrate comparison between # BIC-approximated and pseudo BF # -------------------------------------------- m0 <- lm(mpg ~ 1, mtcars) m1 <- lm(mpg ~ am, mtcars) m2 <- lm(mpg ~ factor(cyl), mtcars) # In this first example, BIC-approximated BF and # pseudo-BF based on p-values are close... # BIC-approximated BF, m1 against null model bic_to_bf(BIC(m1), denominator = BIC(m0)) #> [1] 222.005 # pseudo-BF based on p-values - dropping intercept p_to_bf(m1)[-1, ] #> Pseudo-BF (against NULL) #> #> Parameter | p | BF #> --------------------------- #> am | < .001 | 206.74 # The second example shows that results from pseudo-BF are less accurate # and should be handled wit caution! bic_to_bf(BIC(m2), denominator = BIC(m0)) #> [1] 45355714 p_to_bf(anova(m2), n_obs = nrow(mtcars)) #> Pseudo-BF (against NULL) #> #> Parameter | p | BF #> ------------------------------- #> factor(cyl) | < .001 | 1.18e+07"},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"Enables conversion Probability Direction (pd) p-value.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"","code":"pd_to_p(pd, ...) # S3 method for class 'numeric' pd_to_p(pd, direction = \"two-sided\", verbose = TRUE, ...) p_to_pd(p, direction = \"two-sided\", ...) convert_p_to_pd(p, direction = \"two-sided\", ...) convert_pd_to_p(pd, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"pd Probability Direction (pd) value (0 1). Can also data frame column named pd, p_direction, PD, returned p_direction(). 
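A minimal sketch of the two numeric conversions spelled out in the Details further below:

pd <- 0.95
2 * (1 - pd)  # two-sided p-value: 0.1
1 - pd        # one-sided p-value: 0.05
# equivalent to pd_to_p(0.95) and pd_to_p(0.95, direction = "one-sided")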
case, column converted p-values new data frame returned. ... Arguments passed methods. direction type p-value requested provided. Can \"two-sided\" (default, two tailed) \"one-sided\" (one tailed). verbose Toggle warnings. p p-value.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"p-value data frame p-value column.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"Conversion done using following equation (see Makowski et al., 2019): direction = \"two-sided\" p = 2 * (1 - pd) direction = \"one-sided\" p = 1 - pd Note conversion valid lowest possible values pd 0.5 - .e., posterior represents continuous parameter space (see p_direction()). pd < 0.5 detected, converted p 1, warning given.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"Makowski, D., Ben-Shachar, M. S., Chen, S. H. ., Lüdecke, D. (2019). Indices Effect Existence Significance Bayesian Framework. Frontiers Psychology 2019;10:2767. doi:10.3389/fpsyg.2019.02767","code":""},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"","code":"pd_to_p(pd = 0.95) #> [1] 0.1 pd_to_p(pd = 0.95, direction = \"one-sided\") #> [1] 0.05"},{"path":"https://easystats.github.io/bayestestR/reference/point_estimate.html","id":null,"dir":"Reference","previous_headings":"","what":"Point-estimates of posterior distributions — point_estimate","title":"Point-estimates of posterior distributions — point_estimate","text":"Compute various point-estimates, mean, median MAP, describe posterior distributions.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/point_estimate.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Point-estimates of posterior distributions — point_estimate","text":"","code":"point_estimate(x, ...) # S3 method for class 'numeric' point_estimate(x, centrality = \"all\", dispersion = FALSE, threshold = 0.1, ...) # S3 method for class 'data.frame' point_estimate( x, centrality = \"all\", dispersion = FALSE, threshold = 0.1, rvar_col = NULL, ... ) # S3 method for class 'stanreg' point_estimate( x, centrality = \"all\", dispersion = FALSE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, ... ) # S3 method for class 'brmsfit' point_estimate( x, centrality = \"all\", dispersion = FALSE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ... ) # S3 method for class 'BFBayesFactor' point_estimate(x, centrality = \"all\", dispersion = FALSE, ...) # S3 method for class 'get_predicted' point_estimate( x, centrality = \"all\", dispersion = FALSE, use_iterations = FALSE, verbose = TRUE, ... 
)"},{"path":"https://easystats.github.io/bayestestR/reference/point_estimate.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Point-estimates of posterior distributions — point_estimate","text":"x Vector representing posterior distribution, data frame vectors. Can also Bayesian model. bayestestR supports wide range models (see, example, methods(\"hdi\")) documented 'Usage' section, methods classes mostly resemble arguments .numeric .data.framemethods. ... Additional arguments passed methods. centrality point-estimates (centrality indices) compute. Character (vector) list one options: \"median\", \"mean\", \"MAP\" (see map_estimate()), \"trimmed\" (just mean(x, trim = threshold)), \"mode\" \"\". dispersion Logical, TRUE, computes indices dispersion related estimate(s) (SD MAD mean median, respectively). Dispersion available \"MAP\" \"mode\" centrality indices. threshold centrality = \"trimmed\" (.e. trimmed mean), indicates fraction (0 0.5) observations trimmed end vector mean computed. rvar_col single character - name rvar column data frame processed. See example p_direction(). effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output. use_iterations Logical, TRUE x get_predicted object, (returned insight::get_predicted()), function applied iterations instead predictions. applies models return iterations predicted values (e.g., brmsfit models). verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/point_estimate.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Point-estimates of posterior distributions — point_estimate","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/point_estimate.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Point-estimates of posterior distributions — point_estimate","text":"Makowski, D., Ben-Shachar, M. S., Chen, S. H. ., Lüdecke, D. (2019). Indices Effect Existence Significance Bayesian Framework. Frontiers Psychology 2019;10:2767. 
doi:10.3389/fpsyg.2019.02767","code":""},{"path":"https://easystats.github.io/bayestestR/reference/point_estimate.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Point-estimates of posterior distributions — point_estimate","text":"","code":"library(bayestestR) point_estimate(rnorm(1000)) #> Point Estimate #> #> Median | Mean | MAP #> ---------------------- #> 5.58e-03 | 0.01 | 0.03 point_estimate(rnorm(1000), centrality = \"all\", dispersion = TRUE) #> Point Estimate #> #> Median | MAD | Mean | SD | MAP #> --------------------------------------- #> -0.02 | 0.97 | -5.33e-03 | 1.00 | 0.05 point_estimate(rnorm(1000), centrality = c(\"median\", \"MAP\")) #> Point Estimate #> #> Median | MAP #> -------------- #> 0.03 | -0.07 df <- data.frame(replicate(4, rnorm(100))) point_estimate(df, centrality = \"all\", dispersion = TRUE) #> Point Estimate #> #> Parameter | Median | MAD | Mean | SD | MAP #> ------------------------------------------------ #> X1 | -0.02 | 1.21 | 0.02 | 1.12 | 0.85 #> X2 | -0.07 | 1.07 | -0.10 | 1.00 | -0.18 #> X3 | -0.29 | 1.02 | -0.33 | 0.89 | -0.10 #> X4 | -0.08 | 0.90 | -0.14 | 0.89 | 0.11 point_estimate(df, centrality = c(\"median\", \"MAP\")) #> Point Estimate #> #> Parameter | Median | MAP #> -------------------------- #> X1 | -0.02 | 0.85 #> X2 | -0.07 | -0.18 #> X3 | -0.29 | -0.10 #> X4 | -0.08 | 0.11 # \\donttest{ # rstanarm models # ----------------------------------------------- model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars) #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 2.1e-05 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.21 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.043 seconds (Warm-up) #> Chain 1: 0.04 seconds (Sampling) #> Chain 1: 0.083 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 1.1e-05 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. #> Chain 2: Adjust your expectations accordingly! 
#> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.044 seconds (Warm-up) #> Chain 2: 0.039 seconds (Sampling) #> Chain 2: 0.083 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 1.1e-05 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.043 seconds (Warm-up) #> Chain 3: 0.048 seconds (Sampling) #> Chain 3: 0.091 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 1.1e-05 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. #> Chain 4: Adjust your expectations accordingly! 
#> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.045 seconds (Warm-up) #> Chain 4: 0.041 seconds (Sampling) #> Chain 4: 0.086 seconds (Total) #> Chain 4: point_estimate(model, centrality = \"all\", dispersion = TRUE) #> Point Estimate #> #> Parameter | Median | MAD | Mean | SD | MAP #> -------------------------------------------------- #> (Intercept) | 39.65 | 1.86 | 39.69 | 1.84 | 39.57 #> wt | -3.19 | 0.81 | -3.18 | 0.82 | -3.19 #> cyl | -1.51 | 0.44 | -1.52 | 0.44 | -1.54 point_estimate(model, centrality = c(\"median\", \"MAP\")) #> Point Estimate #> #> Parameter | Median | MAP #> ---------------------------- #> (Intercept) | 39.65 | 39.57 #> wt | -3.19 | -3.19 #> cyl | -1.51 | -1.54 # emmeans estimates # ----------------------------------------------- point_estimate( emmeans::emtrends(model, ~1, \"wt\", data = mtcars), centrality = c(\"median\", \"MAP\") ) #> Point Estimate #> #> X1 | Median | MAP #> ------------------------ #> overall | -3.19 | -3.19 # brms models # ----------------------------------------------- model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 6e-06 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.02 seconds (Warm-up) #> Chain 1: 0.019 seconds (Sampling) #> Chain 1: 0.039 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 3e-06 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 2: Adjust your expectations accordingly! 
#> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.019 seconds (Warm-up) #> Chain 2: 0.017 seconds (Sampling) #> Chain 2: 0.036 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 3e-06 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.019 seconds (Warm-up) #> Chain 3: 0.019 seconds (Sampling) #> Chain 3: 0.038 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 3e-06 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 4: Adjust your expectations accordingly! 
#> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.019 seconds (Warm-up) #> Chain 4: 0.018 seconds (Sampling) #> Chain 4: 0.037 seconds (Total) #> Chain 4: point_estimate(model, centrality = \"all\", dispersion = TRUE) #> Point Estimate #> #> Parameter | Median | MAD | Mean | SD | MAP #> -------------------------------------------------- #> (Intercept) | 39.67 | 1.71 | 39.67 | 1.78 | 39.86 #> wt | -3.22 | 0.78 | -3.20 | 0.80 | -3.32 #> cyl | -1.49 | 0.44 | -1.50 | 0.43 | -1.46 point_estimate(model, centrality = c(\"median\", \"MAP\")) #> Point Estimate #> #> Parameter | Median | MAP #> ---------------------------- #> (Intercept) | 39.67 | 39.86 #> wt | -3.22 | -3.32 #> cyl | -1.49 | -1.46 # BayesFactor objects # ----------------------------------------------- bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1)) point_estimate(bf, centrality = \"all\", dispersion = TRUE) #> Point Estimate #> #> Parameter | Median | MAD | Mean | SD | MAP #> ----------------------------------------------- #> Difference | 1.03 | 0.11 | 1.03 | 0.11 | 1.02 point_estimate(bf, centrality = c(\"median\", \"MAP\")) #> Point Estimate #> #> Parameter | Median | MAP #> -------------------------- #> Difference | 1.03 | 1.04 # }"},{"path":"https://easystats.github.io/bayestestR/reference/reexports.html","id":null,"dir":"Reference","previous_headings":"","what":"Objects exported from other packages — reexports","title":"Objects exported from other packages — reexports","text":"objects imported packages. Follow links see documentation. insight print_html, print_md","code":""},{"path":"https://easystats.github.io/bayestestR/reference/reshape_iterations.html","id":null,"dir":"Reference","previous_headings":"","what":"Reshape estimations with multiple iterations (draws) to long format — reshape_iterations","title":"Reshape estimations with multiple iterations (draws) to long format — reshape_iterations","text":"Reshape wide data.frame iterations (posterior draws bootstrapped samples) columns long format. 
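A minimal sketch of this wide-to-long reshaping, using a toy data frame with hypothetical iter_* columns (the three added columns are detailed just below):

x <- data.frame(
  Predicted = c(24.4, 17.1),  # hypothetical point predictions
  iter_1 = c(24.1, 17.3),     # hypothetical draw 1
  iter_2 = c(24.6, 17.0)      # hypothetical draw 2
)
long <- bayestestR::reshape_iterations(x)
head(long)  # gains iter_index, iter_group and iter_value columns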
Instead iterations columns (e.g., iter_1, iter_2, ...), return 3 columns \\*_index (previous index row), \\*_group (iteration number) \\*_value (value said iteration).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/reshape_iterations.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Reshape estimations with multiple iterations (draws) to long format — reshape_iterations","text":"","code":"reshape_iterations(x, prefix = c(\"draw\", \"iter\", \"iteration\", \"sim\")) reshape_draws(x, prefix = c(\"draw\", \"iter\", \"iteration\", \"sim\"))"},{"path":"https://easystats.github.io/bayestestR/reference/reshape_iterations.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Reshape estimations with multiple iterations (draws) to long format — reshape_iterations","text":"x data.frame containing posterior draws obtained estimate_response estimate_link. prefix prefix draws (instance, \"iter_\" columns named iter_1, iter_2, iter_3). one provided, search first one matches.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/reshape_iterations.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Reshape estimations with multiple iterations (draws) to long format — reshape_iterations","text":"Data frame reshaped draws long format.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/reshape_iterations.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Reshape estimations with multiple iterations (draws) to long format — reshape_iterations","text":"","code":"# \\donttest{ if (require(\"rstanarm\")) { model <- stan_glm(mpg ~ am, data = mtcars, refresh = 0) draws <- insight::get_predicted(model) long_format <- reshape_iterations(draws) head(long_format) } #> Predicted iter_index iter_group iter_value #> 1 24.38890 1 1 24.05244 #> 2 24.38890 2 1 24.05244 #> 3 24.38890 3 1 24.05244 #> 4 17.14047 4 1 17.27725 #> 5 17.14047 5 1 17.27725 #> 6 17.14047 6 1 17.27725 # }"},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":null,"dir":"Reference","previous_headings":"","what":"Region of Practical Equivalence (ROPE) — rope","title":"Region of Practical Equivalence (ROPE) — rope","text":"Compute proportion HDI (default 89% HDI) posterior distribution lies within region practical equivalence.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Region of Practical Equivalence (ROPE) — rope","text":"","code":"rope(x, ...) # S3 method for class 'numeric' rope(x, range = \"default\", ci = 0.95, ci_method = \"ETI\", verbose = TRUE, ...) # S3 method for class 'data.frame' rope( x, range = \"default\", ci = 0.95, ci_method = \"ETI\", rvar_col = NULL, verbose = TRUE, ... ) # S3 method for class 'stanreg' rope( x, range = \"default\", ci = 0.95, ci_method = \"ETI\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' rope( x, range = \"default\", ci = 0.95, ci_method = \"ETI\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... 
)"},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Region of Practical Equivalence (ROPE) — rope","text":"x Vector representing posterior distribution. Can also stanreg brmsfit model. ... Currently used. range ROPE's lower higher bounds. \"default\" depending number outcome variables vector list. models one response, range can : vector length two (e.g., c(-0.1, 0.1)), list numeric vector length numbers parameters (see 'Examples'). list named numeric vectors, names correspond parameter names. case, parameters matching name range set \"default\". multivariate models, range list another list (one response variable) numeric vectors . Vector names correspond name response variables. \"default\" input vector, range set c(-0.1, 0.1). \"default\" input Bayesian model, rope_range() used. See 'Examples'. ci Credible Interval (CI) probability, corresponding proportion HDI, use percentage ROPE. ci_method type interval use quantify percentage ROPE. Can 'HDI' (default) 'ETI'. See ci(). verbose Toggle warnings. rvar_col single character - name rvar column data frame processed. See example p_direction(). effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Region of Practical Equivalence (ROPE) — rope","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"rope","dir":"Reference","previous_headings":"","what":"ROPE","title":"Region of Practical Equivalence (ROPE) — rope","text":"Statistically, probability posterior distribution different 0 make much sense (probability single value null hypothesis continuous distribution 0). Therefore, idea underlining ROPE let user define area around null value enclosing values equivalent null value practical purposes (Kruschke 2010, 2011, 2014). Kruschke (2018) suggests null value set, default, -0.1 0.1 range standardized parameter (negligible effect size according Cohen, 1988). generalized: instance, linear models, ROPE set 0 +/- .1 * sd(y). ROPE range can automatically computed models using rope_range() function. Kruschke (2010, 2011, 2014) suggests using proportion 95% (89%, considered stable) HDI falls within ROPE index \"null-hypothesis\" testing (understood Bayesian framework, see equivalence_test()).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"sensitivity-to-parameter-s-scale","dir":"Reference","previous_headings":"","what":"Sensitivity to parameter's scale","title":"Region of Practical Equivalence (ROPE) — rope","text":"important consider unit (.e., scale) predictors using index based ROPE, correct interpretation ROPE representing region practical equivalence zero dependent scale predictors. Indeed, percentage ROPE depend unit parameter. 
words, ROPE represents fixed portion response's scale, proximity coefficient depends scale coefficient .","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"multicollinearity-non-independent-covariates","dir":"Reference","previous_headings":"","what":"Multicollinearity - Non-independent covariates","title":"Region of Practical Equivalence (ROPE) — rope","text":"parameters show strong correlations, .e. covariates independent, joint parameter distributions may shift towards away ROPE. Collinearity invalidates ROPE hypothesis testing based univariate marginals, probabilities conditional independence. problematic parameters partial overlap ROPE region. case collinearity, (joint) distributions parameters may either get increased decreased ROPE, means inferences based rope() inappropriate (Kruschke 2014, 340f). rope() performs simple check pairwise correlations parameters, can collinearity two variables, first step check assumptions hypothesis testing look different pair plots. even sophisticated check projection predictive variable selection (Piironen Vehtari 2017).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"strengths-and-limitations","dir":"Reference","previous_headings":"","what":"Strengths and Limitations","title":"Region of Practical Equivalence (ROPE) — rope","text":"Strengths: Provides information related practical relevance effects. Limitations: ROPE range needs arbitrarily defined. Sensitive scale (unit) predictors. sensitive highly significant effects.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Region of Practical Equivalence (ROPE) — rope","text":"Cohen, J. (1988). Statistical power analysis behavioural sciences. Kruschke, J. K. (2010). believe: Bayesian methods data analysis. Trends cognitive sciences, 14(7), 293-300. doi:10.1016/j.tics.2010.05.001 . Kruschke, J. K. (2011). Bayesian assessment null values via parameter estimation model comparison. Perspectives Psychological Science, 6(3), 299-312. doi:10.1177/1745691611406925 . Kruschke, J. K. (2014). Bayesian data analysis: tutorial R, JAGS, Stan. Academic Press. doi:10.1177/2515245918771304 . Kruschke, J. K. (2018). Rejecting accepting parameter values Bayesian estimation. Advances Methods Practices Psychological Science, 1(2), 270-280. doi:10.1177/2515245918771304 . Makowski D, Ben-Shachar MS, Chen SHA, Lüdecke D (2019) Indices Effect Existence Significance Bayesian Framework. Frontiers Psychology 2019;10:2767. doi:10.3389/fpsyg.2019.02767 Piironen, J., & Vehtari, . (2017). Comparison Bayesian predictive methods model selection. Statistics Computing, 27(3), 711–735. 
doi:10.1007/s11222-016-9649-y","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Region of Practical Equivalence (ROPE) — rope","text":"","code":"library(bayestestR) rope(x = rnorm(1000, 0, 0.01), range = c(-0.1, 0.1)) #> # Proportion of samples inside the ROPE [-0.10, 0.10]: #> #> inside ROPE #> ----------- #> 100.00 % #> rope(x = rnorm(1000, 0, 1), range = c(-0.1, 0.1)) #> # Proportion of samples inside the ROPE [-0.10, 0.10]: #> #> inside ROPE #> ----------- #> 8.32 % #> rope(x = rnorm(1000, 1, 0.01), range = c(-0.1, 0.1)) #> # Proportion of samples inside the ROPE [-0.10, 0.10]: #> #> inside ROPE #> ----------- #> 0.00 % #> rope(x = rnorm(1000, 1, 1), ci = c(0.90, 0.95)) #> # Proportions of samples inside the ROPE [-0.10, 0.10]: #> #> ROPE for the 90% HDI: #> #> inside ROPE #> ----------- #> 4.89 % #> #> #> ROPE for the 95% HDI: #> #> inside ROPE #> ----------- #> 4.63 % #> #> # \\donttest{ library(rstanarm) model <- suppressWarnings( stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) rope(model) #> # Proportion of samples inside the ROPE [-0.60, 0.60]: #> #> Parameter | inside ROPE #> ------------------------- #> (Intercept) | 0.00 % #> wt | 0.00 % #> gear | 43.68 % #> rope(model, ci = c(0.90, 0.95)) #> # Proportions of samples inside the ROPE [-0.60, 0.60]: #> #> ROPE for the 90% HDI: #> #> Parameter | inside ROPE #> ------------------------- #> (Intercept) | 0.00 % #> wt | 0.00 % #> gear | 46.11 % #> #> #> ROPE for the 95% HDI: #> #> Parameter | inside ROPE #> ------------------------- #> (Intercept) | 0.00 % #> wt | 0.00 % #> gear | 43.68 % #> #> # multiple ROPE ranges rope(model, range = list(c(-10, 5), c(-0.2, 0.2), \"default\")) #> # Proportion of samples inside the ROPE [-10.00, 5.00]: #> #> Parameter | inside ROPE #> ------------------------- #> (Intercept) | 0.00 % #> wt | 0.00 % #> gear | 10.53 % #> # named ROPE ranges rope(model, range = list(gear = c(-3, 2), wt = c(-0.2, 0.2))) #> # Proportion of samples inside the ROPE [-0.10, 0.10]: #> #> Parameter | inside ROPE #> ------------------------- #> (Intercept) | 0.00 % #> wt | 0.00 % #> gear | 100.00 % #> library(emmeans) rope(emtrends(model, ~1, \"wt\"), ci = c(0.90, 0.95)) #> # Proportions of samples inside the ROPE [-0.10, 0.10]: #> #> ROPE for the 90% HDI: #> #> X1 | inside ROPE #> --------------------- #> overall | 0.00 % #> #> #> ROPE for the 95% HDI: #> #> X1 | inside ROPE #> --------------------- #> overall | 0.00 % #> #> library(brms) model <- brm(mpg ~ wt + cyl, data = mtcars, refresh = 0) #> Compiling Stan program... #> Start sampling rope(model) #> Possible multicollinearity between b_cyl and b_wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?rope'. #> # Proportion of samples inside the ROPE [-0.60, 0.60]: #> #> Parameter | inside ROPE #> ----------------------- #> Intercept | 0.00 % #> wt | 0.00 % #> cyl | 0.00 % #> rope(model, ci = c(0.90, 0.95)) #> Possible multicollinearity between b_cyl and b_wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?rope'. 
#> # Proportions of samples inside the ROPE [-0.60, 0.60]: #> #> ROPE for the 90% HDI: #> #> Parameter | inside ROPE #> ----------------------- #> Intercept | 0.00 % #> wt | 0.00 % #> cyl | 0.00 % #> #> #> ROPE for the 95% HDI: #> #> Parameter | inside ROPE #> ----------------------- #> Intercept | 0.00 % #> wt | 0.00 % #> cyl | 0.00 % #> #> library(brms) model <- brm( bf(mvbind(mpg, disp) ~ wt + cyl) + set_rescor(rescor = TRUE), data = mtcars, refresh = 0 ) #> Compiling Stan program... #> Start sampling rope(model) #> Possible multicollinearity between b_mpg_cyl and b_mpg_wt (r = 0.79), #> b_disp_cyl and b_disp_wt (r = 0.79). This might lead to inappropriate #> results. See 'Details' in '?rope'. #> # Proportion of samples inside the ROPE. #> ROPE width depends on outcome variable. #> #> Parameter | inside ROPE | ROPE width #> ---------------------------------------------- #> mpg_Intercept | 0.00 % | [-0.60, 0.60] #> mpg_wt | 0.00 % | [-0.60, 0.60] #> mpg_cyl | 0.00 % | [-0.60, 0.60] #> disp_Intercept | 0.00 % | [-12.39, 12.39] #> disp_wt | 0.00 % | [-12.39, 12.39] #> disp_cyl | 0.00 % | [-12.39, 12.39] #> rope(model, ci = c(0.90, 0.95)) #> Possible multicollinearity between b_mpg_cyl and b_mpg_wt (r = 0.79), #> b_disp_cyl and b_disp_wt (r = 0.79). This might lead to inappropriate #> results. See 'Details' in '?rope'. #> # Proportions of samples inside the ROPE. #> ROPE width depends on outcome variable. #> #> ROPE for the 90% HDI: #> #> Parameter | inside ROPE | ROPE width #> ---------------------------------------------- #> mpg_Intercept | 0.00 % | [-0.60, 0.60] #> mpg_wt | 0.00 % | [-0.60, 0.60] #> mpg_cyl | 0.00 % | [-0.60, 0.60] #> disp_Intercept | 0.00 % | [-12.39, 12.39] #> disp_wt | 0.00 % | [-12.39, 12.39] #> disp_cyl | 0.00 % | [-12.39, 12.39] #> #> #> ROPE for the 95% HDI: #> #> Parameter | inside ROPE | ROPE width #> ---------------------------------------------- #> mpg_Intercept | 0.00 % | [-0.60, 0.60] #> mpg_wt | 0.00 % | [-0.60, 0.60] #> mpg_cyl | 0.00 % | [-0.60, 0.60] #> disp_Intercept | 0.00 % | [-12.39, 12.39] #> disp_wt | 0.00 % | [-12.39, 12.39] #> disp_cyl | 0.00 % | [-12.39, 12.39] #> #> # different ROPE ranges for model parameters. For each response, a named # list (with the name of the response variable) is required as list-element # for the `range` argument. rope( model, range = list( mpg = list(b_mpg_wt = c(-1, 1), b_mpg_cyl = c(-2, 2)), disp = list(b_disp_wt = c(-5, 5), b_disp_cyl = c(-4, 4)) ) ) #> Possible multicollinearity between b_mpg_cyl and b_mpg_wt (r = 0.79), #> b_disp_cyl and b_disp_wt (r = 0.79). This might lead to inappropriate #> results. See 'Details' in '?rope'. #> # Proportion of samples inside the ROPE. #> ROPE width depends on outcome variable. 
#> #> Parameter | inside ROPE | ROPE width #> -------------------------------------------- #> mpg_Intercept | 0.00 % | [-0.10, 0.10] #> mpg_wt | 0.00 % | [-1.00, 1.00] #> mpg_cyl | 89.82 % | [-2.00, 2.00] #> disp_Intercept | 0.00 % | [-0.10, 0.10] #> disp_wt | 0.00 % | [-5.00, 5.00] #> disp_cyl | 0.00 % | [-4.00, 4.00] #> library(BayesFactor) bf <- ttestBF(x = rnorm(100, 1, 1)) rope(bf) #> # Proportion of samples inside the ROPE [-0.10, 0.10]: #> #> Parameter | inside ROPE #> ------------------------ #> Difference | 0.00 % #> rope(bf, ci = c(0.90, 0.95)) #> # Proportions of samples inside the ROPE [-0.10, 0.10]: #> #> ROPE for the 90% HDI: #> #> Parameter | inside ROPE #> ------------------------ #> Difference | 0.00 % #> #> #> ROPE for the 95% HDI: #> #> Parameter | inside ROPE #> ------------------------ #> Difference | 0.00 % #> #> # }"},{"path":"https://easystats.github.io/bayestestR/reference/rope_range.html","id":null,"dir":"Reference","previous_headings":"","what":"Find Default Equivalence (ROPE) Region Bounds — rope_range","title":"Find Default Equivalence (ROPE) Region Bounds — rope_range","text":"function attempts automatically finding suitable \"default\" values Region Practical Equivalence (ROPE).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope_range.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Find Default Equivalence (ROPE) Region Bounds — rope_range","text":"","code":"rope_range(x, ...) # Default S3 method rope_range(x, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/rope_range.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Find Default Equivalence (ROPE) Region Bounds — rope_range","text":"x stanreg, brmsfit BFBayesFactor object, frequentist regression model. ... Currently used. verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope_range.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Find Default Equivalence (ROPE) Region Bounds — rope_range","text":"Kruschke (2018) suggests region practical equivalence set, default, range -0.1 0.1 standardized parameter (negligible effect size according Cohen, 1988). linear models (lm), can generalised -0.1 * SDy, 0.1 * SDy. logistic models, parameters expressed log odds ratio can converted standardized difference formula π/√(3), resulting range -0.18 0.18. models binary outcome, strongly recommended manually specify rope argument. Currently, default applied logistic models. models count data, residual variance used. rather experimental threshold probably often similar -0.1, 0.1, used care! t-tests, standard deviation response used, similarly linear models (see ). correlations, -0.05, 0.05 used, .e., half value negligible correlation suggested Cohen's (1988) rules thumb. models, -0.1, 0.1 used determine ROPE limits, strongly advised specify manually.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope_range.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Find Default Equivalence (ROPE) Region Bounds — rope_range","text":"Kruschke, J. K. (2018). Rejecting accepting parameter values Bayesian estimation. Advances Methods Practices Psychological Science, 1(2), 270-280. 
doi:10.1177/2515245918771304 .","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope_range.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Find Default Equivalence (ROPE) Region Bounds — rope_range","text":"","code":"# \\donttest{ model <- suppressWarnings(rstanarm::stan_glm( mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0 )) rope_range(model) #> [1] -0.6026948 0.6026948 model <- suppressWarnings( rstanarm::stan_glm(vs ~ mpg, data = mtcars, family = \"binomial\", refresh = 0) ) rope_range(model) #> [1] -0.1813799 0.1813799 model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 9e-06 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.02 seconds (Warm-up) #> Chain 1: 0.02 seconds (Sampling) #> Chain 1: 0.04 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 4e-06 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.02 seconds (Warm-up) #> Chain 2: 0.018 seconds (Sampling) #> Chain 2: 0.038 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 3e-06 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 3: Adjust your expectations accordingly! 
rope_range(model) #> [1] -0.6026948 0.6026948 model <- BayesFactor::ttestBF(mtcars[mtcars$vs == 1, \"mpg\"], mtcars[mtcars$vs == 0, \"mpg\"]) rope_range(model) #> [1] -0.6026948 0.6026948 model <- lmBF(mpg ~ vs, data = mtcars) rope_range(model) #> [1] -0.6026948 0.6026948 # }"},{"path":"https://easystats.github.io/bayestestR/reference/sensitivity_to_prior.html","id":null,"dir":"Reference","previous_headings":"","what":"Sensitivity to Prior — sensitivity_to_prior","title":"Sensitivity to Prior — sensitivity_to_prior","text":"Computes the sensitivity to priors specification. This represents the proportion of change in some indices when the model is fitted with an antagonistic prior (a prior of same shape located on the opposite side of the effect).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sensitivity_to_prior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sensitivity to Prior — sensitivity_to_prior","text":"","code":"sensitivity_to_prior(model, ...) # S3 method for class 'stanreg' sensitivity_to_prior(model, index = \"Median\", magnitude = 10, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/sensitivity_to_prior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sensitivity to Prior — sensitivity_to_prior","text":"model A Bayesian model (stanreg or brmsfit). ... Arguments passed to or from other methods. index The indices from which to compute the sensitivity. Can be one or multiple names of the columns returned by describe_posterior. The case is important here (e.g., write 'Median' instead of 'median'). magnitude This represents the magnitude by which to shift the antagonistic prior (to test the sensitivity).
For instance, a magnitude of 10 (default) means that the mode will be updated with a prior located at 10 standard deviations from its original location.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/sensitivity_to_prior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sensitivity to Prior — sensitivity_to_prior","text":"","code":"# \donttest{ library(bayestestR) # rstanarm models # ----------------------------------------------- model <- rstanarm::stan_glm(mpg ~ wt, data = mtcars) #> (Stan sampling progress output for chains 1-4 omitted)
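# A hedged reading of the "proportion of change" described above
# (illustrative arithmetic with made-up numbers, not necessarily the
# package's exact internals): compare an index between the original fit
# and the fit with the antagonistic prior.
median_original <- 5.0      # hypothetical posterior median, original prior
median_antagonistic <- 5.2  # hypothetical median under the shifted prior
abs(median_antagonistic - median_original) / abs(median_original)
#> [1] 0.04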
sensitivity_to_prior(model) #> Parameter Sensitivity_Median #> 1 wt 0.04105146 model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars) #> (Stan sampling progress output for chains 1-4 omitted)
sensitivity_to_prior(model, index = c(\"Median\", \"MAP\")) #> Parameter Sensitivity_Median Sensitivity_MAP #> 1 wt 0.03611038 0.03354017 #> 2 cyl 0.02489034 0.05805163 # }"},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":null,"dir":"Reference","previous_headings":"","what":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"The SEXIT is a new framework to describe Bayesian effects, guiding which indices to use. Accordingly, the sexit() function returns the minimal (and optimal) required information to describe models' parameters under a Bayesian framework.
It includes the following indices: Centrality: the median of the posterior distribution. In probabilistic terms, there is 50% of probability that the effect is higher and 50% that it is lower. See point_estimate(). Uncertainty: the 95% Highest Density Interval (HDI). In probabilistic terms, there is 95% of probability that the effect is within this confidence interval. See ci(). Existence: the probability of direction allows to quantify the certainty by which an effect is positive or negative. It is a critical index to show that the effect of some manipulation is not harmful (for instance in clinical studies) or to assess the direction of a link. See p_direction(). Significance: once existence is demonstrated with high certainty, we can assess whether the effect is of sufficient size to be considered as significant (i.e., not negligible). This is a useful index to determine which effects are actually important and worthy of discussion in a given process. See p_significance(). Size: finally, this index gives an idea about the strength of an effect. However, beware, as studies have shown that a big effect size can also be suggestive of low statistical power (see the details section).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"","code":"sexit(x, significant = \"default\", large = \"default\", ci = 0.95, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"x A vector representing a posterior distribution, or a data frame of posterior draws (samples x parameters). Can also be a Bayesian model. significant, large The threshold values to use for the significant and large probabilities. If left to 'default', they will be selected through sexit_thresholds(). See the details section below. ci Value or vector of probability of the (credible) interval - CI (between 0 and 1) to be estimated. Default to .95 (95%). ... Currently not used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"A dataframe, with text as attribute.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"rationale","dir":"Reference","previous_headings":"","what":"Rationale","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"The assessment of \"significance\" (in its broadest meaning) is a pervasive issue in science, and its historical index, the p-value, has been strongly criticized and deemed to have played an important role in the replicability crisis. In reaction, scientists have turned to Bayesian methods, which offer an alternative set of tools to answer these questions. However, the Bayesian framework offers a wide variety of possible indices related to \"significance\", and the debate has been raging about which index is the best and which one to report. This situation can lead to the mindless reporting of all possible indices (in the hope that the reader will be satisfied), often without the writer understanding and interpreting them. It is indeed complicated to juggle so many indices with complicated definitions and subtle differences. SEXIT aims at offering a practical framework for Bayesian effects reporting, in which the focus is put on the intuitiveness, explicitness and usefulness of the indices' interpretation. To that end, we suggest a system of description of the parameters that is intuitive, easy to learn and apply, mathematically accurate and useful for taking decisions. Once the thresholds for significance (i.e., the ROPE) and for a \"large\" effect are explicitly defined, the SEXIT framework does not make any interpretation, i.e., it does not label the effects, but just sequentially gives 3 probabilities (of direction, of significance and of being large, respectively) on top of the characteristics of the posterior (using the median and HDI for the centrality and uncertainty description).
Thus, it provides a lot of information about the posterior distribution (the mass of different 'sections' of the posterior) in a clear and meaningful way.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"threshold-selection","dir":"Reference","previous_headings":"","what":"Threshold selection","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"One of the most important things about the SEXIT framework is that it relies on two \"arbitrary\" thresholds (i.e., thresholds without any absolute meaning). These are the ones related to effect size (an inherently subjective notion), namely the thresholds for significant and large effects. They are set, by default, to 0.05 and 0.3 of the standard deviation of the outcome variable (tiny and large effect sizes for correlations according to Funder and Ozer, 2019). However, these defaults were chosen for lack of a better option, and might need to be adapted to your case. Thus, they are to be handled with care, and the chosen thresholds should always be explicitly reported and justified. For linear models (lm), this can be generalised to 0.05 * SDy and 0.3 * SDy for significant and large effects, respectively. For logistic models, parameters expressed as log odds ratios can be converted to standardized differences through the formula π/√(3), resulting in thresholds of 0.09 and 0.54. For other models with a binary outcome, it is strongly recommended to manually specify the rope argument; currently, the same default as for logistic models is applied. For models from count data, the residual variance is used. This is a rather experimental threshold, probably often similar to 0.05 and 0.3, but it should be used with care! For t-tests, the standard deviation of the response is used, similarly to linear models (see above). For correlations, 0.05 and 0.3 are used. For all other models, 0.05 and 0.3 are used, but it is strongly advised to specify them manually.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"The three values for existence, significance and size provide a useful description of the posterior distribution of an effect. Some possible scenarios include: The probability of existence is low, but the probability of being large is high: it suggests that the posterior is wide (covering large territories on both sides of 0). The statistical power might be too low, which should warn against any confident conclusion. The probability of existence and of significance is high, but the probability of being large is very small: it suggests that the effect is, with high confidence, not large (the posterior is mostly contained between the significance and the large thresholds). The 3 indices are very low: this suggests that the effect is null with high confidence (the posterior is closely centred around 0).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"Makowski, D., Ben-Shachar, M. S., & Lüdecke, D. (2019). bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework. Journal of Open Source Software, 4(40), 1541. doi:10.21105/joss.01541 Makowski, D., Ben-Shachar, M. S., Chen, S. H. A., & Lüdecke, D. (2019). Indices of Effect Existence and Significance in the Bayesian Framework. Frontiers in Psychology, 10, 2767. doi:10.3389/fpsyg.2019.02767","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"","code":"# \donttest{ library(bayestestR) s <- sexit(rnorm(1000, -1, 1)) s #> # Following the Sequential Effect eXistence and sIgnificance Testing (SEXIT) framework, we report the median of the posterior distribution and its 95% CI (Highest Density Interval), along the probability of direction (pd), the probability of significance and the probability of being large.
The thresholds beyond which the effect is considered as significant (i.e., non-negligible) and large are |0.05| and |0.30|. #> #> The effect (Median = -0.96, 95% CI [-2.80, 1.08]) has a 84.70% probability of being negative (< 0), 83.50% of being significant (< -0.05), and 76.70% of being large (< -0.30) #> #> Median | 95% CI | Direction | Significance (> |0.05|) | Large (> |0.30|) #> ------------------------------------------------------------------------------- #> -0.96 | [-2.80, 1.08] | 0.85 | 0.83 | 0.77 #> print(s, summary = TRUE) #> # The thresholds beyond which the effect is considered as significant (i.e., non-negligible) and large are |0.05| and |0.30|. #> #> The effect (Median = -0.96, 95% CI [-2.80, 1.08]) has 84.70%, 83.50% and 76.70% probability of being negative (< 0), significant (< -0.05) and large (< -0.30) s <- sexit(iris) s #> # Following the Sequential Effect eXistence and sIgnificance Testing (SEXIT) framework, we report the median of the posterior distribution and its 95% CI (Highest Density Interval), along the probability of direction (pd), the probability of significance and the probability of being large. The thresholds beyond which the effect is considered as significant (i.e., non-negligible) and large are |0.05| and |0.30|. #> #> - Sepal.Length (Median = 5.80, 95% CI [4.47, 7.70]) has a 100.00% probability of being positive (> 0), 100.00% of being significant (> 0.05), and 100.00% of being large (> 0.30) #> - Sepal.Width (Median = 3.00, 95% CI [2.27, 3.93]) has a 100.00% probability of being positive (> 0), 100.00% of being significant (> 0.05), and 100.00% of being large (> 0.30) #> - Petal.Length (Median = 4.35, 95% CI [1.27, 6.46]) has a 100.00% probability of being positive (> 0), 100.00% of being significant (> 0.05), and 100.00% of being large (> 0.30) #> - Petal.Width (Median = 1.30, 95% CI [0.10, 2.40]) has a 100.00% probability of being positive (> 0), 100.00% of being significant (> 0.05), and 72.67% of being large (> 0.30) #> #> Parameter | Median | 95% CI | Direction | Significance (> |0.05|) | Large (> |0.30|) #> --------------------------------------------------------------------------------------------- #> Sepal.Length | 5.80 | [4.47, 7.70] | 1 | 1 | 1.00 #> Sepal.Width | 3.00 | [2.27, 3.93] | 1 | 1 | 1.00 #> Petal.Length | 4.35 | [1.27, 6.46] | 1 | 1 | 1.00 #> Petal.Width | 1.30 | [0.10, 2.40] | 1 | 1 | 0.73 #> print(s, summary = TRUE) #> # The thresholds beyond which the effect is considered as significant (i.e., non-negligible) and large are |0.05| and |0.30|. 
#> #> - Sepal.Length (Median = 5.80, 95% CI [4.47, 7.70]) has 100.00%, 100.00% and 100.00% probability of being positive (> 0), significant (> 0.05) and large (> 0.30) #> - Sepal.Width (Median = 3.00, 95% CI [2.27, 3.93]) has 100.00%, 100.00% and 100.00% probability of being positive (> 0), significant (> 0.05) and large (> 0.30) #> - Petal.Length (Median = 4.35, 95% CI [1.27, 6.46]) has 100.00%, 100.00% and 100.00% probability of being positive (> 0), significant (> 0.05) and large (> 0.30) #> - Petal.Width (Median = 1.30, 95% CI [0.10, 2.40]) has 100.00%, 100.00% and 72.67% probability of being positive (> 0), significant (> 0.05) and large (> 0.30) if (require(\"rstanarm\")) { model <- suppressWarnings(rstanarm::stan_glm(mpg ~ wt * cyl, data = mtcars, iter = 400, refresh = 0 )) s <- sexit(model) s print(s, summary = TRUE) } #> # The thresholds beyond which the effect is considered as significant (i.e., non-negligible) and large are |0.30| and |1.81| (corresponding respectively to 0.05 and 0.30 of the outcome's SD). #> #> - (Intercept) (Median = 52.52, 95% CI [40.70, 64.08]) has 100.00%, 100.00% and 100.00% probability of being positive (> 0), significant (> 0.30) and large (> 1.81) #> - wt (Median = -8.04, 95% CI [-12.59, -3.18]) has 100.00%, 99.88% and 99.50% probability of being negative (< 0), significant (< -0.30) and large (< -1.81) #> - cyl (Median = -3.49, 95% CI [-5.59, -1.60]) has 100.00%, 99.88% and 96.00% probability of being negative (< 0), significant (< -0.30) and large (< -1.81) #> - wt:cyl (Median = 0.71, 95% CI [0.08, 1.32]) has 98.88%, 89.12% and 0.00% probability of being positive (> 0), significant (> 0.30) and large (> 1.81) # }"},{"path":"https://easystats.github.io/bayestestR/reference/sexit_thresholds.html","id":null,"dir":"Reference","previous_headings":"","what":"Find Effect Size Thresholds — sexit_thresholds","title":"Find Effect Size Thresholds — sexit_thresholds","text":"This function attempts at automatically finding suitable default values for a \"significant\" (i.e., non-negligible) and a \"large\" effect. It is to be used with care, and the chosen thresholds should always be explicitly reported and justified. See the details section in sexit() for more information.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit_thresholds.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Find Effect Size Thresholds — sexit_thresholds","text":"","code":"sexit_thresholds(x, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/sexit_thresholds.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Find Effect Size Thresholds — sexit_thresholds","text":"x Vector representing a posterior distribution. Can also be a stanreg or brmsfit model. ... Currently not used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit_thresholds.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Find Effect Size Thresholds — sexit_thresholds","text":"Kruschke, J. K. (2018). Rejecting or accepting parameter values in Bayesian estimation. Advances in Methods and Practices in Psychological Science, 1(2), 270-280. doi:10.1177/2515245918771304.","code":""},
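To make the threshold arithmetic concrete (a sketch under the documented defaults, not a replacement for sexit_thresholds()), the default cut-offs and the three SEXIT probabilities can be reproduced by hand:

c(significant = 0.05, large = 0.3) * sd(mtcars$mpg)  # linear model thresholds
#> significant       large
#>   0.3013474   1.8080844
c(0.05, 0.3) * pi / sqrt(3)                          # log-odds scale
#> [1] 0.09068997 0.54413981
# For a raw posterior, the three probabilities are simple tail masses:
x <- rnorm(1000, -1, 1)
mean(x < 0)      # probability of direction (negative)
mean(x < -0.05)  # probability of being significant
mean(x < -0.30)  # probability of being large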
{"path":"https://easystats.github.io/bayestestR/reference/sexit_thresholds.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Find Effect Size Thresholds — sexit_thresholds","text":"","code":"sexit_thresholds(rnorm(1000)) #> [1] 0.05 0.30 # \donttest{ if (require(\"rstanarm\")) { model <- suppressWarnings(stan_glm( mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0 )) sexit_thresholds(model) model <- suppressWarnings( stan_glm(vs ~ mpg, data = mtcars, family = \"binomial\", refresh = 0) ) sexit_thresholds(model) } #> [1] 0.09068997 0.54413981 if (require(\"brms\")) { model <- brm(mpg ~ wt + cyl, data = mtcars) sexit_thresholds(model) } #> Compiling Stan program... #> Start sampling #> (Stan sampling progress output for chains 1-4 omitted)
#> [1] 0.3013474 1.8080844 if (require(\"BayesFactor\")) { bf <- ttestBF(x = rnorm(100, 1, 1)) sexit_thresholds(bf) } #> [1] 0.0498231 0.2989386 # }"},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute Support Intervals — si","title":"Compute Support Intervals — si","text":"A support interval contains only the values of the parameter that predict the observed data better than average, by some degree k; these are the values of the parameter that are associated with an updating factor greater than or equal to k. From the perspective of the Savage-Dickey Bayes factor, testing against a point null hypothesis for any value within the support interval will yield a Bayes factor smaller than 1/k.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute Support Intervals — si","text":"","code":"si(posterior, ...) # S3 method for class 'numeric' si(posterior, prior = NULL, BF = 1, verbose = TRUE, ...) # S3 method for class 'stanreg' si( posterior, prior = NULL, BF = 1, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"conditional\", \"all\", \"smooth_terms\", \"sigma\", \"auxiliary\", \"distributional\"), parameters = NULL, ... ) # S3 method for class 'brmsfit' si( posterior, prior = NULL, BF = 1, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"conditional\", \"all\", \"smooth_terms\", \"sigma\", \"auxiliary\", \"distributional\"), parameters = NULL, ...
) # S3 method for class 'blavaan' si( posterior, prior = NULL, BF = 1, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"conditional\", \"all\", \"smooth_terms\", \"sigma\", \"auxiliary\", \"distributional\"), parameters = NULL, ... ) # S3 method for class 'emmGrid' si(posterior, prior = NULL, BF = 1, verbose = TRUE, ...) # S3 method for class 'get_predicted' si( posterior, prior = NULL, BF = 1, use_iterations = FALSE, verbose = TRUE, ... ) # S3 method for class 'data.frame' si(posterior, prior = NULL, BF = 1, rvar_col = NULL, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute Support Intervals — si","text":"posterior A numerical vector, stanreg / brmsfit object, emmGrid or a data frame - representing a posterior distribution(s) (see 'Details'). ... Arguments passed to and from other methods. (Can be used to pass arguments to internal logspline::logspline().) prior An object representing a prior distribution (see 'Details'). BF The amount of support required to be included in the support interval. verbose Toggle off warnings. effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output. use_iterations Logical, if TRUE and x is a get_predicted object (returned by insight::get_predicted()), the function is applied to the iterations instead of the predictions. This only applies to models that return iterations for predicted values (e.g., brmsfit models). rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction().","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Compute Support Intervals — si","text":"A data frame containing the lower and upper bounds of the SI. Note that if the level of requested support is higher than observed in the data, the interval will be [NA, NA].","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Compute Support Intervals — si","text":"For more info, in particular on specifying correct priors for factors with more than 2 levels, see the Bayes factors vignette. This method is used to compute support intervals based on prior and posterior distributions. For the computation of support intervals, the model priors must be proper priors (at the very least they should be not flat, and preferably informative - note that by default, brms::brm() uses flat priors for fixed effects; see example below).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Compute Support Intervals — si","text":"There is also a plot()-method implemented in the see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"choosing-a-value-of-bf","dir":"Reference","previous_headings":"","what":"Choosing a value of BF","title":"Compute Support Intervals — si","text":"The choice of BF (the level of support) depends on what we want the interval to represent: BF = 1 contains values whose credibility is not decreased by observing the data. BF > 1 contains values that received impressive support from the data. BF < 1 contains values whose credibility is impressively decreased by observing the data.
Testing values outside this interval will produce a Bayes factor larger than 1/BF in support of the alternative. E.g., if the SI (BF = 1/3) excludes 0, the Bayes factor against the point-null will be larger than 3 (a by-hand sketch of this updating-factor rule appears in the examples below).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"setting-the-correct-prior","dir":"Reference","previous_headings":"","what":"Setting the correct prior","title":"Compute Support Intervals — si","text":"For the computation of Bayes factors, the model priors must be proper priors (at the very least they should be not flat, and preferably informative); as the priors for the alternative get wider, the likelihood of the null value(s) increases, to the extreme that for completely flat priors the null is infinitely more favorable than the alternative (this is called the Jeffreys-Lindley-Bartlett paradox). Thus, you should only ever try (or want) to compute a Bayes factor when you have an informed prior. (Note that by default, brms::brm() uses flat priors for fixed effects; see example below.) It is important to provide the correct prior for meaningful results, matching the posterior-type input: A numeric vector - prior should also be a numeric vector, representing the prior-estimate. A data frame - prior should also be a data frame, representing the prior-estimates, in matching column order. If rvar_col is specified, prior should be the name of an rvar column that represents the prior-estimates. A supported Bayesian model (stanreg, brmsfit, etc.) - prior should be an equivalent model with MCMC samples from the priors only. See unupdate(). If prior is set to NULL, unupdate() is called internally (not supported for brmsfit_multiple models). Output from a {marginaleffects} function - prior should also be an equivalent output from a {marginaleffects} function based on a prior-model (see unupdate()). Output from an {emmeans} function - prior should also be an equivalent output from an {emmeans} function based on a prior-model (see unupdate()). prior can also be the original (posterior) model, in which case the function will try to \"unupdate\" the estimates (not supported if the estimates have undergone any transformations – \"log\", \"response\", etc. – or regridding).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Compute Support Intervals — si","text":"Wagenmakers, E., Gronau, Q. F., Dablander, F., & Etz, A. (2018, November 22). The Support Interval. doi:10.31234/osf.io/zwnxb","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Compute Support Intervals — si","text":"","code":"library(bayestestR) prior <- distribution_normal(1000, mean = 0, sd = 1) posterior <- distribution_normal(1000, mean = 0.5, sd = 0.3) si(posterior, prior, verbose = FALSE) #> BF = 1 SI: [0.04, 1.04] # \donttest{ # rstanarm models # --------------- library(rstanarm) contrasts(sleep$group) <- contr.equalprior_pairs # see vignette stan_model <- stan_lmer(extra ~ group + (1 | ID), data = sleep) #> (Stan sampling progress output for chains 1-4 omitted)
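# By-hand sketch of the support-interval rule stated above (illustrative,
# not si()'s internals): keep the values whose updating factor -- posterior
# density over prior density -- is at least BF. Using the closed-form
# densities of the numeric example at the top of these examples:
theta <- seq(-1, 2, by = 0.001)
update_factor <- dnorm(theta, mean = 0.5, sd = 0.3) / dnorm(theta, mean = 0, sd = 1)
range(theta[update_factor >= 1])
#> [1] 0.035 1.064   (close to the sample-based BF = 1 SI of [0.04, 1.04])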
si(stan_model, verbose = FALSE) #> Support Interval #> #> Parameter | BF = 1 SI | Effects | Component #> -------------------------------------------------- #> (Intercept) | [0.41, 2.72] | fixed | conditional #> group1 | [0.44, 2.75] | fixed | conditional si(stan_model, BF = 3, verbose = FALSE) #> Support Interval #> #> Parameter | BF = 3 SI | Effects | Component #> -------------------------------------------------- #> (Intercept) | [0.83, 2.33] | fixed | conditional #> group1 | [0.66, 2.44] | fixed | conditional # emmGrid objects # --------------- library(emmeans) group_diff <- pairs(emmeans(stan_model, ~group)) si(group_diff, prior = stan_model, verbose = FALSE) #> Support Interval #> #> contrast | BF = 1 SI #> -------------------------------- #> group1 - group2 | [-2.76, -0.34] # brms models # ----------- library(brms) contrasts(sleep$group) <- contr.equalprior_pairs # see vignette my_custom_priors <- set_prior(\"student_t(3, 0, 1)\", class = \"b\") + set_prior(\"student_t(3, 0, 1)\", class = \"sd\", group = \"ID\") brms_model <- suppressWarnings(brm(extra ~ group + (1 | ID), data = sleep, prior = my_custom_priors, refresh = 0 )) #> Compiling Stan program... #> Start sampling si(brms_model, verbose = FALSE) #> Support Interval #> #> Parameter | BF = 1 SI | Effects | Component #> -------------------------------------------------- #> b_Intercept | [0.65, 2.47] | fixed | conditional #> b_group1 | [0.70, 2.43] | fixed | conditional # }"},{"path":"https://easystats.github.io/bayestestR/reference/simulate_correlation.html","id":null,"dir":"Reference","previous_headings":"","what":"Data Simulation — simulate_correlation","title":"Data Simulation — simulate_correlation","text":"Simulate data with specific characteristics.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_correlation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Data Simulation — simulate_correlation","text":"","code":"simulate_correlation(n = 100, r = 0.5, mean = 0, sd = 1, names = NULL, ...) simulate_ttest(n = 100, d = 0.5, names = NULL, ...) simulate_difference(n = 100, d = 0.5, names = NULL, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/simulate_correlation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Data Simulation — simulate_correlation","text":"n The number of observations to be generated. r A value or vector corresponding to the desired correlation coefficients. mean A value or vector corresponding to the mean of the variables. sd A value or vector corresponding to the SD of the variables. names A character vector of desired variable names. ... Arguments passed to or from other methods.
d A value or vector corresponding to the desired difference between the two groups.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_correlation.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Data Simulation — simulate_correlation","text":"","code":"# Correlation -------------------------------- data <- simulate_correlation(r = 0.5) plot(data$V1, data$V2) cor.test(data$V1, data$V2) #> #> \tPearson's product-moment correlation #> #> data: data$V1 and data$V2 #> t = 5.7155, df = 98, p-value = 1.18e-07 #> alternative hypothesis: true correlation is not equal to 0 #> 95 percent confidence interval: #> 0.3366433 0.6341398 #> sample estimates: #> cor #> 0.5 #> summary(lm(V2 ~ V1, data = data)) #> #> Call: #> lm(formula = V2 ~ V1, data = data) #> #> Residuals: #> Min 1Q Median 3Q Max #> -1.8566 -0.5694 -0.1116 0.5070 2.4567 #> #> Coefficients: #> Estimate Std. Error t value Pr(>|t|) #> (Intercept) -1.548e-17 8.704e-02 0.000 1 #> V1 5.000e-01 8.748e-02 5.715 1.18e-07 *** #> --- #> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 #> #> Residual standard error: 0.8704 on 98 degrees of freedom #> Multiple R-squared: 0.25,\tAdjusted R-squared: 0.2423 #> F-statistic: 32.67 on 1 and 98 DF, p-value: 1.18e-07 #> # Specify mean and SD data <- simulate_correlation(r = 0.5, n = 50, mean = c(0, 1), sd = c(0.7, 1.7)) cor.test(data$V1, data$V2) #> #> \tPearson's product-moment correlation #> #> data: data$V1 and data$V2 #> t = 4, df = 48, p-value = 0.000218 #> alternative hypothesis: true correlation is not equal to 0 #> 95 percent confidence interval: #> 0.2574879 0.6832563 #> sample estimates: #> cor #> 0.5 #> round(c(mean(data$V1), sd(data$V1)), 1) #> [1] 0.0 0.7 round(c(mean(data$V2), sd(data$V2)), 1) #> [1] 1.0 1.7 summary(lm(V2 ~ V1, data = data)) #> #> Call: #> lm(formula = V2 ~ V1, data = data) #> #> Residuals: #> Min 1Q Median 3Q Max #> -3.2354 -0.9753 -0.0633 1.2648 3.3477 #> #> Coefficients: #> Estimate Std. Error t value Pr(>|t|) #> (Intercept) 1.0000 0.2104 4.754 1.86e-05 *** #> V1 1.2143 0.3036 4.000 0.000218 *** #> --- #> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 #> #> Residual standard error: 1.487 on 48 degrees of freedom #> Multiple R-squared: 0.25,\tAdjusted R-squared: 0.2344 #> F-statistic: 16 on 1 and 48 DF, p-value: 0.000218 #> # Generate multiple variables cor_matrix <- matrix( c( 1.0, 0.2, 0.4, 0.2, 1.0, 0.3, 0.4, 0.3, 1.0 ), nrow = 3 ) data <- simulate_correlation(r = cor_matrix, names = c(\"y\", \"x1\", \"x2\")) cor(data) #> y x1 x2 #> y 1.0 0.2 0.4 #> x1 0.2 1.0 0.3 #> x2 0.4 0.3 1.0 summary(lm(y ~ x1, data = data)) #> #> Call: #> lm(formula = y ~ x1, data = data) #> #> Residuals: #> Min 1Q Median 3Q Max #> -2.12568 -0.76836 -0.08657 0.61647 2.76996 #> #> Coefficients: #> Estimate Std. Error t value Pr(>|t|) #> (Intercept) -2.077e-17 9.848e-02 0.000 1.000 #> x1 2.000e-01 9.897e-02 2.021 0.046 * #> --- #> Signif.
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 #> #> Residual standard error: 0.9848 on 98 degrees of freedom #> Multiple R-squared: 0.04,\tAdjusted R-squared: 0.0302 #> F-statistic: 4.083 on 1 and 98 DF, p-value: 0.04604 #> # t-test -------------------------------- data <- simulate_ttest(n = 30, d = 0.3) plot(data$V1, data$V0) round(c(mean(data$V1), sd(data$V1)), 1) #> [1] 0 1 diff(t.test(data$V1 ~ data$V0)$estimate) #> mean in group 1 #> 0.09185722 summary(lm(V1 ~ V0, data = data)) #> #> Call: #> lm(formula = V1 ~ V0, data = data) #> #> Residuals: #> Min 1Q Median 3Q Max #> -2.0821 -0.6721 0.0000 0.6032 2.0821 #> #> Coefficients: #> Estimate Std. Error t value Pr(>|t|) #> (Intercept) -0.04593 0.26139 -0.176 0.862 #> V01 0.09186 0.36966 0.248 0.806 #> #> Residual standard error: 1.012 on 28 degrees of freedom #> Multiple R-squared: 0.0022,\tAdjusted R-squared: -0.03344 #> F-statistic: 0.06175 on 1 and 28 DF, p-value: 0.8056 #> summary(glm(V0 ~ V1, data = data, family = \"binomial\")) #> #> Call: #> glm(formula = V0 ~ V1, family = \"binomial\", data = data) #> #> Coefficients: #> Estimate Std. Error z value Pr(>|z|) #> (Intercept) -2.983e-17 3.656e-01 0.000 1.000 #> V1 9.601e-02 3.740e-01 0.257 0.797 #> #> (Dispersion parameter for binomial family taken to be 1) #> #> Null deviance: 41.589 on 29 degrees of freedom #> Residual deviance: 41.523 on 28 degrees of freedom #> AIC: 45.523 #> #> Number of Fisher Scoring iterations: 3 #> # Difference -------------------------------- data <- simulate_difference(n = 30, d = 0.3) plot(data$V1, data$V0) round(c(mean(data$V1), sd(data$V1)), 1) #> [1] 0 1 diff(t.test(data$V1 ~ data$V0)$estimate) #> mean in group 1 #> 0.3 summary(lm(V1 ~ V0, data = data)) #> #> Call: #> lm(formula = V1 ~ V0, data = data) #> #> Residuals: #> Min 1Q Median 3Q Max #> -1.834 -0.677 0.000 0.677 1.834 #> #> Coefficients: #> Estimate Std. Error t value Pr(>|t|) #> (Intercept) -0.1500 0.2562 -0.586 0.563 #> V01 0.3000 0.3623 0.828 0.415 #> #> Residual standard error: 0.9922 on 28 degrees of freedom #> Multiple R-squared: 0.0239,\tAdjusted R-squared: -0.01096 #> F-statistic: 0.6857 on 1 and 28 DF, p-value: 0.4146 #> summary(glm(V0 ~ V1, data = data, family = \"binomial\")) #> #> Call: #> glm(formula = V0 ~ V1, family = \"binomial\", data = data) #> #> Coefficients: #> Estimate Std. 
Error z value Pr(>|z|) #> (Intercept) -4.569e-17 3.696e-01 0.000 1.000 #> V1 3.251e-01 3.877e-01 0.839 0.402 #> #> (Dispersion parameter for binomial family taken to be 1) #> #> Null deviance: 41.589 on 29 degrees of freedom #> Residual deviance: 40.865 on 28 degrees of freedom #> AIC: 44.865 #> #> Number of Fisher Scoring iterations: 4 #>"},{"path":"https://easystats.github.io/bayestestR/reference/simulate_prior.html","id":null,"dir":"Reference","previous_headings":"","what":"Returns Priors of a Model as Empirical Distributions — simulate_prior","title":"Returns Priors of a Model as Empirical Distributions — simulate_prior","text":"Transforms priors information into actual distributions.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_prior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Returns Priors of a Model as Empirical Distributions — simulate_prior","text":"","code":"simulate_prior(model, n = 1000, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/simulate_prior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Returns Priors of a Model as Empirical Distributions — simulate_prior","text":"model A stanreg, stanfit, brmsfit, blavaan, or MCMCglmm object. n Size of the simulated prior distributions. ... Currently not used.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/simulate_prior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Returns Priors of a Model as Empirical Distributions — simulate_prior","text":"","code":"# \donttest{ library(bayestestR) if (require(\"rstanarm\")) { model <- suppressWarnings( stan_glm(mpg ~ wt + am, data = mtcars, chains = 1, refresh = 0) ) simulate_prior(model) } #> (Intercept) wt am #> 1 -29.48895919 -50.67117077 -99.35969268 #> 2 -24.62538077 -45.70051165 -89.61286514 #> 3 -22.20399176 -43.22581126 -84.76029381 #> 4 -20.54372566 -41.52899133 -81.43304669 #> 5 -19.26616155 -40.22329926 -78.87275136 #> (remaining rows of simulated prior draws omitted)
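# The full print above is unwieldy; a compact usage pattern (n = 100 and
# head() are illustrative choices, not requirements) is to draw fewer
# simulations and inspect only the first rows:
priors <- simulate_prior(model, n = 100)
head(priors)   # one column of n prior draws per model parameter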
-17.55446427 #> 282 11.37578995 -8.90670832 -17.46491719 #> 283 11.42039663 -8.86111955 -17.37552343 #> 284 11.46492744 -8.81560831 -17.28628171 #> 285 11.50938303 -8.77017394 -17.19719072 #> 286 11.55376405 -8.72481579 -17.10824918 #> 287 11.59807112 -8.67953321 -17.01945583 #> 288 11.64230489 -8.63432557 -16.93080940 #> 289 11.68646595 -8.58919221 -16.84230866 #> 290 11.73055495 -8.54413251 -16.75395236 #> 291 11.77457247 -8.49914586 -16.66573927 #> 292 11.81851914 -8.45423162 -16.57766819 #> 293 11.86239555 -8.40938919 -16.48973791 #> 294 11.90620230 -8.36461796 -16.40194724 #> 295 11.94993998 -8.31991732 -16.31429500 #> 296 11.99360916 -8.27528667 -16.22678001 #> 297 12.03721044 -8.23072544 -16.13940111 #> 298 12.08074438 -8.18623301 -16.05215715 #> 299 12.12421156 -8.14180882 -15.96504699 #> 300 12.16761254 -8.09745229 -15.87806950 #> 301 12.21094788 -8.05316284 -15.79122354 #> 302 12.25421814 -8.00893991 -15.70450802 #> 303 12.29742386 -7.96478293 -15.61792182 #> 304 12.34056561 -7.92069134 -15.53146385 #> 305 12.38364390 -7.87666459 -15.44513303 #> 306 12.42665930 -7.83270214 -15.35892827 #> 307 12.46961232 -7.78880343 -15.27284851 #> 308 12.51250349 -7.74496792 -15.18689268 #> 309 12.55533334 -7.70119509 -15.10105975 #> 310 12.59810240 -7.65748440 -15.01534866 #> 311 12.64081117 -7.61383531 -14.92975839 #> 312 12.68346017 -7.57024732 -14.84428789 #> 313 12.72604990 -7.52671989 -14.75893617 #> 314 12.76858088 -7.48325252 -14.67370221 #> 315 12.81105359 -7.43984469 -14.58858500 #> 316 12.85346854 -7.39649589 -14.50358355 #> 317 12.89582622 -7.35320563 -14.41869688 #> 318 12.93812711 -7.30997341 -14.33392401 #> 319 12.98037170 -7.26679872 -14.24926396 #> 320 13.02256048 -7.22368108 -14.16471578 #> 321 13.06469391 -7.18062000 -14.08027851 #> 322 13.10677248 -7.13761500 -13.99595119 #> 323 13.14879665 -7.09466559 -13.91173289 #> 324 13.19076688 -7.05177130 -13.82762266 #> 325 13.23268365 -7.00893166 -13.74361960 #> 326 13.27454741 -6.96614619 -13.65972276 #> 327 13.31635862 -6.92341443 -13.57593124 #> 328 13.35811773 -6.88073592 -13.49224413 #> 329 13.39982519 -6.83811019 -13.40866052 #> 330 13.44148144 -6.79553680 -13.32517953 #> 331 13.48308694 -6.75301528 -13.24180027 #> 332 13.52464211 -6.71054519 -13.15852185 #> 333 13.56614741 -6.66812608 -13.07534340 #> 334 13.60760325 -6.62575751 -12.99226404 #> 335 13.64901007 -6.58343904 -12.90928293 #> 336 13.69036830 -6.54117023 -12.82639919 #> 337 13.73167837 -6.49895064 -12.74361197 #> 338 13.77294069 -6.45677985 -12.66092044 #> 339 13.81415569 -6.41465743 -12.57832375 #> 340 13.85532378 -6.37258295 -12.49582107 #> 341 13.89644537 -6.33055599 -12.41341157 #> 342 13.93752088 -6.28857613 -12.33109443 #> 343 13.97855071 -6.24664295 -12.24886882 #> 344 14.01953527 -6.20475604 -12.16673395 #> 345 14.06047496 -6.16291499 -12.08468899 #> 346 14.10137017 -6.12111939 -12.00273316 #> 347 14.14222132 -6.07936883 -11.92086565 #> 348 14.18302878 -6.03766292 -11.83908567 #> 349 14.22379296 -5.99600124 -11.75739245 #> 350 14.26451424 -5.95438340 -11.67578519 #> 351 14.30519301 -5.91280902 -11.59426312 #> 352 14.34582965 -5.87127768 -11.51282548 #> 353 14.38642456 -5.82978901 -11.43147149 #> 354 14.42697809 -5.78834261 -11.35020040 #> 355 14.46749064 -5.74693810 -11.26901144 #> 356 14.50796258 -5.70557509 -11.18790388 #> 357 14.54839428 -5.66425321 -11.10687695 #> 358 14.58878612 -5.62297208 -11.02592992 #> 359 14.62913845 -5.58173132 -10.94506205 #> 360 14.66945165 -5.54053055 -10.86427261 #> 361 14.70972608 -5.49936940 -10.78356086 #> 362 14.74996210 
-5.45824751 -10.70292608 #> 363 14.79016007 -5.41716451 -10.62236756 #> 364 14.83032035 -5.37612002 -10.54188457 #> 365 14.87044329 -5.33511370 -10.46147640 #> 366 14.91052926 -5.29414517 -10.38114235 #> 367 14.95057859 -5.25321408 -10.30088171 #> 368 14.99059164 -5.21232007 -10.22069378 #> 369 15.03056875 -5.17146278 -10.14057786 #> 370 15.07051028 -5.13064187 -10.06053327 #> 371 15.11041656 -5.08985698 -9.98055931 #> 372 15.15028793 -5.04910776 -9.90065530 #> 373 15.19012474 -5.00839387 -9.82082056 #> 374 15.22992733 -4.96771495 -9.74105440 #> 375 15.26969602 -4.92707068 -9.66135617 #> 376 15.30943116 -4.88646070 -9.58172519 #> 377 15.34913308 -4.84588467 -9.50216078 #> 378 15.38880210 -4.80534226 -9.42266230 #> 379 15.42843855 -4.76483314 -9.34322907 #> 380 15.46804277 -4.72435695 -9.26386045 #> 381 15.50761508 -4.68391339 -9.18455579 #> 382 15.54715581 -4.64350210 -9.10531442 #> 383 15.58666526 -4.60312277 -9.02613571 #> 384 15.62614378 -4.56277507 -8.94701901 #> 385 15.66559166 -4.52245866 -8.86796369 #> 386 15.70500924 -4.48217323 -8.78896911 #> 387 15.74439683 -4.44191845 -8.71003463 #> 388 15.78375474 -4.40169400 -8.63115963 #> 389 15.82308328 -4.36149957 -8.55234348 #> 390 15.86238278 -4.32133483 -8.47358555 #> 391 15.90165352 -4.28119946 -8.39488522 #> 392 15.94089584 -4.24109315 -8.31624188 #> 393 15.98011002 -4.20101560 -8.23765491 #> 394 16.01929638 -4.16096648 -8.15912370 #> 395 16.05845522 -4.12094548 -8.08064764 #> 396 16.09758685 -4.08095230 -8.00222611 #> 397 16.13669156 -4.04098662 -7.92385853 #> 398 16.17576966 -4.00104815 -7.84554428 #> 399 16.21482144 -3.96113657 -7.76728277 #> 400 16.25384721 -3.92125158 -7.68907341 #> 401 16.29284725 -3.88139288 -7.61091559 #> 402 16.33182186 -3.84156017 -7.53280873 #> 403 16.37077134 -3.80175314 -7.45475224 #> 404 16.40969598 -3.76197150 -7.37674553 #> 405 16.44859607 -3.72221496 -7.29878802 #> 406 16.48747190 -3.68248321 -7.22087913 #> 407 16.52632376 -3.64277595 -7.14301828 #> 408 16.56515193 -3.60309291 -7.06520490 #> 409 16.60395670 -3.56343377 -6.98743840 #> 410 16.64273836 -3.52379826 -6.90971823 #> 411 16.68149720 -3.48418608 -6.83204380 #> 412 16.72023348 -3.44459694 -6.75441456 #> 413 16.75894751 -3.40503056 -6.67682993 #> 414 16.79763955 -3.36548664 -6.59928936 #> 415 16.83630989 -3.32596490 -6.52179228 #> 416 16.87495880 -3.28646506 -6.44433813 #> 417 16.91358657 -3.24698683 -6.36692637 #> 418 16.95219347 -3.20752992 -6.28955642 #> 419 16.99077978 -3.16809407 -6.21222775 #> 420 17.02934576 -3.12867898 -6.13493980 #> 421 17.06789170 -3.08928438 -6.05769202 #> 422 17.10641787 -3.04990999 -5.98048386 #> 423 17.14492454 -3.01055552 -5.90331478 #> 424 17.18341198 -2.97122071 -5.82618424 #> 425 17.22188046 -2.93190528 -5.74909170 #> 426 17.26033025 -2.89260895 -5.67203661 #> 427 17.29876161 -2.85333144 -5.59501844 #> 428 17.33717483 -2.81407249 -5.51803666 #> 429 17.37557015 -2.77483182 -5.44109072 #> 430 17.41394785 -2.73560916 -5.36418010 #> 431 17.45230820 -2.69640425 -5.28730426 #> 432 17.49065145 -2.65721680 -5.21046268 #> 433 17.52897787 -2.61804655 -5.13365483 #> 434 17.56728772 -2.57889323 -5.05688017 #> 435 17.60558127 -2.53975658 -4.98013820 #> 436 17.64385878 -2.50063633 -4.90342838 #> 437 17.68212049 -2.46153220 -4.82675019 #> 438 17.72036669 -2.42244395 -4.75010312 #> 439 17.75859762 -2.38337129 -4.67348663 #> 440 17.79681354 -2.34431398 -4.59690023 #> 441 17.83501471 -2.30527174 -4.52034338 #> 442 17.87320139 -2.26624431 -4.44381558 #> 443 17.91137383 -2.22723144 -4.36731632 #> 444 17.94953228 -2.18823285 
-4.29084507 #> 445 17.98767701 -2.14924829 -4.21440134 #> 446 18.02580827 -2.11027751 -4.13798460 #> 447 18.06392630 -2.07132023 -4.06159436 #> 448 18.10203137 -2.03237621 -3.98523010 #> 449 18.14012373 -1.99344518 -3.90889133 #> 450 18.17820362 -1.95452688 -3.83257753 #> 451 18.21627131 -1.91562107 -3.75628820 #> 452 18.25432703 -1.87672748 -3.68002284 #> 453 18.29237104 -1.83784586 -3.60378095 #> 454 18.33040359 -1.79897595 -3.52756203 #> 455 18.36842493 -1.76011750 -3.45136557 #> 456 18.40643530 -1.72127026 -3.37519109 #> 457 18.44443496 -1.68243397 -3.29903808 #> 458 18.48242415 -1.64360837 -3.22290605 #> 459 18.52040313 -1.60479322 -3.14679450 #> 460 18.55837212 -1.56598827 -3.07070294 #> 461 18.59633139 -1.52719325 -2.99463087 #> 462 18.63428118 -1.48840793 -2.91857781 #> 463 18.67222174 -1.44963205 -2.84254325 #> 464 18.71015330 -1.41086535 -2.76652672 #> 465 18.74807611 -1.37210760 -2.69052771 #> 466 18.78599042 -1.33335853 -2.61454574 #> 467 18.82389647 -1.29461791 -2.53858033 #> 468 18.86179451 -1.25588547 -2.46263098 #> 469 18.89968477 -1.21716099 -2.38669720 #> 470 18.93756751 -1.17844419 -2.31077852 #> 471 18.97544296 -1.13973485 -2.23487444 #> 472 19.01331136 -1.10103270 -2.15898447 #> 473 19.05117296 -1.06233751 -2.08310815 #> 474 19.08902799 -1.02364903 -2.00724497 #> 475 19.12687670 -0.98496700 -1.93139446 #> 476 19.16471934 -0.94629119 -1.85555614 #> 477 19.20255613 -0.90762135 -1.77972952 #> 478 19.24038733 -0.86895723 -1.70391412 #> 479 19.27821316 -0.83029859 -1.62810946 #> 480 19.31603388 -0.79164518 -1.55231507 #> 481 19.35384972 -0.75299676 -1.47653045 #> 482 19.39166091 -0.71435308 -1.40075513 #> 483 19.42946771 -0.67571390 -1.32498863 #> 484 19.46727034 -0.63707897 -1.24923047 #> 485 19.50506905 -0.59844805 -1.17348017 #> 486 19.54286408 -0.55982090 -1.09773726 #> 487 19.58065566 -0.52119727 -1.02200126 #> 488 19.61844403 -0.48257691 -0.94627168 #> 489 19.65622943 -0.44395960 -0.87054806 #> 490 19.69401210 -0.40534507 -0.79482991 #> 491 19.73179227 -0.36673310 -0.71911675 #> 492 19.76957019 -0.32812343 -0.64340812 #> 493 19.80734609 -0.28951582 -0.56770353 #> 494 19.84512021 -0.25091003 -0.49200252 #> 495 19.88289279 -0.21230582 -0.41630459 #> 496 19.92066406 -0.17370294 -0.34060928 #> 497 19.95843427 -0.13510116 -0.26491611 #> 498 19.99620364 -0.09650022 -0.18922460 #> 499 20.03397242 -0.05789989 -0.11353429 #> 500 20.07174085 -0.01929992 -0.03784468 #> 501 20.10950915 0.01929992 0.03784468 #> 502 20.14727758 0.05789989 0.11353429 #> 503 20.18504636 0.09650022 0.18922460 #> 504 20.22281573 0.13510116 0.26491611 #> 505 20.26058594 0.17370294 0.34060928 #> 506 20.29835721 0.21230582 0.41630459 #> 507 20.33612979 0.25091003 0.49200252 #> 508 20.37390391 0.28951582 0.56770353 #> 509 20.41167981 0.32812343 0.64340812 #> 510 20.44945773 0.36673310 0.71911675 #> 511 20.48723790 0.40534507 0.79482991 #> 512 20.52502057 0.44395960 0.87054806 #> 513 20.56280597 0.48257691 0.94627168 #> 514 20.60059434 0.52119727 1.02200126 #> 515 20.63838592 0.55982090 1.09773726 #> 516 20.67618095 0.59844805 1.17348017 #> 517 20.71397966 0.63707897 1.24923047 #> 518 20.75178229 0.67571390 1.32498863 #> 519 20.78958909 0.71435308 1.40075513 #> 520 20.82740028 0.75299676 1.47653045 #> 521 20.86521612 0.79164518 1.55231507 #> 522 20.90303684 0.83029859 1.62810946 #> 523 20.94086267 0.86895723 1.70391412 #> 524 20.97869387 0.90762135 1.77972952 #> 525 21.01653066 0.94629119 1.85555614 #> 526 21.05437330 0.98496700 1.93139446 #> 527 21.09222201 1.02364903 2.00724497 #> 528 21.13007704 
1.06233751 2.08310815 #> 529 21.16793864 1.10103270 2.15898447 #> 530 21.20580704 1.13973485 2.23487444 #> 531 21.24368249 1.17844419 2.31077852 #> 532 21.28156523 1.21716099 2.38669720 #> 533 21.31945549 1.25588547 2.46263098 #> 534 21.35735353 1.29461791 2.53858033 #> 535 21.39525958 1.33335853 2.61454574 #> 536 21.43317389 1.37210760 2.69052771 #> 537 21.47109670 1.41086535 2.76652672 #> 538 21.50902826 1.44963205 2.84254325 #> 539 21.54696882 1.48840793 2.91857781 #> 540 21.58491861 1.52719325 2.99463087 #> 541 21.62287788 1.56598827 3.07070294 #> 542 21.66084687 1.60479322 3.14679450 #> 543 21.69882585 1.64360837 3.22290605 #> 544 21.73681504 1.68243397 3.29903808 #> 545 21.77481470 1.72127026 3.37519109 #> 546 21.81282507 1.76011750 3.45136557 #> 547 21.85084641 1.79897595 3.52756203 #> 548 21.88887896 1.83784586 3.60378095 #> 549 21.92692297 1.87672748 3.68002284 #> 550 21.96497869 1.91562107 3.75628820 #> 551 22.00304638 1.95452688 3.83257753 #> 552 22.04112627 1.99344518 3.90889133 #> 553 22.07921863 2.03237621 3.98523010 #> 554 22.11732370 2.07132023 4.06159436 #> 555 22.15544173 2.11027751 4.13798460 #> 556 22.19357299 2.14924829 4.21440134 #> 557 22.23171772 2.18823285 4.29084507 #> 558 22.26987617 2.22723144 4.36731632 #> 559 22.30804861 2.26624431 4.44381558 #> 560 22.34623529 2.30527174 4.52034338 #> 561 22.38443646 2.34431398 4.59690023 #> 562 22.42265238 2.38337129 4.67348663 #> 563 22.46088331 2.42244395 4.75010312 #> 564 22.49912951 2.46153220 4.82675019 #> 565 22.53739122 2.50063633 4.90342838 #> 566 22.57566873 2.53975658 4.98013820 #> 567 22.61396228 2.57889323 5.05688017 #> 568 22.65227213 2.61804655 5.13365483 #> 569 22.69059855 2.65721680 5.21046268 #> 570 22.72894180 2.69640425 5.28730426 #> 571 22.76730215 2.73560916 5.36418010 #> 572 22.80567985 2.77483182 5.44109072 #> 573 22.84407517 2.81407249 5.51803666 #> 574 22.88248839 2.85333144 5.59501844 #> 575 22.92091975 2.89260895 5.67203661 #> 576 22.95936954 2.93190528 5.74909170 #> 577 22.99783802 2.97122071 5.82618424 #> 578 23.03632546 3.01055552 5.90331478 #> 579 23.07483213 3.04990999 5.98048386 #> 580 23.11335830 3.08928438 6.05769202 #> 581 23.15190424 3.12867898 6.13493980 #> 582 23.19047022 3.16809407 6.21222775 #> 583 23.22905653 3.20752992 6.28955642 #> 584 23.26766343 3.24698683 6.36692637 #> 585 23.30629120 3.28646506 6.44433813 #> 586 23.34494011 3.32596490 6.52179228 #> 587 23.38361045 3.36548664 6.59928936 #> 588 23.42230249 3.40503056 6.67682993 #> 589 23.46101652 3.44459694 6.75441456 #> 590 23.49975280 3.48418608 6.83204380 #> 591 23.53851164 3.52379826 6.90971823 #> 592 23.57729330 3.56343377 6.98743840 #> 593 23.61609807 3.60309291 7.06520490 #> 594 23.65492624 3.64277595 7.14301828 #> 595 23.69377810 3.68248321 7.22087913 #> 596 23.73265393 3.72221496 7.29878802 #> 597 23.77155402 3.76197150 7.37674553 #> 598 23.81047866 3.80175314 7.45475224 #> 599 23.84942814 3.84156017 7.53280873 #> 600 23.88840275 3.88139288 7.61091559 #> 601 23.92740279 3.92125158 7.68907341 #> 602 23.96642856 3.96113657 7.76728277 #> 603 24.00548034 4.00104815 7.84554428 #> 604 24.04455844 4.04098662 7.92385853 #> 605 24.08366315 4.08095230 8.00222611 #> 606 24.12279478 4.12094548 8.08064764 #> 607 24.16195362 4.16096648 8.15912370 #> 608 24.20113998 4.20101560 8.23765491 #> 609 24.24035416 4.24109315 8.31624188 #> 610 24.27959648 4.28119946 8.39488522 #> 611 24.31886722 4.32133483 8.47358555 #> 612 24.35816672 4.36149957 8.55234348 #> 613 24.39749526 4.40169400 8.63115963 #> 614 24.43685317 4.44191845 8.71003463 #> 615 
24.47624076 4.48217323 8.78896911 #> 616 24.51565834 4.52245866 8.86796369 #> 617 24.55510622 4.56277507 8.94701901 #> 618 24.59458474 4.60312277 9.02613571 #> 619 24.63409419 4.64350210 9.10531442 #> 620 24.67363492 4.68391339 9.18455579 #> 621 24.71320723 4.72435695 9.26386045 #> 622 24.75281145 4.76483314 9.34322907 #> 623 24.79244790 4.80534226 9.42266230 #> 624 24.83211692 4.84588467 9.50216078 #> 625 24.87181884 4.88646070 9.58172519 #> 626 24.91155398 4.92707068 9.66135617 #> 627 24.95132267 4.96771495 9.74105440 #> 628 24.99112526 5.00839387 9.82082056 #> 629 25.03096207 5.04910776 9.90065530 #> 630 25.07083344 5.08985698 9.98055931 #> 631 25.11073972 5.13064187 10.06053327 #> 632 25.15068125 5.17146278 10.14057786 #> 633 25.19065836 5.21232007 10.22069378 #> 634 25.23067141 5.25321408 10.30088171 #> 635 25.27072074 5.29414517 10.38114235 #> 636 25.31080671 5.33511370 10.46147640 #> 637 25.35092965 5.37612002 10.54188457 #> 638 25.39108993 5.41716451 10.62236756 #> 639 25.43128790 5.45824751 10.70292608 #> 640 25.47152392 5.49936940 10.78356086 #> 641 25.51179835 5.54053055 10.86427261 #> 642 25.55211155 5.58173132 10.94506205 #> 643 25.59246388 5.62297208 11.02592992 #> 644 25.63285572 5.66425321 11.10687695 #> 645 25.67328742 5.70557509 11.18790388 #> 646 25.71375936 5.74693810 11.26901144 #> 647 25.75427191 5.78834261 11.35020040 #> 648 25.79482544 5.82978901 11.43147149 #> 649 25.83542035 5.87127768 11.51282548 #> 650 25.87605699 5.91280902 11.59426312 #> 651 25.91673576 5.95438340 11.67578519 #> 652 25.95745704 5.99600124 11.75739245 #> 653 25.99822122 6.03766292 11.83908567 #> 654 26.03902868 6.07936883 11.92086565 #> 655 26.07987983 6.12111939 12.00273316 #> 656 26.12077504 6.16291499 12.08468899 #> 657 26.16171473 6.20475604 12.16673395 #> 658 26.20269929 6.24664295 12.24886882 #> 659 26.24372912 6.28857613 12.33109443 #> 660 26.28480463 6.33055599 12.41341157 #> 661 26.32592622 6.37258295 12.49582107 #> 662 26.36709431 6.41465743 12.57832375 #> 663 26.40830931 6.45677985 12.66092044 #> 664 26.44957163 6.49895064 12.74361197 #> 665 26.49088170 6.54117023 12.82639919 #> 666 26.53223993 6.58343904 12.90928293 #> 667 26.57364675 6.62575751 12.99226404 #> 668 26.61510259 6.66812608 13.07534340 #> 669 26.65660789 6.71054519 13.15852185 #> 670 26.69816306 6.75301528 13.24180027 #> 671 26.73976856 6.79553680 13.32517953 #> 672 26.78142481 6.83811019 13.40866052 #> 673 26.82313227 6.88073592 13.49224413 #> 674 26.86489138 6.92341443 13.57593124 #> 675 26.90670259 6.96614619 13.65972276 #> 676 26.94856635 7.00893166 13.74361960 #> 677 26.99048312 7.05177130 13.82762266 #> 678 27.03245335 7.09466559 13.91173289 #> 679 27.07447752 7.13761500 13.99595119 #> 680 27.11655609 7.18062000 14.08027851 #> 681 27.15868952 7.22368108 14.16471578 #> 682 27.20087830 7.26679872 14.24926396 #> 683 27.24312289 7.30997341 14.33392401 #> 684 27.28542378 7.35320563 14.41869688 #> 685 27.32778146 7.39649589 14.50358355 #> 686 27.37019641 7.43984469 14.58858500 #> 687 27.41266912 7.48325252 14.67370221 #> 688 27.45520010 7.52671989 14.75893617 #> 689 27.49778983 7.57024732 14.84428789 #> 690 27.54043883 7.61383531 14.92975839 #> 691 27.58314760 7.65748440 15.01534866 #> 692 27.62591666 7.70119509 15.10105975 #> 693 27.66874651 7.74496792 15.18689268 #> 694 27.71163768 7.78880343 15.27284851 #> 695 27.75459070 7.83270214 15.35892827 #> 696 27.79760610 7.87666459 15.44513303 #> 697 27.84068439 7.92069134 15.53146385 #> 698 27.88382614 7.96478293 15.61792182 #> 699 27.92703186 8.00893991 15.70450802 #> 700 
27.97030212 8.05316284 15.79122354 #> 701 28.01363746 8.09745229 15.87806950 #> 702 28.05703844 8.14180882 15.96504699 #> 703 28.10050562 8.18623301 16.05215715 #> 704 28.14403956 8.23072544 16.13940111 #> 705 28.18764084 8.27528667 16.22678001 #> 706 28.23131002 8.31991732 16.31429500 #> 707 28.27504770 8.36461796 16.40194724 #> 708 28.31885445 8.40938919 16.48973791 #> 709 28.36273086 8.45423162 16.57766819 #> 710 28.40667753 8.49914586 16.66573927 #> 711 28.45069505 8.54413251 16.75395236 #> 712 28.49478405 8.58919221 16.84230866 #> 713 28.53894511 8.63432557 16.93080940 #> 714 28.58317888 8.67953321 17.01945583 #> 715 28.62748595 8.72481579 17.10824918 #> 716 28.67186697 8.77017394 17.19719072 #> 717 28.71632256 8.81560831 17.28628171 #> 718 28.76085337 8.86111955 17.37552343 #> 719 28.80546005 8.90670832 17.46491719 #> 720 28.85014323 8.95237528 17.55446427 #> 721 28.89490358 8.99812112 17.64416601 #> 722 28.93974177 9.04394650 17.73402373 #> 723 28.98465845 9.08985211 17.82403877 #> 724 29.02965432 9.13583864 17.91421248 #> 725 29.07473004 9.18190679 18.00454624 #> 726 29.11988632 9.22805727 18.09504142 #> 727 29.16512384 9.27429077 18.18569943 #> 728 29.21044330 9.32060803 18.27652165 #> 729 29.25584543 9.36700977 18.36750953 #> 730 29.30133092 9.41349671 18.45866448 #> 731 29.34690051 9.46006960 18.54998796 #> 732 29.39255492 9.50672918 18.64148144 #> 733 29.43829489 9.55347620 18.73314638 #> 734 29.48412117 9.60031143 18.82498428 #> 735 29.53003451 9.64723563 18.91699665 #> 736 29.57603567 9.69424959 19.00918501 #> 737 29.62212540 9.74135408 19.10155090 #> 738 29.66830450 9.78854990 19.19409587 #> 739 29.71457374 9.83583784 19.28682148 #> 740 29.76093391 9.88321871 19.37972932 #> 741 29.80738581 9.93069334 19.47282100 #> 742 29.85393025 9.97826254 19.56609813 #> 743 29.90056804 10.02592715 19.65956234 #> 744 29.94730001 10.07368801 19.75321528 #> 745 29.99412699 10.12154597 19.84705863 #> 746 30.04104982 10.16950190 19.94109407 #> 747 30.08806935 10.21755665 20.03532330 #> 748 30.13518644 10.26571111 20.12974805 #> 749 30.18240196 10.31396617 20.22437005 #> 750 30.22971679 10.36232271 20.31919106 #> 751 30.27713180 10.41078166 20.41421287 #> 752 30.32464791 10.45934393 20.50943726 #> 753 30.37226602 10.50801043 20.60486607 #> 754 30.41998704 10.55678212 20.70050111 #> 755 30.46781189 10.60565993 20.79634425 #> 756 30.51574153 10.65464482 20.89239737 #> 757 30.56377689 10.70373777 20.98866237 #> 758 30.61191892 10.75293974 21.08514115 #> 759 30.66016861 10.80225174 21.18183568 #> 760 30.70852693 10.85167476 21.27874790 #> 761 30.75699487 10.90120981 21.37587981 #> 762 30.80557343 10.95085792 21.47323341 #> 763 30.85426363 11.00062012 21.57081073 #> 764 30.90306649 11.05049746 21.66861383 #> 765 30.95198305 11.10049101 21.76664478 #> 766 31.00101435 11.15060182 21.86490569 #> 767 31.05016146 11.20083100 21.96339869 #> 768 31.09942545 11.25117963 22.06212592 #> 769 31.14880741 11.30164882 22.16108956 #> 770 31.19830844 11.35223971 22.26029182 #> 771 31.24792964 11.40295342 22.35973493 #> 772 31.29767215 11.45379110 22.45942114 #> 773 31.34753711 11.50475393 22.55935273 #> 774 31.39752567 11.55584308 22.65953202 #> 775 31.44763899 11.60705973 22.75996135 #> 776 31.49787826 11.65840511 22.86064307 #> 777 31.54824467 11.70988043 22.96157960 #> 778 31.59873943 11.76148693 23.06277335 #> 779 31.64936377 11.81322587 23.16422679 #> 780 31.70011894 11.86509850 23.26594240 #> 781 31.75100618 11.91710612 23.36792270 #> 782 31.80202677 11.96925002 23.47017023 #> 783 31.85318200 12.02153153 
23.57268759 #> 784 31.90447318 12.07395198 23.67547739 #> 785 31.95590162 12.12651271 23.77854228 #> 786 32.00746867 12.17921510 23.88188493 #> 787 32.05917568 12.23206054 23.98550808 #> 788 32.11102402 12.28505042 24.08941447 #> 789 32.16301509 12.33818617 24.19360689 #> 790 32.21515031 12.39146924 24.29808818 #> 791 32.26743109 12.44490108 24.40286118 #> 792 32.31985888 12.49848317 24.50792882 #> 793 32.37243516 12.55221701 24.61329402 #> 794 32.42516141 12.60610412 24.71895976 #> 795 32.47803913 12.66014605 24.82492908 #> 796 32.53106986 12.71434435 24.93120503 #> 797 32.58425515 12.76870061 25.03779071 #> 798 32.63759656 12.82321643 25.14468927 #> 799 32.69109569 12.87789344 25.25190390 #> 800 32.74475415 12.93273329 25.35943784 #> 801 32.79857357 12.98773765 25.46729436 #> 802 32.85255562 13.04290822 25.57547679 #> 803 32.90670198 13.09824671 25.68398851 #> 804 32.96101436 13.15375487 25.79283292 #> 805 33.01549448 13.20943448 25.90201352 #> 806 33.07014411 13.26528732 26.01153380 #> 807 33.12496502 13.32131521 26.12139735 #> 808 33.17995902 13.37752001 26.23160778 #> 809 33.23512795 13.43390359 26.34216877 #> 810 33.29047367 13.49046784 26.45308404 #> 811 33.34599806 13.54721471 26.56435740 #> 812 33.40170305 13.60414614 26.67599266 #> 813 33.45759056 13.66126413 26.78799373 #> 814 33.51366259 13.71857068 26.90036458 #> 815 33.56992114 13.77606787 27.01310920 #> 816 33.62636823 13.83375775 27.12623169 #> 817 33.68300594 13.89164244 27.23973617 #> 818 33.73983636 13.94972408 27.35362686 #> 819 33.79686162 14.00800486 27.46790803 #> 820 33.85408389 14.06648698 27.58258399 #> 821 33.91150536 14.12517269 27.69765917 #> 822 33.96912826 14.18406427 27.81313804 #> 823 34.02695486 14.24316404 27.92902512 #> 824 34.08498747 14.30247434 28.04532505 #> 825 34.14322842 14.36199757 28.16204251 #> 826 34.20168010 14.42173617 28.27918226 #> 827 34.26034491 14.48169260 28.39674915 #> 828 34.31922531 14.54186936 28.51474811 #> 829 34.37832381 14.60226902 28.63318412 #> 830 34.43764293 14.66289417 28.75206229 #> 831 34.49718526 14.72374743 28.87138777 #> 832 34.55695343 14.78483150 28.99116583 #> 833 34.61695009 14.84614910 29.11140180 #> 834 34.67717796 14.90770300 29.23210114 #> 835 34.73763980 14.96949602 29.35326936 #> 836 34.79833842 15.03153103 29.47491209 #> 837 34.85927666 15.09381095 29.59703504 #> 838 34.92045744 15.15633874 29.71964405 #> 839 34.98188371 15.21911742 29.84274503 #> 840 35.04355848 15.28215007 29.96634400 #> 841 35.10548481 15.34543982 30.09044711 #> 842 35.16766580 15.40898984 30.21506060 #> 843 35.23010464 15.47280339 30.34019081 #> 844 35.29280455 15.53688376 30.46584422 #> 845 35.35576882 15.60123430 30.59202742 #> 846 35.41900080 15.66585845 30.71874712 #> 847 35.48250390 15.73075968 30.84601015 #> 848 35.54628158 15.79594155 30.97382346 #> 849 35.61033739 15.86140767 31.10219416 #> 850 35.67467493 15.92716172 31.23112945 #> 851 35.73929787 15.99320746 31.36063671 #> 852 35.80420996 16.05954871 31.49072342 #> 853 35.86941502 16.12618937 31.62139725 #> 854 35.93491692 16.19313342 31.75266597 #> 855 36.00071963 16.26038490 31.88453754 #> 856 36.06682721 16.32794796 32.01702006 #> 857 36.13324376 16.39582679 32.15012178 #> 858 36.19997349 16.46402570 32.28385114 #> 859 36.26702070 16.53254908 32.41821673 #> 860 36.33438975 16.60140139 32.55322731 #> 861 36.40208512 16.67058720 32.68889184 #> 862 36.47011136 16.74011116 32.82521945 #> 863 36.53847312 16.80997803 32.96221945 #> 864 36.60717514 16.88019266 33.09990136 #> 865 36.67622228 16.95075999 33.23827490 #> 866 
36.74561948 17.02168511 33.37734998 #> 867 36.81537180 17.09297315 33.51713674 #> 868 36.88548441 17.16462942 33.65764552 #> 869 36.95596257 17.23665928 33.79888689 #> 870 37.02681168 17.30906827 33.94087165 #> 871 37.09803724 17.38186199 34.08361085 #> 872 37.16964489 17.45504622 34.22711577 #> 873 37.24164039 17.52862683 34.37139794 #> 874 37.31402961 17.60260984 34.51646917 #> 875 37.38681858 17.67700139 34.66234151 #> 876 37.46001346 17.75180779 34.80902730 #> 877 37.53362055 17.82703547 34.95653917 #> 878 37.60764628 17.90269102 35.10489004 #> 879 37.68209726 17.97878118 35.25409312 #> 880 37.75698025 18.05531285 35.40416195 #> 881 37.83230215 18.13229310 35.55511039 #> 882 37.90807004 18.20972917 35.70695262 #> 883 37.98429118 18.28762846 35.85970318 #> 884 38.06097300 18.36599857 36.01337696 #> 885 38.13812311 18.44484728 36.16798921 #> 886 38.21574931 18.52418257 36.32355557 #> 887 38.29385960 18.60401260 36.48009207 #> 888 38.37246219 18.68434577 36.63761514 #> 889 38.45156547 18.76519066 36.79614165 #> 890 38.53117809 18.84655610 36.95568888 #> 891 38.61130890 18.92845114 37.11627458 #> 892 38.69196697 19.01088505 37.27791695 #> 893 38.77316165 19.09386737 37.44063469 #> 894 38.85490250 19.17740790 37.60444700 #> 895 38.93719936 19.26151668 37.76937357 #> 896 39.02006234 19.34620404 37.93543467 #> 897 39.10350182 19.43148060 38.10265110 #> 898 39.18752848 19.51735726 38.27104425 #> 899 39.27215328 19.60384523 38.44063612 #> 900 39.35738751 19.69095605 38.61144932 #> 901 39.44324278 19.77870159 38.78350711 #> 902 39.52973105 19.86709405 38.95683343 #> 903 39.61686459 19.95614600 39.13145293 #> 904 39.70465607 20.04587038 39.30739097 #> 905 39.79311854 20.13628051 39.48467368 #> 906 39.88226541 20.22739012 39.66332799 #> 907 39.97211054 20.31921335 39.84338162 #> 908 40.06266820 20.41176480 40.02486319 #> 909 40.15395309 20.50505950 40.20780218 #> 910 40.24598041 20.59911297 40.39222901 #> 911 40.33876581 20.69394121 40.57817509 #> 912 40.43232546 20.78956076 40.76567280 #> 913 40.52667608 20.88598868 40.95475563 #> 914 40.62183490 20.98324260 41.14545815 #> 915 40.71781976 21.08134074 41.33781607 #> 916 40.81464908 21.18030194 41.53186634 #> 917 40.91234193 21.28014568 41.72764716 #> 918 41.01091803 21.38089212 41.92519805 #> 919 41.11039779 21.48256211 42.12455992 #> 920 41.21080236 21.58517727 42.32577514 #> 921 41.31215363 21.68875998 42.52888760 #> 922 41.41447430 21.79333343 42.73394277 #> 923 41.51778790 21.89892167 42.94098782 #> 924 41.62211885 22.00554966 43.15007168 #> 925 41.72749248 22.11324328 43.36124513 #> 926 41.83393511 22.22202945 43.57456090 #> 927 41.94147407 22.33193608 43.79007376 #> 928 42.05013777 22.44299221 44.00784065 #> 929 42.15995577 22.55522805 44.22792080 #> 930 42.27095880 22.66867502 44.45037581 #> 931 42.38317888 22.78336584 44.67526985 #> 932 42.49664937 22.89933459 44.90266975 #> 933 42.61140501 23.01661679 45.13264517 #> 934 42.72748207 23.13524951 45.36526877 #> 935 42.84491839 23.25527140 45.60061638 #> 936 42.96375349 23.37672287 45.83876719 #> 937 43.08402865 23.49964612 46.07980397 #> 938 43.20578707 23.62408527 46.32381326 #> 939 43.32907393 23.75008652 46.57088562 #> 940 43.45393657 23.87769825 46.82111591 #> 941 43.58042460 24.00697114 47.07460353 #> 942 43.70859003 24.13795837 47.33145276 #> 943 43.83848748 24.27071576 47.59177303 #> 944 43.97017433 24.40530194 47.85567935 #> 945 44.10371092 24.54177859 48.12329261 #> 946 44.23916074 24.68021059 48.39474008 #> 947 44.37659069 24.82066630 48.67015580 #> 948 44.51607129 24.96321783 
48.94968113 #> 949 44.65767699 25.10794124 49.23346526 #> 950 44.80148645 25.25491694 49.52166582 #> 951 44.94758288 25.40422995 49.81444956 #> 952 45.09605437 25.55597032 50.11199304 #> 953 45.24699433 25.71023350 50.41448343 #> 954 45.40050189 25.86712082 50.72211942 #> 955 45.55668243 26.02673996 51.03511216 #> 956 45.71564806 26.18920551 51.35368636 #> 957 45.87751827 26.35463960 51.67808147 #> 958 46.04242058 26.52317254 52.00855303 #> 959 46.21049127 26.69494362 52.34537417 #> 960 46.38187624 26.87010194 52.68883727 #> 961 46.55673191 27.04880738 53.03925581 #> 962 46.73522631 27.23123167 53.39696653 #> 963 46.91754024 27.41755958 53.76233176 #> 964 47.10386864 27.60799035 54.13574218 #> 965 47.29442211 27.80273920 54.51761980 #> 966 47.48942862 28.00203915 54.90842153 #> 967 47.68913557 28.20614302 55.30864314 #> 968 47.89381203 28.41532581 55.71882387 #> 969 48.10375139 28.62988737 56.13955169 #> 970 48.31927443 28.85015555 56.57146946 #> 971 48.54073289 29.07648983 57.01528205 #> 972 48.76851358 29.30928553 57.47176469 #> 973 49.00304333 29.54897890 57.94177276 #> 974 49.24479475 29.79605292 58.42625335 #> 975 49.49429305 30.05104439 58.92625904 #> 976 49.75212431 30.31455229 59.44296437 #> 977 50.01894537 30.58724790 59.97768563 #> 978 50.29549576 30.86988706 60.53190492 #> 979 50.58261247 31.16332518 61.10729958 #> 980 50.88124799 31.46853572 61.70577846 #> 981 51.19249290 31.78663325 62.32952709 #> 982 51.51760417 32.11890246 62.98106456 #> 983 51.85804137 32.46683501 63.66331584 #> 984 52.21551309 32.83217714 64.37970508 #> 985 52.59203784 33.21699178 65.13427743 #> 986 52.99002469 33.62374105 65.93186079 #> 987 53.41238238 34.05539773 66.77828439 #> 988 53.86266951 34.51559876 67.68067983 #> 989 54.34530635 35.00886175 68.64790554 #> 990 54.86588134 35.54089816 69.69116098 #> 991 55.43160829 36.11908064 70.82490296 #> 992 56.05203383 36.75316600 72.06826334 #> 993 56.74018057 37.45646357 73.44734004 #> 994 57.51449626 38.24782726 74.99910312 #> 995 58.40240924 39.15528929 76.77852025 #> 996 59.44741155 40.22329926 78.87275136 #> 997 60.72497566 41.52899133 81.43304669 #> 998 62.38524176 43.22581126 84.76029381 #> 999 64.80663077 45.70051165 89.61286514 #> 1000 69.67020919 50.67117077 99.35969268 # }"},{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":null,"dir":"Reference","previous_headings":"","what":"Simpson's paradox dataset simulation — simulate_simpson","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"Simpson's paradox, Yule-Simpson effect, phenomenon probability statistics, trend appears several different groups data disappears reverses groups combined.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"","code":"simulate_simpson( n = 100, r = 0.5, groups = 3, difference = 1, group_prefix = \"G_\" )"},{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"n number observations group generated (minimum 4). r value vector corresponding desired correlation coefficients. groups Number groups (groups can participants, clusters, anything). difference Difference groups. 
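A more compact way to inspect these simulated priors than printing every draw is to summarize them; a minimal sketch, reusing the model fitted in the example above:

library(bayestestR)
# medians and credible intervals of the empirical prior distributions
describe_posterior(simulate_prior(model), test = NULL)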
{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":null,"dir":"Reference","previous_headings":"","what":"Simpson's paradox dataset simulation — simulate_simpson","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"Simpson's paradox, or the Yule-Simpson effect, is a phenomenon in probability and statistics, in which a trend appears in several different groups of data but disappears or reverses when these groups are combined.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"","code":"simulate_simpson( n = 100, r = 0.5, groups = 3, difference = 1, group_prefix = \"G_\" )"},{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"n The number of observations to be generated for each group (minimum 4). r A value or vector corresponding to the desired correlation coefficients. groups Number of groups (groups can be participants, clusters, anything). difference Difference between the groups. group_prefix The prefix of the group name (e.g., \"G_1\", \"G_2\", \"G_3\", ...).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"A dataset.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"","code":"data <- simulate_simpson(n = 10, groups = 5, r = 0.5) if (require(\"ggplot2\")) { ggplot(data, aes(x = V1, y = V2)) + geom_point(aes(color = Group)) + geom_smooth(aes(color = Group), method = \"lm\") + geom_smooth(method = \"lm\") } #> Loading required package: ggplot2 #> `geom_smooth()` using formula = 'y ~ x' #> `geom_smooth()` using formula = 'y ~ x'"},
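The reversal can also be checked numerically rather than visually; a minimal sketch using data simulated the same way:

library(bayestestR)
data <- simulate_simpson(n = 100, groups = 3, r = 0.5)
# within each group the correlation is positive (close to r = 0.5) ...
sapply(split(data, data$Group), function(g) cor(g$V1, g$V2))
# ... while the pooled correlation across all groups flips direction,
# which is the paradox the function is built to produce
cor(data$V1, data$V2)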
{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":null,"dir":"Reference","previous_headings":"","what":"Shortest Probability Interval (SPI) — spi","title":"Shortest Probability Interval (SPI) — spi","text":"Compute the Shortest Probability Interval (SPI) of posterior distributions. The SPI is a more computationally stable HDI. The implementation is based on the algorithm in the SPIn package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Shortest Probability Interval (SPI) — spi","text":"","code":"spi(x, ...) # S3 method for class 'numeric' spi(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'data.frame' spi(x, ci = 0.95, rvar_col = NULL, verbose = TRUE, ...) # S3 method for class 'stanreg' spi( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' spi( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'get_predicted' spi(x, ci = 0.95, use_iterations = FALSE, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Shortest Probability Interval (SPI) — spi","text":"x A vector representing a posterior distribution, or a data frame of such vectors. Can also be a Bayesian model. bayestestR supports a wide range of models (see, for example, methods(\"hdi\")); not all of those are documented in the 'Usage' section, because methods for other classes mostly resemble the arguments of the .numeric or .data.frame methods. ... Currently not used. ci Value or vector of probability of the (credible) interval - CI (between 0 and 1) to be estimated. Default to .95 (95%). verbose Toggle off warnings. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). effects Should results for fixed effects, random effects, or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model, or the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output. use_iterations Logical, if TRUE and x is a get_predicted object (as returned by insight::get_predicted()), the function is applied to the iterations instead of the predictions. This only applies to models that return iterations for predicted values (e.g., brmsfit models).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Shortest Probability Interval (SPI) — spi","text":"A data frame with the following columns: Parameter — the model parameter(s), if x is a model-object. If x is a vector, this column is missing. CI — the probability of the credible interval. CI_low, CI_high — the lower and upper credible interval limits for the parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Shortest Probability Interval (SPI) — spi","text":"The SPI is an alternative method to the HDI (hdi()) to quantify the uncertainty of (posterior) distributions. It is said to be more stable than the HDI, because the \"HDI can be noisy (that is, have a high Monte Carlo error)\" (Liu et al. 2015). Furthermore, the HDI is sensitive to additional assumptions, in particular assumptions related to the different estimation methods, which can make the HDI less accurate or reliable.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Shortest Probability Interval (SPI) — spi","text":"The code to compute the SPI was adapted from the SPIn package and slightly modified to be more robust for Stan models. Thus, credits go to Ying Liu for the original SPI algorithm and R implementation.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Shortest Probability Interval (SPI) — spi","text":"Liu, Y., Gelman, A., & Zheng, T. (2015). Simulation-efficient shortest probability intervals. Statistics and Computing, 25(4), 809–819. https://doi.org/10.1007/s11222-015-9563-8","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Shortest Probability Interval (SPI) — spi","text":"","code":"library(bayestestR) posterior <- rnorm(1000) spi(posterior) #> 95% SPI: [-1.96, 1.96] spi(posterior, ci = c(0.80, 0.89, 0.95)) #> Shortest Probability Interval #> #> 80% SPI | 89% SPI | 95% SPI #> --------------------------------------------- #> [-1.17, 1.34] | [-1.50, 1.70] | [-1.96, 1.96] df <- data.frame(replicate(4, rnorm(100))) spi(df) #> Shortest Probability Interval #> #> Parameter | 95% SPI #> ------------------------- #> X1 | [-2.04, 1.89] #> X2 | [-1.65, 1.96] #> X3 | [-2.09, 1.74] #> X4 | [-2.11, 1.97] spi(df, ci = c(0.80, 0.89, 0.95)) #> Shortest Probability Interval #> #> Parameter | 80% SPI | 89% SPI | 95% SPI #> --------------------------------------------------------- #> X1 | [-1.05, 1.49] | [-1.41, 1.76] | [-2.04, 1.89] #> X2 | [-0.90, 1.09] | [-1.33, 1.41] | [-1.65, 1.96] #> X3 | [-1.38, 1.44] | [-1.84, 1.49] | [-2.09, 1.74] #> X4 | [-1.27, 1.40] | [-1.61, 1.56] | [-2.11, 1.97] # \\donttest{ library(rstanarm) model <- suppressWarnings( stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) spi(model) #> Shortest Probability Interval #> #> Parameter | 95% SPI #> ---------------------------- #> (Intercept) | [29.08, 47.70] #> wt | [-6.75, -4.12] #> gear | [-1.74, 1.70] # }"},
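Both spi() and hdi() target the shortest interval, which is easiest to appreciate on a skewed distribution; a minimal sketch, assuming only base R's rchisq():

library(bayestestR)
set.seed(123)
posterior <- rchisq(10000, df = 3)  # a right-skewed "posterior"
spi(posterior, ci = 0.89)  # shortest interval, asymmetric around the mode
hdi(posterior, ci = 0.89)  # HDI for comparison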
{"path":"https://easystats.github.io/bayestestR/reference/unupdate.html","id":null,"dir":"Reference","previous_headings":"","what":"Un-update Bayesian models to their prior-to-data state — unupdate","title":"Un-update Bayesian models to their prior-to-data state — unupdate","text":"As posteriors are priors that have been updated after observing some data, the goal of this function is to un-update the posteriors to obtain models representing only the priors. These models can then be used to examine the prior predictive distribution, or to compare priors with posteriors.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/unupdate.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Un-update Bayesian models to their prior-to-data state — unupdate","text":"","code":"unupdate(model, verbose = TRUE, ...) # S3 method for class 'stanreg' unupdate(model, verbose = TRUE, ...) # S3 method for class 'brmsfit' unupdate(model, verbose = TRUE, ...) # S3 method for class 'brmsfit_multiple' unupdate(model, verbose = TRUE, newdata = NULL, ...) # S3 method for class 'blavaan' unupdate(model, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/unupdate.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Un-update Bayesian models to their prior-to-data state — unupdate","text":"model A fitted Bayesian model. verbose Toggle off warnings. ... Not used. newdata List of data.frames to update the model with new data. Required even if the original data should be used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/unupdate.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Un-update Bayesian models to their prior-to-data state — unupdate","text":"A model un-fitted to the data, representing the prior model.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/unupdate.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Un-update Bayesian models to their prior-to-data state — unupdate","text":"This function is used internally to compute Bayes factors.","code":""},
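A minimal hedged sketch of un-updating in practice, assuming rstanarm is installed (the refit samples from the priors only):

library(bayestestR)
library(rstanarm)
model <- suppressWarnings(stan_glm(mpg ~ wt, data = mtcars, refresh = 0))
model_prior <- unupdate(model)  # a model representing only the priors
# the prior model can be passed to the same posterior helpers, e.g.:
point_estimate(model_prior)
point_estimate(model)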
{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate posterior distributions weighted across models — weighted_posteriors","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"Extract posterior samples of parameters, weighted across models. Weighting is done by comparing posterior model probabilities, via bayesfactor_models().","code":""},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"","code":"weighted_posteriors(..., prior_odds = NULL, missing = 0, verbose = TRUE) # S3 method for class 'data.frame' weighted_posteriors(..., prior_odds = NULL, missing = 0, verbose = TRUE) # S3 method for class 'stanreg' weighted_posteriors( ..., prior_odds = NULL, missing = 0, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL ) # S3 method for class 'brmsfit' weighted_posteriors( ..., prior_odds = NULL, missing = 0, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL ) # S3 method for class 'blavaan' weighted_posteriors( ..., prior_odds = NULL, missing = 0, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL ) # S3 method for class 'BFBayesFactor' weighted_posteriors( ..., prior_odds = NULL, missing = 0, verbose = TRUE, iterations = 4000 )"},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"... Fitted models (see details), all fit on the same data, or a single BFBayesFactor object. prior_odds Optional vector of prior odds for the models, compared to the first model (the denominator, for BFBayesFactor objects). For data.frames, this will be used as the basis of weighting. missing An optional numeric value to use if a model does not contain a parameter that appears in other models. Defaults to 0. verbose Toggle off warnings. effects Should results for fixed effects, random effects, or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model, or the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output. iterations For BayesFactor models, how many posterior samples to draw.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"A data frame with posterior distributions (weighted across models).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"Note that across models some parameters might play different roles. For example, the parameter A plays a different role in the model Y ~ A + B (where it is a main effect) than in the model Y ~ A + B + A:B (where it is a simple effect). In many cases, centering of predictors (mean subtracting for continuous variables, effects coding via contr.sum or orthonormal coding via contr.equalprior_pairs for factors) can reduce this issue (see the sketch after this entry). In any case, you should be mindful of this issue. See the bayesfactor_models() details section for more info on how models should be passed. Note that for BayesFactor models, posterior samples cannot be generated from intercept only models. This function is similar in function to brms::posterior_average.","code":""},
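A hedged sketch of the centering advice above, using plain base R on mtcars (contr.equalprior_pairs from bayestestR could be substituted for contr.sum):

df <- mtcars
df$cyl <- factor(df$cyl)
contrasts(df$cyl) <- contr.sum(nlevels(df$cyl))  # effects coding for a factor
df$wt_c <- df$wt - mean(df$wt)                   # mean-center a continuous predictor
# models fitted on df now give shared parameters comparable roles across
# Y ~ A + B and Y ~ A + B + A:B style comparisons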
{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"For BayesFactor < 0.9.12-4.3, in some instances there might be problems of duplicate columns of random effects in the resulting data frame.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"Clyde, M., Desimone, H., & Parmigiani, G. (1996). Prediction via orthogonalized model mixing. Journal of the American Statistical Association, 91(435), 1197-1208. Hinne, M., Gronau, Q. F., van den Bergh, D., & Wagenmakers, E. (2019, March 25). A conceptual introduction to Bayesian Model Averaging. doi:10.31234/osf.io/wgb64 Rouder, J. N., Haaf, J. M., & Vandekerckhove, J. (2018). Bayesian inference for psychology, part IV: Parameter estimation and Bayes factors. Psychonomic Bulletin & Review, 25(1), 102-113. van den Bergh, D., Haaf, J. M., Ly, A., Rouder, J. N., & Wagenmakers, E. J. (2019). A cautionary note on estimating effect size.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"","code":"# \\donttest{ if (require(\"rstanarm\") && require(\"see\") && interactive()) { stan_m0 <- suppressWarnings(stan_glm(extra ~ 1, data = sleep, family = gaussian(), refresh = 0, diagnostic_file = file.path(tempdir(), \"df0.csv\") )) stan_m1 <- suppressWarnings(stan_glm(extra ~ group, data = sleep, family = gaussian(), refresh = 0, diagnostic_file = file.path(tempdir(), \"df1.csv\") )) res <- weighted_posteriors(stan_m0, stan_m1, verbose = FALSE) plot(eti(res)) } ## With BayesFactor if (require(\"BayesFactor\")) { extra_sleep <- ttestBF(formula = extra ~ group, data = sleep) wp <- weighted_posteriors(extra_sleep, verbose = FALSE) describe_posterior(extra_sleep, test = NULL, verbose = FALSE) # also considers the null describe_posterior(wp$delta, test = NULL, verbose = FALSE) } #> Summary of Posterior Distribution #> #> Parameter | Median | 95% CI #> ---------------------------------- #> Posterior | -0.09 | [-1.38, 0.07] ## weighted prediction distributions via data.frames if (require(\"rstanarm\") && interactive()) { m0 <- suppressWarnings(stan_glm( mpg ~ 1, data = mtcars, family = gaussian(), diagnostic_file = file.path(tempdir(), \"df0.csv\"), refresh = 0 )) m1 <- suppressWarnings(stan_glm( mpg ~ carb, data = mtcars, family = gaussian(), diagnostic_file = file.path(tempdir(), \"df1.csv\"), refresh = 0 )) # Predictions: pred_m0 <- data.frame(posterior_predict(m0)) pred_m1 <- data.frame(posterior_predict(m1)) BFmods <- bayesfactor_models(m0, m1, verbose = FALSE) wp <- weighted_posteriors( pred_m0, pred_m1, prior_odds = as.numeric(BFmods)[2], verbose = FALSE ) # look at first 5 prediction intervals hdi(pred_m0[1:5]) hdi(pred_m1[1:5]) hdi(wp[1:5]) # between, but closer to pred_m1 } # }"},
{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0151","dir":"Changelog","previous_headings":"","what":"bayestestR 0.15.1","title":"bayestestR 0.15.1","text":"CRAN release: 2025-01-17","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-15-1","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.15.1","text":"Several minor changes to deal with recent changes in other packages.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-15-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.15.1","text":"Fix for emmeans / marginaleffects / data.frame() methods when using multiple credible levels (#688).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0150","dir":"Changelog","previous_headings":"","what":"bayestestR 0.15.0","title":"bayestestR 0.15.0","text":"CRAN release: 2024-10-17","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-15-0","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.15.0","text":"Support for posterior::rvar-type columns in data frames. For example, a data frame df with an rvar column \".pred\" can now be called directly via p_direction(df, rvar_col = \".pred\") (see the sketch after this entry). Added support for marginaleffects. ROPE threshold ranges for rope(), describe_posterior(), p_significance() and equivalence_test() can now be specified as a list. This allows for different ranges for different parameters. Results from objects generated by emmeans (emmGrid/emm_list) now return results appended to the grid-data. Usability improvements for p_direction(): Results from p_direction() can directly be used in pd_to_p(). p_direction() gets an as_p argument, to directly convert pd-values into frequentist p-values. p_direction() gets a remove_na argument, which defaults to TRUE, to remove NA values from the input before calculating the pd-values. Besides the existing as.numeric() method, p_direction() now also has an as.vector() method. p_significance() now accepts non-symmetric ranges for the threshold argument. p_to_pd() now also works for data frames returned by p_direction(). If a data frame contains a pd, p_direction or PD column name, it is assumed to contain pd-values, which will be converted into p-values. p_to_pd() for data frame inputs gets an as.numeric() and as.vector() method.","code":""},
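A hedged sketch of the rvar_col and as_p additions above, assuming the posterior package is installed (the column name ".pred" is only illustrative):

library(bayestestR)
library(posterior)
df <- data.frame(group = c("a", "b"))
df$.pred <- rvar(matrix(rnorm(2000), ncol = 2))   # one rvar per row of df
p_direction(df, rvar_col = ".pred")               # pd for each row
p_direction(df, rvar_col = ".pred", as_p = TRUE)  # pd converted to p-values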
bayesfactor_models() now throws informative error Bayes factors comparisons calculated.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-14-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.14.0","text":"Fixed issue bayesian_as_frequentist() brms models 0 + Intercept specification model formula.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0132","dir":"Changelog","previous_headings":"","what":"bayestestR 0.13.2","title":"bayestestR 0.13.2","text":"CRAN release: 2024-02-12","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-changes-0-13-2","dir":"Changelog","previous_headings":"","what":"Breaking Changes","title":"bayestestR 0.13.2","text":"pd_to_p() now returns 1 warning values smaller 0.5. map_estimate(), p_direction(), p_map(), p_significance() now return data-frame input numeric vector. (making output consistently data frame inputs.) Argument posteriors renamed posterior. , mix spellings, now consistently posterior.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-13-2","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.13.2","text":"Retrieving models environment improved.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-13-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.13.2","text":"Fixed issues various format() methods, work properly functions (like p_direction()). Fixed issue estimate_density() double vectors also class attributes. Fixed several minor issues tests.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0131","dir":"Changelog","previous_headings":"","what":"bayestestR 0.13.1","title":"bayestestR 0.13.1","text":"CRAN release: 2023-04-07","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-13-1","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.13.1","text":"Improved speed performance functions called using .call(). Improved speed performance bayesfactor_models() brmsfit objects already included marglik element model object.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functionality-0-13-1","dir":"Changelog","previous_headings":"","what":"New functionality","title":"bayestestR 0.13.1","text":".logical() bayesfactor_restricted() results, extracts boolean vector(s) mark draws part order restriction.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-13-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.13.1","text":"p_map() gains new null argument specify non-0 nulls. Fixed non-working examples ci(method = \"SI\"). Fixed wrong calculation rope range model objects describe_posterior(). smaller bug fixes.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0130","dir":"Changelog","previous_headings":"","what":"bayestestR 0.13.0","title":"bayestestR 0.13.0","text":"CRAN release: 2022-09-18","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-0-13-0","dir":"Changelog","previous_headings":"","what":"Breaking","title":"bayestestR 0.13.0","text":"minimum needed R version bumped 3.6. 
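Illustrating the pd_to_p() change noted under 0.13.2 above; the two-sided conversion is p = 2 * (1 - pd), and values below 0.5 are not valid pd-values:

library(bayestestR)

pd_to_p(0.95) # two-sided: 2 * (1 - 0.95) = 0.10
pd_to_p(0.45) # pd < 0.5 is not a valid pd; now returns 1 with a warning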
contr.equalprior(contrasts = FALSE) (previously contr.orthonorm) longer returns identity matrix, shifted diag(n) - 1/n, consistency.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functionality-0-13-0","dir":"Changelog","previous_headings":"","what":"New functionality","title":"bayestestR 0.13.0","text":"p_to_bf(), convert p-values Bayes factors. accurate approximate Bayes factors, use bic_to_bf(). bayestestR now supports objects class rvar package posterior. contr.equalprior (previously contr.orthonorm) gains two new functions: contr.equalprior_pairs contr.equalprior_deviations aid setting intuitive priors.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-13-0","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.13.0","text":"renamed contr.equalprior explicit function. p_direction() now accepts objects class parameters_model() (parameters::model_parameters()), compute probability direction parameters frequentist models.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0121","dir":"Changelog","previous_headings":"","what":"bayestestR 0.12.1","title":"bayestestR 0.12.1","text":"CRAN release: 2022-05-02","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-0-12-1","dir":"Changelog","previous_headings":"","what":"Breaking","title":"bayestestR 0.12.1","text":"Bayesfactor_models() frequentist models now relies updated insight::get_loglikelihood(). might change results REML based models. See documentation. estimate_density() argument group_by renamed . distribution_*(random = FALSE) functions now rely ppoints(), result slightly different results, especially small ns. Uncertainty estimation now defaults \"eti\" (formerly \"hdi\").","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-12-1","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.12.1","text":"bayestestR functions now support draws objects package posterior. rope_range() now handles log(normal)-families models log-transformed outcomes. New function spi(), compute shortest probability intervals. Furthermore, \"spi\" option added new method compute uncertainty intervals.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-12-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.12.1","text":"bci() objects incorrectly returned equal-tailed intervals.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0115","dir":"Changelog","previous_headings":"","what":"bayestestR 0.11.5","title":"bayestestR 0.11.5","text":"CRAN release: 2021-10-30 Fixes failing tests CRAN checks.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions-0-11-1","dir":"Changelog","previous_headings":"","what":"New functions","title":"bayestestR 0.11.1","text":"describe_posterior() gains plot() method, short cut plot(estimate_density(describe_posterior())).","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-11","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.11","text":"Fixed issues related last brms update. 
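A small sketch of the equal-prior contrast schemes introduced in 0.13.0 above, assumed here for a 3-level factor; exact scaling should be checked against the contr.equalprior() reference page:

library(bayestestR)

contr.equalprior(3)             # orthonormal equal-prior coding
contr.equalprior_pairs(3)       # scaled so priors map onto pairwise differences
contr.equalprior_deviations(3)  # scaled for deviations from the grand mean

# e.g., attach to a factor before model fitting:
g <- factor(rep(letters[1:3], each = 10))
contrasts(g) <- contr.equalprior_pairs(3)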
Fixed bug describe_posterior.BFBayesFactor() Bayes factors missing put ( #442 ).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0100","dir":"Changelog","previous_headings":"","what":"bayestestR 0.10.0","title":"bayestestR 0.10.0","text":"CRAN release: 2021-05-31","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-0-10-0","dir":"Changelog","previous_headings":"","what":"Breaking","title":"bayestestR 0.10.0","text":"Bayes factors now returned log(BF) (column name log_BF). Printing unaffected. retrieve raw BFs, can run exp(result$log_BF).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions-0-10-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"bayestestR 0.10.0","text":"bci() (alias bcai()) compute bias-corrected accelerated bootstrap intervals. Along new function, ci() describe_posterior() gain new ci_method type, \"bci\".","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-10-0","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.10.0","text":"contr.bayes renamed contr.orthonorm explicit function.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-090","dir":"Changelog","previous_headings":"","what":"bayestestR 0.9.0","title":"bayestestR 0.9.0","text":"CRAN release: 2021-04-08","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-0-9-0","dir":"Changelog","previous_headings":"","what":"Breaking","title":"bayestestR 0.9.0","text":"default ci width changed 0.95 instead 0.89 (see ). come surprise long-time users bayestestR warning impending change now :) Column names bayesfactor_restricted() now p_prior p_posterior (Prior_prob Posterior_prob), consistent bayesfactor_inclusion() output. Removed experimental function mhdior.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-9-0","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.9.0","text":"Support blavaan models. Support blrm models (rmsb). Support BGGM models (BGGM). check_prior() describe_prior() now also work ways prior definition models rstanarm brms.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-9-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.9.0","text":"Fixed bug print() method mediation() function. Fixed remaining inconsistencies CI values, reported fraction rope(). Fixed issues special prior definitions check_prior(), describe_prior() simulate_prior().","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-082","dir":"Changelog","previous_headings":"","what":"bayestestR 0.8.2","title":"bayestestR 0.8.2","text":"CRAN release: 2021-01-26","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-8-2","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.8.2","text":"Support bamlss models. 
Roll-back R dependency R >= 3.4.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-to-functions-0-8-2","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"bayestestR 0.8.2","text":".stanreg methods gain component argument, also include auxiliary parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-8-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.8.2","text":"bayesfactor_parameters() longer errors reason computing extremely un/likely direction hypotheses. bayesfactor_pointull() / bf_pointull() now bayesfactor_pointnull() / bf_pointnull() (can spot difference? #363 ).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-080","dir":"Changelog","previous_headings":"","what":"bayestestR 0.8.0","title":"bayestestR 0.8.0","text":"CRAN release: 2020-12-05","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions-0-8-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"bayestestR 0.8.0","text":"sexit(), function sequential effect existence significance testing (SEXIT).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-8-0","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.8.0","text":"Added startup-message warn users default ci-width might change future update. Added support mcmc.list objects.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-8-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.8.0","text":"unupdate() gains newdata argument work brmsfit_multiple models. Fixed issue Bayes factor vignette (don’t evaluate code chunks packages available).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-075","dir":"Changelog","previous_headings":"","what":"bayestestR 0.7.5","title":"bayestestR 0.7.5","text":"CRAN release: 2020-10-22","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions-0-7-5","dir":"Changelog","previous_headings":"","what":"New functions","title":"bayestestR 0.7.5","text":"Added .matrix() function bayesfactor_model arrays. unupdate(), utility function get Bayesian models un-fitted data, representing priors .","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-to-functions-0-7-5","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"bayestestR 0.7.5","text":"ci() supports emmeans - Bayesian frequentist ( #312 - cross fix parameters)","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-7-5","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.7.5","text":"Fixed issue default rope range BayesFactor models. Fixed issue collinearity-check rope() models less two parameters. Fixed issue print-method mediation() stanmvreg-models, displays wrong name response-value. Fixed issue effective_sample() models one parameter. 
rope_range() BayesFactor models returns non-NA values ( #343 )","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-072","dir":"Changelog","previous_headings":"","what":"bayestestR 0.7.2","title":"bayestestR 0.7.2","text":"CRAN release: 2020-07-20","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions-0-7-2","dir":"Changelog","previous_headings":"","what":"New functions","title":"bayestestR 0.7.2","text":"mediation(), compute average direct average causal mediation effects multivariate response models (brmsfit, stanmvreg).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-7-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.7.2","text":"bayesfactor_parameters() works R<3.6.0.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-070","dir":"Changelog","previous_headings":"","what":"bayestestR 0.7.0","title":"bayestestR 0.7.0","text":"CRAN release: 2020-06-19","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-7-0","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.7.0","text":"Preliminary support stanfit objects. Added support bayesQR objects.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-to-functions-0-7-0","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"bayestestR 0.7.0","text":"weighted_posteriors() can now used data frames. Revised print() describe_posterior(). Improved value formatting Bayesfactor functions.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-7-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.7.0","text":"Link transformation now taken account emmeans objects. E.g., describe_posterior(). Fix diagnostic_posterior() algorithm “sampling”. Minor revisions documentations. Fix CRAN check issues win-old-release.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-060","dir":"Changelog","previous_headings":"","what":"bayestestR 0.6.0","title":"bayestestR 0.6.0","text":"CRAN release: 2020-04-20","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-to-functions-0-6-0","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"bayestestR 0.6.0","text":"describe_posterior() now also works effectsize::standardize_posteriors(). p_significance() now also works parameters::simulate_model(). rope_range() supports (frequentist) models.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-6-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.6.0","text":"Fixed issue plot() data.frame-methods p_direction() equivalence_test(). 
Fix check issues forthcoming insight-update.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-053","dir":"Changelog","previous_headings":"","what":"bayestestR 0.5.3","title":"bayestestR 0.5.3","text":"CRAN release: 2020-03-26","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-5-3","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.5.3","text":"Support bcplm objects (package cplm)","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-to-functions-0-5-3","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"bayestestR 0.5.3","text":"estimate_density() now also works grouped data frames.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-5-3","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.5.3","text":"Fixed bug weighted_posteriors() properly weight Intercept-BFBayesFactor models. Fixed bug weighted_posteriors() models low posterior probability ( #286 ). Fixed bug describe_posterior(), rope() equivalence_test() brmsfit models monotonic effect. Fixed issues related latest changes .data.frame.brmsfit() brms package.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-050","dir":"Changelog","previous_headings":"","what":"bayestestR 0.5.0","title":"bayestestR 0.5.0","text":"CRAN release: 2020-01-18","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-5-0","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.5.0","text":"Added p_pointnull() alias p_MAP(). Added si() function compute support intervals. Added weighted_posteriors() generating posterior samples averaged across models. Added plot()-method p_significance(). p_significance() now also works brmsfit-objects. estimate_density() now also works MCMCglmm-objects. equivalence_test() gets effects component arguments stanreg brmsfit models, print specific model components. Support mcmc objects (package coda) Provide distributions via distribution(). Added distribution_tweedie(). Better handling stanmvreg models describe_posterior(), diagnostic_posterior() describe_prior().","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-changes-0-5-0","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"bayestestR 0.5.0","text":"point_estimate(): argument centrality default value changed ‘median’ ‘’. p_rope(), previously exploratory index, renamed mhdior() (Max HDI inside/outside ROPE), p_rope() refer rope(..., ci = 1) ( #258 )","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-5-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.5.0","text":"Fixed mistake description p_significance(). Fixed error computing BFs emmGrid based non-linear models ( #260 ). Fixed wrong output percentage-values print.equivalence_test(). 
Fixed issue describe_posterior() BFBayesFactor-objects one model.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-040","dir":"Changelog","previous_headings":"","what":"bayestestR 0.4.0","title":"bayestestR 0.4.0","text":"CRAN release: 2019-10-20","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions--features-0-4-0","dir":"Changelog","previous_headings":"","what":"New functions / features","title":"bayestestR 0.4.0","text":"convert_bayesian_to_frequentist() Convert (refit) Bayesian model frequentist distribution_binomial() perfect binomial distributions simulate_ttest() Simulate data mean difference simulate_correlation() Simulate correlated datasets p_significance() Compute probability Practical Significance (ps) overlap() Compute overlap two empirical distributions estimate_density(): method = \"mixture\" argument added mixture density estimation","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-4-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.4.0","text":"Fixed bug simulate_prior() stanreg-models autoscale set FALSE","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-030","dir":"Changelog","previous_headings":"","what":"bayestestR 0.3.0","title":"bayestestR 0.3.0","text":"CRAN release: 2019-09-22","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-3-0","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.3.0","text":"revised print()-methods functions like rope(), p_direction(), describe_posterior() etc., particular model objects random effects /zero-inflation component","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions--features-0-3-0","dir":"Changelog","previous_headings":"","what":"New functions / features","title":"bayestestR 0.3.0","text":"check_prior() check prior informative simulate_prior() simulate model’s priors distributions distribution_gamma() generate (near-perfect random) Gamma distribution contr.bayes function orthogonal factor coding (implementation Singmann & Gronau’s bfrms, used proper prior estimation factor 3 levels. See Bayes factor vignette. Changes functions: Added support sim, sim.merMod (arm::sim()) MCMCglmm-objects many functions (like hdi(), ci(), eti(), rope(), p_direction(), point_estimate(), …) describe_posterior() gets effects component argument, include description posterior samples random effects /zero-inflation component. 
user-friendly warning non-supported models bayesfactor()-methods","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-3-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.3.0","text":"Fixed bug bayesfactor_inclusion() interaction sometimes appeared (#223) Fixed bug describe_posterior() stanreg models fitted fullrank-algorithm","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-025","dir":"Changelog","previous_headings":"","what":"bayestestR 0.2.5","title":"bayestestR 0.2.5","text":"CRAN release: 2019-08-06","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-changes-0-2-5","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"bayestestR 0.2.5","text":"rope_range() binomial model now different default (-.18; .18 ; instead -.055; .055) rope(): returns proportion (0 1) instead value 0 100 p_direction(): returns proportion (0.5 1) instead value 50 100 (#168) bayesfactor_savagedickey(): hypothesis argument replaced null part new bayesfactor_parameters() function","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions--features-0-2-5","dir":"Changelog","previous_headings":"","what":"New functions / features","title":"bayestestR 0.2.5","text":"density_at(), p_map() map_estimate(): method argument added rope(): ci_method argument added eti(): Computes equal-tailed intervals reshape_ci(): Reshape CIs wide/long bayesfactor_parameters(): New function, replacing bayesfactor_savagedickey(), allows computing Bayes factors point-null interval-null bayesfactor_restricted(): Function computing Bayes factors order restricted models","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-2-5","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.2.5","text":"bayesfactor_inclusion() now works R < 3.6.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-022","dir":"Changelog","previous_headings":"","what":"bayestestR 0.2.2","title":"bayestestR 0.2.2","text":"CRAN release: 2019-06-20","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-changes-0-2-2","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"bayestestR 0.2.2","text":"equivalence_test(): returns capitalized output (e.g., Rejected instead rejected) describe_posterior.numeric(): dispersion defaults FALSE consistency methods","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions--features-0-2-2","dir":"Changelog","previous_headings":"","what":"New functions / features","title":"bayestestR 0.2.2","text":"pd_to_p() p_to_pd(): Functions convert probability direction (pd) p-value Support emmGrid objects: ci(), rope(), bayesfactor_savagedickey(), describe_posterior(), …","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"minor-changes-0-2-2","dir":"Changelog","previous_headings":"","what":"Minor changes","title":"bayestestR 0.2.2","text":"Improved tutorial 2","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-2-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.2.2","text":"describe_posterior(): Fixed column order restoration bayesfactor_inclusion(): Inclusion BFs matched models inline JASP 
results.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-020","dir":"Changelog","previous_headings":"","what":"bayestestR 0.2.0","title":"bayestestR 0.2.0","text":"CRAN release: 2019-05-29","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-changes-0-2-0","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"bayestestR 0.2.0","text":"plotting functions now require installation see package estimate argument name describe_posterior() point_estimate() changed centrality hdi(), ci(), rope() equivalence_test() default ci 0.89 rnorm_perfect() deprecated favour distribution_normal() map_estimate() now returns single value instead dataframe density parameter removed. MAP density value now accessible via attributes(map_output)$MAP_density","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions--features-0-2-0","dir":"Changelog","previous_headings":"","what":"New functions / features","title":"bayestestR 0.2.0","text":"describe_posterior(), describe_prior(), diagnostic_posterior(): added wrapper function point_estimate() added function compute point estimates p_direction(): new argument method compute pd based AUC area_under_curve(): compute AUC distribution() functions added bayesfactor_savagedickey(), bayesfactor_models() bayesfactor_inclusion() functions added Started adding plotting methods (currently see package) p_direction() hdi() probability_at() alias density_at() effective_sample() return effective sample size Stan-models mcse() return Monte Carlo standard error Stan-models","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"minor-changes-0-2-0","dir":"Changelog","previous_headings":"","what":"Minor changes","title":"bayestestR 0.2.0","text":"Improved documentation Improved testing p_direction(): improved printing rope() model-objects now returns HDI values parameters attribute consistent way Changes legend-labels plot.equivalence_test() align plots output print()-method (#78)","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-2-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.2.0","text":"hdi() returned multiple class attributes (#72) Printing results hdi() failed ci-argument fractional parts percentage values (e.g. ci = 0.995). plot.equivalence_test() work properly brms-models (#76).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-010","dir":"Changelog","previous_headings":"","what":"bayestestR 0.1.0","title":"bayestestR 0.1.0","text":"CRAN release: 2019-04-08 CRAN initial publication 0.1.0 release Added NEWS.md file track changes package","code":""}] +[{"path":[]},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"our-pledge","dir":"","previous_headings":"","what":"Our Pledge","title":"Contributor Covenant Code of Conduct","text":"members, contributors, leaders pledge make participation community harassment-free experience everyone, regardless age, body size, visible invisible disability, ethnicity, sex characteristics, gender identity expression, level experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, sexual identity orientation. 
pledge act interact ways contribute open, welcoming, diverse, inclusive, healthy community.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"our-standards","dir":"","previous_headings":"","what":"Our Standards","title":"Contributor Covenant Code of Conduct","text":"Examples behavior contributes positive environment community include: Demonstrating empathy kindness toward people respectful differing opinions, viewpoints, experiences Giving gracefully accepting constructive feedback Accepting responsibility apologizing affected mistakes, learning experience Focusing best just us individuals, overall community Examples unacceptable behavior include: use sexualized language imagery, sexual attention advances kind Trolling, insulting derogatory comments, personal political attacks Public private harassment Publishing others’ private information, physical email address, without explicit permission conduct reasonably considered inappropriate professional setting","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"enforcement-responsibilities","dir":"","previous_headings":"","what":"Enforcement Responsibilities","title":"Contributor Covenant Code of Conduct","text":"Community leaders responsible clarifying enforcing standards acceptable behavior take appropriate fair corrective action response behavior deem inappropriate, threatening, offensive, harmful. Community leaders right responsibility remove, edit, reject comments, commits, code, wiki edits, issues, contributions aligned Code Conduct, communicate reasons moderation decisions appropriate.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"scope","dir":"","previous_headings":"","what":"Scope","title":"Contributor Covenant Code of Conduct","text":"Code Conduct applies within community spaces, also applies individual officially representing community public spaces. Examples representing community include using official e-mail address, posting via official social media account, acting appointed representative online offline event.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"enforcement","dir":"","previous_headings":"","what":"Enforcement","title":"Contributor Covenant Code of Conduct","text":"Instances abusive, harassing, otherwise unacceptable behavior may reported community leaders responsible enforcement dom.makowski@gmail.com. complaints reviewed investigated promptly fairly. community leaders obligated respect privacy security reporter incident.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"enforcement-guidelines","dir":"","previous_headings":"","what":"Enforcement Guidelines","title":"Contributor Covenant Code of Conduct","text":"Community leaders follow Community Impact Guidelines determining consequences action deem violation Code Conduct:","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"id_1-correction","dir":"","previous_headings":"Enforcement Guidelines","what":"1. Correction","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Use inappropriate language behavior deemed unprofessional unwelcome community. Consequence: private, written warning community leaders, providing clarity around nature violation explanation behavior inappropriate. 
public apology may requested.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"id_2-warning","dir":"","previous_headings":"Enforcement Guidelines","what":"2. Warning","title":"Contributor Covenant Code of Conduct","text":"Community Impact: violation single incident series actions. Consequence: warning consequences continued behavior. interaction people involved, including unsolicited interaction enforcing Code Conduct, specified period time. includes avoiding interactions community spaces well external channels like social media. Violating terms may lead temporary permanent ban.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"id_3-temporary-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"3. Temporary Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: serious violation community standards, including sustained inappropriate behavior. Consequence: temporary ban sort interaction public communication community specified period time. public private interaction people involved, including unsolicited interaction enforcing Code Conduct, allowed period. Violating terms may lead permanent ban.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"id_4-permanent-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"4. Permanent Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Demonstrating pattern violation community standards, including sustained inappropriate behavior, harassment individual, aggression toward disparagement classes individuals. Consequence: permanent ban sort public interaction within community.","code":""},{"path":"https://easystats.github.io/bayestestR/CODE_OF_CONDUCT.html","id":"attribution","dir":"","previous_headings":"","what":"Attribution","title":"Contributor Covenant Code of Conduct","text":"Code Conduct adapted Contributor Covenant, version 2.1, available https://www.contributor-covenant.org/version/2/1/code_of_conduct.html. Community Impact Guidelines inspired Mozilla’s code conduct enforcement ladder (https://github.com/mozilla/inclusion). answers common questions code conduct, see FAQ https://www.contributor-covenant.org/faq. Translations available https://www.contributor-covenant.org/translations.","code":""},{"path":"https://easystats.github.io/bayestestR/CONTRIBUTING.html","id":null,"dir":"","previous_headings":"","what":"Contribution Guidelines","title":"Contribution Guidelines","text":"easystats guidelines 0.1.0 people much welcome contribute code, documentation, testing suggestions. package aims beginner-friendly. Even ’re new open-source way life, new coding github stuff, encourage try submitting pull requests (PRs). “’d like help, ’m good enough programming yet” ’s alright, don’t worry! can always dig code, documentation tests. always typos fix, docs improve, details add, code lines document, tests add… Even smaller PRs appreciated. “’d like help, don’t know start” can look around issue section find features / ideas / bugs start working . can also open new issue just say ’re , interested helping . might ideas adapted skills. “’m sure suggestion idea worthwhile” Enough impostor syndrome! suggestions opinions good, even ’s just thought , ’s always good receive feedback. “waste time ? get credit?” Software contributions getting valued academic world, good time collaborate us! Authors substantial contributions added within authors list. 
’re also keen including eventual academic publications. Anyway, starting important! enter whole new world, new fantastic point view… fork repo, changes submit . work together make best :)","code":""},{"path":"https://easystats.github.io/bayestestR/CONTRIBUTING.html","id":"code","dir":"","previous_headings":"","what":"Code","title":"Contribution Guidelines","text":"Please document comment code, purpose step (code line) stated clear understandable way. submitting change, please read R style guide particular easystats convention code-style keep consistency code formatting. Regarding style guide, note exception: put readability clarity everything. Thus, like underscores full names (prefer model_performance modelperf interpret_odds_logistic intoddslog). start code, make sure ’re dev branch (“advanced”). , can create new branch named feature (e.g., feature_lightsaber) changes. Finally, submit branch merged dev branch. , every now , dev branch merge master, new package version.","code":""},{"path":"https://easystats.github.io/bayestestR/CONTRIBUTING.html","id":"checks-to-do-before-submission","dir":"","previous_headings":"","what":"Checks to do before submission","title":"Contribution Guidelines","text":"Make sure documentation (roxygen) good Make sure add tests new functions Run: styler::style_pkg(): Automatic style formatting lintr::lint_package(): Style checks devtools::check(): General checks","code":""},{"path":"https://easystats.github.io/bayestestR/CONTRIBUTING.html","id":"useful-materials","dir":"","previous_headings":"","what":"Useful Materials","title":"Contribution Guidelines","text":"Understanding GitHub flow","code":""},{"path":"https://easystats.github.io/bayestestR/PULL_REQUEST_TEMPLATE.html","id":null,"dir":"","previous_headings":"","what":"Description","title":"Description","text":"PR aims adding feature…","code":""},{"path":"https://easystats.github.io/bayestestR/PULL_REQUEST_TEMPLATE.html","id":"proposed-changes","dir":"","previous_headings":"","what":"Proposed Changes","title":"Description","text":"changed foo function …","code":""},{"path":"https://easystats.github.io/bayestestR/SUPPORT.html","id":null,"dir":"","previous_headings":"","what":"Getting help with {bayestestR}","title":"Getting help with {bayestestR}","text":"Thanks using bayestestR. filing issue, places explore pieces put together make process smooth possible. Start making minimal reproducible example using reprex package. haven’t heard used reprex , ’re treat! Seriously, reprex make R-question-asking endeavors easier (pretty insane ROI five ten minutes ’ll take learn ’s ). additional reprex pointers, check Get help! resource used tidyverse team. Armed reprex, next step figure ask: ’s question: start StackOverflow. people answer questions. ’s bug: ’re right place, file issue. ’re sure: let’s discuss try figure ! problem bug feature request, can easily return report . opening new issue, sure search issues pull requests make sure bug hasn’t reported /already fixed development version. default, search pre-populated :issue :open. can edit qualifiers (e.g. :pr, :closed) needed. example, ’d simply remove :open search issues repo, open closed. Thanks help!","code":""},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"the-bayes-factor","dir":"Articles","previous_headings":"","what":"The Bayes Factor","title":"Bayes Factors","text":"Bayes Factors (BFs) indices relative evidence one “model” another. role hypothesis testing index, Bayesian framework p-value classical/frequentist framework. 
significance-based testing, p-values used assess unlikely observed data null hypothesis true, Bayesian model selection framework, Bayes factors assess evidence different models, model corresponding specific hypothesis. According Bayes’ theorem, can update prior probabilities model M (P(M)) posterior probabilities (P(M|D)) observing datum D accounting probability observing datum given model (P(D|M), also known likelihood): P(M|D) = \frac{P(D|M)\times P(M)}{P(D)} Using equation, can compare probability-odds two models: \underbrace{\frac{P(M_1|D)}{P(M_2|D)}}_{\text{Posterior Odds}} = \underbrace{\frac{P(D|M_1)}{P(D|M_2)}}_{\text{Likelihood Ratio}} \times \underbrace{\frac{P(M_1)}{P(M_2)}}_{\text{Prior Odds}} likelihood ratio (middle term) Bayes factor - factor prior odds updated observing data posterior odds. Thus, Bayes factors can calculated two ways: ratio quantifying relative probability observed data two models. (contexts, probabilities also called marginal likelihoods.) BF_{12}=\frac{P(D|M_1)}{P(D|M_2)} degree shift prior beliefs relative credibility two models (since can computed dividing posterior odds prior odds). BF_{12}=\frac{Posterior~Odds_{12}}{Prior~Odds_{12}} provide functions computing Bayes factors two different contexts: testing single parameters (coefficients) within model comparing statistical models ","code":""},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"bayesfactor_parameters","dir":"Articles","previous_headings":"","what":"1. Testing Models’ Parameters with Bayes Factors","title":"Bayes Factors","text":"Bayes factor single parameter can used answer question: “Given observed data, null hypothesis absence effect become less credible?” Bayesian analysis Students’ (1908) Sleep data set. Let’s use Students’ (1908) Sleep data set (data(\"sleep\")). data comes study participants administered drug researchers assessed extra hours sleep participants slept afterwards. try answering following research question using Bayes factors: Given observed data, hypothesis drug (effect group) effect numbers hours extra sleep (variable extra) become less credible? boxplot suggests second group higher number hours extra sleep. much? Let’s fit simple Bayesian linear model, prior b_{group} \sim N(0, 3) (i.e. prior follows Gaussian/normal distribution mean = 0 SD = 3), using rstanarm package:","code":"set.seed(123) library(rstanarm) model <- stan_glm( formula = extra ~ group, data = sleep, prior = normal(0, 3, autoscale = FALSE), chains = 10, iter = 5000, warmup = 1000 )"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"testing-against-a-null-region","dir":"Articles","previous_headings":"1. Testing Models’ Parameters with Bayes Factors","what":"Testing against a null-region","title":"Bayes Factors","text":"One way operationalizing null-hypothesis setting null region, effect falls within interval practically equivalent null (Kruschke, 2010). case, means defining range effects consider equal drug effect . can compute prior probability drug’s effect falling outside null-region, prior probability drug’s effect falling within null-region get prior odds. Say effect smaller hour extra sleep practically equivalent effect , define prior odds : \frac {P(b_{drug} \notin [-1, 1])} {P(b_{drug} \in [-1, 1])} Given prior normal distribution centered 0 hours scale (SD) 3 hours, priors look like : prior odds 2.8. 
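The 2.8 above can be reproduced with two pnorm() calls; a minimal sketch of the prior-odds arithmetic for the [-1, 1] null region under the N(0, 3) prior:

p_in  <- pnorm(1, mean = 0, sd = 3) - pnorm(-1, mean = 0, sd = 3) # prior mass inside [-1, 1]
p_out <- 1 - p_in                                                 # prior mass outside it
p_out / p_in                                                      # prior odds, ~2.83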
looking posterior distribution, can now compute posterior probability drug’s effect falling outside null-region, posterior probability drug’s effect falling within null-region get posterior odds: \\frac {P(b_{drug} \\notin [-1,1] | Data)} {P(b_{drug} \\[-1,1] | Data)} can see center posterior distribution shifted away 0 (~1.5). Likewise, posterior odds 2.5, seems favor effect non-null. , mean data support alternative null? Hard say, since even data observed, priors already favored alternative - need take priors account ! Let’s compute Bayes factor change prior odds posterior odds: BF_{10} = Odds_{posterior} / Odds_{prior} = 0.9! BF indicates data provide 1/0.9 = 1.1 times evidence effect drug practically nothing drug clinically significant effect. Thus, although center distribution shifted away 0, posterior distribution seems favor non-null effect drug, seems given observed data, probability mass overall shifted closer null interval, making values null interval probable! (see Non-overlapping Hypotheses Morey & Rouder, 2011) can achieved function bayesfactor_parameters(), computes Bayes factor model’s parameters: can also plot using see package: Note interpretation guides Bayes factors can found effectsize package:","code":"My_first_BF <- bayesfactor_parameters(model, null = c(-1, 1)) My_first_BF > Bayes Factor (Null-Interval) > > Parameter | BF > ------------------- > (Intercept) | 0.102 > group2 | 0.883 > > * Evidence Against The Null: [-1.000, 1.000] library(see) plot(My_first_BF) effectsize::interpret_bf(exp(My_first_BF$log_BF[2]), include_value = TRUE) > [1] \"anecdotal evidence (BF = 1/1.13) against\" > (Rules: jeffreys1961)"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"testing-against-the-point-null-0","dir":"Articles","previous_headings":"1. Testing Models’ Parameters with Bayes Factors","what":"Testing against the point-null (0)","title":"Bayes Factors","text":"don’t know region practically equivalent 0? just want null exactly zero? problem - width null region shrinks point, change prior probability posterior probability null can estimated comparing density null value two distributions.1 ratio called Savage-Dickey ratio, added benefit also approximation Bayes factor comparing estimated model model parameter interest restricted point-null: “[…] Bayes factor H_0 versus H_1 obtained analytically integrating model parameter \\theta. However, Bayes factor may likewise obtained considering H_1, dividing height posterior \\theta height prior \\theta, point interest.” (Wagenmakers, Lodewyckx, Kuriyal, & Grasman, 2010)","code":"My_second_BF <- bayesfactor_parameters(model, null = 0) My_second_BF > Bayes Factor (Savage-Dickey density ratio) > > Parameter | BF > ---------------- > group2 | 1.24 > > * Evidence Against The Null: 0 plot(My_second_BF)"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"directional-hypotheses","dir":"Articles","previous_headings":"1. Testing Models’ Parameters with Bayes Factors","what":"Directional hypotheses","title":"Bayes Factors","text":"can also compute Bayes factors directional hypotheses (“one sided”), prior hypotheses direction effect. can done setting order restriction prior distribution (results order restriction posterior distribution) alternative (Morey & Wagenmakers, 2014). 
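Before the directional example: a rough by-hand version of the Savage-Dickey ratio described above, assuming hypothetical vectors prior_draws and posterior_draws of b_group draws (the prior draws could come from a prior-only model, e.g. via unupdate(model)); bayesfactor_parameters(model, null = 0) is the proper tool:

library(bayestestR)

# Savage-Dickey: BF10 = prior density at the null / posterior density at the null
bf10 <- density_at(prior_draws, 0) / density_at(posterior_draws, 0)
bf10 # ~1.24 for the sleep model above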
example, prior hypothesis drug positive effect number sleep hours, alternative restricted region right null (point interval): can see, given priori assumption direction effect (effect positive), presence effect 2.8 times likely absence effect, given observed data (data 2.8 time probable H_1 H_0). indicates , given observed data, priori hypothesis, posterior mass shifted away null value, giving evidence null (note Bayes factor 2.8 still considered quite weak evidence). Thanks flexibility Bayesian framework, also possible compute Bayes factor dividing hypotheses - , null alternative complementary, opposing one-sided hypotheses (Morey & Wagenmakers, 2014). example, compared alternative H_A: drug positive effects null H_0: drug effect. can also compare instead alternative complementary hypothesis: H_{-}: drug negative effects. can see test produces even stronger (conclusive) evidence one-sided vs. point-null test! indeed, rule thumb, specific two hypotheses , distinct one another, power Bayes factor ! 2 Thanks transitivity Bayes factors, can also use bayesfactor_parameters() compare even types hypotheses, trickery. example: \\underbrace{BF_{0\") test_group2_right > Bayes Factor (Savage-Dickey density ratio) > > Parameter | BF > ---------------- > group2 | 2.37 > > * Evidence Against The Null: 0 > * Direction: Right-Sided test plot(test_group2_right) test_group2_dividing <- bayesfactor_parameters(model, null = c(-Inf, 0)) test_group2_dividing > Bayes Factor (Null-Interval) > > Parameter | BF > ----------------- > group2 | 20.53 > > * Evidence Against The Null: [-Inf, 0.000] plot(test_group2_dividing)"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"si","dir":"Articles","previous_headings":"1. Testing Models’ Parameters with Bayes Factors","what":"Support intervals","title":"Bayes Factors","text":"far ’ve seen Bayes factors quantify relative support competing hypotheses. However, can also ask: Upon observing data, credibility parameter’s values increased (decreased)? example, ’ve seen point null become somewhat less credible observing data, might also ask values gained credibility given observed data?. resulting range values called support interval indicates values supported data (Wagenmakers, Gronau, Dablander, & Etz, 2018). can comparing prior posterior distributions checking posterior densities higher prior densities. bayestestR, can achieved si() function: argument BF = 1 indicates want interval contain values gained support factor least 1 (, support ). Visually, can see credibility values within interval increased (likewise credibility values outside interval decreased): can also see support interval (just barely) excludes point null (0) - whose credibility ’ve already seen decreased observed data. emphasizes relationship support interval Bayes factor: “interpretation intervals analogous frequentist confidence interval contains parameter values rejected tested level \\alpha. instance, BF = 1/3 support interval encloses values theta updating factor stronger 3 .” (Wagenmakers et al., 2018) Thus, choice BF (level support interval indicate) depends want interval represent: BF = 1 contains values whose credibility merely decreased observing data. BF > 1 contains values received impressive support data. BF < 1 contains values whose credibility impressively decreased observing data. 
Testing values outside interval produce Bayes factor larger 1/BF support alternative.","code":"my_first_si <- si(model, BF = 1) print(my_first_si) > Support Interval > > Parameter | BF = 1 SI | Effects | Component > --------------------------------------------------- > (Intercept) | [-0.44, 1.99] | fixed | conditional > group2 | [ 0.16, 3.04] | fixed | conditional plot(my_first_si)"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"bayesfactor_models","dir":"Articles","previous_headings":"","what":"2. Comparing Models using Bayes Factors","title":"Bayes Factors","text":"Bayes factors can also used compare statistical models. statistical context, answer following question: model observed data probable? words, model likely produced observed data? usually done comparing marginal likelihoods two models. case, Bayes factor measure relative evidence one model . Let’s use Bayes factors model comparison find model best describes length iris’ sepal using iris data set.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"for-bayesian-models-brms-and-rstanarm","dir":"Articles","previous_headings":"2. Comparing Models using Bayes Factors","what":"For Bayesian models (brms and rstanarm)","title":"Bayes Factors","text":"Note: order compute Bayes factors Bayesian models, non-default arguments must added upon fitting: brmsfit models must fitted save_pars = save_pars(= TRUE) stanreg models must fitted defined diagnostic_file. Let’s first fit 5 Bayesian regressions brms predict Sepal.Length: can now compare models bayesfactor_models() function, using denominator argument specify model rest models compared (case, intercept-model): can see Species + Petal.Length model best model - BF=2\\times 10^{53} compared null (intercept ). Due transitive property Bayes factors, can easily change reference model full Species * Petal.Length model: can see, Species + Petal.Length model also favored compared Species * Petal.Length model, though several orders magnitude less - supported 23.38 times !) can also change reference model Species model: Notice , Bayesian framework compared models need nested models, happened compared Petal.Length-model Species-model (something done frequentist framework, compared models must nested one another). 
can also get matrix Bayes factors pairwise model comparisons: NOTE: order correctly precisely estimate Bayes Factors, always need 4 P’s: Proper Priors 3, Plentiful Posterior 4.","code":"library(brms) # intercept only model m0 <- brm(Sepal.Length ~ 1, data = iris, prior = set_prior(\"student_t(3, 6, 6)\", class = \"Intercept\") + set_prior(\"student_t(3, 0, 6)\", class = \"sigma\"), chains = 10, iter = 5000, warmup = 1000, save_pars = save_pars(all = TRUE) ) # Petal.Length only m1 <- brm(Sepal.Length ~ Petal.Length, data = iris, prior = set_prior(\"student_t(3, 6, 6)\", class = \"Intercept\") + set_prior(\"student_t(3, 0, 6)\", class = \"sigma\") + set_prior(\"normal(0, 1)\", coef = \"Petal.Length\"), chains = 10, iter = 5000, warmup = 1000, save_pars = save_pars(all = TRUE) ) # Species only m2 <- brm(Sepal.Length ~ Species, data = iris, prior = set_prior(\"student_t(3, 6, 6)\", class = \"Intercept\") + set_prior(\"student_t(3, 0, 6)\", class = \"sigma\") + set_prior(\"normal(0, 3)\", coef = c(\"Speciesversicolor\", \"Speciesvirginica\")), chains = 10, iter = 5000, warmup = 1000, save_pars = save_pars(all = TRUE) ) # Species + Petal.Length model m3 <- brm(Sepal.Length ~ Species + Petal.Length, data = iris, prior = set_prior(\"student_t(3, 6, 6)\", class = \"Intercept\") + set_prior(\"student_t(3, 0, 6)\", class = \"sigma\") + set_prior(\"normal(0, 1)\", coef = \"Petal.Length\") + set_prior(\"normal(0, 3)\", coef = c(\"Speciesversicolor\", \"Speciesvirginica\")), chains = 10, iter = 5000, warmup = 1000, save_pars = save_pars(all = TRUE) ) # full interactive model m4 <- brm(Sepal.Length ~ Species * Petal.Length, data = iris, prior = set_prior(\"student_t(3, 6, 6)\", class = \"Intercept\") + set_prior(\"student_t(3, 0, 6)\", class = \"sigma\") + set_prior(\"normal(0, 1)\", coef = \"Petal.Length\") + set_prior(\"normal(0, 3)\", coef = c(\"Speciesversicolor\", \"Speciesvirginica\")) + set_prior(\"normal(0, 2)\", coef = c(\"Speciesversicolor:Petal.Length\", \"Speciesvirginica:Petal.Length\")), chains = 10, iter = 5000, warmup = 1000, save_pars = save_pars(all = TRUE) ) library(bayestestR) comparison <- bayesfactor_models(m1, m2, m3, m4, denominator = m0) comparison > Bayes Factors for Model Comparison > > Model BF > [1] Petal.Length 1.27e+44 > [2] Species 8.34e+27 > [3] Species + Petal.Length 2.29e+53 > [4] Species * Petal.Length 9.79e+51 > > * Against Denominator: [5] (Intercept only) > * Bayes Factor Type: marginal likelihoods (bridgesampling) update(comparison, reference = 4) > Bayes Factors for Model Comparison > > Model BF > [1] Petal.Length 1.30e-08 > [2] Species 8.52e-25 > [3] Species + Petal.Length 23.38 > [5] (Intercept only) 1.02e-52 > > * Against Denominator: [4] Species * Petal.Length > * Bayes Factor Type: marginal likelihoods (bridgesampling) update(comparison, reference = 2) > Bayes Factors for Model Comparison > > Model BF > [1] Petal.Length 1.53e+16 > [3] Species + Petal.Length 2.74e+25 > [4] Species * Petal.Length 1.17e+24 > [5] (Intercept only) 1.20e-28 > > * Against Denominator: [2] Species > * Bayes Factor Type: marginal likelihoods (bridgesampling) as.matrix(comparison) > # Bayes Factors for Model Comparison > > Numerator > Denominator > > | [1] | [2] | [3] | [4] | [5] > --------------------------------------------------------------------------------- > [1] Petal.Length | 1 | 6.54e-17 | 1.80e+09 | 7.68e+07 | 7.85e-45 > [2] Species | 1.53e+16 | 1 | 2.74e+25 | 1.17e+24 | 1.20e-28 > [3] Species + Petal.Length | 5.57e-10 | 3.64e-26 | 1 | 0.043 | 4.37e-54 > [4] Species * 
Petal.Length | 1.30e-08 | 8.52e-25 | 23.38 | 1 | 1.02e-52 > [5] (Intercept only) | 1.27e+44 | 8.34e+27 | 2.29e+53 | 9.79e+51 | 1"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"for-frequentist-models-via-the-bic-approximation","dir":"Articles","previous_headings":"2. Comparing Models using Bayes Factors","what":"For Frequentist models via the BIC approximation","title":"Bayes Factors","text":"also possible compute Bayes factors comparison frequentist models. done comparing BIC measures, allowing Bayesian comparison nested well non-nested frequentist models (Wagenmakers, 2007). Let’s try linear mixed-effects models:","code":"library(lme4) # define models with increasing complexity m0 <- lmer(Sepal.Length ~ (1 | Species), data = iris) m1 <- lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris) m2 <- lmer(Sepal.Length ~ Petal.Length + (Petal.Length | Species), data = iris) m3 <- lmer(Sepal.Length ~ Petal.Length + Petal.Width + (Petal.Length | Species), data = iris) m4 <- lmer(Sepal.Length ~ Petal.Length * Petal.Width + (Petal.Length | Species), data = iris) # model comparison bayesfactor_models(m1, m2, m3, m4, denominator = m0) > Bayes Factors for Model Comparison > > Model BF > [m1] Petal.Length + (1 | Species) 3.82e+25 > [m2] Petal.Length + (Petal.Length | Species) 4.96e+24 > [m3] Petal.Length + Petal.Width + (Petal.Length | Species) 4.03e+23 > [m4] Petal.Length * Petal.Width + (Petal.Length | Species) 9.06e+22 > > * Against Denominator: [m0] 1 + (1 | Species) > * Bayes Factor Type: BIC approximation"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"bayesfactor_restricted","dir":"Articles","previous_headings":"2. Comparing Models using Bayes Factors","what":"Order restricted models","title":"Bayes Factors","text":"stated discussing one-sided hypothesis tests, can create new models imposing order restrictions given model. example, consider following model, predict length iris’ sepal length petal, well species, priors: - b_{petal} \\sim N(0,2) - b_{versicolors}\\ \\&\\ b_{virginica} \\sim N(0,1.2) priors unrestricted - , values -\\infty \\infty parameters model non-zero credibility (matter small; true prior posterior distribution). Subsequently, priori ordering parameters relating iris species can ordering, priori setosa can larger sepals virginica, also possible virginica larger sepals setosa! make sense let priors cover possibilities? depends prior knowledge hypotheses. example, even novice botanist assume unlikely petal length negatively associated sepal length - iris longer petals likely larger, thus also longer sepal. expert botanist perhaps assume setosas smaller sepals versicolors virginica. priors can formulated restricted priors (Morey, 2015; Morey & Rouder, 2011): novice botanist: b_{petal} > 0 expert botanist: b_{versicolors} > 0\\ \\&\\ b_{virginica} > 0 testing restrictions prior posterior samples, can see probabilities restricted distributions change observing data. can achieved bayesfactor_restricted(), compute Bayes factor restricted model vs unrestricted model. Let’s first specify restrictions logical conditions: Let’s test hypotheses: can see novice botanist’s hypothesis gets Bayes factor ~2, indicating data provides twice much evidence model petal length restricted positively associated sepal length model restriction. expert botanist? seems failed miserably, BF favoring unrestricted model many many times . possible? seems controlling petal length, versicolor virginica actually shorter sepals! 
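Numerically, the order-restriction Bayes factor is just the posterior probability of the restriction divided by its prior probability; a sketch of the novice botanist's ~2, assuming hypothetical data frames prior_draws (e.g., from unupdate(iris_model)) and posterior_draws (e.g., as.data.frame(iris_model)):

p_prior <- mean(prior_draws$Petal.Length > 0)     # P(Prior), ~0.5 for a symmetric prior
p_post  <- mean(posterior_draws$Petal.Length > 0) # P(Posterior), ~1
p_post / p_prior                                  # the restriction BF, ~2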
Note that these Bayes factors compare the restricted model to the unrestricted model. If we wanted to compare the restricted model to the null model, we could use the transitive property of Bayes factors like so: BF_{\text{restricted vs. NULL}} = BF_{\text{restricted vs. un-restricted}} \times BF_{\text{un-restricted vs. NULL}} Because the restrictions are on the prior distribution, they are appropriate only for testing pre-planned (a priori) hypotheses, and should not be used for post hoc comparisons (Morey, 2015). NOTE: See the Specifying Correct Priors for Factors with More than 2 Levels appendix below.","code":"iris_model <- stan_glm(Sepal.Length ~ Species + Petal.Length, data = iris, prior = normal(0, c(2, 1.2, 1.2), autoscale = FALSE), chains = 10, iter = 5000, warmup = 1000 ) botanist_hypotheses <- c( \"Petal.Length > 0\", \"(Speciesversicolor > 0) & (Speciesvirginica > 0)\" ) model_prior <- unupdate(iris_model) botanist_BFs <- bayesfactor_restricted( posterior = iris_model, prior = model_prior, hypothesis = botanist_hypotheses ) print(botanist_BFs) > Bayes Factor (Order-Restriction) > > Hypothesis P(Prior) P(Posterior) BF > Petal.Length > 0 0.50 1 1.99 > (Speciesversicolor > 0) & (Speciesvirginica > 0) 0.25 0 0.00e+00 > > * Bayes factors for the restricted model vs. the un-restricted model."},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"bayesian-model-averaging","dir":"Articles","previous_headings":"","what":"3. Bayesian Model Averaging","title":"Bayes Factors","text":"In the previous section, we discussed the direct comparison of two models to determine if an effect is supported by the data. However, in many cases there are too many models to consider, or perhaps it is not straightforward which models we should compare to determine if an effect is supported by the data. For such cases, we can use Bayesian model averaging (BMA) to determine the support provided by the data for a parameter or term across many models.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"bayesfactor_inclusion","dir":"Articles","previous_headings":"3. Bayesian Model Averaging","what":"Inclusion Bayes factors","title":"Bayes Factors","text":"Inclusion Bayes factors answer the question: Are the observed data more probable under models with a particular predictor than under models without that particular predictor? In other words, on average, are models with predictor X more likely to have produced the observed data than models without predictor X? Since each model has a prior probability, it is possible to sum the prior probabilities of all models that include the predictor of interest (the prior inclusion probability), and of all models that do not include that predictor (the prior exclusion probability). After the data are observed, and each model is assigned a posterior probability, we can similarly consider the sums of these models' posterior probabilities to obtain the posterior inclusion probability and the posterior exclusion probability. The change from prior inclusion odds to posterior inclusion odds is the Inclusion Bayes factor [“BF_{Inclusion}”; Clyde, Ghosh, & Littman (2011)]. Let's use the brms example from above: If we examine the interaction term's inclusion Bayes factor, we can see that across all 5 models, a model with the interaction term is on average (1/0.171) 5.84 times less supported than a model without the term. Note that Species, a factor that is represented in the model by several parameters, gets a single Bayes factor - inclusion Bayes factors are given per predictor! 
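To make the inclusion-odds arithmetic concrete, here is a minimal sketch of our own (not part of the article) recomputing the interaction's inclusion BF by hand, with the probabilities read off the bayesfactor_inclusion(comparison) output shown below:
# Hedged sketch: an inclusion BF is the posterior inclusion odds divided by
# the prior inclusion odds; the values come from the printed output below.
p_prior <- 0.20 # prior inclusion probability of Petal.Length:Species
p_post <- 0.04 # posterior inclusion probability
(p_post / (1 - p_post)) / (p_prior / (1 - p_prior)) # ~0.17, the printed Inclusion BF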
We can also compare only matched models - such that averaging is done only across models that (1) do not include any interactions with the predictor of interest; and (2) for interaction predictors, only across models that contain the main effects from which the interaction predictor is comprised (see here for an explanation of why you might want to do this).","code":"bayesfactor_inclusion(comparison) > Inclusion Bayes Factors (Model Averaged) > > P(prior) P(posterior) Inclusion BF > Petal.Length 0.60 1.00 1.91e+25 > Species 0.60 1.00 1.25e+09 > Petal.Length:Species 0.20 0.04 0.171 > > * Compared among: all models > * Priors odds: uniform-equal bayesfactor_inclusion(comparison, match_models = TRUE) > Inclusion Bayes Factors (Model Averaged) > > P(prior) P(posterior) Inclusion BF > Petal.Length 0.40 0.96 2.74e+25 > Species 0.40 0.96 1.80e+09 > Petal.Length:Species 0.20 0.04 0.043 > > * Compared among: matched models only > * Priors odds: uniform-equal"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"comparison-with-jasp","dir":"Articles","previous_headings":"3. Bayesian Model Averaging","what":"Comparison with JASP","title":"Bayes Factors","text":"bayesfactor_inclusion() is meant to provide Bayes Factors per predictor, similar to JASP's Effects option. Let's compare the two. Note that for this comparison we will use the BayesFactor package, which is what JASP uses under the hood. (Note that this package uses a different model-parameterization and different default prior-specifications compared to the Stan-based packages.) Across all models: Across matched models: With Nuisance Effects: We'll add dose to the null model in JASP, and do the same in R:","code":"library(BayesFactor) data(ToothGrowth) ToothGrowth$dose <- as.factor(ToothGrowth$dose) BF_ToothGrowth <- anovaBF(len ~ dose * supp, ToothGrowth, progress = FALSE) bayesfactor_inclusion(BF_ToothGrowth) > Inclusion Bayes Factors (Model Averaged) > > P(prior) P(posterior) Inclusion BF > supp 0.60 1.00 141.02 > dose 0.60 1.00 3.21e+14 > dose:supp 0.20 0.71 10.00 > > * Compared among: all models > * Priors odds: uniform-equal bayesfactor_inclusion(BF_ToothGrowth, match_models = TRUE) > Inclusion Bayes Factors (Model Averaged) > > P(prior) P(posterior) Inclusion BF > supp 0.40 0.28 59.71 > dose 0.40 0.29 1.38e+14 > dose:supp 0.20 0.71 2.54 > > * Compared among: matched models only > * Priors odds: uniform-equal BF_ToothGrowth_against_dose <- BF_ToothGrowth[3:4] / BF_ToothGrowth[2] # OR: # update(bayesfactor_models(BF_ToothGrowth), # subset = c(4, 5), # reference = 3) BF_ToothGrowth_against_dose > Bayes factor analysis > -------------- > [1] supp + dose : 60 ±4.6% > [2] supp + dose + supp:dose : 152 ±1.1% > > Against denominator: > len ~ dose > --- > Bayes factor type: BFlinearModel, JZS bayesfactor_inclusion(BF_ToothGrowth_against_dose) > Inclusion Bayes Factors (Model Averaged) > > P(prior) P(posterior) Inclusion BF > dose 1.00 1.00 > supp 0.67 1.00 105.77 > dose:supp 0.33 0.71 5.00 > > * Compared among: all models > * Priors odds: uniform-equal"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"weighted_posteriors","dir":"Articles","previous_headings":"3. Bayesian Model Averaging","what":"Averaging posteriors","title":"Bayes Factors","text":"Similar to how we can average the evidence for a predictor across models, we can also average the posterior estimate across models. This is useful in situations where Bayes factors seem to support a null effect, yet the HDI for the alternative excludes the null value (see also si() described above). For example, looking at the Motor Trend Car Road Tests data (data(mtcars)), we would naturally predict miles/gallon (mpg) from transmission type (am) and weight (wt), but what about the number of carburetors (carb)? Is it a good predictor? 
We can determine this by comparing the following models: It seems that the model without carb as a predictor is 1/BF=1.2 times more likely than the model with carb as a predictor. We might then assume that in the latter model, the HDI will include the point-null value of 0, to also indicate the credibility of the null in the posterior. However, this is not the case: How can this be? By estimating the HDI of the effect of carb in the full model, we are acting under the assumption that this model is correct. However, as we've just seen, the models are practically tied. If this is the case, why limit our estimation of the effect to just one model? (Bergh, Haaf, Ly, Rouder, & Wagenmakers, 2019). Using Bayesian Model Averaging, we can combine the posterior samples from several models, weighted by the models' marginal likelihoods (done via the bayesfactor_models() function). If a parameter is part of some of the models but is missing from others, it is assumed to be fixed at 0 (this can also be seen as a method of applying shrinkage to our estimates). The result is a posterior distribution across several models, which we can now treat like any posterior distribution, and estimate its HDI. In bayestestR, we can do this with the weighted_posteriors() function: We can see that across the models under consideration, the posterior of the carb effect is almost equally weighted between the alternative model and the null model - as represented by about half of the posterior mass concentrated at 0 - which makes sense, as both models were almost equally supported by the data. We can also see that, across both models, the HDI does now contain 0. Thus we have resolved the conflict between the Bayes factor and the HDI (Rouder, Haaf, & Vandekerckhove, 2018)! Note: Parameters might play different roles across different models. For example, the parameter A plays a different role in the model Y ~ A + B (where it is a main effect) than in the model Y ~ A + B + A:B (where it is a simple effect). In many cases, centering the predictors (mean subtracting for continuous variables, orthogonal coding for factors) can reduce this issue.","code":"mod <- stan_glm(mpg ~ wt + am, data = mtcars, prior = normal(0, c(10, 10), autoscale = FALSE), chains = 10, iter = 5000, warmup = 1000, diagnostic_file = file.path(tempdir(), \"df1.csv\"), refresh = 0 ) mod_carb <- stan_glm(mpg ~ wt + am + carb, data = mtcars, prior = normal(0, c(10, 10, 20), autoscale = FALSE), chains = 10, iter = 5000, warmup = 1000, diagnostic_file = file.path(tempdir(), \"df0.csv\"), refresh = 0 ) BF_carb <- bayesfactor_models(mod_carb, denominator = mod, verbose = FALSE) BF_carb > Bayes Factors for Model Comparison > > Model BF > [1] wt + am + carb 0.820 > > * Against Denominator: [2] wt + am > * Bayes Factor Type: marginal likelihoods (bridgesampling) hdi(mod_carb, ci = 0.95) > Highest Density Interval > > Parameter | 95% HDI > ---------------------------- > (Intercept) | [27.95, 40.05] > wt | [-5.49, -1.72] > am | [-0.75, 6.00] > carb | [-2.04, -0.31] BMA_draws <- weighted_posteriors(mod, mod_carb, verbose = FALSE) BMA_hdi <- hdi(BMA_draws, ci = 0.95) BMA_hdi plot(BMA_hdi) > Highest Density Interval > > Parameter | 95% HDI > ---------------------------- > (Intercept) | [28.77, 42.61] > wt | [-6.77, -2.18] > am | [-2.59, 5.47] > carb | [-1.69, 0.00]"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"testing-contrasts-with-emmeans-modelbased","dir":"Articles","previous_headings":"Appendices","what":"Testing contrasts (with emmeans / modelbased)","title":"Bayes Factors","text":"Besides testing parameters, bayesfactor_parameters() can be used to test any estimate based on the prior and posterior distributions of that estimate. One way to achieve this is with a mix of bayesfactor_parameters() + emmeans to test Bayesian contrasts. For example, in the sleep example from above, we can estimate the group means and the difference between them: There is strong evidence that the mean of group 2 is not 0, some evidence that the mean of group 1 is 0, but hardly any evidence that the difference between them is not 0. Conflict? Uncertainty? That is the Bayesian way! 
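As a rough follow-up sketch of our own (with assumed level names, not from the article), the same prior/posterior restriction logic can also be applied to a directed hypothesis about the group means:
# Hedged sketch: a directed (order) hypothesis on the marginal means.
# `model` is the sleep-study model used above; the labels "group1" and
# "group2" are assumptions about how the grid parameters are named.
library(emmeans)
em_group <- emmeans(model, ~group, data = sleep)
bayesfactor_restricted(em_group, prior = model, hypothesis = "group1 < group2")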
We can also use easystats' modelbased package to compute Bayes factors for contrasts: NOTE: See the Specifying Correct Priors for Factors with More than 2 Levels section below.","code":"library(emmeans) (group_diff <- emmeans(model, pairwise ~ group, data = sleep)) # pass the original model via prior bayesfactor_parameters(group_diff, prior = model) > $emmeans > group emmean lower.HPD upper.HPD > 1 0.79 -0.48 2.0 > 2 2.28 1.00 3.5 > > Point estimate displayed: median > HPD interval probability: 0.95 > > $contrasts > contrast estimate lower.HPD upper.HPD > group1 - group2 -1.47 -3.2 0.223 > > Point estimate displayed: median > HPD interval probability: 0.95 > Bayes Factor (Savage-Dickey density ratio) > > group | contrast | BF > ------------------------------- > 1 | | 0.286 > 2 | | 21.18 > | group1 - group2 | 1.26 > > * Evidence Against The Null: 0 library(modelbased) estimate_contrasts(model, test = \"bf\", bf_prior = model)"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"contr_bayes","dir":"Articles","previous_headings":"Appendices","what":"Specifying correct priors for factors","title":"Bayes Factors","text":"This section introduces the biased priors obtained when using the common effects factor coding (contr.sum) or dummy factor coding (contr.treatment), and the solution of using orthonormal factor coding (contr.equalprior) (as outlined in Rouder, Morey, Speckman, & Province, 2012, sec. 7.2). Special care should be taken when working with factors that have 3 or more levels.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"contrasts-and-marginal-means","dir":"Articles","previous_headings":"Appendices > Specifying correct priors for factors","what":"Contrasts (and marginal means)","title":"Bayes Factors","text":"The effects factor coding commonly used in factorial analysis carries a hidden bias when applied to Bayesian priors. For example, if we want to test all pairwise differences between the 3 levels of a factor, we would expect all a priori differences to have the same distribution, but… For example, let's test the prior pairwise differences between the 3 species in the iris dataset. Notice that, though the prior estimates for all 3 pairwise contrasts are ~0, the scale of the HDI is much narrower for the prior of the setosa - versicolor contrast! What happened??? This is caused by an inherent bias in the priors introduced by the effects coding (it's even worse with the default treatment coding, because the prior for the intercept is usually drastically different from the effects' parameters). And since this affects the priors, this bias will also bias the Bayes factors, over- / under-stating evidence for some contrasts over others! The solution is to use an equal-prior factor coding, a-la the contr.equalprior* family, which we can either specify as the factor coding per-factor: Or set globally: Let's estimate the prior differences: We can see that using the contr.equalprior_pairs coding scheme, we have equal priors on all pairwise contrasts, with their width corresponding to the normal(0, c(1, 1), autoscale = FALSE) prior we set! There are other solutions to this problem of priors. You can read about them in Solomon Kurz's blog post.","code":"df <- iris contrasts(df$Species) <- contr.sum fit_sum <- stan_glm(Sepal.Length ~ Species, data = df, prior = normal(0, c(1, 1), autoscale = FALSE), prior_PD = TRUE, # sample priors family = gaussian(), chains = 10, iter = 5000, warmup = 1000, refresh = 0 ) (pairs_sum <- pairs(emmeans(fit_sum, ~Species))) ggplot(stack(insight::get_parameters(pairs_sum)), aes(x = values, fill = ind)) + geom_density(linewidth = 1) + facet_grid(ind ~ .) 
+ labs(x = \"prior difference values\") + theme(legend.position = \"none\") > contrast estimate lower.HPD upper.HPD > setosa - versicolor -0.017 -2.8 2.7 > setosa - virginica -0.027 -4.0 4.6 > versicolor - virginica 0.001 -4.2 4.5 > > Point estimate displayed: median > HPD interval probability: 0.95 contrasts(df$Species) <- contr.equalprior_pairs options(contrasts = c(\"contr.equalprior_pairs\", \"contr.poly\")) fit_bayes <- stan_glm(Sepal.Length ~ Species, data = df, prior = normal(0, c(1, 1), autoscale = FALSE), prior_PD = TRUE, # sample priors family = gaussian(), chains = 10, iter = 5000, warmup = 1000, refresh = 0 ) (pairs_bayes <- pairs(emmeans(fit_bayes, ~Species))) ggplot(stack(insight::get_parameters(pairs_bayes)), aes(x = values, fill = ind)) + geom_density(linewidth = 1) + facet_grid(ind ~ .) + labs(x = \"prior difference values\") + theme(legend.position = \"none\") > contrast estimate lower.HPD upper.HPD > setosa - versicolor 0.0000 -2.10 1.89 > setosa - virginica 0.0228 -1.93 1.99 > versicolor - virginica 0.0021 -2.06 1.89 > > Point estimate displayed: median > HPD interval probability: 0.95"},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"order-restrictions","dir":"Articles","previous_headings":"Appendices > Specifying correct priors for factors","what":"Order restrictions","title":"Bayes Factors","text":"bias also affect order restrictions involving 3 levels. example, want test order restriction among , B, C, priori probability obtaining order > C > B 1/6 (reach back intro stats year 1), … example, interested following order restrictions iris dataset (line separate restriction): default factor coding, looks like : happened??? comparison 2 levels prior ~0.5, expected. comparison 3 levels different priors, depending order restriction - .e. orders priori likely others!!! , solved using equal prior factor coding ().","code":"hyp <- c( # comparing 2 levels \"setosa < versicolor\", \"setosa < virginica\", \"versicolor < virginica\", # comparing 3 (or more) levels \"setosa < virginica & virginica < versicolor\", \"virginica < setosa & setosa < versicolor\", \"setosa < versicolor & versicolor < virginica\" ) contrasts(df$Species) <- contr.sum fit_sum <- stan_glm(Sepal.Length ~ Species, data = df, prior = normal(0, c(1, 1), autoscale = FALSE), family = gaussian(), chains = 10, iter = 5000, warmup = 1000 ) em_sum <- emmeans(fit_sum, ~Species) # the posterior marginal means bayesfactor_restricted(em_sum, fit_sum, hypothesis = hyp) > > SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1). > Chain 1: > Chain 1: Gradient evaluation took 2.2e-05 seconds > Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.22 seconds. > Chain 1: Adjust your expectations accordingly! 
> Chain 1: > Chain 1: > Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) > Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) > Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) > Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) > Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) > Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) > Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) > Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) > Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) > Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) > Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) > Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) > Chain 1: > Chain 1: Elapsed Time: 0.028 seconds (Warm-up) > Chain 1: 0.045 seconds (Sampling) > Chain 1: 0.073 seconds (Total) > Chain 1: > > SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2). > Chain 2: > Chain 2: Gradient evaluation took 1e-05 seconds > Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds. > Chain 2: Adjust your expectations accordingly! > Chain 2: > Chain 2: > Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) > Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) > Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) > Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) > Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) > Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) > Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) > Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) > Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) > Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) > Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) > Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) > Chain 2: > Chain 2: Elapsed Time: 0.03 seconds (Warm-up) > Chain 2: 0.041 seconds (Sampling) > Chain 2: 0.071 seconds (Total) > Chain 2: > > SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). > Chain 3: > Chain 3: Gradient evaluation took 1.1e-05 seconds > Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. > Chain 3: Adjust your expectations accordingly! > Chain 3: > Chain 3: > Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) > Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) > Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) > Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) > Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) > Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) > Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) > Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) > Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) > Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) > Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) > Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) > Chain 3: > Chain 3: Elapsed Time: 0.028 seconds (Warm-up) > Chain 3: 0.039 seconds (Sampling) > Chain 3: 0.067 seconds (Total) > Chain 3: > > SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). > Chain 4: > Chain 4: Gradient evaluation took 1.1e-05 seconds > Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. > Chain 4: Adjust your expectations accordingly! 
> Chain 4: > Chain 4: > Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) > Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) > Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) > Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) > Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) > Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) > Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) > Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) > Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) > Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) > Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) > Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) > Chain 4: > Chain 4: Elapsed Time: 0.027 seconds (Warm-up) > Chain 4: 0.041 seconds (Sampling) > Chain 4: 0.068 seconds (Total) > Chain 4: > Bayes Factor (Order-Restriction) > > Hypothesis P(Prior) P(Posterior) BF > setosa < versicolor 0.51 1 1.97 > setosa < virginica 0.49 1 2.02 > versicolor < virginica 0.49 1 2.03 > setosa < virginica & virginica < versicolor 0.11 0 0.00e+00 > virginica < setosa & setosa < versicolor 0.20 0 0.00e+00 > setosa < versicolor & versicolor < virginica 0.20 1 5.09 > > * Bayes factors for the restricted model vs. the un-restricted model. contrasts(df$Species) <- contr.equalprior_pairs fit_bayes <- stan_glm(Sepal.Length ~ Species, data = df, prior = normal(0, c(1, 1), autoscale = FALSE), family = gaussian(), chains = 10, iter = 5000, warmup = 1000 ) em_bayes <- emmeans(fit_sum, ~Species) # the posterior marginal means bayesfactor_restricted(em_bayes, fit_sum, hypothesis = hyp) > Bayes Factor (Order-Restriction) > > Hypothesis P(Prior) P(Posterior) BF > setosa < versicolor 0.49 1 2.06 > setosa < virginica 0.49 1 2.03 > versicolor < virginica 0.51 1 1.96 > setosa < virginica & virginica < versicolor 0.17 0 0.00e+00 > virginica < setosa & setosa < versicolor 0.16 0 0.00e+00 > setosa < versicolor & versicolor < virginica 0.16 1 6.11 > > * Bayes factors for the restricted model vs. the un-restricted model."},{"path":"https://easystats.github.io/bayestestR/articles/bayes_factors.html","id":"conclusion","dir":"Articles","previous_headings":"Appendices > Specifying correct priors for factors","what":"Conclusion","title":"Bayes Factors","text":"comparing results two factor coding schemes, find: 1. cases, estimated (posterior) means quite similar (identical). 2. priors Bayes factors differ two schemes. 3. contr.equalprior*, prior distribution difference order 3 () means balanced. Read equal prior contrasts contr.equalprior docs!","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/bayestestR.html","id":"why-use-the-bayesian-framework","dir":"Articles","previous_headings":"","what":"Why use the Bayesian Framework?","title":"Get Started with Bayesian Analysis","text":"Bayesian framework statistics quickly gaining popularity among scientists, associated general shift towards open honest science. Reasons prefer approach : reliability (Etz & Vandekerckhove, 2016) accuracy (noisy data small samples) (Kruschke, Aguinis, & Joo, 2012) possibility introducing prior knowledge analysis (Andrews & Baguley, 2013; Kruschke et al., 2012) critically, intuitive nature results straightforward interpretation (Kruschke, 2010; Wagenmakers et al., 2018) general, frequentist approach associated focus null hypothesis testing, misuse p-values shown critically contribute reproducibility crisis social psychological sciences (Chambers, Feredoes, Muthukumaraswamy, & Etchells, 2014; Szucs & Ioannidis, 2016). 
There is an emerging consensus that the generalization of the Bayesian approach is one way of overcoming these issues (Benjamin et al., 2018; Etz & Vandekerckhove, 2016). Once we agree that the Bayesian framework is the right way to go, you might wonder what exactly this framework is. What's all the fuss about?","code":""},{"path":"https://easystats.github.io/bayestestR/articles/bayestestR.html","id":"what-is-the-bayesian-framework","dir":"Articles","previous_headings":"","what":"What is the Bayesian Framework?","title":"Get Started with Bayesian Analysis","text":"Adopting the Bayesian framework is more of a shift in the paradigm than a change in the methodology. Indeed, all the common statistical procedures (t-tests, correlations, ANOVAs, regressions, etc.) can be achieved using the Bayesian framework. The key difference is that in the frequentist framework (the “classical” approach to statistics, with p and t values, as well as some weird degrees of freedom), the effects are fixed (but unknown) and the data are random. In other words, it assumes that the unknown parameter has a unique value that we are trying to estimate/guess using our sample data. On the other hand, in the Bayesian framework, instead of estimating the “true effect”, the probability of different effects given the observed data is computed, resulting in a distribution of possible values for the parameters, called the posterior distribution. This uncertainty in Bayesian inference can be summarized, for instance, by the median of the distribution, as well as a range of values of the posterior distribution that includes the 95% most probable values (the 95% credible interval). Cum grano salis, these are considered the counterparts to the point-estimate and confidence interval in the frequentist framework. To illustrate the difference in interpretation, the Bayesian framework allows us to say “given the observed data, the effect has a 95% probability of falling within this range”, while the frequentist (less intuitive) alternative would be “when repeatedly computing confidence intervals from data of this sort, there is a 95% probability that the effect falls within a given range”. In essence, the Bayesian sampling algorithms (such as MCMC sampling) return a probability distribution (the posterior) of an effect that is compatible with the observed data. Thus, an effect can be described by characterizing its posterior distribution in relation to its centrality (point-estimates), uncertainty, as well as its existence and significance. In other words, putting the maths behind it aside for a moment, we can say that: The frequentist approach tries to estimate the real effect. For instance, the “real” value of the correlation between x and y. Hence, frequentist models return a point-estimate (i.e., a single value, not a distribution) of the “real” correlation (e.g., r = 0.42), estimated under a number of obscure assumptions (at a minimum, considering that the data are sampled at random from a “parent”, usually normal, distribution). The Bayesian framework assumes no such thing. The data are what they are. Based on the observed data (and a prior belief about the result), the Bayesian sampling algorithm (MCMC sampling is one example) returns a probability distribution (called the posterior) of the effect that is compatible with the observed data. For the correlation between x and y, it will return a distribution that says, for example, “the most probable effect is 0.42, but the data are also compatible with correlations of 0.12 and 0.74 with certain probabilities”. To characterize the statistical significance of our effects, we do not need p-values, or any other such indices. We simply describe the posterior distribution of the effect. For example, we can report the median and the 89% Credible Interval, or other indices. An accurate depiction of a regular Bayesian user estimating a credible interval. Note: Although the purpose of this package is to advocate for the use of Bayesian statistics, please note that there are serious arguments supporting frequentist indices (see for instance this thread). As always, the world is not black and white (p < .001). 
So… how does it work?","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/bayestestR.html","id":"bayestestr-installation","dir":"Articles","previous_headings":"A simple example","what":"bayestestR installation","title":"Get Started with Bayesian Analysis","text":"You can install bayestestR along with the whole easystats suite by running the following: Let's also install and load the rstanarm package, which allows fitting Bayesian models, as well as bayestestR, to describe them.","code":"install.packages(\"remotes\") remotes::install_github(\"easystats/easystats\") install.packages(\"rstanarm\") library(rstanarm)"},{"path":"https://easystats.github.io/bayestestR/articles/bayestestR.html","id":"traditional-linear-regression","dir":"Articles","previous_headings":"A simple example","what":"Traditional linear regression","title":"Get Started with Bayesian Analysis","text":"Let's start by fitting a simple frequentist linear regression (the lm() function stands for linear model) between two numeric variables, Sepal.Length and Petal.Length, from the famous iris dataset, included by default in R. This analysis suggests that there is a statistically significant (whatever that means) and positive (with a coefficient of 0.41) linear relationship between the two variables. Fitting and interpreting frequentist models is so easy that it is obvious why people use them instead of the Bayesian framework… right? Not anymore.","code":"model <- lm(Sepal.Length ~ Petal.Length, data = iris) summary(model) Call: lm(formula = Sepal.Length ~ Petal.Length, data = iris) Residuals: Min 1Q Median 3Q Max -1.2468 -0.2966 -0.0152 0.2768 1.0027 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 4.3066 0.0784 54.9 <2e-16 *** Petal.Length 0.4089 0.0189 21.6 <2e-16 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Residual standard error: 0.41 on 148 degrees of freedom Multiple R-squared: 0.76, Adjusted R-squared: 0.758 F-statistic: 469 on 1 and 148 DF, p-value: <2e-16"},{"path":"https://easystats.github.io/bayestestR/articles/bayestestR.html","id":"bayesian-linear-regression","dir":"Articles","previous_headings":"A simple example","what":"Bayesian linear regression","title":"Get Started with Bayesian Analysis","text":"Summary of Posterior Distribution That's it! We just fitted a Bayesian version of the model by simply using the stan_glm() function instead of lm(), and described the posterior distributions of its parameters! The conclusions we can draw, for this example, are very similar. The effect (the median of the effect's posterior distribution) is about 0.41, and it can also be considered significant in the Bayesian sense (more on that later). So, ready to learn more? Check out the next tutorial! And, if you want even more, you can check out other articles describing all the functionality the package has to offer! https://easystats.github.io/bayestestR/articles/","code":"model <- stan_glm(Sepal.Length ~ Petal.Length, data = iris) posteriors <- describe_posterior(model) # for a nicer table print_md(posteriors, digits = 2)"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/credible_interval.html","id":"what-is-a-credible-interval","dir":"Articles","previous_headings":"","what":"What is a Credible Interval?","title":"Credible Intervals (CI)","text":"Credible intervals are an important concept in Bayesian statistics. Their core purpose is to describe and summarise the uncertainty related to the unknown parameters you are trying to estimate. In this regard, they could appear as quite similar to the frequentist Confidence Intervals. However, while their goal is similar, their statistical definition and meaning are very different. Indeed, while the latter is obtained through a complex algorithm full of rarely-tested assumptions and approximations, the credible intervals are fairly straightforward to compute. As Bayesian inference returns a distribution of possible effect values (the posterior), the credible interval is just the range containing a particular percentage of probable values. 
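As a tiny illustration of our own (not from the article) of “a range containing a particular percentage of probable values”, an equal-tailed 95% interval can be read directly off posterior draws:
# Hedged sketch: simulated draws stand in for a real posterior here.
draws <- rnorm(4000, mean = 0.4, sd = 0.05)
quantile(draws, probs = c(0.025, 0.975)) # the central 95%; compare bayestestR::eti(draws)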
For instance, the 95% credible interval is simply the central portion of the posterior distribution that contains 95% of the values. Note how this drastically improves the interpretability of the Bayesian interval compared to the frequentist one. Indeed, the Bayesian framework allows us to say “given the observed data, the effect has a 95% probability of falling within this range”, whereas the less straightforward frequentist alternative (the 95% Confidence Interval) would be “there is a 95% probability that when computing a confidence interval from data of this sort, the effect falls within this range”.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/credible_interval.html","id":"vs--95-ci","dir":"Articles","previous_headings":"","what":"89% vs. 95% CI","title":"Credible Intervals (CI)","text":"Using 89% is another popular choice, and it was used as the default for a long time (read here the story of this change). How did it start? Naturally, when it came to choosing the CI level to report by default, people started using 95%, the arbitrary convention used in the frequentist world. However, some authors suggested that 95% might not be the most appropriate for Bayesian posterior distributions, as it potentially lacks stability if not enough posterior samples are drawn (Kruschke, 2014). The proposition was to use 90% instead of 95%. However, more recently, McElreath (2014, 2018) suggested that if we are to use arbitrary thresholds in the first place, why not use 89%? Moreover, 89 is the highest prime number that does not exceed the already unstable 95% threshold. What does this have to do with anything? Nothing, but it reminds us of the total arbitrariness of these conventions (McElreath, 2018). Thus, CIs computed as 89% intervals (ci = 0.89) are deemed to be more stable than, for instance, 95% intervals (Kruschke, 2014). An effective sample size (ESS; see below) of at least 10.000 is recommended if one wants to compute precise 95% intervals (Kruschke, 2014, p. 183ff). Unfortunately, the default number of posterior samples in most Bayes packages (e.g., rstanarm or brms) is 4.000 (thus, you might want to increase it when fitting your model). However, 95% has some advantages too. For instance, it shares (in the case of a normal posterior distribution) an intuitive relationship with the standard deviation, and it conveys a more accurate image of the (artificial) bounds of the distribution. Also, because it is wider, it makes analyses more conservative (i.e., the probability of covering 0 is larger for the 95% CI than for lower ranges such as 89%), which is a good thing in the context of the reproducibility crisis. To add to the mess, some software use a different default, for instance 90%. Ultimately, the user should make an informed decision, based on their needs and goals, and justify their choice.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/credible_interval.html","id":"different-types-of-cis","dir":"Articles","previous_headings":"","what":"Different types of CIs","title":"Credible Intervals (CI)","text":"The reader might notice that bayestestR provides two methods to compute credible intervals, the Highest Density Interval (HDI) (hdi()) and the Equal-tailed Interval (ETI) (eti()). These methods can also be changed via the method argument of the ci() function. What is the difference? Let's see: These are exactly the same here… but is it also the case for other types of distributions? Here the difference is a strong one. Contrary to the HDI, for which all points within the interval have a higher probability density than points outside the interval, the ETI is equal-tailed. This means that a 90% interval has 5% of the distribution on either side of its limits. It indicates the 5th percentile and the 95th percentile. For symmetric distributions, the two methods of computing credible intervals, the ETI and the HDI, return similar results. This is not the case for skewed distributions. Indeed, it is possible that parameter values within the ETI have lower credibility (are less probable) than parameter values outside the ETI. This property seems undesirable as a summary of the credible values in a distribution. On the other hand, the ETI range does not change with transformations applied to the distribution (for instance, for a log-odds to probabilities transformation): the lower and higher bounds of the transformed distribution will correspond to the transformed lower and higher bounds of the original distribution. On the contrary, applying transformations to the distribution will change the resulting HDI. 
Thus, for instance, when exponentiated credible intervals are required, calculating the ETI is recommended.","code":"library(bayestestR) library(ggplot2) # Generate a normal distribution posterior <- distribution_normal(1000) # Compute HDI and ETI ci_hdi <- ci(posterior, method = \"HDI\") ci_eti <- ci(posterior, method = \"ETI\") # Plot the distribution and add the limits of the two CIs out <- estimate_density(posterior, extend = TRUE) ggplot(out, aes(x = x, y = y)) + geom_area(fill = \"orange\") + theme_classic() + # HDI in blue geom_vline(xintercept = ci_hdi$CI_low, color = \"royalblue\", linewidth = 3) + geom_vline(xintercept = ci_hdi$CI_high, color = \"royalblue\", linewidth = 3) + # Quantile in red geom_vline(xintercept = ci_eti$CI_low, color = \"red\", linewidth = 1) + geom_vline(xintercept = ci_eti$CI_high, color = \"red\", linewidth = 1) # Generate a beta distribution posterior <- distribution_beta(1000, 6, 2) # Compute HDI and Quantile CI ci_hdi <- ci(posterior, method = \"HDI\") ci_eti <- ci(posterior, method = \"ETI\") # Plot the distribution and add the limits of the two CIs out <- estimate_density(posterior, extend = TRUE) ggplot(out, aes(x = x, y = y)) + geom_area(fill = \"orange\") + theme_classic() + # HDI in blue geom_vline(xintercept = ci_hdi$CI_low, color = \"royalblue\", linewidth = 3) + geom_vline(xintercept = ci_hdi$CI_high, color = \"royalblue\", linewidth = 3) + # ETI in red geom_vline(xintercept = ci_eti$CI_low, color = \"red\", linewidth = 1) + geom_vline(xintercept = ci_eti$CI_high, color = \"red\", linewidth = 1)"},{"path":"https://easystats.github.io/bayestestR/articles/credible_interval.html","id":"the-support-interval","dir":"Articles","previous_headings":"","what":"The Support Interval","title":"Credible Intervals (CI)","text":"Unlike the HDI and the ETI, which look only at the posterior distribution, the Support Interval (SI) provides information regarding the change in the credibility of values from the prior to the posterior - in other words, it indicates which values of a parameter have gained support from the observed data by a factor greater than or equal to k (Wagenmakers, Gronau, Dablander, & Etz, 2018). The blue lines mark values that received any support from the data (the BF = 1 SI), and the red lines mark values that received at least moderate support (the BF = 3 SI) from the data. From the perspective of the Savage-Dickey Bayes factor, testing against a point null hypothesis for any value within the Support Interval will yield a Bayes factor smaller than 1/BF.","code":"prior <- distribution_normal(40000, mean = 0, sd = 1) posterior <- distribution_normal(40000, mean = 0.5, sd = 0.3) si_1 <- si(posterior, prior, BF = 1) si_3 <- si(posterior, prior, BF = 3) ggplot(mapping = aes(x = x, y = y)) + theme_classic() + # The posterior geom_area( fill = \"orange\", data = estimate_density(posterior, extend = TRUE) ) + # The prior geom_area( color = \"black\", fill = NA, linewidth = 1, linetype = \"dashed\", data = estimate_density(prior, extend = TRUE) ) + # BF = 1 SI in blue geom_vline(xintercept = si_1$CI_low, color = \"royalblue\", linewidth = 1) + geom_vline(xintercept = si_1$CI_high, color = \"royalblue\", linewidth = 1) + # BF = 3 SI in red geom_vline(xintercept = si_3$CI_low, color = \"red\", linewidth = 1) + geom_vline(xintercept = si_3$CI_high, color = \"red\", linewidth = 1)"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"loading-the-packages","dir":"Articles","previous_headings":"","what":"Loading the packages","title":"1. 
Initiation to Bayesian models","text":"Once you've installed the necessary packages, we can load rstanarm (to fit the models), bayestestR (to compute useful indices), and insight (to access the parameters).","code":"library(rstanarm) library(bayestestR) library(insight)"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"simple-linear-regression-model","dir":"Articles","previous_headings":"","what":"Simple linear (regression) model","title":"1. Initiation to Bayesian models","text":"We will begin by conducting a simple linear regression to test the relationship between Petal.Length (our predictor, or independent, variable) and Sepal.Length (our response, or dependent, variable) from the iris dataset, included by default in R.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"fitting-the-model","dir":"Articles","previous_headings":"Simple linear (regression) model","what":"Fitting the model","title":"1. Initiation to Bayesian models","text":"Let's start by fitting a frequentist version of the model, just to have a reference point: We can also zoom in on the parameters of interest to us: In this model, the linear relationship between Petal.Length and Sepal.Length is positive and significant (\\beta = 0.41, t(148) = 21.6, p < .001). This means that for each one-unit increase in Petal.Length (the predictor), you can expect Sepal.Length (the response) to increase by 0.41. This effect can be visualized by plotting the predictor values on the x axis and the response values on the y axis, using the ggplot2 package: Now let's fit a Bayesian version of the model by using the stan_glm function from the rstanarm package: You can see the sampling algorithm being run.","code":"model <- lm(Sepal.Length ~ Petal.Length, data = iris) summary(model) > > Call: > lm(formula = Sepal.Length ~ Petal.Length, data = iris) > > Residuals: > Min 1Q Median 3Q Max > -1.2468 -0.2966 -0.0152 0.2768 1.0027 > > Coefficients: > Estimate Std. Error t value Pr(>|t|) > (Intercept) 4.3066 0.0784 54.9 <2e-16 *** > Petal.Length 0.4089 0.0189 21.6 <2e-16 *** > --- > Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 > > Residual standard error: 0.41 on 148 degrees of freedom > Multiple R-squared: 0.76, Adjusted R-squared: 0.758 > F-statistic: 469 on 1 and 148 DF, p-value: <2e-16 get_parameters(model) > Parameter Estimate > 1 (Intercept) 4.31 > 2 Petal.Length 0.41 library(ggplot2) # Load the package # The ggplot function takes the data as argument, and then the variables # related to aesthetic features such as the x and y axes. ggplot(iris, aes(x = Petal.Length, y = Sepal.Length)) + geom_point() + # This adds the points geom_smooth(method = \"lm\") # This adds a regression line model <- stan_glm(Sepal.Length ~ Petal.Length, data = iris)"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"extracting-the-posterior","dir":"Articles","previous_headings":"Simple linear (regression) model","what":"Extracting the posterior","title":"1. Initiation to Bayesian models","text":"Once it is done, let us extract the parameters (i.e., coefficients) of the model. As we can see, the parameters take the form of a lengthy dataframe with two columns, corresponding to the intercept and the effect of Petal.Length. These columns contain the posterior distributions of these two parameters. In simple terms, the posterior distribution is a set of different plausible values for each parameter. Contrast this with the result we saw from the frequentist linear regression model using lm, where the results were single values for each effect of the model, not a distribution of values. 
This is one of the most important differences between these two frameworks.","code":"posteriors <- get_parameters(model) head(posteriors) # Show the first 6 rows > (Intercept) Petal.Length > 1 4.4 0.39 > 2 4.4 0.40 > 3 4.3 0.41 > 4 4.3 0.40 > 5 4.3 0.40 > 6 4.3 0.41"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"about-posterior-draws","dir":"Articles","previous_headings":"Simple linear (regression) model > Extracting the posterior","what":"About posterior draws","title":"1. Initiation to Bayesian models","text":"Let's look at the length of the posteriors. Why is the size 4000, and not more or less? First of all, these observations (the rows) are usually referred to as posterior draws. The underlying idea is that the Bayesian sampling algorithm (e.g., Monte Carlo Markov Chains - MCMC) will draw from the hidden true posterior distribution. Thus, it is through these posterior draws that we can estimate the underlying true posterior distribution. Therefore, the more draws you have, the better your estimation of the posterior distribution. However, more draws also means longer computation time. If we look at the documentation (?sampling) for rstanarm's \"sampling\" algorithm used by default in the model above, we can see several parameters that influence the number of posterior draws. By default, there are 4 chains (you can see them as distinct sampling runs), which each create 2000 iter (draws). However, only half of these iterations are kept, as the other half is used for warm-up (the convergence of the algorithm). Thus, the total number of posterior draws equals 4 chains * (2000 iterations - 1000 warm-up) = 4000. We can change that, for instance: In this case, as expected, we have 2 chains * (1000 iterations - 250 warm-up) = 1500 posterior draws. But let's keep our first model with the default setup (as it has more draws).","code":"nrow(posteriors) # Size (number of rows) > [1] 4000 model <- stan_glm(Sepal.Length ~ Petal.Length, data = iris, chains = 2, iter = 1000, warmup = 250) nrow(get_parameters(model)) # Size (number of rows) [1] 1500"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"visualizing-the-posterior-distribution","dir":"Articles","previous_headings":"Simple linear (regression) model > Extracting the posterior","what":"Visualizing the posterior distribution","title":"1. Initiation to Bayesian models","text":"Now that we've understood where these values come from, let's look at them. We will start by visualizing the posterior distribution of our parameter of interest, the effect of Petal.Length. This distribution represents the probability (the y axis) of different effects (the x axis). The central values are more probable than the extreme values. As you can see, this distribution ranges from about 0.35 to 0.50, with the bulk of it being around 0.41. Congrats! You've just described your first posterior distribution. And this is the heart of Bayesian analysis. We don't need p-values, t-values, or degrees of freedom. Everything we need is contained within this posterior distribution. Our description is consistent with the values obtained from the frequentist regression (which resulted in a \\beta of 0.41). This is reassuring! Indeed, in most cases, a Bayesian analysis does not drastically differ from the frequentist results or their interpretation. Rather, it makes the results more interpretable and intuitive, and easier to understand and describe. We can now go ahead and precisely characterize this posterior distribution.","code":"ggplot(posteriors, aes(x = Petal.Length)) + geom_density(fill = \"orange\")"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"describing-the-posterior","dir":"Articles","previous_headings":"Simple linear (regression) model","what":"Describing the Posterior","title":"1. Initiation to Bayesian models","text":"Unfortunately, it is often not practical to report whole posterior distributions as graphs. We need to find a concise way to summarize them. We recommend describing the posterior distribution with 3 elements: A point-estimate, which is a one-value summary (similar to the beta in frequentist regressions). A credible interval representing the associated uncertainty. 
Some indices of significance, giving information about the relative importance of this effect.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"point-estimate","dir":"Articles","previous_headings":"Simple linear (regression) model > Describing the Posterior","what":"Point-estimate","title":"1. Initiation to Bayesian models","text":"But which single value can best represent my posterior distribution? Centrality indices, such as the mean, the median, or the mode, are usually used as point-estimates. But what's the difference between them? Let's answer this by first inspecting the mean: This is close to the frequentist \\beta. But, as we know, the mean is quite sensitive to outliers or extreme values. Maybe the median could be more robust? Well, this is very close to the mean (and identical when rounding the values). Maybe we could take the mode, that is, the peak of the posterior distribution? In the Bayesian framework, this value is called the Maximum A Posteriori (MAP). Let's see: They are all very close! Let's visualize these values on the posterior distribution: Well, all these values give very similar results. Thus, we will choose the median, as this value has a direct meaning from a probabilistic perspective: there is a 50% chance the true effect is higher and a 50% chance the effect is lower (as it divides the distribution in two equal parts).","code":"mean(posteriors$Petal.Length) > [1] 0.41 median(posteriors$Petal.Length) > [1] 0.41 map_estimate(posteriors$Petal.Length) > MAP Estimate > > Parameter | MAP_Estimate > ------------------------ > x | 0.41 ggplot(posteriors, aes(x = Petal.Length)) + geom_density(fill = \"orange\") + # The mean in blue geom_vline(xintercept = mean(posteriors$Petal.Length), color = \"blue\", linewidth = 1) + # The median in red geom_vline(xintercept = median(posteriors$Petal.Length), color = \"red\", linewidth = 1) + # The MAP in purple geom_vline(xintercept = as.numeric(map_estimate(posteriors$Petal.Length)), color = \"purple\", linewidth = 1)"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"uncertainty","dir":"Articles","previous_headings":"Simple linear (regression) model > Describing the Posterior","what":"Uncertainty","title":"1. Initiation to Bayesian models","text":"Now that we have a point-estimate, we have to describe the uncertainty. We could compute the range: But does it make sense to include all these extreme values? Probably not. Thus, we will compute a credible interval. Long story short, it's kind of similar to a frequentist confidence interval, but easier to interpret, easier to compute — and it makes more sense. We will compute this credible interval based on the Highest Density Interval (HDI). It will give us the range containing the 89% most probable effect values. Note that we will use 89% CIs instead of 95% CIs (as in the frequentist framework), as the 89% level gives more stable results (Kruschke, 2014) and reminds us about the arbitrariness of such conventions (McElreath, 2018). Nice, so we can conclude that the effect has an 89% chance of falling within the [0.38, 0.44] range. We have just computed the two most important pieces of information for describing our effects.","code":"range(posteriors$Petal.Length) > [1] 0.33 0.48 hdi(posteriors$Petal.Length, ci = 0.89) > 89% HDI: [0.38, 0.44]"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"effect-significance","dir":"Articles","previous_headings":"Simple linear (regression) model > Describing the Posterior","what":"Effect significance","title":"1. Initiation to Bayesian models","text":"However, in many scientific fields it is not sufficient to simply describe the effects. Scientists also want to know if an effect has significance in practical or statistical terms, or in other words, whether the effect is important. For instance, is the effect different from 0? So how do we assess the significance of an effect? Well, in this particular case, it is very eloquent: all possible effect values (i.e., the whole posterior distribution) are positive and over 0.35, which is already substantial evidence that the effect is not zero. But still, we want some objective decision criterion, to say if yes or no the effect is 'significant'. 
One approach, similar to the frequentist framework, would be to see if the Credible Interval contains 0. If it does not, that would mean that our effect is 'significant'. But this index is not very fine-grained, is it? Can we do better? Yes!","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"a-linear-model-with-a-categorical-predictor","dir":"Articles","previous_headings":"","what":"A linear model with a categorical predictor","title":"1. Initiation to Bayesian models","text":"Imagine for a moment you are interested in how the weight of chickens varies depending on two different feed types. For this example, we will start by selecting from the chickwts dataset (available in base R) two feed types of interest for us (we do have peculiar interests): meat meals and sunflowers.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"data-preparation-and-model-fitting","dir":"Articles","previous_headings":"A linear model with a categorical predictor","what":"Data preparation and model fitting","title":"1. Initiation to Bayesian models","text":"Let's run another Bayesian regression to predict the weight from these two types of feed.","code":"library(datawizard) # We keep only rows for which feed is meatmeal or sunflower data <- data_filter(chickwts, feed %in% c(\"meatmeal\", \"sunflower\")) model <- stan_glm(weight ~ feed, data = data)"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"posterior-description","dir":"Articles","previous_headings":"A linear model with a categorical predictor","what":"Posterior description","title":"1. Initiation to Bayesian models","text":"This represents the posterior distribution of the difference between meatmeal and sunflowers. It seems that the difference is positive (since the values are concentrated on the right side of 0). Eating sunflowers makes you more fat (at least, if you're a chicken). But, by how much? Let us compute the median and the CI: It makes you fat by around 51 grams (the median). However, the uncertainty is quite high: there is an 89% chance that the difference between the two feed types is between 14 and 91. Is this effect different from 0?","code":"posteriors <- get_parameters(model) ggplot(posteriors, aes(x = feedsunflower)) + geom_density(fill = \"red\") median(posteriors$feedsunflower) > [1] 52 hdi(posteriors$feedsunflower) > 95% HDI: [2.76, 101.93]"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"rope-percentage","dir":"Articles","previous_headings":"A linear model with a categorical predictor","what":"ROPE Percentage","title":"1. Initiation to Bayesian models","text":"Testing whether this distribution is different from 0 doesn't make sense, as 0 is a single value (and the probability that any distribution is different from a single value is infinite). However, one way to assess significance could be to define an area around 0, which we will consider as practically equivalent to zero (i.e., the absence of, or a negligible, effect). This is called the Region of Practical Equivalence (ROPE), and it is one way of testing the significance of parameters. How can we define this region? Driing driiiing – The easystats team speaking. How can we help? – I am Prof. Sanders. An expert in chicks… I mean chickens. Just calling to let you know that based on my expert knowledge, an effect between -20 and 20 is negligible. Bye. Well, that's convenient. Now we know that we can define the ROPE as the [-20, 20] range. All effects within this range are considered as null (negligible). We can now compute the proportion of the 89% most probable values (the 89% CI) which are not null, i.e., which are outside this range. 5% of the 89% CI can be considered as null. Is that a lot? Based on our guidelines, yes, it is too much. Based on this particular definition of ROPE, we conclude that this effect is not significant (the probability of it being negligible is too high). That said, to be honest, we have some doubts about this Prof. Sanders. We don't really trust his definition of ROPE. Is there a more objective way of defining it? Prof. Sanders giving default values to define the Region of Practical Equivalence (ROPE). Yes! One practice is, for instance, to use a tenth (1/10 = 0.1) of the standard deviation (SD) of the response variable, which can be considered as a \"negligible\" effect size (Cohen, 1988). 
Let's redefine our ROPE as the region within the [-6.2, 6.2] range. Note that this can be directly obtained with the rope_range function :) Let's recompute the percentage in ROPE: With this reasonable definition of ROPE, we observe that the 89% of the posterior distribution of the effect does not overlap with the ROPE. Thus, we can conclude that the effect is significant (in the sense of being important enough to be noted).","code":"rope(posteriors$feedsunflower, range = c(-20, 20), ci = 0.89) > # Proportion of samples inside the ROPE [-20.00, 20.00]: > > inside ROPE > ----------- > 4.95 % rope_value <- 0.1 * sd(data$weight) rope_range <- c(-rope_value, rope_value) rope_range > [1] -6.2 6.2 rope_value <- rope_range(model) rope_value > [1] -6.2 6.2 rope(posteriors$feedsunflower, range = rope_range, ci = 0.89) > # Proportion of samples inside the ROPE [-6.17, 6.17]: > > inside ROPE > ----------- > 0.00 %"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"probability-of-direction-pd","dir":"Articles","previous_headings":"A linear model with a categorical predictor","what":"Probability of Direction (pd)","title":"1. Initiation to Bayesian models","text":"Maybe we are not interested in whether the effect is non-negligible. Maybe we just want to know if this effect is positive or negative. In this case, we can simply compute the proportion of the posterior that is positive, no matter the \"size\" of the effect. We can conclude that the effect is positive with a probability of 98%. We call this index the Probability of Direction (pd). It can, in fact, be computed more easily with the following: Interestingly, it so happens that this index is usually highly correlated with the frequentist p-value. We could almost roughly infer the corresponding p-value with a simple transformation: If we ran our model in the frequentist framework, we should approximately observe an effect with a p-value of 0.04. Is that true?","code":"# select only positive values n_positive <- nrow(data_filter(posteriors, feedsunflower > 0)) n_positive / nrow(posteriors) * 100 > [1] 98 p_direction(posteriors$feedsunflower) > Probability of Direction > > Parameter | pd > ------------------ > Posterior | 98.09% pd <- 97.82 onesided_p <- 1 - pd / 100 twosided_p <- onesided_p * 2 twosided_p > [1] 0.044"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"comparison-to-frequentist","dir":"Articles","previous_headings":"A linear model with a categorical predictor > Probability of Direction (pd)","what":"Comparison to frequentist","title":"1. Initiation to Bayesian models","text":"The frequentist model tells us that the difference is positive and significant (\\beta = 52, p = 0.04). Although we arrived at a similar conclusion, the Bayesian framework allowed us to develop a more profound and intuitive understanding of our effect, and of the uncertainty of its estimation.","code":"summary(lm(weight ~ feed, data = data)) > > Call: > lm(formula = weight ~ feed, data = data) > > Residuals: > Min 1Q Median 3Q Max > -123.91 -25.91 -6.92 32.09 103.09 > > Coefficients: > Estimate Std. Error t value Pr(>|t|) > (Intercept) 276.9 17.2 16.10 2.7e-13 *** > feedsunflower 52.0 23.8 2.18 0.04 * > --- > Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 > > Residual standard error: 57 on 21 degrees of freedom > Multiple R-squared: 0.185, Adjusted R-squared: 0.146 > F-statistic: 4.77 on 1 and 21 DF, p-value: 0.0405"},{"path":"https://easystats.github.io/bayestestR/articles/example1.html","id":"all-with-one-function","dir":"Articles","previous_headings":"","what":"All with one function","title":"1. Initiation to Bayesian models","text":"And yet, you may agree, it is a bit tedious to extract and compute all these indices. What if I told you that we can do all of this, and more, with only one function? Behold, describe_posterior! This function computes all of the above-mentioned indices, and can be run directly on the model: Tada! There we are! The median, the CI, the pd and the ROPE percentage! Understanding and describing posterior distributions is just one aspect of Bayesian modelling. Are you ready for more?! 
Click here to see the next example.","code":"describe_posterior(model, test = c(\"p_direction\", \"rope\", \"bayesfactor\")) > Summary of Posterior Distribution > > Parameter | Median | 95% CI | pd | ROPE | % in ROPE > ------------------------------------------------------------------------------ > (Intercept) | 277.13 | [240.57, 312.75] | 100% | [-6.17, 6.17] | 0% > feedsunflower | 51.69 | [ 2.81, 102.04] | 98.09% | [-6.17, 6.17] | 1.01% > > Parameter | BF | Rhat | ESS > ------------------------------------------- > (Intercept) | 1.77e+13 | 1.000 | 32904.00 > feedsunflower | 0.770 | 1.000 | 32751.00"},{"path":[]},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"frequentist-version","dir":"Articles","previous_headings":"Correlations","what":"Frequentist version","title":"2. Confirmation of Bayesian skills","text":"Here, let us begin with a frequentist correlation between two continuous variables, the width and the length of the sepals of some flowers. The data is available in R as the iris dataset (the same that was used in the previous tutorial). We will compute a Pearson's correlation test, store the results in an object called result, and display it: As you can see in the output, the test actually compared two hypotheses: - the null hypothesis (h0; no correlation), - the alternative hypothesis (h1; a non-null correlation). Based on the p-value, the null hypothesis cannot be rejected: the correlation between the two variables is negative but non-significant (r = -.12, p > .05).","code":"result <- cor.test(iris$Sepal.Width, iris$Sepal.Length) result > > Pearson's product-moment correlation > > data: iris$Sepal.Width and iris$Sepal.Length > t = -1, df = 148, p-value = 0.2 > alternative hypothesis: true correlation is not equal to 0 > 95 percent confidence interval: > -0.273 0.044 > sample estimates: > cor > -0.12"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"bayesian-correlation","dir":"Articles","previous_headings":"Correlations","what":"Bayesian correlation","title":"2. Confirmation of Bayesian skills","text":"To compute a Bayesian correlation test, we will need the BayesFactor package (you can install it by running install.packages(\"BayesFactor\")). We can then load this package, compute the correlation using the correlationBF() function, and store the result. Now, let us run our describe_posterior() function on that: We see again many things here, but the important index for now is the median of the posterior distribution, -.11. This is (again) quite close to the frequentist correlation. We could, as we did previously, describe the credible interval, the pd or the ROPE percentage, but we will focus here on another index provided by the Bayesian framework, the Bayes Factor (BF).","code":"library(BayesFactor) result <- correlationBF(iris$Sepal.Width, iris$Sepal.Length) describe_posterior(result) > Summary of Posterior Distribution > > Parameter | Median | 95% CI | pd | ROPE | % in ROPE | BF | Prior > ----------------------------------------------------------------------------------------------- > rho | -0.11 | [-0.27, 0.04] | 92.25% | [-0.05, 0.05] | 20.42% | 0.509 | Beta (3 +- 3)"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"bayes-factor-bf","dir":"Articles","previous_headings":"Correlations","what":"Bayes Factor (BF)","title":"2. Confirmation of Bayesian skills","text":"We said previously that a correlation test actually compares two hypotheses, a null one (the absence of an effect) with an alternative one (the presence of an effect). The Bayes factor (BF) allows the same comparison and determines under which of these two models the observed data are more probable: a model with the effect of interest, or a null model without the effect of interest. So, in the context of our correlation example, the null hypothesis would be no correlation between the two variables (h0: \\rho = 0; where \\rho stands for the Bayesian correlation coefficient), while the alternative hypothesis would be that the correlation is different from 0 - positive or negative (h1: \\rho \\neq 0). 
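A small aside of our own (not part of the article): since Bayes factors are ratios, flipping the direction of the evidence is plain arithmetic, here using the value from the output shown below:
# Hedged sketch: reversing BF10 into BF01 (evidence for the null).
bf_10 <- 0.509 # taken from the bayesfactor_models() output below
bf_01 <- 1 / bf_10 # ~2: the data are about twice as probable under the null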
We can use bayesfactor_models() to specifically compute the Bayes factor comparing those models: We got a BF of 0.51. What does that mean? Bayes factors are continuous measures of relative evidence, with a Bayes factor greater than 1 giving evidence in favour of one of the models (often referred to as the numerator), and a Bayes factor smaller than 1 giving evidence in favour of the other model (the denominator). Yes, you heard that right, evidence in favour of the null! That's one of the reasons why the Bayesian framework is sometimes considered superior to the frequentist framework. Remember from your stats lessons that the p-value can only be used to reject h0, but not to accept it. With the Bayes factor, we can measure evidence against - and in favour of - the null. In other words, in the frequentist framework, if the p-value is not significant, we can only conclude that evidence for the effect is absent, not that there is evidence for the absence of the effect. In the Bayesian framework, we can do the latter. This is important, since sometimes our hypotheses are about the absence of an effect. BFs representing evidence for the alternative against the null can be reversed using BF_{01}=1/BF_{10} (the 01 and 10 correspond to h0 against h1 and h1 against h0, respectively) to provide evidence of the null against the alternative. This improves human readability in cases where the BF of the alternative against the null is smaller than 1 (i.e., in support of the null). In our case, BF = 1/0.51 = 2, which indicates that the data are 2 times more probable under the null compared to the alternative hypothesis, which, though favouring the null, is considered only anecdotal evidence for the null. We can thus conclude that there is anecdotal evidence in favour of an absence of correlation between the two variables (r_{median} = -0.11, BF = 0.51), which is a much more informative statement than what we can do with frequentist statistics. And that's not all!","code":"bayesfactor_models(result) > Bayes Factors for Model Comparison > > Model BF > [2] (rho != 0) 0.509 > > * Against Denominator: [1] (rho = 0) > * Bayes Factor Type: JZS (BayesFactor)"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"visualise-the-bayes-factor","dir":"Articles","previous_headings":"Correlations","what":"Visualise the Bayes factor","title":"2. Confirmation of Bayesian skills","text":"In general, pie charts are an absolute no-go in data visualisation, as our brain's perceptive system heavily distorts information presented in such a way. Nevertheless, there is one exception: pizza charts. It is an intuitive way of interpreting the strength of evidence provided by BFs as an amount of surprise. Wagenmakers' pizza poking analogy, from the great blog. Such \"pizza plots\" can be directly created through the see visualisation companion package for easystats (you can install it by running install.packages(\"see\")): So, seeing this pizza, how much would you be surprised by the outcome of a blinded poke?","code":"library(see) plot(bayesfactor_models(result)) + scale_fill_pizza()"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"t-tests","dir":"Articles","previous_headings":"","what":"t-tests","title":"2. Confirmation of Bayesian skills","text":"\"I know that I know nothing, and especially not whether versicolor and virginica differ in terms of their Sepal.Width\" - Socrates. Time to finally answer this crucial question!","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"versicolor-vs--virginica","dir":"Articles","previous_headings":"t-tests","what":"Versicolor vs. virginica","title":"2. Confirmation of Bayesian skills","text":"Bayesian t-tests can be performed in a very similar way to correlations. Here, we are particularly interested in two levels of the Species factor, versicolor and virginica. We will start by filtering out from iris the non-relevant observations corresponding to the setosa species, and we will then visualise the observations and the distribution of the Sepal.Width variable. It seems (visually) that virginica flowers have, on average, a slightly higher width of sepals. 
Let's assess this difference statistically by using the ttestBF() function of the BayesFactor package.","code":"library(datawizard) library(ggplot2) # Select only two relevant species data <- droplevels(data_filter(iris, Species != \"setosa\")) # Visualise distributions and observations ggplot(data, aes(x = Species, y = Sepal.Width, fill = Species)) + geom_violindot(fill_dots = \"black\", size_dots = 1) + scale_fill_material() + theme_modern()"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"compute-the-bayesian-t-test","dir":"Articles","previous_headings":"t-tests","what":"Compute the Bayesian t-test","title":"2. Confirmation of Bayesian skills","text":"From these indices, we can say that the difference of Sepal.Width between virginica and versicolor has a probability of 100% of being negative [from the pd and the sign of the median] (median = -0.19, 89% CI [-0.29, -0.092]). The data provides strong evidence against the null hypothesis (BF = 18). Keep that in mind as we will see another way of investigating this question.","code":"result <- BayesFactor::ttestBF(formula = Sepal.Width ~ Species, data = data) describe_posterior(result) > Summary of Posterior Distribution > > Parameter | Median | 95% CI | pd | ROPE | % in ROPE > ------------------------------------------------------------------------- > Difference | -0.19 | [-0.32, -0.06] | 99.75% | [-0.03, 0.03] | 0% > > Parameter | BF | Prior > --------------------------------------- > Difference | 17.72 | Cauchy (0 +- 0.71)"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"logistic-model","dir":"Articles","previous_headings":"","what":"Logistic Model","title":"2. Confirmation of Bayesian skills","text":"A hypothesis for which one uses a t-test can also be tested using a binomial model (e.g., a logistic model). Indeed, it is possible to reformulate the hypothesis "there is an important difference in this variable between the two groups" as the hypothesis "this variable is able to discriminate between (classify) the two groups". However, these models are much more powerful than a t-test. In the case of the difference of Sepal.Width between virginica and versicolor, the question becomes, how well can we classify the two species using only Sepal.Width.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"fit-the-model","dir":"Articles","previous_headings":"Logistic Model","what":"Fit the model","title":"2. Confirmation of Bayesian skills","text":"","code":"library(rstanarm) model <- stan_glm(Species ~ Sepal.Width, data = data, family = \"binomial\", chains = 10, iter = 5000, warmup = 1000, refresh = 0 )"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"visualise-the-model","dir":"Articles","previous_headings":"Logistic Model","what":"Visualise the model","title":"2. Confirmation of Bayesian skills","text":"Using the modelbased package.","code":"library(modelbased) vizdata <- estimate_relation(model) ggplot(vizdata, aes(x = Sepal.Width, y = Predicted)) + geom_ribbon(aes(ymin = CI_low, ymax = CI_high), alpha = 0.5) + geom_line() + ylab(\"Probability of being virginica\") + theme_modern()"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"performance-and-parameters","dir":"Articles","previous_headings":"Logistic Model","what":"Performance and Parameters","title":"2. 
Confirmation of Bayesian skills","text":"Once again, we can extract all indices of interest for the posterior distribution using our old pal describe_posterior().","code":"describe_posterior(model, test = c(\"pd\", \"ROPE\", \"BF\")) > Summary of Posterior Distribution > > Parameter | Median | 95% CI | pd | ROPE | % in ROPE > --------------------------------------------------------------------------- > (Intercept) | -6.12 | [-10.45, -2.25] | 99.92% | [-0.18, 0.18] | 0% > Sepal.Width | 2.13 | [ 0.79, 3.63] | 99.94% | [-0.18, 0.18] | 0% > > Parameter | BF | Rhat | ESS > -------------------------------------- > (Intercept) | 13.38 | 1.000 | 26540.00 > Sepal.Width | 11.97 | 1.000 | 26693.00 library(performance) model_performance(model) > # Indices of model performance > > ELPD | ELPD_SE | LOOIC | LOOIC_SE | WAIC | R2 | RMSE | Sigma > ------------------------------------------------------------------------ > -66.284 | 3.052 | 132.568 | 6.104 | 132.562 | 0.099 | 0.477 | 1.000 > > ELPD | Log_loss | Score_log | Score_spherical > ------------------------------------------------ > -66.284 | 0.643 | -35.436 | 0.014"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"visualise-the-indices","dir":"Articles","previous_headings":"Logistic Model","what":"Visualise the indices","title":"2. Confirmation of Bayesian skills","text":".","code":"library(see) plot(rope(result))"},{"path":"https://easystats.github.io/bayestestR/articles/example2.html","id":"diagnostic-indices","dir":"Articles","previous_headings":"Logistic Model","what":"Diagnostic Indices","title":"2. Confirmation of Bayesian skills","text":"About the diagnostic indices such as Rhat and ESS.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example3.html","id":"mixed-models","dir":"Articles","previous_headings":"","what":"Mixed Models","title":"3. Become a Bayesian master","text":"TO BE CONTINUED.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example3.html","id":"priors","dir":"Articles","previous_headings":"Mixed Models","what":"Priors","title":"3. Become a Bayesian master","text":"TO BE CONTINUED.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/example3.html","id":"whats-next","dir":"Articles","previous_headings":"","what":"What's next?","title":"3. Become a Bayesian master","text":"The journey to become a true Bayesian master is not over yet. This is merely the beginning. It is now time to leave the bayestestR universe and apply the Bayesian framework in a variety of other statistical contexts: Marginal means Contrast analysis Testing Contrasts from Bayesian Models with 'emmeans' and 'bayestestR'","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"how-to-describe-and-report-the-parameters-of-a-model","dir":"Articles","previous_headings":"Reporting Guidelines","what":"How to describe and report the parameters of a model","title":"Reporting Guidelines","text":"A Bayesian analysis returns a posterior distribution for each parameter (or effect). To minimally describe these distributions, we recommend reporting a point-estimate of centrality as well as information characterizing the estimation uncertainty (the dispersion). Additionally, one can also report indices of effect existence and/or significance. 
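As a minimal sketch of such a report, assuming a fitted Bayesian model object named model, these recommendations can be combined in a single describe_posterior() call (centrality, ci, ci_method and test are existing arguments of that function):
describe_posterior(
  model,
  centrality = "median",           # point-estimate of centrality
  ci = 0.89, ci_method = "hdi",    # uncertainty as an 89% HDI
  test = c("p_direction", "rope")  # existence and significance
)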
Based on the previous comparison of point-estimates and indices of effect existence, we can draw the following recommendations.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"centrality","dir":"Articles","previous_headings":"Reporting Guidelines > How to describe and report the parameters of a model","what":"Centrality","title":"Reporting Guidelines","text":"We suggest reporting the median as an index of centrality, as it is more robust compared to the mean or the MAP estimate. However, in the case of a severely skewed posterior distribution, the MAP estimate could be a good alternative.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"uncertainty","dir":"Articles","previous_headings":"Reporting Guidelines > How to describe and report the parameters of a model","what":"Uncertainty","title":"Reporting Guidelines","text":"The 95% or 89% Credible Intervals (CI) are two reasonable ranges to characterize the uncertainty related to the estimation (see here for a discussion about the differences between these two values). We also recommend computing the CIs based on the HDI rather than on quantiles, favouring probable over central values. Note that a CI based on quantiles (the equal-tailed interval) might be more appropriate in the case of transformations (for instance when transforming log-odds to probabilities). Otherwise, intervals that originally do not cover the null might cover it after transformation (see here).","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"existence","dir":"Articles","previous_headings":"Reporting Guidelines > How to describe and report the parameters of a model","what":"Existence","title":"Reporting Guidelines","text":"Reviewer 2 (circa a long time ago in a galaxy far away). The Bayesian framework can neatly delineate and quantify different aspects of hypothesis testing, such as effect existence and significance. The most straightforward index to describe the existence of an effect is the Probability of Direction (pd), representing the certainty associated with the most probable direction (positive or negative) of the effect. This index is easy to understand, simple to interpret, straightforward to compute, robust to model characteristics, and independent from the scale of the data. Moreover, it is strongly correlated with the frequentist p-value, and can thus be used to draw parallels and give some reference to readers non-familiar with Bayesian statistics. A two-sided p-value of respectively .1, .05, .01 and .001 corresponds approximately to a pd of 95%, 97.5%, 99.5% and 99.95%. Thus, for convenience, we suggest the following reference values as interpretation helpers: pd <= 95% ~ p > .1: uncertain pd > 95% ~ p < .1: possibly existing pd > 97%: likely existing pd > 99%: probably existing pd > 99.9%: certainly existing","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"significance","dir":"Articles","previous_headings":"Reporting Guidelines > How to describe and report the parameters of a model","what":"Significance","title":"Reporting Guidelines","text":"The percentage in ROPE is an index of significance (in its primary meaning), informing us whether a parameter is related to a non-negligible change (in terms of magnitude) in the outcome. We suggest reporting the percentage of the full posterior distribution (the full ROPE) instead of a given proportion of CI in the ROPE, which appears to be more sensitive (especially to delineate highly significant effects). Rather than using it as a binary, all-or-nothing decision criterion, such as suggested by the original equivalence test, we recommend using the percentage as a continuous index of significance. 
However, based on simulation data, we suggest the following reference values as interpretation helpers: > 99% in ROPE: negligible (we can accept the null hypothesis) > 97.5% in ROPE: probably negligible <= 97.5% & >= 2.5% in ROPE: undecided significance < 2.5% in ROPE: probably significant < 1% in ROPE: significant (we can reject the null hypothesis) Note that extra caution is required as the interpretation highly depends on other parameters such as the sample size and the ROPE range (see here).","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"template-sentence","dir":"Articles","previous_headings":"Reporting Guidelines > How to describe and report the parameters of a model","what":"Template Sentence","title":"Reporting Guidelines","text":"Based on these suggestions, a template sentence for the minimal reporting of a parameter based on its posterior distribution could be: "the effect of X has a probability of pd of being negative (Median = median, 89% CI [ HDIlow , HDIhigh ] and can be considered as significant [ROPE% in ROPE])."","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"how-to-compare-different-models","dir":"Articles","previous_headings":"Reporting Guidelines","what":"How to compare different models","title":"Reporting Guidelines","text":"Although it can also be used to assess effect existence and significance, the Bayes factor (BF) is a versatile index that can be used to directly compare different models (or data generation processes). The Bayes factor is a ratio that informs us by how much more (or less) likely the observed data are under two compared models - usually a model with the effect versus a model without the effect. Depending on the specifications of the null model (whether it is a point-estimate (e.g., 0) or an interval), the Bayes factor can be used both in the context of effect existence and significance. In general, a Bayes factor greater than 1 is taken as evidence in favour of one model (the numerator), and a Bayes factor smaller than 1 is taken as evidence in favour of the other model (the denominator). Several rules of thumb exist to help with the interpretation (see here), with > 3 being one common threshold to categorize non-anecdotal evidence.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"template-sentence-1","dir":"Articles","previous_headings":"Reporting Guidelines > How to compare different models","what":"Template Sentence","title":"Reporting Guidelines","text":"When reporting Bayes factors (BF), one can use the following sentence: "There is moderate evidence in favour of an absence of effect of x (BF = BF)."","code":""},{"path":"https://easystats.github.io/bayestestR/articles/guidelines.html","id":"suggestions","dir":"Articles","previous_headings":"","what":"Suggestions","title":"Reporting Guidelines","text":"If you have any advice or opinion, we encourage you to let us know by opening a discussion thread or making a pull request.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/indicesExistenceComparison.html","id":"indices-of-effect-existence-and-significance-in-the-bayesian-framework","dir":"Articles","previous_headings":"","what":"Indices of Effect Existence and Significance in the Bayesian Framework","title":"In-Depth 2: Comparison of Indices of Effect Existence and Significance","text":"A comparison of these different Bayesian indices (pd, BFs, ROPE etc.) is accessible here. 
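For frequentist models fitted on the same data, an approximation of such a Bayes factor can be derived from BICs with bic_to_bf(); a minimal sketch (the two lm models are hypothetical):
m0 <- lm(mpg ~ 1, data = mtcars)   # null model
m1 <- lm(mpg ~ wt, data = mtcars)  # model with the effect
bic_to_bf(BIC(m1), denominator = BIC(m0))  # BF of m1 against m0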
Also, in case you don't wish to read the full article, the following table summarizes the key takeaways!","code":""},{"path":"https://easystats.github.io/bayestestR/articles/indicesExistenceComparison.html","id":"suggestions","dir":"Articles","previous_headings":"","what":"Suggestions","title":"In-Depth 2: Comparison of Indices of Effect Existence and Significance","text":"If you have any advice or opinion, we encourage you to let us know by opening a discussion thread or making a pull request.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/mediation.html","id":"mediation-analysis-in-brms-and-rstanarm","dir":"Articles","previous_headings":"","what":"Mediation Analysis in brms and rstanarm","title":"Mediation Analysis using Bayesian Regression Models","text":"mediation() is a summary function, especially for mediation analysis, i.e. for multivariate response models with causal mediation effects. In the models m2 and m3, treat is the treatment effect and job_seek is the mediator effect. For the brms model (m2), f1 describes the mediator model and f2 describes the outcome model. This is similar for the rstanarm model. mediation() returns a data frame with information on the direct effect (the median value of the posterior samples from the treatment of the outcome model), the mediator effect (the median value of the posterior samples from the mediator of the outcome model), the indirect effect (the median value of the multiplication of the posterior samples from the mediator of the outcome model and the posterior samples from the treatment of the mediation model) and the total effect (the median value of the sums of the posterior samples used for the direct and indirect effect). The proportion mediated is the indirect effect divided by the total effect. The simplest call just needs the model-object. Typically, mediation() finds the treatment and mediator variables automatically. If this does not work, use the treatment and mediator arguments to specify the related variable names. For all values, 89% credible intervals are calculated by default. Use ci to calculate a different interval.","code":"library(bayestestR) library(mediation) library(brms) library(rstanarm) # load sample data data(jobs) set.seed(123) # linear models, for mediation analysis b1 <- lm(job_seek ~ treat + econ_hard + sex + age, data = jobs) b2 <- lm(depress2 ~ treat + job_seek + econ_hard + sex + age, data = jobs) # mediation analysis, for comparison with brms m1 <- mediate(b1, b2, sims = 1000, treat = \"treat\", mediator = \"job_seek\") # Fit Bayesian mediation model in brms f1 <- bf(job_seek ~ treat + econ_hard + sex + age) f2 <- bf(depress2 ~ treat + job_seek + econ_hard + sex + age) m2 <- brm(f1 + f2 + set_rescor(FALSE), data = jobs, refresh = 0) # Fit Bayesian mediation model in rstanarm m3 <- stan_mvmer( list( job_seek ~ treat + econ_hard + sex + age + (1 | occp), depress2 ~ treat + job_seek + econ_hard + sex + age + (1 | occp) ), data = jobs, refresh = 0 ) # for brms mediation(m2) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.124, 0.046] #> Indirect Effect (ACME) | -0.015 | [-0.041, 0.008] #> Mediator Effect | -0.240 | [-0.294, -0.185] #> Total Effect | -0.055 | [-0.145, 0.034] #> #> Proportion mediated: 28.14% [-181.46%, 237.75%] # for rstanarm mediation(m3) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.129, 0.048] #> Indirect Effect (ACME) | -0.018 | [-0.042, 0.006] #> Mediator Effect | -0.241 | [-0.296, -0.187] #> Total Effect | -0.057 | [-0.151, 0.033] #> #> Proportion mediated: 30.59% [-221.09%, 
282.26%]"},{"path":"https://easystats.github.io/bayestestR/articles/mediation.html","id":"comparison-to-the-mediation-package","dir":"Articles","previous_headings":"","what":"Comparison to the mediation package","title":"Mediation Analysis using Bayesian Regression Models","text":"comparison mediation package. Note summary()-output mediation package shows indirect effect first, followed direct effect. want calculate mean instead median values posterior samples, use centrality-argument. Furthermore, print()-method, allows print digits. can see, results similar mediation package produces non-Bayesian models.","code":"summary(m1) #> #> Causal Mediation Analysis #> #> Quasi-Bayesian Confidence Intervals #> #> Estimate 95% CI Lower 95% CI Upper p-value #> ACME -0.0157 -0.0387 0.01 0.19 #> ADE -0.0438 -0.1315 0.04 0.35 #> Total Effect -0.0595 -0.1530 0.02 0.21 #> Prop. Mediated 0.2137 -2.0277 2.70 0.32 #> #> Sample Size Used: 899 #> #> #> Simulations: 1000 mediation(m2, ci = 0.95) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.124, 0.046] #> Indirect Effect (ACME) | -0.015 | [-0.041, 0.008] #> Mediator Effect | -0.240 | [-0.294, -0.185] #> Total Effect | -0.055 | [-0.145, 0.034] #> #> Proportion mediated: 28.14% [-181.46%, 237.75%] mediation(m3, ci = 0.95) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.129, 0.048] #> Indirect Effect (ACME) | -0.018 | [-0.042, 0.006] #> Mediator Effect | -0.241 | [-0.296, -0.187] #> Total Effect | -0.057 | [-0.151, 0.033] #> #> Proportion mediated: 30.59% [-221.09%, 282.26%] m <- mediation(m2, centrality = \"mean\", ci = 0.95) print(m, digits = 4) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ------------------------------------------------------ #> Direct Effect (ADE) | -0.0395 | [-0.1237, 0.0456] #> Indirect Effect (ACME) | -0.0158 | [-0.0405, 0.0083] #> Mediator Effect | -0.2401 | [-0.2944, -0.1846] #> Total Effect | -0.0553 | [-0.1454, 0.0341] #> #> Proportion mediated: 28.60% [-181.01%, 238.20%]"},{"path":"https://easystats.github.io/bayestestR/articles/mediation.html","id":"comparison-to-sem-from-the-lavaan-package","dir":"Articles","previous_headings":"","what":"Comparison to SEM from the lavaan package","title":"Mediation Analysis using Bayesian Regression Models","text":"Finally, also compare results SEM model, using lavaan. example demonstrate “translate” model different packages modeling approached. 
summary output lavaan longer, can find related numbers quite easily: direct effect treatment treat (c1), -0.040 indirect effect treatment indirect_treat, -0.016 mediator effect job_seek job_seek (b), -0.240 total effect total_treat, -0.056","code":"library(lavaan) data(jobs) set.seed(1234) model <- \" # direct effects depress2 ~ c1*treat + c2*econ_hard + c3*sex + c4*age + b*job_seek # mediation job_seek ~ a1*treat + a2*econ_hard + a3*sex + a4*age # indirect effects (a*b) indirect_treat := a1*b indirect_econ_hard := a2*b indirect_sex := a3*b indirect_age := a4*b # total effects total_treat := c1 + (a1*b) total_econ_hard := c2 + (a2*b) total_sex := c3 + (a3*b) total_age := c4 + (a4*b) \" m4 <- sem(model, data = jobs) summary(m4) #> lavaan 0.6-19 ended normally after 1 iteration #> #> Estimator ML #> Optimization method NLMINB #> Number of model parameters 11 #> #> Number of observations 899 #> #> Model Test User Model: #> #> Test statistic 0.000 #> Degrees of freedom 0 #> #> Parameter Estimates: #> #> Standard errors Standard #> Information Expected #> Information saturated (h1) model Structured #> #> Regressions: #> Estimate Std.Err z-value P(>|z|) #> depress2 ~ #> treat (c1) -0.040 0.043 -0.929 0.353 #> econ_hard (c2) 0.149 0.021 7.156 0.000 #> sex (c3) 0.107 0.041 2.604 0.009 #> age (c4) 0.001 0.002 0.332 0.740 #> job_seek (b) -0.240 0.028 -8.524 0.000 #> job_seek ~ #> treat (a1) 0.066 0.051 1.278 0.201 #> econ_hard (a2) 0.053 0.025 2.167 0.030 #> sex (a3) -0.008 0.049 -0.157 0.875 #> age (a4) 0.005 0.002 1.983 0.047 #> #> Variances: #> Estimate Std.Err z-value P(>|z|) #> .depress2 0.373 0.018 21.201 0.000 #> .job_seek 0.524 0.025 21.201 0.000 #> #> Defined Parameters: #> Estimate Std.Err z-value P(>|z|) #> indirect_treat -0.016 0.012 -1.264 0.206 #> indirct_cn_hrd -0.013 0.006 -2.100 0.036 #> indirect_sex 0.002 0.012 0.157 0.875 #> indirect_age -0.001 0.001 -1.932 0.053 #> total_treat -0.056 0.045 -1.244 0.214 #> total_econ_hrd 0.136 0.022 6.309 0.000 #> total_sex 0.109 0.043 2.548 0.011 #> total_age -0.000 0.002 -0.223 0.824 # just to have the numbers right at hand and you don't need to scroll up mediation(m2, ci = 0.95) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.124, 0.046] #> Indirect Effect (ACME) | -0.015 | [-0.041, 0.008] #> Mediator Effect | -0.240 | [-0.294, -0.185] #> Total Effect | -0.055 | [-0.145, 0.034] #> #> Proportion mediated: 28.14% [-181.46%, 237.75%]"},{"path":"https://easystats.github.io/bayestestR/articles/overview_of_vignettes.html","id":"function-overview","dir":"Articles","previous_headings":"","what":"Function Overview","title":"Overview of Vignettes","text":"Function Reference","code":""},{"path":"https://easystats.github.io/bayestestR/articles/overview_of_vignettes.html","id":"get-started","dir":"Articles","previous_headings":"","what":"Get Started","title":"Overview of Vignettes","text":"Get Started Bayesian Analysis","code":""},{"path":"https://easystats.github.io/bayestestR/articles/overview_of_vignettes.html","id":"examples","dir":"Articles","previous_headings":"","what":"Examples","title":"Overview of Vignettes","text":"Initiation Bayesian models Confirmation Bayesian skills Become Bayesian 
master","code":""},{"path":"https://easystats.github.io/bayestestR/articles/overview_of_vignettes.html","id":"articles","dir":"Articles","previous_headings":"","what":"Articles","title":"Overview of Vignettes","text":"Credible Intervals (CI)) Region Practical Equivalence (ROPE) Probability Direction (pd) Bayes Factors","code":""},{"path":"https://easystats.github.io/bayestestR/articles/overview_of_vignettes.html","id":"in-depths","dir":"Articles","previous_headings":"","what":"In-Depths","title":"Overview of Vignettes","text":"Comparison Point-Estimates Indices Effect Existence Significance Bayesian Framework Mediation Analysis using Bayesian Regression Models","code":""},{"path":"https://easystats.github.io/bayestestR/articles/overview_of_vignettes.html","id":"guidelines","dir":"Articles","previous_headings":"","what":"Guidelines","title":"Overview of Vignettes","text":"Reporting Guidelines","code":""},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"what-is-the-pd","dir":"Articles","previous_headings":"","what":"What is the pd?","title":"Probability of Direction (pd)","text":"Probability Direction (pd) index effect existence, ranging 50% 100%, representing certainty effect goes particular direction (.e., positive negative). Beyond simplicity interpretation, understanding computation, index also presents interesting properties: independent model: solely based posterior distributions require additional information data model. robust scale response variable predictors. strongly correlated frequentist p-value, can thus used draw parallels give reference readers non-familiar Bayesian statistics. However, index relevant assess magnitude importance effect (meaning “significance”), better achieved indices ROPE percentage. fact, indices significance existence totally independent. can effect pd 99.99%, whole posterior distribution concentrated within [0.0001, 0.0002] range. case, effect positive high certainty, also significant (.e., small). Indices effect existence, pd, particularly useful exploratory research clinical studies, focus make sure effect interest opposite direction (clinical studies, treatment harmful). However, effect’s direction confirmed, focus shift toward significance, including precise estimation magnitude, relevance importance.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"relationship-with-the-p-value","dir":"Articles","previous_headings":"","what":"Relationship with the p-value","title":"Probability of Direction (pd)","text":"cases, seems pd direct correspondence frequentist one-sided p-value formula: p_{one-sided} = 1-p_d Similarly, two-sided p-value (commonly reported one) equivalent formula: p_{two-sided} = 2*(1-p_d) Thus, two-sided p-value respectively .1, .05, .01 .001 correspond approximately pd 95%, 97.5%, 99.5% 99.95% . Correlation frequentist p-value probability direction (pd) ’s like p-value, must bad p-value bad [insert reference reproducibility crisis]. fact, aspect reproducibility crisis might misunderstood. Indeed, p-value intrinsically bad wrong. Instead, misuse, misunderstanding misinterpretation fuels decay situation. instance, fact pd highly correlated p-value suggests latter index effect existence significance (.e., “worth interest”). Bayesian version, pd, intuitive meaning makes obvious fact thresholds arbitrary. 
Additionally, the mathematical and interpretative transparency of the pd, and its reconceptualisation as an index of effect existence, offers a valuable insight into the characterization of Bayesian results. Moreover, its concomitant proximity with the frequentist p-value makes it a perfect metric to ease the transition of psychological research into the adoption of the Bayesian framework.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"methods-of-computation","dir":"Articles","previous_headings":"","what":"Methods of computation","title":"Probability of Direction (pd)","text":"The most simple and direct way to compute the pd is to 1) look at the median's sign, 2) select the portion of the posterior of the same sign and 3) compute the percentage that this portion represents. This "simple" method is the most straightforward, but its precision is directly tied to the number of posterior draws. The second approach relies on density estimation. It starts by estimating the density function (for which many methods are available), and then computing the area under the curve (AUC) of the density curve on the other side of 0. The density-based method could hypothetically be considered as more precise, but it strongly depends on the method used to estimate the density function.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"methods-comparison","dir":"Articles","previous_headings":"","what":"Methods comparison","title":"Probability of Direction (pd)","text":"Let's compare the 4 available methods, the direct method and 3 density-based methods differing by their density estimation algorithm (see estimate_density).","code":""},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"correlation","dir":"Articles","previous_headings":"Methods comparison","what":"Correlation","title":"Probability of Direction (pd)","text":"Let's start by testing the proximity and similarity of the results obtained by the different methods. All methods give highly correlated and very similar results. That means that the method choice is not a drastic game changer and cannot be used to tweak the results too much.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"accuracy","dir":"Articles","previous_headings":"Methods comparison","what":"Accuracy","title":"Probability of Direction (pd)","text":"To test the accuracy of each method, we will start by computing the direct pd from a very dense distribution (with a large amount of observations). This will be our baseline, or "true" pd. Then, we will iteratively draw smaller samples from this parent distribution, and compute the pd with the different methods. The closer the estimate is to the reference one, the better. The "Kernel" based density methods seem to consistently underestimate the pd. 
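For reference, the "direct" method amounts to a one-liner on the posterior draws; a minimal sketch, using simulated draws standing in for a posterior:
draws <- rnorm(4000, mean = 0.4, sd = 0.2)   # stand-in posterior draws
pd <- max(mean(draws > 0), mean(draws < 0))  # share of draws with the dominant sign
pd                                           # should be close to p_direction(draws)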
Interestingly, “direct” method appears reliable, even case small number posterior draws.","code":"data <- data.frame() for (i in 1:25) { the_mean <- runif(1, 0, 4) the_sd <- abs(runif(1, 0.5, 4)) parent_distribution <- rnorm(100000, the_mean, the_sd) true_pd <- as.numeric(pd(parent_distribution)) for (j in 1:25) { sample_size <- round(runif(1, 25, 5000)) subsample <- sample(parent_distribution, sample_size) data <- rbind( data, data.frame( sample_size = sample_size, true = true_pd, direct = as.numeric(pd(subsample)) - true_pd, kernel = as.numeric(pd(subsample, method = \"kernel\")) - true_pd, logspline = as.numeric(pd(subsample, method = \"logspline\")) - true_pd, KernSmooth = as.numeric(pd(subsample, method = \"KernSmooth\")) - true_pd ) ) } } data <- as.data.frame(sapply(data, as.numeric)) library(datawizard) # for reshape_longer data <- reshape_longer(data, select = 3:6, names_to = \"Method\", values_to = \"Distance\") ggplot(data, aes(x = sample_size, y = Distance, color = Method, fill = Method)) + geom_point(alpha = 0.3, stroke = 0, shape = 16) + geom_smooth(alpha = 0.2) + geom_hline(yintercept = 0) + theme_classic() + xlab(\"\\nDistribution Size\")"},{"path":"https://easystats.github.io/bayestestR/articles/probability_of_direction.html","id":"can-the-pd-be-100","dir":"Articles","previous_headings":"Methods comparison","what":"Can the pd be 100%?","title":"Probability of Direction (pd)","text":"p = 0.000 coined one term avoid reporting results (Lilienfeld et al., 2015), even often displayed statistical software. rationale every probability distribution, value probability exactly 0. always infinitesimal probability associated data point, p = 0.000 returned software due approximations related, among , finite memory hardware. One apply rationale pd: since data points non-null probability density, pd (particular portion probability density) can never 100%. entirely valid point, people using direct method might argue pd based posterior draws, rather theoretical, hidden, true posterior distribution (approximated posterior draws). posterior draws represent finite sample pd = 100% valid statement.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"what-is-the-rope","dir":"Articles","previous_headings":"","what":"What is the ROPE?","title":"Region of Practical Equivalence (ROPE)","text":"Unlike frequentist approach, Bayesian inference based statistical significance, effects tested “zero”. Indeed, Bayesian framework offers probabilistic view parameters, allowing assessment uncertainty related . Thus, rather concluding effect present simply differs zero, conclude probability outside specific range can considered “practically effect” (.e., negligible magnitude) sufficient. range called region practical equivalence (ROPE). Indeed, statistically, probability posterior distribution different 0 make much sense (probability different single point infinite). Therefore, idea underlining ROPE let user define area around null value enclosing values equivalent null value practical purposes (J. Kruschke, 2014; J. K. Kruschke, 2010; J. K. 
Kruschke, Aguinis, & Joo, 2012).","code":""},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"equivalence-test","dir":"Articles","previous_headings":"","what":"Equivalence Test","title":"Region of Practical Equivalence (ROPE)","text":"The ROPE, being a region corresponding to the "null" hypothesis, is used for the equivalence test, to test whether a parameter is significant (in the sense of important enough to be cared about). This test is usually based on the "HDI+ROPE decision rule" (J. Kruschke, 2014; J. K. Kruschke & Liddell, 2018) to check whether parameter values should be accepted or rejected against an explicitly formulated "null hypothesis" (i.e., a ROPE). In other words, it checks the percentage of the Credible Interval (CI) that is in the null region (the ROPE). If this percentage is sufficiently low, the null hypothesis is rejected. If this percentage is sufficiently high, the null hypothesis is accepted.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"credible-interval-in-rope-vs-full-posterior-in-rope","dir":"Articles","previous_headings":"","what":"Credible interval in ROPE vs full posterior in ROPE","title":"Region of Practical Equivalence (ROPE)","text":"Using the ROPE and the HDI as the Credible Interval, Kruschke (2018) suggests using the percentage of the 95% HDI that falls within the ROPE as a decision rule. However, as the 89% HDI is considered a better choice (J. Kruschke, 2014; R. McElreath, 2014; Richard McElreath, 2018), bayestestR provides by default the percentage of the 89% HDI that falls within the ROPE. However, simulation studies data suggest that using the percentage of the full posterior distribution, instead of a CI, might be more sensitive (especially to delineate highly significant effects). Thus, we recommend that the user considers using the full ROPE percentage (by setting ci = 1), which will return the portion of the entire posterior distribution in the ROPE.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"what-percentage-in-rope-to-accept-or-to-reject","dir":"Articles","previous_headings":"","what":"What percentage in ROPE to accept or to reject?","title":"Region of Practical Equivalence (ROPE)","text":"If the HDI is completely outside the ROPE, the "null hypothesis" for this parameter is "rejected". If the ROPE completely covers the HDI, i.e., all most credible values of a parameter are inside the region of practical equivalence, the null hypothesis is accepted. Else, it's unclear whether the null hypothesis should be accepted or rejected. If the full ROPE is used (i.e., 100% of the HDI), then the null hypothesis is rejected or accepted if the percentage of the posterior within the ROPE is smaller than 2.5% or greater than 97.5%. Desirable results are low proportions inside the ROPE (the closer to zero the better).","code":""},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"how-to-define-the-rope-range","dir":"Articles","previous_headings":"","what":"How to define the ROPE range?","title":"Region of Practical Equivalence (ROPE)","text":"Kruschke (2018) suggests that the ROPE could be set, by default, to a range from -0.1 to 0.1 of a standardized parameter (a negligible effect size according to Cohen, 1988). For linear models (lm), this can be generalised to: [-0.1*SD_{y}, 0.1*SD_{y}]. For logistic models, the parameters expressed in log odds ratio can be converted to standardized difference through the formula: \pi/\sqrt{3} (see the effectsize package), resulting in a range of -0.18 to 0.18. For all models with a binary outcome, it is strongly recommended to manually specify the rope argument. Currently, the same default as for logistic models is applied. For t-tests, the standard deviation of the response is used, similarly to linear models (see above). For correlations, -0.05, 0.05 is used, i.e., half the value of a negligible correlation as suggested by Cohen's (1988) rules of thumb. 
For all other models, -0.1, 0.1 is used to determine the ROPE limits, but it is strongly advised to specify them manually.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"sensitivity-to-parameters-scale","dir":"Articles","previous_headings":"","what":"Sensitivity to parameter’s scale","title":"Region of Practical Equivalence (ROPE)","text":"It is important to consider the unit (i.e., the scale) of the predictors when using an index based on the ROPE, as the correct interpretation of the ROPE as representing a region of practical equivalence to zero is dependent on the scale of the predictors. Indeed, unlike other indices (such as the pd), the percentage in ROPE depends on the unit of its parameter. In other words, as the ROPE represents a fixed portion of the response's scale, its proximity with a coefficient depends on the scale of the coefficient itself. For instance, if we consider a simple regression growth ~ time, modelling the development of Wookies babies, a negligible change (the ROPE) is less than 54 cm. If the time variable is expressed in days, we might find a coefficient (representing the growth by day) of 10 cm (the median of the posterior of the coefficient is 10). We would consider this negligible. However, if we decide to express the time variable in years, the coefficient will be scaled by this transformation (as it will now represent the growth by year). The coefficient will now be around 3550 cm (10 * 355), which we would now consider as significant. We can see that the pd and the percentage in ROPE of the linear relationship between Sepal.Length and Sepal.Width are respectively of about 92.95% and 15.95%, corresponding to an uncertain and not significant effect. What happens if we scale our predictor? We can see that by simply dividing the predictor by 100, we drastically changed the conclusion related to the percentage in ROPE (which became very close to 0): the effect could now be interpreted as being significant. Thus, we recommend paying close attention to the unit of the predictors when selecting the ROPE range (e.g., what coefficient would correspond to a small effect?), and when reporting or reading ROPE results.","code":"library(rstanarm) library(bayestestR) library(see) data <- iris # Use the iris data model <- stan_glm(Sepal.Length ~ Sepal.Width, data = data) # Fit model # Compute indices pd <- p_direction(model) percentage_in_rope <- rope(model, ci = 1) # Visualise the pd plot(pd) pd # Visualise the percentage in ROPE plot(percentage_in_rope) percentage_in_rope > Probability of Direction > > Parameter | pd > -------------------- > (Intercept) | 100% > Sepal.Width | 91.65% > # Proportion of samples inside the ROPE [-0.08, 0.08]: > > Parameter | inside ROPE > ------------------------- > (Intercept) | 0.00 % > Sepal.Width | 16.28 % data$Sepal.Width_scaled <- data$Sepal.Width / 100 # Divide predictor by 100 model <- stan_glm(Sepal.Length ~ Sepal.Width_scaled, data = data) # Fit model # Compute indices pd <- p_direction(model) percentage_in_rope <- rope(model, ci = 1) # Visualise the pd plot(pd) pd # Visualise the percentage in ROPE plot(percentage_in_rope) percentage_in_rope > Probability of Direction > > Parameter | pd > --------------------------- > (Intercept) | 100% > Sepal.Width_scaled | 91.65% > # Proportion of samples inside the ROPE [-0.08, 0.08]: > > Parameter | inside ROPE > -------------------------------- > (Intercept) | 0.00 % > Sepal.Width_scaled | 0.10 %"},{"path":"https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html","id":"multicollinearity-non-independent-covariates","dir":"Articles","previous_headings":"","what":"Multicollinearity: Non-independent covariates","title":"Region of Practical Equivalence (ROPE)","text":"When parameters show strong correlations, i.e., when covariates are not independent, the joint parameter distributions may shift towards or away from the ROPE. Collinearity invalidates ROPE and hypothesis testing based on univariate marginals, as the probabilities are conditional on independence. Most problematic are parameters that only have partial overlap with the ROPE region. 
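A quick way to eyeball such dependencies is to inspect the correlations between the posterior draws of the parameters; a minimal sketch, assuming a fitted Bayesian model object named model:
draws <- insight::get_parameters(model)  # posterior draws as a data frame
round(cor(draws), 2)                     # strong off-diagonal values hint at collinearity
pairs(draws)                             # pair plots of the joint posterior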
In case of collinearity, the (joint) distributions of the parameters may either get an increased or decreased ROPE, which means that inferences based on the ROPE are inappropriate (J. Kruschke, 2014). The equivalence_test() and rope() functions perform a simple check for pairwise correlations between parameters, but as there can be collinearity between more than two variables, a first step to check the assumptions of this hypothesis testing is to look at different pair plots. An even more sophisticated check is the projection predictive variable selection (Piironen & Vehtari, 2017).","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"introduction","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework","what":"Introduction","title":"In-Depth 1: Comparison of Point-Estimates","text":"One of the main differences between the Bayesian and frequentist frameworks is that the former returns a probability distribution for each effect (i.e., for each model parameter of interest, such as a regression slope) instead of a single value. However, there is still a need and demand - for reporting or use in further analysis - for a single value (a point-estimate) that best characterises the underlying posterior distribution. There are three main indices used in the literature for effect estimation: - the mean - the median - the MAP (Maximum A Posteriori) estimate (roughly corresponding to the mode - the "peak" - of the distribution) Unfortunately, there is no consensus about which one to use, as no systematic comparison has ever been done. In the present work, we will compare these three point-estimates of effect between themselves, as well as with the widely known beta, extracted from a comparable frequentist model. These comparisons can help us draw bridges and relationships between the two influential statistical frameworks.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"methods","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 1: Relationship with Error (Noise) and Sample Size","what":"Methods","title":"In-Depth 1: Comparison of Point-Estimates","text":"We will be carrying out a simulation aimed at modulating the following characteristics: Model type: linear or logistic. "True" effect (the known parameter values from which the data is drawn): Can be 1 or 0 (no effect). Sample size: From 20 to 100 by steps of 10. Error: Gaussian noise applied to the predictor with an SD uniformly spread between 0.33 and 6.66 (with 1000 different values). We generated a dataset for each combination of these characteristics, resulting in a total of 2 * 2 * 9 * 1000 = 36000 Bayesian and frequentist models. 
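The simulation design described above amounts to a fully crossed grid; a minimal sketch of its skeleton (the variable names are illustrative):
design <- expand.grid(
  outcome_type = c("linear", "binary"),
  true_effect = c(0, 1),
  sample_size = seq(20, 100, by = 10),
  error = seq(0.33, 6.66, length.out = 1000)
)
nrow(design)  # 2 * 2 * 9 * 1000 = 36000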
code used generation available (please note takes usually several days/weeks complete).","code":"library(ggplot2) library(datawizard) library(see) library(parameters) df <- read.csv(\"https://raw.github.com/easystats/circus/main/data/bayesSim_study1.csv\")"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"sensitivity-to-noise","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 1: Relationship with Error (Noise) and Sample Size > Results","what":"Sensitivity to Noise","title":"In-Depth 1: Comparison of Point-Estimates","text":"","code":"dat <- df dat <- data_select(dat, select = c(\"error\", \"true_effect\", \"outcome_type\", \"Coefficient\", \"Median\", \"Mean\", \"MAP\")) dat <- reshape_longer( dat, select = -c(\"error\", \"true_effect\", \"outcome_type\"), names_to = \"estimate\", values_to = \"value\" ) dat$temp <- as.factor(cut(dat$error, 10, labels = FALSE)) tmp <- lapply(split(dat, dat$temp), function(x) { x$error_group <- rep(round(mean(x$error), 1), times = nrow(x)) return(x) }) dat <- do.call(rbind, tmp) dat <- data_filter(dat, value < 6) ggplot(dat, aes(x = error_group, y = value, fill = estimate, group = interaction(estimate, error_group))) + # geom_hline(yintercept = 0) + # geom_point(alpha=0.05, size=2, stroke = 0, shape=16) + # geom_smooth(method=\"loess\") + geom_boxplot(outlier.shape = NA) + theme_modern() + scale_fill_manual( values = c(\"Coefficient\" = \"#607D8B\", \"MAP\" = \"#795548\", \"Mean\" = \"#FF9800\", \"Median\" = \"#FFEB3B\"), name = \"Index\" ) + ylab(\"Point-estimate\") + xlab(\"Noise\") + facet_wrap(~ outcome_type * true_effect, scales = \"free\")"},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"sensitivity-to-sample-size","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 1: Relationship with Error (Noise) and Sample Size > Results","what":"Sensitivity to Sample Size","title":"In-Depth 1: Comparison of Point-Estimates","text":"","code":"dat <- df dat <- data_select(dat, select = c(\"sample_size\", \"true_effect\", \"outcome_type\", \"Coefficient\", \"Median\", \"Mean\", \"MAP\")) dat <- reshape_longer( dat, select = -c(\"sample_size\", \"true_effect\", \"outcome_type\"), names_to = \"estimate\", values_to = \"value\" ) dat$temp <- as.factor(cut(dat$sample_size, 10, labels = FALSE)) tmp <- lapply(split(dat, dat$temp), function(x) { x$size_group <- rep(round(mean(x$sample_size), 1), times = nrow(x)) return(x) }) dat <- do.call(rbind, tmp) dat <- data_filter(dat, value < 6) ggplot(dat, aes(x = size_group, y = value, fill = estimate, group = interaction(estimate, size_group))) + # geom_hline(yintercept = 0) + # geom_point(alpha=0.05, size=2, stroke = 0, shape=16) + # geom_smooth(method=\"loess\") + geom_boxplot(outlier.shape = NA) + theme_modern() + scale_fill_manual( values = c(\"Coefficient\" = \"#607D8B\", \"MAP\" = \"#795548\", \"Mean\" = \"#FF9800\", \"Median\" = \"#FFEB3B\"), name = \"Index\" ) + ylab(\"Point-estimate\") + xlab(\"Sample size\") + facet_wrap(~ outcome_type * true_effect, scales = \"free\")"},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"statistical-modelling","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 1: Relationship with Error (Noise) and Sample Size > 
Results","what":"Statistical Modelling","title":"In-Depth 1: Comparison of Point-Estimates","text":"fitted (frequentist) multiple linear regression statistically test predict presence absence effect estimates well interaction noise sample size. suggests , order delineate presence absence effect, compared frequentist’s beta coefficient: linear models, Mean better predictor, closely followed Median, MAP frequentist Coefficient. logistic models, MAP better predictor, followed Median, Mean , behind, frequentist Coefficient. Overall, median appears safe choice, maintaining high performance across different types models.","code":"dat <- df dat <- data_select(dat, select = c(\"sample_size\", \"true_effect\", \"outcome_type\", \"Coefficient\", \"Median\", \"Mean\", \"MAP\")) dat <- reshape_longer( dat, select = -c(\"sample_size\", \"error\", \"true_effect\", \"outcome_type\"), names_to = \"estimate\", values_to = \"value\" ) out <- glm(true_effect ~ outcome_type / estimate / value, data = dat, family = \"binomial\") out <- parameters(out, ci_method = \"wald\") out <- data_select(out, c(\"Parameter\", \"Coefficient\", \"p\")) rows <- grep(\"^outcome_type(.*):value$\", x = out$Parameter) out <- data_filter(out, rows) out <- out[order(out$Coefficient, decreasing = TRUE), ] knitr::kable(out, digits = 2)"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"methods-1","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 2: Relationship with Sampling Characteristics","what":"Methods","title":"In-Depth 1: Comparison of Point-Estimates","text":"carrying another simulation aimed modulating following characteristics: Model type: linear logistic. “True” effect (original regression coefficient data drawn): Can 1 0 (effect). draws: 10 5000 step 5 (1000 iterations). warmup: Ratio warmup iterations. 1/10 9/10 step 0.1 (9 iterations). generated 3 datasets combination characteristics, resulting total 2 * 2 * 8 * 40 * 9 * 3 = 34560 Bayesian frequentist models. 
code used generation avaible (please note takes usually several days/weeks complete).","code":"df <- read.csv(\"https://raw.github.com/easystats/circus/main/data/bayesSim_study2.csv\")"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"sensitivity-to-number-of-iterations","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 2: Relationship with Sampling Characteristics > Results","what":"Sensitivity to number of iterations","title":"In-Depth 1: Comparison of Point-Estimates","text":"","code":"dat <- df dat <- data_select(dat, select = c(\"iterations\", \"true_effect\", \"outcome_type\", \"beta\", \"Median\", \"Mean\", \"MAP\")) dat <- reshape_longer( dat, select = -c(\"iterations\", \"true_effect\", \"outcome_type\"), names_to = \"estimate\", values_to = \"value\" ) dat$temp <- as.factor(cut(dat$iterations, 5, labels = FALSE)) tmp <- lapply(split(dat, dat$temp), function(x) { x$iterations_group <- rep(round(mean(x$iterations), 1), times = nrow(x)) return(x) }) dat <- do.call(rbind, tmp) dat <- data_filter(dat, value < 6) ggplot(dat, aes(x = iterations_group, y = value, fill = estimate, group = interaction(estimate, iterations_group))) + geom_boxplot(outlier.shape = NA) + theme_classic() + scale_fill_manual( values = c(\"beta\" = \"#607D8B\", \"MAP\" = \"#795548\", \"Mean\" = \"#FF9800\", \"Median\" = \"#FFEB3B\"), name = \"Index\" ) + ylab(\"Point-estimate of the true value 0\\n\") + xlab(\"\\nNumber of Iterations\") + facet_wrap(~ outcome_type * true_effect, scales = \"free\")"},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"sensitivity-to-warmup-ratio","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework > Experiment 2: Relationship with Sampling Characteristics > Results","what":"Sensitivity to warmup ratio","title":"In-Depth 1: Comparison of Point-Estimates","text":"","code":"dat <- df dat$warmup <- dat$warmup / dat$iterations dat <- data_select(dat, select = c(\"warmup\", \"true_effect\", \"outcome_type\", \"beta\", \"Median\", \"Mean\", \"MAP\")) dat <- reshape_longer( dat, select = -c(\"warmup\", \"true_effect\", \"outcome_type\"), names_to = \"estimate\", values_to = \"value\" ) dat$temp <- as.factor(cut(dat$warmup, 3, labels = FALSE)) tmp <- lapply(split(dat, dat$temp), function(x) { x$warmup_group <- rep(round(mean(x$warmup), 1), times = nrow(x)) return(x) }) dat <- do.call(rbind, tmp) dat <- data_filter(dat, value < 6) ggplot(dat, aes(x = warmup_group, y = value, fill = estimate, group = interaction(estimate, warmup_group))) + geom_boxplot(outlier.shape = NA) + theme_classic() + scale_fill_manual( values = c(\"beta\" = \"#607D8B\", \"MAP\" = \"#795548\", \"Mean\" = \"#FF9800\", \"Median\" = \"#FFEB3B\"), name = \"Index\" ) + ylab(\"Point-estimate of the true value 0\\n\") + xlab(\"\\nNumber of Iterations\") + facet_wrap(~ outcome_type * true_effect, scales = \"free\")"},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"discussion","dir":"Articles > Web_only","previous_headings":"Effect Point-Estimates in the Bayesian Framework","what":"Discussion","title":"In-Depth 1: Comparison of Point-Estimates","text":"Conclusions can found guidelines section 
article.","code":""},{"path":"https://easystats.github.io/bayestestR/articles/web_only/indicesEstimationComparison.html","id":"suggestions","dir":"Articles > Web_only","previous_headings":"","what":"Suggestions","title":"In-Depth 1: Comparison of Point-Estimates","text":"advice, opinion , encourage let us know opening discussion thread making pull request.","code":""},{"path":"https://easystats.github.io/bayestestR/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Dominique Makowski. Author, maintainer. Daniel Lüdecke. Author. Mattan S. Ben-Shachar. Author. Indrajeet Patil. Author. Micah K. Wilson. Author. Brenton M. Wiernik. Author. Paul-Christian Bürkner. Reviewer. Tristan Mahr. Reviewer. Henrik Singmann. Contributor. Quentin F. Gronau. Contributor. Sam Crawley. Contributor.","code":""},{"path":"https://easystats.github.io/bayestestR/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Makowski, D., Ben-Shachar, M., & Lüdecke, D. (2019). bayestestR: Describing Effects Uncertainty, Existence Significance within Bayesian Framework. Journal Open Source Software, 4(40), 1541. doi:10.21105/joss.01541","code":"@Article{, title = {bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework.}, author = {Dominique Makowski and Mattan S. Ben-Shachar and Daniel Lüdecke}, journal = {Journal of Open Source Software}, doi = {10.21105/joss.01541}, year = {2019}, number = {40}, volume = {4}, pages = {1541}, url = {https://joss.theoj.org/papers/10.21105/joss.01541}, }"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"bayestestr-","dir":"","previous_headings":"","what":"Understand and Describe Bayesian Models and Posterior Distributions","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"Become Bayesian master ⚠️ changed default CI width! Please make informed decision set explicitly (ci = 0.89, ci = 0.95 anything else decide) ⚠️ Existing R packages allow users easily fit large variety models extract visualize posterior draws. However, packages return limited set indices (e.g., point-estimates CIs). bayestestR provides comprehensive consistent set functions analyze describe posterior distributions generated variety models objects, including popular modeling packages rstanarm, brms BayesFactor. can reference package documentation follows: Makowski, D., Ben-Shachar, M. S., & Lüdecke, D. (2019). bayestestR: Describing Effects Uncertainty, Existence Significance within Bayesian Framework. Journal Open Source Software, 4(40), 1541. 10.21105/joss.01541 Makowski, D., Ben-Shachar, M. S., Chen, S. H. ., & Lüdecke, D. (2019). Indices Effect Existence Significance Bayesian Framework. Frontiers Psychology 2019;10:2767. 10.3389/fpsyg.2019.02767","code":""},{"path":"https://easystats.github.io/bayestestR/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"bayestestR package available CRAN, latest development version available R-universe (rOpenSci). downloaded package, can load using: Tip Instead library(bayestestR), use library(easystats). make features easystats-ecosystem available. 
stay updated, use easystats::install_latest().","code":"library(\"bayestestR\")"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"documentation","dir":"","previous_headings":"","what":"Documentation","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"Access package documentation check-vignettes:","code":""},{"path":"https://easystats.github.io/bayestestR/index.html","id":"tutorials","dir":"","previous_headings":"Documentation","what":"Tutorials","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"Get Started Bayesian Analysis Example 1: Initiation Bayesian models Example 2: Confirmation Bayesian skills Example 3: Become Bayesian master","code":""},{"path":"https://easystats.github.io/bayestestR/index.html","id":"articles","dir":"","previous_headings":"Documentation","what":"Articles","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"Credible Intervals (CI) Probability Direction (pd) Region Practical Equivalence (ROPE) Bayes Factors (BF) Comparison Point-Estimates Comparison Indices Effect Existence Reporting Guidelines","code":""},{"path":"https://easystats.github.io/bayestestR/index.html","id":"features","dir":"","previous_headings":"","what":"Features","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"Bayesian framework, parameters estimated probabilistic fashion distributions. distributions can summarised described reporting four types indices: mean(), median() map_estimate() estimation mode. point_estimate() can used get can run directly models. hdi() Highest Density Intervals (HDI), spi() Shortest Probability Intervals (SPI) eti() Equal-Tailed Intervals (ETI). ci() can used general method Confidence Credible Intervals (CI). p_direction() Bayesian equivalent frequentist p-value (see Makowski et al., 2019) p_pointnull() represents odds null hypothesis (h0 = 0) compared likely hypothesis (MAP). bf_pointnull() classic Bayes Factor (BF) assessing likelihood effect presence absence (h0 = 0). p_rope() probability effect falling inside Region Practical Equivalence (ROPE). bf_rope() computes Bayes factor null defined region (ROPE). p_significance() combines region equivalence probability direction. describe_posterior() master function can compute indices cited . describe_posterior() works many objects, including complex brmsfit-models. better readability, output separated model components: bayestestR also includes many features useful Bayesian analyses. 
examples:","code":"describe_posterior( rnorm(10000), centrality = \"median\", test = c(\"p_direction\", \"p_significance\"), verbose = FALSE ) ## Summary of Posterior Distribution ## ## Parameter | Median | 95% CI | pd | ps ## -------------------------------------------------- ## Posterior | -0.01 | [-1.98, 1.93] | 50.52% | 0.46 zinb <- read.csv(\"http://stats.idre.ucla.edu/stat/data/fish.csv\") set.seed(123) model <- brm( bf( count ~ child + camper + (1 | persons), zi ~ child + camper + (1 | persons) ), data = zinb, family = zero_inflated_poisson(), chains = 1, iter = 500 ) describe_posterior( model, effects = \"all\", component = \"all\", test = c(\"p_direction\", \"p_significance\"), centrality = \"all\" ) ## Summary of Posterior Distribution ## ## Parameter | Median | Mean | MAP | 95% CI | pd | ps | Rhat | ESS ## -------------------------------------------------------------------------------------- ## (Intercept) | 0.96 | 0.96 | 0.96 | [-0.81, 2.51] | 90.00% | 0.88 | 1.011 | 110.00 ## child | -1.16 | -1.16 | -1.16 | [-1.36, -0.94] | 100% | 1.00 | 0.996 | 278.00 ## camper | 0.73 | 0.72 | 0.73 | [ 0.54, 0.91] | 100% | 1.00 | 0.996 | 271.00 ## ## # Fixed effects (zero-inflated) ## ## Parameter | Median | Mean | MAP | 95% CI | pd | ps | Rhat | ESS ## -------------------------------------------------------------------------------------- ## (Intercept) | -0.48 | -0.51 | -0.22 | [-2.03, 0.89] | 78.00% | 0.73 | 0.997 | 138.00 ## child | 1.85 | 1.86 | 1.81 | [ 1.19, 2.54] | 100% | 1.00 | 0.996 | 303.00 ## camper | -0.88 | -0.86 | -0.99 | [-1.61, -0.07] | 98.40% | 0.96 | 0.996 | 292.00 ## ## # Random effects (conditional) Intercept: persons ## ## Parameter | Median | Mean | MAP | 95% CI | pd | ps | Rhat | ESS ## --------------------------------------------------------------------------------------- ## persons.1 | -0.99 | -1.01 | -0.84 | [-2.68, 0.80] | 92.00% | 0.90 | 1.007 | 106.00 ## persons.2 | -4.65e-03 | -0.04 | 0.03 | [-1.63, 1.66] | 50.00% | 0.45 | 1.013 | 109.00 ## persons.3 | 0.69 | 0.66 | 0.69 | [-0.95, 2.34] | 79.60% | 0.78 | 1.010 | 114.00 ## persons.4 | 1.57 | 1.56 | 1.56 | [-0.05, 3.29] | 96.80% | 0.96 | 1.009 | 114.00 ## ## # Random effects (zero-inflated) Intercept: persons ## ## Parameter | Median | Mean | MAP | 95% CI | pd | ps | Rhat | ESS ## ------------------------------------------------------------------------------------ ## persons.1 | 1.10 | 1.11 | 1.08 | [-0.23, 2.72] | 94.80% | 0.93 | 0.997 | 166.00 ## persons.2 | 0.18 | 0.18 | 0.22 | [-0.94, 1.58] | 63.20% | 0.54 | 0.996 | 154.00 ## persons.3 | -0.30 | -0.31 | -0.54 | [-1.79, 1.02] | 64.00% | 0.59 | 0.997 | 154.00 ## persons.4 | -1.45 | -1.46 | -1.44 | [-2.90, -0.10] | 98.00% | 0.97 | 1.000 | 189.00 ## ## # Random effects (conditional) SD/Cor: persons ## ## Parameter | Median | Mean | MAP | 95% CI | pd | ps | Rhat | ESS ## ---------------------------------------------------------------------------------- ## (Intercept) | 1.42 | 1.58 | 1.07 | [ 0.71, 3.58] | 100% | 1.00 | 1.010 | 126.00 ## ## # Random effects (zero-inflated) SD/Cor: persons ## ## Parameter | Median | Mean | MAP | 95% CI | pd | ps | Rhat | ESS ## ---------------------------------------------------------------------------------- ## (Intercept) | 1.30 | 1.49 | 0.99 | [ 0.63, 3.41] | 100% | 1.00 | 0.996 | 129.00"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"point-estimates","dir":"","previous_headings":"","what":"Point-estimates","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"easystats packages, 
plot() methods available see package many functions: median mean available base R functions, map_estimate() bayestestR can used directly find Highest Maximum A Posteriori (MAP) estimate posterior, .e., value associated highest probability density (“peak” posterior distribution). words, estimation mode continuous parameters.","code":"library(bayestestR) posterior <- distribution_gamma(10000, 1.5) # Generate a skewed distribution centrality <- point_estimate(posterior) # Get indices of centrality centrality ## Point Estimate ## ## Median | Mean | MAP ## -------------------- ## 1.18 | 1.50 | 0.51"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"uncertainty-ci","dir":"","previous_headings":"","what":"Uncertainty (CI)","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"hdi() computes Highest Density Interval (HDI) posterior distribution, .e., interval contains points within interval higher probability density points outside interval. HDI can used context Bayesian posterior characterization Credible Interval (CI). Unlike equal-tailed intervals (see eti()) typically exclude 2.5% tail distribution, HDI not equal-tailed therefore always includes mode(s) posterior distributions.","code":"posterior <- distribution_chisquared(10000, 4) hdi(posterior, ci = 0.89) ## 89% HDI: [0.18, 7.63] eti(posterior, ci = 0.89) ## 89% ETI: [0.75, 9.25]"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/index.html","id":"probability-of-direction-pd","dir":"","previous_headings":"Existence and Significance Testing","what":"Probability of Direction (pd)","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"p_direction() computes Probability of Direction (pd, also known Maximum Probability of Effect - MPE). varies 50% 100% (.e., 0.5 1) can interpreted probability (expressed percentage) parameter (described posterior distribution) strictly positive negative (whichever probable). mathematically defined proportion posterior distribution median’s sign. Although differently expressed, index fairly similar (.e., strongly correlated) frequentist p-value. Relationship p-value: cases, seems pd corresponds frequentist one-sided p-value formula p-value = (1-pd/100) two-sided p-value (commonly reported) formula p-value = 2*(1-pd/100). Thus, pd 95%, 97.5% 99.5% 99.95% corresponds approximately two-sided p-value respectively .1, .05, .01 .001. See reporting guidelines.","code":"posterior <- distribution_normal(10000, 0.4, 0.2) p_direction(posterior) ## Probability of Direction ## ## Parameter | pd ## ------------------ ## Posterior | 97.72%"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"rope","dir":"","previous_headings":"Existence and Significance Testing","what":"ROPE","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"rope() computes proportion (percentage) HDI (default 89% HDI) posterior distribution lies within region practical equivalence. Statistically, probability posterior distribution different 0 does not make much sense (probability different single point infinite). Therefore, idea underlying ROPE let user define area around null value enclosing values equivalent null value practical purposes Kruschke (2018). Kruschke suggests null value set, default, -0.1 0.1 range standardized parameter (negligible effect size according Cohen, 1988). generalized: instance, linear models, ROPE set 0 +/- .1 * sd(y). ROPE range can automatically computed models using rope_range function.
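A minimal sketch of that last point, not part of the original page (it assumes rstanarm is installed; the model itself is a hypothetical example):

library(bayestestR)
# Hypothetical linear model; any regression model supported by insight should work
model <- rstanarm::stan_glm(mpg ~ wt, data = mtcars, refresh = 0)
rope_range(model) # for linear models: +/- 0.1 * sd(mpg), here roughly c(-0.60, 0.60)
rope(model, range = rope_range(model)) # proportion of the HDI inside that ROPE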
Kruschke suggests using proportion 95% (90%, considered stable) HDI falls within ROPE index “null-hypothesis” testing (understood Bayesian framework, see equivalence_test).","code":"posterior <- distribution_normal(10000, 0.4, 0.2) rope(posterior, range = c(-0.1, 0.1)) ## # Proportion of samples inside the ROPE [-0.10, 0.10]: ## ## inside ROPE ## ----------- ## 4.40 %"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"bayes-factor","dir":"","previous_headings":"Existence and Significance Testing","what":"Bayes Factor","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"bayesfactor_parameters() computes Bayes factors null (either point interval), bases prior posterior samples single parameter. Bayes factor indicates degree mass posterior distribution shifted away closer null value(s) (relative prior distribution), thus indicating null value become less likely given observed data. null interval, Bayes factor computed comparing prior posterior odds parameter falling within outside null; null point, Savage-Dickey density ratio computed, also approximation Bayes factor comparing marginal likelihoods model model tested parameter restricted point null (Wagenmakers, Lodewyckx, Kuriyal, & Grasman, 2010). lollipops represent density point-null prior distribution (blue lollipop dotted distribution) posterior distribution (red lollipop yellow distribution). ratio two - Savage-Dickey ratio - indicates degree mass parameter distribution shifted away closer null. info, see Bayes factors vignette.","code":"prior <- distribution_normal(10000, mean = 0, sd = 1) posterior <- distribution_normal(10000, mean = 1, sd = 0.7) bayesfactor_parameters(posterior, prior, direction = \"two-sided\", null = 0, verbose = FALSE) ## Bayes Factor (Savage-Dickey density ratio) ## ## BF ## ---- ## 1.94 ## ## * Evidence Against The Null: 0"},{"path":[]},{"path":"https://easystats.github.io/bayestestR/index.html","id":"find-ropes-appropriate-range","dir":"","previous_headings":"Utilities","what":"Find ROPE’s appropriate range","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"rope_range(): function attempts automatically finding suitable “default” values Region Practical Equivalence (ROPE). Kruschke (2018) suggests null value set, default, range -0.1 0.1 standardized parameter (negligible effect size according Cohen, 1988), can generalised linear models -0.1 * sd(y), 0.1 * sd(y). logistic models, parameters expressed log odds ratio can converted standardized difference formula sqrt(3)/pi, resulting range -0.05 0.05.","code":"rope_range(model)"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"density-estimation","dir":"","previous_headings":"Utilities","what":"Density Estimation","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"estimate_density(): function wrapper different methods density estimation. default, uses base R density default uses different smoothing bandwidth (\"SJ\") legacy default implemented base R density function (\"nrd0\"). 
However, Deng & Wickham suggest method = \"KernSmooth\" fastest accurate.","code":""},{"path":"https://easystats.github.io/bayestestR/index.html","id":"perfect-distributions","dir":"","previous_headings":"Utilities","what":"Perfect Distributions","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"distribution(): Generate sample size n near-perfect distributions.","code":"distribution(n = 10) ## [1] -1.55 -1.00 -0.66 -0.38 -0.12 0.12 0.38 0.66 1.00 1.55"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"probability-of-a-value","dir":"","previous_headings":"Utilities","what":"Probability of a Value","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"density_at(): Compute density given point distribution.","code":"density_at(rnorm(1000, 1, 1), 1) ## [1] 0.41"},{"path":"https://easystats.github.io/bayestestR/index.html","id":"code-of-conduct","dir":"","previous_headings":"","what":"Code of Conduct","title":"Understand and Describe Bayesian Models and Posterior Distributions","text":"Please note bayestestR project released Contributor Code Conduct. contributing project, agree abide terms.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/area_under_curve.html","id":null,"dir":"Reference","previous_headings":"","what":"Area under the Curve (AUC) — area_under_curve","title":"Area under the Curve (AUC) — area_under_curve","text":"Based DescTools AUC function. can calculate area curve naive algorithm elaborated spline approach. curve must given vectors xy-coordinates. function can handle unsorted x values (sorting x) ties x values (ignoring duplicates).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/area_under_curve.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Area under the Curve (AUC) — area_under_curve","text":"","code":"area_under_curve(x, y, method = c(\"trapezoid\", \"step\", \"spline\"), ...) auc(x, y, method = c(\"trapezoid\", \"step\", \"spline\"), ...)"},{"path":"https://easystats.github.io/bayestestR/reference/area_under_curve.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Area under the Curve (AUC) — area_under_curve","text":"x Vector x values. y Vector y values. method Method compute Area Curve (AUC). Can \"trapezoid\" (default), \"step\" \"spline\". \"trapezoid\", curve formed connecting points direct line (composite trapezoid rule). \"step\" chosen stepwise connection two points used. calculating area spline interpolation splinefun function used combination integrate. ... 
Arguments passed methods.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/area_under_curve.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Area under the Curve (AUC) — area_under_curve","text":"","code":"library(bayestestR) posterior <- distribution_normal(1000) dens <- estimate_density(posterior) dens <- dens[dens$x > 0, ] x <- dens$x y <- dens$y area_under_curve(x, y, method = \"trapezoid\") #> [1] 0.4980638 area_under_curve(x, y, method = \"step\") #> [1] 0.4992903 area_under_curve(x, y, method = \"spline\") #> [1] 0.4980639"},{"path":"https://easystats.github.io/bayestestR/reference/as.data.frame.density.html","id":null,"dir":"Reference","previous_headings":"","what":"Coerce to a Data Frame — as.data.frame.density","title":"Coerce to a Data Frame — as.data.frame.density","text":"Coerce Data Frame","code":""},{"path":"https://easystats.github.io/bayestestR/reference/as.data.frame.density.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Coerce to a Data Frame — as.data.frame.density","text":"","code":"# S3 method for class 'density' as.data.frame(x, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/as.data.frame.density.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Coerce to a Data Frame — as.data.frame.density","text":"x R object. ... additional arguments passed methods.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/as.numeric.p_direction.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert to Numeric — as.numeric.map_estimate","title":"Convert to Numeric — as.numeric.map_estimate","text":"Convert Numeric","code":""},{"path":"https://easystats.github.io/bayestestR/reference/as.numeric.p_direction.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert to Numeric — as.numeric.map_estimate","text":"","code":"# S3 method for class 'map_estimate' as.numeric(x, ...) # S3 method for class 'p_direction' as.numeric(x, ...) # S3 method for class 'p_map' as.numeric(x, ...) # S3 method for class 'p_significance' as.numeric(x, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/as.numeric.p_direction.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert to Numeric — as.numeric.map_estimate","text":"x object coerced tested. ... arguments passed methods.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor.html","id":null,"dir":"Reference","previous_headings":"","what":"Bayes Factors (BF) — bayesfactor","title":"Bayes Factors (BF) — bayesfactor","text":"function computes Bayes factors (BFs) appropriate input. vectors single models, compute BFs single parameters, hypothesis specified, BFs restricted models. multiple models, return BF corresponding comparison models model comparison passed, compute inclusion BF.
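A hedged sketch of this dispatch, mirroring the examples shown further down this page (the lm() models here are illustrative only):

library(bayestestR)
# Numeric vectors for a single parameter -> Savage-Dickey density ratio
prior <- distribution_normal(1000, mean = 0, sd = 1)
posterior <- distribution_normal(1000, mean = 0.5, sd = 0.3)
bayesfactor(posterior, prior = prior)
# Several models -> Bayes factors for model comparison
m0 <- lm(mpg ~ 1, data = mtcars)
m1 <- lm(mpg ~ hp, data = mtcars)
comparison <- bayesfactor(m0, m1)
# A model comparison object -> inclusion Bayes factors
bayesfactor(comparison)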
complete overview functions, read Bayes factor vignette.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bayes Factors (BF) — bayesfactor","text":"","code":"bayesfactor( ..., prior = NULL, direction = \"two-sided\", null = 0, hypothesis = NULL, effects = c(\"fixed\", \"random\", \"all\"), verbose = TRUE, denominator = 1, match_models = FALSE, prior_odds = NULL )"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bayes Factors (BF) — bayesfactor","text":"... numeric vector, model object(s), output bayesfactor_models. prior object representing prior distribution (see 'Details'). direction Test type (see 'Details'). One 0, \"two-sided\" (default, two tailed), -1, \"left\" (left tailed) 1, \"right\" (right tailed). null Value null, either scalar (point-null) range (interval-null). hypothesis character vector specifying restrictions logical conditions (see examples ). effects results fixed effects, random effects returned? applies mixed models. May abbreviated. verbose Toggle warnings. denominator Either integer indicating models use denominator, model used denominator. Ignored BFBayesFactor. match_models See details. prior_odds Optional vector prior odds models. See BayesFactor::priorOdds<-.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Bayes Factors (BF) — bayesfactor","text":"type Bayes factor, depending input. See bayesfactor_parameters(), bayesfactor_models() bayesfactor_inclusion().","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Bayes Factors (BF) — bayesfactor","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bayes Factors (BF) — bayesfactor","text":"","code":"# \\dontrun{ library(bayestestR) prior <- distribution_normal(1000, mean = 0, sd = 1) posterior <- distribution_normal(1000, mean = 0.5, sd = 0.3) bayesfactor(posterior, prior = prior, verbose = FALSE) #> Bayes Factor (Savage-Dickey density ratio) #> #> BF #> ---- #> 1.21 #> #> * Evidence Against The Null: 0 #> # rstanarm models # --------------- model <- suppressWarnings(rstanarm::stan_lmer(extra ~ group + (1 | ID), data = sleep)) #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 4.2e-05 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.42 seconds. #> Chain 1: Adjust your expectations accordingly! 
#> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.199 seconds (Warm-up) #> Chain 1: 0.298 seconds (Sampling) #> Chain 1: 0.497 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 1.5e-05 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.15 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.177 seconds (Warm-up) #> Chain 2: 0.178 seconds (Sampling) #> Chain 2: 0.355 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 1.5e-05 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.15 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.2 seconds (Warm-up) #> Chain 3: 0.127 seconds (Sampling) #> Chain 3: 0.327 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 1.4e-05 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.14 seconds. #> Chain 4: Adjust your expectations accordingly! 
#> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.174 seconds (Warm-up) #> Chain 4: 0.202 seconds (Sampling) #> Chain 4: 0.376 seconds (Total) #> Chain 4: bayesfactor(model, verbose = FALSE) #> Bayes Factor (Savage-Dickey density ratio) #> #> Parameter | BF #> ------------------- #> (Intercept) | 0.206 #> group2 | 2.66 #> #> * Evidence Against The Null: 0 #> # Frequentist models # --------------- m0 <- lm(extra ~ 1, data = sleep) m1 <- lm(extra ~ group, data = sleep) m2 <- lm(extra ~ group + ID, data = sleep) comparison <- bayesfactor(m0, m1, m2) comparison #> Bayes Factors for Model Comparison #> #> Model BF #> [..2] group 1.30 #> [..3] group + ID 1.12e+04 #> #> * Against Denominator: [..1] (Intercept only) #> * Bayes Factor Type: BIC approximation bayesfactor(comparison) #> Inclusion Bayes Factors (Model Averaged) #> #> P(prior) P(posterior) Inclusion BF #> group 0.67 1.00 5.61e+03 #> ID 0.33 1.00 9.77e+03 #> #> * Compared among: all models #> * Priors odds: uniform-equal # }"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":null,"dir":"Reference","previous_headings":"","what":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"bf_* function alias main function. info, see Bayes factors vignette.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"","code":"bayesfactor_inclusion(models, match_models = FALSE, prior_odds = NULL, ...) bf_inclusion(models, match_models = FALSE, prior_odds = NULL, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"models object class bayesfactor_models() BFBayesFactor. match_models See details. prior_odds Optional vector prior odds models. See BayesFactor::priorOdds<-. ... 
Arguments passed methods.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"data frame containing prior posterior probabilities, log(BF) effect (Use .numeric() extract non-log Bayes factors; see examples).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"Inclusion Bayes factors answer question: observed data probable models particular effect, models without particular effect? words, average - models effect \(X\) likely produced observed data models without effect \(X\)?","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"match-models","dir":"Reference","previous_headings":"","what":"Match Models","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"match_models=FALSE (default), Inclusion BFs computed comparing models term models without term. TRUE, comparison restricted models (1) do not include interactions term interest; (2) interaction terms, averaging done across models contain main effect terms interaction term comprised.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"Random effects lmer style converted interaction terms: .e., (X|G) become terms 1:G X:G.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"interpreting-bayes-factors","dir":"Reference","previous_headings":"","what":"Interpreting Bayes Factors","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"Bayes factor greater 1 can interpreted evidence against null, one convention Bayes factor greater 3 can considered \"substantial\" evidence against null (vice versa, Bayes factor smaller 1/3 indicates substantial evidence favor null-model) (Wetzels et al. 2011).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"Hinne, M., Gronau, Q. F., van den Bergh, D., Wagenmakers, E. (2019, March 25). conceptual introduction Bayesian Model Averaging. doi:10.31234/osf.io/wgb64 Clyde, M. A., Ghosh, J., & Littman, M. L. (2011). Bayesian adaptive sampling variable selection model averaging. Journal Computational Graphical Statistics, 20(1), 80-101. Mathot, S. (2017). Bayes like Baws: Interpreting Bayesian Repeated Measures JASP. Blog post.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"Mattan S.
Ben-Shachar","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_inclusion.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Inclusion Bayes Factors for testing predictors across Bayesian models — bayesfactor_inclusion","text":"","code":"library(bayestestR) # Using bayesfactor_models: # ------------------------------ mo0 <- lm(Sepal.Length ~ 1, data = iris) mo1 <- lm(Sepal.Length ~ Species, data = iris) mo2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris) mo3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris) BFmodels <- bayesfactor_models(mo1, mo2, mo3, denominator = mo0) (bf_inc <- bayesfactor_inclusion(BFmodels)) #> Inclusion Bayes Factors (Model Averaged) #> #> P(prior) P(posterior) Inclusion BF #> Species 0.75 1.00 2.02e+55 #> Petal.Length 0.50 1.00 3.58e+26 #> Petal.Length:Species 0.25 0.04 0.113 #> #> * Compared among: all models #> * Priors odds: uniform-equal as.numeric(bf_inc) #> [1] 2.021143e+55 3.575448e+26 1.131202e-01 # \\donttest{ # BayesFactor # ------------------------------- BF <- BayesFactor::generalTestBF(len ~ supp * dose, ToothGrowth, progress = FALSE) bayesfactor_inclusion(BF) #> Inclusion Bayes Factors (Model Averaged) #> #> P(prior) P(posterior) Inclusion BF #> supp 0.60 0.98 35.18 #> dose 0.60 1.00 5.77e+12 #> dose:supp 0.20 0.56 5.08 #> #> * Compared among: all models #> * Priors odds: uniform-equal # compare only matched models: bayesfactor_inclusion(BF, match_models = TRUE) #> Inclusion Bayes Factors (Model Averaged) #> #> P(prior) P(posterior) Inclusion BF #> supp 0.40 0.42 22.68 #> dose 0.40 0.44 3.81e+12 #> dose:supp 0.20 0.56 1.33 #> #> * Compared among: matched models only #> * Priors odds: uniform-equal # }"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":null,"dir":"Reference","previous_headings":"","what":"Bayes Factors (BF) for model comparison — bayesfactor_models","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"function computes extracts Bayes factors fitted models. bf_* function alias main function.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"","code":"bayesfactor_models(..., denominator = 1, verbose = TRUE) bf_models(..., denominator = 1, verbose = TRUE) # Default S3 method bayesfactor_models(..., denominator = 1, verbose = TRUE) # S3 method for class 'bayesfactor_models' update(object, subset = NULL, reference = NULL, ...) # S3 method for class 'bayesfactor_models' as.matrix(x, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"... Fitted models (see details), fit data, single BFBayesFactor object (see 'Details'). Ignored .matrix(), update(). following named arguments present, passed insight::get_loglikelihood() (see details): estimator (defaults \"ML\") check_response (defaults FALSE) denominator Either integer indicating models use denominator, model used denominator. Ignored BFBayesFactor. verbose Toggle warnings. object, x bayesfactor_models() object. subset Vector model indices keep remove. 
reference Index model reference , "top" reference best model, "bottom" reference worst model.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"data frame containing models' formulas (reconstructed fixed random effects) log(BF)s (Use .numeric() extract non-log Bayes factors; see examples), prints nicely.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"passed models supported insight DV models tested equality (else assumed true), models' terms extracted (allowing follow-analysis bayesfactor_inclusion). brmsfit stanreg models, Bayes factors computed using bridgesampling package. brmsfit models must fitted save_pars = save_pars(all = TRUE). stanreg models must fitted defined diagnostic_file. BFBayesFactor, bayesfactor_models() mostly wraparound BayesFactor::extractBF(). model types, Bayes factors computed using BIC approximation. Note BICs extracted using insight::get_loglikelihood, see documentation options dealing transformed responses REML estimation. order correctly precisely estimate Bayes factors, rule thumb 4 P's: Proper Priors Plentiful Posteriors. many? number posterior samples needed testing substantially larger estimation (default 4000 samples may enough many cases). conservative rule thumb obtain 10 times samples required estimation (Gronau, Singmann, & Wagenmakers, 2017). less 40,000 samples detected, bayesfactor_models() gives warning. See also Bayes factors vignette.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"interpreting-bayes-factors","dir":"Reference","previous_headings":"","what":"Interpreting Bayes Factors","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"Bayes factor greater 1 can interpreted evidence against null, one convention Bayes factor greater 3 can considered \"substantial\" evidence against null (vice versa, Bayes factor smaller 1/3 indicates substantial evidence favor null-model) (Wetzels et al. 2011).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"Gronau, Q. F., Singmann, H., & Wagenmakers, E. J. (2017). Bridgesampling: R package estimating normalizing constants. arXiv preprint arXiv:1710.08162. Kass, R. E., Raftery, A. E. (1995). Bayes Factors. Journal American Statistical Association, 90(430), 773-795. Robert, C. P. (2016). expected demise Bayes factor. Journal Mathematical Psychology, 72, 33–37. Wagenmakers, E. J. (2007). practical solution pervasive problems p values. Psychonomic bulletin & review, 14(5), 779-804. Wetzels, R., Matzke, D., Lee, M. D., Rouder, J. N., Iverson, G. J., Wagenmakers, E.-J. (2011). Statistical Evidence Experimental Psychology: Empirical Comparison Using 855 t Tests.
Perspectives Psychological Science, 6(3), 291–298. doi:10.1177/1745691611406923","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"Mattan S. Ben-Shachar","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_models.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bayes Factors (BF) for model comparison — bayesfactor_models","text":"","code":"# With lm objects: # ---------------- lm1 <- lm(mpg ~ 1, data = mtcars) lm2 <- lm(mpg ~ hp, data = mtcars) lm3 <- lm(mpg ~ hp + drat, data = mtcars) lm4 <- lm(mpg ~ hp * drat, data = mtcars) (BFM <- bayesfactor_models(lm1, lm2, lm3, lm4, denominator = 1)) #> Bayes Factors for Model Comparison #> #> Model BF #> [lm2] hp 4.54e+05 #> [lm3] hp + drat 7.70e+07 #> [lm4] hp * drat 1.59e+07 #> #> * Against Denominator: [lm1] (Intercept only) #> * Bayes Factor Type: BIC approximation # bayesfactor_models(lm2, lm3, lm4, denominator = lm1) # same result # bayesfactor_models(lm1, lm2, lm3, lm4, denominator = lm1) # same result update(BFM, reference = \"bottom\") #> Bayes Factors for Model Comparison #> #> Model BF #> [lm2] hp 4.54e+05 #> [lm3] hp + drat 7.70e+07 #> [lm4] hp * drat 1.59e+07 #> #> * Against Denominator: [lm1] (Intercept only) #> * Bayes Factor Type: BIC approximation as.matrix(BFM) #> # Bayes Factors for Model Comparison #> #> Numerator #> Denominator #> #> | [1] | [2] | [3] | [4] #> ---------------------------------------------------------------- #> [1] (Intercept only) | 1 | 4.54e+05 | 7.70e+07 | 1.59e+07 #> [2] hp | 2.20e-06 | 1 | 169.72 | 35.09 #> [3] hp + drat | 1.30e-08 | 0.006 | 1 | 0.207 #> [4] hp * drat | 6.28e-08 | 0.028 | 4.84 | 1 as.numeric(BFM) #> [1] 1.0 453874.3 77029881.3 15925712.4 lm2b <- lm(sqrt(mpg) ~ hp, data = mtcars) # Set check_response = TRUE for transformed responses bayesfactor_models(lm2b, denominator = lm2, check_response = TRUE) #> Bayes Factors for Model Comparison #> #> Model BF #> [lm2b] hp 6.94 #> #> * Against Denominator: [lm2] hp #> * Bayes Factor Type: BIC approximation # \\donttest{ # With lmerMod objects: # --------------------- lmer1 <- lme4::lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris) lmer2 <- lme4::lmer(Sepal.Length ~ Petal.Length + (Petal.Length | Species), data = iris) #> boundary (singular) fit: see help('isSingular') lmer3 <- lme4::lmer( Sepal.Length ~ Petal.Length + (Petal.Length | Species) + (1 | Petal.Width), data = iris ) #> boundary (singular) fit: see help('isSingular') bayesfactor_models(lmer1, lmer2, lmer3, denominator = 1, estimator = \"REML\" ) #> Bayes Factors for Model Comparison #> #> Model BF #> [lmer2] Petal.Length + (Petal.Length | Species) 0.058 #> [lmer3] Petal.Length + (Petal.Length | Species) + (1 | Petal.Width) 0.005 #> #> * Against Denominator: [lmer1] Petal.Length + (1 | Species) #> * Bayes Factor Type: BIC approximation # rstanarm models # --------------------- # (note that a unique diagnostic_file MUST be specified in order to work) stan_m0 <- suppressWarnings(rstanarm::stan_glm(Sepal.Length ~ 1, data = iris, family = gaussian(), diagnostic_file = file.path(tempdir(), \"df0.csv\") )) #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 1.9e-05 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.19 seconds. 
#> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.018 seconds (Warm-up) #> Chain 1: 0.036 seconds (Sampling) #> Chain 1: 0.054 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 8e-06 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.016 seconds (Warm-up) #> Chain 2: 0.036 seconds (Sampling) #> Chain 2: 0.052 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 8e-06 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.018 seconds (Warm-up) #> Chain 3: 0.036 seconds (Sampling) #> Chain 3: 0.054 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 8e-06 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds. #> Chain 4: Adjust your expectations accordingly! 
#> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.018 seconds (Warm-up) #> Chain 4: 0.037 seconds (Sampling) #> Chain 4: 0.055 seconds (Total) #> Chain 4: stan_m1 <- suppressWarnings(rstanarm::stan_glm(Sepal.Length ~ Species, data = iris, family = gaussian(), diagnostic_file = file.path(tempdir(), \"df1.csv\") )) #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 2.2e-05 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.22 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.029 seconds (Warm-up) #> Chain 1: 0.046 seconds (Sampling) #> Chain 1: 0.075 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 1.1e-05 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.031 seconds (Warm-up) #> Chain 2: 0.048 seconds (Sampling) #> Chain 2: 0.079 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 1.2e-05 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds. #> Chain 3: Adjust your expectations accordingly! 
#> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.029 seconds (Warm-up) #> Chain 3: 0.048 seconds (Sampling) #> Chain 3: 0.077 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 1.2e-05 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds. #> Chain 4: Adjust your expectations accordingly! #> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.029 seconds (Warm-up) #> Chain 4: 0.048 seconds (Sampling) #> Chain 4: 0.077 seconds (Total) #> Chain 4: stan_m2 <- suppressWarnings(rstanarm::stan_glm(Sepal.Length ~ Species + Petal.Length, data = iris, family = gaussian(), diagnostic_file = file.path(tempdir(), \"df2.csv\") )) #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 2.2e-05 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.22 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.095 seconds (Warm-up) #> Chain 1: 0.111 seconds (Sampling) #> Chain 1: 0.206 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 1.2e-05 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds. #> Chain 2: Adjust your expectations accordingly! 
#> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.086 seconds (Warm-up) #> Chain 2: 0.108 seconds (Sampling) #> Chain 2: 0.194 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 1.2e-05 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.088 seconds (Warm-up) #> Chain 3: 0.111 seconds (Sampling) #> Chain 3: 0.199 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 1.1e-05 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. #> Chain 4: Adjust your expectations accordingly! #> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.081 seconds (Warm-up) #> Chain 4: 0.1 seconds (Sampling) #> Chain 4: 0.181 seconds (Total) #> Chain 4: bayesfactor_models(stan_m1, stan_m2, denominator = stan_m0, verbose = FALSE) #> Bayes Factors for Model Comparison #> #> Model BF #> [1] Species 6.27e+27 #> [2] Species + Petal.Length 2.25e+53 #> #> * Against Denominator: [3] (Intercept only) #> * Bayes Factor Type: marginal likelihoods (bridgesampling) # brms models # -------------------- # (note the save_pars MUST be set to save_pars(all = TRUE) in order to work) brm1 <- brms::brm(Sepal.Length ~ 1, data = iris, save_pars = save_pars(all = TRUE)) #> Compiling Stan program... #> Start sampling #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). 
#> Chain 1: #> Chain 1: Gradient evaluation took 2.2e-05 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.22 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.03 seconds (Warm-up) #> Chain 1: 0.032 seconds (Sampling) #> Chain 1: 0.062 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 8e-06 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.029 seconds (Warm-up) #> Chain 2: 0.026 seconds (Sampling) #> Chain 2: 0.055 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 7e-06 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.03 seconds (Warm-up) #> Chain 3: 0.033 seconds (Sampling) #> Chain 3: 0.063 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 7e-06 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds. #> Chain 4: Adjust your expectations accordingly! 
#> Chain 4: (iteration-by-iteration warmup and sampling progress omitted) #> Chain 4: Elapsed Time: 0.032 seconds (Warm-up) #> Chain 4: 0.034 seconds (Sampling) #> Chain 4: 0.066 seconds (Total) #> Chain 4: brm2 <- brms::brm(Sepal.Length ~ Species, data = iris, save_pars = save_pars(all = TRUE)) #> Compiling Stan program... #> Start sampling #> (per-chain iteration progress omitted; each of the 4 chains totalled ~0.03 seconds) brm3 <- brms::brm( Sepal.Length ~ Species + Petal.Length, data = iris, save_pars = save_pars(all = TRUE) ) #> Compiling Stan program... #> Start sampling #> (per-chain iteration progress omitted; each of the 4 chains totalled ~0.1 seconds) bayesfactor_models(brm1, brm2, brm3, denominator = 1, verbose = FALSE) #> Bayes Factors for Model Comparison #> #> Model BF #> [2] Species 5.86e+29 #> [3] Species + Petal.Length 7.50e+55 #> #> * Against Denominator: [1] (Intercept only) #> * Bayes Factor Type: marginal likelihoods (bridgesampling) # BayesFactor # --------------------------- data(puzzles) BF <- BayesFactor::anovaBF(RT ~ shape * color + ID, data = puzzles, whichRandom = \"ID\", progress = FALSE ) BF #> Bayes factor analysis #> -------------- #> [1] shape + ID : 2.841658 ±0.92% #> [2] color + ID : 2.830879 ±0.86% #> [3] shape + color + ID : 11.75567 ±1.98% #> [4] shape + color + shape:color + ID : 4.371906 ±1.99% #> #> Against denominator: #> RT ~ ID #> --- #> Bayes factor type: BFlinearModel, JZS #> bayesfactor_models(BF) # basically the same #> Bayes Factors for Model Comparison #> #> Model BF #> [2] shape + ID 2.84 #> [3] color + ID 2.83 #> [4] shape + color + ID 11.76 #> [5] shape + color + shape:color + ID 4.37 #> #> * Against Denominator: [1] ID #> * Bayes Factor Type: JZS (BayesFactor) # }"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":null,"dir":"Reference","previous_headings":"","what":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"method computes Bayes factors null (either point interval), based prior posterior samples single parameter. Bayes factor indicates degree mass posterior distribution shifted away closer null value(s) (relative prior distribution), thus indicating null value become less likely given observed data. null interval, Bayes factor computed comparing prior posterior odds parameter falling within outside null interval (Morey & Rouder, 2011; Liao et al., 2020); null point, Savage-Dickey density ratio computed, also approximation Bayes factor comparing marginal likelihoods model model tested parameter restricted point null (Wagenmakers et al., 2010; Heck, 2019). Note logspline package used estimating densities probabilities, must installed function work. bayesfactor_pointnull() bayesfactor_rope() wrappers around bayesfactor_parameters different defaults null tested (point range, respectively). Aliases main functions prefixed bf_*, like bf_parameters() bf_pointnull().
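A minimal sketch of the Savage-Dickey density ratio described above, using hypothetical normal draws: the point-null Bayes factor is simply the prior density at the null divided by the posterior density at the null (the logspline package, which bayestestR itself relies on for density estimation, is assumed installed).

library(logspline)
set.seed(123)
prior <- rnorm(4000, mean = 0, sd = 1)         # hypothetical prior draws
posterior <- rnorm(4000, mean = 0.5, sd = 0.3) # hypothetical posterior draws
# Savage-Dickey: density at the null (0) under the prior vs. under the posterior
bf10 <- dlogspline(0, logspline(prior)) / dlogspline(0, logspline(posterior))
bf10 # > 1 counts as evidence against the point null

Up to estimation noise in the density fits, this is what bayesfactor_parameters() reports for a numeric posterior/prior pair.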
info, particular specifying correct priors factors 2 levels, see Bayes factors vignette.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"","code":"bayesfactor_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, ..., verbose = TRUE ) bayesfactor_pointnull( posterior, prior = NULL, direction = \"two-sided\", null = 0, ..., verbose = TRUE ) bayesfactor_rope( posterior, prior = NULL, direction = \"two-sided\", null = rope_range(posterior, verbose = FALSE), ..., verbose = TRUE ) bf_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, ..., verbose = TRUE ) bf_pointnull( posterior, prior = NULL, direction = \"two-sided\", null = 0, ..., verbose = TRUE ) bf_rope( posterior, prior = NULL, direction = \"two-sided\", null = rope_range(posterior, verbose = FALSE), ..., verbose = TRUE ) # S3 method for class 'numeric' bayesfactor_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, ..., verbose = TRUE ) # S3 method for class 'stanreg' bayesfactor_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"location\", \"smooth_terms\", \"sigma\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ..., verbose = TRUE ) # S3 method for class 'brmsfit' bayesfactor_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"location\", \"smooth_terms\", \"sigma\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ..., verbose = TRUE ) # S3 method for class 'blavaan' bayesfactor_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, ..., verbose = TRUE ) # S3 method for class 'data.frame' bayesfactor_parameters( posterior, prior = NULL, direction = \"two-sided\", null = 0, rvar_col = NULL, ..., verbose = TRUE )"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"posterior numerical vector, stanreg / brmsfit object, emmGrid data frame - representing posterior distribution(s) (see 'Details'). prior object representing prior distribution (see 'Details'). direction Test type (see 'Details'). One 0, \"two-sided\" (default, two tailed), -1, \"left\" (left tailed) 1, \"right\" (right tailed). null Value null, either scalar (point-null) range (interval-null). ... Arguments passed methods. (Can used pass arguments internal logspline::logspline().) verbose Toggle warnings. effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output. rvar_col single character - name rvar column data frame processed. 
See example p_direction().","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"data frame containing (log) Bayes factor representing evidence null (Use .numeric() extract non-log Bayes factors; see examples).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"method used compute Bayes factors based prior posterior distributions.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"one-sided-amp-dividing-tests-setting-an-order-restriction-","dir":"Reference","previous_headings":"","what":"One-sided & Dividing Tests (setting an order restriction)","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"One sided tests (controlled direction) conducted restricting prior posterior non-null values (\"alternative\") one side null (Morey & Wagenmakers, 2014). example, prior hypothesis parameter positive, alternative restricted region right null (point interval). example, Bayes factor comparing \"null\" 0-0.1 alternative >0.1, set bayesfactor_parameters(null = c(0, 0.1), direction = \">\"). also possible compute Bayes factor dividing hypotheses - , null alternative complementary, opposing one-sided hypotheses (Morey & Wagenmakers, 2014). example, Bayes factor comparing \"null\" <0 alternative >0, set bayesfactor_parameters(null = c(-Inf, 0)).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"setting-the-correct-prior","dir":"Reference","previous_headings":"","what":"Setting the correct prior","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"computation Bayes factors, model priors must proper priors (least flat, preferable informative); priors alternative get wider, likelihood null value(s) increases, extreme completely flat priors null infinitely favorable alternative (called Jeffreys-Lindley-Bartlett paradox). Thus, ever try (want) compute Bayes factor informed prior. (Note default, brms::brm() uses flat priors fixed-effects; See example .) important provide correct prior meaningful results, match posterior-type input: numeric vector - prior also numeric vector, representing prior-estimate. data frame - prior also data frame, representing prior-estimates, matching column order. rvar_col specified, prior name rvar column represents prior-estimates. Supported Bayesian model (stanreg, brmsfit, etc.) prior model equivalent model MCMC samples priors . See unupdate(). prior set NULL, unupdate() called internally (supported brmsfit_multiple model). Output {marginaleffects} function - prior also equivalent output {marginaleffects} function based prior-model (See unupdate()). Output {emmeans} function prior also equivalent output {emmeans} function based prior-model (See unupdate()). 
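A short sketch of the one-sided and dividing tests described under 'One-sided & Dividing Tests' above, with the null and direction values taken directly from that text (hypothetical draws via distribution_normal()):

library(bayestestR)
prior <- distribution_normal(1000, mean = 0, sd = 1)
posterior <- distribution_normal(1000, mean = 0.5, sd = 0.3)
# alternative restricted to the right of the point null
bayesfactor_parameters(posterior, prior, direction = ">", null = 0)
# interval null of 0-0.1 against a "> 0.1" alternative
bayesfactor_parameters(posterior, prior, null = c(0, 0.1), direction = ">")
# dividing hypotheses: "null" is < 0, alternative is > 0
bayesfactor_parameters(posterior, prior, null = c(-Inf, 0))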
prior can also original (posterior) model, case function try \"unupdate\" estimates (supported estimates undergone transformations – \"log\", \"response\", etc. – regriding).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"interpreting-bayes-factors","dir":"Reference","previous_headings":"","what":"Interpreting Bayes Factors","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"Bayes factor greater 1 can interpreted evidence null, one convention Bayes factor greater 3 can considered \"substantial\" evidence null (vice versa, Bayes factor smaller 1/3 indicates substantial evidence favor null-model) (Wetzels et al. 2011).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"Wagenmakers, E. J., Lodewyckx, T., Kuriyal, H., Grasman, R. (2010). Bayesian hypothesis testing psychologists: tutorial Savage-Dickey method. Cognitive psychology, 60(3), 158-189. Heck, D. W. (2019). caveat Savage–Dickey density ratio: case computing Bayes factors regression parameters. British Journal Mathematical Statistical Psychology, 72(2), 316-333. Morey, R. D., & Wagenmakers, E. J. (2014). Simple relation Bayesian order-restricted point-null hypothesis tests. Statistics & Probability Letters, 92, 121-124. Morey, R. D., & Rouder, J. N. (2011). Bayes factor approaches testing interval null hypotheses. Psychological methods, 16(4), 406. Liao, J. G., Midya, V., & Berg, A. (2020). Connecting contrasting Bayes factor modified ROPE procedure testing interval null hypotheses. American Statistician, 1-19. Wetzels, R., Matzke, D., Lee, M. D., Rouder, J. N., Iverson, G. J., Wagenmakers, E.-J. (2011). Statistical Evidence Experimental Psychology: Empirical Comparison Using 855 t Tests. Perspectives Psychological Science, 6(3), 291–298. doi:10.1177/1745691611406923","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"Mattan S. Ben-Shachar","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_parameters.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bayes Factors (BF) for a Single Parameter — bayesfactor_parameters","text":"","code":"library(bayestestR) prior <- distribution_normal(1000, mean = 0, sd = 1) posterior <- distribution_normal(1000, mean = .5, sd = .3) (BF_pars <- bayesfactor_parameters(posterior, prior, verbose = FALSE)) #> Bayes Factor (Savage-Dickey density ratio) #> #> BF #> ---- #> 1.21 #> #> * Evidence Against The Null: 0 #> as.numeric(BF_pars) #> [1] 1.212843 # \donttest{ # rstanarm models # --------------- contrasts(sleep$group) <- contr.equalprior_pairs # see vignette stan_model <- suppressWarnings(stan_lmer( extra ~ group + (1 | ID), data = sleep, refresh = 0 )) bayesfactor_parameters(stan_model, verbose = FALSE) #> Bayes Factor (Savage-Dickey density ratio) #> #> Parameter | BF #> ------------------ #> (Intercept) | 4.55 #> group1 | 3.74 #> #> * Evidence Against The Null: 0 #> bayesfactor_parameters(stan_model, null = rope_range(stan_model)) #> Sampling priors, please wait...
#> Bayes Factor (Null-Interval) #> #> Parameter | BF #> ------------------ #> (Intercept) | 4.17 #> group1 | 3.36 #> #> * Evidence Against The Null: [-0.202, 0.202] #> # emmGrid objects # --------------- group_diff <- pairs(emmeans(stan_model, ~group, data = sleep)) bayesfactor_parameters(group_diff, prior = stan_model, verbose = FALSE) #> Bayes Factor (Savage-Dickey density ratio) #> #> contrast | BF #> ---------------------- #> group1 - group2 | 3.81 #> #> * Evidence Against The Null: 0 #> # Or # group_diff_prior <- pairs(emmeans(unupdate(stan_model), ~group)) # bayesfactor_parameters(group_diff, prior = group_diff_prior, verbose = FALSE) # } # brms models # ----------- # \dontrun{ contrasts(sleep$group) <- contr.equalprior_pairs # see vignette my_custom_priors <- set_prior(\"student_t(3, 0, 1)\", class = \"b\") + set_prior(\"student_t(3, 0, 1)\", class = \"sd\", group = \"ID\") brms_model <- suppressWarnings(brm(extra ~ group + (1 | ID), data = sleep, prior = my_custom_priors, refresh = 0 )) #> Compiling Stan program... #> Start sampling bayesfactor_parameters(brms_model, verbose = FALSE) #> Bayes Factor (Savage-Dickey density ratio) #> #> Parameter | BF #> ------------------- #> (Intercept) | 6.58 #> group1 | 11.41 #> #> * Evidence Against The Null: 0 #> # }"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":null,"dir":"Reference","previous_headings":"","what":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"method computes Bayes factors comparing model order restrictions parameters fully unrestricted model. Note method used confirmatory analyses. bf_* function alias main function. info, particular specifying correct priors factors 2 levels, see Bayes factors vignette.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"","code":"bayesfactor_restricted(posterior, ...) bf_restricted(posterior, ...) # S3 method for class 'stanreg' bayesfactor_restricted( posterior, hypothesis, prior = NULL, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), ... ) # S3 method for class 'brmsfit' bayesfactor_restricted( posterior, hypothesis, prior = NULL, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), ... ) # S3 method for class 'blavaan' bayesfactor_restricted( posterior, hypothesis, prior = NULL, verbose = TRUE, ... ) # S3 method for class 'emmGrid' bayesfactor_restricted( posterior, hypothesis, prior = NULL, verbose = TRUE, ... ) # S3 method for class 'data.frame' bayesfactor_restricted( posterior, hypothesis, prior = NULL, rvar_col = NULL, ... ) # S3 method for class 'bayesfactor_restricted' as.logical(x, which = c(\"posterior\", \"prior\"), ...)"},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"posterior stanreg / brmsfit object, emmGrid data frame - representing posterior distribution(s) (see Details). ... Currently used.
hypothesis character vector specifying restrictions logical conditions (see examples ). prior object representing prior distribution (see Details). verbose Toggle warnings. effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. rvar_col single character - name rvar column data frame processed. See example p_direction(). x object class bayesfactor_restricted logical matrix posterior prior distribution(s)?","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"data frame containing (log) Bayes factor representing evidence un-restricted model (Use .numeric() extract non-log Bayes factors; see examples). (bool_results attribute contains results sample, indicating included hypothesized restriction.)","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"method used compute Bayes factors order-restricted models vs un-restricted models setting order restriction prior posterior distributions (Morey & Wagenmakers, 2013). (Though possible use bayesfactor_restricted() test interval restrictions, suitable testing order restrictions; see examples).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"setting-the-correct-prior","dir":"Reference","previous_headings":"","what":"Setting the correct prior","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"computation Bayes factors, model priors must proper priors (least flat, preferable informative); priors alternative get wider, likelihood null value(s) increases, extreme completely flat priors null infinitely favorable alternative (called Jeffreys-Lindley-Bartlett paradox). Thus, ever try (want) compute Bayes factor informed prior. (Note default, brms::brm() uses flat priors fixed-effects; See example .) important provide correct prior meaningful results, match posterior-type input: numeric vector - prior also numeric vector, representing prior-estimate. data frame - prior also data frame, representing prior-estimates, matching column order. rvar_col specified, prior name rvar column represents prior-estimates. Supported Bayesian model (stanreg, brmsfit, etc.) prior model equivalent model MCMC samples priors . See unupdate(). prior set NULL, unupdate() called internally (supported brmsfit_multiple model). Output {marginaleffects} function - prior also equivalent output {marginaleffects} function based prior-model (See unupdate()). Output {emmeans} function prior also equivalent output {emmeans} function based prior-model (See unupdate()). prior can also original (posterior) model, case function try \"unupdate\" estimates (supported estimates undergone transformations – \"log\", \"response\", etc. 
– regriding).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"interpreting-bayes-factors","dir":"Reference","previous_headings":"","what":"Interpreting Bayes Factors","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"Bayes factor greater 1 can interpreted evidence null, one convention Bayes factor greater 3 can considered \"substantial\" evidence null (vice versa, Bayes factor smaller 1/3 indicates substantial evidence favor null-model) (Wetzels et al. 2011).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"Morey, R. D., & Wagenmakers, E. J. (2014). Simple relation Bayesian order-restricted point-null hypothesis tests. Statistics & Probability Letters, 92, 121-124. Morey, R. D., & Rouder, J. N. (2011). Bayes factor approaches testing interval null hypotheses. Psychological methods, 16(4), 406. Morey, R. D. (Jan, 2015). Multiple Comparisons BayesFactor, Part 2 – order restrictions. Retrieved https://richarddmorey.org/category/order-restrictions/.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayesfactor_restricted.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bayes Factors (BF) for Order Restricted Models — bayesfactor_restricted","text":"","code":"set.seed(444) library(bayestestR) prior <- data.frame( A = rnorm(500), B = rnorm(500), C = rnorm(500) ) posterior <- data.frame( A = rnorm(500, .4, 0.7), B = rnorm(500, -.2, 0.4), C = rnorm(500, 0, 0.5) ) hyps <- c( \"A > B & B > C\", \"A > B & A > C\", \"C > A\" ) (b <- bayesfactor_restricted(posterior, hypothesis = hyps, prior = prior)) #> Bayes Factor (Order-Restriction) #> #> Hypothesis P(Prior) P(Posterior) BF #> A > B & B > C 0.16 0.23 1.39 #> A > B & A > C 0.36 0.59 1.61 #> C > A 0.46 0.34 0.742 #> #> * Bayes factors for the restricted model vs. the un-restricted model. bool <- as.logical(b, which = \"posterior\") head(bool) #> A > B & B > C A > B & A > C C > A #> [1,] TRUE TRUE FALSE #> [2,] TRUE TRUE FALSE #> [3,] TRUE TRUE FALSE #> [4,] FALSE TRUE FALSE #> [5,] FALSE FALSE TRUE #> [6,] FALSE TRUE FALSE see::plots( plot(estimate_density(posterior)), # distribution **conditional** on the restrictions plot(estimate_density(posterior[bool[, hyps[1]], ])) + ggplot2::ggtitle(hyps[1]), plot(estimate_density(posterior[bool[, hyps[2]], ])) + ggplot2::ggtitle(hyps[2]), plot(estimate_density(posterior[bool[, hyps[3]], ])) + ggplot2::ggtitle(hyps[3]), guides = \"collect\" ) # \\donttest{ # rstanarm models # --------------- data(\"mtcars\") fit_stan <- rstanarm::stan_glm(mpg ~ wt + cyl + am, data = mtcars, refresh = 0 ) hyps <- c( \"am > 0 & cyl < 0\", \"cyl < 0\", \"wt - cyl > 0\" ) bayesfactor_restricted(fit_stan, hypothesis = hyps) #> Sampling priors, please wait... #> Bayes Factor (Order-Restriction) #> #> Hypothesis P(Prior) P(Posterior) BF #> am > 0 & cyl < 0 0.25 0.56 2.25 #> cyl < 0 0.50 1.00 1.99 #> wt - cyl > 0 0.50 0.10 0.197 #> #> * Bayes factors for the restricted model vs. the un-restricted model. 
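The order-restriction Bayes factor can also be reproduced by hand from the raw draws, since it is just the posterior probability of the restriction divided by its prior probability. A minimal sketch, reusing the prior and posterior data frames from the example above:

# share of draws satisfying the restriction, under posterior and prior
p_post <- mean(with(posterior, A > B & B > C))
p_prior <- mean(with(prior, A > B & B > C))
p_post / p_prior # ~1.39, the BF reported above for "A > B & B > C"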
# } # \donttest{ # emmGrid objects # --------------- # replicating http://bayesfactor.blogspot.com/2015/01/multiple-comparisons-with-bayesfactor-2.html data(\"disgust\") contrasts(disgust$condition) <- contr.equalprior_pairs # see vignette fit_model <- rstanarm::stan_glm(score ~ condition, data = disgust, family = gaussian()) #> (per-chain Stan sampling progress output omitted; each of the 4 chains totalled ~0.07 seconds) em_condition <- emmeans::emmeans(fit_model, ~condition, data = disgust) hyps <- c(\"lemon < control & control < sulfur\") bayesfactor_restricted(em_condition, prior = fit_model, hypothesis = hyps) #> Sampling priors, please wait... #> Bayes Factor (Order-Restriction) #> #> Hypothesis P(Prior) P(Posterior) BF #> lemon < control & control < sulfur 0.17 0.75 4.28 #> #> * Bayes factors for the restricted model vs. the un-restricted model. # > # Bayes Factor (Order-Restriction) # > # > Hypothesis P(Prior) P(Posterior) BF # > lemon < control & control < sulfur 0.17 0.75 4.49 # > --- # > Bayes factors for the restricted model vs. the un-restricted model. # }"},{"path":"https://easystats.github.io/bayestestR/reference/bayestestR-package.html","id":null,"dir":"Reference","previous_headings":"","what":"bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework — bayestestR-package","title":"bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework — bayestestR-package","text":"Existing R packages allow users easily fit large variety models extract visualize posterior draws. However, packages return limited set indices (e.g., point-estimates CIs). bayestestR provides comprehensive consistent set functions analyze describe posterior distributions generated variety models objects, including popular modeling packages rstanarm, brms BayesFactor. References: Makowski et al. (2019) doi:10.21105/joss.01541 Makowski et al. (2019) doi:10.3389/fpsyg.2019.02767","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bayestestR-package.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework — bayestestR-package","text":"bayestestR","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/bayestestR-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework — bayestestR-package","text":"Maintainer: Dominique Makowski dom.makowski@gmail.com (ORCID) Authors: Daniel Lüdecke d.luedecke@uke.de (ORCID) Mattan S. Ben-Shachar matanshm@post.bgu.ac.il (ORCID) Indrajeet Patil patilindrajeet.science@gmail.com (ORCID) Micah K. Wilson micah.k.wilson@curtin.edu.au (ORCID) Brenton M.
Wiernik brenton@wiernik.org (ORCID) contributors: Paul-Christian Bürkner paul.buerkner@gmail.com [reviewer] Tristan Mahr tristan.mahr@wisc.edu (ORCID) [reviewer] Henrik Singmann singmann@gmail.com (ORCID) [contributor] Quentin F. Gronau (ORCID) [contributor] Sam Crawley sam@crawley.nz (ORCID) [contributor]","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":null,"dir":"Reference","previous_headings":"","what":"Bias Corrected and Accelerated Interval (BCa) — bci","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"Compute Bias Corrected Accelerated Interval (BCa) posterior distributions.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"","code":"bci(x, ...) bcai(x, ...) # S3 method for class 'numeric' bci(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'data.frame' bci(x, ci = 0.95, rvar_col = NULL, verbose = TRUE, ...) # S3 method for class 'MCMCglmm' bci(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'sim.merMod' bci( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'sim' bci(x, ci = 0.95, parameters = NULL, verbose = TRUE, ...) # S3 method for class 'emmGrid' bci(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'slopes' bci(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'stanreg' bci( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' bci( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'BFBayesFactor' bci(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'get_predicted' bci(x, ci = 0.95, use_iterations = FALSE, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"x Vector representing posterior distribution, data frame vectors. Can also Bayesian model. bayestestR supports wide range models (see, example, methods(\"hdi\")) documented 'Usage' section, methods classes mostly resemble arguments .numeric .data.frame methods. ... Currently used. ci Value vector probability (credible) interval - CI (0 1) estimated. Default .95 (95%). verbose Toggle warnings. rvar_col single character - name rvar column data frame processed. See example p_direction(). effects results fixed effects, random effects returned? applies mixed models. May abbreviated. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. use_iterations Logical, TRUE x get_predicted object, (returned insight::get_predicted()), function applied iterations instead predictions.
applies models return iterations predicted values (e.g., brmsfit models).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"data frame following columns: Parameter model parameter(s), x model-object. x vector, column missing. CI probability credible interval. CI_low, CI_high lower upper credible interval limits parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"Unlike equal-tailed intervals (see eti()) typically exclude 2.5% tail distribution always include median, HDI not equal-tailed therefore always includes mode(s) posterior distributions. can useful better represent credibility mass distribution, HDI also limitations. See spi() details. 95% 89% Credible Intervals (CI) two reasonable ranges characterize uncertainty related estimation (see discussion differences two values). 89% intervals (ci = 0.89) deemed stable , instance, 95% intervals (Kruschke, 2014). effective sample size least 10,000 recommended one wants estimate 95% intervals high precision (Kruschke, 2014, p. 183ff). Unfortunately, default number posterior samples Bayes packages (e.g., rstanarm brms) 4,000 (thus, might want increase fitting model). Moreover, 89 indicates arbitrariness interval limits - remarkable property highest prime number exceed already unstable 95% threshold (McElreath, 2015). However, 95% advantages . instance, shares (case normal posterior distribution) intuitive relationship standard deviation conveys accurate image (artificial) bounds distribution. Also, wider, makes analyses conservative (.e., probability covering 0 larger 95% CI lower ranges 89%), good thing context reproducibility crisis. 95% equal-tailed interval (ETI) 2.5% distribution either side limits. indicates 2.5th percentile 97.5th percentile. symmetric distributions, two methods computing credible intervals, ETI HDI, return similar results. case skewed distributions. Indeed, possible parameter values ETI lower credibility (less probable) parameter values outside ETI. property seems undesirable summary credible values distribution. hand, ETI range change transformations applied distribution (instance, log odds scale probabilities): lower higher bounds transformed distribution correspond transformed lower higher bounds original distribution. contrary, applying transformations distribution change resulting HDI.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"DiCiccio, T. J. B. Efron. (1996). Bootstrap Confidence Intervals. Statistical Science. 11(3): 189–212.
10.1214/ss/1032280214","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/bci.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bias Corrected and Accelerated Interval (BCa) — bci","text":"","code":"posterior <- rnorm(1000) bci(posterior) #> 95% ETI: [-1.78, 2.11] bci(posterior, ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> 80% ETI | 89% ETI | 95% ETI #> --------------------------------------------- #> [-1.17, 1.34] | [-1.52, 1.70] | [-1.78, 2.11]"},{"path":"https://easystats.github.io/bayestestR/reference/bic_to_bf.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","title":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","text":"difference two Bayesian information criterion (BIC) indices two models can used approximate Bayes factors via: $$BF_{10} = e^{(BIC_0 - BIC_1)/2}$$","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bic_to_bf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","text":"","code":"bic_to_bf(bic, denominator, log = FALSE)"},{"path":"https://easystats.github.io/bayestestR/reference/bic_to_bf.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","text":"bic vector BIC values. denominator BIC value use denominator (test ). log TRUE, return log(BF).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bic_to_bf.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","text":"Bayes Factors corresponding BIC values denominator.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bic_to_bf.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","text":"Wagenmakers, E. J. (2007). practical solution pervasive problems p values. Psychonomic bulletin & review, 14(5), 779-804","code":""},{"path":"https://easystats.github.io/bayestestR/reference/bic_to_bf.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Convert BIC indices to Bayes Factors via the BIC-approximation method. — bic_to_bf","text":"","code":"bic1 <- BIC(lm(Sepal.Length ~ 1, data = iris)) bic2 <- BIC(lm(Sepal.Length ~ Species, data = iris)) bic3 <- BIC(lm(Sepal.Length ~ Species + Petal.Length, data = iris)) bic4 <- BIC(lm(Sepal.Length ~ Species * Petal.Length, data = iris)) bic_to_bf(c(bic1, bic2, bic3, bic4), denominator = bic1) #> [1] 1.000000e+00 1.695852e+29 5.843105e+55 2.203243e+54"},{"path":"https://easystats.github.io/bayestestR/reference/check_prior.html","id":null,"dir":"Reference","previous_headings":"","what":"Check if Prior is Informative — check_prior","title":"Check if Prior is Informative — check_prior","text":"Performs simple test check whether prior informative posterior. idea, accompanying heuristics, discussed Gelman et al. 
2017.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/check_prior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check if Prior is Informative — check_prior","text":"","code":"check_prior(model, method = \"gelman\", simulate_priors = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/check_prior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check if Prior is Informative — check_prior","text":"model stanreg, stanfit, brmsfit, blavaan, MCMCglmm object. method Can \"gelman\" \"lakeland\". \"gelman\" method, SD posterior 0.1 times SD prior, prior considered informative. \"lakeland\" method, prior considered informative posterior falls within 95% HDI prior. simulate_priors prior distributions simulated using simulate_prior() (default; faster) sampled via unupdate() (slower, accurate). ... Currently used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/check_prior.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check if Prior is Informative — check_prior","text":"data frame two columns: parameter names quality prior (might \"informative\", \"uninformative\") \"determinable\" prior distribution determined).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/check_prior.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check if Prior is Informative — check_prior","text":"Gelman, ., Simpson, D., Betancourt, M. (2017). Prior Can Often Understood Context Likelihood. Entropy, 19(10), 555. doi:10.3390/e19100555","code":""},{"path":"https://easystats.github.io/bayestestR/reference/check_prior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check if Prior is Informative — check_prior","text":"","code":"# \\donttest{ library(bayestestR) model <- rstanarm::stan_glm(mpg ~ wt + am, data = mtcars, chains = 1, refresh = 0) check_prior(model, method = \"gelman\") #> Parameter Prior_Quality #> 1 (Intercept) informative #> 2 wt uninformative #> 3 am uninformative check_prior(model, method = \"lakeland\") #> Parameter Prior_Quality #> 1 (Intercept) informative #> 2 wt informative #> 3 am informative # An extreme example where both methods diverge: model <- rstanarm::stan_glm(mpg ~ wt, data = mtcars[1:3, ], prior = normal(-3.3, 1, FALSE), prior_intercept = normal(0, 1000, FALSE), refresh = 0 ) check_prior(model, method = \"gelman\") #> Parameter Prior_Quality #> 1 (Intercept) uninformative #> 2 wt informative check_prior(model, method = \"lakeland\") #> Parameter Prior_Quality #> 1 (Intercept) informative #> 2 wt misinformative # can provide visual confirmation to the Lakeland method plot(si(model, verbose = FALSE)) # }"},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":null,"dir":"Reference","previous_headings":"","what":"Confidence/Credible/Compatibility Interval (CI) — ci","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"Compute Confidence/Credible/Compatibility Intervals (CI) Support Intervals (SI) Bayesian frequentist models. Documentation accessible :","code":""},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"","code":"ci(x, ...) 
# S3 method for class 'numeric' ci(x, ci = 0.95, method = \"ETI\", verbose = TRUE, BF = 1, ...) # S3 method for class 'data.frame' ci(x, ci = 0.95, method = \"ETI\", BF = 1, rvar_col = NULL, verbose = TRUE, ...) # S3 method for class 'sim.merMod' ci( x, ci = 0.95, method = \"ETI\", effects = c(\"fixed\", \"random\", \"all\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'sim' ci(x, ci = 0.95, method = \"ETI\", parameters = NULL, verbose = TRUE, ...) # S3 method for class 'stanreg' ci( x, ci = 0.95, method = \"ETI\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, BF = 1, ... ) # S3 method for class 'brmsfit' ci( x, ci = 0.95, method = \"ETI\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, BF = 1, ... ) # S3 method for class 'BFBayesFactor' ci(x, ci = 0.95, method = \"ETI\", verbose = TRUE, BF = 1, ...) # S3 method for class 'MCMCglmm' ci(x, ci = 0.95, method = \"ETI\", verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"x stanreg brmsfit model, vector representing posterior distribution. ... Currently used. ci Value vector probability CI (0 1) estimated. Default 0.95 (95%). method Can \"ETI\" (default), \"HDI\", \"BCI\", \"SPI\" \"SI\". verbose Toggle warnings. BF amount support required included support interval. rvar_col single character - name rvar column data frame processed. See example p_direction(). effects results fixed effects, random effects returned? applies mixed models. May abbreviated. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"data frame following columns: Parameter model parameter(s), x model-object. x vector, column missing. CI probability credible interval. CI_low, CI_high lower upper credible interval limits parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"Bayesian models Frequentist models","code":""},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"comes interpretation, recommend thinking CI terms \"uncertainty\" \"compatibility\" interval, latter defined \"Given value interval background assumptions, data seem surprising\" (Gelman & Greenland 2019). 
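Since the default method = "ETI" is an equal-tailed quantile interval (the 2.5th to 97.5th percentiles at ci = 0.95, as described above), a quick sanity check against stats::quantile() makes the definition concrete (hypothetical draws):

library(bayestestR)
posterior <- rnorm(1000)
ci(posterior, method = "ETI", ci = 0.95)
# same limits, computed directly as equal-tailed quantiles
quantile(posterior, probs = c(0.025, 0.975))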
also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"Gelman , Greenland S. confidence intervals better termed \"uncertainty intervals\"? BMJ 2019;l5381. 10.1136/bmj.l5381","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/ci.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Confidence/Credible/Compatibility Interval (CI) — ci","text":"","code":"library(bayestestR) posterior <- rnorm(1000) ci(posterior, method = \"ETI\") #> 95% ETI: [-2.00, 1.96] ci(posterior, method = \"HDI\") #> 95% HDI: [-1.91, 2.03] df <- data.frame(replicate(4, rnorm(100))) ci(df, method = \"ETI\", ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> Parameter | 80% ETI | 89% ETI | 95% ETI #> --------------------------------------------------------- #> X1 | [-1.46, 1.35] | [-1.70, 1.63] | [-1.94, 1.94] #> X2 | [-1.21, 1.34] | [-1.51, 1.71] | [-1.81, 2.08] #> X3 | [-1.20, 1.19] | [-1.54, 1.48] | [-2.02, 1.71] #> X4 | [-1.22, 1.51] | [-1.88, 1.61] | [-2.20, 1.82] ci(df, method = \"HDI\", ci = c(0.80, 0.89, 0.95)) #> Highest Density Interval #> #> Parameter | 80% HDI | 89% HDI | 95% HDI #> --------------------------------------------------------- #> X1 | [-1.38, 1.37] | [-1.95, 1.37] | [-1.77, 2.17] #> X2 | [-1.20, 1.35] | [-1.64, 1.52] | [-2.15, 1.80] #> X3 | [-1.21, 1.17] | [-1.46, 1.56] | [-2.07, 1.72] #> X4 | [-1.03, 1.52] | [-1.45, 1.74] | [-2.34, 1.71] model <- suppressWarnings( stan_glm(mpg ~ wt, data = mtcars, chains = 2, iter = 200, refresh = 0) ) ci(model, method = \"ETI\", ci = c(0.80, 0.89)) #> Equal-Tailed Interval #> #> Parameter | 80% ETI | 89% ETI | Effects | Component #> --------------------------------------------------------------------- #> (Intercept) | [34.59, 39.93] | [34.12, 40.56] | fixed | conditional #> wt | [-6.10, -4.52] | [-6.27, -4.33] | fixed | conditional ci(model, method = \"HDI\", ci = c(0.80, 0.89)) #> Highest Density Interval #> #> Parameter | 80% HDI | 89% HDI #> --------------------------------------------- #> (Intercept) | [34.36, 39.67] | [34.20, 40.60] #> wt | [-6.09, -4.51] | [-6.37, -4.47] bf <- ttestBF(x = rnorm(100, 1, 1)) ci(bf, method = \"ETI\") #> Equal-Tailed Interval #> #> Parameter | 95% ETI #> ------------------------- #> Difference | [0.80, 1.23] ci(bf, method = \"HDI\") #> Highest Density Interval #> #> Parameter | 95% HDI #> ------------------------- #> Difference | [0.81, 1.24] model <- emtrends(model, ~1, \"wt\", data = mtcars) ci(model, method = \"ETI\") #> Equal-Tailed Interval #> #> X1 | 95% ETI #> ------------------------ #> overall | [-6.37, -4.20] ci(model, method = \"HDI\") #> Highest Density Interval #> #> X1 | 95% HDI #> ------------------------ #> overall | [-6.37, -4.18]"},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":null,"dir":"Reference","previous_headings":"","what":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"Build contrasts factors equal marginal priors levels. 3 functions give orthogonal contrasts, scaled differently allow different prior specifications (see 'Details'). 
Implementation Singmann & Gronau's bfrms, following description Rouder, Morey, Speckman, & Province (2012, p. 363).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"","code":"contr.equalprior(n, contrasts = TRUE, sparse = FALSE) contr.equalprior_pairs(n, contrasts = TRUE, sparse = FALSE) contr.equalprior_deviations(n, contrasts = TRUE, sparse = FALSE)"},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"n vector levels factor, number levels. contrasts logical indicating whether contrasts computed. sparse logical indicating result sparse (class dgCMatrix), using package Matrix.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"matrix n rows k columns, k=n-1 contrasts TRUE k=n contrasts FALSE.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"using stats::contr.treatment, dummy variable difference level reference level. useful setting different priors coefficient, used one trying set general prior differences means, (well stats::contr.sum others) results unequal marginal priors means difference . can see priors means (narrow prior), likewise pairwise differences (priors differences narrow). solution use one methods provided , result marginally equal priors means differences . Though obscure interpretation parameters, setting equal priors means differences important useful specifying equal priors means factor differences correct estimation Bayes factors contrasts order restrictions multi-level factors (k>2). See info specifying correct priors factors 2 levels Bayes factors vignette. NOTE: setting priors dummy variables, always: Use priors centered 0! location/centered priors meaningless! Use identically-scaled priors dummy variables single factor! contr.equalprior returns original orthogonal-normal contrasts described Rouder, Morey, Speckman, & Province (2012, p. 363). 
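A quick numeric check of the decomposition just described (my reading of Rouder et al., 2012; the calls match the examples below): the columns of contr.equalprior(n) are orthonormal, and every level's row has the same length, which is what turns identical coefficient priors into equal marginal priors on the means.

library(bayestestR)
Q <- contr.equalprior(4)
zapsmall(crossprod(Q)) # identity matrix: columns are orthonormal
zapsmall(rowSums(Q^2)) # equal for every level: (n - 1) / n = 0.75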
Setting contrasts = FALSE returns \\(I_{n} - \\frac{1}{n}\\) matrix.","code":"library(brms) data <- data.frame( group = factor(rep(LETTERS[1:4], each = 3)), y = rnorm(12) ) contrasts(data$group) # R's default contr.treatment #> B C D #> A 0 0 0 #> B 1 0 0 #> C 0 1 0 #> D 0 0 1 model_prior <- brm( y ~ group, data = data, sample_prior = \"only\", # Set the same priors on the 3 dummy variable # (Using an arbitrary scale) prior = set_prior(\"normal(0, 10)\", coef = c(\"groupB\", \"groupC\", \"groupD\")) ) est <- emmeans::emmeans(model_prior, pairwise ~ group) point_estimate(est, centr = \"mean\", disp = TRUE) #> Point Estimate #> #> Parameter | Mean | SD #> ------------------------- #> A | -0.01 | 6.35 #> B | -0.10 | 9.59 #> C | 0.11 | 9.55 #> D | -0.16 | 9.52 #> A - B | 0.10 | 9.94 #> A - C | -0.12 | 9.96 #> A - D | 0.15 | 9.87 #> B - C | -0.22 | 14.38 #> B - D | 0.05 | 14.14 #> C - D | 0.27 | 14.00"},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"contr-equalprior-pairs","dir":"Reference","previous_headings":"","what":"contr.equalprior_pairs","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"Useful setting priors terms pairwise differences means - scales priors defines prior distribution pair-wise differences pairwise differences (e.g., - B, B - C, etc.). means prior distribution, distribution differences matches prior set \"normal(0, 10)\". Success!","code":"contrasts(data$group) <- contr.equalprior_pairs contrasts(data$group) #> [,1] [,2] [,3] #> A 0.0000000 0.6123724 0.0000000 #> B -0.1893048 -0.2041241 0.5454329 #> C -0.3777063 -0.2041241 -0.4366592 #> D 0.5670111 -0.2041241 -0.1087736 model_prior <- brm( y ~ group, data = data, sample_prior = \"only\", # Set the same priors on the 3 dummy variable # (Using an arbitrary scale) prior = set_prior(\"normal(0, 10)\", coef = c(\"group1\", \"group2\", \"group3\")) ) est <- emmeans(model_prior, pairwise ~ group) point_estimate(est, centr = \"mean\", disp = TRUE) #> Point Estimate #> #> Parameter | Mean | SD #> ------------------------- #> A | -0.31 | 7.46 #> B | -0.24 | 7.47 #> C | -0.34 | 7.50 #> D | -0.30 | 7.25 #> A - B | -0.08 | 10.00 #> A - C | 0.03 | 10.03 #> A - D | -0.01 | 9.85 #> B - C | 0.10 | 10.28 #> B - D | 0.06 | 9.94 #> C - D | -0.04 | 10.18"},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"contr-equalprior-deviations","dir":"Reference","previous_headings":"","what":"contr.equalprior_deviations","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"Useful setting priors terms deviations mean grand mean - scales priors defines prior distribution distance (, ) mean one levels might overall mean. (See examples.)","code":""},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors ANOVA designs. Journal Mathematical Psychology, 56(5), 356-374. 
https://doi.org/10.1016/j.jmp.2012.08.001","code":""},{"path":"https://easystats.github.io/bayestestR/reference/contr.equalprior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Contrast Matrices for Equal Marginal Priors in Bayesian Estimation — contr.equalprior","text":"","code":"contr.equalprior(2) # Q_2 in Rouder et al. (2012, p. 363) #> [,1] #> [1,] -0.7071068 #> [2,] 0.7071068 contr.equalprior(5) # equivalent to Q_5 in Rouder et al. (2012, p. 363) #> [,1] [,2] [,3] [,4] #> [1,] 0.000000e+00 0.0000000 0.0000000 0.8944272 #> [2,] -4.163336e-17 0.0000000 0.8660254 -0.2236068 #> [3,] -5.773503e-01 -0.5773503 -0.2886751 -0.2236068 #> [4,] -2.113249e-01 0.7886751 -0.2886751 -0.2236068 #> [5,] 7.886751e-01 -0.2113249 -0.2886751 -0.2236068 ## check decomposition Q3 <- contr.equalprior(3) Q3 %*% t(Q3) ## 2/3 on diagonal and -1/3 on off-diagonal elements #> [,1] [,2] [,3] #> [1,] 0.6666667 -0.3333333 -0.3333333 #> [2,] -0.3333333 0.6666667 -0.3333333 #> [3,] -0.3333333 -0.3333333 0.6666667"},{"path":"https://easystats.github.io/bayestestR/reference/convert_bayesian_as_frequentist.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert (refit) a Bayesian model to frequentist — convert_bayesian_as_frequentist","title":"Convert (refit) a Bayesian model to frequentist — convert_bayesian_as_frequentist","text":"Refit Bayesian model frequentist. Can useful comparisons.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/convert_bayesian_as_frequentist.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert (refit) a Bayesian model to frequentist — convert_bayesian_as_frequentist","text":"","code":"convert_bayesian_as_frequentist(model, data = NULL, REML = TRUE) bayesian_as_frequentist(model, data = NULL, REML = TRUE)"},{"path":"https://easystats.github.io/bayestestR/reference/convert_bayesian_as_frequentist.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert (refit) a Bayesian model to frequentist — convert_bayesian_as_frequentist","text":"model Bayesian model. data Data used model. NULL, try extract model. REML mixed effects, models estimated using restricted maximum likelihood (REML) (TRUE, default) maximum likelihood (FALSE)?","code":""},{"path":"https://easystats.github.io/bayestestR/reference/convert_bayesian_as_frequentist.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Convert (refit) a Bayesian model to frequentist — convert_bayesian_as_frequentist","text":"","code":"# \\donttest{ # Rstanarm ---------------------- # Simple regressions model <- rstanarm::stan_glm(Sepal.Length ~ Species, data = iris, chains = 2, refresh = 0 ) bayesian_as_frequentist(model) #> #> Call: #> stats::lm(formula = formula$conditional, data = data) #> #> Coefficients: #> (Intercept) Speciesversicolor Speciesvirginica #> 5.006 0.930 1.582 #> model <- rstanarm::stan_glm(vs ~ mpg, family = \"binomial\", data = mtcars, chains = 2, refresh = 0 ) bayesian_as_frequentist(model) #> #> Call: stats::glm(formula = formula$conditional, family = family, data = data) #> #> Coefficients: #> (Intercept) mpg #> -8.8331 0.4304 #> #> Degrees of Freedom: 31 Total (i.e. 
Null); 30 Residual #> Null Deviance:\t 43.86 #> Residual Deviance: 25.53 \tAIC: 29.53 # Mixed models model <- rstanarm::stan_glmer( Sepal.Length ~ Petal.Length + (1 | Species), data = iris, chains = 2, refresh = 0 ) bayesian_as_frequentist(model) #> Linear mixed model fit by REML ['lmerMod'] #> Formula: Sepal.Length ~ Petal.Length + (1 | Species) #> Data: data #> REML criterion at convergence: 119.793 #> Random effects: #> Groups Name Std.Dev. #> Species (Intercept) 1.0778 #> Residual 0.3381 #> Number of obs: 150, groups: Species, 3 #> Fixed Effects: #> (Intercept) Petal.Length #> 2.5045 0.8885 model <- rstanarm::stan_glmer(vs ~ mpg + (1 | cyl), family = \"binomial\", data = mtcars, chains = 2, refresh = 0 ) bayesian_as_frequentist(model) #> Generalized linear mixed model fit by maximum likelihood (Laplace #> Approximation) [glmerMod] #> Family: binomial ( logit ) #> Formula: vs ~ mpg + (1 | cyl) #> Data: data #> AIC BIC logLik deviance df.resid #> 31.1738 35.5710 -12.5869 25.1738 29 #> Random effects: #> Groups Name Std.Dev. #> cyl (Intercept) 1.925 #> Number of obs: 32, groups: cyl, 3 #> Fixed Effects: #> (Intercept) mpg #> -3.9227 0.1723 # }"},{"path":"https://easystats.github.io/bayestestR/reference/density_at.html","id":null,"dir":"Reference","previous_headings":"","what":"Density Probability at a Given Value — density_at","title":"Density Probability at a Given Value — density_at","text":"Compute density value given point distribution (.e., value y axis value x distribution).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/density_at.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Density Probability at a Given Value — density_at","text":"","code":"density_at(posterior, x, precision = 2^10, method = \"kernel\", ...)"},{"path":"https://easystats.github.io/bayestestR/reference/density_at.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Density Probability at a Given Value — density_at","text":"posterior Vector representing posterior distribution. x value get approximate probability. precision Number points density data. See n parameter density. method Density estimation method. Can \"kernel\" (default), \"logspline\" \"KernSmooth\". ... Currently used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/density_at.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Density Probability at a Given Value — density_at","text":"","code":"library(bayestestR) posterior <- distribution_normal(n = 10) density_at(posterior, 0) #> [1] 0.3206131 density_at(posterior, c(0, 1)) #> [1] 0.3206131 0.2374056"},{"path":"https://easystats.github.io/bayestestR/reference/describe_posterior.html","id":null,"dir":"Reference","previous_headings":"","what":"Describe Posterior Distributions — describe_posterior","title":"Describe Posterior Distributions — describe_posterior","text":"Compute indices relevant describe characterize posterior distributions.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/describe_posterior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Describe Posterior Distributions — describe_posterior","text":"","code":"describe_posterior(posterior, ...) 
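# The class-specific methods below extend this generic. As noted in the
# Details section, each summary component can be dropped by setting the
# matching argument to NULL, e.g.
# describe_posterior(posterior, centrality = NULL, test = NULL)
# returns only the credible interval.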
# S3 method for class 'numeric' describe_posterior( posterior, centrality = \"median\", dispersion = FALSE, ci = 0.95, ci_method = \"eti\", test = c(\"p_direction\", \"rope\"), rope_range = \"default\", rope_ci = 0.95, keep_iterations = FALSE, bf_prior = NULL, BF = 1, verbose = TRUE, ... ) # S3 method for class 'data.frame' describe_posterior( posterior, centrality = \"median\", dispersion = FALSE, ci = 0.95, ci_method = \"eti\", test = c(\"p_direction\", \"rope\"), rope_range = \"default\", rope_ci = 0.95, keep_iterations = FALSE, bf_prior = NULL, BF = 1, rvar_col = NULL, verbose = TRUE, ... ) # S3 method for class 'stanreg' describe_posterior( posterior, centrality = \"median\", dispersion = FALSE, ci = 0.95, ci_method = \"eti\", test = c(\"p_direction\", \"rope\"), rope_range = \"default\", rope_ci = 0.95, keep_iterations = FALSE, bf_prior = NULL, diagnostic = c(\"ESS\", \"Rhat\"), priors = FALSE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, BF = 1, verbose = TRUE, ... ) # S3 method for class 'brmsfit' describe_posterior( posterior, centrality = \"median\", dispersion = FALSE, ci = 0.95, ci_method = \"eti\", test = c(\"p_direction\", \"rope\"), rope_range = \"default\", rope_ci = 0.95, keep_iterations = FALSE, bf_prior = NULL, diagnostic = c(\"ESS\", \"Rhat\"), effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\", \"location\", \"distributional\", \"auxiliary\"), parameters = NULL, BF = 1, priors = FALSE, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/describe_posterior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Describe Posterior Distributions — describe_posterior","text":"posterior vector, data frame model posterior draws. bayestestR supports wide range models (see methods(\"describe_posterior\")) documented 'Usage' section, methods classes mostly resemble arguments .numeric method. ... Additional arguments passed methods. centrality point-estimates (centrality indices) compute. Character (vector) list one options: \"median\", \"mean\", \"MAP\" (see map_estimate()), \"trimmed\" (just mean(x, trim = threshold)), \"mode\" \"\". dispersion Logical, TRUE, computes indices dispersion related estimate(s) (SD MAD mean median, respectively). Dispersion available \"MAP\" \"mode\" centrality indices. ci Value vector probability CI (0 1) estimated. Default 0.95 (95%). ci_method type index used Credible Interval. Can \"ETI\" (default, see eti()), \"HDI\" (see hdi()), \"BCI\" (see bci()), \"SPI\" (see spi()), \"SI\" (see si()). test indices effect existence compute. Character (vector) list one options: \"p_direction\" (\"pd\"), \"rope\", \"p_map\", \"p_significance\" (\"ps\"), \"p_rope\", \"equivalence_test\" (\"equitest\"), \"bayesfactor\" (\"bf\") \"\" compute tests. \"test\", corresponding bayestestR function called (e.g. rope() p_direction()) results included summary output. rope_range ROPE's lower higher bounds. vector two values (e.g., c(-0.1, 0.1)), \"default\" list numeric vectors length numbers parameters. \"default\", bounds set x +- 0.1*SD(response). rope_ci Credible Interval (CI) probability, corresponding proportion HDI, use percentage ROPE. keep_iterations TRUE, keep iterations (draws) bootstrapped Bayesian models. added additional columns named iter_1, iter_2, .... 
can reshape long format running reshape_iterations(). bf_prior Distribution representing prior computation Bayes factors / SI. Used input posterior, otherwise (case models) ignored. BF amount support required included support interval. verbose Toggle warnings. rvar_col single character - name rvar column data frame processed. See example p_direction(). diagnostic Diagnostic metrics compute. Character (vector) list one options: \"ESS\", \"Rhat\", \"MCSE\" \"\". priors Add prior used parameter. effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/describe_posterior.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Describe Posterior Distributions — describe_posterior","text":"One components point estimates (like posterior mean median), intervals tests can omitted summary output setting related argument NULL. example, test = NULL centrality = NULL return HDI (CI).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/describe_posterior.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Describe Posterior Distributions — describe_posterior","text":"Makowski, D., Ben-Shachar, M. S., Chen, S. H. ., Lüdecke, D. (2019). Indices Effect Existence Significance Bayesian Framework. Frontiers Psychology 2019;10:2767. 
doi:10.3389/fpsyg.2019.02767 Region Practical Equivalence (ROPE) Bayes factors","code":""},{"path":"https://easystats.github.io/bayestestR/reference/describe_posterior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Describe Posterior Distributions — describe_posterior","text":"","code":"library(bayestestR) if (require(\"logspline\")) { x <- rnorm(1000) describe_posterior(x, verbose = FALSE) describe_posterior(x, centrality = \"all\", dispersion = TRUE, test = \"all\", verbose = FALSE ) describe_posterior(x, ci = c(0.80, 0.90), verbose = FALSE) df <- data.frame(replicate(4, rnorm(100))) describe_posterior(df, verbose = FALSE) describe_posterior( df, centrality = \"all\", dispersion = TRUE, test = \"all\", verbose = FALSE ) describe_posterior(df, ci = c(0.80, 0.90), verbose = FALSE) df <- data.frame(replicate(4, rnorm(20))) head(reshape_iterations( describe_posterior(df, keep_iterations = TRUE, verbose = FALSE) )) } #> Summary of Posterior Distribution #> #> Parameter | Median | 95% CI | pd | ROPE | % in ROPE #> ----------------------------------------------------------------------- #> X1 | -0.21 | [-1.70, 1.42] | 60.00% | [-0.10, 0.10] | 5.56% #> X2 | -0.21 | [-2.38, 2.41] | 55.00% | [-0.10, 0.10] | 11.11% #> X3 | 0.22 | [-1.96, 2.61] | 55.00% | [-0.10, 0.10] | 5.56% #> X4 | -0.20 | [-1.58, 0.61] | 65.00% | [-0.10, 0.10] | 16.67% #> X1 | -0.21 | [-1.70, 1.42] | 60.00% | [-0.10, 0.10] | 5.56% #> X2 | -0.21 | [-2.38, 2.41] | 55.00% | [-0.10, 0.10] | 11.11% #> #> Parameter | iter_index | iter_group | iter_value #> ------------------------------------------------ #> X1 | 1.00 | 1.00 | 0.51 #> X2 | 2.00 | 1.00 | -0.36 #> X3 | 3.00 | 1.00 | 1.32 #> X4 | 4.00 | 1.00 | 0.34 #> X1 | 1.00 | 2.00 | -0.14 #> X2 | 2.00 | 2.00 | -0.50 # \\donttest{ # rstanarm models # ----------------------------------------------- if (require(\"rstanarm\") && require(\"emmeans\")) { model <- suppressWarnings( stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) describe_posterior(model) describe_posterior(model, centrality = \"all\", dispersion = TRUE, test = \"all\") describe_posterior(model, ci = c(0.80, 0.90)) describe_posterior(model, rope_range = list(c(-10, 5), c(-0.2, 0.2), \"default\")) # emmeans estimates # ----------------------------------------------- describe_posterior(emtrends(model, ~1, \"wt\")) } #> Warning: Bayes factors might not be precise. #> For precise Bayes factors, sampling at least 40,000 posterior samples is #> recommended. 
#> Summary of Posterior Distribution #> #> X1 | Median | 95% CI | pd | ROPE | % in ROPE #> -------------------------------------------------------------------- #> overall | -5.37 | [-6.57, -4.25] | 100% | [-0.10, 0.10] | 0% # BayesFactor objects # ----------------------------------------------- if (require(\"BayesFactor\")) { bf <- ttestBF(x = rnorm(100, 1, 1)) describe_posterior(bf) describe_posterior(bf, centrality = \"all\", dispersion = TRUE, test = \"all\") describe_posterior(bf, ci = c(0.80, 0.90)) } #> Summary of Posterior Distribution #> #> Parameter | Median | 80% CI | 90% CI | pd | ROPE #> ------------------------------------------------------------------------ #> Difference | 0.97 | [0.84, 1.09] | [0.81, 1.12] | 100% | [-0.09, 0.09] #> #> Parameter | % in ROPE | BF | Prior #> ------------------------------------------------------ #> Difference | 0% | 1.27e+15 | Cauchy (0 +- 0.71) # }"},{"path":"https://easystats.github.io/bayestestR/reference/describe_prior.html","id":null,"dir":"Reference","previous_headings":"","what":"Describe Priors — describe_prior","title":"Describe Priors — describe_prior","text":"Returns summary priors used model.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/describe_prior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Describe Priors — describe_prior","text":"","code":"describe_prior(model, ...) # S3 method for class 'brmsfit' describe_prior( model, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\", \"location\", \"distributional\", \"auxiliary\"), parameters = NULL, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/describe_prior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Describe Priors — describe_prior","text":"model Bayesian model. ... Currently used. effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/describe_prior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Describe Priors — describe_prior","text":"","code":"# \\donttest{ library(bayestestR) # rstanarm models # ----------------------------------------------- if (require(\"rstanarm\")) { model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars) describe_prior(model) } #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 2.1e-05 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.21 seconds. #> Chain 1: Adjust your expectations accordingly! 
#> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.049 seconds (Warm-up) #> Chain 1: 0.047 seconds (Sampling) #> Chain 1: 0.096 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 1.1e-05 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.048 seconds (Warm-up) #> Chain 2: 0.045 seconds (Sampling) #> Chain 2: 0.093 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 1.1e-05 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.046 seconds (Warm-up) #> Chain 3: 0.044 seconds (Sampling) #> Chain 3: 0.09 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 1.1e-05 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. #> Chain 4: Adjust your expectations accordingly! 
#> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.059 seconds (Warm-up) #> Chain 4: 0.041 seconds (Sampling) #> Chain 4: 0.1 seconds (Total) #> Chain 4: #> Parameter Prior_Distribution Prior_Location Prior_Scale #> 1 (Intercept) normal 20.09062 15.067370 #> 2 wt normal 0.00000 15.399106 #> 3 cyl normal 0.00000 8.436748 # brms models # ----------------------------------------------- if (require(\"brms\")) { model <- brms::brm(mpg ~ wt + cyl, data = mtcars) describe_prior(model) } #> Compiling Stan program... #> Start sampling #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 9e-06 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.019 seconds (Warm-up) #> Chain 1: 0.016 seconds (Sampling) #> Chain 1: 0.035 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 3e-06 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.021 seconds (Warm-up) #> Chain 2: 0.016 seconds (Sampling) #> Chain 2: 0.037 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 4e-06 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds. #> Chain 3: Adjust your expectations accordingly! 
#> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.019 seconds (Warm-up) #> Chain 3: 0.016 seconds (Sampling) #> Chain 3: 0.035 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 3e-06 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 4: Adjust your expectations accordingly! #> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.02 seconds (Warm-up) #> Chain 4: 0.018 seconds (Sampling) #> Chain 4: 0.038 seconds (Total) #> Chain 4: #> Parameter Prior_Distribution Prior_Location Prior_Scale Prior_df #> 1 b_Intercept student_t 19.2 5.4 3 #> 2 b_wt uniform NA NA NA #> 3 b_cyl uniform NA NA NA #> 4 sigma student_t 0.0 5.4 3 # BayesFactor objects # ----------------------------------------------- if (require(\"BayesFactor\")) { bf <- ttestBF(x = rnorm(100, 1, 1)) describe_prior(bf) } #> Parameter Prior_Distribution Prior_Location Prior_Scale #> 1 Difference cauchy 0 0.7071068 # }"},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_draws.html","id":null,"dir":"Reference","previous_headings":"","what":"Diagnostic values for each iteration — diagnostic_draws","title":"Diagnostic values for each iteration — diagnostic_draws","text":"Returns accumulated log-posterior, average Metropolis acceptance rate, divergent transitions, treedepth rather terminated evolution normally.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_draws.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Diagnostic values for each iteration — diagnostic_draws","text":"","code":"diagnostic_draws(posterior, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_draws.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Diagnostic values for each iteration — diagnostic_draws","text":"posterior stanreg, stanfit, brmsfit, blavaan object. ... 
Currently used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_draws.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Diagnostic values for each iteration — diagnostic_draws","text":"","code":"# \\donttest{ set.seed(333) if (require(\"brms\", quietly = TRUE)) { model <- suppressWarnings(brm(mpg ~ wt * cyl * vs, data = mtcars, iter = 100, control = list(adapt_delta = 0.80), refresh = 0 )) diagnostic_draws(model) } #> Compiling Stan program... #> Start sampling #> Chain Iteration Acceptance_Rate Step_Size Tree_Depth n_Leapfrog Divergent #> 1 1 1 0.8635300 0.03468417 6 63 0 #> 2 1 2 0.9136471 0.03468417 7 255 0 #> 3 1 3 0.9863575 0.03468417 10 1023 0 #> 4 1 4 0.9662903 0.03468417 6 127 0 #> 5 1 5 0.9900292 0.03468417 6 63 0 #> 6 1 6 0.9289357 0.03468417 9 639 0 #> 7 1 7 0.9969973 0.03468417 6 63 0 #> 8 1 8 0.9937014 0.03468417 10 1023 0 #> 9 1 9 0.9797568 0.03468417 8 383 0 #> 10 1 10 0.9855480 0.03468417 9 639 0 #> 11 1 11 0.8617986 0.03468417 8 447 0 #> 12 1 12 0.9949926 0.03468417 9 639 0 #> 13 1 13 0.9937324 0.03468417 10 1023 0 #> 14 1 14 0.9770628 0.03468417 7 191 0 #> 15 1 15 0.9194509 0.03468417 10 1023 0 #> 16 1 16 0.9568499 0.03468417 6 127 0 #> 17 1 17 0.9861167 0.03468417 6 63 0 #> 18 1 18 0.9507910 0.03468417 9 895 0 #> 19 1 19 0.9887289 0.03468417 10 1023 0 #> 20 1 20 0.9993835 0.03468417 5 31 0 #> 21 1 21 0.9843024 0.03468417 10 1023 0 #> 22 1 22 0.9797890 0.03468417 8 383 0 #> 23 1 23 0.9944690 0.03468417 8 511 0 #> 24 1 24 0.9767860 0.03468417 8 383 0 #> 25 1 25 0.9383421 0.03468417 9 959 0 #> 26 1 26 0.9570864 0.03468417 9 655 0 #> 27 1 27 0.9778837 0.03468417 10 1023 0 #> 28 1 28 0.9799058 0.03468417 7 255 0 #> 29 1 29 0.9462549 0.03468417 6 63 0 #> 30 1 30 0.8323784 0.03468417 7 127 0 #> 31 1 31 0.9895450 0.03468417 10 1023 0 #> 32 1 32 0.9926231 0.03468417 8 335 0 #> 33 1 33 0.9927103 0.03468417 5 31 0 #> 34 1 34 0.9987344 0.03468417 6 63 0 #> 35 1 35 0.9830424 0.03468417 6 63 0 #> 36 1 36 0.9682450 0.03468417 7 127 0 #> 37 1 37 0.9933061 0.03468417 10 1023 0 #> 38 1 38 0.9403289 0.03468417 10 1023 0 #> 39 1 39 0.9995874 0.03468417 10 1023 0 #> 40 1 40 0.8767612 0.03468417 9 1023 0 #> 41 1 41 0.9725447 0.03468417 10 1023 0 #> 42 1 42 0.9861283 0.03468417 8 383 0 #> 43 1 43 0.8548139 0.03468417 10 1023 0 #> 44 1 44 0.8389647 0.03468417 10 1023 0 #> 45 1 45 0.9554398 0.03468417 5 63 0 #> 46 1 46 0.9761778 0.03468417 10 1023 0 #> 47 1 47 0.9701103 0.03468417 7 127 0 #> 48 1 48 0.9922098 0.03468417 7 191 0 #> 49 1 49 0.9621097 0.03468417 8 511 0 #> 50 1 50 0.9994729 0.03468417 9 639 0 #> 51 2 1 0.9977753 0.03332572 10 1023 0 #> 52 2 2 0.8687279 0.03332572 7 191 0 #> 53 2 3 0.9933227 0.03332572 9 767 0 #> 54 2 4 0.9684332 0.03332572 6 63 0 #> 55 2 5 0.7551614 0.03332572 10 1023 0 #> 56 2 6 0.9266202 0.03332572 7 127 0 #> 57 2 7 0.9960281 0.03332572 10 1023 0 #> 58 2 8 0.9963768 0.03332572 6 63 0 #> 59 2 9 0.9775107 0.03332572 8 447 0 #> 60 2 10 0.7663591 0.03332572 5 63 0 #> 61 2 11 0.9954953 0.03332572 6 63 0 #> 62 2 12 0.9986424 0.03332572 6 127 0 #> 63 2 13 0.9925190 0.03332572 10 1023 0 #> 64 2 14 0.9999734 0.03332572 10 1023 0 #> 65 2 15 0.8988157 0.03332572 3 7 0 #> 66 2 16 0.9068830 0.03332572 10 1023 0 #> 67 2 17 0.9299921 0.03332572 10 1023 0 #> 68 2 18 0.9866188 0.03332572 10 1023 0 #> 69 2 19 0.9163855 0.03332572 10 1023 0 #> 70 2 20 0.9925532 0.03332572 7 255 0 #> 71 2 21 0.9876059 0.03332572 7 191 0 #> 72 2 22 0.9880354 0.03332572 9 1023 0 #> 73 2 23 0.8505949 0.03332572 10 1023 0 #> 74 
2 24 0.9905202 0.03332572 6 63 0 #> 75 2 25 0.7128606 0.03332572 6 127 0 #> 76 2 26 0.9990718 0.03332572 10 1023 0 #> 77 2 27 0.9784712 0.03332572 7 127 0 #> 78 2 28 0.9965097 0.03332572 8 383 0 #> 79 2 29 0.9942409 0.03332572 6 63 0 #> 80 2 30 0.8183472 0.03332572 9 655 0 #> 81 2 31 0.9680243 0.03332572 7 255 0 #> 82 2 32 0.9890766 0.03332572 5 31 0 #> 83 2 33 0.8715178 0.03332572 10 1023 0 #> 84 2 34 0.9362153 0.03332572 10 1023 0 #> 85 2 35 0.9997559 0.03332572 8 511 0 #> 86 2 36 0.8987255 0.03332572 3 15 0 #> 87 2 37 0.9880821 0.03332572 9 831 0 #> 88 2 38 0.9293936 0.03332572 10 1023 0 #> 89 2 39 0.9527558 0.03332572 10 1023 0 #> 90 2 40 0.8142194 0.03332572 10 1023 0 #> 91 2 41 0.9874643 0.03332572 10 1023 0 #> 92 2 42 0.9770989 0.03332572 9 1023 0 #> 93 2 43 0.8866260 0.03332572 9 895 0 #> 94 2 44 0.8828927 0.03332572 10 1023 0 #> 95 2 45 0.9949399 0.03332572 7 255 0 #> 96 2 46 0.9146691 0.03332572 7 191 0 #> 97 2 47 0.9659167 0.03332572 10 1023 0 #> 98 2 48 0.9957213 0.03332572 10 1023 0 #> 99 2 49 0.9404257 0.03332572 7 191 0 #> 100 2 50 0.9803275 0.03332572 8 319 0 #> 101 3 1 0.8155798 0.03742549 7 191 0 #> 102 3 2 0.9565482 0.03742549 10 1023 0 #> 103 3 3 0.9060364 0.03742549 7 127 0 #> 104 3 4 0.9684992 0.03742549 9 575 0 #> 105 3 5 0.9600335 0.03742549 10 1023 0 #> 106 3 6 0.9894624 0.03742549 6 63 0 #> 107 3 7 0.8202755 0.03742549 6 127 0 #> 108 3 8 0.9680231 0.03742549 10 1023 0 #> 109 3 9 0.8682610 0.03742549 10 1023 0 #> 110 3 10 0.9976172 0.03742549 7 191 0 #> 111 3 11 0.9476391 0.03742549 10 1023 0 #> 112 3 12 0.9262045 0.03742549 8 383 0 #> 113 3 13 0.9867719 0.03742549 7 127 0 #> 114 3 14 0.9935106 0.03742549 5 31 0 #> 115 3 15 0.8351069 0.03742549 10 1023 0 #> 116 3 16 0.9793729 0.03742549 6 63 0 #> 117 3 17 0.9247268 0.03742549 10 1023 0 #> 118 3 18 0.9983438 0.03742549 10 1023 0 #> 119 3 19 0.9405665 0.03742549 6 127 0 #> 120 3 20 0.9904495 0.03742549 8 511 0 #> 121 3 21 0.9905696 0.03742549 9 639 0 #> 122 3 22 0.8409423 0.03742549 8 319 0 #> 123 3 23 0.9956203 0.03742549 7 159 0 #> 124 3 24 0.8021489 0.03742549 8 447 0 #> 125 3 25 0.9744307 0.03742549 10 1023 0 #> 126 3 26 0.9732865 0.03742549 10 1023 0 #> 127 3 27 0.9197724 0.03742549 6 127 0 #> 128 3 28 0.9771663 0.03742549 7 255 0 #> 129 3 29 0.8528693 0.03742549 10 1023 0 #> 130 3 30 0.9857750 0.03742549 9 575 0 #> 131 3 31 0.9776708 0.03742549 6 127 0 #> 132 3 32 0.9103742 0.03742549 10 1023 0 #> 133 3 33 0.8127369 0.03742549 9 975 0 #> 134 3 34 0.9979473 0.03742549 10 1023 0 #> 135 3 35 0.9930098 0.03742549 10 1023 0 #> 136 3 36 0.9739349 0.03742549 10 1023 0 #> 137 3 37 0.9803240 0.03742549 10 1023 0 #> 138 3 38 0.9944887 0.03742549 10 1023 0 #> 139 3 39 0.9891067 0.03742549 10 1023 0 #> 140 3 40 0.9473967 0.03742549 9 607 0 #> 141 3 41 0.9985215 0.03742549 8 447 0 #> 142 3 42 0.9952529 0.03742549 9 671 0 #> 143 3 43 0.8960930 0.03742549 7 175 0 #> 144 3 44 0.9864406 0.03742549 9 895 0 #> 145 3 45 0.9969098 0.03742549 7 191 0 #> 146 3 46 0.7906908 0.03742549 9 911 0 #> 147 3 47 0.8782428 0.03742549 6 127 0 #> 148 3 48 0.9951144 0.03742549 6 63 0 #> 149 3 49 0.9446957 0.03742549 10 1023 0 #> 150 3 50 0.9983473 0.03742549 10 1023 0 #> 151 4 1 0.7601993 0.03204934 10 1023 0 #> 152 4 2 0.9471750 0.03204934 10 1023 0 #> 153 4 3 0.9960623 0.03204934 10 1023 0 #> 154 4 4 0.9909054 0.03204934 9 1023 0 #> 155 4 5 0.6559207 0.03204934 8 255 0 #> 156 4 6 0.9831670 0.03204934 8 407 0 #> 157 4 7 0.9634444 0.03204934 8 255 0 #> 158 4 8 0.9564323 0.03204934 10 1023 0 #> 159 4 9 0.9956201 0.03204934 7 159 0 #> 160 4 10 
0.9656818 0.03204934 7 127 0 #> 161 4 11 0.9472236 0.03204934 8 511 0 #> 162 4 12 0.9997053 0.03204934 10 1023 0 #> 163 4 13 0.9977500 0.03204934 10 1023 0 #> 164 4 14 0.9206532 0.03204934 10 1023 0 #> 165 4 15 0.9921322 0.03204934 8 383 0 #> 166 4 16 0.9503797 0.03204934 10 1023 0 #> 167 4 17 0.9978729 0.03204934 10 1023 0 #> 168 4 18 0.5197365 0.03204934 10 1023 0 #> 169 4 19 0.9963902 0.03204934 10 1023 0 #> 170 4 20 0.9900801 0.03204934 6 127 0 #> 171 4 21 0.9547605 0.03204934 6 127 0 #> 172 4 22 0.9878980 0.03204934 9 831 0 #> 173 4 23 0.9957027 0.03204934 7 127 0 #> 174 4 24 0.9596356 0.03204934 9 783 0 #> 175 4 25 0.9987294 0.03204934 6 63 0 #> 176 4 26 0.9829070 0.03204934 5 63 0 #> 177 4 27 0.8891739 0.03204934 8 423 0 #> 178 4 28 0.9505994 0.03204934 10 1023 0 #> 179 4 29 0.9929616 0.03204934 6 63 0 #> 180 4 30 0.7685323 0.03204934 6 79 0 #> 181 4 31 0.9900524 0.03204934 9 911 0 #> 182 4 32 0.9579094 0.03204934 7 255 0 #> 183 4 33 0.9937229 0.03204934 8 455 0 #> 184 4 34 0.9798194 0.03204934 8 383 0 #> 185 4 35 0.9502694 0.03204934 9 703 0 #> 186 4 36 0.9891410 0.03204934 6 127 0 #> 187 4 37 0.9968589 0.03204934 6 63 0 #> 188 4 38 0.9649275 0.03204934 9 975 0 #> 189 4 39 0.9441781 0.03204934 9 895 0 #> 190 4 40 0.9700667 0.03204934 4 15 0 #> 191 4 41 0.9007986 0.03204934 10 1023 0 #> 192 4 42 0.9788679 0.03204934 8 383 0 #> 193 4 43 0.8766682 0.03204934 7 255 0 #> 194 4 44 0.9638160 0.03204934 10 1023 0 #> 195 4 45 0.9948981 0.03204934 9 767 0 #> 196 4 46 0.9952965 0.03204934 10 1023 0 #> 197 4 47 0.9811392 0.03204934 8 351 0 #> 198 4 48 0.9838751 0.03204934 4 31 0 #> 199 4 49 0.9346458 0.03204934 9 895 0 #> 200 4 50 0.8551706 0.03204934 10 1023 0 #> Energy LogPosterior #> 1 81.68979 -78.53575 #> 2 81.30190 -79.44012 #> 3 83.73613 -77.76254 #> 4 83.83327 -78.54812 #> 5 81.58150 -77.67229 #> 6 80.76513 -77.72217 #> 7 78.86958 -76.65568 #> 8 80.47306 -76.57489 #> 9 80.00525 -77.24579 #> 10 80.38747 -77.69605 #> 11 83.50795 -78.71900 #> 12 79.99357 -76.68875 #> 13 81.08848 -78.41224 #> 14 80.44635 -78.40657 #> 15 83.25134 -76.90618 #> 16 81.55041 -79.25998 #> 17 81.93603 -79.45798 #> 18 82.52994 -79.45795 #> 19 82.56797 -78.76203 #> 20 80.91393 -79.51177 #> 21 82.62931 -79.59458 #> 22 82.52404 -77.47962 #> 23 81.75065 -79.21792 #> 24 81.54242 -77.54294 #> 25 80.28798 -76.43826 #> 26 81.23868 -77.89696 #> 27 84.82903 -79.69482 #> 28 84.27202 -79.33650 #> 29 82.01173 -78.61202 #> 30 83.52101 -80.83462 #> 31 84.19173 -80.50366 #> 32 83.22688 -80.28814 #> 33 80.95878 -78.06432 #> 34 83.41489 -79.20627 #> 35 81.76939 -76.08874 #> 36 78.81229 -77.83248 #> 37 79.71623 -77.32440 #> 38 83.44286 -78.74506 #> 39 81.74730 -77.08450 #> 40 80.00788 -76.52118 #> 41 78.52085 -75.14859 #> 42 76.14307 -74.66305 #> 43 78.17077 -76.99175 #> 44 82.32819 -75.88508 #> 45 82.69528 -78.50423 #> 46 84.08986 -80.33075 #> 47 86.27963 -81.32510 #> 48 83.57613 -78.75639 #> 49 83.37602 -80.38082 #> 50 85.22319 -81.98533 #> 51 79.04425 -75.93612 #> 52 79.59342 -76.88132 #> 53 81.98043 -76.85721 #> 54 84.09465 -81.35656 #> 55 87.52969 -82.04522 #> 56 84.40385 -80.15068 #> 57 83.29075 -79.89185 #> 58 81.02457 -77.92864 #> 59 79.48660 -75.45440 #> 60 81.65772 -76.70111 #> 61 78.79809 -75.66794 #> 62 77.92424 -75.54409 #> 63 81.11599 -77.93756 #> 64 82.91378 -77.97202 #> 65 81.60834 -78.62578 #> 66 83.37749 -78.77475 #> 67 81.49486 -77.59621 #> 68 81.84369 -78.65842 #> 69 83.21586 -77.13988 #> 70 81.77040 -78.27169 #> 71 80.24218 -75.79366 #> 72 80.95216 -75.74954 #> 73 85.45359 -80.36946 #> 74 81.86366 -77.80998 #> 
75 84.36743 -78.75500 #> 76 84.03165 -81.71340 #> 77 85.38472 -81.17972 #> 78 87.72757 -78.53602 #> 79 81.70592 -79.29349 #> 80 89.51925 -83.20471 #> 81 88.66343 -83.49537 #> 82 85.51220 -82.51621 #> 83 88.48239 -82.78556 #> 84 86.23336 -80.13370 #> 85 82.87816 -79.53007 #> 86 85.66489 -80.89422 #> 87 90.04579 -80.05998 #> 88 87.75324 -82.60976 #> 89 86.50583 -78.22795 #> 90 83.36130 -78.28413 #> 91 82.86748 -78.98554 #> 92 84.29937 -78.10824 #> 93 82.14089 -79.40267 #> 94 82.73149 -80.67393 #> 95 84.70622 -81.30023 #> 96 89.91822 -85.70927 #> 97 93.83836 -81.46507 #> 98 85.20532 -81.23038 #> 99 84.54402 -79.25060 #> 100 83.81674 -77.61327 #> 101 87.84960 -80.90316 #> 102 83.56417 -78.36262 #> 103 81.36451 -79.69073 #> 104 84.53421 -81.87829 #> 105 83.92396 -79.49371 #> 106 82.05882 -77.34125 #> 107 79.43416 -76.36843 #> 108 78.99842 -76.88755 #> 109 89.20045 -82.35807 #> 110 89.71731 -85.52151 #> 111 91.03985 -83.68061 #> 112 89.72210 -85.08433 #> 113 88.48898 -83.66736 #> 114 85.29272 -79.85449 #> 115 85.05584 -80.59265 #> 116 86.14571 -81.42476 #> 117 87.89185 -80.74854 #> 118 88.68445 -83.90403 #> 119 89.87877 -82.33156 #> 120 85.60347 -80.76304 #> 121 87.87220 -82.80787 #> 122 88.01223 -81.47572 #> 123 84.11699 -79.98619 #> 124 87.77799 -83.58798 #> 125 86.39205 -78.62288 #> 126 89.77851 -85.15048 #> 127 88.10457 -82.96944 #> 128 88.49397 -78.50942 #> 129 85.00417 -76.89855 #> 130 79.76467 -76.28304 #> 131 80.15671 -76.97819 #> 132 84.73166 -79.85099 #> 133 86.48743 -81.80299 #> 134 90.36706 -87.56979 #> 135 91.32256 -77.72510 #> 136 82.83510 -78.23065 #> 137 81.86588 -77.50455 #> 138 83.28792 -77.62655 #> 139 81.73475 -75.42355 #> 140 80.24097 -77.58810 #> 141 80.02829 -77.88882 #> 142 81.95778 -76.93972 #> 143 80.35629 -76.19316 #> 144 77.97606 -76.20728 #> 145 78.93616 -77.04431 #> 146 84.85137 -78.47213 #> 147 86.10982 -80.57124 #> 148 85.22013 -81.62952 #> 149 85.16246 -76.54344 #> 150 78.63998 -75.95998 #> 151 85.09025 -78.61080 #> 152 84.10733 -81.79794 #> 153 85.92464 -83.01607 #> 154 85.51669 -81.25821 #> 155 89.79504 -83.21534 #> 156 87.48172 -80.87003 #> 157 84.27767 -76.44178 #> 158 80.14010 -76.75732 #> 159 77.44618 -75.36220 #> 160 78.00573 -76.10621 #> 161 79.32189 -77.88368 #> 162 80.32595 -77.78867 #> 163 81.65414 -79.27375 #> 164 86.60368 -78.97770 #> 165 83.43989 -78.77708 #> 166 82.12886 -76.53895 #> 167 79.44107 -77.20244 #> 168 89.00809 -82.69022 #> 169 88.11850 -78.93262 #> 170 83.11158 -78.67737 #> 171 82.11975 -77.40317 #> 172 81.67231 -78.88842 #> 173 80.68019 -76.97997 #> 174 79.23976 -77.34819 #> 175 78.67661 -77.87987 #> 176 79.36949 -77.91093 #> 177 81.28618 -77.91514 #> 178 83.50125 -77.54460 #> 179 79.29785 -77.86241 #> 180 81.69942 -78.75083 #> 181 83.85002 -78.72521 #> 182 82.14366 -78.13542 #> 183 79.35053 -77.44870 #> 184 80.39313 -78.60093 #> 185 83.93737 -80.50491 #> 186 84.07392 -78.84227 #> 187 81.65765 -79.31137 #> 188 84.54190 -82.44243 #> 189 88.29899 -85.10235 #> 190 91.25734 -89.05588 #> 191 95.95799 -86.31677 #> 192 90.32763 -81.04195 #> 193 85.14247 -80.29119 #> 194 84.00587 -80.27536 #> 195 86.07170 -83.35788 #> 196 86.94719 -80.06940 #> 197 84.22259 -79.43354 #> 198 81.80435 -77.56382 #> 199 81.70503 -79.41256 #> 200 85.60565 -82.82386 # }"},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_posterior.html","id":null,"dir":"Reference","previous_headings":"","what":"Posteriors Sampling Diagnostic — diagnostic_posterior","title":"Posteriors Sampling Diagnostic — diagnostic_posterior","text":"Extract diagnostic metrics 
(Effective Sample Size (ESS), Rhat Monte Carlo Standard Error MCSE).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_posterior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Posteriors Sampling Diagnostic — diagnostic_posterior","text":"","code":"diagnostic_posterior(posterior, ...) # Default S3 method diagnostic_posterior(posterior, diagnostic = c(\"ESS\", \"Rhat\"), ...) # S3 method for class 'stanreg' diagnostic_posterior( posterior, diagnostic = \"all\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, ... ) # S3 method for class 'brmsfit' diagnostic_posterior( posterior, diagnostic = \"all\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_posterior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Posteriors Sampling Diagnostic — diagnostic_posterior","text":"posterior stanreg, stanfit, brmsfit, blavaan object. ... Currently used. diagnostic Diagnostic metrics compute. Character (vector) list one options: \"ESS\", \"Rhat\", \"MCSE\" \"\". effects variables fixed effects (\"fixed\"), random effects (\"random\") (\"\") returned? applies mixed models. May abbreviated. component type parameters return, parameters conditional model, zero-inflated part model, dispersion term, instrumental variables marginal effects returned? Applies models zero-inflated /dispersion formula, models instrumental variables (called fixed-effects regressions), models marginal effects (mfx). See details section Model Components. May abbreviated. Note conditional component also refers count mean component - names may differ, depending modeling package. three convenient shortcuts (applicable model classes): component = \"\" returns possible parameters. component = \"location\", location parameters conditional, zero_inflated, smooth_terms, instruments returned (everything fixed random effects - depending effects argument - auxiliary parameters). component = \"distributional\" (\"auxiliary\"), components like sigma, dispersion, beta precision (auxiliary parameters) returned. parameters Regular expression pattern describes parameters returned.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_posterior.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Posteriors Sampling Diagnostic — diagnostic_posterior","text":"Effective Sample Size (ESS) large possible, although applications, effective sample size greater 1000 sufficient stable estimates (Bürkner, 2017). ESS corresponds number independent samples estimation power N autocorrelated samples. measure \"much independent information autocorrelated chains\" (Kruschke 2015, p182-3). Rhat closest 1. larger 1.1 (Gelman Rubin, 1992) 1.01 (Vehtari et al., 2019). split Rhat statistic quantifies consistency ensemble Markov chains. Monte Carlo Standard Error (MCSE) another measure accuracy chains. defined standard deviation chains divided square root effective sample size (formula mcse() Kruschke 2015, p. 187). 
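As a quick numeric illustration of that formula (hypothetical values, not taken from any fitted model): # MCSE = SD of the posterior draws / sqrt(ESS)
sd_draws <- 0.5 # hypothetical posterior standard deviation
ess <- 400 # hypothetical effective sample size
sd_draws / sqrt(ess) # 0.025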
MCSE \"provides quantitative suggestion big estimation noise \".","code":""},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_posterior.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Posteriors Sampling Diagnostic — diagnostic_posterior","text":"Gelman, ., & Rubin, D. B. (1992). Inference iterative simulation using multiple sequences. Statistical science, 7(4), 457-472. Vehtari, ., Gelman, ., Simpson, D., Carpenter, B., Bürkner, P. C. (2019). Rank-normalization, folding, localization: improved Rhat assessing convergence MCMC. arXiv preprint arXiv:1903.08008. Kruschke, J. (2014). Bayesian data analysis: tutorial R, JAGS, Stan. Academic Press.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/diagnostic_posterior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Posteriors Sampling Diagnostic — diagnostic_posterior","text":"","code":"# \\donttest{ # rstanarm models # ----------------------------------------------- model <- suppressWarnings( rstanarm::stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) diagnostic_posterior(model) #> Parameter Rhat ESS MCSE #> 1 (Intercept) 0.9980336 182.6025 0.36283152 #> 2 gear 0.9917174 206.3058 0.06519599 #> 3 wt 0.9978902 186.7773 0.04770867 # brms models # ----------------------------------------------- model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 1.1e-05 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.02 seconds (Warm-up) #> Chain 1: 0.017 seconds (Sampling) #> Chain 1: 0.037 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 3e-06 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 2: Adjust your expectations accordingly! 
#> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.021 seconds (Warm-up) #> Chain 2: 0.019 seconds (Sampling) #> Chain 2: 0.04 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 3e-06 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.021 seconds (Warm-up) #> Chain 3: 0.019 seconds (Sampling) #> Chain 3: 0.04 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 3e-06 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 4: Adjust your expectations accordingly! 
#> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.019 seconds (Warm-up) #> Chain 4: 0.021 seconds (Sampling) #> Chain 4: 0.04 seconds (Total) #> Chain 4: diagnostic_posterior(model) #> Parameter Rhat ESS MCSE #> 1 b_Intercept 1.000297 4618.359 0.02593799 #> 2 b_cyl 1.003470 1868.916 0.01002953 #> 3 b_wt 1.003250 1697.213 0.01926801 # }"},{"path":"https://easystats.github.io/bayestestR/reference/disgust.html","id":null,"dir":"Reference","previous_headings":"","what":"Moral Disgust Judgment — disgust","title":"Moral Disgust Judgment — disgust","text":"sample (simulated) dataset, used tests examples.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/disgust.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Moral Disgust Judgment — disgust","text":"data frame 500 rows 5 variables: score Score questionnaire, ranges 0 50 higher scores representing harsher moral judgment condition one three conditions, differing odor present room: pleasant scent associated cleanliness (lemon), disgusting scent (sulfur), control condition unusual odor present","code":"data(\"disgust\") head(disgust, n = 5) #> score condition #> 1 13 control #> 2 26 control #> 3 30 control #> 4 23 control #> 5 34 control"},{"path":"https://easystats.github.io/bayestestR/reference/disgust.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Moral Disgust Judgment — disgust","text":"Richard D. Morey","code":""},{"path":"https://easystats.github.io/bayestestR/reference/distribution.html","id":null,"dir":"Reference","previous_headings":"","what":"Empirical Distributions — distribution","title":"Empirical Distributions — distribution","text":"Generate sequence n-quantiles, .e., sample size n near-perfect distribution.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/distribution.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Empirical Distributions — distribution","text":"","code":"distribution(type = \"normal\", ...) distribution_custom(n, type = \"norm\", ..., random = FALSE) distribution_beta(n, shape1, shape2, ncp = 0, random = FALSE, ...) distribution_binomial(n, size = 1, prob = 0.5, random = FALSE, ...) distribution_binom(n, size = 1, prob = 0.5, random = FALSE, ...) distribution_cauchy(n, location = 0, scale = 1, random = FALSE, ...) distribution_chisquared(n, df, ncp = 0, random = FALSE, ...) distribution_chisq(n, df, ncp = 0, random = FALSE, ...) distribution_gamma(n, shape, scale = 1, random = FALSE, ...) distribution_mixture_normal(n, mean = c(-3, 3), sd = 1, random = FALSE, ...) distribution_normal(n, mean = 0, sd = 1, random = FALSE, ...) distribution_gaussian(n, mean = 0, sd = 1, random = FALSE, ...) distribution_nbinom(n, size, prob, mu, phi, random = FALSE, ...) distribution_poisson(n, lambda = 1, random = FALSE, ...) 
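# As noted in the Details below, with random = FALSE each of these
# helpers returns a deterministic quantile sequence, e.g.
# distribution_normal(5) is identical to qnorm(ppoints(5)).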
distribution_student(n, df, ncp, random = FALSE, ...) distribution_t(n, df, ncp, random = FALSE, ...) distribution_student_t(n, df, ncp, random = FALSE, ...) distribution_tweedie(n, xi = NULL, mu, phi, power = NULL, random = FALSE, ...) distribution_uniform(n, min = 0, max = 1, random = FALSE, ...) rnorm_perfect(n, mean = 0, sd = 1)"},{"path":"https://easystats.github.io/bayestestR/reference/distribution.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Empirical Distributions — distribution","text":"type Can names base R's Distributions, like \"cauchy\", \"pois\" \"beta\". ... Arguments passed methods. n number observations random Generate near-perfect random (simple wrappers base R r* functions) distributions. shape1, shape2 non-negative parameters Beta distribution. ncp non-centrality parameter. size number trials (zero ). prob probability success trial. location, scale location scale parameters. df degrees freedom (non-negative, can non-integer). shape Shape parameter. mean vector means. sd vector standard deviations. mu mean phi Corresponding glmmTMB's implementation nbinom distribution, size=mu/phi. lambda vector (non-negative) means. xi tweedie distributions, value xi variance var(Y) = phi * mu^xi. power Alias xi. min, max lower upper limits distribution. Must finite.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/distribution.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Empirical Distributions — distribution","text":"random = FALSE, function return q*(ppoints(n), ...).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/distribution.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Empirical Distributions — distribution","text":"","code":"library(bayestestR) x <- distribution(n = 10) plot(density(x)) x <- distribution(type = \"gamma\", n = 100, shape = 2) plot(density(x))"},{"path":"https://easystats.github.io/bayestestR/reference/dot-extract_priors_rstanarm.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract and Returns the priors formatted for rstanarm — .extract_priors_rstanarm","title":"Extract and Returns the priors formatted for rstanarm — .extract_priors_rstanarm","text":"Extract Returns priors formatted rstanarm","code":""},{"path":"https://easystats.github.io/bayestestR/reference/dot-extract_priors_rstanarm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract and Returns the priors formatted for rstanarm — .extract_priors_rstanarm","text":"","code":".extract_priors_rstanarm(model, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/dot-prior_new_location.html","id":null,"dir":"Reference","previous_headings":"","what":"Set a new location for a prior — .prior_new_location","title":"Set a new location for a prior — .prior_new_location","text":"Set new location prior","code":""},{"path":"https://easystats.github.io/bayestestR/reference/dot-prior_new_location.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set a new location for a prior — .prior_new_location","text":"","code":".prior_new_location(prior, sign, magnitude = 10)"},{"path":"https://easystats.github.io/bayestestR/reference/dot-select_nums.html","id":null,"dir":"Reference","previous_headings":"","what":"select numerics columns — .select_nums","title":"select numerics columns — .select_nums","text":"select numerics 
columns","code":""},{"path":"https://easystats.github.io/bayestestR/reference/dot-select_nums.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"select numerics columns — .select_nums","text":"","code":".select_nums(x)"},{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":null,"dir":"Reference","previous_headings":"","what":"Effective Sample Size (ESS) — effective_sample","title":"Effective Sample Size (ESS) — effective_sample","text":"function returns effective sample size (ESS).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Effective Sample Size (ESS) — effective_sample","text":"","code":"effective_sample(model, ...) # S3 method for class 'brmsfit' effective_sample( model, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ... ) # S3 method for class 'stanreg' effective_sample( model, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Effective Sample Size (ESS) — effective_sample","text":"model stanreg, stanfit, brmsfit, blavaan, MCMCglmm object. ... Currently used. effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Effective Sample Size (ESS) — effective_sample","text":"data frame two columns: Parameter name effective sample size (ESS).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Effective Sample Size (ESS) — effective_sample","text":"Effective Sample (ESS) large possible, altough applications, effective sample size greater 1,000 sufficient stable estimates (Bürkner, 2017). ESS corresponds number independent samples estimation power N autocorrelated samples. measure “much independent information autocorrelated chains” (Kruschke 2015, p182-3).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Effective Sample Size (ESS) — effective_sample","text":"Kruschke, J. (2014). Bayesian data analysis: tutorial R, JAGS, Stan. Academic Press. Bürkner, P. C. (2017). brms: R package Bayesian multilevel models using Stan. 
,{"path":"https://easystats.github.io/bayestestR/reference/effective_sample.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Effective Sample Size (ESS) — effective_sample","text":"","code":"# \\donttest{ library(rstanarm) model <- suppressWarnings( stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) effective_sample(model) #> Parameter ESS #> 1 (Intercept) 172 #> 2 wt 181 #> 3 gear 175 # }"}
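An illustrative follow-up, not part of the original example (it assumes the rstanarm model fitted just above is still in scope): because the parameters argument documented above is a regular-expression pattern, the output can be restricted to matching parameters.

# Keep only parameters whose names match the regular expression "^wt$"
effective_sample(model, parameters = "^wt$")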
,{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":null,"dir":"Reference","previous_headings":"","what":"Test for Practical Equivalence — equivalence_test","title":"Test for Practical Equivalence — equivalence_test","text":"Perform a Test for Practical Equivalence for Bayesian and frequentist models.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Test for Practical Equivalence — equivalence_test","text":"","code":"equivalence_test(x, ...) # Default S3 method equivalence_test(x, ...) # S3 method for class 'data.frame' equivalence_test( x, range = \"default\", ci = 0.95, rvar_col = NULL, verbose = TRUE, ... ) # S3 method for class 'stanreg' equivalence_test( x, range = \"default\", ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' equivalence_test( x, range = \"default\", ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Test for Practical Equivalence — equivalence_test","text":"x A vector representing a posterior distribution. Can also be a stanreg or brmsfit model. ... Currently not used. range The ROPE's lower and higher bounds. Should be \"default\" or, depending on the number of outcome variables, a vector or a list. For models with one response, range can be: a vector of length two (e.g., c(-0.1, 0.1)), a list of numeric vectors of the same length as the number of parameters (see 'Examples'), or a list of named numeric vectors, where names correspond to parameter names. In this case, parameters with no matching name in range are set to \"default\". For multivariate models, range should be a list with another list (one for each response variable) of numeric vectors. Vector names should correspond to the names of the response variables. If \"default\" and the input is a vector, the range is set to c(-0.1, 0.1). If \"default\" and the input is a Bayesian model, rope_range() is used. See 'Examples'. ci The Credible Interval (CI) probability, corresponding to the proportion of the HDI, to use for the percentage in ROPE. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). verbose Toggle off warnings. effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Test for Practical Equivalence — equivalence_test","text":"A data frame with the following columns: Parameter The model parameter(s), if x is a model-object. If x is a vector, this column is missing. CI The probability of the HDI. ROPE_low, ROPE_high The limits of the ROPE. These values are identical for all parameters. ROPE_Percentage The proportion of the HDI that lies inside the ROPE. ROPE_Equivalence The \"test result\", as character. Either \"rejected\", \"accepted\" or \"undecided\". HDI_low, HDI_high The lower and upper HDI limits for the parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Test for Practical Equivalence — equivalence_test","text":"Documentation is accessible for: Bayesian models Frequentist models For Bayesian models, the Test for Practical Equivalence is based on the \"HDI+ROPE decision rule\" (Kruschke, 2014, 2018) to check whether parameter values should be accepted or rejected against an explicitly formulated \"null hypothesis\" (i.e., a ROPE). In other words, it checks the percentage of the 89% HDI that lies within the null region (the ROPE). If this percentage is sufficiently low, the null hypothesis is rejected. If this percentage is sufficiently high, the null hypothesis is accepted. Using the ROPE and the HDI, Kruschke (2018) suggests using the percentage of the 95% (or 89%, considered more stable) HDI that falls within the ROPE as a decision rule. If the HDI is completely outside the ROPE, the \"null hypothesis\" for this parameter is \"rejected\". If the ROPE completely covers the HDI, i.e., all most credible values of a parameter are inside the region of practical equivalence, the null hypothesis is accepted. Else, it is undecided whether to accept or reject the null hypothesis. If the full ROPE is used (i.e., 100% of the HDI), the null hypothesis is rejected or accepted if the percentage of the posterior within the ROPE is smaller than 2.5% or greater than 97.5%. Desirable results are low proportions inside the ROPE (the closer to zero the better). Some attention is required for finding suitable values for the ROPE limits (argument range). See 'Details' in rope_range() for further information. Multicollinearity: Non-independent covariates When parameters show strong correlations, i.e. when covariates are not independent, the joint parameter distributions may shift towards or away from the ROPE. In such cases, the test for practical equivalence may have inappropriate results. Collinearity invalidates ROPE and hypothesis testing based on univariate marginals, as the probabilities are conditional on independence. Most problematic are results of \"undecided\" parameters, which may either move towards \"rejection\" or away from it (Kruschke 2014, 340f). equivalence_test() performs a simple check for pairwise correlations between parameters, but as there can be collinearity between more than two variables, a first step to check the assumptions of this hypothesis testing is to look at different pair plots. An even more sophisticated check is the projection predictive variable selection (Piironen and Vehtari 2017).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Test for Practical Equivalence — equivalence_test","text":"There is a print()-method with a digits-argument to control the amount of digits in the output, and a plot()-method to visualize the results from the equivalence-test (for models only).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Test for Practical Equivalence — equivalence_test","text":"Kruschke, J. K. (2018). Rejecting or accepting parameter values in Bayesian estimation. Advances in Methods and Practices in Psychological Science, 1(2), 270-280. doi:10.1177/2515245918771304 Kruschke, J. K. (2014). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. Academic Press. Piironen, J., & Vehtari, A. (2017). Comparison of Bayesian predictive methods for model selection. Statistics and Computing, 27(3), 711–735. doi:10.1007/s11222-016-9649-y","code":""}
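A minimal sketch of the HDI+ROPE decision rule described in the Details above, for a plain vector posterior (assumptions: bayestestR's hdi() and the default ROPE of c(-0.1, 0.1); the package's own implementation covers many more cases):

library(bayestestR)
rope_decision <- function(posterior, rope = c(-0.1, 0.1), ci = 0.95) {
  h <- hdi(posterior, ci = ci) # HDI of the posterior draws
  if (h$CI_low > rope[2] || h$CI_high < rope[1]) {
    "Rejected"   # HDI completely outside the ROPE
  } else if (h$CI_low >= rope[1] && h$CI_high <= rope[2]) {
    "Accepted"   # ROPE completely covers the HDI
  } else {
    "Undecided"  # HDI and ROPE partially overlap
  }
}
set.seed(444)
rope_decision(rnorm(1000, 0, 0.01)) # "Accepted"
rope_decision(rnorm(1000, 1, 0.01)) # "Rejected"
rope_decision(rnorm(1000, 0, 1))    # "Undecided"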
,{"path":"https://easystats.github.io/bayestestR/reference/equivalence_test.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Test for Practical Equivalence — equivalence_test","text":"","code":"library(bayestestR) equivalence_test(x = rnorm(1000, 0, 0.01), range = c(-0.1, 0.1)) #> # Test for Practical Equivalence #> #> ROPE: [-0.10 0.10] #> #> H0 | inside ROPE | 95% HDI #> -------------------------------------- #> Accepted | 100.00 % | [-0.02, 0.02] #> #> equivalence_test(x = rnorm(1000, 0, 1), range = c(-0.1, 0.1)) #> # Test for Practical Equivalence #> #> ROPE: [-0.10 0.10] #> #> H0 | inside ROPE | 95% HDI #> --------------------------------------- #> Undecided | 8.11 % | [-2.00, 1.97] #> #> equivalence_test(x = rnorm(1000, 1, 0.01), range = c(-0.1, 0.1)) #> # Test for Practical Equivalence #> #> ROPE: [-0.10 0.10] #> #> H0 | inside ROPE | 95% HDI #> ------------------------------------- #> Rejected | 0.00 % | [0.98, 1.02] #> #> equivalence_test(x = rnorm(1000, 1, 1), ci = c(.50, .99)) #> # Test for Practical Equivalence #> #> ROPE: [-0.10 0.10] #> #> H0 | inside ROPE | 50% HDI #> ------------------------------------- #> Rejected | 0.00 % | [0.31, 1.64] #> #> #> H0 | inside ROPE | 99% HDI #> --------------------------------------- #> Undecided | 5.05 % | [-1.58, 3.65] #> #> # print more digits test <- equivalence_test(x = rnorm(1000, 1, 1), ci = c(.50, .99)) print(test, digits = 4) #> # Test for Practical Equivalence #> #> ROPE: [-0.1000 0.1000] #> #> H0 | inside ROPE | 50% HDI #> ----------------------------------------- #> Rejected | 0.0000 % | [0.3115, 1.7148] #> #> #> H0 | inside ROPE | 99% HDI #> ------------------------------------------- #> Undecided | 4.9495 % | [-1.7070, 3.7015] #> #> # \\donttest{ model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars) #> [Stan sampling progress for chains 1-4 omitted] equivalence_test(model) #> Possible multicollinearity between cyl and wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?equivalence_test'. #> # Test for Practical Equivalence #> #> ROPE: [-0.60 0.60] #> #> Parameter | H0 | inside ROPE | 95% HDI #> ----------------------------------------------------- #> (Intercept) | Rejected | 0.00 % | [36.21, 43.06] #> wt | Rejected | 0.00 % | [-4.74, -1.62] #> cyl | Rejected | 0.00 % | [-2.36, -0.70] #> #> # multiple ROPE ranges - asymmetric, symmetric, default equivalence_test(model, range = list(c(10, 40), c(-5, -4), \"default\")) #> Possible multicollinearity between cyl and wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?equivalence_test'. #> # Test for Practical Equivalence #> #> Parameter | H0 | inside ROPE | 95% HDI | ROPE #> ----------------------------------------------------------------------- #> (Intercept) | Undecided | 58.39 % | [36.21, 43.06] | [10.00, 40.00] #> wt | Undecided | 12.05 % | [-4.74, -1.62] | [-5.00, -4.00] #> cyl | Rejected | 0.00 % | [-2.36, -0.70] | [-0.10, 0.10] #> #> # named ROPE ranges equivalence_test(model, range = list(wt = c(-5, -4), `(Intercept)` = c(10, 40))) #> Possible multicollinearity between cyl and wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?equivalence_test'. #> # Test for Practical Equivalence #> #> Parameter | H0 | inside ROPE | 95% HDI | ROPE #> ----------------------------------------------------------------------- #> (Intercept) | Undecided | 58.39 % | [36.21, 43.06] | [10.00, 40.00] #> wt | Undecided | 12.05 % | [-4.74, -1.62] | [-5.00, -4.00] #> cyl | Rejected | 0.00 % | [-2.36, -0.70] | [-0.10, 0.10] #> #> # plot result test <- equivalence_test(model) #> Possible multicollinearity between cyl and wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?equivalence_test'. plot(test) #> Picking joint bandwidth of 0.0895 equivalence_test(emmeans::emtrends(model, ~1, \"wt\", data = mtcars)) #> # Test for Practical Equivalence #> #> ROPE: [-0.10 0.10] #> #> X1 | H0 | inside ROPE | 95% HDI #> ------------------------------------------------- #> overall | Rejected | 0.00 % | [-4.74, -1.62] #> #> model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress for chains 1-4 omitted] equivalence_test(model) #> Possible multicollinearity between b_cyl and b_wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?equivalence_test'.
#> # Test for Practical Equivalence #> #> ROPE: [-0.60 0.60] #> #> Parameter | H0 | inside ROPE | 95% HDI #> --------------------------------------------------- #> Intercept | Rejected | 0.00 % | [36.19, 43.16] #> wt | Rejected | 0.00 % | [-4.71, -1.64] #> cyl | Rejected | 0.00 % | [-2.36, -0.68] #> #> bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1)) # equivalence_test(bf) # }"},{"path":"https://easystats.github.io/bayestestR/reference/estimate_density.html","id":null,"dir":"Reference","previous_headings":"","what":"Density Estimation — estimate_density","title":"Density Estimation — estimate_density","text":"This function is a wrapper over different methods of density estimation. By default, it uses the base R density function, but with a different smoothing bandwidth (\"SJ\") than the legacy default implemented in the base R density function (\"nrd0\"). However, Deng and Wickham (2011) suggest that method = \"KernSmooth\" is the fastest and the most accurate.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/estimate_density.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Density Estimation — estimate_density","text":"","code":"estimate_density(x, ...) # S3 method for class 'data.frame' estimate_density( x, method = \"kernel\", precision = 2^10, extend = FALSE, extend_scale = 0.1, bw = \"SJ\", ci = NULL, select = NULL, by = NULL, at = NULL, rvar_col = NULL, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/estimate_density.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Density Estimation — estimate_density","text":"x A vector representing a posterior distribution, or a data frame of such vectors. Can also be a Bayesian model. bayestestR supports a wide range of models (see, for example, methods(\"hdi\")), and not all of those are documented in the 'Usage' section, because methods for other classes mostly resemble the arguments of the .numeric or .data.frame methods. ... Currently not used. method Density estimation method. Can be \"kernel\" (default), \"logspline\" or \"KernSmooth\". precision Number of points of density data. See the n parameter in density. extend Extend the range of the x axis by a factor of extend_scale. extend_scale Ratio of range by which to extend the x axis. A value of 0.1 means that the x axis is extended by 1/10 of the range of the data. bw See the eponymous argument in density. Here, the default has been changed to \"SJ\", which is recommended. ci The confidence interval threshold. Only used when method = \"kernel\". This feature is experimental, use with caution. select Character vector of column names. If NULL (default), all numeric variables are selected. Other arguments from datawizard::extract_column_names() (such as exclude) can also be used. by Optional character vector. If not NULL and the input is a data frame, density estimation is performed for each group (subsets) indicated by by. See examples. at Deprecated in favour of by. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction().","code":""}
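To make the bandwidth difference described above concrete, a small base-R sketch comparing the two selectors (bw.nrd0() and bw.SJ() are the base R implementations behind \"nrd0\" and \"SJ\"):

set.seed(1)
x <- rnorm(250, mean = 1)
bw.nrd0(x) # legacy default bandwidth of density()
bw.SJ(x)   # Sheather-Jones bandwidth, the default in estimate_density()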
Electronic publication.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/estimate_density.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Density Estimation — estimate_density","text":"","code":"library(bayestestR) set.seed(1) x <- rnorm(250, mean = 1) # Basic usage density_kernel <- estimate_density(x) # default method is \"kernel\" hist(x, prob = TRUE) lines(density_kernel$x, density_kernel$y, col = \"black\", lwd = 2) lines(density_kernel$x, density_kernel$CI_low, col = \"gray\", lty = 2) lines(density_kernel$x, density_kernel$CI_high, col = \"gray\", lty = 2) legend(\"topright\", legend = c(\"Estimate\", \"95% CI\"), col = c(\"black\", \"gray\"), lwd = 2, lty = c(1, 2) ) # Other Methods density_logspline <- estimate_density(x, method = \"logspline\") density_KernSmooth <- estimate_density(x, method = \"KernSmooth\") density_mixture <- estimate_density(x, method = \"mixture\") hist(x, prob = TRUE) lines(density_kernel$x, density_kernel$y, col = \"black\", lwd = 2) lines(density_logspline$x, density_logspline$y, col = \"red\", lwd = 2) lines(density_KernSmooth$x, density_KernSmooth$y, col = \"blue\", lwd = 2) lines(density_mixture$x, density_mixture$y, col = \"green\", lwd = 2) # Extension density_extended <- estimate_density(x, extend = TRUE) density_default <- estimate_density(x, extend = FALSE) hist(x, prob = TRUE) lines(density_extended$x, density_extended$y, col = \"red\", lwd = 3) lines(density_default$x, density_default$y, col = \"black\", lwd = 3) # Multiple columns head(estimate_density(iris)) #> Parameter x y #> 1 Sepal.Length 4.300000 0.09643086 #> 2 Sepal.Length 4.303519 0.09759152 #> 3 Sepal.Length 4.307038 0.09875679 #> 4 Sepal.Length 4.310557 0.09993469 #> 5 Sepal.Length 4.314076 0.10111692 #> 6 Sepal.Length 4.317595 0.10230788 head(estimate_density(iris, select = \"Sepal.Width\")) #> Parameter x y #> 1 Sepal.Width 2.000000 0.04647877 #> 2 Sepal.Width 2.002346 0.04729167 #> 3 Sepal.Width 2.004692 0.04811925 #> 4 Sepal.Width 2.007038 0.04895638 #> 5 Sepal.Width 2.009384 0.04980346 #> 6 Sepal.Width 2.011730 0.05066768 # Grouped data head(estimate_density(iris, by = \"Species\")) #> Parameter x y Species #> 1 Sepal.Length 4.300000 0.2354858 setosa #> 2 Sepal.Length 4.301466 0.2374750 setosa #> 3 Sepal.Length 4.302933 0.2394634 setosa #> 4 Sepal.Length 4.304399 0.2414506 setosa #> 5 Sepal.Length 4.305865 0.2434373 setosa #> 6 Sepal.Length 4.307331 0.2454216 setosa head(estimate_density(iris$Petal.Width, by = iris$Species)) #> x y Group #> 1 0.1000000 9.011849 setosa #> 2 0.1004888 8.955321 setosa #> 3 0.1009775 8.792006 setosa #> 4 0.1014663 8.527789 setosa #> 5 0.1019550 8.171922 setosa #> 6 0.1024438 7.736494 setosa # \\donttest{ # rstanarm models # ----------------------------------------------- library(rstanarm) model <- suppressWarnings( stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) head(estimate_density(model)) #> Parameter x y #> 1 (Intercept) 24.19242 0.002028542 #> 2 (Intercept) 24.22067 0.002051099 #> 3 (Intercept) 24.24892 0.002073742 #> 4 (Intercept) 24.27717 0.002096471 #> 5 (Intercept) 24.30542 0.002119371 #> 6 (Intercept) 24.33367 0.002142358 library(emmeans) head(estimate_density(emtrends(model, ~1, \"wt\", data = mtcars))) #> X1 x y #> 1 overall -7.810281 0.01753665 #> 2 overall -7.806283 0.01762645 #> 3 overall -7.802285 0.01771463 #> 4 overall -7.798287 0.01780030 #> 5 overall -7.794289 0.01788463 #> 6 overall -7.790292 0.01796733 # brms models # 
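# Illustrative sketch (not from the original examples): 'precision' sets the
# number of grid points in the returned density data frame, as documented above.
nrow(estimate_density(x, precision = 2^6))  # 64 rows
nrow(estimate_density(x, precision = 2^10)) # 1024 rows (the default)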
# brms models # ----------------------------------------------- library(brms) model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress for chains 1-4 omitted] estimate_density(model) #> Parameter x y #> 1 b_Intercept 31.677511642 2.750315e-04 #> 2 b_Intercept 31.692604903 2.747610e-04 #> 3 b_Intercept 31.707698163 2.740329e-04 #> 4 b_Intercept 31.722791424 2.728507e-04 #> 5 b_Intercept 31.737884685 2.712198e-04 #> 6 b_Intercept 31.752977946 2.691474e-04 #> [rows 7-615 of the b_Intercept density grid omitted for brevity] #> 616 b_Intercept 40.959866962
1.612527e-01 #> 617 b_Intercept 40.974960223 1.603241e-01 #> 618 b_Intercept 40.990053484 1.594069e-01 #> 619 b_Intercept 41.005146744 1.585014e-01 #> 620 b_Intercept 41.020240005 1.576076e-01 #> 621 b_Intercept 41.035333266 1.567284e-01 #> 622 b_Intercept 41.050426526 1.558608e-01 #> 623 b_Intercept 41.065519787 1.550047e-01 #> 624 b_Intercept 41.080613048 1.541600e-01 #> 625 b_Intercept 41.095706308 1.533262e-01 #> 626 b_Intercept 41.110799569 1.525032e-01 #> 627 b_Intercept 41.125892830 1.516919e-01 #> 628 b_Intercept 41.140986090 1.508909e-01 #> 629 b_Intercept 41.156079351 1.500987e-01 #> 630 b_Intercept 41.171172612 1.493149e-01 #> 631 b_Intercept 41.186265872 1.485386e-01 #> 632 b_Intercept 41.201359133 1.477694e-01 #> 633 b_Intercept 41.216452394 1.470067e-01 #> 634 b_Intercept 41.231545654 1.462500e-01 #> 635 b_Intercept 41.246638915 1.454974e-01 #> 636 b_Intercept 41.261732176 1.447482e-01 #> 637 b_Intercept 41.276825437 1.440015e-01 #> 638 b_Intercept 41.291918697 1.432565e-01 #> 639 b_Intercept 41.307011958 1.425123e-01 #> 640 b_Intercept 41.322105219 1.417679e-01 #> 641 b_Intercept 41.337198479 1.410221e-01 #> 642 b_Intercept 41.352291740 1.402742e-01 #> 643 b_Intercept 41.367385001 1.395234e-01 #> 644 b_Intercept 41.382478261 1.387689e-01 #> 645 b_Intercept 41.397571522 1.380101e-01 #> 646 b_Intercept 41.412664783 1.372451e-01 #> 647 b_Intercept 41.427758043 1.364737e-01 #> 648 b_Intercept 41.442851304 1.356956e-01 #> 649 b_Intercept 41.457944565 1.349103e-01 #> 650 b_Intercept 41.473037825 1.341172e-01 #> 651 b_Intercept 41.488131086 1.333159e-01 #> 652 b_Intercept 41.503224347 1.325053e-01 #> 653 b_Intercept 41.518317607 1.316842e-01 #> 654 b_Intercept 41.533410868 1.308538e-01 #> 655 b_Intercept 41.548504129 1.300138e-01 #> 656 b_Intercept 41.563597390 1.291640e-01 #> 657 b_Intercept 41.578690650 1.283044e-01 #> 658 b_Intercept 41.593783911 1.274348e-01 #> 659 b_Intercept 41.608877172 1.265531e-01 #> 660 b_Intercept 41.623970432 1.256616e-01 #> 661 b_Intercept 41.639063693 1.247604e-01 #> 662 b_Intercept 41.654156954 1.238497e-01 #> 663 b_Intercept 41.669250214 1.229297e-01 #> 664 b_Intercept 41.684343475 1.220006e-01 #> 665 b_Intercept 41.699436736 1.210612e-01 #> 666 b_Intercept 41.714529996 1.201130e-01 #> 667 b_Intercept 41.729623257 1.191568e-01 #> 668 b_Intercept 41.744716518 1.181930e-01 #> 669 b_Intercept 41.759809778 1.172219e-01 #> 670 b_Intercept 41.774903039 1.162441e-01 #> 671 b_Intercept 41.789996300 1.152593e-01 #> 672 b_Intercept 41.805089560 1.142679e-01 #> 673 b_Intercept 41.820182821 1.132712e-01 #> 674 b_Intercept 41.835276082 1.122696e-01 #> 675 b_Intercept 41.850369343 1.112636e-01 #> 676 b_Intercept 41.865462603 1.102536e-01 #> 677 b_Intercept 41.880555864 1.092399e-01 #> 678 b_Intercept 41.895649125 1.082226e-01 #> 679 b_Intercept 41.910742385 1.072027e-01 #> 680 b_Intercept 41.925835646 1.061806e-01 #> 681 b_Intercept 41.940928907 1.051568e-01 #> 682 b_Intercept 41.956022167 1.041314e-01 #> 683 b_Intercept 41.971115428 1.031050e-01 #> 684 b_Intercept 41.986208689 1.020778e-01 #> 685 b_Intercept 42.001301949 1.010502e-01 #> 686 b_Intercept 42.016395210 1.000225e-01 #> 687 b_Intercept 42.031488471 9.899513e-02 #> 688 b_Intercept 42.046581731 9.796819e-02 #> 689 b_Intercept 42.061674992 9.694197e-02 #> 690 b_Intercept 42.076768253 9.591680e-02 #> 691 b_Intercept 42.091861513 9.489301e-02 #> 692 b_Intercept 42.106954774 9.387067e-02 #> 693 b_Intercept 42.122048035 9.284994e-02 #> 694 b_Intercept 42.137141295 9.183101e-02 #> 695 b_Intercept 42.152234556 
9.081403e-02 #> 696 b_Intercept 42.167327817 8.979924e-02 #> 697 b_Intercept 42.182421078 8.878724e-02 #> 698 b_Intercept 42.197514338 8.777776e-02 #> 699 b_Intercept 42.212607599 8.677095e-02 #> 700 b_Intercept 42.227700860 8.576696e-02 #> 701 b_Intercept 42.242794120 8.476595e-02 #> 702 b_Intercept 42.257887381 8.376806e-02 #> 703 b_Intercept 42.272980642 8.277412e-02 #> 704 b_Intercept 42.288073902 8.178389e-02 #> 705 b_Intercept 42.303167163 8.079738e-02 #> 706 b_Intercept 42.318260424 7.981478e-02 #> 707 b_Intercept 42.333353684 7.883627e-02 #> 708 b_Intercept 42.348446945 7.786202e-02 #> 709 b_Intercept 42.363540206 7.689276e-02 #> 710 b_Intercept 42.378633466 7.592884e-02 #> 711 b_Intercept 42.393726727 7.496991e-02 #> 712 b_Intercept 42.408819988 7.401619e-02 #> 713 b_Intercept 42.423913248 7.306788e-02 #> 714 b_Intercept 42.439006509 7.212521e-02 #> 715 b_Intercept 42.454099770 7.118860e-02 #> 716 b_Intercept 42.469193031 7.025935e-02 #> 717 b_Intercept 42.484286291 6.933652e-02 #> 718 b_Intercept 42.499379552 6.842033e-02 #> 719 b_Intercept 42.514472813 6.751100e-02 #> 720 b_Intercept 42.529566073 6.660874e-02 #> 721 b_Intercept 42.544659334 6.571375e-02 #> 722 b_Intercept 42.559752595 6.482771e-02 #> 723 b_Intercept 42.574845855 6.394978e-02 #> 724 b_Intercept 42.589939116 6.307982e-02 #> 725 b_Intercept 42.605032377 6.221800e-02 #> 726 b_Intercept 42.620125637 6.136449e-02 #> 727 b_Intercept 42.635218898 6.051945e-02 #> 728 b_Intercept 42.650312159 5.968405e-02 #> 729 b_Intercept 42.665405419 5.885848e-02 #> 730 b_Intercept 42.680498680 5.804187e-02 #> 731 b_Intercept 42.695591941 5.723431e-02 #> 732 b_Intercept 42.710685201 5.643591e-02 #> 733 b_Intercept 42.725778462 5.564676e-02 #> 734 b_Intercept 42.740871723 5.486731e-02 #> 735 b_Intercept 42.755964983 5.409907e-02 #> 736 b_Intercept 42.771058244 5.334030e-02 #> 737 b_Intercept 42.786151505 5.259104e-02 #> 738 b_Intercept 42.801244766 5.185134e-02 #> 739 b_Intercept 42.816338026 5.112120e-02 #> 740 b_Intercept 42.831431287 5.040065e-02 #> 741 b_Intercept 42.846524548 4.969158e-02 #> 742 b_Intercept 42.861617808 4.899244e-02 #> 743 b_Intercept 42.876711069 4.830290e-02 #> 744 b_Intercept 42.891804330 4.762292e-02 #> 745 b_Intercept 42.906897590 4.695251e-02 #> 746 b_Intercept 42.921990851 4.629164e-02 #> 747 b_Intercept 42.937084112 4.564144e-02 #> 748 b_Intercept 42.952177372 4.500174e-02 #> 749 b_Intercept 42.967270633 4.437149e-02 #> 750 b_Intercept 42.982363894 4.375063e-02 #> 751 b_Intercept 42.997457154 4.313914e-02 #> 752 b_Intercept 43.012550415 4.253697e-02 #> 753 b_Intercept 43.027643676 4.194452e-02 #> 754 b_Intercept 43.042736936 4.136299e-02 #> 755 b_Intercept 43.057830197 4.079062e-02 #> 756 b_Intercept 43.072923458 4.022736e-02 #> 757 b_Intercept 43.088016719 3.967315e-02 #> 758 b_Intercept 43.103109979 3.912794e-02 #> 759 b_Intercept 43.118203240 3.859165e-02 #> 760 b_Intercept 43.133296501 3.806600e-02 #> 761 b_Intercept 43.148389761 3.754934e-02 #> 762 b_Intercept 43.163483022 3.704132e-02 #> 763 b_Intercept 43.178576283 3.654187e-02 #> 764 b_Intercept 43.193669543 3.605087e-02 #> 765 b_Intercept 43.208762804 3.556822e-02 #> 766 b_Intercept 43.223856065 3.509484e-02 #> 767 b_Intercept 43.238949325 3.463033e-02 #> 768 b_Intercept 43.254042586 3.417369e-02 #> 769 b_Intercept 43.269135847 3.372478e-02 #> 770 b_Intercept 43.284229107 3.328343e-02 #> 771 b_Intercept 43.299322368 3.284945e-02 #> 772 b_Intercept 43.314415629 3.242304e-02 #> 773 b_Intercept 43.329508889 3.200477e-02 #> 774 b_Intercept 43.344602150 
3.159315e-02 #> 775 b_Intercept 43.359695411 3.118794e-02 #> 776 b_Intercept 43.374788671 3.078890e-02 #> 777 b_Intercept 43.389881932 3.039577e-02 #> 778 b_Intercept 43.404975193 3.000830e-02 #> 779 b_Intercept 43.420068454 2.962725e-02 #> 780 b_Intercept 43.435161714 2.925124e-02 #> 781 b_Intercept 43.450254975 2.887987e-02 #> 782 b_Intercept 43.465348236 2.851282e-02 #> 783 b_Intercept 43.480441496 2.814981e-02 #> 784 b_Intercept 43.495534757 2.779051e-02 #> 785 b_Intercept 43.510628018 2.743501e-02 #> 786 b_Intercept 43.525721278 2.708275e-02 #> 787 b_Intercept 43.540814539 2.673307e-02 #> 788 b_Intercept 43.555907800 2.638568e-02 #> 789 b_Intercept 43.571001060 2.604027e-02 #> 790 b_Intercept 43.586094321 2.569654e-02 #> 791 b_Intercept 43.601187582 2.535425e-02 #> 792 b_Intercept 43.616280842 2.501313e-02 #> 793 b_Intercept 43.631374103 2.467267e-02 #> 794 b_Intercept 43.646467364 2.433264e-02 #> 795 b_Intercept 43.661560624 2.399278e-02 #> 796 b_Intercept 43.676653885 2.365288e-02 #> 797 b_Intercept 43.691747146 2.331272e-02 #> 798 b_Intercept 43.706840407 2.297196e-02 #> 799 b_Intercept 43.721933667 2.263050e-02 #> 800 b_Intercept 43.737026928 2.228819e-02 #> 801 b_Intercept 43.752120189 2.194494e-02 #> 802 b_Intercept 43.767213449 2.160064e-02 #> 803 b_Intercept 43.782306710 2.125523e-02 #> 804 b_Intercept 43.797399971 2.090849e-02 #> 805 b_Intercept 43.812493231 2.056046e-02 #> 806 b_Intercept 43.827586492 2.021126e-02 #> 807 b_Intercept 43.842679753 1.986094e-02 #> 808 b_Intercept 43.857773013 1.950957e-02 #> 809 b_Intercept 43.872866274 1.915722e-02 #> 810 b_Intercept 43.887959535 1.880396e-02 #> 811 b_Intercept 43.903052795 1.844992e-02 #> 812 b_Intercept 43.918146056 1.809540e-02 #> 813 b_Intercept 43.933239317 1.774059e-02 #> 814 b_Intercept 43.948332577 1.738569e-02 #> 815 b_Intercept 43.963425838 1.703092e-02 #> 816 b_Intercept 43.978519099 1.667652e-02 #> 817 b_Intercept 43.993612360 1.632296e-02 #> 818 b_Intercept 44.008705620 1.597044e-02 #> 819 b_Intercept 44.023798881 1.561925e-02 #> 820 b_Intercept 44.038892142 1.526965e-02 #> 821 b_Intercept 44.053985402 1.492195e-02 #> 822 b_Intercept 44.069078663 1.457643e-02 #> 823 b_Intercept 44.084171924 1.423385e-02 #> 824 b_Intercept 44.099265184 1.389444e-02 #> 825 b_Intercept 44.114358445 1.355829e-02 #> 826 b_Intercept 44.129451706 1.322569e-02 #> 827 b_Intercept 44.144544966 1.289693e-02 #> 828 b_Intercept 44.159638227 1.257229e-02 #> 829 b_Intercept 44.174731488 1.225238e-02 #> 830 b_Intercept 44.189824748 1.193797e-02 #> 831 b_Intercept 44.204918009 1.162858e-02 #> 832 b_Intercept 44.220011270 1.132445e-02 #> 833 b_Intercept 44.235104530 1.102577e-02 #> 834 b_Intercept 44.250197791 1.073274e-02 #> 835 b_Intercept 44.265291052 1.044555e-02 #> 836 b_Intercept 44.280384312 1.016576e-02 #> 837 b_Intercept 44.295477573 9.892174e-03 #> 838 b_Intercept 44.310570834 9.624881e-03 #> 839 b_Intercept 44.325664095 9.363970e-03 #> 840 b_Intercept 44.340757355 9.109505e-03 #> 841 b_Intercept 44.355850616 8.861536e-03 #> 842 b_Intercept 44.370943877 8.621112e-03 #> 843 b_Intercept 44.386037137 8.387710e-03 #> 844 b_Intercept 44.401130398 8.160811e-03 #> 845 b_Intercept 44.416223659 7.940373e-03 #> 846 b_Intercept 44.431316919 7.726340e-03 #> 847 b_Intercept 44.446410180 7.518639e-03 #> 848 b_Intercept 44.461503441 7.317676e-03 #> 849 b_Intercept 44.476596701 7.123723e-03 #> 850 b_Intercept 44.491689962 6.935716e-03 #> 851 b_Intercept 44.506783223 6.753517e-03 #> 852 b_Intercept 44.521876483 6.576978e-03 #> 853 b_Intercept 44.536969744 
6.405944e-03 #> 854 b_Intercept 44.552063005 6.240283e-03 #> 855 b_Intercept 44.567156265 6.080890e-03 #> 856 b_Intercept 44.582249526 5.926382e-03 #> 857 b_Intercept 44.597342787 5.776575e-03 #> 858 b_Intercept 44.612436048 5.631283e-03 #> 859 b_Intercept 44.627529308 5.490321e-03 #> 860 b_Intercept 44.642622569 5.353501e-03 #> 861 b_Intercept 44.657715830 5.221234e-03 #> 862 b_Intercept 44.672809090 5.092898e-03 #> 863 b_Intercept 44.687902351 4.968045e-03 #> 864 b_Intercept 44.702995612 4.846500e-03 #> 865 b_Intercept 44.718088872 4.728094e-03 #> 866 b_Intercept 44.733182133 4.612662e-03 #> 867 b_Intercept 44.748275394 4.500274e-03 #> 868 b_Intercept 44.763368654 4.390862e-03 #> 869 b_Intercept 44.778461915 4.283890e-03 #> 870 b_Intercept 44.793555176 4.179225e-03 #> 871 b_Intercept 44.808648436 4.076745e-03 #> 872 b_Intercept 44.823741697 3.976337e-03 #> 873 b_Intercept 44.838834958 3.877915e-03 #> 874 b_Intercept 44.853928218 3.781731e-03 #> 875 b_Intercept 44.869021479 3.687276e-03 #> 876 b_Intercept 44.884114740 3.594476e-03 #> 877 b_Intercept 44.899208000 3.503264e-03 #> 878 b_Intercept 44.914301261 3.413582e-03 #> 879 b_Intercept 44.929394522 3.325379e-03 #> 880 b_Intercept 44.944487783 3.238845e-03 #> 881 b_Intercept 44.959581043 3.153781e-03 #> 882 b_Intercept 44.974674304 3.070077e-03 #> 883 b_Intercept 44.989767565 2.987714e-03 #> 884 b_Intercept 45.004860825 2.906682e-03 #> 885 b_Intercept 45.019954086 2.826973e-03 #> 886 b_Intercept 45.035047347 2.748712e-03 #> 887 b_Intercept 45.050140607 2.671962e-03 #> 888 b_Intercept 45.065233868 2.596555e-03 #> 889 b_Intercept 45.080327129 2.522504e-03 #> 890 b_Intercept 45.095420389 2.449828e-03 #> 891 b_Intercept 45.110513650 2.378547e-03 #> 892 b_Intercept 45.125606911 2.308712e-03 #> 893 b_Intercept 45.140700171 2.240635e-03 #> 894 b_Intercept 45.155793432 2.174045e-03 #> 895 b_Intercept 45.170886693 2.108968e-03 #> 896 b_Intercept 45.185979953 2.045434e-03 #> 897 b_Intercept 45.201073214 1.983472e-03 #> 898 b_Intercept 45.216166475 1.923112e-03 #> 899 b_Intercept 45.231259736 1.864677e-03 #> 900 b_Intercept 45.246352996 1.808007e-03 #> 901 b_Intercept 45.261446257 1.753039e-03 #> 902 b_Intercept 45.276539518 1.699798e-03 #> 903 b_Intercept 45.291632778 1.648307e-03 #> 904 b_Intercept 45.306726039 1.598589e-03 #> 905 b_Intercept 45.321819300 1.550849e-03 #> 906 b_Intercept 45.336912560 1.505163e-03 #> 907 b_Intercept 45.352005821 1.461311e-03 #> 908 b_Intercept 45.367099082 1.419306e-03 #> 909 b_Intercept 45.382192342 1.379155e-03 #> 910 b_Intercept 45.397285603 1.340865e-03 #> 911 b_Intercept 45.412378864 1.304490e-03 #> 912 b_Intercept 45.427472124 1.270361e-03 #> 913 b_Intercept 45.442565385 1.238090e-03 #> 914 b_Intercept 45.457658646 1.207671e-03 #> 915 b_Intercept 45.472751906 1.179091e-03 #> 916 b_Intercept 45.487845167 1.152336e-03 #> 917 b_Intercept 45.502938428 1.127389e-03 #> 918 b_Intercept 45.518031688 1.104550e-03 #> 919 b_Intercept 45.533124949 1.083544e-03 #> 920 b_Intercept 45.548218210 1.064254e-03 #> 921 b_Intercept 45.563311471 1.046649e-03 #> 922 b_Intercept 45.578404731 1.030691e-03 #> 923 b_Intercept 45.593497992 1.016342e-03 #> 924 b_Intercept 45.608591253 1.003722e-03 #> 925 b_Intercept 45.623684513 9.927868e-04 #> 926 b_Intercept 45.638777774 9.832930e-04 #> 927 b_Intercept 45.653871035 9.751872e-04 #> 928 b_Intercept 45.668964295 9.684133e-04 #> 929 b_Intercept 45.684057556 9.629128e-04 #> 930 b_Intercept 45.699150817 9.586621e-04 #> 931 b_Intercept 45.714244077 9.557610e-04 #> 932 b_Intercept 45.729337338 
9.539022e-04 #> 933 b_Intercept 45.744430599 9.530162e-04 #> 934 b_Intercept 45.759523859 9.530321e-04 #> 935 b_Intercept 45.774617120 9.538776e-04 #> 936 b_Intercept 45.789710381 9.554791e-04 #> 937 b_Intercept 45.804803641 9.578687e-04 #> 938 b_Intercept 45.819896902 9.608453e-04 #> 939 b_Intercept 45.834990163 9.643046e-04 #> 940 b_Intercept 45.850083424 9.681698e-04 #> 941 b_Intercept 45.865176684 9.723643e-04 #> 942 b_Intercept 45.880269945 9.768116e-04 #> 943 b_Intercept 45.895363206 9.814439e-04 #> 944 b_Intercept 45.910456466 9.861536e-04 #> 945 b_Intercept 45.925549727 9.908492e-04 #> 946 b_Intercept 45.940642988 9.954604e-04 #> 947 b_Intercept 45.955736248 9.999190e-04 #> 948 b_Intercept 45.970829509 1.004159e-03 #> 949 b_Intercept 45.985922770 1.008101e-03 #> 950 b_Intercept 46.001016030 1.011611e-03 #> 951 b_Intercept 46.016109291 1.014690e-03 #> 952 b_Intercept 46.031202552 1.017285e-03 #> 953 b_Intercept 46.046295812 1.019352e-03 #> 954 b_Intercept 46.061389073 1.020848e-03 #> 955 b_Intercept 46.076482334 1.021733e-03 #> 956 b_Intercept 46.091575594 1.021838e-03 #> 957 b_Intercept 46.106668855 1.021233e-03 #> 958 b_Intercept 46.121762116 1.019920e-03 #> 959 b_Intercept 46.136855376 1.017886e-03 #> 960 b_Intercept 46.151948637 1.015118e-03 #> 961 b_Intercept 46.167041898 1.011613e-03 #> 962 b_Intercept 46.182135159 1.007279e-03 #> 963 b_Intercept 46.197228419 1.002137e-03 #> 964 b_Intercept 46.212321680 9.962804e-04 #> 965 b_Intercept 46.227414941 9.897290e-04 #> 966 b_Intercept 46.242508201 9.825055e-04 #> 967 b_Intercept 46.257601462 9.746368e-04 #> 968 b_Intercept 46.272694723 9.661276e-04 #> 969 b_Intercept 46.287787983 9.569492e-04 #> 970 b_Intercept 46.302881244 9.472568e-04 #> 971 b_Intercept 46.317974505 9.370950e-04 #> 972 b_Intercept 46.333067765 9.265107e-04 #> 973 b_Intercept 46.348161026 9.155530e-04 #> 974 b_Intercept 46.363254287 9.042728e-04 #> 975 b_Intercept 46.378347547 8.926850e-04 #> 976 b_Intercept 46.393440808 8.809063e-04 #> 977 b_Intercept 46.408534069 8.689985e-04 #> 978 b_Intercept 46.423627329 8.570156e-04 #> 979 b_Intercept 46.438720590 8.450111e-04 #> 980 b_Intercept 46.453813851 8.330378e-04 #> 981 b_Intercept 46.468907112 8.211670e-04 #> 982 b_Intercept 46.484000372 8.094661e-04 #> 983 b_Intercept 46.499093633 7.979708e-04 #> 984 b_Intercept 46.514186894 7.867233e-04 #> 985 b_Intercept 46.529280154 7.757631e-04 #> 986 b_Intercept 46.544373415 7.651263e-04 #> 987 b_Intercept 46.559466676 7.548669e-04 #> 988 b_Intercept 46.574559936 7.450713e-04 #> 989 b_Intercept 46.589653197 7.356966e-04 #> 990 b_Intercept 46.604746458 7.267591e-04 #> 991 b_Intercept 46.619839718 7.182713e-04 #> 992 b_Intercept 46.634932979 7.102415e-04 #> 993 b_Intercept 46.650026240 7.026738e-04 #> 994 b_Intercept 46.665119500 6.956628e-04 #> 995 b_Intercept 46.680212761 6.891119e-04 #> 996 b_Intercept 46.695306022 6.829982e-04 #> 997 b_Intercept 46.710399282 6.773049e-04 #> 998 b_Intercept 46.725492543 6.720118e-04 #> 999 b_Intercept 46.740585804 6.670952e-04 #> 1000 b_Intercept 46.755679065 6.625690e-04 #> 1001 b_Intercept 46.770772325 6.583749e-04 #> 1002 b_Intercept 46.785865586 6.544444e-04 #> 1003 b_Intercept 46.800958847 6.507399e-04 #> 1004 b_Intercept 46.816052107 6.472224e-04 #> 1005 b_Intercept 46.831145368 6.438514e-04 #> 1006 b_Intercept 46.846238629 6.405879e-04 #> 1007 b_Intercept 46.861331889 6.373781e-04 #> 1008 b_Intercept 46.876425150 6.341615e-04 #> 1009 b_Intercept 46.891518411 6.308952e-04 #> 1010 b_Intercept 46.906611671 6.275372e-04 #> 1011 b_Intercept 
46.921704932 6.240460e-04 #> 1012 b_Intercept 46.936798193 6.203815e-04 #> 1013 b_Intercept 46.951891453 6.164465e-04 #> 1014 b_Intercept 46.966984714 6.122388e-04 #> 1015 b_Intercept 46.982077975 6.077294e-04 #> 1016 b_Intercept 46.997171235 6.028879e-04 #> 1017 b_Intercept 47.012264496 5.976871e-04 #> 1018 b_Intercept 47.027357757 5.921020e-04 #> 1019 b_Intercept 47.042451017 5.860494e-04 #> 1020 b_Intercept 47.057544278 5.795267e-04 #> 1021 b_Intercept 47.072637539 5.725586e-04 #> 1022 b_Intercept 47.087730800 5.651363e-04 #> 1023 b_Intercept 47.102824060 5.572541e-04 #> 1024 b_Intercept 47.117917321 5.489094e-04 #> 1025 b_wt -6.882027746 7.178788e-04 #> 1026 b_wt -6.875683507 7.244908e-04 #> 1027 b_wt -6.869339268 7.304031e-04 #> 1028 b_wt -6.862995028 7.357482e-04 #> 1029 b_wt -6.856650789 7.405577e-04 #> 1030 b_wt -6.850306549 7.448527e-04 #> 1031 b_wt -6.843962310 7.486560e-04 #> 1032 b_wt -6.837618071 7.519925e-04 #> 1033 b_wt -6.831273831 7.548166e-04 #> 1034 b_wt -6.824929592 7.572221e-04 #> 1035 b_wt -6.818585352 7.592649e-04 #> 1036 b_wt -6.812241113 7.609759e-04 #> 1037 b_wt -6.805896873 7.623862e-04 #> 1038 b_wt -6.799552634 7.635272e-04 #> 1039 b_wt -6.793208395 7.644019e-04 #> 1040 b_wt -6.786864155 7.650659e-04 #> 1041 b_wt -6.780519916 7.655699e-04 #> 1042 b_wt -6.774175676 7.659411e-04 #> 1043 b_wt -6.767831437 7.662052e-04 #> 1044 b_wt -6.761487198 7.663864e-04 #> 1045 b_wt -6.755142958 7.665032e-04 #> 1046 b_wt -6.748798719 7.665820e-04 #> 1047 b_wt -6.742454479 7.666448e-04 #> 1048 b_wt -6.736110240 7.667035e-04 #> 1049 b_wt -6.729766000 7.667674e-04 #> 1050 b_wt -6.723421761 7.668429e-04 #> 1051 b_wt -6.717077522 7.669349e-04 #> 1052 b_wt -6.710733282 7.670438e-04 #> 1053 b_wt -6.704389043 7.671615e-04 #> 1054 b_wt -6.698044803 7.672799e-04 #> 1055 b_wt -6.691700564 7.673884e-04 #> 1056 b_wt -6.685356325 7.674739e-04 #> 1057 b_wt -6.679012085 7.675187e-04 #> 1058 b_wt -6.672667846 7.674871e-04 #> 1059 b_wt -6.666323606 7.673652e-04 #> 1060 b_wt -6.659979367 7.671300e-04 #> 1061 b_wt -6.653635128 7.667574e-04 #> 1062 b_wt -6.647290888 7.662221e-04 #> 1063 b_wt -6.640946649 7.654980e-04 #> 1064 b_wt -6.634602409 7.644974e-04 #> 1065 b_wt -6.628258170 7.632392e-04 #> 1066 b_wt -6.621913930 7.616979e-04 #> 1067 b_wt -6.615569691 7.598489e-04 #> 1068 b_wt -6.609225452 7.576688e-04 #> 1069 b_wt -6.602881212 7.551353e-04 #> 1070 b_wt -6.596536973 7.521447e-04 #> 1071 b_wt -6.590192733 7.487394e-04 #> 1072 b_wt -6.583848494 7.449190e-04 #> 1073 b_wt -6.577504255 7.406726e-04 #> 1074 b_wt -6.571160015 7.359922e-04 #> 1075 b_wt -6.564815776 7.308726e-04 #> 1076 b_wt -6.558471536 7.252373e-04 #> 1077 b_wt -6.552127297 7.191355e-04 #> 1078 b_wt -6.545783057 7.126072e-04 #> 1079 b_wt -6.539438818 7.056642e-04 #> 1080 b_wt -6.533094579 6.983223e-04 #> 1081 b_wt -6.526750339 6.906000e-04 #> 1082 b_wt -6.520406100 6.824768e-04 #> 1083 b_wt -6.514061860 6.740029e-04 #> 1084 b_wt -6.507717621 6.652481e-04 #> 1085 b_wt -6.501373382 6.562478e-04 #> 1086 b_wt -6.495029142 6.470397e-04 #> 1087 b_wt -6.488684903 6.376641e-04 #> 1088 b_wt -6.482340663 6.281575e-04 #> 1089 b_wt -6.475996424 6.185835e-04 #> 1090 b_wt -6.469652184 6.090074e-04 #> 1091 b_wt -6.463307945 5.994785e-04 #> 1092 b_wt -6.456963706 5.900470e-04 #> 1093 b_wt -6.450619466 5.807638e-04 #> 1094 b_wt -6.444275227 5.716970e-04 #> 1095 b_wt -6.437930987 5.629494e-04 #> 1096 b_wt -6.431586748 5.545349e-04 #> 1097 b_wt -6.425242509 5.465027e-04 #> 1098 b_wt -6.418898269 5.389005e-04 #> 1099 b_wt -6.412554030 5.317750e-04 #> 
1100 b_wt -6.406209790 5.251862e-04 #> 1101 b_wt -6.399865551 5.192987e-04 #> 1102 b_wt -6.393521311 5.140377e-04 #> 1103 b_wt -6.387177072 5.094376e-04 #> 1104 b_wt -6.380832833 5.055297e-04 #> 1105 b_wt -6.374488593 5.023425e-04 #> 1106 b_wt -6.368144354 4.999012e-04 #> 1107 b_wt -6.361800114 4.984114e-04 #> 1108 b_wt -6.355455875 4.977214e-04 #> 1109 b_wt -6.349111636 4.978340e-04 #> 1110 b_wt -6.342767396 4.987557e-04 #> 1111 b_wt -6.336423157 5.004892e-04 #> 1112 b_wt -6.330078917 5.030336e-04 #> 1113 b_wt -6.323734678 5.065392e-04 #> 1114 b_wt -6.317390438 5.108699e-04 #> 1115 b_wt -6.311046199 5.159736e-04 #> 1116 b_wt -6.304701960 5.218314e-04 #> 1117 b_wt -6.298357720 5.284212e-04 #> 1118 b_wt -6.292013481 5.357184e-04 #> 1119 b_wt -6.285669241 5.437965e-04 #> 1120 b_wt -6.279325002 5.525577e-04 #> 1121 b_wt -6.272980763 5.619127e-04 #> 1122 b_wt -6.266636523 5.718253e-04 #> 1123 b_wt -6.260292284 5.822579e-04 #> 1124 b_wt -6.253948044 5.931720e-04 #> 1125 b_wt -6.247603805 6.045753e-04 #> 1126 b_wt -6.241259565 6.164057e-04 #> 1127 b_wt -6.234915326 6.285720e-04 #> 1128 b_wt -6.228571087 6.410343e-04 #> 1129 b_wt -6.222226847 6.537537e-04 #> 1130 b_wt -6.215882608 6.666919e-04 #> 1131 b_wt -6.209538368 6.798236e-04 #> 1132 b_wt -6.203194129 6.931079e-04 #> 1133 b_wt -6.196849890 7.064864e-04 #> 1134 b_wt -6.190505650 7.199302e-04 #> 1135 b_wt -6.184161411 7.334126e-04 #> 1136 b_wt -6.177817171 7.469098e-04 #> 1137 b_wt -6.171472932 7.603989e-04 #> 1138 b_wt -6.165128692 7.738524e-04 #> 1139 b_wt -6.158784453 7.872606e-04 #> 1140 b_wt -6.152440214 8.006152e-04 #> 1141 b_wt -6.146095974 8.139115e-04 #> 1142 b_wt -6.139751735 8.271476e-04 #> 1143 b_wt -6.133407495 8.403243e-04 #> 1144 b_wt -6.127063256 8.534391e-04 #> 1145 b_wt -6.120719017 8.665173e-04 #> 1146 b_wt -6.114374777 8.795738e-04 #> 1147 b_wt -6.108030538 8.926262e-04 #> 1148 b_wt -6.101686298 9.056948e-04 #> 1149 b_wt -6.095342059 9.188027e-04 #> 1150 b_wt -6.088997819 9.319999e-04 #> 1151 b_wt -6.082653580 9.453112e-04 #> 1152 b_wt -6.076309341 9.587672e-04 #> 1153 b_wt -6.069965101 9.724016e-04 #> 1154 b_wt -6.063620862 9.862495e-04 #> 1155 b_wt -6.057276622 1.000347e-03 #> 1156 b_wt -6.050932383 1.014796e-03 #> 1157 b_wt -6.044588144 1.029612e-03 #> 1158 b_wt -6.038243904 1.044817e-03 #> 1159 b_wt -6.031899665 1.060452e-03 #> 1160 b_wt -6.025555425 1.076555e-03 #> 1161 b_wt -6.019211186 1.093168e-03 #> 1162 b_wt -6.012866947 1.110418e-03 #> 1163 b_wt -6.006522707 1.128331e-03 #> 1164 b_wt -6.000178468 1.146895e-03 #> 1165 b_wt -5.993834228 1.166147e-03 #> 1166 b_wt -5.987489989 1.186125e-03 #> 1167 b_wt -5.981145749 1.206867e-03 #> 1168 b_wt -5.974801510 1.228505e-03 #> 1169 b_wt -5.968457271 1.251108e-03 #> 1170 b_wt -5.962113031 1.274608e-03 #> 1171 b_wt -5.955768792 1.299042e-03 #> 1172 b_wt -5.949424552 1.324450e-03 #> 1173 b_wt -5.943080313 1.350868e-03 #> 1174 b_wt -5.936736074 1.378422e-03 #> 1175 b_wt -5.930391834 1.407268e-03 #> 1176 b_wt -5.924047595 1.437277e-03 #> 1177 b_wt -5.917703355 1.468496e-03 #> 1178 b_wt -5.911359116 1.500974e-03 #> 1179 b_wt -5.905014876 1.534765e-03 #> 1180 b_wt -5.898670637 1.569982e-03 #> 1181 b_wt -5.892326398 1.606942e-03 #> 1182 b_wt -5.885982158 1.645439e-03 #> 1183 b_wt -5.879637919 1.685547e-03 #> 1184 b_wt -5.873293679 1.727345e-03 #> 1185 b_wt -5.866949440 1.770917e-03 #> 1186 b_wt -5.860605201 1.816362e-03 #> 1187 b_wt -5.854260961 1.864271e-03 #> 1188 b_wt -5.847916722 1.914316e-03 #> 1189 b_wt -5.841572482 1.966611e-03 #> 1190 b_wt -5.835228243 2.021281e-03 #> 1191 
b_wt -5.828884003 2.078455e-03 #> 1192 b_wt -5.822539764 2.138270e-03 #> 1193 b_wt -5.816195525 2.201504e-03 #> 1194 b_wt -5.809851285 2.267863e-03 #> 1195 b_wt -5.803507046 2.337424e-03 #> 1196 b_wt -5.797162806 2.410358e-03 #> 1197 b_wt -5.790818567 2.486839e-03 #> 1198 b_wt -5.784474328 2.567049e-03 #> 1199 b_wt -5.778130088 2.651906e-03 #> 1200 b_wt -5.771785849 2.741262e-03 #> 1201 b_wt -5.765441609 2.835047e-03 #> 1202 b_wt -5.759097370 2.933463e-03 #> 1203 b_wt -5.752753130 3.036712e-03 #> 1204 b_wt -5.746408891 3.144997e-03 #> 1205 b_wt -5.740064652 3.259284e-03 #> 1206 b_wt -5.733720412 3.379684e-03 #> 1207 b_wt -5.727376173 3.505850e-03 #> 1208 b_wt -5.721031933 3.637974e-03 #> 1209 b_wt -5.714687694 3.776243e-03 #> 1210 b_wt -5.708343455 3.920840e-03 #> 1211 b_wt -5.701999215 4.072622e-03 #> 1212 b_wt -5.695654976 4.232079e-03 #> 1213 b_wt -5.689310736 4.398445e-03 #> 1214 b_wt -5.682966497 4.571851e-03 #> 1215 b_wt -5.676622257 4.752415e-03 #> 1216 b_wt -5.670278018 4.940244e-03 #> 1217 b_wt -5.663933779 5.135915e-03 #> 1218 b_wt -5.657589539 5.340342e-03 #> 1219 b_wt -5.651245300 5.552267e-03 #> 1220 b_wt -5.644901060 5.771709e-03 #> 1221 b_wt -5.638556821 5.998665e-03 #> 1222 b_wt -5.632212582 6.233116e-03 #> 1223 b_wt -5.625868342 6.475238e-03 #> 1224 b_wt -5.619524103 6.726273e-03 #> 1225 b_wt -5.613179863 6.984538e-03 #> 1226 b_wt -5.606835624 7.249901e-03 #> 1227 b_wt -5.600491384 7.522212e-03 #> 1228 b_wt -5.594147145 7.801302e-03 #> 1229 b_wt -5.587802906 8.086977e-03 #> 1230 b_wt -5.581458666 8.380429e-03 #> 1231 b_wt -5.575114427 8.679895e-03 #> 1232 b_wt -5.568770187 8.985060e-03 #> 1233 b_wt -5.562425948 9.295636e-03 #> 1234 b_wt -5.556081709 9.611319e-03 #> 1235 b_wt -5.549737469 9.931790e-03 #> 1236 b_wt -5.543393230 1.025749e-02 #> 1237 b_wt -5.537048990 1.058726e-02 #> 1238 b_wt -5.530704751 1.092054e-02 #> 1239 b_wt -5.524360511 1.125697e-02 #> 1240 b_wt -5.518016272 1.159616e-02 #> 1241 b_wt -5.511672033 1.193775e-02 #> 1242 b_wt -5.505327793 1.228158e-02 #> 1243 b_wt -5.498983554 1.262695e-02 #> 1244 b_wt -5.492639314 1.297334e-02 #> 1245 b_wt -5.486295075 1.332040e-02 #> 1246 b_wt -5.479950836 1.366776e-02 #> 1247 b_wt -5.473606596 1.401509e-02 #> 1248 b_wt -5.467262357 1.436195e-02 #> 1249 b_wt -5.460918117 1.470791e-02 #> 1250 b_wt -5.454573878 1.505273e-02 #> 1251 b_wt -5.448229638 1.539617e-02 #> 1252 b_wt -5.441885399 1.573800e-02 #> 1253 b_wt -5.435541160 1.607800e-02 #> 1254 b_wt -5.429196920 1.641579e-02 #> 1255 b_wt -5.422852681 1.675104e-02 #> 1256 b_wt -5.416508441 1.708396e-02 #> 1257 b_wt -5.410164202 1.741449e-02 #> 1258 b_wt -5.403819963 1.774258e-02 #> 1259 b_wt -5.397475723 1.806823e-02 #> 1260 b_wt -5.391131484 1.839134e-02 #> 1261 b_wt -5.384787244 1.871169e-02 #> 1262 b_wt -5.378443005 1.902985e-02 #> 1263 b_wt -5.372098765 1.934598e-02 #> 1264 b_wt -5.365754526 1.966024e-02 #> 1265 b_wt -5.359410287 1.997283e-02 #> 1266 b_wt -5.353066047 2.028398e-02 #> 1267 b_wt -5.346721808 2.059380e-02 #> 1268 b_wt -5.340377568 2.090293e-02 #> 1269 b_wt -5.334033329 2.121169e-02 #> 1270 b_wt -5.327689090 2.152042e-02 #> 1271 b_wt -5.321344850 2.182948e-02 #> 1272 b_wt -5.315000611 2.213925e-02 #> 1273 b_wt -5.308656371 2.245050e-02 #> 1274 b_wt -5.302312132 2.276354e-02 #> 1275 b_wt -5.295967893 2.307874e-02 #> 1276 b_wt -5.289623653 2.339654e-02 #> 1277 b_wt -5.283279414 2.371734e-02 #> 1278 b_wt -5.276935174 2.404158e-02 #> 1279 b_wt -5.270590935 2.437051e-02 #> 1280 b_wt -5.264246695 2.470418e-02 #> 1281 b_wt -5.257902456 2.504276e-02 #> 1282 b_wt 
-5.251558217 2.538666e-02 #> 1283 b_wt -5.245213977 2.573624e-02 #> 1284 b_wt -5.238869738 2.609186e-02 #> 1285 b_wt -5.232525498 2.645493e-02 #> 1286 b_wt -5.226181259 2.682550e-02 #> 1287 b_wt -5.219837020 2.720329e-02 #> 1288 b_wt -5.213492780 2.758857e-02 #> 1289 b_wt -5.207148541 2.798160e-02 #> 1290 b_wt -5.200804301 2.838262e-02 #> 1291 b_wt -5.194460062 2.879282e-02 #> 1292 b_wt -5.188115822 2.921256e-02 #> 1293 b_wt -5.181771583 2.964097e-02 #> 1294 b_wt -5.175427344 3.007818e-02 #> 1295 b_wt -5.169083104 3.052429e-02 #> 1296 b_wt -5.162738865 3.097943e-02 #> 1297 b_wt -5.156394625 3.144439e-02 #> 1298 b_wt -5.150050386 3.192001e-02 #> 1299 b_wt -5.143706147 3.240487e-02 #> 1300 b_wt -5.137361907 3.289899e-02 #> 1301 b_wt -5.131017668 3.340238e-02 #> 1302 b_wt -5.124673428 3.391504e-02 #> 1303 b_wt -5.118329189 3.443739e-02 #> 1304 b_wt -5.111984949 3.497084e-02 #> 1305 b_wt -5.105640710 3.551353e-02 #> 1306 b_wt -5.099296471 3.606545e-02 #> 1307 b_wt -5.092952231 3.662659e-02 #> 1308 b_wt -5.086607992 3.719693e-02 #> 1309 b_wt -5.080263752 3.777651e-02 #> 1310 b_wt -5.073919513 3.836741e-02 #> 1311 b_wt -5.067575274 3.896747e-02 #> 1312 b_wt -5.061231034 3.957670e-02 #> 1313 b_wt -5.054886795 4.019511e-02 #> 1314 b_wt -5.048542555 4.082272e-02 #> 1315 b_wt -5.042198316 4.145955e-02 #> 1316 b_wt -5.035854076 4.210761e-02 #> 1317 b_wt -5.029509837 4.276525e-02 #> 1318 b_wt -5.023165598 4.343225e-02 #> 1319 b_wt -5.016821358 4.410867e-02 #> 1320 b_wt -5.010477119 4.479456e-02 #> 1321 b_wt -5.004132879 4.549000e-02 #> 1322 b_wt -4.997788640 4.619674e-02 #> 1323 b_wt -4.991444401 4.691382e-02 #> 1324 b_wt -4.985100161 4.764067e-02 #> 1325 b_wt -4.978755922 4.837735e-02 #> 1326 b_wt -4.972411682 4.912392e-02 #> 1327 b_wt -4.966067443 4.988043e-02 #> 1328 b_wt -4.959723203 5.064834e-02 #> 1329 b_wt -4.953378964 5.142730e-02 #> 1330 b_wt -4.947034725 5.221632e-02 #> 1331 b_wt -4.940690485 5.301539e-02 #> 1332 b_wt -4.934346246 5.382451e-02 #> 1333 b_wt -4.928002006 5.464365e-02 #> 1334 b_wt -4.921657767 5.547381e-02 #> 1335 b_wt -4.915313528 5.631523e-02 #> 1336 b_wt -4.908969288 5.716644e-02 #> 1337 b_wt -4.902625049 5.802735e-02 #> 1338 b_wt -4.896280809 5.889783e-02 #> 1339 b_wt -4.889936570 5.977774e-02 #> 1340 b_wt -4.883592330 6.066755e-02 #> 1341 b_wt -4.877248091 6.156796e-02 #> 1342 b_wt -4.870903852 6.247717e-02 #> 1343 b_wt -4.864559612 6.339498e-02 #> 1344 b_wt -4.858215373 6.432115e-02 #> 1345 b_wt -4.851871133 6.525547e-02 #> 1346 b_wt -4.845526894 6.619794e-02 #> 1347 b_wt -4.839182655 6.714958e-02 #> 1348 b_wt -4.832838415 6.810849e-02 #> 1349 b_wt -4.826494176 6.907443e-02 #> 1350 b_wt -4.820149936 7.004716e-02 #> 1351 b_wt -4.813805697 7.102644e-02 #> 1352 b_wt -4.807461457 7.201207e-02 #> 1353 b_wt -4.801117218 7.300519e-02 #> 1354 b_wt -4.794772979 7.400414e-02 #> 1355 b_wt -4.788428739 7.500874e-02 #> 1356 b_wt -4.782084500 7.601885e-02 #> 1357 b_wt -4.775740260 7.703434e-02 #> 1358 b_wt -4.769396021 7.805512e-02 #> 1359 b_wt -4.763051782 7.908215e-02 #> 1360 b_wt -4.756707542 8.011456e-02 #> 1361 b_wt -4.750363303 8.115217e-02 #> 1362 b_wt -4.744019063 8.219502e-02 #> 1363 b_wt -4.737674824 8.324320e-02 #> 1364 b_wt -4.731330584 8.429682e-02 #> 1365 b_wt -4.724986345 8.535701e-02 #> 1366 b_wt -4.718642106 8.642355e-02 #> 1367 b_wt -4.712297866 8.749626e-02 #> 1368 b_wt -4.705953627 8.857544e-02 #> 1369 b_wt -4.699609387 8.966139e-02 #> 1370 b_wt -4.693265148 9.075445e-02 #> 1371 b_wt -4.686920909 9.185604e-02 #> 1372 b_wt -4.680576669 9.296665e-02 #> 1373 b_wt 
-4.674232430 9.408587e-02 #> 1374 b_wt -4.667888190 9.521417e-02 #> 1375 b_wt -4.661543951 9.635204e-02 #> 1376 b_wt -4.655199712 9.749999e-02 #> 1377 b_wt -4.648855472 9.865958e-02 #> 1378 b_wt -4.642511233 9.983222e-02 #> 1379 b_wt -4.636166993 1.010169e-01 #> 1380 b_wt -4.629822754 1.022140e-01 #> 1381 b_wt -4.623478514 1.034243e-01 #> 1382 b_wt -4.617134275 1.046482e-01 #> 1383 b_wt -4.610790036 1.058871e-01 #> 1384 b_wt -4.604445796 1.071436e-01 #> 1385 b_wt -4.598101557 1.084157e-01 #> 1386 b_wt -4.591757317 1.097037e-01 #> 1387 b_wt -4.585413078 1.110082e-01 #> 1388 b_wt -4.579068839 1.123295e-01 #> 1389 b_wt -4.572724599 1.136686e-01 #> 1390 b_wt -4.566380360 1.150293e-01 #> 1391 b_wt -4.560036120 1.164084e-01 #> 1392 b_wt -4.553691881 1.178061e-01 #> 1393 b_wt -4.547347641 1.192226e-01 #> 1394 b_wt -4.541003402 1.206582e-01 #> 1395 b_wt -4.534659163 1.221132e-01 #> 1396 b_wt -4.528314923 1.235922e-01 #> 1397 b_wt -4.521970684 1.250912e-01 #> 1398 b_wt -4.515626444 1.266102e-01 #> 1399 b_wt -4.509282205 1.281490e-01 #> 1400 b_wt -4.502937966 1.297078e-01 #> 1401 b_wt -4.496593726 1.312866e-01 #> 1402 b_wt -4.490249487 1.328891e-01 #> 1403 b_wt -4.483905247 1.345125e-01 #> 1404 b_wt -4.477561008 1.361557e-01 #> 1405 b_wt -4.471216768 1.378185e-01 #> 1406 b_wt -4.464872529 1.395008e-01 #> 1407 b_wt -4.458528290 1.412024e-01 #> 1408 b_wt -4.452184050 1.429261e-01 #> 1409 b_wt -4.445839811 1.446702e-01 #> 1410 b_wt -4.439495571 1.464329e-01 #> 1411 b_wt -4.433151332 1.482138e-01 #> 1412 b_wt -4.426807093 1.500128e-01 #> 1413 b_wt -4.420462853 1.518295e-01 #> 1414 b_wt -4.414118614 1.536656e-01 #> 1415 b_wt -4.407774374 1.555207e-01 #> 1416 b_wt -4.401430135 1.573924e-01 #> 1417 b_wt -4.395085895 1.592803e-01 #> 1418 b_wt -4.388741656 1.611841e-01 #> 1419 b_wt -4.382397417 1.631033e-01 #> 1420 b_wt -4.376053177 1.650388e-01 #> 1421 b_wt -4.369708938 1.669911e-01 #> 1422 b_wt -4.363364698 1.689574e-01 #> 1423 b_wt -4.357020459 1.709372e-01 #> 1424 b_wt -4.350676220 1.729302e-01 #> 1425 b_wt -4.344331980 1.749359e-01 #> 1426 b_wt -4.337987741 1.769543e-01 #> 1427 b_wt -4.331643501 1.789866e-01 #> 1428 b_wt -4.325299262 1.810297e-01 #> 1429 b_wt -4.318955022 1.830833e-01 #> 1430 b_wt -4.312610783 1.851468e-01 #> 1431 b_wt -4.306266544 1.872197e-01 #> 1432 b_wt -4.299922304 1.893016e-01 #> 1433 b_wt -4.293578065 1.913934e-01 #> 1434 b_wt -4.287233825 1.934927e-01 #> 1435 b_wt -4.280889586 1.955988e-01 #> 1436 b_wt -4.274545347 1.977112e-01 #> 1437 b_wt -4.268201107 1.998294e-01 #> 1438 b_wt -4.261856868 2.019527e-01 #> 1439 b_wt -4.255512628 2.040815e-01 #> 1440 b_wt -4.249168389 2.062140e-01 #> 1441 b_wt -4.242824149 2.083498e-01 #> 1442 b_wt -4.236479910 2.104882e-01 #> 1443 b_wt -4.230135671 2.126287e-01 #> 1444 b_wt -4.223791431 2.147709e-01 #> 1445 b_wt -4.217447192 2.169143e-01 #> 1446 b_wt -4.211102952 2.190581e-01 #> 1447 b_wt -4.204758713 2.212021e-01 #> 1448 b_wt -4.198414474 2.233456e-01 #> 1449 b_wt -4.192070234 2.254885e-01 #> 1450 b_wt -4.185725995 2.276304e-01 #> 1451 b_wt -4.179381755 2.297707e-01 #> 1452 b_wt -4.173037516 2.319093e-01 #> 1453 b_wt -4.166693276 2.340461e-01 #> 1454 b_wt -4.160349037 2.361811e-01 #> 1455 b_wt -4.154004798 2.383141e-01 #> 1456 b_wt -4.147660558 2.404451e-01 #> 1457 b_wt -4.141316319 2.425739e-01 #> 1458 b_wt -4.134972079 2.447008e-01 #> 1459 b_wt -4.128627840 2.468260e-01 #> 1460 b_wt -4.122283601 2.489499e-01 #> 1461 b_wt -4.115939361 2.510726e-01 #> 1462 b_wt -4.109595122 2.531944e-01 #> 1463 b_wt -4.103250882 2.553158e-01 #> 1464 b_wt 
-4.096906643 2.574373e-01 #> 1465 b_wt -4.090562403 2.595595e-01 #> 1466 b_wt -4.084218164 2.616827e-01 #> 1467 b_wt -4.077873925 2.638076e-01 #> 1468 b_wt -4.071529685 2.659347e-01 #> 1469 b_wt -4.065185446 2.680647e-01 #> 1470 b_wt -4.058841206 2.701990e-01 #> 1471 b_wt -4.052496967 2.723375e-01 #> 1472 b_wt -4.046152728 2.744811e-01 #> 1473 b_wt -4.039808488 2.766301e-01 #> 1474 b_wt -4.033464249 2.787854e-01 #> 1475 b_wt -4.027120009 2.809473e-01 #> 1476 b_wt -4.020775770 2.831186e-01 #> 1477 b_wt -4.014431530 2.852981e-01 #> 1478 b_wt -4.008087291 2.874864e-01 #> 1479 b_wt -4.001743052 2.896838e-01 #> 1480 b_wt -3.995398812 2.918910e-01 #> 1481 b_wt -3.989054573 2.941082e-01 #> 1482 b_wt -3.982710333 2.963380e-01 #> 1483 b_wt -3.976366094 2.985792e-01 #> 1484 b_wt -3.970021855 3.008314e-01 #> 1485 b_wt -3.963677615 3.030949e-01 #> 1486 b_wt -3.957333376 3.053697e-01 #> 1487 b_wt -3.950989136 3.076558e-01 #> 1488 b_wt -3.944644897 3.099552e-01 #> 1489 b_wt -3.938300658 3.122665e-01 #> 1490 b_wt -3.931956418 3.145887e-01 #> 1491 b_wt -3.925612179 3.169216e-01 #> 1492 b_wt -3.919267939 3.192647e-01 #> 1493 b_wt -3.912923700 3.216177e-01 #> 1494 b_wt -3.906579460 3.239812e-01 #> 1495 b_wt -3.900235221 3.263543e-01 #> 1496 b_wt -3.893890982 3.287352e-01 #> 1497 b_wt -3.887546742 3.311234e-01 #> 1498 b_wt -3.881202503 3.335180e-01 #> 1499 b_wt -3.874858263 3.359183e-01 #> 1500 b_wt -3.868514024 3.383239e-01 #> 1501 b_wt -3.862169785 3.407338e-01 #> 1502 b_wt -3.855825545 3.431462e-01 #> 1503 b_wt -3.849481306 3.455604e-01 #> 1504 b_wt -3.843137066 3.479753e-01 #> 1505 b_wt -3.836792827 3.503901e-01 #> 1506 b_wt -3.830448587 3.528037e-01 #> 1507 b_wt -3.824104348 3.552145e-01 #> 1508 b_wt -3.817760109 3.576218e-01 #> 1509 b_wt -3.811415869 3.600247e-01 #> 1510 b_wt -3.805071630 3.624222e-01 #> 1511 b_wt -3.798727390 3.648136e-01 #> 1512 b_wt -3.792383151 3.671979e-01 #> 1513 b_wt -3.786038912 3.695723e-01 #> 1514 b_wt -3.779694672 3.719377e-01 #> 1515 b_wt -3.773350433 3.742936e-01 #> 1516 b_wt -3.767006193 3.766391e-01 #> 1517 b_wt -3.760661954 3.789738e-01 #> 1518 b_wt -3.754317714 3.812972e-01 #> 1519 b_wt -3.747973475 3.836058e-01 #> 1520 b_wt -3.741629236 3.859019e-01 #> 1521 b_wt -3.735284996 3.881853e-01 #> 1522 b_wt -3.728940757 3.904556e-01 #> 1523 b_wt -3.722596517 3.927129e-01 #> 1524 b_wt -3.716252278 3.949568e-01 #> 1525 b_wt -3.709908039 3.971848e-01 #> 1526 b_wt -3.703563799 3.993988e-01 #> 1527 b_wt -3.697219560 4.015996e-01 #> 1528 b_wt -3.690875320 4.037873e-01 #> 1529 b_wt -3.684531081 4.059620e-01 #> 1530 b_wt -3.678186841 4.081240e-01 #> 1531 b_wt -3.671842602 4.102716e-01 #> 1532 b_wt -3.665498363 4.124061e-01 #> 1533 b_wt -3.659154123 4.145290e-01 #> 1534 b_wt -3.652809884 4.166406e-01 #> 1535 b_wt -3.646465644 4.187412e-01 #> 1536 b_wt -3.640121405 4.208313e-01 #> 1537 b_wt -3.633777166 4.229099e-01 #> 1538 b_wt -3.627432926 4.249778e-01 #> 1539 b_wt -3.621088687 4.270364e-01 #> 1540 b_wt -3.614744447 4.290860e-01 #> 1541 b_wt -3.608400208 4.311271e-01 #> 1542 b_wt -3.602055968 4.331597e-01 #> 1543 b_wt -3.595711729 4.351837e-01 #> 1544 b_wt -3.589367490 4.371986e-01 #> 1545 b_wt -3.583023250 4.392058e-01 #> 1546 b_wt -3.576679011 4.412053e-01 #> 1547 b_wt -3.570334771 4.431973e-01 #> 1548 b_wt -3.563990532 4.451816e-01 #> 1549 b_wt -3.557646293 4.471577e-01 #> 1550 b_wt -3.551302053 4.491241e-01 #> 1551 b_wt -3.544957814 4.510822e-01 #> 1552 b_wt -3.538613574 4.530313e-01 #> 1553 b_wt -3.532269335 4.549712e-01 #> 1554 b_wt -3.525925095 4.569012e-01 #> 1555 b_wt 
-3.519580856 4.588207e-01 #> 1556 b_wt -3.513236617 4.607262e-01 #> 1557 b_wt -3.506892377 4.626193e-01 #> 1558 b_wt -3.500548138 4.644991e-01 #> 1559 b_wt -3.494203898 4.663647e-01 #> 1560 b_wt -3.487859659 4.682151e-01 #> 1561 b_wt -3.481515420 4.700493e-01 #> 1562 b_wt -3.475171180 4.718619e-01 #> 1563 b_wt -3.468826941 4.736548e-01 #> 1564 b_wt -3.462482701 4.754272e-01 #> 1565 b_wt -3.456138462 4.771777e-01 #> 1566 b_wt -3.449794222 4.789051e-01 #> 1567 b_wt -3.443449983 4.806081e-01 #> 1568 b_wt -3.437105744 4.822802e-01 #> 1569 b_wt -3.430761504 4.839227e-01 #> 1570 b_wt -3.424417265 4.855359e-01 #> 1571 b_wt -3.418073025 4.871183e-01 #> 1572 b_wt -3.411728786 4.886686e-01 #> 1573 b_wt -3.405384547 4.901855e-01 #> 1574 b_wt -3.399040307 4.916622e-01 #> 1575 b_wt -3.392696068 4.930988e-01 #> 1576 b_wt -3.386351828 4.944973e-01 #> 1577 b_wt -3.380007589 4.958565e-01 #> 1578 b_wt -3.373663349 4.971754e-01 #> 1579 b_wt -3.367319110 4.984527e-01 #> 1580 b_wt -3.360974871 4.996826e-01 #> 1581 b_wt -3.354630631 5.008627e-01 #> 1582 b_wt -3.348286392 5.019978e-01 #> 1583 b_wt -3.341942152 5.030870e-01 #> 1584 b_wt -3.335597913 5.041294e-01 #> 1585 b_wt -3.329253674 5.051245e-01 #> 1586 b_wt -3.322909434 5.060678e-01 #> 1587 b_wt -3.316565195 5.069541e-01 #> 1588 b_wt -3.310220955 5.077908e-01 #> 1589 b_wt -3.303876716 5.085775e-01 #> 1590 b_wt -3.297532477 5.093138e-01 #> 1591 b_wt -3.291188237 5.099994e-01 #> 1592 b_wt -3.284843998 5.106320e-01 #> 1593 b_wt -3.278499758 5.112029e-01 #> 1594 b_wt -3.272155519 5.117222e-01 #> 1595 b_wt -3.265811279 5.121898e-01 #> 1596 b_wt -3.259467040 5.126057e-01 #> 1597 b_wt -3.253122801 5.129698e-01 #> 1598 b_wt -3.246778561 5.132821e-01 #> 1599 b_wt -3.240434322 5.135302e-01 #> 1600 b_wt -3.234090082 5.137266e-01 #> 1601 b_wt -3.227745843 5.138715e-01 #> 1602 b_wt -3.221401604 5.139649e-01 #> 1603 b_wt -3.215057364 5.140071e-01 #> 1604 b_wt -3.208713125 5.139981e-01 #> 1605 b_wt -3.202368885 5.139278e-01 #> 1606 b_wt -3.196024646 5.138052e-01 #> 1607 b_wt -3.189680406 5.136323e-01 #> 1608 b_wt -3.183336167 5.134094e-01 #> 1609 b_wt -3.176991928 5.131369e-01 #> 1610 b_wt -3.170647688 5.128149e-01 #> 1611 b_wt -3.164303449 5.124357e-01 #> 1612 b_wt -3.157959209 5.120045e-01 #> 1613 b_wt -3.151614970 5.115254e-01 #> 1614 b_wt -3.145270731 5.109988e-01 #> 1615 b_wt -3.138926491 5.104251e-01 #> 1616 b_wt -3.132582252 5.098049e-01 #> 1617 b_wt -3.126238012 5.091327e-01 #> 1618 b_wt -3.119893773 5.084106e-01 #> 1619 b_wt -3.113549533 5.076441e-01 #> 1620 b_wt -3.107205294 5.068340e-01 #> 1621 b_wt -3.100861055 5.059811e-01 #> 1622 b_wt -3.094516815 5.050860e-01 #> 1623 b_wt -3.088172576 5.041458e-01 #> 1624 b_wt -3.081828336 5.031599e-01 #> 1625 b_wt -3.075484097 5.021353e-01 #> 1626 b_wt -3.069139858 5.010728e-01 #> 1627 b_wt -3.062795618 4.999736e-01 #> 1628 b_wt -3.056451379 4.988388e-01 #> 1629 b_wt -3.050107139 4.976675e-01 #> 1630 b_wt -3.043762900 4.964576e-01 #> 1631 b_wt -3.037418660 4.952164e-01 #> 1632 b_wt -3.031074421 4.939453e-01 #> 1633 b_wt -3.024730182 4.926457e-01 #> 1634 b_wt -3.018385942 4.913188e-01 #> 1635 b_wt -3.012041703 4.899656e-01 #> 1636 b_wt -3.005697463 4.885832e-01 #> 1637 b_wt -2.999353224 4.871788e-01 #> 1638 b_wt -2.993008985 4.857539e-01 #> 1639 b_wt -2.986664745 4.843098e-01 #> 1640 b_wt -2.980320506 4.828479e-01 #> 1641 b_wt -2.973976266 4.813699e-01 #> 1642 b_wt -2.967632027 4.798741e-01 #> 1643 b_wt -2.961287787 4.783657e-01 #> 1644 b_wt -2.954943548 4.768461e-01 #> 1645 b_wt -2.948599309 4.753167e-01 #> 1646 b_wt 
-2.942255069 4.737787e-01 #> 1647 b_wt -2.935910830 4.722335e-01 #> 1648 b_wt -2.929566590 4.706813e-01 #> 1649 b_wt -2.923222351 4.691247e-01 #> 1650 b_wt -2.916878112 4.675648e-01 #> 1651 b_wt -2.910533872 4.660026e-01 #> 1652 b_wt -2.904189633 4.644390e-01 #> 1653 b_wt -2.897845393 4.628749e-01 #> 1654 b_wt -2.891501154 4.613112e-01 #> 1655 b_wt -2.885156914 4.597486e-01 #> 1656 b_wt -2.878812675 4.581878e-01 #> 1657 b_wt -2.872468436 4.566291e-01 #> 1658 b_wt -2.866124196 4.550729e-01 #> 1659 b_wt -2.859779957 4.535193e-01 #> 1660 b_wt -2.853435717 4.519690e-01 #> 1661 b_wt -2.847091478 4.504220e-01 #> 1662 b_wt -2.840747239 4.488778e-01 #> 1663 b_wt -2.834402999 4.473362e-01 #> 1664 b_wt -2.828058760 4.457970e-01 #> 1665 b_wt -2.821714520 4.442599e-01 #> 1666 b_wt -2.815370281 4.427246e-01 #> 1667 b_wt -2.809026041 4.411903e-01 #> 1668 b_wt -2.802681802 4.396564e-01 #> 1669 b_wt -2.796337563 4.381219e-01 #> 1670 b_wt -2.789993323 4.365864e-01 #> 1671 b_wt -2.783649084 4.350489e-01 #> 1672 b_wt -2.777304844 4.335084e-01 #> 1673 b_wt -2.770960605 4.319633e-01 #> 1674 b_wt -2.764616366 4.304131e-01 #> 1675 b_wt -2.758272126 4.288568e-01 #> 1676 b_wt -2.751927887 4.272935e-01 #> 1677 b_wt -2.745583647 4.257221e-01 #> 1678 b_wt -2.739239408 4.241416e-01 #> 1679 b_wt -2.732895168 4.225483e-01 #> 1680 b_wt -2.726550929 4.209434e-01 #> 1681 b_wt -2.720206690 4.193259e-01 #> 1682 b_wt -2.713862450 4.176946e-01 #> 1683 b_wt -2.707518211 4.160488e-01 #> 1684 b_wt -2.701173971 4.143875e-01 #> 1685 b_wt -2.694829732 4.127058e-01 #> 1686 b_wt -2.688485493 4.110058e-01 #> 1687 b_wt -2.682141253 4.092873e-01 #> 1688 b_wt -2.675797014 4.075493e-01 #> 1689 b_wt -2.669452774 4.057912e-01 #> 1690 b_wt -2.663108535 4.040122e-01 #> 1691 b_wt -2.656764295 4.022077e-01 #> 1692 b_wt -2.650420056 4.003796e-01 #> 1693 b_wt -2.644075817 3.985288e-01 #> 1694 b_wt -2.637731577 3.966546e-01 #> 1695 b_wt -2.631387338 3.947569e-01 #> 1696 b_wt -2.625043098 3.928352e-01 #> 1697 b_wt -2.618698859 3.908857e-01 #> 1698 b_wt -2.612354620 3.889094e-01 #> 1699 b_wt -2.606010380 3.869086e-01 #> 1700 b_wt -2.599666141 3.848831e-01 #> 1701 b_wt -2.593321901 3.828329e-01 #> 1702 b_wt -2.586977662 3.807582e-01 #> 1703 b_wt -2.580633423 3.786563e-01 #> 1704 b_wt -2.574289183 3.765270e-01 #> 1705 b_wt -2.567944944 3.743737e-01 #> 1706 b_wt -2.561600704 3.721968e-01 #> 1707 b_wt -2.555256465 3.699966e-01 #> 1708 b_wt -2.548912225 3.677733e-01 #> 1709 b_wt -2.542567986 3.655259e-01 #> 1710 b_wt -2.536223747 3.632530e-01 #> 1711 b_wt -2.529879507 3.609589e-01 #> 1712 b_wt -2.523535268 3.586442e-01 #> 1713 b_wt -2.517191028 3.563094e-01 #> 1714 b_wt -2.510846789 3.539552e-01 #> 1715 b_wt -2.504502550 3.515816e-01 #> 1716 b_wt -2.498158310 3.491866e-01 #> 1717 b_wt -2.491814071 3.467750e-01 #> 1718 b_wt -2.485469831 3.443473e-01 #> 1719 b_wt -2.479125592 3.419045e-01 #> 1720 b_wt -2.472781352 3.394474e-01 #> 1721 b_wt -2.466437113 3.369769e-01 #> 1722 b_wt -2.460092874 3.344912e-01 #> 1723 b_wt -2.453748634 3.319945e-01 #> 1724 b_wt -2.447404395 3.294878e-01 #> 1725 b_wt -2.441060155 3.269719e-01 #> 1726 b_wt -2.434715916 3.244480e-01 #> 1727 b_wt -2.428371677 3.219169e-01 #> 1728 b_wt -2.422027437 3.193786e-01 #> 1729 b_wt -2.415683198 3.168357e-01 #> 1730 b_wt -2.409338958 3.142891e-01 #> 1731 b_wt -2.402994719 3.117400e-01 #> 1732 b_wt -2.396650479 3.091893e-01 #> 1733 b_wt -2.390306240 3.066379e-01 #> 1734 b_wt -2.383962001 3.040870e-01 #> 1735 b_wt -2.377617761 3.015378e-01 #> 1736 b_wt -2.371273522 2.989911e-01 #> 1737 b_wt 
-2.364929282 2.964479e-01 #> 1738 b_wt -2.358585043 2.939087e-01 #> 1739 b_wt -2.352240804 2.913743e-01 #> 1740 b_wt -2.345896564 2.888464e-01 #> 1741 b_wt -2.339552325 2.863255e-01 #> 1742 b_wt -2.333208085 2.838117e-01 #> 1743 b_wt -2.326863846 2.813053e-01 #> 1744 b_wt -2.320519606 2.788068e-01 #> 1745 b_wt -2.314175367 2.763165e-01 #> 1746 b_wt -2.307831128 2.738356e-01 #> 1747 b_wt -2.301486888 2.713646e-01 #> 1748 b_wt -2.295142649 2.689025e-01 #> 1749 b_wt -2.288798409 2.664492e-01 #> 1750 b_wt -2.282454170 2.640047e-01 #> 1751 b_wt -2.276109931 2.615689e-01 #> 1752 b_wt -2.269765691 2.591420e-01 #> 1753 b_wt -2.263421452 2.567247e-01 #> 1754 b_wt -2.257077212 2.543150e-01 #> 1755 b_wt -2.250732973 2.519126e-01 #> 1756 b_wt -2.244388733 2.495169e-01 #> 1757 b_wt -2.238044494 2.471276e-01 #> 1758 b_wt -2.231700255 2.447441e-01 #> 1759 b_wt -2.225356015 2.423666e-01 #> 1760 b_wt -2.219011776 2.399932e-01 #> 1761 b_wt -2.212667536 2.376233e-01 #> 1762 b_wt -2.206323297 2.352562e-01 #> 1763 b_wt -2.199979058 2.328913e-01 #> 1764 b_wt -2.193634818 2.305279e-01 #> 1765 b_wt -2.187290579 2.281652e-01 #> 1766 b_wt -2.180946339 2.258023e-01 #> 1767 b_wt -2.174602100 2.234385e-01 #> 1768 b_wt -2.168257860 2.210733e-01 #> 1769 b_wt -2.161913621 2.187062e-01 #> 1770 b_wt -2.155569382 2.163365e-01 #> 1771 b_wt -2.149225142 2.139631e-01 #> 1772 b_wt -2.142880903 2.115859e-01 #> 1773 b_wt -2.136536663 2.092047e-01 #> 1774 b_wt -2.130192424 2.068194e-01 #> 1775 b_wt -2.123848185 2.044297e-01 #> 1776 b_wt -2.117503945 2.020353e-01 #> 1777 b_wt -2.111159706 1.996357e-01 #> 1778 b_wt -2.104815466 1.972311e-01 #> 1779 b_wt -2.098471227 1.948220e-01 #> 1780 b_wt -2.092126987 1.924088e-01 #> 1781 b_wt -2.085782748 1.899916e-01 #> 1782 b_wt -2.079438509 1.875709e-01 #> 1783 b_wt -2.073094269 1.851466e-01 #> 1784 b_wt -2.066750030 1.827197e-01 #> 1785 b_wt -2.060405790 1.802911e-01 #> 1786 b_wt -2.054061551 1.778615e-01 #> 1787 b_wt -2.047717312 1.754316e-01 #> 1788 b_wt -2.041373072 1.730023e-01 #> 1789 b_wt -2.035028833 1.705745e-01 #> 1790 b_wt -2.028684593 1.681499e-01 #> 1791 b_wt -2.022340354 1.657292e-01 #> 1792 b_wt -2.015996114 1.633135e-01 #> 1793 b_wt -2.009651875 1.609037e-01 #> 1794 b_wt -2.003307636 1.585010e-01 #> 1795 b_wt -1.996963396 1.561071e-01 #> 1796 b_wt -1.990619157 1.537249e-01 #> 1797 b_wt -1.984274917 1.513539e-01 #> 1798 b_wt -1.977930678 1.489954e-01 #> 1799 b_wt -1.971586439 1.466506e-01 #> 1800 b_wt -1.965242199 1.443206e-01 #> 1801 b_wt -1.958897960 1.420070e-01 #> 1802 b_wt -1.952553720 1.397150e-01 #> 1803 b_wt -1.946209481 1.374420e-01 #> 1804 b_wt -1.939865242 1.351892e-01 #> 1805 b_wt -1.933521002 1.329577e-01 #> 1806 b_wt -1.927176763 1.307485e-01 #> 1807 b_wt -1.920832523 1.285627e-01 #> 1808 b_wt -1.914488284 1.264070e-01 #> 1809 b_wt -1.908144044 1.242776e-01 #> 1810 b_wt -1.901799805 1.221750e-01 #> 1811 b_wt -1.895455566 1.200999e-01 #> 1812 b_wt -1.889111326 1.180530e-01 #> 1813 b_wt -1.882767087 1.160352e-01 #> 1814 b_wt -1.876422847 1.140527e-01 #> 1815 b_wt -1.870078608 1.121023e-01 #> 1816 b_wt -1.863734369 1.101830e-01 #> 1817 b_wt -1.857390129 1.082952e-01 #> 1818 b_wt -1.851045890 1.064392e-01 #> 1819 b_wt -1.844701650 1.046153e-01 #> 1820 b_wt -1.838357411 1.028287e-01 #> 1821 b_wt -1.832013171 1.010777e-01 #> 1822 b_wt -1.825668932 9.935938e-02 #> 1823 b_wt -1.819324693 9.767385e-02 #> 1824 b_wt -1.812980453 9.602105e-02 #> 1825 b_wt -1.806636214 9.440092e-02 #> 1826 b_wt -1.800291974 9.281703e-02 #> 1827 b_wt -1.793947735 9.126951e-02 #> 1828 b_wt 
[rows 1829-3072 of the estimate_density output omitted: the remaining density estimates for b_wt (x from -1.79 to -0.39) and b_cyl (x from -3.18 to 0.37)] # }"},
{"path":"https://easystats.github.io/bayestestR/reference/eti.html","id":null,"dir":"Reference","previous_headings":"","what":"Equal-Tailed Interval (ETI) — eti","title":"Equal-Tailed Interval (ETI) — eti","text":"Compute the Equal-Tailed Interval (ETI) of posterior distributions using the quantiles method. The probability of being below this interval is equal to the probability of being above it. The ETI can be used in the context of uncertainty characterisation of posterior distributions as Credible Interval (CI).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/eti.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Equal-Tailed Interval (ETI) — eti","text":"","code":"eti(x, ...) # S3 method for class 'numeric' eti(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'data.frame' eti(x, ci = 0.95, rvar_col = NULL, verbose = TRUE, ...) # S3 method for class 'stanreg' eti( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' eti( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'get_predicted' eti(x, ci = 0.95, use_iterations = FALSE, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/eti.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Equal-Tailed Interval (ETI) — eti","text":"x Vector representing a posterior distribution, or a data frame of such vectors. Can also be a Bayesian model. bayestestR supports a wide range of models (see, for example, methods(\"hdi\")); not all of them are documented in the 'Usage' section, because methods for other classes mostly resemble the arguments of the .numeric or .data.frame methods. ... Currently not used. ci Value or vector of probability of the (credible) interval - CI (between 0 and 1) to be estimated. Default to .95 (95%). verbose Toggle off warnings. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output. use_iterations Logical, if TRUE and x is a get_predicted object (returned by insight::get_predicted()), the function is applied to the iterations instead of the predictions. This only applies to models that return iterations for predicted values (e.g., brmsfit models).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/eti.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Equal-Tailed Interval (ETI) — eti","text":"A data frame with the following columns: Parameter The model parameter(s), if x is a model-object. If x is a vector, this column is missing. CI The probability of the credible interval. CI_low, CI_high The lower and upper credible interval limits for the parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/eti.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Equal-Tailed Interval (ETI) — eti","text":"Unlike equal-tailed intervals (see eti()) that typically exclude 2.5% from each tail of the distribution and always include the median, the HDI is not equal-tailed and therefore always includes the mode(s) of posterior distributions. While this can be useful to better represent the credibility mass of a distribution, the HDI also has some limitations. See spi() for details. The 95% or 89% Credible Intervals (CI) are two reasonable ranges to characterize the uncertainty related to the estimation (see the discussion about the differences between these two values). The 89% intervals (ci = 0.89) are deemed to be more stable than, for instance, 95% intervals (Kruschke, 2014). An effective sample size of at least 10,000 is recommended if one wants to estimate 95% intervals with high precision (Kruschke, 2014, p. 183ff). Unfortunately, the default number of posterior samples for most Bayes packages (e.g., rstanarm or brms) is only 4,000 (thus, you might want to increase it when fitting your model). Moreover, 89 indicates the arbitrariness of interval limits - its only remarkable property is being the highest prime number that does not exceed the already unstable 95% threshold (McElreath, 2015). However, 95% has some advantages too. For instance, it shares (in the case of a normal posterior distribution) an intuitive relationship with the standard deviation and it conveys a more accurate image of the (artificial) bounds of the distribution. Also, because it is wider, it makes analyses more conservative (i.e., the probability of covering 0 is larger for the 95% CI than for lower ranges such as 89%), which is a good thing in the context of the reproducibility crisis. A 95% equal-tailed interval (ETI) has 2.5% of the distribution on either side of its limits. It indicates the 2.5th percentile and the 97.5th percentile. In symmetric distributions, the two methods of computing credible intervals, the ETI and the HDI, return similar results. This is not the case for skewed distributions. Indeed, it is possible that parameter values within the ETI have lower credibility (are less probable) than parameter values outside the ETI. This property seems undesirable as a summary of the credible values in a distribution. On the other hand, the ETI range does change when transformations are applied to the distribution (for instance, from a log odds scale to probabilities): the lower and higher bounds of the transformed distribution correspond to the transformed lower and higher bounds of the original distribution. On the contrary, applying transformations to the distribution will change the resulting HDI.","code":""},
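Since the ETI is defined by the quantiles method, the numeric case reduces to a two-sided quantile call. A minimal sketch (eti_manual is a hypothetical name, not part of bayestestR; the real eti() additionally handles data frames, models, and vectors of ci levels):

# Equal-tailed interval from posterior draws: put half of the excluded
# mass (1 - ci) in each tail.
eti_manual <- function(x, ci = 0.95) {
  alpha <- (1 - ci) / 2
  stats::quantile(x, probs = c(alpha, 1 - alpha), names = FALSE)
}
set.seed(1)
posterior <- rnorm(1000)
eti_manual(posterior) # agrees with eti(posterior) up to printing precision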
{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/eti.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Equal-Tailed Interval (ETI) — eti","text":"","code":"library(bayestestR) posterior <- rnorm(1000) eti(posterior) #> 95% ETI: [-1.93, 1.84] eti(posterior, ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> 80% ETI | 89% ETI | 95% ETI #> --------------------------------------------- #> [-1.29, 1.25] | [-1.58, 1.53] | [-1.93, 1.84] df <- data.frame(replicate(4, rnorm(100))) eti(df) #> Equal-Tailed Interval #> #> Parameter | 95% ETI #> ------------------------- #> X1 | [-1.93, 2.19] #> X2 | [-1.70, 1.96] #> X3 | [-1.91, 1.63] #> X4 | [-1.87, 1.87] eti(df, ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> Parameter | 80% ETI | 89% ETI | 95% ETI #> --------------------------------------------------------- #> X1 | [-1.16, 1.28] | [-1.74, 1.78] | [-1.93, 2.19] #> X2 | [-0.96, 1.62] | [-1.40, 1.70] | [-1.70, 1.96] #> X3 | [-1.07, 0.88] | [-1.52, 1.15] | [-1.91, 1.63] #> X4 | [-1.20, 1.18] | [-1.67, 1.47] | [-1.87, 1.87] # \\donttest{ model <- suppressWarnings( rstanarm::stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) eti(model) #> Equal-Tailed Interval #> #> Parameter | 95% ETI | Effects | Component #> ---------------------------------------------------- #> (Intercept) | [29.80, 50.38] | fixed | conditional #> wt | [-6.99, -3.94] | fixed | conditional #> gear | [-2.21, 1.24] | fixed | conditional eti(model, ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> Parameter | 80% ETI | 89% ETI | 95% ETI | Effects | Component #> -------------------------------------------------------------------------------------- #> (Intercept) | [32.43, 45.66] | [31.18, 47.98] | [29.80, 50.38] | fixed | conditional #> wt | [-6.44, -4.62] | [-6.66, -4.35] | [-6.99, -3.94] | fixed | conditional #> gear | [-1.52, 0.79] | [-1.88, 0.99] | [-2.21, 1.24] | fixed | conditional eti(emmeans::emtrends(model, ~1, \"wt\", data = mtcars)) #> Equal-Tailed Interval #> #> X1 | 95% ETI #> ------------------------ #> overall | [-6.99, -3.94]
model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress output for 4 chains omitted]
eti(model) #> Equal-Tailed Interval #> #> Parameter | 95% ETI | Effects | Component #> ---------------------------------------------------- #> b_Intercept | [36.12, 43.25] | fixed | conditional #> b_wt | [-4.78, -1.64] | fixed | conditional #> b_cyl | [-2.38, -0.66] | fixed | conditional eti(model, ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> Parameter | 80% ETI | 89% ETI | 95% ETI | Effects | Component #> -------------------------------------------------------------------------------------- #> b_Intercept | [37.42, 41.98] | [36.82, 42.64] | [36.12, 43.25] | fixed | conditional #> b_wt | [-4.19, -2.20] | [-4.45, -1.95] | [-4.78, -1.64] | fixed | conditional #> b_cyl | [-2.06, -0.96] | [-2.19, -0.82] | [-2.38, -0.66] | fixed | conditional bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1)) eti(bf) #> Equal-Tailed Interval #> #> Parameter | 95% ETI #> ------------------------- #> Difference | [0.85, 1.26] eti(bf, ci = c(0.80, 0.89, 0.95)) #> Equal-Tailed Interval #> #> Parameter | 80% ETI | 89% ETI | 95% ETI #> ------------------------------------------------------- #> Difference | [0.92, 1.19] | [0.89, 1.22] | [0.85, 1.26] # }"},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":null,"dir":"Reference","previous_headings":"","what":"Highest Density Interval (HDI) — hdi","title":"Highest Density Interval (HDI) — hdi","text":"Compute the Highest Density Interval (HDI) of posterior distributions. All points within this interval have a higher probability density than points outside the interval. The HDI can be used in the context of uncertainty characterisation of posterior distributions as Credible Interval (CI).","code":""},
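The defining property above suggests a simple sample-based computation: for unimodal draws, the HDI is the shortest interval containing a proportion ci of the sorted sample. A minimal sketch under that assumption (hdi_manual is a hypothetical name, not bayestestR's implementation, which also handles ties and many input classes):

# Shortest interval containing `ci` of the draws: slide a window of
# fixed sample size over the sorted draws and keep the narrowest one.
hdi_manual <- function(x, ci = 0.95) {
  x <- sort(x)
  n <- length(x)
  window <- ceiling(ci * n)
  starts <- seq_len(n - window + 1)
  widths <- x[starts + window - 1] - x[starts]
  best <- which.min(widths)
  c(CI_low = x[best], CI_high = x[best + window - 1])
}
set.seed(1)
posterior <- rnorm(1000)
hdi_manual(posterior) # close to hdi(posterior) for unimodal samples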
{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Highest Density Interval (HDI) — hdi","text":"","code":"hdi(x, ...) # S3 method for class 'numeric' hdi(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'data.frame' hdi(x, ci = 0.95, rvar_col = NULL, verbose = TRUE, ...) # S3 method for class 'stanreg' hdi( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' hdi( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'get_predicted' hdi(x, ci = 0.95, use_iterations = FALSE, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Highest Density Interval (HDI) — hdi","text":"x Vector representing a posterior distribution, or a data frame of such vectors. Can also be a Bayesian model. bayestestR supports a wide range of models (see, for example, methods(\"hdi\")); not all of them are documented in the 'Usage' section, because methods for other classes mostly resemble the arguments of the .numeric or .data.frame methods. ... Currently not used. ci Value or vector of probability of the (credible) interval - CI (between 0 and 1) to be estimated. Default to .95 (95%). verbose Toggle off warnings. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output. use_iterations Logical, if TRUE and x is a get_predicted object (returned by insight::get_predicted()), the function is applied to the iterations instead of the predictions. This only applies to models that return iterations for predicted values (e.g., brmsfit models).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Highest Density Interval (HDI) — hdi","text":"A data frame with the following columns: Parameter The model parameter(s), if x is a model-object. If x is a vector, this column is missing. CI The probability of the credible interval. CI_low, CI_high The lower and upper credible interval limits for the parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Highest Density Interval (HDI) — hdi","text":"Unlike equal-tailed intervals (see eti()) that typically exclude 2.5% from each tail of the distribution and always include the median, the HDI is not equal-tailed and therefore always includes the mode(s) of posterior distributions. While this can be useful to better represent the credibility mass of a distribution, the HDI also has some limitations. See spi() for details. The 95% or 89% Credible Intervals (CI) are two reasonable ranges to characterize the uncertainty related to the estimation (see the discussion about the differences between these two values). The 89% intervals (ci = 0.89) are deemed to be more stable than, for instance, 95% intervals (Kruschke, 2014). An effective sample size of at least 10,000 is recommended if one wants to estimate 95% intervals with high precision (Kruschke, 2014, p. 183ff). Unfortunately, the default number of posterior samples for most Bayes packages (e.g., rstanarm or brms) is only 4,000 (thus, you might want to increase it when fitting your model). Moreover, 89 indicates the arbitrariness of interval limits - its only remarkable property is being the highest prime number that does not exceed the already unstable 95% threshold (McElreath, 2015). However, 95% has some advantages too. For instance, it shares (in the case of a normal posterior distribution) an intuitive relationship with the standard deviation and it conveys a more accurate image of the (artificial) bounds of the distribution. Also, because it is wider, it makes analyses more conservative (i.e., the probability of covering 0 is larger for the 95% CI than for lower ranges such as 89%), which is a good thing in the context of the reproducibility crisis. A 95% equal-tailed interval (ETI) has 2.5% of the distribution on either side of its limits. It indicates the 2.5th percentile and the 97.5th percentile. In symmetric distributions, the two methods of computing credible intervals, the ETI and the HDI, return similar results. This is not the case for skewed distributions. Indeed, it is possible that parameter values within the ETI have lower credibility (are less probable) than parameter values outside the ETI. This property seems undesirable as a summary of the credible values in a distribution. On the other hand, the ETI range does change when transformations are applied to the distribution (for instance, from a log odds scale to probabilities): the lower and higher bounds of the transformed distribution correspond to the transformed lower and higher bounds of the original distribution. On the contrary, applying transformations to the distribution will change the resulting HDI.","code":""},
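The transformation behaviour described in the Details is easy to verify numerically. A small sketch, using plain quantiles as a stand-in for eti() and plogis() to map log odds to probabilities:

# ETI bounds commute with monotone transformations: transforming the
# bounds gives the bounds of the transformed draws.
set.seed(123)
log_odds <- rnorm(1000, mean = 0.5, sd = 1)          # draws on the log odds scale
eti_log_odds <- quantile(log_odds, c(0.025, 0.975))  # 95% ETI on the log odds scale
plogis(eti_log_odds)                                 # transform the interval bounds ...
quantile(plogis(log_odds), c(0.025, 0.975))          # ... matches (up to interpolation) the ETI of the transformed draws
# Repeating this with bayestestR::hdi() instead of quantile() would show a
# discrepancy: the HDI of the transformed draws is not the transformed HDI.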
Indeed, possible parameter values ETI lower credibility (less probable) parameter values outside ETI. property seems undesirable summary credible values distribution. hand, ETI range change transformations applied distribution (instance, log odds scale probabilities): lower higher bounds transformed distribution correspond transformed lower higher bounds original distribution. contrary, applying transformations distribution change resulting HDI.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Highest Density Interval (HDI) — hdi","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Highest Density Interval (HDI) — hdi","text":"Kruschke, J. (2014). Bayesian data analysis: tutorial R, JAGS, Stan. Academic Press. McElreath, R. (2015). Statistical rethinking: Bayesian course examples R Stan. Chapman Hall/CRC.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Highest Density Interval (HDI) — hdi","text":"Credits go ggdistribute HDInterval.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/hdi.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Highest Density Interval (HDI) — hdi","text":"","code":"library(bayestestR) posterior <- rnorm(1000) hdi(posterior, ci = 0.89) #> 89% HDI: [-1.46, 1.79] hdi(posterior, ci = c(0.80, 0.90, 0.95)) #> Highest Density Interval #> #> 80% HDI | 90% HDI | 95% HDI #> --------------------------------------------- #> [-1.39, 1.24] | [-1.59, 1.77] | [-2.10, 1.80] bayestestR::hdi(iris[1:4]) #> Identical densities found along different segments of the distribution, #> choosing rightmost. #> Highest Density Interval #> #> Parameter | 95% HDI #> --------------------------- #> Sepal.Length | [4.60, 7.70] #> Sepal.Width | [2.20, 3.90] #> Petal.Length | [1.00, 6.10] #> Petal.Width | [0.10, 2.30] bayestestR::hdi(iris[1:4], ci = c(0.80, 0.90, 0.95)) #> Identical densities found along different segments of the distribution, #> choosing rightmost. #> Identical densities found along different segments of the distribution, #> choosing rightmost. #> Identical densities found along different segments of the distribution, #> choosing rightmost. 
#> Highest Density Interval #> #> Parameter | 80% HDI | 90% HDI | 95% HDI #> --------------------------------------------------------- #> Sepal.Length | [4.90, 6.90] | [4.40, 6.90] | [4.60, 7.70] #> Sepal.Width | [2.50, 3.60] | [2.40, 3.80] | [2.20, 3.90] #> Petal.Length | [1.30, 5.50] | [1.10, 5.80] | [1.00, 6.10] #> Petal.Width | [0.10, 1.90] | [0.20, 2.30] | [0.10, 2.30] # \donttest{ model <- suppressWarnings( rstanarm::stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) bayestestR::hdi(model) #> Highest Density Interval #> #> Parameter | 95% HDI #> ---------------------------- #> (Intercept) | [29.21, 49.50] #> wt | [-6.99, -4.05] #> gear | [-2.18, 1.68] bayestestR::hdi(model, ci = c(0.80, 0.90, 0.95)) #> Highest Density Interval #> #> Parameter | 80% HDI | 90% HDI | 95% HDI #> -------------------------------------------------------------- #> (Intercept) | [31.68, 46.67] | [29.21, 47.18] | [29.21, 49.50] #> wt | [-6.30, -4.23] | [-6.70, -4.11] | [-6.99, -4.05] #> gear | [-1.53, 1.08] | [-1.89, 1.41] | [-2.18, 1.68] bayestestR::hdi(emmeans::emtrends(model, ~1, \"wt\", data = mtcars)) #> Highest Density Interval #> #> X1 | 95% HDI #> ------------------------ #> overall | [-6.99, -4.05] model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress output for 4 chains (1000 warmup + 1000 sampling iterations each, well under a second per chain) not shown] bayestestR::hdi(model) #> Highest Density Interval #> #> Parameter | 95% HDI #> ---------------------------- #> (Intercept) | [36.15, 43.24] #> wt | [-4.81, -1.60] #> cyl | [-2.37, -0.66] bayestestR::hdi(model, ci = c(0.80, 0.90, 0.95)) #> Highest Density Interval #> #> Parameter | 80% HDI | 90% HDI | 95% HDI #> -------------------------------------------------------------- #> (Intercept) | [37.05, 41.78] | [36.79, 42.75] | [36.15, 43.24] #> wt | [-4.22, -2.23] | [-4.53, -1.88] | [-4.81, -1.60] #> cyl | [-2.07, -0.97] | [-2.20, -0.78] | [-2.37, -0.66] bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1)) bayestestR::hdi(bf) #> Highest Density Interval #> #> Parameter | 95% HDI #> ------------------------- #> Difference | [0.77, 1.19] bayestestR::hdi(bf, ci = c(0.80, 0.90, 0.95)) #> Highest Density Interval #> #> Parameter | 80% HDI | 90% HDI | 95% HDI #> ------------------------------------------------------- #> Difference | [0.85, 1.12] | [0.82, 1.17] | [0.78, 1.20] # }"},{"path":"https://easystats.github.io/bayestestR/reference/map_estimate.html","id":null,"dir":"Reference","previous_headings":"","what":"Maximum A Posteriori probability estimate (MAP) — map_estimate","title":"Maximum A Posteriori probability estimate (MAP) — map_estimate","text":"Find Highest Maximum Posteriori probability estimate (MAP) posterior, .e., value associated highest probability density (\"peak\" posterior distribution). words, estimation mode continuous parameters.
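As a rough illustration of that definition (a sketch, not taken from the reference page), the MAP estimate is the location of the peak of an estimated density curve:
set.seed(42)
posterior <- rnorm(10000, mean = 1)
d <- density(posterior)   # kernel density estimate (base R)
d$x[which.max(d$y)]       # x-value at the density peak, close to 1
# map_estimate(posterior) follows the same idea, with a finer grid and other bandwidth defaults.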
Note function relies estimate_density(), default uses different smoothing bandwidth (\"SJ\") compared legacy default implemented base R density() function (\"nrd0\").","code":""},{"path":"https://easystats.github.io/bayestestR/reference/map_estimate.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Maximum A Posteriori probability estimate (MAP) — map_estimate","text":"","code":"map_estimate(x, ...) # S3 method for class 'numeric' map_estimate(x, precision = 2^10, method = \"kernel\", ...) # S3 method for class 'stanreg' map_estimate( x, precision = 2^10, method = \"kernel\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, ... ) # S3 method for class 'brmsfit' map_estimate( x, precision = 2^10, method = \"kernel\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ... ) # S3 method for class 'data.frame' map_estimate(x, precision = 2^10, method = \"kernel\", rvar_col = NULL, ...) # S3 method for class 'get_predicted' map_estimate( x, precision = 2^10, method = \"kernel\", use_iterations = FALSE, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/map_estimate.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Maximum A Posteriori probability estimate (MAP) — map_estimate","text":"x Vector representing posterior distribution, data frame vectors. Can also Bayesian model. bayestestR supports wide range models (see, example, methods(\"hdi\")) documented 'Usage' section, methods classes mostly resemble arguments .numeric .data.framemethods. ... Currently used. precision Number points density data. See n parameter density. method Density estimation method. Can \"kernel\" (default), \"logspline\" \"KernSmooth\". effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output. rvar_col single character - name rvar column data frame processed. See example p_direction(). use_iterations Logical, TRUE x get_predicted object, (returned insight::get_predicted()), function applied iterations instead predictions. applies models return iterations predicted values (e.g., brmsfit models). verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/map_estimate.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Maximum A Posteriori probability estimate (MAP) — map_estimate","text":"numeric value x vector. x model-object, returns data frame following columns: Parameter: model parameter(s), x model-object. x vector, column missing. 
MAP_Estimate: MAP estimate posterior model parameter.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/map_estimate.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Maximum A Posteriori probability estimate (MAP) — map_estimate","text":"","code":"# \donttest{ library(bayestestR) posterior <- rnorm(10000) map_estimate(posterior) #> MAP Estimate #> #> Parameter | MAP_Estimate #> ------------------------ #> x | 0.06 plot(density(posterior)) abline(v = as.numeric(map_estimate(posterior)), col = \"red\") model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars) #> [Stan sampling progress output for 4 chains (1000 warmup + 1000 sampling iterations each, roughly 0.09 seconds per chain) not shown] map_estimate(model) #> MAP Estimate #> #> Parameter | MAP_Estimate #> -------------------------- #> (Intercept) | 39.51 #> wt | -3.24 #> cyl | -1.39 model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress output for 4 chains (1000 warmup + 1000 sampling iterations each, roughly 0.04 seconds per chain) not shown] map_estimate(model) #> MAP Estimate #> #> Parameter | MAP_Estimate #> -------------------------- #> b_Intercept | 39.67 #> b_wt | -3.06 #> b_cyl | -1.58 # }"},{"path":"https://easystats.github.io/bayestestR/reference/mcse.html","id":null,"dir":"Reference","previous_headings":"","what":"Monte-Carlo Standard Error (MCSE) — mcse","title":"Monte-Carlo Standard Error (MCSE) — mcse","text":"function returns Monte Carlo Standard Error (MCSE).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mcse.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Monte-Carlo Standard Error (MCSE) — mcse","text":"","code":"mcse(model, ...)
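# (Illustrative sketch, not part of this usage listing: per the Details section below,
# the MCSE is the chains' standard deviation divided by the square root of the
# effective sample size. Assuming roughly independent draws:
# draws <- rnorm(4000)
# sd(draws) / sqrt(4000)   # Monte Carlo standard error of the posterior mean
# mcse() reports this per parameter, using the model's estimated effective sample size.)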
# S3 method for class 'stanreg' mcse( model, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/mcse.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Monte-Carlo Standard Error (MCSE) — mcse","text":"model stanreg, stanfit, brmsfit, blavaan, MCMCglmm object. ... Currently used. effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mcse.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Monte-Carlo Standard Error (MCSE) — mcse","text":"Monte Carlo Standard Error (MCSE) another measure accuracy chains. defined standard deviation chains divided square root effective sample size (formula mcse() Kruschke 2014, p. 187). MCSE “provides quantitative suggestion big estimation noise”.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mcse.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Monte-Carlo Standard Error (MCSE) — mcse","text":"Kruschke, J. (2014). Bayesian data analysis: tutorial R, JAGS, Stan. Academic Press.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mcse.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Monte-Carlo Standard Error (MCSE) — mcse","text":"","code":"# \donttest{ library(bayestestR) model <- suppressWarnings( rstanarm::stan_glm(mpg ~ wt + am, data = mtcars, chains = 1, refresh = 0) ) mcse(model) #> Parameter MCSE #> 1 (Intercept) 0.15783056 #> 2 wt 0.04047611 #> 3 am 0.07802692 # }"},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":null,"dir":"Reference","previous_headings":"","what":"Summary of Bayesian multivariate-response mediation-models — mediation","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"mediation() short summary multivariate-response mediation-models, .e. function computes average direct average causal mediation effects multivariate response models.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"","code":"mediation(model, ...) # S3 method for class 'brmsfit' mediation( model, treatment, mediator, response = NULL, centrality = \"median\", ci = 0.95, method = \"ETI\", ... ) # S3 method for class 'stanmvreg' mediation( model, treatment, mediator, response = NULL, centrality = \"median\", ci = 0.95, method = \"ETI\", ... )"},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"model brmsfit stanmvreg object. ... used.
treatment Character, name treatment variable (direct effect) (multivariate response) mediator-model. missing, mediation() tries find treatment variable automatically, however, may fail. mediator Character, name mediator variable (multivariate response) mediator-model. missing, mediation() tries find mediator variable automatically, however, may fail. response named character vector, indicating names response variables used mediation analysis. Usually can NULL, case variables retrieved automatically. NULL, names match names model formulas, names(insight::find_response(model, combine = TRUE)). can useful, instance, mediator variable used predictor different name mediator variable used response. might occur mediator transformed one model, used \"as is\" response variable model. Example: mediator m used response variable, centered version m_center used mediator variable. second response variable (treatment model, mediator additional predictor), y, transformed. use response like: mediation(model, response = c(m = \"m_center\", y = \"y\")). centrality point-estimates (centrality indices) compute. Character (vector) list one options: \"median\", \"mean\", \"MAP\" (see map_estimate()), \"trimmed\" (just mean(x, trim = threshold)), \"mode\" \"all\". ci Value vector probability CI (0 1) estimated. Default 0.95 (95%). method Can \"ETI\" (default), \"HDI\", \"BCI\", \"SPI\" \"SI\".","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"data frame direct, indirect, mediator total effect multivariate-response mediation-model, well proportion mediated. effect sizes median values posterior samples (use centrality centrality indices).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"mediation() returns data frame information direct effect (mean value posterior samples treatment outcome model), mediator effect (mean value posterior samples mediator outcome model), indirect effect (mean value multiplication posterior samples mediator outcome model posterior samples treatment mediation model) total effect (mean value sums posterior samples used direct indirect effect). proportion mediated indirect effect divided total effect. values, 95% credible intervals calculated default. Use ci calculate different interval. arguments treatment mediator necessarily need specified. missing, mediation() tries find treatment mediator variable automatically. work, specify variables. direct effect also called average direct effect (ADE), indirect effect also called average causal mediation effects (ACME). See also Tingley et al. 2014 Imai et al.
2010.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":".data.frame() method returns posterior samples effects, can used processing different bayestestR package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"Imai, K., Keele, L. Tingley, D. (2010) General Approach Causal Mediation Analysis, Psychological Methods, Vol. 15, . 4 (December), pp. 309-334. Tingley, D., Yamamoto, T., Hirose, K., Imai, K. Keele, L. (2014). mediation: R package Causal Mediation Analysis, Journal Statistical Software, Vol. 59, . 5, pp. 1-38.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/mediation.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Summary of Bayesian multivariate-response mediation-models — mediation","text":"","code":"# \\donttest{ library(mediation) library(brms) library(rstanarm) # load sample data data(jobs) set.seed(123) # linear models, for mediation analysis b1 <- lm(job_seek ~ treat + econ_hard + sex + age, data = jobs) b2 <- lm(depress2 ~ treat + job_seek + econ_hard + sex + age, data = jobs) # mediation analysis, for comparison with Stan models m1 <- mediate(b1, b2, sims = 1000, treat = \"treat\", mediator = \"job_seek\") # Fit Bayesian mediation model in brms f1 <- bf(job_seek ~ treat + econ_hard + sex + age) f2 <- bf(depress2 ~ treat + job_seek + econ_hard + sex + age) m2 <- brm(f1 + f2 + set_rescor(FALSE), data = jobs, refresh = 0) #> Compiling Stan program... #> Start sampling # Fit Bayesian mediation model in rstanarm m3 <- suppressWarnings(stan_mvmer( list( job_seek ~ treat + econ_hard + sex + age + (1 | occp), depress2 ~ treat + job_seek + econ_hard + sex + age + (1 | occp) ), data = jobs, refresh = 0 )) #> Fitting a multivariate glmer model. #> #> Please note the warmup may be much slower than later iterations! summary(m1) #> #> Causal Mediation Analysis #> #> Quasi-Bayesian Confidence Intervals #> #> Estimate 95% CI Lower 95% CI Upper p-value #> ACME -0.0157 -0.0387 0.01 0.19 #> ADE -0.0438 -0.1315 0.04 0.35 #> Total Effect -0.0595 -0.1530 0.02 0.21 #> Prop. 
Mediated 0.2137 -2.0277 2.70 0.32 #> #> Sample Size Used: 899 #> #> #> Simulations: 1000 #> mediation(m2, centrality = \"mean\", ci = 0.95) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.125, 0.045] #> Indirect Effect (ACME) | -0.016 | [-0.041, 0.009] #> Mediator Effect | -0.240 | [-0.295, -0.183] #> Total Effect | -0.056 | [-0.139, 0.032] #> #> Proportion mediated: 27.87% [-169.41%, 225.15%] #> mediation(m3, centrality = \"mean\", ci = 0.95) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.041 | [-0.124, 0.041] #> Indirect Effect (ACME) | -0.018 | [-0.043, 0.006] #> Mediator Effect | -0.241 | [-0.298, -0.183] #> Total Effect | -0.059 | [-0.145, 0.029] #> #> Proportion mediated: 30.30% [-216.03%, 276.63%] #> # }"},{"path":"https://easystats.github.io/bayestestR/reference/model_to_priors.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert model's posteriors to priors (EXPERIMENTAL) — model_to_priors","title":"Convert model's posteriors to priors (EXPERIMENTAL) — model_to_priors","text":"Convert model's posteriors (normal) priors.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/model_to_priors.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert model's posteriors to priors (EXPERIMENTAL) — model_to_priors","text":"","code":"model_to_priors(model, scale_multiply = 3, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/model_to_priors.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert model's posteriors to priors (EXPERIMENTAL) — model_to_priors","text":"model Bayesian model. scale_multiply SD posterior multiplied amount set prior avoid overly narrow priors. ... arguments insight::get_prior() describe_posterior.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/model_to_priors.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Convert model's posteriors to priors (EXPERIMENTAL) — model_to_priors","text":"","code":"# \\donttest{ # brms models # ----------------------------------------------- if (require(\"brms\")) { formula <- brms::brmsformula(mpg ~ wt + cyl, center = FALSE) model <- brms::brm(formula, data = mtcars, refresh = 0) priors <- model_to_priors(model) priors <- brms::validate_prior(priors, formula, data = mtcars) priors model2 <- brms::brm(formula, data = mtcars, prior = priors, refresh = 0) } #> Compiling Stan program... #> Start sampling #> Compiling Stan program... 
#> Start sampling # }"},{"path":"https://easystats.github.io/bayestestR/reference/overlap.html","id":null,"dir":"Reference","previous_headings":"","what":"Overlap Coefficient — overlap","title":"Overlap Coefficient — overlap","text":"method calculate overlap coefficient two empirical distributions (can used measure similarity two samples).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/overlap.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Overlap Coefficient — overlap","text":"","code":"overlap( x, y, method_density = \"kernel\", method_auc = \"trapezoid\", precision = 2^10, extend = TRUE, extend_scale = 0.1, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/overlap.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Overlap Coefficient — overlap","text":"x Vector x values. y Vector y values. method_density Density estimation method. See estimate_density(). method_auc Area Curve (AUC) estimation method. See area_under_curve(). precision Number points density data. See n parameter density. extend Extend range x axis factor extend_scale. extend_scale Ratio range extend x axis. value 0.1 means x axis extended 1/10 range data. ... Currently used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/overlap.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Overlap Coefficient — overlap","text":"","code":"library(bayestestR) x <- distribution_normal(1000, 2, 0.5) y <- distribution_normal(1000, 0, 1) overlap(x, y) #> # Overlap #> #> 18.6% plot(overlap(x, y))"},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":null,"dir":"Reference","previous_headings":"","what":"Probability of Direction (pd) — p_direction","title":"Probability of Direction (pd) — p_direction","text":"Compute Probability Direction (pd, also known Maximum Probability Effect - MPE). can interpreted probability parameter (described posterior distribution) strictly positive negative (whichever probable). Although differently expressed, index fairly similar (.e., strongly correlated) frequentist p-value (see details).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Probability of Direction (pd) — p_direction","text":"","code":"p_direction(x, ...) pd(x, ...) # S3 method for class 'numeric' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ... ) # S3 method for class 'data.frame' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, rvar_col = NULL, ... ) # S3 method for class 'MCMCglmm' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ... ) # S3 method for class 'emmGrid' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ... ) # S3 method for class 'slopes' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ... ) # S3 method for class 'stanreg' p_direction( x, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ...
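# (Illustrative sketch of the default \"direct\" method, not part of this usage listing:
# posterior <- rnorm(1000, mean = 1)
# max(mean(posterior > 0), mean(posterior < 0))   # proportion of draws on the dominant side of the null
# For pd = 97.5%, the corresponding two-sided p-value is 2 * (1 - 0.975) = .05.)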
) # S3 method for class 'brmsfit' p_direction( x, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ... ) # S3 method for class 'BFBayesFactor' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, ... ) # S3 method for class 'get_predicted' p_direction( x, method = \"direct\", null = 0, as_p = FALSE, remove_na = TRUE, use_iterations = FALSE, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Probability of Direction (pd) — p_direction","text":"x vector representing posterior distribution, data frame posterior draws (samples parameter). Can also Bayesian model. ... Currently used. method Can \"direct\" one methods estimate_density(), \"kernel\", \"logspline\" \"KernSmooth\". See details. null value considered \"null\" effect. Traditionally 0, also 1 case ratios change (, IRR, ...). as_p TRUE, p-direction (pd) values converted frequentist p-value using pd_to_p(). remove_na missing values removed computation? Note Inf (infinity) removed. rvar_col single character - name rvar column data frame processed. See example p_direction(). effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output. use_iterations Logical, TRUE x get_predicted object, (returned insight::get_predicted()), function applied iterations instead predictions. applies models return iterations predicted values (e.g., brmsfit models). verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Probability of Direction (pd) — p_direction","text":"Values 0.5 1 0 1 (see ) corresponding probability direction (pd).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Probability of Direction (pd) — p_direction","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"what-is-the-pd-","dir":"Reference","previous_headings":"","what":"What is the pd?","title":"Probability of Direction (pd) — p_direction","text":"Probability Direction (pd) index effect existence, representing certainty effect goes particular direction (.e., positive negative / sign), typically ranging 0.5 1 (see next section cases can range 0 1). Beyond simplicity interpretation, understanding computation, index also presents interesting properties: Like posterior-based indices, pd solely based posterior distributions require additional information data model (e.g., priors, case Bayes factors). robust scale response variable predictors. 
strongly correlated frequentist p-value, can thus used draw parallels give reference readers non-familiar Bayesian statistics (Makowski et al., 2019).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"relationship-with-the-p-value","dir":"Reference","previous_headings":"","what":"Relationship with the p-value","title":"Probability of Direction (pd) — p_direction","text":"cases, seems pd direct correspondence frequentist one-sided p-value formula (two-sided p): p = 2 * (1 - pd) Thus, two-sided p-value respectively .1, .05, .01 .001 correspond approximately pd 95%, 97.5%, 99.5% 99.95%. See pd_to_p() details.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"possible-range-of-values","dir":"Reference","previous_headings":"","what":"Possible Range of Values","title":"Probability of Direction (pd) — p_direction","text":"largest value pd can take 1 - posterior strictly directional. However, smallest value pd can take depends parameter space represented posterior. continuous parameter space, exact values 0 (point null value) possible, 100% posterior sign, positive, negative. Therefore, smallest pd can 0.5 - equal posterior mass positive negative values. Values close 0.5 used support null hypothesis (parameter direction) similar large p-values used support null hypothesis (see pd_to_p(); Makowski et al., 2019). discrete parameter space parameter space mixture discrete continuous spaces, exact values 0 (point null value) possible! Therefore, smallest pd can 0 - 100% posterior mass 0. Thus values close 0 can used support null hypothesis (see van den Bergh et al., 2021). Examples posteriors representing discrete parameter space: parameter can take discrete values. mixture prior/posterior used (spike--slab prior; see van den Bergh et al., 2021). conducting Bayesian model averaging (e.g., weighted_posteriors() brms::posterior_average).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"methods-of-computation","dir":"Reference","previous_headings":"","what":"Methods of computation","title":"Probability of Direction (pd) — p_direction","text":"pd defined : $$p_d = max({Pr(\\hat{\\theta} < \\theta_{null}), Pr(\\hat{\\theta} > \\theta_{null})})$$ simple direct way compute pd compute proportion positive (larger null) posterior samples, proportion negative (smaller null) posterior samples, take larger two. \"simple\" method straightforward, precision directly tied number posterior draws. second approach relies density estimation: starts estimating continuous-smooth density function (many methods available), computing area curve (AUC) density curve either side null taking maximum . Note approach assumes continuous density function, posterior represents (partially) discrete parameter space, direct method must used (see ).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Probability of Direction (pd) — p_direction","text":"Makowski, D., Ben-Shachar, M. S., Chen, S. ., & Lüdecke, D. (2019). Indices effect existence significance Bayesian framework. Frontiers psychology, 10, 2767. doi:10.3389/fpsyg.2019.02767 van den Bergh, D., Haaf, J. M., Ly, ., Rouder, J. N., & Wagenmakers, E. J. (2021). cautionary note estimating effect size. Advances Methods Practices Psychological Science, 4(1). 
doi:10.1177/2515245921992035","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/p_direction.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Probability of Direction (pd) — p_direction","text":"","code":"library(bayestestR) # Simulate a posterior distribution of mean 1 and SD 1 # ---------------------------------------------------- posterior <- rnorm(1000, mean = 1, sd = 1) p_direction(posterior) #> Probability of Direction #> #> Parameter | pd #> ------------------ #> Posterior | 84.50% p_direction(posterior, method = \"kernel\") #> Probability of Direction #> #> Parameter | pd #> ------------------ #> Posterior | 83.17% # Simulate a dataframe of posterior distributions # ----------------------------------------------- df <- data.frame(replicate(4, rnorm(100))) p_direction(df) #> Probability of Direction #> #> Parameter | pd #> ------------------ #> X1 | 51.00% #> X2 | 52.00% #> X3 | 51.00% #> X4 | 58.00% p_direction(df, method = \"kernel\") #> Probability of Direction #> #> Parameter | pd #> ------------------ #> X1 | 51.24% #> X2 | 51.93% #> X3 | 50.15% #> X4 | 59.86% # \donttest{ # rstanarm models # ----------------------------------------------- model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars, chains = 2, refresh = 0 ) p_direction(model) #> Probability of Direction #> #> Parameter | pd #> ------------------ #> (Intercept) | 100% #> wt | 100% #> cyl | 100% p_direction(model, method = \"kernel\") #> Probability of Direction #> #> Parameter | pd #> --------------------- #> (Intercept) | 100.00% #> wt | 99.98% #> cyl | 99.97% # emmeans # ----------------------------------------------- p_direction(emmeans::emtrends(model, ~1, \"wt\", data = mtcars)) #> Probability of Direction #> #> X1 | pd #> -------------- #> overall | 100% # brms models # ----------------------------------------------- model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress output for 4 chains (1000 warmup + 1000 sampling iterations each, roughly 0.04 seconds per chain) not shown] p_direction(model) #> Probability of Direction #> #> Parameter | pd #> -------------------- #> (Intercept) | 100% #> wt | 100% #> cyl | 99.98% p_direction(model, method = \"kernel\") #> Probability of Direction #> #> Parameter | pd #> -------------------- #> (Intercept) | 100% #> wt | 99.99% #> cyl | 99.97% # BayesFactor objects # ----------------------------------------------- bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1)) p_direction(bf) #> Probability of Direction #> #> Parameter | pd #> ----------------- #> Difference | 100% p_direction(bf, method = \"kernel\") #> Probability of Direction #> #> Parameter | pd #> ----------------- #> Difference | 100% # } # Using \"rvar_col\" x <- data.frame(mu = c(0, 0.5, 1), sigma = c(1, 0.5, 0.25)) x$my_rvar <- posterior::rvar_rng(rnorm, 3, mean = x$mu, sd = x$sigma) x #> mu sigma my_rvar #> 1 0.0 1.00 -0.01 ± 0.98 #> 2 0.5 0.50 0.49 ± 0.50 #> 3 1.0 0.25 1.00 ± 0.25 p_direction(x, rvar_col = \"my_rvar\") #> Probability of Direction #> #> mu | sigma | pd #> --------------------- #> 0.00 | 1.00 | 50.10% #> 0.50 | 0.50 | 83.90% #> 1.00 | 0.25 | 100%"},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":null,"dir":"Reference","previous_headings":"","what":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"Compute Bayesian equivalent p-value, related odds parameter (described posterior distribution) null hypothesis (h0) using Mills' (2014, 2017) Objective Bayesian Hypothesis Testing framework. corresponds density value null (e.g., 0) divided density Maximum Posteriori (MAP).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"","code":"p_map(x, ...) p_pointnull(x, ...) # S3 method for class 'numeric' p_map(x, null = 0, precision = 2^10, method = \"kernel\", ...) # S3 method for class 'get_predicted' p_map( x, null = 0, precision = 2^10, method = \"kernel\", use_iterations = FALSE, verbose = TRUE, ... ) # S3 method for class 'data.frame' p_map(x, null = 0, precision = 2^10, method = \"kernel\", rvar_col = NULL, ...) # S3 method for class 'stanreg' p_map( x, null = 0, precision = 2^10, method = \"kernel\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, ...
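# (Illustrative sketch of the definition above, not part of this usage listing:
# x <- rnorm(10000, mean = 1)
# d <- density(x)
# d$y[which.min(abs(d$x - 0))] / max(d$y)   # density at the null divided by density at the MAP
# p_map(x) computes this ratio with more refined density estimation.)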
) # S3 method for class 'brmsfit' p_map( x, null = 0, precision = 2^10, method = \"kernel\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"x Vector representing posterior distribution, data frame vectors. Can also Bayesian model. bayestestR supports wide range models (see, example, methods(\"hdi\")) documented 'Usage' section, methods classes mostly resemble arguments .numeric .data.framemethods. ... Currently used. null value considered \"null\" effect. Traditionally 0, also 1 case ratios change (, IRR, ...). precision Number points density data. See n parameter density. method Density estimation method. Can \"kernel\" (default), \"logspline\" \"KernSmooth\". use_iterations Logical, TRUE x get_predicted object, (returned insight::get_predicted()), function applied iterations instead predictions. applies models return iterations predicted values (e.g., brmsfit models). verbose Toggle warnings. rvar_col single character - name rvar column data frame processed. See example p_direction(). effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"Note method sensitive density estimation method (see section examples ).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":"strengths-and-limitations","dir":"Reference","previous_headings":"","what":"Strengths and Limitations","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"Strengths: Straightforward computation. Objective property posterior distribution. Limitations: Limited information favoring null hypothesis. Relates density approximation. Indirect relationship mathematical definition interpretation. suitable weak / diffused priors.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"Makowski D, Ben-Shachar MS, Chen SHA, Lüdecke D (2019) Indices Effect Existence Significance Bayesian Framework. Frontiers Psychology 2019;10:2767. doi:10.3389/fpsyg.2019.02767 Mills, J. . (2018). Objective Bayesian Precise Hypothesis Testing. 
University Cincinnati.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/p_map.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bayesian p-value based on the density at the Maximum A Posteriori (MAP) — p_map","text":"","code":"library(bayestestR) p_map(rnorm(1000, 0, 1)) #> MAP-based p-value #> #> Parameter | p (MAP) #> ------------------- #> Posterior | 0.998 p_map(rnorm(1000, 10, 1)) #> MAP-based p-value #> #> Parameter | p (MAP) #> ------------------- #> Posterior | < .001 # \\donttest{ model <- suppressWarnings( rstanarm::stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) p_map(model) #> MAP-based p-value #> #> Parameter | p (MAP) #> --------------------- #> (Intercept) | < .001 #> wt | < .001 #> gear | 0.963 p_map(suppressWarnings( emmeans::emtrends(model, ~1, \"wt\", data = mtcars) )) #> MAP-based p-value #> #> X1 | p (MAP) #> ----------------- #> overall | < .001 model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 1e-05 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.021 seconds (Warm-up) #> Chain 1: 0.017 seconds (Sampling) #> Chain 1: 0.038 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 3e-06 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 2: Adjust your expectations accordingly! #> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.019 seconds (Warm-up) #> Chain 2: 0.018 seconds (Sampling) #> Chain 2: 0.037 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 3e-06 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 3: Adjust your expectations accordingly! 
#> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.021 seconds (Warm-up) #> Chain 3: 0.02 seconds (Sampling) #> Chain 3: 0.041 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 3e-06 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 4: Adjust your expectations accordingly! #> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.019 seconds (Warm-up) #> Chain 4: 0.02 seconds (Sampling) #> Chain 4: 0.039 seconds (Total) #> Chain 4: p_map(model) #> MAP-based p-value #> #> Parameter | p (MAP) #> --------------------- #> (Intercept) | < .001 #> wt | 0.002 #> cyl | 0.005 bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1)) p_map(bf) #> MAP-based p-value #> #> Parameter | p (MAP) #> -------------------- #> Difference | < .001 # --------------------------------------- # Robustness to density estimation method set.seed(333) data <- data.frame() for (iteration in 1:250) { x <- rnorm(1000, 1, 1) result <- data.frame( Kernel = as.numeric(p_map(x, method = \"kernel\")), KernSmooth = as.numeric(p_map(x, method = \"KernSmooth\")), logspline = as.numeric(p_map(x, method = \"logspline\")) ) data <- rbind(data, result) } data$KernSmooth <- data$Kernel - data$KernSmooth data$logspline <- data$Kernel - data$logspline summary(data$KernSmooth) #> Min. 1st Qu. Median Mean 3rd Qu. Max. #> -0.039724 -0.007909 -0.003885 -0.005338 -0.001128 0.056325 summary(data$logspline) #> Min. 1st Qu. Median Mean 3rd Qu. Max. #> -0.092243 -0.009008 0.022214 0.026966 0.066303 0.166870 boxplot(data[c(\"KernSmooth\", \"logspline\")]) # }"},{"path":"https://easystats.github.io/bayestestR/reference/p_rope.html","id":null,"dir":"Reference","previous_headings":"","what":"Probability of being in the ROPE — p_rope","title":"Probability of being in the ROPE — p_rope","text":"Compute proportion whole posterior distribution lie within region practical equivalence (ROPE). equivalent running rope(..., ci = 1).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_rope.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Probability of being in the ROPE — p_rope","text":"","code":"p_rope(x, ...) 
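# (Illustrative aside, not part of the package's usage block: as the
#  description above notes, p_rope() is documented as equivalent to
#  rope(..., ci = 1), i.e. the proportion of the *whole* posterior
#  inside the ROPE. A minimal sketch, assuming a numeric posterior sample:)
# x <- rnorm(1000)
# p_rope(x, range = c(-0.1, 0.1))
# rope(x, range = c(-0.1, 0.1), ci = 1) # same proportion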
# S3 method for class 'numeric' p_rope(x, range = \"default\", verbose = TRUE, ...) # S3 method for class 'data.frame' p_rope(x, range = \"default\", rvar_col = NULL, verbose = TRUE, ...) # S3 method for class 'stanreg' p_rope( x, range = \"default\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' p_rope( x, range = \"default\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/p_rope.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Probability of being in the ROPE — p_rope","text":"x Vector representing a posterior distribution. Can also be a stanreg or brmsfit model. ... Currently not used. range The ROPE's lower and higher bounds. Should be \"default\" or, depending on the number of outcome variables, a vector or a list. For models with one response, range can be: a vector of length two (e.g., c(-0.1, 0.1)), a list of numeric vectors of the same length as the number of parameters (see 'Examples'), or a list of named numeric vectors, where names correspond to parameter names. In this case, all parameters that have no matching name in range will be set to \"default\". In multivariate models, range should be a list with another list (one for each response variable) of numeric vectors. Vector names should correspond to the name of the response variables. If \"default\" and input is a vector, the range is set to c(-0.1, 0.1). If \"default\" and input is a Bayesian model, rope_range() is used. See 'Examples'. verbose Toggle off warnings. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_rope.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Probability of being in the ROPE — p_rope","text":"","code":"library(bayestestR) p_rope(x = rnorm(1000, 0, 0.01), range = c(-0.1, 0.1)) #> Proportion of samples inside the ROPE [-0.10, 0.10]: > .999 p_rope(x = mtcars, range = c(-0.1, 0.1)) #> Proportion of samples inside the ROPE [-0.10, 0.10] #> #> Parameter | p (ROPE) #> -------------------- #> mpg | < .001 #> cyl | < .001 #> disp | < .001 #> hp | < .001 #> drat | < .001 #> wt | < .001 #> qsec | < .001 #> vs | 0.562 #> am | 0.594 #> gear | < .001 #> carb | < .001"},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":null,"dir":"Reference","previous_headings":"","what":"Practical Significance (ps) — p_significance","title":"Practical Significance (ps) — p_significance","text":"Compute the probability of Practical Significance (ps), which can be conceptualized as a unidirectional equivalence test. It returns the probability that an effect is above a given threshold corresponding to a negligible effect in the median's direction. 
Mathematically, it is defined as the proportion of the posterior distribution of the median sign above the threshold.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Practical Significance (ps) — p_significance","text":"","code":"p_significance(x, ...) # S3 method for class 'numeric' p_significance(x, threshold = \"default\", ...) # S3 method for class 'get_predicted' p_significance( x, threshold = \"default\", use_iterations = FALSE, verbose = TRUE, ... ) # S3 method for class 'data.frame' p_significance(x, threshold = \"default\", rvar_col = NULL, ...) # S3 method for class 'stanreg' p_significance( x, threshold = \"default\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' p_significance( x, threshold = \"default\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Practical Significance (ps) — p_significance","text":"x Vector representing a posterior distribution. Can also be a stanreg or brmsfit model. ... Currently not used. threshold The threshold value that separates significant from negligible effect, which can have the following possible values: \"default\", in which case the range is set to 0.1 if input is a vector, and is based on rope_range() if a (Bayesian) model is provided. a single numeric value (e.g., 0.1), which is used as range around zero (i.e. the threshold range is set to -0.1 and 0.1, i.e. reflects a symmetric interval) a numeric vector of length two (e.g., c(-0.2, 0.1)), useful for asymmetric intervals a list of numeric vectors, where each vector corresponds to a parameter a list of named numeric vectors, where names correspond to parameter names. In this case, all parameters that have no matching name in threshold will be set to \"default\". use_iterations Logical, if TRUE and x is a get_predicted object (as returned by insight::get_predicted()), the function is applied to the iterations instead of the predictions. This only applies to models that return iterations for predicted values (e.g., brmsfit models). verbose Toggle off warnings. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Practical Significance (ps) — p_significance","text":"Values between 0 and 1 corresponding to the probability of practical significance (ps).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Practical Significance (ps) — p_significance","text":"p_significance() returns the proportion of the probability distribution (x) that is outside a certain range (the negligible effect, or ROPE, see argument threshold). 
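As a quick numeric illustration of this definition (a minimal sketch, not from the package's documentation, assuming a posterior sample with a positive median and the default 0.1 threshold):

posterior <- rnorm(1000, mean = 1, sd = 1)
# proportion of samples beyond the threshold, in the median's direction
mean(posterior > 0.1)
# should closely agree with:
p_significance(posterior, threshold = 0.1)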
If there are values of the distribution both below and above the ROPE, p_significance() returns the higher probability of a value being outside the ROPE. Typically, this value should be larger than 0.5 to indicate practical significance. However, if the range of the negligible effect is rather large compared to the range of the probability distribution x, p_significance() will be less than 0.5, which indicates no clear practical significance.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Practical Significance (ps) — p_significance","text":"There is also a plot()-method implemented in the see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_significance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Practical Significance (ps) — p_significance","text":"","code":"library(bayestestR) # Simulate a posterior distribution of mean 1 and SD 1 # ---------------------------------------------------- posterior <- rnorm(1000, mean = 1, sd = 1) p_significance(posterior) #> Practical Significance (threshold: 0.10) #> #> Parameter | ps #> ---------------- #> Posterior | 0.80 # Simulate a dataframe of posterior distributions # ----------------------------------------------- df <- data.frame(replicate(4, rnorm(100))) p_significance(df) #> Practical Significance (threshold: 0.10) #> #> Parameter | ps #> ---------------- #> X1 | 0.51 #> X2 | 0.55 #> X3 | 0.47 #> X4 | 0.47 # \donttest{ # rstanarm models # ----------------------------------------------- model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars, chains = 2, refresh = 0 ) p_significance(model) #> Practical Significance (threshold: 0.60) #> #> Parameter | ps #> ------------------ #> (Intercept) | 1.00 #> wt | 1.00 #> cyl | 0.98 # multiple thresholds - asymmetric, symmetric, default p_significance(model, threshold = list(c(-10, 5), 0.2, \"default\")) #> Practical Significance #> #> Parameter | ps | ROPE #> ----------------------------------- #> (Intercept) | 1.00 | [-10.00, 5.00] #> wt | 1.00 | [ -0.20, 0.20] #> cyl | 0.98 | [ -0.60, 0.60] # named thresholds p_significance(model, threshold = list(wt = 0.2, `(Intercept)` = c(-10, 5))) #> Practical Significance #> #> Parameter | ps | ROPE #> ----------------------------------- #> (Intercept) | 1.00 | [-10.00, 5.00] #> wt | 1.00 | [ -0.20, 0.20] #> cyl | 0.98 | [ -0.60, 0.60] # }"},{"path":"https://easystats.github.io/bayestestR/reference/p_to_bf.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","title":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","text":"Convert p-values to (pseudo) Bayes Factors. This transformation has been suggested by Wagenmakers (2022), but it is based on a vast amount of assumptions. It might therefore not be too reliable. Use at your own risks. For more accurate approximate Bayes factors, use bic_to_bf() instead.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_to_bf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","text":"","code":"p_to_bf(x, ...) # S3 method for class 'numeric' p_to_bf(x, log = FALSE, n_obs = NULL, ...) # Default S3 method p_to_bf(x, log = FALSE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/p_to_bf.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","text":"x A (frequentist) model object, or a (numeric) vector of p-values. ... Other arguments to be passed (not used for now). log Whether to return log Bayes Factors. Note: the print() method always shows the BF - the \"log_BF\" column is only accessible from the returned data frame. 
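As a hedged sketch of the conversion rule itself (illustrative only, assuming the approximate \"3p * sqrt(n)\" rule from the Wagenmakers (2022) preprint cited below, where BF01 is the approximate evidence for the null and the pseudo-BF against the null is its reciprocal):

p <- 0.01
n <- 100
bf01 <- 3 * p * sqrt(n) # approximate evidence for the null
bf10 <- 1 / bf01        # pseudo-BF against the null, here ~3.33
bf10
# compare with the numeric method shown in the usage above:
p_to_bf(0.01, n_obs = 100)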
n_obs Number of observations. Either of length 1, or of the same length as p.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_to_bf.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","text":"A data frame with the p-values and the pseudo-Bayes factors (against the null).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/p_to_bf.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","text":"Wagenmakers, E.J. (2022). Approximate objective Bayes factors from p-values and sample size: The 3p(sqrt(n)) rule. Preprint available on ArXiv: https://psyarxiv.com/egydq","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/p_to_bf.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Convert p-values to (pseudo) Bayes Factors — p_to_bf","text":"","code":"data(iris) model <- lm(Petal.Length ~ Sepal.Length + Species, data = iris) p_to_bf(model) #> Pseudo-BF (against NULL) #> #> Parameter | p | BF #> ------------------------------------- #> (Intercept) | < .001 | 2.71e+09 #> Sepal.Length | < .001 | 2.43e+26 #> Speciesversicolor | < .001 | 2.82e+64 #> Speciesvirginica | < .001 | 5.53e+68 # Examples that demonstrate comparison between # BIC-approximated and pseudo BF # -------------------------------------------- m0 <- lm(mpg ~ 1, mtcars) m1 <- lm(mpg ~ am, mtcars) m2 <- lm(mpg ~ factor(cyl), mtcars) # In this first example, BIC-approximated BF and # pseudo-BF based on p-values are close... # BIC-approximated BF, m1 against null model bic_to_bf(BIC(m1), denominator = BIC(m0)) #> [1] 222.005 # pseudo-BF based on p-values - dropping intercept p_to_bf(m1)[-1, ] #> Pseudo-BF (against NULL) #> #> Parameter | p | BF #> --------------------------- #> am | < .001 | 206.74 # The second example shows that results from pseudo-BF are less accurate # and should be handled with caution! bic_to_bf(BIC(m2), denominator = BIC(m0)) #> [1] 45355714 p_to_bf(anova(m2), n_obs = nrow(mtcars)) #> Pseudo-BF (against NULL) #> #> Parameter | p | BF #> ------------------------------- #> factor(cyl) | < .001 | 1.18e+07"},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":null,"dir":"Reference","previous_headings":"","what":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"Enables conversion between the Probability of Direction (pd) and the p-value.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"","code":"pd_to_p(pd, ...) # S3 method for class 'numeric' pd_to_p(pd, direction = \"two-sided\", verbose = TRUE, ...) p_to_pd(p, direction = \"two-sided\", ...) convert_p_to_pd(p, direction = \"two-sided\", ...) convert_pd_to_p(pd, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"pd A Probability of Direction (pd) value (between 0 and 1). Can also be a data frame with a column named pd, p_direction, or PD, as returned by p_direction(). 
In that case, the column is converted to p-values and a new data frame is returned. ... Arguments passed to or from other methods. direction What type of p-value is requested or provided. Can be \"two-sided\" (default, two tailed) or \"one-sided\" (one tailed). verbose Toggle off warnings. p A p-value.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"A p-value, or a data frame with a p-value column.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"Conversion is done using the following equation (see Makowski et al., 2019): When direction = \"two-sided\": p = 2 * (1 - pd). When direction = \"one-sided\": p = 1 - pd. Note that this conversion is only valid when the lowest possible value of pd is 0.5 - i.e., when the posterior represents continuous parameter space (see p_direction()). If pd < 0.5 is detected, it is converted to a p of 1, and a warning is given.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"Makowski, D., Ben-Shachar, M. S., Chen, S. H. A., Lüdecke, D. (2019). Indices of Effect Existence and Significance in the Bayesian Framework. Frontiers in Psychology 2019;10:2767. doi:10.3389/fpsyg.2019.02767","code":""},{"path":"https://easystats.github.io/bayestestR/reference/pd_to_p.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Convert between Probability of Direction (pd) and p-value. — pd_to_p","text":"","code":"pd_to_p(pd = 0.95) #> [1] 0.1 pd_to_p(pd = 0.95, direction = \"one-sided\") #> [1] 0.05"},{"path":"https://easystats.github.io/bayestestR/reference/point_estimate.html","id":null,"dir":"Reference","previous_headings":"","what":"Point-estimates of posterior distributions — point_estimate","title":"Point-estimates of posterior distributions — point_estimate","text":"Compute various point-estimates, such as the mean, the median or the MAP, to describe posterior distributions.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/point_estimate.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Point-estimates of posterior distributions — point_estimate","text":"","code":"point_estimate(x, ...) # S3 method for class 'numeric' point_estimate(x, centrality = \"all\", dispersion = FALSE, threshold = 0.1, ...) # S3 method for class 'data.frame' point_estimate( x, centrality = \"all\", dispersion = FALSE, threshold = 0.1, rvar_col = NULL, ... ) # S3 method for class 'stanreg' point_estimate( x, centrality = \"all\", dispersion = FALSE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, ... ) # S3 method for class 'brmsfit' point_estimate( x, centrality = \"all\", dispersion = FALSE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, ... ) # S3 method for class 'BFBayesFactor' point_estimate(x, centrality = \"all\", dispersion = FALSE, ...) # S3 method for class 'get_predicted' point_estimate( x, centrality = \"all\", dispersion = FALSE, use_iterations = FALSE, verbose = TRUE, ... 
)"},{"path":"https://easystats.github.io/bayestestR/reference/point_estimate.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Point-estimates of posterior distributions — point_estimate","text":"x Vector representing posterior distribution, data frame vectors. Can also Bayesian model. bayestestR supports wide range models (see, example, methods(\"hdi\")) documented 'Usage' section, methods classes mostly resemble arguments .numeric .data.framemethods. ... Additional arguments passed methods. centrality point-estimates (centrality indices) compute. Character (vector) list one options: \"median\", \"mean\", \"MAP\" (see map_estimate()), \"trimmed\" (just mean(x, trim = threshold)), \"mode\" \"\". dispersion Logical, TRUE, computes indices dispersion related estimate(s) (SD MAD mean median, respectively). Dispersion available \"MAP\" \"mode\" centrality indices. threshold centrality = \"trimmed\" (.e. trimmed mean), indicates fraction (0 0.5) observations trimmed end vector mean computed. rvar_col single character - name rvar column data frame processed. See example p_direction(). effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output. use_iterations Logical, TRUE x get_predicted object, (returned insight::get_predicted()), function applied iterations instead predictions. applies models return iterations predicted values (e.g., brmsfit models). verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/point_estimate.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Point-estimates of posterior distributions — point_estimate","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/point_estimate.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Point-estimates of posterior distributions — point_estimate","text":"Makowski, D., Ben-Shachar, M. S., Chen, S. H. ., Lüdecke, D. (2019). Indices Effect Existence Significance Bayesian Framework. Frontiers Psychology 2019;10:2767. 
doi:10.3389/fpsyg.2019.02767","code":""},{"path":"https://easystats.github.io/bayestestR/reference/point_estimate.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Point-estimates of posterior distributions — point_estimate","text":"","code":"library(bayestestR) point_estimate(rnorm(1000)) #> Point Estimate #> #> Median | Mean | MAP #> ---------------------- #> 5.58e-03 | 0.01 | 0.03 point_estimate(rnorm(1000), centrality = \"all\", dispersion = TRUE) #> Point Estimate #> #> Median | MAD | Mean | SD | MAP #> --------------------------------------- #> -0.02 | 0.97 | -5.33e-03 | 1.00 | 0.05 point_estimate(rnorm(1000), centrality = c(\"median\", \"MAP\")) #> Point Estimate #> #> Median | MAP #> -------------- #> 0.03 | -0.07 df <- data.frame(replicate(4, rnorm(100))) point_estimate(df, centrality = \"all\", dispersion = TRUE) #> Point Estimate #> #> Parameter | Median | MAD | Mean | SD | MAP #> ------------------------------------------------ #> X1 | -0.02 | 1.21 | 0.02 | 1.12 | 0.85 #> X2 | -0.07 | 1.07 | -0.10 | 1.00 | -0.18 #> X3 | -0.29 | 1.02 | -0.33 | 0.89 | -0.10 #> X4 | -0.08 | 0.90 | -0.14 | 0.89 | 0.11 point_estimate(df, centrality = c(\"median\", \"MAP\")) #> Point Estimate #> #> Parameter | Median | MAP #> -------------------------- #> X1 | -0.02 | 0.85 #> X2 | -0.07 | -0.18 #> X3 | -0.29 | -0.10 #> X4 | -0.08 | 0.11 # \\donttest{ # rstanarm models # ----------------------------------------------- model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars) #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 2.1e-05 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.21 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.044 seconds (Warm-up) #> Chain 1: 0.041 seconds (Sampling) #> Chain 1: 0.085 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 9e-06 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds. #> Chain 2: Adjust your expectations accordingly! 
#> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.045 seconds (Warm-up) #> Chain 2: 0.04 seconds (Sampling) #> Chain 2: 0.085 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 9e-06 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.044 seconds (Warm-up) #> Chain 3: 0.049 seconds (Sampling) #> Chain 3: 0.093 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 9e-06 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds. #> Chain 4: Adjust your expectations accordingly! 
#> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.045 seconds (Warm-up) #> Chain 4: 0.041 seconds (Sampling) #> Chain 4: 0.086 seconds (Total) #> Chain 4: point_estimate(model, centrality = \"all\", dispersion = TRUE) #> Point Estimate #> #> Parameter | Median | MAD | Mean | SD | MAP #> -------------------------------------------------- #> (Intercept) | 39.65 | 1.86 | 39.69 | 1.84 | 39.57 #> wt | -3.19 | 0.81 | -3.18 | 0.82 | -3.19 #> cyl | -1.51 | 0.44 | -1.52 | 0.44 | -1.54 point_estimate(model, centrality = c(\"median\", \"MAP\")) #> Point Estimate #> #> Parameter | Median | MAP #> ---------------------------- #> (Intercept) | 39.65 | 39.57 #> wt | -3.19 | -3.19 #> cyl | -1.51 | -1.54 # emmeans estimates # ----------------------------------------------- point_estimate( emmeans::emtrends(model, ~1, \"wt\", data = mtcars), centrality = c(\"median\", \"MAP\") ) #> Point Estimate #> #> X1 | Median | MAP #> ------------------------ #> overall | -3.19 | -3.19 # brms models # ----------------------------------------------- model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1). #> Chain 1: #> Chain 1: Gradient evaluation took 8e-06 seconds #> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds. #> Chain 1: Adjust your expectations accordingly! #> Chain 1: #> Chain 1: #> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 1: #> Chain 1: Elapsed Time: 0.02 seconds (Warm-up) #> Chain 1: 0.019 seconds (Sampling) #> Chain 1: 0.039 seconds (Total) #> Chain 1: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2). #> Chain 2: #> Chain 2: Gradient evaluation took 3e-06 seconds #> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 2: Adjust your expectations accordingly! 
#> Chain 2: #> Chain 2: #> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 2: #> Chain 2: Elapsed Time: 0.02 seconds (Warm-up) #> Chain 2: 0.017 seconds (Sampling) #> Chain 2: 0.037 seconds (Total) #> Chain 2: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3). #> Chain 3: #> Chain 3: Gradient evaluation took 3e-06 seconds #> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 3: Adjust your expectations accordingly! #> Chain 3: #> Chain 3: #> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 3: #> Chain 3: Elapsed Time: 0.019 seconds (Warm-up) #> Chain 3: 0.019 seconds (Sampling) #> Chain 3: 0.038 seconds (Total) #> Chain 3: #> #> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4). #> Chain 4: #> Chain 4: Gradient evaluation took 3e-06 seconds #> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds. #> Chain 4: Adjust your expectations accordingly! 
#> Chain 4: #> Chain 4: #> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup) #> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup) #> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup) #> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup) #> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup) #> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup) #> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling) #> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling) #> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling) #> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling) #> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling) #> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling) #> Chain 4: #> Chain 4: Elapsed Time: 0.02 seconds (Warm-up) #> Chain 4: 0.019 seconds (Sampling) #> Chain 4: 0.039 seconds (Total) #> Chain 4: point_estimate(model, centrality = \"all\", dispersion = TRUE) #> Point Estimate #> #> Parameter | Median | MAD | Mean | SD | MAP #> -------------------------------------------------- #> (Intercept) | 39.67 | 1.71 | 39.67 | 1.78 | 39.86 #> wt | -3.22 | 0.78 | -3.20 | 0.80 | -3.32 #> cyl | -1.49 | 0.44 | -1.50 | 0.43 | -1.46 point_estimate(model, centrality = c(\"median\", \"MAP\")) #> Point Estimate #> #> Parameter | Median | MAP #> ---------------------------- #> (Intercept) | 39.67 | 39.86 #> wt | -3.22 | -3.32 #> cyl | -1.49 | -1.46 # BayesFactor objects # ----------------------------------------------- bf <- BayesFactor::ttestBF(x = rnorm(100, 1, 1)) point_estimate(bf, centrality = \"all\", dispersion = TRUE) #> Point Estimate #> #> Parameter | Median | MAD | Mean | SD | MAP #> ----------------------------------------------- #> Difference | 1.03 | 0.11 | 1.03 | 0.11 | 1.02 point_estimate(bf, centrality = c(\"median\", \"MAP\")) #> Point Estimate #> #> Parameter | Median | MAP #> -------------------------- #> Difference | 1.03 | 1.04 # }"},{"path":"https://easystats.github.io/bayestestR/reference/reexports.html","id":null,"dir":"Reference","previous_headings":"","what":"Objects exported from other packages — reexports","title":"Objects exported from other packages — reexports","text":"These objects are imported from other packages. Follow the links below to see their documentation. insight print_html, print_md","code":""},{"path":"https://easystats.github.io/bayestestR/reference/reshape_iterations.html","id":null,"dir":"Reference","previous_headings":"","what":"Reshape estimations with multiple iterations (draws) to long format — reshape_iterations","title":"Reshape estimations with multiple iterations (draws) to long format — reshape_iterations","text":"Reshape a wide data.frame of iterations (such as posterior draws or bootstrapped samples) as columns to long format (a toy sketch follows below). 
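As a toy illustration (a minimal sketch with a hand-made data frame, not from the package's own examples):

wide <- data.frame(x = c(\"a\", \"b\"), iter_1 = c(1, 2), iter_2 = c(3, 4))
reshape_iterations(wide)
# two identifier rows x two iterations -> four rows, with iter_index,
# iter_group and iter_value columns in the long-format output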
Instead of having all iterations as columns (e.g., iter_1, iter_2, ...), it will return 3 columns: \*_index (the previous index of the row), \*_group (the iteration number) and \*_value (the value of said iteration).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/reshape_iterations.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Reshape estimations with multiple iterations (draws) to long format — reshape_iterations","text":"","code":"reshape_iterations(x, prefix = c(\"draw\", \"iter\", \"iteration\", \"sim\")) reshape_draws(x, prefix = c(\"draw\", \"iter\", \"iteration\", \"sim\"))"},{"path":"https://easystats.github.io/bayestestR/reference/reshape_iterations.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Reshape estimations with multiple iterations (draws) to long format — reshape_iterations","text":"x A data.frame containing posterior draws obtained from estimate_response or estimate_link. prefix The prefix of the draws (for instance, \"iter_\" for columns named iter_1, iter_2, iter_3). If more than one is provided, will search for the first one that matches.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/reshape_iterations.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Reshape estimations with multiple iterations (draws) to long format — reshape_iterations","text":"Data frame of reshaped draws in long format.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/reshape_iterations.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Reshape estimations with multiple iterations (draws) to long format — reshape_iterations","text":"","code":"# \donttest{ if (require(\"rstanarm\")) { model <- stan_glm(mpg ~ am, data = mtcars, refresh = 0) draws <- insight::get_predicted(model) long_format <- reshape_iterations(draws) head(long_format) } #> Predicted iter_index iter_group iter_value #> 1 24.38890 1 1 24.05244 #> 2 24.38890 2 1 24.05244 #> 3 24.38890 3 1 24.05244 #> 4 17.14047 4 1 17.27725 #> 5 17.14047 5 1 17.27725 #> 6 17.14047 6 1 17.27725 # }"},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":null,"dir":"Reference","previous_headings":"","what":"Region of Practical Equivalence (ROPE) — rope","title":"Region of Practical Equivalence (ROPE) — rope","text":"Compute the proportion of the HDI (default to the 89% HDI) of a posterior distribution that lies within a region of practical equivalence.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Region of Practical Equivalence (ROPE) — rope","text":"","code":"rope(x, ...) # S3 method for class 'numeric' rope(x, range = \"default\", ci = 0.95, ci_method = \"ETI\", verbose = TRUE, ...) # S3 method for class 'data.frame' rope( x, range = \"default\", ci = 0.95, ci_method = \"ETI\", rvar_col = NULL, verbose = TRUE, ... ) # S3 method for class 'stanreg' rope( x, range = \"default\", ci = 0.95, ci_method = \"ETI\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' rope( x, range = \"default\", ci = 0.95, ci_method = \"ETI\", effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... 
)"},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Region of Practical Equivalence (ROPE) — rope","text":"x Vector representing posterior distribution. Can also stanreg brmsfit model. ... Currently used. range ROPE's lower higher bounds. \"default\" depending number outcome variables vector list. models one response, range can : vector length two (e.g., c(-0.1, 0.1)), list numeric vector length numbers parameters (see 'Examples'). list named numeric vectors, names correspond parameter names. case, parameters matching name range set \"default\". multivariate models, range list another list (one response variable) numeric vectors . Vector names correspond name response variables. \"default\" input vector, range set c(-0.1, 0.1). \"default\" input Bayesian model, rope_range() used. See 'Examples'. ci Credible Interval (CI) probability, corresponding proportion HDI, use percentage ROPE. ci_method type interval use quantify percentage ROPE. Can 'HDI' (default) 'ETI'. See ci(). verbose Toggle warnings. rvar_col single character - name rvar column data frame processed. See example p_direction(). effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Region of Practical Equivalence (ROPE) — rope","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"rope","dir":"Reference","previous_headings":"","what":"ROPE","title":"Region of Practical Equivalence (ROPE) — rope","text":"Statistically, probability posterior distribution different 0 make much sense (probability single value null hypothesis continuous distribution 0). Therefore, idea underlining ROPE let user define area around null value enclosing values equivalent null value practical purposes (Kruschke 2010, 2011, 2014). Kruschke (2018) suggests null value set, default, -0.1 0.1 range standardized parameter (negligible effect size according Cohen, 1988). generalized: instance, linear models, ROPE set 0 +/- .1 * sd(y). ROPE range can automatically computed models using rope_range() function. Kruschke (2010, 2011, 2014) suggests using proportion 95% (89%, considered stable) HDI falls within ROPE index \"null-hypothesis\" testing (understood Bayesian framework, see equivalence_test()).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"sensitivity-to-parameter-s-scale","dir":"Reference","previous_headings":"","what":"Sensitivity to parameter's scale","title":"Region of Practical Equivalence (ROPE) — rope","text":"important consider unit (.e., scale) predictors using index based ROPE, correct interpretation ROPE representing region practical equivalence zero dependent scale predictors. Indeed, percentage ROPE depend unit parameter. 
In other words, as the ROPE represents a fixed portion of the response's scale, its proximity with a coefficient depends on the scale of the coefficient itself.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"multicollinearity-non-independent-covariates","dir":"Reference","previous_headings":"","what":"Multicollinearity - Non-independent covariates","title":"Region of Practical Equivalence (ROPE) — rope","text":"When parameters show strong correlations, i.e. when covariates are not independent, the joint parameter distributions may shift towards or away from the ROPE. Collinearity invalidates ROPE and hypothesis testing based on univariate marginals, as the probabilities are conditional on independence. Most problematic are parameters that only have partial overlap with the ROPE region. In case of collinearity, the (joint) distributions of these parameters may either get an increased or decreased ROPE, which means that inferences based on rope() are inappropriate (Kruschke 2014, 340f). rope() performs a simple check for pairwise correlations between parameters, but as there can be collinearity between more than two variables, a first step to check the assumptions of this hypothesis testing is to look at different pair plots. An even more sophisticated check is the projection predictive variable selection (Piironen and Vehtari 2017).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"strengths-and-limitations","dir":"Reference","previous_headings":"","what":"Strengths and Limitations","title":"Region of Practical Equivalence (ROPE) — rope","text":"Strengths: Provides information related to the practical relevance of the effects. Limitations: A ROPE range needs to be arbitrarily defined. Sensitive to the scale (i.e., unit) of the predictors. Not sensitive to highly significant effects.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Region of Practical Equivalence (ROPE) — rope","text":"Cohen, J. (1988). Statistical power analysis for the behavioural sciences. Kruschke, J. K. (2010). What to believe: Bayesian methods for data analysis. Trends in cognitive sciences, 14(7), 293-300. doi:10.1016/j.tics.2010.05.001 . Kruschke, J. K. (2011). Bayesian assessment of null values via parameter estimation and model comparison. Perspectives on Psychological Science, 6(3), 299-312. doi:10.1177/1745691611406925 . Kruschke, J. K. (2014). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. Academic Press. doi:10.1177/2515245918771304 . Kruschke, J. K. (2018). Rejecting or accepting parameter values in Bayesian estimation. Advances in Methods and Practices in Psychological Science, 1(2), 270-280. doi:10.1177/2515245918771304 . Makowski D, Ben-Shachar MS, Chen SHA, Lüdecke D (2019) Indices of Effect Existence and Significance in the Bayesian Framework. Frontiers in Psychology 2019;10:2767. doi:10.3389/fpsyg.2019.02767 Piironen, J., & Vehtari, A. (2017). Comparison of Bayesian predictive methods for model selection. Statistics and Computing, 27(3), 711–735. 
doi:10.1007/s11222-016-9649-y","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Region of Practical Equivalence (ROPE) — rope","text":"","code":"library(bayestestR) rope(x = rnorm(1000, 0, 0.01), range = c(-0.1, 0.1)) #> # Proportion of samples inside the ROPE [-0.10, 0.10]: #> #> inside ROPE #> ----------- #> 100.00 % #> rope(x = rnorm(1000, 0, 1), range = c(-0.1, 0.1)) #> # Proportion of samples inside the ROPE [-0.10, 0.10]: #> #> inside ROPE #> ----------- #> 8.32 % #> rope(x = rnorm(1000, 1, 0.01), range = c(-0.1, 0.1)) #> # Proportion of samples inside the ROPE [-0.10, 0.10]: #> #> inside ROPE #> ----------- #> 0.00 % #> rope(x = rnorm(1000, 1, 1), ci = c(0.90, 0.95)) #> # Proportions of samples inside the ROPE [-0.10, 0.10]: #> #> ROPE for the 90% HDI: #> #> inside ROPE #> ----------- #> 4.89 % #> #> #> ROPE for the 95% HDI: #> #> inside ROPE #> ----------- #> 4.63 % #> #> # \\donttest{ library(rstanarm) model <- suppressWarnings( stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) rope(model) #> # Proportion of samples inside the ROPE [-0.60, 0.60]: #> #> Parameter | inside ROPE #> ------------------------- #> (Intercept) | 0.00 % #> wt | 0.00 % #> gear | 43.68 % #> rope(model, ci = c(0.90, 0.95)) #> # Proportions of samples inside the ROPE [-0.60, 0.60]: #> #> ROPE for the 90% HDI: #> #> Parameter | inside ROPE #> ------------------------- #> (Intercept) | 0.00 % #> wt | 0.00 % #> gear | 46.11 % #> #> #> ROPE for the 95% HDI: #> #> Parameter | inside ROPE #> ------------------------- #> (Intercept) | 0.00 % #> wt | 0.00 % #> gear | 43.68 % #> #> # multiple ROPE ranges rope(model, range = list(c(-10, 5), c(-0.2, 0.2), \"default\")) #> # Proportion of samples inside the ROPE [-10.00, 5.00]: #> #> Parameter | inside ROPE #> ------------------------- #> (Intercept) | 0.00 % #> wt | 0.00 % #> gear | 10.53 % #> # named ROPE ranges rope(model, range = list(gear = c(-3, 2), wt = c(-0.2, 0.2))) #> # Proportion of samples inside the ROPE [-0.10, 0.10]: #> #> Parameter | inside ROPE #> ------------------------- #> (Intercept) | 0.00 % #> wt | 0.00 % #> gear | 100.00 % #> library(emmeans) rope(emtrends(model, ~1, \"wt\"), ci = c(0.90, 0.95)) #> # Proportions of samples inside the ROPE [-0.10, 0.10]: #> #> ROPE for the 90% HDI: #> #> X1 | inside ROPE #> --------------------- #> overall | 0.00 % #> #> #> ROPE for the 95% HDI: #> #> X1 | inside ROPE #> --------------------- #> overall | 0.00 % #> #> library(brms) model <- brm(mpg ~ wt + cyl, data = mtcars, refresh = 0) #> Compiling Stan program... #> Start sampling rope(model) #> Possible multicollinearity between b_cyl and b_wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?rope'. #> # Proportion of samples inside the ROPE [-0.60, 0.60]: #> #> Parameter | inside ROPE #> ----------------------- #> Intercept | 0.00 % #> wt | 0.00 % #> cyl | 0.00 % #> rope(model, ci = c(0.90, 0.95)) #> Possible multicollinearity between b_cyl and b_wt (r = 0.78). This might #> lead to inappropriate results. See 'Details' in '?rope'. 
#> # Proportions of samples inside the ROPE [-0.60, 0.60]: #> #> ROPE for the 90% HDI: #> #> Parameter | inside ROPE #> ----------------------- #> Intercept | 0.00 % #> wt | 0.00 % #> cyl | 0.00 % #> #> #> ROPE for the 95% HDI: #> #> Parameter | inside ROPE #> ----------------------- #> Intercept | 0.00 % #> wt | 0.00 % #> cyl | 0.00 % #> #> library(brms) model <- brm( bf(mvbind(mpg, disp) ~ wt + cyl) + set_rescor(rescor = TRUE), data = mtcars, refresh = 0 ) #> Compiling Stan program... #> Start sampling rope(model) #> Possible multicollinearity between b_mpg_cyl and b_mpg_wt (r = 0.79), #> b_disp_cyl and b_disp_wt (r = 0.79). This might lead to inappropriate #> results. See 'Details' in '?rope'. #> # Proportion of samples inside the ROPE. #> ROPE with depends on outcome variable. #> #> Parameter | inside ROPE | ROPE width #> ---------------------------------------------- #> mpg_Intercept | 0.00 % | [-0.60, 0.60] #> mpg_wt | 0.00 % | [-0.60, 0.60] #> mpg_cyl | 0.00 % | [-0.60, 0.60] #> disp_Intercept | 0.00 % | [-12.39, 12.39] #> disp_wt | 0.00 % | [-12.39, 12.39] #> disp_cyl | 0.00 % | [-12.39, 12.39] #> rope(model, ci = c(0.90, 0.95)) #> Possible multicollinearity between b_mpg_cyl and b_mpg_wt (r = 0.79), #> b_disp_cyl and b_disp_wt (r = 0.79). This might lead to inappropriate #> results. See 'Details' in '?rope'. #> # Proportions of samples inside the ROPE. #> ROPE with depends on outcome variable. #> #> ROPE for the 90% HDI: #> #> Parameter | inside ROPE | ROPE width #> ---------------------------------------------- #> mpg_Intercept | 0.00 % | [-0.60, 0.60] #> mpg_wt | 0.00 % | [-0.60, 0.60] #> mpg_cyl | 0.00 % | [-0.60, 0.60] #> disp_Intercept | 0.00 % | [-12.39, 12.39] #> disp_wt | 0.00 % | [-12.39, 12.39] #> disp_cyl | 0.00 % | [-12.39, 12.39] #> #> #> ROPE for the 95% HDI: #> #> Parameter | inside ROPE | ROPE width #> ---------------------------------------------- #> mpg_Intercept | 0.00 % | [-0.60, 0.60] #> mpg_wt | 0.00 % | [-0.60, 0.60] #> mpg_cyl | 0.00 % | [-0.60, 0.60] #> disp_Intercept | 0.00 % | [-12.39, 12.39] #> disp_wt | 0.00 % | [-12.39, 12.39] #> disp_cyl | 0.00 % | [-12.39, 12.39] #> #> # different ROPE ranges for model parameters. For each response, a named # list (with the name of the response variable) is required as list-element # for the `range` argument. rope( model, range = list( mpg = list(b_mpg_wt = c(-1, 1), b_mpg_cyl = c(-2, 2)), disp = list(b_disp_wt = c(-5, 5), b_disp_cyl = c(-4, 4)) ) ) #> Possible multicollinearity between b_mpg_cyl and b_mpg_wt (r = 0.79), #> b_disp_cyl and b_disp_wt (r = 0.79). This might lead to inappropriate #> results. See 'Details' in '?rope'. #> # Proportion of samples inside the ROPE. #> ROPE with depends on outcome variable. 
#> #> Parameter | inside ROPE | ROPE width #> -------------------------------------------- #> mpg_Intercept | 0.00 % | [-0.10, 0.10] #> mpg_wt | 0.00 % | [-1.00, 1.00] #> mpg_cyl | 89.82 % | [-2.00, 2.00] #> disp_Intercept | 0.00 % | [-0.10, 0.10] #> disp_wt | 0.00 % | [-5.00, 5.00] #> disp_cyl | 0.00 % | [-4.00, 4.00] #> library(BayesFactor) bf <- ttestBF(x = rnorm(100, 1, 1)) rope(bf) #> # Proportion of samples inside the ROPE [-0.10, 0.10]: #> #> Parameter | inside ROPE #> ------------------------ #> Difference | 0.00 % #> rope(bf, ci = c(0.90, 0.95)) #> # Proportions of samples inside the ROPE [-0.10, 0.10]: #> #> ROPE for the 90% HDI: #> #> Parameter | inside ROPE #> ------------------------ #> Difference | 0.00 % #> #> #> ROPE for the 95% HDI: #> #> Parameter | inside ROPE #> ------------------------ #> Difference | 0.00 % #> #> # }"},{"path":"https://easystats.github.io/bayestestR/reference/rope_range.html","id":null,"dir":"Reference","previous_headings":"","what":"Find Default Equivalence (ROPE) Region Bounds — rope_range","title":"Find Default Equivalence (ROPE) Region Bounds — rope_range","text":"This function attempts at automatically finding suitable \"default\" values for a Region of Practical Equivalence (ROPE).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope_range.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Find Default Equivalence (ROPE) Region Bounds — rope_range","text":"","code":"rope_range(x, ...) # Default S3 method rope_range(x, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/rope_range.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Find Default Equivalence (ROPE) Region Bounds — rope_range","text":"x A stanreg, brmsfit or BFBayesFactor object, or a frequentist regression model. ... Currently not used. verbose Toggle off warnings.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope_range.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Find Default Equivalence (ROPE) Region Bounds — rope_range","text":"Kruschke (2018) suggests that the region of practical equivalence could be set, by default, to a range from -0.1 to 0.1 of a standardized parameter (negligible effect size according to Cohen, 1988). For linear models (lm), this can be generalised to -0.1 * SDy, 0.1 * SDy. For logistic models, the parameters expressed in log odds ratio can be converted to standardized difference through the formula π/√(3), resulting in a range of -0.18 to 0.18. For other models with binary outcome, it is strongly recommended to manually specify the rope argument. Currently, the same default is applied that is used for logistic models. For models from count data, the residual variance is used. This is a rather experimental threshold and is probably often similar to -0.1, 0.1, but should be used with care! For t-tests, the standard deviation of the response is used, similarly to linear models (see above). For correlations, -0.05, 0.05 is used, i.e., half the value of a negligible correlation as suggested by Cohen's (1988) rules of thumb. For all other models, -0.1, 0.1 is used to determine the ROPE limits, but it is strongly advised to specify it manually.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope_range.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Find Default Equivalence (ROPE) Region Bounds — rope_range","text":"Kruschke, J. K. (2018). Rejecting or accepting parameter values in Bayesian estimation. Advances in Methods and Practices in Psychological Science, 1(2), 270-280. 
doi:10.1177/2515245918771304.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/rope_range.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Find Default Equivalence (ROPE) Region Bounds — rope_range","text":"","code":"# \\donttest{ model <- suppressWarnings(rstanarm::stan_glm( mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0 )) rope_range(model) #> [1] -0.6026948 0.6026948 model <- suppressWarnings( rstanarm::stan_glm(vs ~ mpg, data = mtcars, family = \"binomial\", refresh = 0) ) rope_range(model) #> [1] -0.1813799 0.1813799 model <- brms::brm(mpg ~ wt + cyl, data = mtcars) #> Compiling Stan program... #> Start sampling #> [Stan sampling progress for 4 chains omitted] rope_range(model) #> [1] -0.6026948 0.6026948 model <- BayesFactor::ttestBF(mtcars[mtcars$vs == 1, \"mpg\"], mtcars[mtcars$vs == 0, \"mpg\"]) rope_range(model) #> [1] -0.6026948 0.6026948 model <- BayesFactor::lmBF(mpg ~ vs, data = mtcars) rope_range(model) #> [1] -0.6026948 0.6026948 # }"},
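The defaults described in the Details section above can be reproduced by hand. A minimal sketch in R, using only the rules stated there (the agreement with the outputs above serves as a check; this is an illustration, not the package's internal code):

# linear models: +/- 0.1 * SD of the response
c(-0.1, 0.1) * sd(mtcars$mpg)
#> [1] -0.6026948  0.6026948

# logistic models (log-odds scale): +/- 0.1 * pi / sqrt(3)
c(-0.1, 0.1) * pi / sqrt(3)
#> [1] -0.1813799  0.1813799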
{"path":"https://easystats.github.io/bayestestR/reference/sensitivity_to_prior.html","id":null,"dir":"Reference","previous_headings":"","what":"Sensitivity to Prior — sensitivity_to_prior","title":"Sensitivity to Prior — sensitivity_to_prior","text":"Computes sensitivity priors specification. represents proportion change indices model fitted antagonistic prior (prior shape located opposite effect).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sensitivity_to_prior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sensitivity to Prior — sensitivity_to_prior","text":"","code":"sensitivity_to_prior(model, ...) # S3 method for class 'stanreg' sensitivity_to_prior(model, index = \"Median\", magnitude = 10, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/sensitivity_to_prior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sensitivity to Prior — sensitivity_to_prior","text":"model Bayesian model (stanreg brmsfit). ... Arguments passed methods. index indices compute sensitivity. Can one multiple names columns returned describe_posterior. case important (e.g., write 'Median' instead 'median'). magnitude represent magnitude shift antagonistic prior (test sensitivity). instance, magnitude 10 (default) means mode will updated prior located 10 standard deviations original location.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/sensitivity_to_prior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sensitivity to Prior — sensitivity_to_prior","text":"","code":"# \\donttest{ library(bayestestR) # rstanarm models # ----------------------------------------------- model <- rstanarm::stan_glm(mpg ~ wt, data = mtcars) #> [Stan sampling progress for 4 chains omitted] sensitivity_to_prior(model) #> Parameter Sensitivity_Median #> 1 wt 0.04105146 model <- rstanarm::stan_glm(mpg ~ wt + cyl, data = mtcars) #> [Stan sampling progress for 4 chains omitted] sensitivity_to_prior(model, index = c(\"Median\", \"MAP\")) #> Parameter Sensitivity_Median Sensitivity_MAP #> 1 wt 0.03611038 0.03354017 #> 2 cyl 0.02489034 0.05805163 # }"},
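The magnitude argument can be pictured as simple arithmetic on the prior's location. A hypothetical sketch (the variable names are invented for illustration and do not mirror the internal implementation):

# an 'antagonistic' prior shifts the original prior location by
# magnitude * prior SD, in the direction opposite to the observed effect
prior_location <- 0   # assumed original prior mean
prior_scale <- 2.5    # assumed original prior SD
magnitude <- 10
prior_location + magnitude * prior_scale  # new, antagonistic location
#> [1] 25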
{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":null,"dir":"Reference","previous_headings":"","what":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"SEXIT new framework describe Bayesian effects, guiding indices use. Accordingly, sexit() function returns minimal (optimal) required information describe models' parameters Bayesian framework.
includes following indices: Centrality: median posterior distribution. probabilistic terms, 50% probability effect higher lower. See point_estimate(). Uncertainty: 95% Highest Density Interval (HDI). probabilistic terms, 95% probability effect within confidence interval. See ci(). Existence: probability direction allows quantify certainty effect positive negative. critical index show effect manipulation harmful (instance clinical studies) assess direction link. See p_direction(). Significance: existence demonstrated high certainty, can assess whether effect sufficient size considered significant (.e., negligible). useful index determine effects actually important worthy discussion given process. See p_significance(). Size: Finally, index gives idea strength effect. However, beware, studies shown big effect size can also suggestive low statistical power (see details section).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"","code":"sexit(x, significant = \"default\", large = \"default\", ci = 0.95, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"x vector representing posterior distribution, data frame posterior draws (samples parameter). Can also Bayesian model. significant, large threshold values use significant large probabilities. left 'default', selected sexit_thresholds(). See details section . ci Value vector probability (credible) interval - CI (0 1) estimated. Default .95 (95%). ... Currently used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"dataframe text attribute.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"rationale","dir":"Reference","previous_headings":"","what":"Rationale","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"assessment \"significance\" (broadest meaning) pervasive issue science, historical index, p-value, strongly criticized deemed played important role replicability crisis. reaction, scientists tuned Bayesian methods, offering alternative set tools answer questions. However, Bayesian framework offers wide variety possible indices related \"significance\", debate raging index best, one report. situation can lead mindless reporting possible indices (hopes reader satisfied), often without writer understanding interpreting . indeed complicated juggle many indices complicated definitions subtle differences. SEXIT aims offering practical framework Bayesian effects reporting, focus put intuitiveness, explicitness usefulness indices' interpretation. end, suggest system description parameters intuitive, easy learn apply, mathematically accurate useful taking decision. thresholds significance (.e., ROPE) one \"large\" effect explicitly defined, SEXIT framework make interpretation, .e., label effects, just sequentially gives 3 probabilities (direction, significance large, respectively) -top characteristics posterior (using median HDI centrality uncertainty description). 
Thus, provides lot information posterior distribution (mass different 'sections' posterior) clear meaningful way.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"threshold-selection","dir":"Reference","previous_headings":"","what":"Threshold selection","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"One important thing SEXIT framework relies two \"arbitrary\" thresholds (.e., absolute meaning). ones related effect size (inherently subjective notion), namely thresholds significant large effects. set, default, 0.05 0.3 standard deviation outcome variable (tiny large effect sizes correlations according Funder Ozer, 2019). However, defaults chosen lack better option, might adapted case. Thus, handled care, chosen thresholds always explicitly reported justified. linear models (lm), can generalised 0.05 * SDy 0.3 * SDy significant large effects, respectively. logistic models, parameters expressed log odds ratio can converted standardized difference formula π/√(3), resulting threshold 0.09 0.54. models binary outcome, strongly recommended manually specify rope argument. Currently, default applied logistic models. models count data, residual variance used. rather experimental threshold probably often similar 0.05 0.3, used care! t-tests, standard deviation response used, similarly linear models (see ). correlations,0.05 0.3 used. models, 0.05 0.3 used, strongly advised specify manually.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"three values existence, significance size provide useful description posterior distribution effects. possible scenarios include: probability existence low, probability large high: suggests posterior wide (covering large territories side 0). statistical power might low, warrant confident conclusion. probability existence significance high, probability large small: suggests effect , high confidence, large (posterior mostly contained significance large thresholds). 3 indices low: suggests effect null high confidence (posterior closely centred around 0).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"Makowski, D., Ben-Shachar, M. S., & Lüdecke, D. (2019). bayestestR: Describing Effects Uncertainty, Existence Significance within Bayesian Framework. Journal Open Source Software, 4(40), 1541. doi:10.21105/joss.01541 Makowski D, Ben-Shachar MS, Chen SHA, Lüdecke D (2019) Indices Effect Existence Significance Bayesian Framework. Frontiers Psychology 2019;10:2767. doi:10.3389/fpsyg.2019.02767","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Sequential Effect eXistence and sIgnificance Testing (SEXIT) — sexit","text":"","code":"# \\donttest{ library(bayestestR) s <- sexit(rnorm(1000, -1, 1)) s #> # Following the Sequential Effect eXistence and sIgnificance Testing (SEXIT) framework, we report the median of the posterior distribution and its 95% CI (Highest Density Interval), along the probability of direction (pd), the probability of significance and the probability of being large. 
The thresholds beyond which the effect is considered as significant (i.e., non-negligible) and large are |0.05| and |0.30|. #> #> The effect (Median = -0.96, 95% CI [-2.80, 1.08]) has a 84.70% probability of being negative (< 0), 83.50% of being significant (< -0.05), and 76.70% of being large (< -0.30) #> #> Median | 95% CI | Direction | Significance (> |0.05|) | Large (> |0.30|) #> ------------------------------------------------------------------------------- #> -0.96 | [-2.80, 1.08] | 0.85 | 0.83 | 0.77 #> print(s, summary = TRUE) #> # The thresholds beyond which the effect is considered as significant (i.e., non-negligible) and large are |0.05| and |0.30|. #> #> The effect (Median = -0.96, 95% CI [-2.80, 1.08]) has 84.70%, 83.50% and 76.70% probability of being negative (< 0), significant (< -0.05) and large (< -0.30) s <- sexit(iris) s #> # Following the Sequential Effect eXistence and sIgnificance Testing (SEXIT) framework, we report the median of the posterior distribution and its 95% CI (Highest Density Interval), along the probability of direction (pd), the probability of significance and the probability of being large. The thresholds beyond which the effect is considered as significant (i.e., non-negligible) and large are |0.05| and |0.30|. #> #> - Sepal.Length (Median = 5.80, 95% CI [4.47, 7.70]) has a 100.00% probability of being positive (> 0), 100.00% of being significant (> 0.05), and 100.00% of being large (> 0.30) #> - Sepal.Width (Median = 3.00, 95% CI [2.27, 3.93]) has a 100.00% probability of being positive (> 0), 100.00% of being significant (> 0.05), and 100.00% of being large (> 0.30) #> - Petal.Length (Median = 4.35, 95% CI [1.27, 6.46]) has a 100.00% probability of being positive (> 0), 100.00% of being significant (> 0.05), and 100.00% of being large (> 0.30) #> - Petal.Width (Median = 1.30, 95% CI [0.10, 2.40]) has a 100.00% probability of being positive (> 0), 100.00% of being significant (> 0.05), and 72.67% of being large (> 0.30) #> #> Parameter | Median | 95% CI | Direction | Significance (> |0.05|) | Large (> |0.30|) #> --------------------------------------------------------------------------------------------- #> Sepal.Length | 5.80 | [4.47, 7.70] | 1 | 1 | 1.00 #> Sepal.Width | 3.00 | [2.27, 3.93] | 1 | 1 | 1.00 #> Petal.Length | 4.35 | [1.27, 6.46] | 1 | 1 | 1.00 #> Petal.Width | 1.30 | [0.10, 2.40] | 1 | 1 | 0.73 #> print(s, summary = TRUE) #> # The thresholds beyond which the effect is considered as significant (i.e., non-negligible) and large are |0.05| and |0.30|. 
#> #> - Sepal.Length (Median = 5.80, 95% CI [4.47, 7.70]) has 100.00%, 100.00% and 100.00% probability of being positive (> 0), significant (> 0.05) and large (> 0.30) #> - Sepal.Width (Median = 3.00, 95% CI [2.27, 3.93]) has 100.00%, 100.00% and 100.00% probability of being positive (> 0), significant (> 0.05) and large (> 0.30) #> - Petal.Length (Median = 4.35, 95% CI [1.27, 6.46]) has 100.00%, 100.00% and 100.00% probability of being positive (> 0), significant (> 0.05) and large (> 0.30) #> - Petal.Width (Median = 1.30, 95% CI [0.10, 2.40]) has 100.00%, 100.00% and 72.67% probability of being positive (> 0), significant (> 0.05) and large (> 0.30) if (require(\"rstanarm\")) { model <- suppressWarnings(rstanarm::stan_glm(mpg ~ wt * cyl, data = mtcars, iter = 400, refresh = 0 )) s <- sexit(model) s print(s, summary = TRUE) } #> # The thresholds beyond which the effect is considered as significant (i.e., non-negligible) and large are |0.30| and |1.81| (corresponding respectively to 0.05 and 0.30 of the outcome's SD). #> #> - (Intercept) (Median = 52.52, 95% CI [40.70, 64.08]) has 100.00%, 100.00% and 100.00% probability of being positive (> 0), significant (> 0.30) and large (> 1.81) #> - wt (Median = -8.04, 95% CI [-12.59, -3.18]) has 100.00%, 99.88% and 99.50% probability of being negative (< 0), significant (< -0.30) and large (< -1.81) #> - cyl (Median = -3.49, 95% CI [-5.59, -1.60]) has 100.00%, 99.88% and 96.00% probability of being negative (< 0), significant (< -0.30) and large (< -1.81) #> - wt:cyl (Median = 0.71, 95% CI [0.08, 1.32]) has 98.88%, 89.12% and 0.00% probability of being positive (> 0), significant (> 0.30) and large (> 1.81) # }"},{"path":"https://easystats.github.io/bayestestR/reference/sexit_thresholds.html","id":null,"dir":"Reference","previous_headings":"","what":"Find Effect Size Thresholds — sexit_thresholds","title":"Find Effect Size Thresholds — sexit_thresholds","text":"function attempts automatically finding suitable default values \"significant\" (.e., non-negligible) \"large\" effect. used care, chosen threshold always explicitly reported justified. See detail section sexit() information.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit_thresholds.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Find Effect Size Thresholds — sexit_thresholds","text":"","code":"sexit_thresholds(x, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/sexit_thresholds.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Find Effect Size Thresholds — sexit_thresholds","text":"x Vector representing posterior distribution. Can also stanreg brmsfit model. ... Currently used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit_thresholds.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Find Effect Size Thresholds — sexit_thresholds","text":"Kruschke, J. K. (2018). Rejecting accepting parameter values Bayesian estimation. Advances Methods Practices Psychological Science, 1(2), 270-280. 
doi:10.1177/2515245918771304.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/sexit_thresholds.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Find Effect Size Thresholds — sexit_thresholds","text":"","code":"sexit_thresholds(rnorm(1000)) #> [1] 0.05 0.30 # \\donttest{ if (require(\"rstanarm\")) { model <- suppressWarnings(stan_glm( mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0 )) sexit_thresholds(model) model <- suppressWarnings( stan_glm(vs ~ mpg, data = mtcars, family = \"binomial\", refresh = 0) ) sexit_thresholds(model) } #> [1] 0.09068997 0.54413981 if (require(\"brms\")) { model <- brm(mpg ~ wt + cyl, data = mtcars) sexit_thresholds(model) } #> Compiling Stan program... #> Start sampling #> [Stan sampling progress for 4 chains omitted] #> [1] 0.3013474 1.8080844 if (require(\"BayesFactor\")) { bf <- ttestBF(x = rnorm(100, 1, 1)) sexit_thresholds(bf) } #> [1] 0.0498231 0.2989386 # }"},
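As with rope_range(), the linear and logistic defaults can be reproduced by hand; a small sketch based on the rules given in the sexit() details (an illustration, not the internal code):

# 'significant' and 'large' thresholds for a linear model of mpg
c(0.05, 0.3) * sd(mtcars$mpg)
#> [1] 0.3013474 1.8080844

# logistic models, on the log-odds scale
c(0.05, 0.3) * pi / sqrt(3)
#> [1] 0.09068997 0.54413981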
{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute Support Intervals — si","title":"Compute Support Intervals — si","text":"support interval contains values parameter predict observed data better average, degree k; values parameter associated updating factor greater equal k. perspective Savage-Dickey Bayes factor, testing point null hypothesis value within support interval yield Bayes factor smaller 1/k.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute Support Intervals — si","text":"","code":"si(posterior, ...) # S3 method for class 'numeric' si(posterior, prior = NULL, BF = 1, verbose = TRUE, ...) # S3 method for class 'stanreg' si( posterior, prior = NULL, BF = 1, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"conditional\", \"all\", \"smooth_terms\", \"sigma\", \"auxiliary\", \"distributional\"), parameters = NULL, ... ) # S3 method for class 'brmsfit' si( posterior, prior = NULL, BF = 1, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"conditional\", \"all\", \"smooth_terms\", \"sigma\", \"auxiliary\", \"distributional\"), parameters = NULL, ...
) # S3 method for class 'blavaan' si( posterior, prior = NULL, BF = 1, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"conditional\", \"all\", \"smooth_terms\", \"sigma\", \"auxiliary\", \"distributional\"), parameters = NULL, ... ) # S3 method for class 'emmGrid' si(posterior, prior = NULL, BF = 1, verbose = TRUE, ...) # S3 method for class 'get_predicted' si( posterior, prior = NULL, BF = 1, use_iterations = FALSE, verbose = TRUE, ... ) # S3 method for class 'data.frame' si(posterior, prior = NULL, BF = 1, rvar_col = NULL, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute Support Intervals — si","text":"posterior numerical vector, stanreg / brmsfit object, emmGrid data frame - representing posterior distribution(s) (see 'Details'). ... Arguments passed methods. (Can used pass arguments internal logspline::logspline().) prior object representing prior distribution (see 'Details'). BF amount support required included support interval. verbose Toggle warnings. effects results fixed effects, random effects returned? applies mixed models. May abbreviated. component results parameters, parameters conditional model zero-inflated part model returned? May abbreviated. applies brms-models. parameters Regular expression pattern describes parameters returned. Meta-parameters (like lp__ prior_) filtered default, parameters typically appear summary() returned. Use parameters select specific parameters output. use_iterations Logical, TRUE x get_predicted object, (returned insight::get_predicted()), function applied iterations instead predictions. applies models return iterations predicted values (e.g., brmsfit models). rvar_col single character - name rvar column data frame processed. See example p_direction().","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Compute Support Intervals — si","text":"data frame containing lower upper bounds SI. Note level requested support higher observed data, interval [NA,NA].","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Compute Support Intervals — si","text":"info, particular specifying correct priors factors 2 levels, see Bayes factors vignette. method used compute support intervals based prior posterior distributions. computation support intervals, model priors must proper priors (least flat, preferable informative - note default, brms::brm() uses flat priors fixed-effects; see example ).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Compute Support Intervals — si","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"choosing-a-value-of-bf","dir":"Reference","previous_headings":"","what":"Choosing a value of BF","title":"Compute Support Intervals — si","text":"choice BF (level support) depends want interval represent: BF = 1 contains values whose credibility decreased observing data. BF > 1 contains values received impressive support data. BF < 1 contains values whose credibility impressively decreased observing data. 
Testing values outside interval produce Bayes factor larger 1/BF support alternative. E.g., SI (BF = 1/3) excludes 0, Bayes factor point-null larger 3.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"setting-the-correct-prior","dir":"Reference","previous_headings":"","what":"Setting the correct prior","title":"Compute Support Intervals — si","text":"computation Bayes factors, model priors must proper priors (least flat, preferable informative); priors alternative get wider, likelihood null value(s) increases, extreme completely flat priors null infinitely favorable alternative (called Jeffreys-Lindley-Bartlett paradox). Thus, ever try (want) compute Bayes factor informed prior. (Note default, brms::brm() uses flat priors fixed-effects; See example .) important provide correct prior meaningful results, match posterior-type input: numeric vector - prior also numeric vector, representing prior-estimate. data frame - prior also data frame, representing prior-estimates, matching column order. rvar_col specified, prior name rvar column represents prior-estimates. Supported Bayesian model (stanreg, brmsfit, etc.) prior model equivalent model MCMC samples priors . See unupdate(). prior set NULL, unupdate() called internally (supported brmsfit_multiple model). Output {marginaleffects} function - prior also equivalent output {marginaleffects} function based prior-model (See unupdate()). Output {emmeans} function prior also equivalent output {emmeans} function based prior-model (See unupdate()). prior can also original (posterior) model, case function try \"unupdate\" estimates (supported estimates undergone transformations – \"log\", \"response\", etc. – regriding).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Compute Support Intervals — si","text":"Wagenmakers, E., Gronau, Q. F., Dablander, F., & Etz, . (2018, November 22). Support Interval. doi:10.31234/osf.io/zwnxb","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/si.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Compute Support Intervals — si","text":"","code":"library(bayestestR) prior <- distribution_normal(1000, mean = 0, sd = 1) posterior <- distribution_normal(1000, mean = 0.5, sd = 0.3) si(posterior, prior, verbose = FALSE) #> BF = 1 SI: [0.04, 1.04] # \\donttest{ # rstanarm models # --------------- library(rstanarm) contrasts(sleep$group) <- contr.equalprior_pairs # see vignette stan_model <- stan_lmer(extra ~ group + (1 | ID), data = sleep) #> [Stan sampling progress for 4 chains omitted] si(stan_model, verbose = FALSE) #> Support Interval #> #> Parameter | BF = 1 SI | Effects | Component #> -------------------------------------------------- #> (Intercept) | [0.41, 2.72] | fixed | conditional #> group1 | [0.44, 2.75] | fixed | conditional si(stan_model, BF = 3, verbose = FALSE) #> Support Interval #> #> Parameter | BF = 3 SI | Effects | Component #> -------------------------------------------------- #> (Intercept) | [0.83, 2.33] | fixed | conditional #> group1 | [0.66, 2.44] | fixed | conditional # emmGrid objects # --------------- library(emmeans) group_diff <- pairs(emmeans(stan_model, ~group)) si(group_diff, prior = stan_model, verbose = FALSE) #> Support Interval #> #> contrast | BF = 1 SI #> -------------------------------- #> group1 - group2 | [-2.76, -0.34] # brms models # ----------- library(brms) contrasts(sleep$group) <- contr.equalprior_pairs # see vignette my_custom_priors <- set_prior(\"student_t(3, 0, 1)\", class = \"b\") + set_prior(\"student_t(3, 0, 1)\", class = \"sd\", group = \"ID\") brms_model <- suppressWarnings(brm(extra ~ group + (1 | ID), data = sleep, prior = my_custom_priors, refresh = 0 )) #> Compiling Stan program... #> Start sampling si(brms_model, verbose = FALSE) #> Support Interval #> #> Parameter | BF = 1 SI | Effects | Component #> -------------------------------------------------- #> b_Intercept | [0.65, 2.47] | fixed | conditional #> b_group1 | [0.70, 2.43] | fixed | conditional # }"},
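The BF = 1 support interval is simply the region where the posterior density is at least as high as the prior density (updating factor >= 1). A rough sketch of that logic for the normal example above, using closed-form densities rather than the logspline estimate that si() relies on:

x <- seq(-2, 2, by = 0.01)
updating_factor <- dnorm(x, 0.5, 0.3) / dnorm(x, 0, 1)  # posterior / prior
range(x[updating_factor >= 1])
#> [1] 0.04 1.06
# close to the sample-based si() result of [0.04, 1.04] above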
{"path":"https://easystats.github.io/bayestestR/reference/simulate_correlation.html","id":null,"dir":"Reference","previous_headings":"","what":"Data Simulation — simulate_correlation","title":"Data Simulation — simulate_correlation","text":"Simulate data specific characteristics.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_correlation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Data Simulation — simulate_correlation","text":"","code":"simulate_correlation(n = 100, r = 0.5, mean = 0, sd = 1, names = NULL, ...) simulate_ttest(n = 100, d = 0.5, names = NULL, ...) simulate_difference(n = 100, d = 0.5, names = NULL, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/simulate_correlation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Data Simulation — simulate_correlation","text":"n number observations generated. r value vector corresponding desired correlation coefficients. mean value vector corresponding mean variables. sd value vector corresponding SD variables. names character vector desired variable names. ... Arguments passed methods.
d value vector corresponding desired difference groups.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_correlation.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Data Simulation — simulate_correlation","text":"","code":"# Correlation -------------------------------- data <- simulate_correlation(r = 0.5) plot(data$V1, data$V2) cor.test(data$V1, data$V2) #> #> \tPearson's product-moment correlation #> #> data: data$V1 and data$V2 #> t = 5.7155, df = 98, p-value = 1.18e-07 #> alternative hypothesis: true correlation is not equal to 0 #> 95 percent confidence interval: #> 0.3366433 0.6341398 #> sample estimates: #> cor #> 0.5 #> summary(lm(V2 ~ V1, data = data)) #> #> Call: #> lm(formula = V2 ~ V1, data = data) #> #> Residuals: #> Min 1Q Median 3Q Max #> -1.8566 -0.5694 -0.1116 0.5070 2.4567 #> #> Coefficients: #> Estimate Std. Error t value Pr(>|t|) #> (Intercept) -1.548e-17 8.704e-02 0.000 1 #> V1 5.000e-01 8.748e-02 5.715 1.18e-07 *** #> --- #> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 #> #> Residual standard error: 0.8704 on 98 degrees of freedom #> Multiple R-squared: 0.25,\tAdjusted R-squared: 0.2423 #> F-statistic: 32.67 on 1 and 98 DF, p-value: 1.18e-07 #> # Specify mean and SD data <- simulate_correlation(r = 0.5, n = 50, mean = c(0, 1), sd = c(0.7, 1.7)) cor.test(data$V1, data$V2) #> #> \tPearson's product-moment correlation #> #> data: data$V1 and data$V2 #> t = 4, df = 48, p-value = 0.000218 #> alternative hypothesis: true correlation is not equal to 0 #> 95 percent confidence interval: #> 0.2574879 0.6832563 #> sample estimates: #> cor #> 0.5 #> round(c(mean(data$V1), sd(data$V1)), 1) #> [1] 0.0 0.7 round(c(mean(data$V2), sd(data$V2)), 1) #> [1] 1.0 1.7 summary(lm(V2 ~ V1, data = data)) #> #> Call: #> lm(formula = V2 ~ V1, data = data) #> #> Residuals: #> Min 1Q Median 3Q Max #> -3.2354 -0.9753 -0.0633 1.2648 3.3477 #> #> Coefficients: #> Estimate Std. Error t value Pr(>|t|) #> (Intercept) 1.0000 0.2104 4.754 1.86e-05 *** #> V1 1.2143 0.3036 4.000 0.000218 *** #> --- #> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 #> #> Residual standard error: 1.487 on 48 degrees of freedom #> Multiple R-squared: 0.25,\tAdjusted R-squared: 0.2344 #> F-statistic: 16 on 1 and 48 DF, p-value: 0.000218 #> # Generate multiple variables cor_matrix <- matrix( c( 1.0, 0.2, 0.4, 0.2, 1.0, 0.3, 0.4, 0.3, 1.0 ), nrow = 3 ) data <- simulate_correlation(r = cor_matrix, names = c(\"y\", \"x1\", \"x2\")) cor(data) #> y x1 x2 #> y 1.0 0.2 0.4 #> x1 0.2 1.0 0.3 #> x2 0.4 0.3 1.0 summary(lm(y ~ x1, data = data)) #> #> Call: #> lm(formula = y ~ x1, data = data) #> #> Residuals: #> Min 1Q Median 3Q Max #> -2.12568 -0.76836 -0.08657 0.61647 2.76996 #> #> Coefficients: #> Estimate Std. Error t value Pr(>|t|) #> (Intercept) -2.077e-17 9.848e-02 0.000 1.000 #> x1 2.000e-01 9.897e-02 2.021 0.046 * #> --- #> Signif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 #> #> Residual standard error: 0.9848 on 98 degrees of freedom #> Multiple R-squared: 0.04,\tAdjusted R-squared: 0.0302 #> F-statistic: 4.083 on 1 and 98 DF, p-value: 0.04604 #> # t-test -------------------------------- data <- simulate_ttest(n = 30, d = 0.3) plot(data$V1, data$V0) round(c(mean(data$V1), sd(data$V1)), 1) #> [1] 0 1 diff(t.test(data$V1 ~ data$V0)$estimate) #> mean in group 1 #> 0.09185722 summary(lm(V1 ~ V0, data = data)) #> #> Call: #> lm(formula = V1 ~ V0, data = data) #> #> Residuals: #> Min 1Q Median 3Q Max #> -2.0821 -0.6721 0.0000 0.6032 2.0821 #> #> Coefficients: #> Estimate Std. Error t value Pr(>|t|) #> (Intercept) -0.04593 0.26139 -0.176 0.862 #> V01 0.09186 0.36966 0.248 0.806 #> #> Residual standard error: 1.012 on 28 degrees of freedom #> Multiple R-squared: 0.0022,\tAdjusted R-squared: -0.03344 #> F-statistic: 0.06175 on 1 and 28 DF, p-value: 0.8056 #> summary(glm(V0 ~ V1, data = data, family = \"binomial\")) #> #> Call: #> glm(formula = V0 ~ V1, family = \"binomial\", data = data) #> #> Coefficients: #> Estimate Std. Error z value Pr(>|z|) #> (Intercept) -2.983e-17 3.656e-01 0.000 1.000 #> V1 9.601e-02 3.740e-01 0.257 0.797 #> #> (Dispersion parameter for binomial family taken to be 1) #> #> Null deviance: 41.589 on 29 degrees of freedom #> Residual deviance: 41.523 on 28 degrees of freedom #> AIC: 45.523 #> #> Number of Fisher Scoring iterations: 3 #> # Difference -------------------------------- data <- simulate_difference(n = 30, d = 0.3) plot(data$V1, data$V0) round(c(mean(data$V1), sd(data$V1)), 1) #> [1] 0 1 diff(t.test(data$V1 ~ data$V0)$estimate) #> mean in group 1 #> 0.3 summary(lm(V1 ~ V0, data = data)) #> #> Call: #> lm(formula = V1 ~ V0, data = data) #> #> Residuals: #> Min 1Q Median 3Q Max #> -1.834 -0.677 0.000 0.677 1.834 #> #> Coefficients: #> Estimate Std. Error t value Pr(>|t|) #> (Intercept) -0.1500 0.2562 -0.586 0.563 #> V01 0.3000 0.3623 0.828 0.415 #> #> Residual standard error: 0.9922 on 28 degrees of freedom #> Multiple R-squared: 0.0239,\tAdjusted R-squared: -0.01096 #> F-statistic: 0.6857 on 1 and 28 DF, p-value: 0.4146 #> summary(glm(V0 ~ V1, data = data, family = \"binomial\")) #> #> Call: #> glm(formula = V0 ~ V1, family = \"binomial\", data = data) #> #> Coefficients: #> Estimate Std. 
Error z value Pr(>|z|) #> (Intercept) -4.569e-17 3.696e-01 0.000 1.000 #> V1 3.251e-01 3.877e-01 0.839 0.402 #> #> (Dispersion parameter for binomial family taken to be 1) #> #> Null deviance: 41.589 on 29 degrees of freedom #> Residual deviance: 40.865 on 28 degrees of freedom #> AIC: 44.865 #> #> Number of Fisher Scoring iterations: 4 #>"},{"path":"https://easystats.github.io/bayestestR/reference/simulate_prior.html","id":null,"dir":"Reference","previous_headings":"","what":"Returns Priors of a Model as Empirical Distributions — simulate_prior","title":"Returns Priors of a Model as Empirical Distributions — simulate_prior","text":"Transforms priors information actual distributions.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_prior.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Returns Priors of a Model as Empirical Distributions — simulate_prior","text":"","code":"simulate_prior(model, n = 1000, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/simulate_prior.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Returns Priors of a Model as Empirical Distributions — simulate_prior","text":"model stanreg, stanfit, brmsfit, blavaan, MCMCglmm object. n Size simulated prior distributions. ... Currently used.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/simulate_prior.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Returns Priors of a Model as Empirical Distributions — simulate_prior","text":"","code":"# \\donttest{ library(bayestestR) if (require(\"rstanarm\")) { model <- suppressWarnings( stan_glm(mpg ~ wt + am, data = mtcars, chains = 1, refresh = 0) ) simulate_prior(model) } #> (Intercept) wt am #> 1 -29.48895919 -50.67117077 -99.35969268 #> 2 -24.62538077 -45.70051165 -89.61286514 #> 3 -22.20399176 -43.22581126 -84.76029381 #> 4 -20.54372566 -41.52899133 -81.43304669 #> 5 -19.26616155 -40.22329926 -78.87275136 #> [remaining rows of the 1000-row simulated prior data frame omitted]
-17.55446427 #> 282 11.37578995 -8.90670832 -17.46491719 #> 283 11.42039663 -8.86111955 -17.37552343 #> 284 11.46492744 -8.81560831 -17.28628171 #> 285 11.50938303 -8.77017394 -17.19719072 #> 286 11.55376405 -8.72481579 -17.10824918 #> 287 11.59807112 -8.67953321 -17.01945583 #> 288 11.64230489 -8.63432557 -16.93080940 #> 289 11.68646595 -8.58919221 -16.84230866 #> 290 11.73055495 -8.54413251 -16.75395236 #> 291 11.77457247 -8.49914586 -16.66573927 #> 292 11.81851914 -8.45423162 -16.57766819 #> 293 11.86239555 -8.40938919 -16.48973791 #> 294 11.90620230 -8.36461796 -16.40194724 #> 295 11.94993998 -8.31991732 -16.31429500 #> 296 11.99360916 -8.27528667 -16.22678001 #> 297 12.03721044 -8.23072544 -16.13940111 #> 298 12.08074438 -8.18623301 -16.05215715 #> 299 12.12421156 -8.14180882 -15.96504699 #> 300 12.16761254 -8.09745229 -15.87806950 #> 301 12.21094788 -8.05316284 -15.79122354 #> 302 12.25421814 -8.00893991 -15.70450802 #> 303 12.29742386 -7.96478293 -15.61792182 #> 304 12.34056561 -7.92069134 -15.53146385 #> 305 12.38364390 -7.87666459 -15.44513303 #> 306 12.42665930 -7.83270214 -15.35892827 #> 307 12.46961232 -7.78880343 -15.27284851 #> 308 12.51250349 -7.74496792 -15.18689268 #> 309 12.55533334 -7.70119509 -15.10105975 #> 310 12.59810240 -7.65748440 -15.01534866 #> 311 12.64081117 -7.61383531 -14.92975839 #> 312 12.68346017 -7.57024732 -14.84428789 #> 313 12.72604990 -7.52671989 -14.75893617 #> 314 12.76858088 -7.48325252 -14.67370221 #> 315 12.81105359 -7.43984469 -14.58858500 #> 316 12.85346854 -7.39649589 -14.50358355 #> 317 12.89582622 -7.35320563 -14.41869688 #> 318 12.93812711 -7.30997341 -14.33392401 #> 319 12.98037170 -7.26679872 -14.24926396 #> 320 13.02256048 -7.22368108 -14.16471578 #> 321 13.06469391 -7.18062000 -14.08027851 #> 322 13.10677248 -7.13761500 -13.99595119 #> 323 13.14879665 -7.09466559 -13.91173289 #> 324 13.19076688 -7.05177130 -13.82762266 #> 325 13.23268365 -7.00893166 -13.74361960 #> 326 13.27454741 -6.96614619 -13.65972276 #> 327 13.31635862 -6.92341443 -13.57593124 #> 328 13.35811773 -6.88073592 -13.49224413 #> 329 13.39982519 -6.83811019 -13.40866052 #> 330 13.44148144 -6.79553680 -13.32517953 #> 331 13.48308694 -6.75301528 -13.24180027 #> 332 13.52464211 -6.71054519 -13.15852185 #> 333 13.56614741 -6.66812608 -13.07534340 #> 334 13.60760325 -6.62575751 -12.99226404 #> 335 13.64901007 -6.58343904 -12.90928293 #> 336 13.69036830 -6.54117023 -12.82639919 #> 337 13.73167837 -6.49895064 -12.74361197 #> 338 13.77294069 -6.45677985 -12.66092044 #> 339 13.81415569 -6.41465743 -12.57832375 #> 340 13.85532378 -6.37258295 -12.49582107 #> 341 13.89644537 -6.33055599 -12.41341157 #> 342 13.93752088 -6.28857613 -12.33109443 #> 343 13.97855071 -6.24664295 -12.24886882 #> 344 14.01953527 -6.20475604 -12.16673395 #> 345 14.06047496 -6.16291499 -12.08468899 #> 346 14.10137017 -6.12111939 -12.00273316 #> 347 14.14222132 -6.07936883 -11.92086565 #> 348 14.18302878 -6.03766292 -11.83908567 #> 349 14.22379296 -5.99600124 -11.75739245 #> 350 14.26451424 -5.95438340 -11.67578519 #> 351 14.30519301 -5.91280902 -11.59426312 #> 352 14.34582965 -5.87127768 -11.51282548 #> 353 14.38642456 -5.82978901 -11.43147149 #> 354 14.42697809 -5.78834261 -11.35020040 #> 355 14.46749064 -5.74693810 -11.26901144 #> 356 14.50796258 -5.70557509 -11.18790388 #> 357 14.54839428 -5.66425321 -11.10687695 #> 358 14.58878612 -5.62297208 -11.02592992 #> 359 14.62913845 -5.58173132 -10.94506205 #> 360 14.66945165 -5.54053055 -10.86427261 #> 361 14.70972608 -5.49936940 -10.78356086 #> 362 14.74996210 
-5.45824751 -10.70292608 #> 363 14.79016007 -5.41716451 -10.62236756 #> 364 14.83032035 -5.37612002 -10.54188457 #> 365 14.87044329 -5.33511370 -10.46147640 #> 366 14.91052926 -5.29414517 -10.38114235 #> 367 14.95057859 -5.25321408 -10.30088171 #> 368 14.99059164 -5.21232007 -10.22069378 #> 369 15.03056875 -5.17146278 -10.14057786 #> 370 15.07051028 -5.13064187 -10.06053327 #> 371 15.11041656 -5.08985698 -9.98055931 #> 372 15.15028793 -5.04910776 -9.90065530 #> 373 15.19012474 -5.00839387 -9.82082056 #> 374 15.22992733 -4.96771495 -9.74105440 #> 375 15.26969602 -4.92707068 -9.66135617 #> 376 15.30943116 -4.88646070 -9.58172519 #> 377 15.34913308 -4.84588467 -9.50216078 #> 378 15.38880210 -4.80534226 -9.42266230 #> 379 15.42843855 -4.76483314 -9.34322907 #> 380 15.46804277 -4.72435695 -9.26386045 #> 381 15.50761508 -4.68391339 -9.18455579 #> 382 15.54715581 -4.64350210 -9.10531442 #> 383 15.58666526 -4.60312277 -9.02613571 #> 384 15.62614378 -4.56277507 -8.94701901 #> 385 15.66559166 -4.52245866 -8.86796369 #> 386 15.70500924 -4.48217323 -8.78896911 #> 387 15.74439683 -4.44191845 -8.71003463 #> 388 15.78375474 -4.40169400 -8.63115963 #> 389 15.82308328 -4.36149957 -8.55234348 #> 390 15.86238278 -4.32133483 -8.47358555 #> 391 15.90165352 -4.28119946 -8.39488522 #> 392 15.94089584 -4.24109315 -8.31624188 #> 393 15.98011002 -4.20101560 -8.23765491 #> 394 16.01929638 -4.16096648 -8.15912370 #> 395 16.05845522 -4.12094548 -8.08064764 #> 396 16.09758685 -4.08095230 -8.00222611 #> 397 16.13669156 -4.04098662 -7.92385853 #> 398 16.17576966 -4.00104815 -7.84554428 #> 399 16.21482144 -3.96113657 -7.76728277 #> 400 16.25384721 -3.92125158 -7.68907341 #> 401 16.29284725 -3.88139288 -7.61091559 #> 402 16.33182186 -3.84156017 -7.53280873 #> 403 16.37077134 -3.80175314 -7.45475224 #> 404 16.40969598 -3.76197150 -7.37674553 #> 405 16.44859607 -3.72221496 -7.29878802 #> 406 16.48747190 -3.68248321 -7.22087913 #> 407 16.52632376 -3.64277595 -7.14301828 #> 408 16.56515193 -3.60309291 -7.06520490 #> 409 16.60395670 -3.56343377 -6.98743840 #> 410 16.64273836 -3.52379826 -6.90971823 #> 411 16.68149720 -3.48418608 -6.83204380 #> 412 16.72023348 -3.44459694 -6.75441456 #> 413 16.75894751 -3.40503056 -6.67682993 #> 414 16.79763955 -3.36548664 -6.59928936 #> 415 16.83630989 -3.32596490 -6.52179228 #> 416 16.87495880 -3.28646506 -6.44433813 #> 417 16.91358657 -3.24698683 -6.36692637 #> 418 16.95219347 -3.20752992 -6.28955642 #> 419 16.99077978 -3.16809407 -6.21222775 #> 420 17.02934576 -3.12867898 -6.13493980 #> 421 17.06789170 -3.08928438 -6.05769202 #> 422 17.10641787 -3.04990999 -5.98048386 #> 423 17.14492454 -3.01055552 -5.90331478 #> 424 17.18341198 -2.97122071 -5.82618424 #> 425 17.22188046 -2.93190528 -5.74909170 #> 426 17.26033025 -2.89260895 -5.67203661 #> 427 17.29876161 -2.85333144 -5.59501844 #> 428 17.33717483 -2.81407249 -5.51803666 #> 429 17.37557015 -2.77483182 -5.44109072 #> 430 17.41394785 -2.73560916 -5.36418010 #> 431 17.45230820 -2.69640425 -5.28730426 #> 432 17.49065145 -2.65721680 -5.21046268 #> 433 17.52897787 -2.61804655 -5.13365483 #> 434 17.56728772 -2.57889323 -5.05688017 #> 435 17.60558127 -2.53975658 -4.98013820 #> 436 17.64385878 -2.50063633 -4.90342838 #> 437 17.68212049 -2.46153220 -4.82675019 #> 438 17.72036669 -2.42244395 -4.75010312 #> 439 17.75859762 -2.38337129 -4.67348663 #> 440 17.79681354 -2.34431398 -4.59690023 #> 441 17.83501471 -2.30527174 -4.52034338 #> 442 17.87320139 -2.26624431 -4.44381558 #> 443 17.91137383 -2.22723144 -4.36731632 #> 444 17.94953228 -2.18823285 
-4.29084507 #> 445 17.98767701 -2.14924829 -4.21440134 #> 446 18.02580827 -2.11027751 -4.13798460 #> 447 18.06392630 -2.07132023 -4.06159436 #> 448 18.10203137 -2.03237621 -3.98523010 #> 449 18.14012373 -1.99344518 -3.90889133 #> 450 18.17820362 -1.95452688 -3.83257753 #> 451 18.21627131 -1.91562107 -3.75628820 #> 452 18.25432703 -1.87672748 -3.68002284 #> 453 18.29237104 -1.83784586 -3.60378095 #> 454 18.33040359 -1.79897595 -3.52756203 #> 455 18.36842493 -1.76011750 -3.45136557 #> 456 18.40643530 -1.72127026 -3.37519109 #> 457 18.44443496 -1.68243397 -3.29903808 #> 458 18.48242415 -1.64360837 -3.22290605 #> 459 18.52040313 -1.60479322 -3.14679450 #> 460 18.55837212 -1.56598827 -3.07070294 #> 461 18.59633139 -1.52719325 -2.99463087 #> 462 18.63428118 -1.48840793 -2.91857781 #> 463 18.67222174 -1.44963205 -2.84254325 #> 464 18.71015330 -1.41086535 -2.76652672 #> 465 18.74807611 -1.37210760 -2.69052771 #> 466 18.78599042 -1.33335853 -2.61454574 #> 467 18.82389647 -1.29461791 -2.53858033 #> 468 18.86179451 -1.25588547 -2.46263098 #> 469 18.89968477 -1.21716099 -2.38669720 #> 470 18.93756751 -1.17844419 -2.31077852 #> 471 18.97544296 -1.13973485 -2.23487444 #> 472 19.01331136 -1.10103270 -2.15898447 #> 473 19.05117296 -1.06233751 -2.08310815 #> 474 19.08902799 -1.02364903 -2.00724497 #> 475 19.12687670 -0.98496700 -1.93139446 #> 476 19.16471934 -0.94629119 -1.85555614 #> 477 19.20255613 -0.90762135 -1.77972952 #> 478 19.24038733 -0.86895723 -1.70391412 #> 479 19.27821316 -0.83029859 -1.62810946 #> 480 19.31603388 -0.79164518 -1.55231507 #> 481 19.35384972 -0.75299676 -1.47653045 #> 482 19.39166091 -0.71435308 -1.40075513 #> 483 19.42946771 -0.67571390 -1.32498863 #> 484 19.46727034 -0.63707897 -1.24923047 #> 485 19.50506905 -0.59844805 -1.17348017 #> 486 19.54286408 -0.55982090 -1.09773726 #> 487 19.58065566 -0.52119727 -1.02200126 #> 488 19.61844403 -0.48257691 -0.94627168 #> 489 19.65622943 -0.44395960 -0.87054806 #> 490 19.69401210 -0.40534507 -0.79482991 #> 491 19.73179227 -0.36673310 -0.71911675 #> 492 19.76957019 -0.32812343 -0.64340812 #> 493 19.80734609 -0.28951582 -0.56770353 #> 494 19.84512021 -0.25091003 -0.49200252 #> 495 19.88289279 -0.21230582 -0.41630459 #> 496 19.92066406 -0.17370294 -0.34060928 #> 497 19.95843427 -0.13510116 -0.26491611 #> 498 19.99620364 -0.09650022 -0.18922460 #> 499 20.03397242 -0.05789989 -0.11353429 #> 500 20.07174085 -0.01929992 -0.03784468 #> 501 20.10950915 0.01929992 0.03784468 #> 502 20.14727758 0.05789989 0.11353429 #> 503 20.18504636 0.09650022 0.18922460 #> 504 20.22281573 0.13510116 0.26491611 #> 505 20.26058594 0.17370294 0.34060928 #> 506 20.29835721 0.21230582 0.41630459 #> 507 20.33612979 0.25091003 0.49200252 #> 508 20.37390391 0.28951582 0.56770353 #> 509 20.41167981 0.32812343 0.64340812 #> 510 20.44945773 0.36673310 0.71911675 #> 511 20.48723790 0.40534507 0.79482991 #> 512 20.52502057 0.44395960 0.87054806 #> 513 20.56280597 0.48257691 0.94627168 #> 514 20.60059434 0.52119727 1.02200126 #> 515 20.63838592 0.55982090 1.09773726 #> 516 20.67618095 0.59844805 1.17348017 #> 517 20.71397966 0.63707897 1.24923047 #> 518 20.75178229 0.67571390 1.32498863 #> 519 20.78958909 0.71435308 1.40075513 #> 520 20.82740028 0.75299676 1.47653045 #> 521 20.86521612 0.79164518 1.55231507 #> 522 20.90303684 0.83029859 1.62810946 #> 523 20.94086267 0.86895723 1.70391412 #> 524 20.97869387 0.90762135 1.77972952 #> 525 21.01653066 0.94629119 1.85555614 #> 526 21.05437330 0.98496700 1.93139446 #> 527 21.09222201 1.02364903 2.00724497 #> 528 21.13007704 
1.06233751 2.08310815 #> 529 21.16793864 1.10103270 2.15898447 #> 530 21.20580704 1.13973485 2.23487444 #> 531 21.24368249 1.17844419 2.31077852 #> 532 21.28156523 1.21716099 2.38669720 #> 533 21.31945549 1.25588547 2.46263098 #> 534 21.35735353 1.29461791 2.53858033 #> 535 21.39525958 1.33335853 2.61454574 #> 536 21.43317389 1.37210760 2.69052771 #> 537 21.47109670 1.41086535 2.76652672 #> 538 21.50902826 1.44963205 2.84254325 #> 539 21.54696882 1.48840793 2.91857781 #> 540 21.58491861 1.52719325 2.99463087 #> 541 21.62287788 1.56598827 3.07070294 #> 542 21.66084687 1.60479322 3.14679450 #> 543 21.69882585 1.64360837 3.22290605 #> 544 21.73681504 1.68243397 3.29903808 #> 545 21.77481470 1.72127026 3.37519109 #> 546 21.81282507 1.76011750 3.45136557 #> 547 21.85084641 1.79897595 3.52756203 #> 548 21.88887896 1.83784586 3.60378095 #> 549 21.92692297 1.87672748 3.68002284 #> 550 21.96497869 1.91562107 3.75628820 #> 551 22.00304638 1.95452688 3.83257753 #> 552 22.04112627 1.99344518 3.90889133 #> 553 22.07921863 2.03237621 3.98523010 #> 554 22.11732370 2.07132023 4.06159436 #> 555 22.15544173 2.11027751 4.13798460 #> 556 22.19357299 2.14924829 4.21440134 #> 557 22.23171772 2.18823285 4.29084507 #> 558 22.26987617 2.22723144 4.36731632 #> 559 22.30804861 2.26624431 4.44381558 #> 560 22.34623529 2.30527174 4.52034338 #> 561 22.38443646 2.34431398 4.59690023 #> 562 22.42265238 2.38337129 4.67348663 #> 563 22.46088331 2.42244395 4.75010312 #> 564 22.49912951 2.46153220 4.82675019 #> 565 22.53739122 2.50063633 4.90342838 #> 566 22.57566873 2.53975658 4.98013820 #> 567 22.61396228 2.57889323 5.05688017 #> 568 22.65227213 2.61804655 5.13365483 #> 569 22.69059855 2.65721680 5.21046268 #> 570 22.72894180 2.69640425 5.28730426 #> 571 22.76730215 2.73560916 5.36418010 #> 572 22.80567985 2.77483182 5.44109072 #> 573 22.84407517 2.81407249 5.51803666 #> 574 22.88248839 2.85333144 5.59501844 #> 575 22.92091975 2.89260895 5.67203661 #> 576 22.95936954 2.93190528 5.74909170 #> 577 22.99783802 2.97122071 5.82618424 #> 578 23.03632546 3.01055552 5.90331478 #> 579 23.07483213 3.04990999 5.98048386 #> 580 23.11335830 3.08928438 6.05769202 #> 581 23.15190424 3.12867898 6.13493980 #> 582 23.19047022 3.16809407 6.21222775 #> 583 23.22905653 3.20752992 6.28955642 #> 584 23.26766343 3.24698683 6.36692637 #> 585 23.30629120 3.28646506 6.44433813 #> 586 23.34494011 3.32596490 6.52179228 #> 587 23.38361045 3.36548664 6.59928936 #> 588 23.42230249 3.40503056 6.67682993 #> 589 23.46101652 3.44459694 6.75441456 #> 590 23.49975280 3.48418608 6.83204380 #> 591 23.53851164 3.52379826 6.90971823 #> 592 23.57729330 3.56343377 6.98743840 #> 593 23.61609807 3.60309291 7.06520490 #> 594 23.65492624 3.64277595 7.14301828 #> 595 23.69377810 3.68248321 7.22087913 #> 596 23.73265393 3.72221496 7.29878802 #> 597 23.77155402 3.76197150 7.37674553 #> 598 23.81047866 3.80175314 7.45475224 #> 599 23.84942814 3.84156017 7.53280873 #> 600 23.88840275 3.88139288 7.61091559 #> 601 23.92740279 3.92125158 7.68907341 #> 602 23.96642856 3.96113657 7.76728277 #> 603 24.00548034 4.00104815 7.84554428 #> 604 24.04455844 4.04098662 7.92385853 #> 605 24.08366315 4.08095230 8.00222611 #> 606 24.12279478 4.12094548 8.08064764 #> 607 24.16195362 4.16096648 8.15912370 #> 608 24.20113998 4.20101560 8.23765491 #> 609 24.24035416 4.24109315 8.31624188 #> 610 24.27959648 4.28119946 8.39488522 #> 611 24.31886722 4.32133483 8.47358555 #> 612 24.35816672 4.36149957 8.55234348 #> 613 24.39749526 4.40169400 8.63115963 #> 614 24.43685317 4.44191845 8.71003463 #> 615 
24.47624076 4.48217323 8.78896911 #> 616 24.51565834 4.52245866 8.86796369 #> 617 24.55510622 4.56277507 8.94701901 #> 618 24.59458474 4.60312277 9.02613571 #> 619 24.63409419 4.64350210 9.10531442 #> 620 24.67363492 4.68391339 9.18455579 #> 621 24.71320723 4.72435695 9.26386045 #> 622 24.75281145 4.76483314 9.34322907 #> 623 24.79244790 4.80534226 9.42266230 #> 624 24.83211692 4.84588467 9.50216078 #> 625 24.87181884 4.88646070 9.58172519 #> 626 24.91155398 4.92707068 9.66135617 #> 627 24.95132267 4.96771495 9.74105440 #> 628 24.99112526 5.00839387 9.82082056 #> 629 25.03096207 5.04910776 9.90065530 #> 630 25.07083344 5.08985698 9.98055931 #> 631 25.11073972 5.13064187 10.06053327 #> 632 25.15068125 5.17146278 10.14057786 #> 633 25.19065836 5.21232007 10.22069378 #> 634 25.23067141 5.25321408 10.30088171 #> 635 25.27072074 5.29414517 10.38114235 #> 636 25.31080671 5.33511370 10.46147640 #> 637 25.35092965 5.37612002 10.54188457 #> 638 25.39108993 5.41716451 10.62236756 #> 639 25.43128790 5.45824751 10.70292608 #> 640 25.47152392 5.49936940 10.78356086 #> 641 25.51179835 5.54053055 10.86427261 #> 642 25.55211155 5.58173132 10.94506205 #> 643 25.59246388 5.62297208 11.02592992 #> 644 25.63285572 5.66425321 11.10687695 #> 645 25.67328742 5.70557509 11.18790388 #> 646 25.71375936 5.74693810 11.26901144 #> 647 25.75427191 5.78834261 11.35020040 #> 648 25.79482544 5.82978901 11.43147149 #> 649 25.83542035 5.87127768 11.51282548 #> 650 25.87605699 5.91280902 11.59426312 #> 651 25.91673576 5.95438340 11.67578519 #> 652 25.95745704 5.99600124 11.75739245 #> 653 25.99822122 6.03766292 11.83908567 #> 654 26.03902868 6.07936883 11.92086565 #> 655 26.07987983 6.12111939 12.00273316 #> 656 26.12077504 6.16291499 12.08468899 #> 657 26.16171473 6.20475604 12.16673395 #> 658 26.20269929 6.24664295 12.24886882 #> 659 26.24372912 6.28857613 12.33109443 #> 660 26.28480463 6.33055599 12.41341157 #> 661 26.32592622 6.37258295 12.49582107 #> 662 26.36709431 6.41465743 12.57832375 #> 663 26.40830931 6.45677985 12.66092044 #> 664 26.44957163 6.49895064 12.74361197 #> 665 26.49088170 6.54117023 12.82639919 #> 666 26.53223993 6.58343904 12.90928293 #> 667 26.57364675 6.62575751 12.99226404 #> 668 26.61510259 6.66812608 13.07534340 #> 669 26.65660789 6.71054519 13.15852185 #> 670 26.69816306 6.75301528 13.24180027 #> 671 26.73976856 6.79553680 13.32517953 #> 672 26.78142481 6.83811019 13.40866052 #> 673 26.82313227 6.88073592 13.49224413 #> 674 26.86489138 6.92341443 13.57593124 #> 675 26.90670259 6.96614619 13.65972276 #> 676 26.94856635 7.00893166 13.74361960 #> 677 26.99048312 7.05177130 13.82762266 #> 678 27.03245335 7.09466559 13.91173289 #> 679 27.07447752 7.13761500 13.99595119 #> 680 27.11655609 7.18062000 14.08027851 #> 681 27.15868952 7.22368108 14.16471578 #> 682 27.20087830 7.26679872 14.24926396 #> 683 27.24312289 7.30997341 14.33392401 #> 684 27.28542378 7.35320563 14.41869688 #> 685 27.32778146 7.39649589 14.50358355 #> 686 27.37019641 7.43984469 14.58858500 #> 687 27.41266912 7.48325252 14.67370221 #> 688 27.45520010 7.52671989 14.75893617 #> 689 27.49778983 7.57024732 14.84428789 #> 690 27.54043883 7.61383531 14.92975839 #> 691 27.58314760 7.65748440 15.01534866 #> 692 27.62591666 7.70119509 15.10105975 #> 693 27.66874651 7.74496792 15.18689268 #> 694 27.71163768 7.78880343 15.27284851 #> 695 27.75459070 7.83270214 15.35892827 #> 696 27.79760610 7.87666459 15.44513303 #> 697 27.84068439 7.92069134 15.53146385 #> 698 27.88382614 7.96478293 15.61792182 #> 699 27.92703186 8.00893991 15.70450802 #> 700 
27.97030212 8.05316284 15.79122354 #> 701 28.01363746 8.09745229 15.87806950 #> 702 28.05703844 8.14180882 15.96504699 #> 703 28.10050562 8.18623301 16.05215715 #> 704 28.14403956 8.23072544 16.13940111 #> 705 28.18764084 8.27528667 16.22678001 #> 706 28.23131002 8.31991732 16.31429500 #> 707 28.27504770 8.36461796 16.40194724 #> 708 28.31885445 8.40938919 16.48973791 #> 709 28.36273086 8.45423162 16.57766819 #> 710 28.40667753 8.49914586 16.66573927 #> 711 28.45069505 8.54413251 16.75395236 #> 712 28.49478405 8.58919221 16.84230866 #> 713 28.53894511 8.63432557 16.93080940 #> 714 28.58317888 8.67953321 17.01945583 #> 715 28.62748595 8.72481579 17.10824918 #> 716 28.67186697 8.77017394 17.19719072 #> 717 28.71632256 8.81560831 17.28628171 #> 718 28.76085337 8.86111955 17.37552343 #> 719 28.80546005 8.90670832 17.46491719 #> 720 28.85014323 8.95237528 17.55446427 #> 721 28.89490358 8.99812112 17.64416601 #> 722 28.93974177 9.04394650 17.73402373 #> 723 28.98465845 9.08985211 17.82403877 #> 724 29.02965432 9.13583864 17.91421248 #> 725 29.07473004 9.18190679 18.00454624 #> 726 29.11988632 9.22805727 18.09504142 #> 727 29.16512384 9.27429077 18.18569943 #> 728 29.21044330 9.32060803 18.27652165 #> 729 29.25584543 9.36700977 18.36750953 #> 730 29.30133092 9.41349671 18.45866448 #> 731 29.34690051 9.46006960 18.54998796 #> 732 29.39255492 9.50672918 18.64148144 #> 733 29.43829489 9.55347620 18.73314638 #> 734 29.48412117 9.60031143 18.82498428 #> 735 29.53003451 9.64723563 18.91699665 #> 736 29.57603567 9.69424959 19.00918501 #> 737 29.62212540 9.74135408 19.10155090 #> 738 29.66830450 9.78854990 19.19409587 #> 739 29.71457374 9.83583784 19.28682148 #> 740 29.76093391 9.88321871 19.37972932 #> 741 29.80738581 9.93069334 19.47282100 #> 742 29.85393025 9.97826254 19.56609813 #> 743 29.90056804 10.02592715 19.65956234 #> 744 29.94730001 10.07368801 19.75321528 #> 745 29.99412699 10.12154597 19.84705863 #> 746 30.04104982 10.16950190 19.94109407 #> 747 30.08806935 10.21755665 20.03532330 #> 748 30.13518644 10.26571111 20.12974805 #> 749 30.18240196 10.31396617 20.22437005 #> 750 30.22971679 10.36232271 20.31919106 #> 751 30.27713180 10.41078166 20.41421287 #> 752 30.32464791 10.45934393 20.50943726 #> 753 30.37226602 10.50801043 20.60486607 #> 754 30.41998704 10.55678212 20.70050111 #> 755 30.46781189 10.60565993 20.79634425 #> 756 30.51574153 10.65464482 20.89239737 #> 757 30.56377689 10.70373777 20.98866237 #> 758 30.61191892 10.75293974 21.08514115 #> 759 30.66016861 10.80225174 21.18183568 #> 760 30.70852693 10.85167476 21.27874790 #> 761 30.75699487 10.90120981 21.37587981 #> 762 30.80557343 10.95085792 21.47323341 #> 763 30.85426363 11.00062012 21.57081073 #> 764 30.90306649 11.05049746 21.66861383 #> 765 30.95198305 11.10049101 21.76664478 #> 766 31.00101435 11.15060182 21.86490569 #> 767 31.05016146 11.20083100 21.96339869 #> 768 31.09942545 11.25117963 22.06212592 #> 769 31.14880741 11.30164882 22.16108956 #> 770 31.19830844 11.35223971 22.26029182 #> 771 31.24792964 11.40295342 22.35973493 #> 772 31.29767215 11.45379110 22.45942114 #> 773 31.34753711 11.50475393 22.55935273 #> 774 31.39752567 11.55584308 22.65953202 #> 775 31.44763899 11.60705973 22.75996135 #> 776 31.49787826 11.65840511 22.86064307 #> 777 31.54824467 11.70988043 22.96157960 #> 778 31.59873943 11.76148693 23.06277335 #> 779 31.64936377 11.81322587 23.16422679 #> 780 31.70011894 11.86509850 23.26594240 #> 781 31.75100618 11.91710612 23.36792270 #> 782 31.80202677 11.96925002 23.47017023 #> 783 31.85318200 12.02153153 
23.57268759 #> 784 31.90447318 12.07395198 23.67547739 #> 785 31.95590162 12.12651271 23.77854228 #> 786 32.00746867 12.17921510 23.88188493 #> 787 32.05917568 12.23206054 23.98550808 #> 788 32.11102402 12.28505042 24.08941447 #> 789 32.16301509 12.33818617 24.19360689 #> 790 32.21515031 12.39146924 24.29808818 #> 791 32.26743109 12.44490108 24.40286118 #> 792 32.31985888 12.49848317 24.50792882 #> 793 32.37243516 12.55221701 24.61329402 #> 794 32.42516141 12.60610412 24.71895976 #> 795 32.47803913 12.66014605 24.82492908 #> 796 32.53106986 12.71434435 24.93120503 #> 797 32.58425515 12.76870061 25.03779071 #> 798 32.63759656 12.82321643 25.14468927 #> 799 32.69109569 12.87789344 25.25190390 #> 800 32.74475415 12.93273329 25.35943784 #> 801 32.79857357 12.98773765 25.46729436 #> 802 32.85255562 13.04290822 25.57547679 #> 803 32.90670198 13.09824671 25.68398851 #> 804 32.96101436 13.15375487 25.79283292 #> 805 33.01549448 13.20943448 25.90201352 #> 806 33.07014411 13.26528732 26.01153380 #> 807 33.12496502 13.32131521 26.12139735 #> 808 33.17995902 13.37752001 26.23160778 #> 809 33.23512795 13.43390359 26.34216877 #> 810 33.29047367 13.49046784 26.45308404 #> 811 33.34599806 13.54721471 26.56435740 #> 812 33.40170305 13.60414614 26.67599266 #> 813 33.45759056 13.66126413 26.78799373 #> 814 33.51366259 13.71857068 26.90036458 #> 815 33.56992114 13.77606787 27.01310920 #> 816 33.62636823 13.83375775 27.12623169 #> 817 33.68300594 13.89164244 27.23973617 #> 818 33.73983636 13.94972408 27.35362686 #> 819 33.79686162 14.00800486 27.46790803 #> 820 33.85408389 14.06648698 27.58258399 #> 821 33.91150536 14.12517269 27.69765917 #> 822 33.96912826 14.18406427 27.81313804 #> 823 34.02695486 14.24316404 27.92902512 #> 824 34.08498747 14.30247434 28.04532505 #> 825 34.14322842 14.36199757 28.16204251 #> 826 34.20168010 14.42173617 28.27918226 #> 827 34.26034491 14.48169260 28.39674915 #> 828 34.31922531 14.54186936 28.51474811 #> 829 34.37832381 14.60226902 28.63318412 #> 830 34.43764293 14.66289417 28.75206229 #> 831 34.49718526 14.72374743 28.87138777 #> 832 34.55695343 14.78483150 28.99116583 #> 833 34.61695009 14.84614910 29.11140180 #> 834 34.67717796 14.90770300 29.23210114 #> 835 34.73763980 14.96949602 29.35326936 #> 836 34.79833842 15.03153103 29.47491209 #> 837 34.85927666 15.09381095 29.59703504 #> 838 34.92045744 15.15633874 29.71964405 #> 839 34.98188371 15.21911742 29.84274503 #> 840 35.04355848 15.28215007 29.96634400 #> 841 35.10548481 15.34543982 30.09044711 #> 842 35.16766580 15.40898984 30.21506060 #> 843 35.23010464 15.47280339 30.34019081 #> 844 35.29280455 15.53688376 30.46584422 #> 845 35.35576882 15.60123430 30.59202742 #> 846 35.41900080 15.66585845 30.71874712 #> 847 35.48250390 15.73075968 30.84601015 #> 848 35.54628158 15.79594155 30.97382346 #> 849 35.61033739 15.86140767 31.10219416 #> 850 35.67467493 15.92716172 31.23112945 #> 851 35.73929787 15.99320746 31.36063671 #> 852 35.80420996 16.05954871 31.49072342 #> 853 35.86941502 16.12618937 31.62139725 #> 854 35.93491692 16.19313342 31.75266597 #> 855 36.00071963 16.26038490 31.88453754 #> 856 36.06682721 16.32794796 32.01702006 #> 857 36.13324376 16.39582679 32.15012178 #> 858 36.19997349 16.46402570 32.28385114 #> 859 36.26702070 16.53254908 32.41821673 #> 860 36.33438975 16.60140139 32.55322731 #> 861 36.40208512 16.67058720 32.68889184 #> 862 36.47011136 16.74011116 32.82521945 #> 863 36.53847312 16.80997803 32.96221945 #> 864 36.60717514 16.88019266 33.09990136 #> 865 36.67622228 16.95075999 33.23827490 #> 866 
36.74561948 17.02168511 33.37734998 #> 867 36.81537180 17.09297315 33.51713674 #> 868 36.88548441 17.16462942 33.65764552 #> 869 36.95596257 17.23665928 33.79888689 #> 870 37.02681168 17.30906827 33.94087165 #> 871 37.09803724 17.38186199 34.08361085 #> 872 37.16964489 17.45504622 34.22711577 #> 873 37.24164039 17.52862683 34.37139794 #> 874 37.31402961 17.60260984 34.51646917 #> 875 37.38681858 17.67700139 34.66234151 #> 876 37.46001346 17.75180779 34.80902730 #> 877 37.53362055 17.82703547 34.95653917 #> 878 37.60764628 17.90269102 35.10489004 #> 879 37.68209726 17.97878118 35.25409312 #> 880 37.75698025 18.05531285 35.40416195 #> 881 37.83230215 18.13229310 35.55511039 #> 882 37.90807004 18.20972917 35.70695262 #> 883 37.98429118 18.28762846 35.85970318 #> 884 38.06097300 18.36599857 36.01337696 #> 885 38.13812311 18.44484728 36.16798921 #> 886 38.21574931 18.52418257 36.32355557 #> 887 38.29385960 18.60401260 36.48009207 #> 888 38.37246219 18.68434577 36.63761514 #> 889 38.45156547 18.76519066 36.79614165 #> 890 38.53117809 18.84655610 36.95568888 #> 891 38.61130890 18.92845114 37.11627458 #> 892 38.69196697 19.01088505 37.27791695 #> 893 38.77316165 19.09386737 37.44063469 #> 894 38.85490250 19.17740790 37.60444700 #> 895 38.93719936 19.26151668 37.76937357 #> 896 39.02006234 19.34620404 37.93543467 #> 897 39.10350182 19.43148060 38.10265110 #> 898 39.18752848 19.51735726 38.27104425 #> 899 39.27215328 19.60384523 38.44063612 #> 900 39.35738751 19.69095605 38.61144932 #> 901 39.44324278 19.77870159 38.78350711 #> 902 39.52973105 19.86709405 38.95683343 #> 903 39.61686459 19.95614600 39.13145293 #> 904 39.70465607 20.04587038 39.30739097 #> 905 39.79311854 20.13628051 39.48467368 #> 906 39.88226541 20.22739012 39.66332799 #> 907 39.97211054 20.31921335 39.84338162 #> 908 40.06266820 20.41176480 40.02486319 #> 909 40.15395309 20.50505950 40.20780218 #> 910 40.24598041 20.59911297 40.39222901 #> 911 40.33876581 20.69394121 40.57817509 #> 912 40.43232546 20.78956076 40.76567280 #> 913 40.52667608 20.88598868 40.95475563 #> 914 40.62183490 20.98324260 41.14545815 #> 915 40.71781976 21.08134074 41.33781607 #> 916 40.81464908 21.18030194 41.53186634 #> 917 40.91234193 21.28014568 41.72764716 #> 918 41.01091803 21.38089212 41.92519805 #> 919 41.11039779 21.48256211 42.12455992 #> 920 41.21080236 21.58517727 42.32577514 #> 921 41.31215363 21.68875998 42.52888760 #> 922 41.41447430 21.79333343 42.73394277 #> 923 41.51778790 21.89892167 42.94098782 #> 924 41.62211885 22.00554966 43.15007168 #> 925 41.72749248 22.11324328 43.36124513 #> 926 41.83393511 22.22202945 43.57456090 #> 927 41.94147407 22.33193608 43.79007376 #> 928 42.05013777 22.44299221 44.00784065 #> 929 42.15995577 22.55522805 44.22792080 #> 930 42.27095880 22.66867502 44.45037581 #> 931 42.38317888 22.78336584 44.67526985 #> 932 42.49664937 22.89933459 44.90266975 #> 933 42.61140501 23.01661679 45.13264517 #> 934 42.72748207 23.13524951 45.36526877 #> 935 42.84491839 23.25527140 45.60061638 #> 936 42.96375349 23.37672287 45.83876719 #> 937 43.08402865 23.49964612 46.07980397 #> 938 43.20578707 23.62408527 46.32381326 #> 939 43.32907393 23.75008652 46.57088562 #> 940 43.45393657 23.87769825 46.82111591 #> 941 43.58042460 24.00697114 47.07460353 #> 942 43.70859003 24.13795837 47.33145276 #> 943 43.83848748 24.27071576 47.59177303 #> 944 43.97017433 24.40530194 47.85567935 #> 945 44.10371092 24.54177859 48.12329261 #> 946 44.23916074 24.68021059 48.39474008 #> 947 44.37659069 24.82066630 48.67015580 #> 948 44.51607129 24.96321783 
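A natural follow-up, not shown on the reference page above, is to contrast the simulated priors with the posterior draws. This is a minimal sketch, assuming the stan_glm fit `model` from the example above; as.data.frame() on a stanreg object returns its posterior draws.

# Sketch: prior vs. posterior density for the 'wt' coefficient
priors <- simulate_prior(model)     # empirical prior draws, as in the example above
posteriors <- as.data.frame(model)  # posterior draws from the stanreg fit
plot(density(priors$wt), main = "wt: prior (solid) vs. posterior (dashed)")
lines(density(posteriors$wt), lty = 2)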
{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":null,"dir":"Reference","previous_headings":"","what":"Simpson's paradox dataset simulation — simulate_simpson","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"Simpson's paradox, or the Yule-Simpson effect, is a phenomenon in probability and statistics in which a trend appears in several different groups of data but disappears or reverses when these groups are combined.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"","code":"simulate_simpson( n = 100, r = 0.5, groups = 3, difference = 1, group_prefix = \"G_\" )"},{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"n The number of observations to be generated for each group (minimum 4). r A value or vector corresponding to the desired correlation coefficients. groups Number of groups (groups can be participants, clusters, anything). difference Difference between the groups. group_prefix The prefix of the group name (e.g., \"G_1\", \"G_2\", \"G_3\", ...).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"A dataset.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/simulate_simpson.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Simpson's paradox dataset simulation — simulate_simpson","text":"","code":"data <- simulate_simpson(n = 10, groups = 5, r = 0.5) if (require(\"ggplot2\")) { ggplot(data, aes(x = V1, y = V2)) + geom_point(aes(color = Group)) + geom_smooth(aes(color = Group), method = \"lm\") + geom_smooth(method = \"lm\") } #> Loading required package: ggplot2 #> `geom_smooth()` using formula = 'y ~ x' #> `geom_smooth()` using formula = 'y ~ x'"},
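The reversal can also be checked numerically rather than visually. A minimal sketch, assuming the V1/V2/Group columns produced by simulate_simpson() (as in the plot above); the seed and sizes are arbitrary:

set.seed(123)
d <- simulate_simpson(n = 100, groups = 3, r = 0.5)
# Within each group, V1 and V2 correlate at roughly r = +0.5 ...
sapply(split(d, d$Group), function(g) cor(g$V1, g$V2))
# ... while pooling the groups typically reverses the sign of the trend.
cor(d$V1, d$V2)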
{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":null,"dir":"Reference","previous_headings":"","what":"Shortest Probability Interval (SPI) — spi","title":"Shortest Probability Interval (SPI) — spi","text":"Compute the Shortest Probability Interval (SPI) of posterior distributions. The SPI is a more computationally stable HDI. The implementation is based on the algorithm from the SPIn package.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Shortest Probability Interval (SPI) — spi","text":"","code":"spi(x, ...) # S3 method for class 'numeric' spi(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'data.frame' spi(x, ci = 0.95, rvar_col = NULL, verbose = TRUE, ...) # S3 method for class 'stanreg' spi( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"location\", \"all\", \"conditional\", \"smooth_terms\", \"sigma\", \"distributional\", \"auxiliary\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'brmsfit' spi( x, ci = 0.95, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL, verbose = TRUE, ... ) # S3 method for class 'get_predicted' spi(x, ci = 0.95, use_iterations = FALSE, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Shortest Probability Interval (SPI) — spi","text":"x A vector representing a posterior distribution, or a data frame of such vectors. Can also be a Bayesian model. bayestestR supports a wide range of models (see, for example, methods(\"hdi\")); not all of them are documented in the 'Usage' section, because methods for other classes mostly resemble the arguments of the as.numeric or as.data.frame methods. ... Currently not used. ci Value or vector of probability of the (credible) interval - CI (between 0 and 1) to be estimated. Default to .95 (95%). verbose Toggle off warnings. rvar_col A single character - the name of an rvar column in the data frame to be processed. See example in p_direction(). effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output. use_iterations Logical, if TRUE and x is a get_predicted object (returned by insight::get_predicted()), the function is applied to the iterations instead of the predictions. This only applies to models that return iterations for predicted values (e.g., brmsfit models).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Shortest Probability Interval (SPI) — spi","text":"A data frame with the following columns: Parameter The model parameter(s), if x is a model-object. If x is a vector, this column is missing. CI The probability of the credible interval. CI_low, CI_high The lower and upper credible interval limits for the parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Shortest Probability Interval (SPI) — spi","text":"The SPI is an alternative method to the HDI (hdi()) to quantify the uncertainty of (posterior) distributions. The SPI is said to be more stable than the HDI, because the \"HDI can be noisy (that is, have a high Monte Carlo error)\" (Liu et al. 2015). Furthermore, the HDI is sensitive to additional assumptions, in particular assumptions related to the different estimation methods, which can make the HDI less accurate and reliable.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Shortest Probability Interval (SPI) — spi","text":"The code to compute the SPI was adapted from the SPIn package, and slightly modified to be more robust for Stan models. Thus, all credits go to Ying Liu for the original SPI algorithm and R implementation.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Shortest Probability Interval (SPI) — spi","text":"Liu, Y., Gelman, A., & Zheng, T. (2015). Simulation-efficient shortest probability intervals. Statistics and Computing, 25(4), 809–819. https://doi.org/10.1007/s11222-015-9563-8","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/spi.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Shortest Probability Interval (SPI) — spi","text":"","code":"library(bayestestR) posterior <- rnorm(1000) spi(posterior) #> 95% SPI: [-1.96, 1.96] spi(posterior, ci = c(0.80, 0.89, 0.95)) #> Shortest Probability Interval #> #> 80% SPI | 89% SPI | 95% SPI #> --------------------------------------------- #> [-1.17, 1.34] | [-1.50, 1.70] | [-1.96, 1.96] df <- data.frame(replicate(4, rnorm(100))) spi(df) #> Shortest Probability Interval #> #> Parameter | 95% SPI #> ------------------------- #> X1 | [-2.04, 1.89] #> X2 | [-1.65, 1.96] #> X3 | [-2.09, 1.74] #> X4 | [-2.11, 1.97] spi(df, ci = c(0.80, 0.89, 0.95)) #> Shortest Probability Interval #> #> Parameter | 80% SPI | 89% SPI | 95% SPI #> --------------------------------------------------------- #> X1 | [-1.05, 1.49] | [-1.41, 1.76] | [-2.04, 1.89] #> X2 | [-0.90, 1.09] | [-1.33, 1.41] | [-1.65, 1.96] #> X3 | [-1.38, 1.44] | [-1.84, 1.49] | [-2.09, 1.74] #> X4 | [-1.27, 1.40] | [-1.61, 1.56] | [-2.11, 1.97] # \\donttest{ library(rstanarm) model <- suppressWarnings( stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0) ) spi(model) #> Shortest Probability Interval #> #> Parameter | 95% SPI #> ---------------------------- #> (Intercept) | [29.08, 47.70] #> wt | [-6.75, -4.12] #> gear | [-1.74, 1.70] # }"},
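Since the Details above position the SPI as a drop-in alternative to hdi(), a minimal comparison sketch (hypothetical data and seed) is:

library(bayestestR)
set.seed(1)
posterior <- rchisq(1000, df = 3)  # a skewed "posterior"
hdi(posterior, ci = 0.89)          # highest density interval
spi(posterior, ci = 0.89)          # shortest probability interval
# Both target the shortest 89% interval; spi() is designed to have a
# lower Monte Carlo error on such samples.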
{"path":"https://easystats.github.io/bayestestR/reference/unupdate.html","id":null,"dir":"Reference","previous_headings":"","what":"Un-update Bayesian models to their prior-to-data state — unupdate","title":"Un-update Bayesian models to their prior-to-data state — unupdate","text":"As posteriors are priors that have been updated after observing some data, the goal of this function is to un-update the posteriors to obtain models representing only the priors. These models can then be used to examine the prior predictive distribution, or to compare priors with posteriors.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/unupdate.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Un-update Bayesian models to their prior-to-data state — unupdate","text":"","code":"unupdate(model, verbose = TRUE, ...) # S3 method for class 'stanreg' unupdate(model, verbose = TRUE, ...) # S3 method for class 'brmsfit' unupdate(model, verbose = TRUE, ...) # S3 method for class 'brmsfit_multiple' unupdate(model, verbose = TRUE, newdata = NULL, ...) # S3 method for class 'blavaan' unupdate(model, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/bayestestR/reference/unupdate.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Un-update Bayesian models to their prior-to-data state — unupdate","text":"model A fitted Bayesian model. verbose Toggle off warnings. ... Not used. newdata List of data.frames to update the model with new data. Required even if the original data should be used.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/unupdate.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Un-update Bayesian models to their prior-to-data state — unupdate","text":"A model un-fitted to the data, representing the prior model.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/unupdate.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Un-update Bayesian models to their prior-to-data state — unupdate","text":"This function is used internally to compute Bayes factors.","code":""},
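This reference entry ships without an example, so here is a hedged sketch of the workflow the description implies, reusing a fitted rstanarm model such as `model` from the simulate_prior example above (that downstream summaries accept the un-updated object is an assumption):

# Hypothetical usage: obtain the prior-only version of a fitted model
prior_model <- unupdate(model)
# The un-updated model samples from the priors, so priors and posteriors
# can be summarized with the same tools, e.g.:
# describe_posterior(prior_model)  # the priors
# describe_posterior(model)        # the posteriors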
{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate posterior distributions weighted across models — weighted_posteriors","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"Extract posterior samples of parameters, weighted across models. Weighting is done by comparing posterior model probabilities, via bayesfactor_models().","code":""},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"","code":"weighted_posteriors(..., prior_odds = NULL, missing = 0, verbose = TRUE) # S3 method for class 'data.frame' weighted_posteriors(..., prior_odds = NULL, missing = 0, verbose = TRUE) # S3 method for class 'stanreg' weighted_posteriors( ..., prior_odds = NULL, missing = 0, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL ) # S3 method for class 'brmsfit' weighted_posteriors( ..., prior_odds = NULL, missing = 0, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL ) # S3 method for class 'blavaan' weighted_posteriors( ..., prior_odds = NULL, missing = 0, verbose = TRUE, effects = c(\"fixed\", \"random\", \"all\"), component = c(\"conditional\", \"zi\", \"zero_inflated\", \"all\"), parameters = NULL ) # S3 method for class 'BFBayesFactor' weighted_posteriors( ..., prior_odds = NULL, missing = 0, verbose = TRUE, iterations = 4000 )"},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"... Fitted models (see details), all fit on the same data, or a single BFBayesFactor object. prior_odds Optional vector of prior odds for the models compared to the first model (or the denominator, for BFBayesFactor objects). For data.frames, this will be used as the basis of weighting. missing An optional numeric value to use if a model does not contain a parameter that appears in other models. Defaults to 0. verbose Toggle off warnings. effects Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. component Should results for all parameters, parameters for the conditional model or the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms-models. parameters Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output. iterations For BayesFactor models, how many posterior samples to draw.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"A data frame with posterior distributions (weighted across models).","code":""},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"Note that across models some parameters might play different roles. For example, the parameter A plays a different role in the model Y ~ A + B (where it is a main effect) than in the model Y ~ A + B + A:B (where it is a simple effect). In many cases centering of predictors (mean subtracting for continuous variables, and effects coding via contr.sum or orthonormal coding via contr.equalprior_pairs for factors) can reduce this issue. In any case you should be mindful of this issue. See bayesfactor_models() details for more info on passed models. Note that for BayesFactor models, posterior samples cannot be generated from intercept only models. This function is similar in function to brms::posterior_average.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"For BayesFactor < 0.9.12-4.3, in some instances there might be problems of duplicate columns of random effects in the resulting data frame.","code":""},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"Clyde, M., Desimone, H., & Parmigiani, G. (1996). Prediction via orthogonalized model mixing. Journal of the American Statistical Association, 91(435), 1197-1208. Hinne, M., Gronau, Q. F., van den Bergh, D., & Wagenmakers, E. (2019, March 25). A conceptual introduction to Bayesian Model Averaging. doi:10.31234/osf.io/wgb64 Rouder, J. N., Haaf, J. M., & Vandekerckhove, J. (2018). Bayesian inference for psychology, part IV: Parameter estimation and Bayes factors. Psychonomic Bulletin & Review, 25(1), 102-113. van den Bergh, D., Haaf, J. M., Ly, A., Rouder, J. N., & Wagenmakers, E. J. (2019). A cautionary note on estimating effect size.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/reference/weighted_posteriors.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate posterior distributions weighted across models — weighted_posteriors","text":"","code":"# \\donttest{ if (require(\"rstanarm\") && require(\"see\") && interactive()) { stan_m0 <- suppressWarnings(stan_glm(extra ~ 1, data = sleep, family = gaussian(), refresh = 0, diagnostic_file = file.path(tempdir(), \"df0.csv\") )) stan_m1 <- suppressWarnings(stan_glm(extra ~ group, data = sleep, family = gaussian(), refresh = 0, diagnostic_file = file.path(tempdir(), \"df1.csv\") )) res <- weighted_posteriors(stan_m0, stan_m1, verbose = FALSE) plot(eti(res)) } ## With BayesFactor if (require(\"BayesFactor\")) { extra_sleep <- ttestBF(formula = extra ~ group, data = sleep) wp <- weighted_posteriors(extra_sleep, verbose = FALSE) describe_posterior(extra_sleep, test = NULL, verbose = FALSE) # also considers the null describe_posterior(wp$delta, test = NULL, verbose = FALSE) } #> Summary of Posterior Distribution #> #> Parameter | Median | 95% CI #> ---------------------------------- #> Posterior | -0.09 | [-1.38, 0.07] ## weighted prediction distributions via data.frames if (require(\"rstanarm\") && interactive()) { m0 <- suppressWarnings(stan_glm( mpg ~ 1, data = mtcars, family = gaussian(), diagnostic_file = file.path(tempdir(), \"df0.csv\"), refresh = 0 )) m1 <- suppressWarnings(stan_glm( mpg ~ carb, data = mtcars, family = gaussian(), diagnostic_file = file.path(tempdir(), \"df1.csv\"), refresh = 0 )) # Predictions: pred_m0 <- data.frame(posterior_predict(m0)) pred_m1 <- data.frame(posterior_predict(m1)) BFmods <- bayesfactor_models(m0, m1, verbose = FALSE) wp <- weighted_posteriors( pred_m0, pred_m1, prior_odds = as.numeric(BFmods)[2], verbose = FALSE ) # look at first 5 prediction intervals hdi(pred_m0[1:5]) hdi(pred_m1[1:5]) hdi(wp[1:5]) # between, but closer to pred_m1 } # }"},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0151","dir":"Changelog","previous_headings":"","what":"bayestestR 0.15.1","title":"bayestestR 0.15.1","text":"CRAN release: 2025-01-17","code":""},
{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-15-1","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.15.1","text":"Several minor changes to deal with recent changes in other packages.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-15-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.15.1","text":"Fix for emmeans / marginaleffects / data.frame() methods when using multiple credible levels (#688).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0150","dir":"Changelog","previous_headings":"","what":"bayestestR 0.15.0","title":"bayestestR 0.15.0","text":"CRAN release: 2024-10-17","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-15-0","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.15.0","text":"Support for posterior::rvar-type columns in data frames. For example, a data frame df with an rvar column \".pred\" can now be called directly via p_direction(df, rvar_col = \".pred\"). Added support for marginaleffects. ROPE threshold ranges for rope(), describe_posterior(), p_significance() and equivalence_test() can now be specified as a list. This allows for different ranges for different parameters. Results for objects generated by emmeans (emmGrid/emm_list) now return results appended to the grid-data. Usability improvements for p_direction(): Results from p_direction() can directly be used in pd_to_p(). p_direction() gets an as_p argument, to directly convert pd-values into frequentist p-values. p_direction() gets a remove_na argument, which defaults to TRUE, to remove NA values from the input before calculating the pd-values. Besides the existing as.numeric() method, p_direction() now also has an as.vector() method. p_significance() now accepts non-symmetric ranges for the threshold argument. p_to_pd() now also works on data frames returned by p_direction(). If a data frame contains a pd, p_direction or PD column name, it is assumed to contain pd-values, which are converted into p-values. p_to_pd() for data frame inputs gets as.numeric() and as.vector() methods.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-15-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.15.0","text":"Fixed a warning in CRAN check results.","code":""},
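The 0.15.0 items above describe new conversion paths between pd- and p-values; a small hedged sketch of how they compose (hypothetical data and seed; argument names taken from the changelog entries themselves):

library(bayestestR)
set.seed(7)
x <- rnorm(1000, mean = 0.4)  # a posterior shifted away from zero
p_direction(x)                # the probability of direction (pd)
p_direction(x, as_p = TRUE)   # pd converted to a frequentist p-value
pd_to_p(as.numeric(p_direction(x)))  # the equivalent two-step conversion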
bayesfactor_models() now throws an informative error if Bayes factors for the model comparisons cannot be calculated.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-14-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.14.0","text":"Fixed issue in bayesian_as_frequentist() for brms models with a 0 + Intercept specification in the model formula.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0132","dir":"Changelog","previous_headings":"","what":"bayestestR 0.13.2","title":"bayestestR 0.13.2","text":"CRAN release: 2024-02-12","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-changes-0-13-2","dir":"Changelog","previous_headings":"","what":"Breaking Changes","title":"bayestestR 0.13.2","text":"pd_to_p() now returns 1 and a warning for values smaller than 0.5. map_estimate(), p_direction(), p_map() and p_significance() now return a data frame when the input is a numeric vector (making the output consistently a data frame for all inputs). The argument posteriors was renamed into posterior. Before, there was a mix of both spellings; it is now consistently posterior.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-13-2","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.13.2","text":"Retrieving models from the environment was improved.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-13-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.13.2","text":"Fixed issues in various format() methods, so they work properly with the corresponding functions (like p_direction()). Fixed issue in estimate_density() for double vectors that also had class attributes. Fixed several minor issues in the tests.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0131","dir":"Changelog","previous_headings":"","what":"bayestestR 0.13.1","title":"bayestestR 0.13.1","text":"CRAN release: 2023-04-07","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-13-1","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.13.1","text":"Improved speed performance for functions called using do.call(). Improved speed performance of bayesfactor_models() for brmsfit objects that already included a marglik element in the model object.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functionality-0-13-1","dir":"Changelog","previous_headings":"","what":"New functionality","title":"bayestestR 0.13.1","text":"as.logical() for bayesfactor_restricted() results, which extracts the boolean vector(s) that mark the draws that are part of the order restriction.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-13-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.13.1","text":"p_map() gains a new null argument to specify non-0 nulls. Fixed non-working examples for ci(method = \"SI\"). Fixed wrong calculation of the rope range for model objects in describe_posterior(). Other smaller bug fixes.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0130","dir":"Changelog","previous_headings":"","what":"bayestestR 0.13.0","title":"bayestestR 0.13.0","text":"CRAN release: 2022-09-18","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-0-13-0","dir":"Changelog","previous_headings":"","what":"Breaking","title":"bayestestR 0.13.0","text":"The minimum needed R version has been bumped to 3.6. 
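A short illustration of the pd_to_p() / p_to_pd() behaviour from the 0.13.2 entry above; the conversion is the standard two-sided one, and the input values are made up:

library(bayestestR)

pd_to_p(0.975)   # two-sided p-value: 2 * (1 - 0.975) = 0.05
p_to_pd(0.05)    # back-conversion: 0.975
pd_to_p(0.4)     # pd below 0.5 now returns 1, with a warning (since 0.13.2)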
contr.equalprior(contrasts = FALSE) (previously contr.orthonorm) no longer returns the identity matrix, but one shifted by diag(n) - 1/n, for consistency.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functionality-0-13-0","dir":"Changelog","previous_headings":"","what":"New functionality","title":"bayestestR 0.13.0","text":"p_to_bf(), to convert p-values into Bayes factors. For more accurate approximate Bayes factors, use bic_to_bf(). bayestestR now supports objects of class rvar from the package posterior. contr.equalprior (previously contr.orthonorm) gains two new functions: contr.equalprior_pairs and contr.equalprior_deviations, to aid in setting more intuitive priors.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-13-0","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.13.0","text":"Renamed to contr.equalprior, a more explicit function name. p_direction() now accepts objects of class parameters_model (from parameters::model_parameters()), to compute the probability of direction for parameters of frequentist models.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0121","dir":"Changelog","previous_headings":"","what":"bayestestR 0.12.1","title":"bayestestR 0.12.1","text":"CRAN release: 2022-05-02","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-0-12-1","dir":"Changelog","previous_headings":"","what":"Breaking","title":"bayestestR 0.12.1","text":"bayesfactor_models() for frequentist models now relies on the updated insight::get_loglikelihood(). This might change the results for REML-based models. See documentation. The estimate_density() argument group_by was renamed to at. The distribution_*(random = FALSE) functions now rely on ppoints(), which will result in slightly different results, especially for small ns. Uncertainty estimation now defaults to \"eti\" (formerly \"hdi\").","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-12-1","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.12.1","text":"bayestestR functions now support draws objects from the package posterior. rope_range() now handles log(normal)-families and models with log-transformed outcomes. New function spi(), to compute shortest probability intervals. Furthermore, the \"spi\" option was added as a new method to compute uncertainty intervals.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-12-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.12.1","text":"bci() for some objects incorrectly returned equal-tailed intervals.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0115","dir":"Changelog","previous_headings":"","what":"bayestestR 0.11.5","title":"bayestestR 0.11.5","text":"CRAN release: 2021-10-30. Fixes failing tests and CRAN checks.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions-0-11-1","dir":"Changelog","previous_headings":"","what":"New functions","title":"bayestestR 0.11.1","text":"describe_posterior() gains a plot() method, as a shortcut for plot(estimate_density(describe_posterior())).","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-11","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.11","text":"Fixed issues related to the last brms update. 
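A hedged sketch of two additions mentioned above, p_to_bf() (0.13.0) and spi() (0.12.1); the p-value, the n_obs sample size, and the toy draws are illustrative assumptions:

library(bayestestR)

p_to_bf(0.04, n_obs = 100)   # rough Bayes factor from a p-value; bic_to_bf() is the more accurate route
x <- rchisq(1000, df = 3)    # a skewed toy posterior, where SPI and HDI can differ
spi(x, ci = 0.95)            # shortest probability interval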
Fixed bug in describe_posterior.BFBayesFactor() where Bayes factors were missing in the output (#442).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-0100","dir":"Changelog","previous_headings":"","what":"bayestestR 0.10.0","title":"bayestestR 0.10.0","text":"CRAN release: 2021-05-31","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-0-10-0","dir":"Changelog","previous_headings":"","what":"Breaking","title":"bayestestR 0.10.0","text":"Bayes factors are now returned as log(BF) (column name log_BF). Printing is unaffected. To retrieve the raw BFs, you can run exp(result$log_BF).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions-0-10-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"bayestestR 0.10.0","text":"bci() (alias bcai()) to compute bias-corrected and accelerated bootstrap intervals. Along with this new function, ci() and describe_posterior() gain a new ci_method type, \"bci\".","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-0-10-0","dir":"Changelog","previous_headings":"","what":"Changes","title":"bayestestR 0.10.0","text":"contr.bayes was renamed to contr.orthonorm, a more explicit function name.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-090","dir":"Changelog","previous_headings":"","what":"bayestestR 0.9.0","title":"bayestestR 0.9.0","text":"CRAN release: 2021-04-08","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-0-9-0","dir":"Changelog","previous_headings":"","what":"Breaking","title":"bayestestR 0.9.0","text":"The default ci width was changed to 0.95 instead of 0.89 (see here). This should not come as a surprise to long-time users of bayestestR, as there has been a warning about this impending change for a while now :) Column names in bayesfactor_restricted() are now p_prior and p_posterior (instead of Prior_prob and Posterior_prob), to be consistent with the bayesfactor_inclusion() output. Removed the experimental function mhdior.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-9-0","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.9.0","text":"Support for blavaan models. Support for blrm models (rmsb). Support for BGGM models (BGGM). check_prior() and describe_prior() now also work for different ways of prior definition for models from rstanarm or brms.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-9-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.9.0","text":"Fixed bug in the print() method for the mediation() function. Fixed remaining inconsistencies with CI values, which are reported as fractions, in rope(). Fixed issues with special prior definitions in check_prior(), describe_prior() and simulate_prior().","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-082","dir":"Changelog","previous_headings":"","what":"bayestestR 0.8.2","title":"bayestestR 0.8.2","text":"CRAN release: 2021-01-26","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-8-2","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.8.2","text":"Support for bamlss models. 
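To make the 0.10.0 log_BF change above concrete, a minimal sketch (the toy lm() models are assumptions; Bayes factors for frequentist models are BIC-based approximations):

library(bayestestR)

m0 <- lm(mpg ~ 1, data = mtcars)
m1 <- lm(mpg ~ carb, data = mtcars)
res <- bayesfactor_models(m1, denominator = m0)   # stores log(BF) in the log_BF column
exp(res$log_BF)                                   # recover the raw Bayes factors, as noted above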
Rolled back the R dependency to R >= 3.4.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-to-functions-0-8-2","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"bayestestR 0.8.2","text":"The .stanreg methods gain a component argument, to also include auxiliary parameters.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-8-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.8.2","text":"bayesfactor_parameters() no longer errors for no reason when computing extremely un/likely direction hypotheses. bayesfactor_pointull() / bf_pointull() are now bayesfactor_pointnull() / bf_pointnull() (can you spot the difference? #363).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-080","dir":"Changelog","previous_headings":"","what":"bayestestR 0.8.0","title":"bayestestR 0.8.0","text":"CRAN release: 2020-12-05","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions-0-8-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"bayestestR 0.8.0","text":"sexit(), a function for sequential effect existence and significance testing (SEXIT).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-8-0","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.8.0","text":"Added a startup message to warn users that the default ci width might change in a future update. Added support for mcmc.list objects.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-8-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.8.0","text":"unupdate() gains a newdata argument, to work with brmsfit_multiple models. Fixed issue in the Bayes factor vignette (don't evaluate code chunks if packages are not available).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-075","dir":"Changelog","previous_headings":"","what":"bayestestR 0.7.5","title":"bayestestR 0.7.5","text":"CRAN release: 2020-10-22","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions-0-7-5","dir":"Changelog","previous_headings":"","what":"New functions","title":"bayestestR 0.7.5","text":"Added an as.matrix() function for bayesfactor_model arrays. unupdate(), a utility function to get Bayesian models un-fitted to the data, representing the priors only.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-to-functions-0-7-5","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"bayestestR 0.7.5","text":"ci() supports emmeans, both Bayesian and frequentist (#312, cross-fix with parameters).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-7-5","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.7.5","text":"Fixed issue with the default rope range for BayesFactor models. Fixed issue in the collinearity-check in rope() for models with less than two parameters. Fixed issue in the print-method for mediation() with stanmvreg-models, which displayed the wrong name for the response-value. Fixed issue in effective_sample() for models with only one parameter. 
rope_range() for BayesFactor models returns non-NA values (#343).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-072","dir":"Changelog","previous_headings":"","what":"bayestestR 0.7.2","title":"bayestestR 0.7.2","text":"CRAN release: 2020-07-20","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions-0-7-2","dir":"Changelog","previous_headings":"","what":"New functions","title":"bayestestR 0.7.2","text":"mediation(), to compute average direct and average causal mediation effects of multivariate response models (brmsfit, stanmvreg).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-7-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.7.2","text":"bayesfactor_parameters() works with R < 3.6.0.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-070","dir":"Changelog","previous_headings":"","what":"bayestestR 0.7.0","title":"bayestestR 0.7.0","text":"CRAN release: 2020-06-19","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-7-0","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.7.0","text":"Preliminary support for stanfit objects. Added support for bayesQR objects.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-to-functions-0-7-0","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"bayestestR 0.7.0","text":"weighted_posteriors() can now be used with data frames. Revised print() for describe_posterior(). Improved value formatting for Bayes factor functions.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-7-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.7.0","text":"Link transformations are now taken into account for emmeans objects, e.g. in describe_posterior(). Fixed diagnostic_posterior() when the algorithm is not \"sampling\". Minor revisions of the documentation. Fixed CRAN check issues on win-old-release.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-060","dir":"Changelog","previous_headings":"","what":"bayestestR 0.6.0","title":"bayestestR 0.6.0","text":"CRAN release: 2020-04-20","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-to-functions-0-6-0","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"bayestestR 0.6.0","text":"describe_posterior() now also works for effectsize::standardize_posteriors(). p_significance() now also works for parameters::simulate_model(). rope_range() supports more (frequentist) models.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-6-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.6.0","text":"Fixed issue with plot() for the data.frame-methods of p_direction() and equivalence_test(). 
Fixed check issues for the forthcoming insight update.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-053","dir":"Changelog","previous_headings":"","what":"bayestestR 0.5.3","title":"bayestestR 0.5.3","text":"CRAN release: 2020-03-26","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-5-3","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.5.3","text":"Support for bcplm objects (package cplm).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"changes-to-functions-0-5-3","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"bayestestR 0.5.3","text":"estimate_density() now also works for grouped data frames.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-5-3","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.5.3","text":"Fixed bug in weighted_posteriors() to properly weight Intercept-only BFBayesFactor models. Fixed bug in weighted_posteriors() for models with very low posterior probability (#286). Fixed bug in describe_posterior(), rope() and equivalence_test() for brmsfit models with monotonic effects. Fixed issues related to the latest changes in as.data.frame.brmsfit() in the brms package.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-050","dir":"Changelog","previous_headings":"","what":"bayestestR 0.5.0","title":"bayestestR 0.5.0","text":"CRAN release: 2020-01-18","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-5-0","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.5.0","text":"Added p_pointnull() as an alias for p_MAP(). Added the si() function to compute support intervals. Added weighted_posteriors() for generating posterior samples averaged across models. Added a plot()-method for p_significance(). p_significance() now also works for brmsfit-objects. estimate_density() now also works for MCMCglmm-objects. equivalence_test() gets effects and component arguments for stanreg and brmsfit models, to print specific model components. Support for mcmc objects (package coda). Provide more distributions via distribution(). Added distribution_tweedie(). Better handling of stanmvreg models in describe_posterior(), diagnostic_posterior() and describe_prior().","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-changes-0-5-0","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"bayestestR 0.5.0","text":"point_estimate(): the default value of the centrality argument was changed from ‘median’ to ‘all’. p_rope(), previously an exploratory index, was renamed into mhdior() (for Max HDI inside/outside ROPE), as p_rope() will now refer to rope(..., ci = 1) (#258).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-5-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.5.0","text":"Fixed mistake in the description of p_significance(). Fixed error when computing BFs for emmGrid objects based on non-linear models (#260). Fixed wrong output of percentage-values in print.equivalence_test(). 
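A small sketch of the indices mentioned in the surrounding entries: rope() and p_direction() report proportions (see the 0.2.5 entry below), and p_significance() (added in 0.4.0, below) takes a threshold range; the toy draws and thresholds are illustrative:

library(bayestestR)

x <- rnorm(1000, mean = 0.3, sd = 0.2)        # toy posterior draws
rope(x, range = c(-0.1, 0.1), ci = 0.95)      # proportion inside the ROPE, between 0 and 1
p_direction(x)                                # proportion, between 0.5 and 1
p_significance(x, threshold = c(-0.1, 0.1))   # probability of practical significance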
Fixed issue in describe_posterior() for BFBayesFactor-objects with more than one model.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-040","dir":"Changelog","previous_headings":"","what":"bayestestR 0.4.0","title":"bayestestR 0.4.0","text":"CRAN release: 2019-10-20","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions--features-0-4-0","dir":"Changelog","previous_headings":"","what":"New functions / features","title":"bayestestR 0.4.0","text":"convert_bayesian_to_frequentist(): convert (refit) a Bayesian model to frequentist. distribution_binomial(): perfect binomial distributions. simulate_ttest(): simulate data with a mean difference. simulate_correlation(): simulate correlated datasets. p_significance(): compute the probability of Practical Significance (ps). overlap(): compute the overlap between two empirical distributions. estimate_density(): the method = \"mixture\" argument was added for mixture density estimation.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-4-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.4.0","text":"Fixed bug in simulate_prior() for stanreg-models when autoscale was set to FALSE.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-030","dir":"Changelog","previous_headings":"","what":"bayestestR 0.3.0","title":"bayestestR 0.3.0","text":"CRAN release: 2019-09-22","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"general-0-3-0","dir":"Changelog","previous_headings":"","what":"General","title":"bayestestR 0.3.0","text":"Revised print()-methods for functions like rope(), p_direction(), describe_posterior() etc., in particular for model objects with random effects and/or a zero-inflation component.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions--features-0-3-0","dir":"Changelog","previous_headings":"","what":"New functions / features","title":"bayestestR 0.3.0","text":"check_prior(), to check whether a prior is informative. simulate_prior(), to simulate a model’s priors as distributions. distribution_gamma(), to generate a (near-perfect or random) Gamma distribution. contr.bayes, a function for orthogonal factor coding (implementation from Singmann & Gronau’s bfrms), used for proper prior estimation of factors with 3 or more levels. See the Bayes factor vignette. Changes to functions: Added support for sim, sim.merMod (arm::sim()) and MCMCglmm-objects to many functions (like hdi(), ci(), eti(), rope(), p_direction(), point_estimate(), …). describe_posterior() gets effects and component arguments, to include the description of posterior samples from random effects and/or the zero-inflation component. 
More user-friendly warnings for non-supported models in the bayesfactor()-methods.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-3-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.3.0","text":"Fixed bug in bayesfactor_inclusion() where an interaction sometimes appeared more than once (#223). Fixed bug in describe_posterior() for stanreg models fitted with the fullrank-algorithm.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-025","dir":"Changelog","previous_headings":"","what":"bayestestR 0.2.5","title":"bayestestR 0.2.5","text":"CRAN release: 2019-08-06","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-changes-0-2-5","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"bayestestR 0.2.5","text":"rope_range() for binomial models now has a different default (-.18; .18; instead of -.055; .055). rope(): now returns the proportion (between 0 and 1) instead of a value between 0 and 100. p_direction(): now returns the proportion (between 0.5 and 1) instead of a value between 50 and 100 (#168). bayesfactor_savagedickey(): the hypothesis argument was replaced by null, as part of the new bayesfactor_parameters() function.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions--features-0-2-5","dir":"Changelog","previous_headings":"","what":"New functions / features","title":"bayestestR 0.2.5","text":"density_at(), p_map() and map_estimate(): a method argument was added. rope(): a ci_method argument was added. eti(): computes equal-tailed intervals. reshape_ci(): reshape CIs between wide/long formats. bayesfactor_parameters(): new function, replacing bayesfactor_savagedickey(), which allows computing Bayes factors for both point-null and interval-null hypotheses. bayesfactor_restricted(): function for computing Bayes factors for order-restricted models.","code":""},{"path":[]},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-2-5","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.2.5","text":"bayesfactor_inclusion() now works with R < 3.6.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-022","dir":"Changelog","previous_headings":"","what":"bayestestR 0.2.2","title":"bayestestR 0.2.2","text":"CRAN release: 2019-06-20","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-changes-0-2-2","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"bayestestR 0.2.2","text":"equivalence_test(): returns capitalized output (e.g., Rejected instead of rejected). describe_posterior.numeric(): dispersion now defaults to FALSE, for consistency with the other methods.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions--features-0-2-2","dir":"Changelog","previous_headings":"","what":"New functions / features","title":"bayestestR 0.2.2","text":"pd_to_p() and p_to_pd(): functions to convert between the probability of direction (pd) and the p-value. Support for emmGrid objects: ci(), rope(), bayesfactor_savagedickey(), describe_posterior(), …","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"minor-changes-0-2-2","dir":"Changelog","previous_headings":"","what":"Minor changes","title":"bayestestR 0.2.2","text":"Improved tutorial 2.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-2-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.2.2","text":"describe_posterior(): fixed column order restoration. bayesfactor_inclusion(): inclusion BFs of matched models are now in line with JASP 
results.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-020","dir":"Changelog","previous_headings":"","what":"bayestestR 0.2.0","title":"bayestestR 0.2.0","text":"CRAN release: 2019-05-29","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"breaking-changes-0-2-0","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"bayestestR 0.2.0","text":"The plotting functions now require the installation of the see package. The estimate argument name in describe_posterior() and point_estimate() was changed to centrality. hdi(), ci(), rope() and equivalence_test() default to ci = 0.89. rnorm_perfect() was deprecated in favour of distribution_normal(). map_estimate() now returns a single value instead of a data frame, and the density parameter was removed. The MAP density value is now accessible via attributes(map_output)$MAP_density.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"new-functions--features-0-2-0","dir":"Changelog","previous_headings":"","what":"New functions / features","title":"bayestestR 0.2.0","text":"describe_posterior(), describe_prior() and diagnostic_posterior() were added as wrapper functions. point_estimate() was added as a function to compute point estimates. p_direction(): new argument method, to compute pd based on the AUC. area_under_curve(): to compute the AUC. The distribution() functions were added. The bayesfactor_savagedickey(), bayesfactor_models() and bayesfactor_inclusion() functions were added. Started adding plotting methods (currently in the see package) for p_direction() and hdi(). probability_at() as an alias for density_at(). effective_sample() to return the effective sample size of Stan-models. mcse() to return the Monte Carlo standard error of Stan-models.","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"minor-changes-0-2-0","dir":"Changelog","previous_headings":"","what":"Minor changes","title":"bayestestR 0.2.0","text":"Improved documentation. Improved testing. p_direction(): improved printing. rope() for model-objects now returns the HDI values for all parameters as an attribute, in a consistent way. Changes to the legend-labels in plot.equivalence_test(), to align the plots with the output of the print()-method (#78).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bug-fixes-0-2-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"bayestestR 0.2.0","text":"hdi() returned multiple class attributes (#72). Printing results from hdi() failed when the ci-argument had fractional parts for percentage values (e.g. ci = 0.995). plot.equivalence_test() did not work properly with brms-models (#76).","code":""},{"path":"https://easystats.github.io/bayestestR/news/index.html","id":"bayestestr-010","dir":"Changelog","previous_headings":"","what":"bayestestR 0.1.0","title":"bayestestR 0.1.0","text":"CRAN release: 2019-04-08. CRAN initial publication (0.1.0 release). Added a NEWS.md file to track changes to the package.","code":""}]