---
title: "Extracting and visualizing tidy draws from brms models"
author: "Matthew Kay"
date: "`r Sys.Date()`"
output:
  rmarkdown::html_vignette:
    toc: true
    df_print: kable
params:
  EVAL: !r identical(Sys.getenv("NOT_CRAN"), "true")
vignette: >
  %\VignetteIndexEntry{Extracting and visualizing tidy draws from brms models}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r chunk_options, include=FALSE}
if (requireNamespace("pkgdown", quietly = TRUE) && pkgdown::in_pkgdown()) {
  tiny_width = small_width = med_width = 6.75
  tiny_height = small_height = med_height = 4.5
  large_width = 8
  large_height = 5.25
} else {
  tiny_width = 5.5
  tiny_height = 3 + 2/3
  small_width = med_width = 6.75
  small_height = med_height = 4.5
  large_width = 8
  large_height = 5.25
}
knitr::opts_chunk$set(
  fig.width = small_width,
  fig.height = small_height,
  eval = if (isTRUE(exists("params"))) params$EVAL else FALSE
)
if (capabilities("cairo") && Sys.info()[['sysname']] != "Darwin") {
  knitr::opts_chunk$set(
    dev.args = list(png = list(type = "cairo"))
  )
}
dir.create("models", showWarnings = FALSE)
```
## Introduction
This vignette describes how to use the `tidybayes` and `ggdist` packages to extract and visualize [tidy](https://dx.doi.org/10.18637/jss.v059.i10) data frames of draws from posterior distributions of model variables, means, and predictions from `brms::brm`. For a more general introduction to `tidybayes` and its use on general-purpose Bayesian modeling languages (like Stan and JAGS), see `vignette("tidybayes")`.
## Setup
The following libraries are required to run this vignette:
```{r setup, message = FALSE, warning = FALSE}
library(magrittr)
library(dplyr)
library(purrr)
library(forcats)
library(tidyr)
library(modelr)
library(ggdist)
library(tidybayes)
library(ggplot2)
library(cowplot)
library(rstan)
library(brms)
library(ggrepel)
library(RColorBrewer)
library(gganimate)
library(posterior)
library(distributional)
theme_set(theme_tidybayes() + panel_border())
```
These options help Stan run faster:
```{r, eval=FALSE}
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
```
```{r hidden_options, include=FALSE}
# While the previous code chunk is the actual recommended approach,
# CRAN vignette building policy limits us to 2 cores, so we use at most
# 2 to build this vignette (but show the previous chunk to
# the reader as a best practice example)
rstan_options(auto_write = TRUE)
options(mc.cores = 1) # min(2, parallel::detectCores())
options(width = 120)
```
## Example dataset
To demonstrate `tidybayes`, we will use a simple dataset with 10 observations from each of 5 conditions:
```{r}
set.seed(5)
n = 10
n_condition = 5
ABC =
  tibble(
    condition = rep(c("A","B","C","D","E"), n),
    response = rnorm(n * 5, c(0,1,2,1,-1), 0.5)
  )
```
A snapshot of the data looks like this:
```{r}
head(ABC, 10)
```
This is a typical tidy format data frame: one observation per row. Graphically:
```{r fig.width = tiny_width, fig.height = tiny_height}
ABC %>%
ggplot(aes(y = condition, x = response)) +
geom_point()
```
## Model
Let's fit a hierarchical model with shrinkage towards a global mean:
```{r m_brm, results = "hide", message = FALSE, cache = TRUE}
m = brm(
  response ~ (1|condition),
  data = ABC,
  prior = c(
    prior(normal(0, 1), class = Intercept),
    prior(student_t(3, 0, 1), class = sd),
    prior(student_t(3, 0, 1), class = sigma)
  ),
  control = list(adapt_delta = .99),
  file = "models/tidy-brms_m.rds" # cache model (can be removed)
)
```
The results look like this:
```{r}
m
```
## Extracting draws from a fit in tidy-format using `spread_draws`
Now that we have our results, the fun begins: getting the draws out in a tidy format! First, we'll use the `get_variables()` function to get a list of raw model variable names so that we know what variables we can extract from the model:
```{r}
get_variables(m)
```
Here, `b_Intercept` is the global mean, and the `r_condition[]` variables are offsets from that mean for each condition. Given these variables:
- `r_condition[A,Intercept]`
- `r_condition[B,Intercept]`
- `r_condition[C,Intercept]`
- `r_condition[D,Intercept]`
- `r_condition[E,Intercept]`
We might want a data frame where each row is a draw from either `r_condition[A,Intercept]`, `r_condition[B,Intercept]`, `...[C,...]`, `...[D,...]`, or `...[E,...]`, and where we have columns indexing which chain/iteration/draw the row came from and which condition (`A` to `E`) it is for. That would allow us to easily compute quantities grouped by condition, or generate plots by condition using ggplot, or even merge draws with the original data to plot data and posteriors simultaneously.
The workhorse of `tidybayes` is the `spread_draws()` function, which does this extraction for us. It includes a simple specification format that we can use to extract variables and their indices into tidy-format data frames.
### Gathering variable indices into a separate column in a tidy format data frame
Given a variable in the model like this:
`r_condition[D,Intercept]`
We can provide `spread_draws()` with a column specification like this:
`r_condition[condition,term]`
Where `condition` corresponds to `D` and `term` corresponds to `Intercept`. There is nothing too magical about what `spread_draws()` does with this specification: under the hood, it splits the variable indices by commas and spaces (you can split by other characters by changing the `sep` argument). It lets you assign columns to the resulting indices in order. So `r_condition[D,Intercept]` has indices `D` and `Intercept`, and `spread_draws()` lets us extract these indices as columns in the resulting tidy data frame of draws from `r_condition`:
```{r}
m %>%
spread_draws(r_condition[condition,term]) %>%
head(10)
```
We can choose whatever names we want for the index columns; e.g.:
```{r}
m %>%
spread_draws(r_condition[c,t]) %>%
head(10)
```
But the more descriptive and less cryptic names from the previous example are probably preferable.
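As an aside: the `sep` argument mentioned above is a regular expression used to split indices out of variable names, so models that separate indices with other characters can be handled too. A hypothetical sketch (not runnable on this model, whose indices are comma-separated):
```{r, eval=FALSE}
# hypothetical: if variables were named like r_condition[A.Intercept],
# we could split the indices on "." instead of the default "[, ]"
m %>%
  spread_draws(r_condition[condition, term], sep = "[.]") %>%
  head(10)
```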
In this particular model, there is only one term (`Intercept`), thus we could omit that index altogether to just get each `condition` and the value of `r_condition` for that condition:
```{r}
m %>%
spread_draws(r_condition[condition,]) %>%
head(10)
```
__Note:__ If you have used `spread_draws()` with a raw sample from Stan or JAGS, you may be used to using `recover_types()` before `spread_draws()` to get index column values back (e.g. if the index was a factor). This is not necessary when using `spread_draws()` on `brms` models, because those models already contain that information in their variable names. For more on `recover_types()`, see `vignette("tidybayes")`.
## Point summaries and intervals
### With simple model variables
`tidybayes` provides a family of functions for generating point summaries and intervals from draws in a tidy format. These functions follow the naming scheme `[median|mean|mode]_[qi|hdi]`, for example, `median_qi()`, `mean_qi()`, `mode_hdi()`, and so on. The first name (before the `_`) indicates the type of point summary, and the second name indicates the type of interval. `qi` yields a quantile interval (a.k.a. equi-tailed interval, central interval, or percentile interval) and `hdi` yields a highest (posterior) density interval. Custom point summary or interval functions can also be applied using the `point_interval()` function.
For example, we might extract the draws corresponding to posterior distributions of the overall mean and standard deviation of observations:
```{r}
m %>%
spread_draws(b_Intercept, sigma) %>%
head(10)
```
Like with `r_condition[condition,term]`, this gives us a tidy data frame. If we want the median and 95% quantile interval of the variables, we can apply `median_qi()`:
```{r}
m %>%
spread_draws(b_Intercept, sigma) %>%
median_qi(b_Intercept, sigma)
```
We can specify the columns we want to get medians and intervals from, as above, or if we omit the list of columns, `median_qi()` will use every column that is not a grouping column or a special column (like `.chain`, `.iteration`, or `.draw`). Thus in the above example, `b_Intercept` and `sigma` are redundant arguments to `median_qi()` because they are also the only columns we gathered from the model. So we can simplify this to:
```{r}
m %>%
spread_draws(b_Intercept, sigma) %>%
median_qi()
```
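If you want a combination not covered by the prepackaged functions, `point_interval()` accepts the point summary and interval functions directly. For example, this sketch reproduces `mode_hdi()` by passing `ggdist::Mode` and `hdi` explicitly:
```{r}
m %>%
  spread_draws(b_Intercept, sigma) %>%
  # equivalent to mode_hdi(): modal point summary + highest-density interval
  point_interval(.point = Mode, .interval = hdi)
```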
If you would rather have a long-format list of intervals, use `gather_draws()` instead:
```{r}
m %>%
gather_draws(b_Intercept, sigma) %>%
median_qi()
```
For more on `gather_draws()`, see `vignette("tidybayes")`.
### With indexed model variables
When we have a model variable with one or more indices, such as `r_condition`, we can apply `median_qi()` (or other functions in the `point_interval()` family) as we did before:
```{r}
m %>%
spread_draws(r_condition[condition,]) %>%
median_qi()
```
How did `median_qi()` know what to aggregate? Data frames returned by `spread_draws()` are automatically grouped by all index variables you pass to it; in this case, that means `spread_draws()` groups its results by `condition`. `median_qi()` respects those groups, and calculates the point summaries and intervals within all groups. Then, because no columns were passed to `median_qi()`, it acts on the only non-special (`.`-prefixed) and non-group column, `r_condition`. So the above shortened syntax is equivalent to this more verbose call:
```{r}
m %>%
spread_draws(r_condition[condition,]) %>%
group_by(condition) %>% # this line not necessary (done by spread_draws)
median_qi(r_condition) # r_condition is not necessary (it is the only non-group column)
```
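If you want to verify the automatic grouping for yourself, `dplyr::group_vars()` shows it:
```{r}
m %>%
  spread_draws(r_condition[condition,]) %>%
  # confirm that spread_draws() grouped the result by the index column
  group_vars()
```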
`tidybayes` also provides an implementation of `posterior::summarise_draws()` for grouped data frames (`tidybayes::summarise_draws.grouped_df()`), which you can use to quickly get convergence diagnostics:
```{r}
m %>%
spread_draws(r_condition[condition,]) %>%
summarise_draws()
```
## Combining variables with different indices in a single tidy format data frame
`spread_draws()` and `gather_draws()` support extracting variables that have different indices into the same data frame. Indices with the same name are automatically matched up, and values are duplicated as necessary to produce one row per combination of levels of all indices. For example, we might want to calculate the mean within each condition (call this `condition_mean`). In this model, that mean is the intercept (`b_Intercept`) plus the effect for a given condition (`r_condition`).
We can gather draws from `b_Intercept` and `r_condition` together in a single data frame:
```{r}
m %>%
spread_draws(b_Intercept, r_condition[condition,]) %>%
head(10)
```
Within each draw, `b_Intercept` is repeated as necessary to correspond to every index of `r_condition`. Thus, the `mutate` function from dplyr can be used to find their sum, `condition_mean` (which is the mean for each condition):
```{r}
m %>%
spread_draws(b_Intercept, r_condition[condition,]) %>%
mutate(condition_mean = b_Intercept + r_condition) %>%
median_qi(condition_mean)
```
`median_qi()` uses tidy evaluation (see `vignette("tidy-evaluation", package = "rlang")`), so it can take column expressions, not just column names. Thus, we can simplify the above example by moving the calculation of `condition_mean` from `mutate` into `median_qi()`:
```{r}
m %>%
spread_draws(b_Intercept, r_condition[condition,]) %>%
median_qi(condition_mean = b_Intercept + r_condition)
```
## Plotting intervals with multiple probability levels
`median_qi()` and its sister functions can produce an arbitrary number of probability intervals by setting the `.width =` argument:
```{r}
m %>%
spread_draws(b_Intercept, r_condition[condition,]) %>%
median_qi(condition_mean = b_Intercept + r_condition, .width = c(.95, .8, .5))
```
The results are in a tidy format: one row per group and uncertainty interval width (`.width`). This facilitates plotting. For example, assigning `-.width` to the `linewidth` aesthetic will show all intervals, making thicker lines correspond to smaller intervals. The `ggdist::geom_pointinterval()` geom automatically sets the `linewidth` aesthetic appropriately based on the `.width` column in the data to produce plots of points with multiple probability levels:
```{r fig.width = tiny_width, fig.height = tiny_height}
m %>%
spread_draws(b_Intercept, r_condition[condition,]) %>%
median_qi(condition_mean = b_Intercept + r_condition, .width = c(.95, .66)) %>%
ggplot(aes(y = condition, x = condition_mean, xmin = .lower, xmax = .upper)) +
geom_pointinterval()
```
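For comparison, here is a sketch of the manual approach described above, using plain ggplot2 geoms and mapping `-.width` to `linewidth` ourselves (`geom_pointrange()` and `scale_linewidth_continuous()` stand in for what `geom_pointinterval()` automates):
```{r fig.width = tiny_width, fig.height = tiny_height}
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  median_qi(condition_mean = b_Intercept + r_condition, .width = c(.95, .8, .5)) %>%
  ggplot(aes(
    y = condition, x = condition_mean, xmin = .lower, xmax = .upper,
    # negate .width so thicker lines correspond to smaller intervals
    linewidth = -.width
  )) +
  geom_pointrange(fatten = 2) +
  scale_linewidth_continuous(range = c(0.5, 2), guide = "none")
```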
## Intervals with densities
To see the density along with the intervals, we can use `ggdist::stat_eye()` ("eye plots", which combine intervals with violin plots), or `ggdist::stat_halfeye()` (interval + density plots):
```{r fig.width = tiny_width, fig.height = tiny_height}
m %>%
spread_draws(b_Intercept, r_condition[condition,]) %>%
mutate(condition_mean = b_Intercept + r_condition) %>%
ggplot(aes(y = condition, x = condition_mean)) +
stat_halfeye()
```
Or say you want to annotate portions of the densities in color; the `fill` aesthetic can vary within a slab in all geoms and stats in the `ggdist::geom_slabinterval()` family, including `ggdist::stat_halfeye()`. For example, if you want to annotate a domain-specific region of practical equivalence (ROPE), you could do something like this:
```{r fig.width = tiny_width, fig.height = tiny_height}
m %>%
spread_draws(b_Intercept, r_condition[condition,]) %>%
mutate(condition_mean = b_Intercept + r_condition) %>%
ggplot(aes(y = condition, x = condition_mean, fill = after_stat(abs(x) < .8))) +
stat_halfeye() +
geom_vline(xintercept = c(-.8, .8), linetype = "dashed") +
scale_fill_manual(values = c("gray80", "skyblue"))
```
## Other visualizations of distributions: `stat_slabinterval`
There are a variety of additional stats for visualizing distributions in the `ggdist::geom_slabinterval()` family of stats and geoms; see `vignette("slabinterval", package = "ggdist")` for an overview.
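For example, substituting `ggdist::stat_gradientinterval()` (one member of that family) into the earlier eye-plot code produces a gradient-shaded alternative:
```{r fig.width = tiny_width, fig.height = tiny_height}
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition) %>%
  ggplot(aes(y = condition, x = condition_mean)) +
  stat_gradientinterval()
```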
## Posterior means and predictions
Rather than calculating conditional means manually as in the previous example, we could use `add_epred_draws()`, which is analogous to `brms::posterior_epred()` (giving posterior draws from the expectation of the posterior predictive; i.e. posterior distributions of conditional means), but uses a tidy data format. We can combine it with `modelr::data_grid()` to first generate a grid describing the predictions we want, then transform that grid into a long-format data frame of draws from conditional means:
```{r}
ABC %>%
data_grid(condition) %>%
add_epred_draws(m) %>%
head(10)
```
To plot this example, we'll also show the use of `ggdist::stat_pointinterval()` instead of `ggdist::geom_pointinterval()`, which summarizes draws into points and intervals within ggplot:
```{r fig.width = tiny_width, fig.height = tiny_height}
ABC %>%
data_grid(condition) %>%
add_epred_draws(m) %>%
ggplot(aes(x = .epred, y = condition)) +
stat_pointinterval(.width = c(.66, .95))
```
## Quantile dotplots
Intervals are nice if the alpha level happens to line up with whatever decision you are trying to make, but getting a shape of the posterior is better (hence eye plots, above). On the other hand, making inferences from density plots is imprecise (estimating the area of one shape as a proportion of another is a hard perceptual task). Reasoning about probability in frequency formats is easier, motivating [quantile dotplots](https://github.com/mjskay/when-ish-is-my-bus/blob/master/quantile-dotplots.md) ([Kay et al. 2016](https://doi.org/10.1145/2858036.2858558), [Fernandes et al. 2018](https://doi.org/10.1145/3173574.3173718)), which also allow precise estimation of arbitrary intervals (down to the dot resolution of the plot, 100 in the example below).
Within the slabinterval family of geoms in `ggdist` is the `dots` and `dotsinterval` family, which automatically determines appropriate bin sizes for dotplots and can calculate quantiles from samples to construct quantile dotplots. `ggdist::stat_dotsinterval()` is the variant designed for use on samples:
```{r fig.width = tiny_width, fig.height = tiny_height}
ABC %>%
data_grid(condition) %>%
add_epred_draws(m) %>%
ggplot(aes(x = .epred, y = condition)) +
stat_dotsinterval(quantiles = 100)
```
The idea is to get away from thinking about the posterior as indicating one canonical point or interval, but instead to represent it as (say) 100 approximately equally likely points.
## Posterior predictions
Where `add_epred_draws()` is analogous to `brms::posterior_epred()`, `add_predicted_draws()` is analogous to `brms::posterior_predict()`, giving draws from the posterior predictive distribution.
Here is an example of posterior predictive distributions plotted using `ggdist::stat_slab()`:
```{r fig.width = tiny_width, fig.height = tiny_height}
ABC %>%
data_grid(condition) %>%
add_predicted_draws(m) %>%
ggplot(aes(x = .prediction, y = condition)) +
stat_slab()
```
We could also use `ggdist::stat_interval()` to plot predictive bands alongside the data:
```{r fig.width = tiny_width, fig.height = tiny_height}
ABC %>%
data_grid(condition) %>%
add_predicted_draws(m) %>%
ggplot(aes(y = condition, x = .prediction)) +
stat_interval(.width = c(.50, .80, .95, .99)) +
geom_point(aes(x = response), data = ABC) +
scale_color_brewer()
```
Altogether, data, posterior predictions, and posterior distributions of the means:
```{r fig.width = tiny_width, fig.height = tiny_height}
grid = ABC %>%
data_grid(condition)
means = grid %>%
add_epred_draws(m)
preds = grid %>%
add_predicted_draws(m)
ABC %>%
ggplot(aes(y = condition, x = response)) +
stat_interval(aes(x = .prediction), data = preds) +
stat_pointinterval(aes(x = .epred), data = means, .width = c(.66, .95), position = position_nudge(y = -0.3)) +
geom_point() +
scale_color_brewer()
```
## Posterior predictions, Kruschke-style
The above approach to posterior predictions integrates over the parameter uncertainty to give a single posterior predictive distribution. Another approach, often used by John Kruschke in his book [Doing Bayesian Data Analysis](https://sites.google.com/site/doingbayesiandataanalysis/), is to attempt to show both the predictive uncertainty and the parameter uncertainty simultaneously by showing several possible predictive distributions implied by the posterior.
We can do this pretty easily by asking for the distributional parameters for a given prediction implied by the posterior. We'll do it explicitly here by setting `dpar = c("mu", "sigma")` in `add_epred_draws()`. Rather than specifying the parameters explicitly, you can also just set `dpar = TRUE` to get draws from all distributional parameters in a model, and this will work for any response distribution supported by brms. Then, we can select a small number of draws using `sample_draws()` and then use `ggdist::stat_slab()` to visualize each predictive distribution implied by the values of `mu` and `sigma`:
```{r fig.width = tiny_width, fig.height = tiny_height}
ABC %>%
data_grid(condition) %>%
add_epred_draws(m, dpar = c("mu", "sigma")) %>%
sample_draws(30) %>%
ggplot(aes(y = condition)) +
stat_slab(aes(xdist = dist_normal(mu, sigma)),
slab_color = "gray65", alpha = 1/10, fill = NA
) +
geom_point(aes(x = response), data = ABC, shape = 21, fill = "#9ECAE1", size = 2)
```
We could even combine the Kruschke-style plots of predictive distributions with half-eyes showing the posterior means:
```{r fig.width = tiny_width, fig.height = tiny_height}
ABC %>%
data_grid(condition) %>%
add_epred_draws(m, dpar = c("mu", "sigma")) %>%
ggplot(aes(x = condition)) +
stat_slab(aes(ydist = dist_normal(mu, sigma)),
slab_color = "gray65", alpha = 1/10, fill = NA, data = . %>% sample_draws(30), scale = .5
) +
stat_halfeye(aes(y = .epred), side = "bottom", scale = .5) +
geom_point(aes(y = response), data = ABC, shape = 21, fill = "#9ECAE1", size = 2, position = position_nudge(x = -.2))
```
## Fit/prediction curves
To demonstrate drawing fit curves with uncertainty, let's fit a slightly naive model to part of the `mtcars` dataset:
```{r m_mpg_brm, results = "hide", message = FALSE, warning = FALSE, cache = TRUE}
m_mpg = brm(
  mpg ~ hp * cyl,
  data = mtcars,
  file = "models/tidy-brms_m_mpg.rds" # cache model (can be removed)
)
```
We can draw fit curves with probability bands:
```{r fig.width = tiny_width, fig.height = tiny_height}
mtcars %>%
group_by(cyl) %>%
data_grid(hp = seq_range(hp, n = 51)) %>%
add_epred_draws(m_mpg) %>%
ggplot(aes(x = hp, y = mpg, color = ordered(cyl))) +
stat_lineribbon(aes(y = .epred)) +
geom_point(data = mtcars) +
scale_fill_brewer(palette = "Greys") +
scale_color_brewer(palette = "Set2")
```
Or we can sample a reasonable number of fit lines (say 100) and overplot them:
```{r fig.width = tiny_width, fig.height = tiny_height}
mtcars %>%
group_by(cyl) %>%
data_grid(hp = seq_range(hp, n = 101)) %>%
# NOTE: this shows the use of ndraws to subsample within add_epred_draws()
# ONLY do this IF you are planning to make spaghetti plots, etc.
# NEVER subsample to a small sample to plot intervals, densities, etc.
add_epred_draws(m_mpg, ndraws = 100) %>%
ggplot(aes(x = hp, y = mpg, color = ordered(cyl))) +
geom_line(aes(y = .epred, group = paste(cyl, .draw)), alpha = .1) +
geom_point(data = mtcars) +
scale_color_brewer(palette = "Dark2")
```
Or we can create animated [hypothetical outcome plots (HOPs)](https://mucollective.northwestern.edu/project/hops-trends) of fit lines:
```{r hops_lines, results='hide'}
set.seed(123456)
# NOTE: using a small number of draws to keep this example
# small, but in practice you probably want 50 or 100
ndraws = 20
p = mtcars %>%
group_by(cyl) %>%
data_grid(hp = seq_range(hp, n = 101)) %>%
add_epred_draws(m_mpg, ndraws = ndraws) %>%
ggplot(aes(x = hp, y = mpg, color = ordered(cyl))) +
geom_line(aes(y = .epred, group = paste(cyl, .draw))) +
geom_point(data = mtcars) +
scale_color_brewer(palette = "Dark2") +
transition_states(.draw, 0, 1) +
shadow_mark(future = TRUE, color = "gray50", alpha = 1/20)
animate(p, nframes = ndraws, fps = 2.5, width = 432, height = 288, units = "px", res = 96, dev = "ragg_png")
```
```{r echo=FALSE, results='asis'}
# animate() doesn't seem to put the images in the right place for pkgdown, so this is a manual workaround
anim_save("tidy-brms_hops_lines.gif")
cat("![](tidy-brms_hops_lines.gif)\n")
```
Or we could plot posterior predictions (instead of means). For this example
we'll also use `alpha` to make it easier to see overlapping bands:
```{r fig.width = tiny_width, fig.height = tiny_height}
mtcars %>%
group_by(cyl) %>%
data_grid(hp = seq_range(hp, n = 101)) %>%
add_predicted_draws(m_mpg) %>%
ggplot(aes(x = hp, y = mpg, color = ordered(cyl), fill = ordered(cyl))) +
stat_lineribbon(aes(y = .prediction), .width = c(.95, .80, .50), alpha = 1/4) +
geom_point(data = mtcars) +
scale_fill_brewer(palette = "Set2") +
scale_color_brewer(palette = "Dark2")
```
This gets difficult to judge by group, so probably better to facet into multiple plots. Fortunately, since we are using ggplot, that functionality is built in:
```{r fig.width = tiny_width, fig.height = tiny_height}
mtcars %>%
group_by(cyl) %>%
data_grid(hp = seq_range(hp, n = 101)) %>%
add_predicted_draws(m_mpg) %>%
ggplot(aes(x = hp, y = mpg)) +
stat_lineribbon(aes(y = .prediction), .width = c(.99, .95, .8, .5), color = brewer.pal(5, "Blues")[[5]]) +
geom_point(data = mtcars) +
scale_fill_brewer() +
facet_grid(. ~ cyl, space = "free_x", scales = "free_x")
```
### Extracting distributional regression parameters
`brms::brm()` also allows us to set up submodels for parameters of the response distribution *other than* the location (e.g., mean). For example, we can allow a variance parameter, such as the standard deviation, to also be some function of the predictors.
This approach can be helpful in cases of non-constant variance (also called _heteroskedasticity_ by folks who like obfuscation). E.g., imagine two groups, each with a different mean response _and variance_:
```{r fig.width = tiny_width, fig.height = tiny_height}
set.seed(1234)
AB = tibble(
  group = rep(c("a", "b"), each = 20),
  response = rnorm(40, mean = rep(c(1, 5), each = 20), sd = rep(c(1, 3), each = 20))
)
AB %>%
ggplot(aes(x = response, y = group)) +
geom_point()
```
Here is a model that lets the mean _and standard deviation_ of `response` be dependent on `group`:
```{r m_ab_brm, results = "hide", message = FALSE, cache = TRUE}
m_ab = brm(
  bf(
    response ~ group,
    sigma ~ group
  ),
  data = AB,
  file = "models/tidy-brms_m_ab.rds" # cache model (can be removed)
)
```
We can plot the posterior distribution of the mean `response` alongside posterior predictive intervals and the data:
```{r fig.width = tiny_width, fig.height = tiny_height}
grid = AB %>%
data_grid(group)
means = grid %>%
add_epred_draws(m_ab)
preds = grid %>%
add_predicted_draws(m_ab)
AB %>%
ggplot(aes(x = response, y = group)) +
stat_halfeye(aes(x = .epred), scale = 0.6, position = position_nudge(y = 0.175), data = means) +
stat_interval(aes(x = .prediction), data = preds) +
geom_point(data = AB) +
scale_color_brewer()
```
This shows posteriors of the mean of each group (black intervals and the density plots) and posterior predictive intervals (blue).
The predictive intervals in group `b` are larger than in group `a` because the model fits a different standard deviation for each group. We can see how the corresponding distributional parameter, `sigma`, changes by extracting it using the `dpar` argument to `add_epred_draws()`:
```{r fig.width = tiny_width, fig.height = tiny_height}
grid %>%
add_epred_draws(m_ab, dpar = TRUE) %>%
ggplot(aes(x = sigma, y = group)) +
stat_halfeye() +
geom_vline(xintercept = 0, linetype = "dashed")
```
By setting `dpar = TRUE`, all distributional parameters are added as additional columns in the result of `add_epred_draws()`; if you only want a specific parameter, you can specify it (or a list of just the parameters you want). In the above model, `dpar = TRUE` is equivalent to `dpar = list("mu", "sigma")`.
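For example, if we only need `sigma`, passing its name alone adds just that column, which we can summarize as before:
```{r}
grid %>%
  add_epred_draws(m_ab, dpar = "sigma") %>%
  # median and 95% interval of sigma within each group
  median_qi(sigma)
```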
## Comparing levels of a factor
If we wish to compare the means from each condition, `compare_levels()` facilitates comparisons of the value of some variable across levels of a factor. By default it computes all pairwise differences.
Let's demonstrate `compare_levels()` with `ggdist::stat_halfeye()`. We'll also
re-order by the mean of the difference:
```{r fig.width = tiny_width, fig.height = tiny_height}
m %>%
spread_draws(r_condition[condition,]) %>%
compare_levels(r_condition, by = condition) %>%
ungroup() %>%
mutate(condition = reorder(condition, r_condition)) %>%
ggplot(aes(y = condition, x = r_condition)) +
stat_halfeye() +
geom_vline(xintercept = 0, linetype = "dashed")
```
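If you do not want all pairwise differences, `compare_levels()` also takes a `comparison` argument; for instance, this sketch compares every condition against the first level (`A`):
```{r}
m %>%
  spread_draws(r_condition[condition,]) %>%
  # "control" compares each level against the first level of the factor
  compare_levels(r_condition, by = condition, comparison = "control") %>%
  median_qi()
```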
## Ordinal models
The `posterior_epred()` function for ordinal and multinomial regression models in brms returns multiple variables for each draw: one for each outcome category (in contrast to `rstanarm::stan_polr()` models, which return draws from the latent linear predictor). The philosophy of `tidybayes` is to tidy whatever format is output by a model, so in keeping with that philosophy, when applied to ordinal and multinomial `brms` models, `add_epred_draws()` adds an additional column called `.category` and outputs a separate row for each category of the response for every draw and predictor.
### Ordinal model with continuous predictor
We'll fit a model using the `mtcars` dataset that predicts the number of cylinders in a car given the car's mileage (in miles per gallon). While this is a little backwards causality-wise (presumably the number of cylinders causes the mileage, if anything), it is still a perfectly reasonable prediction task (I could tell someone who knows something about cars the MPG of a car, and they could probably do reasonably well at guessing the number of cylinders in the engine).
Before we fit the model, let's clean the dataset by making the `cyl` column an ordered factor (by default it is just a number):
```{r}
mtcars_clean = mtcars %>%
mutate(cyl = ordered(cyl))
head(mtcars_clean)
```
Then we'll fit an ordinal regression model:
```{r m_cyl_brm, results = "hide", message = FALSE, cache = TRUE}
m_cyl = brm(
  cyl ~ mpg,
  data = mtcars_clean,
  family = cumulative,
  seed = 58393,
  file = "models/tidy-brms_m_cyl.rds" # cache model (can be removed)
)
```
`add_epred_draws()` will include a `.category` column, and `.epred` will contain draws from the posterior distribution for the probability that the response is in that category. For example, here is the fit for the first row in the dataset:
```{r}
tibble(mpg = 21) %>%
add_epred_draws(m_cyl) %>%
median_qi(.epred)
```
Note: for the `.category` variable to retain its original factor level names, you must be using `brms` version 2.15.9 or greater.
We could plot fit lines for predicted probabilities against the dataset:
```{r fig.width = med_width, fig.height = med_height}
data_plot = mtcars_clean %>%
ggplot(aes(x = mpg, y = cyl, color = cyl)) +
geom_point() +
scale_color_brewer(palette = "Dark2", name = "cyl")
fit_plot = mtcars_clean %>%
data_grid(mpg = seq_range(mpg, n = 101)) %>%
add_epred_draws(m_cyl, value = "P(cyl | mpg)", category = "cyl") %>%
ggplot(aes(x = mpg, y = `P(cyl | mpg)`, color = cyl)) +
stat_lineribbon(aes(fill = cyl), alpha = 1/5) +
scale_color_brewer(palette = "Dark2") +
scale_fill_brewer(palette = "Dark2")
plot_grid(ncol = 1, align = "v",
data_plot,
fit_plot
)
```
The above display does not let you see the correlation between `P(cyl|mpg)` for different values of `cyl` at a particular value of `mpg`. For example, in the portion of the posterior where `P(cyl = 6|mpg = 20)` is high, `P(cyl = 4|mpg = 20)` and `P(cyl = 8|mpg = 20)` must be low (since these must add up to 1).
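As a quick sanity check, we can confirm that these probabilities sum to 1 within each draw (here at the hypothetical value `mpg = 20` used again below):
```{r}
tibble(mpg = 20) %>%
  add_epred_draws(m_cyl) %>%
  # sum P(cyl = c | mpg = 20) over the three categories within each draw
  group_by(.draw) %>%
  summarise(P_total = sum(.epred)) %>%
  head(5)
```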
One way to see this correlation might be to employ [hypothetical outcome plots (HOPs)](https://doi.org/10.1371/journal.pone.0142444) just for the fit line, "detaching" it from the ribbon (another alternative would be to use HOPs on top of line ensembles, as demonstrated earlier in this document). By employing animation, you can see how the lines move in tandem or opposition to each other, revealing some patterns in how they are correlated:
```{r hops_ordinal_ribbon_lines, results='hide'}
# NOTE: using a small number of draws to keep this example
# small, but in practice you probably want 50 or 100
ndraws = 20
p = mtcars_clean %>%
data_grid(mpg = seq_range(mpg, n = 101)) %>%
add_epred_draws(m_cyl, value = "P(cyl | mpg)", category = "cyl") %>%
ggplot(aes(x = mpg, y = `P(cyl | mpg)`, color = cyl)) +
# we remove the `.draw` column from the data for stat_lineribbon so that the same ribbons
# are drawn on every frame (since we use .draw to determine the transitions below)
stat_lineribbon(aes(fill = cyl), alpha = 1/5, color = NA, data = . %>% select(-.draw)) +
# we use sample_draws to subsample at the level of geom_line (rather than for the full dataset
# as in previous HOPs examples) because we need the full set of draws for stat_lineribbon above
geom_line(aes(group = paste(.draw, cyl)), linewidth = 1, data = . %>% sample_draws(ndraws)) +
scale_color_brewer(palette = "Dark2") +
scale_fill_brewer(palette = "Dark2") +
transition_manual(.draw)
animate(p, nframes = ndraws, fps = 2.5, width = 576, height = 192, units = "px", res = 96, dev = "ragg_png")
```
```{r echo=FALSE, results='asis'}
# animate() doesn't seem to put the images in the right place for pkgdown, so this is a manual workaround
anim_save("tidy-brms_hops_ordinal_ribbon_lines.gif")
cat("![](tidy-brms_hops_ordinal_ribbon_lines.gif)\n")
```
Notice how the lines move together, and how they move up or down together or in opposition. We could take a slice through these lines at an x position in the above chart (say, `mpg = 20`) and look at the correlation between them using a scatterplot matrix:
```{r fig.width = tiny_width, fig.height = tiny_height}
tibble(mpg = 20) %>%
add_epred_draws(m_cyl, value = "P(cyl | mpg = 20)", category = "cyl") %>%
ungroup() %>%
select(.draw, cyl, `P(cyl | mpg = 20)`) %>%
gather_pairs(cyl, `P(cyl | mpg = 20)`, triangle = "both") %>%
filter(.row != .col) %>%
ggplot(aes(.x, .y)) +
geom_point(alpha = 1/50) +
facet_grid(.row ~ .col) +
ylab("P(cyl = row | mpg = 20)") +
xlab("P(cyl = col | mpg = 20)")
```
While talking about the mean for an ordinal distribution often does not make sense, in this particular case one could argue that the expected number of cylinders for a car given its miles per gallon is a meaningful quantity. We could plot the posterior distribution for the average number of cylinders for a car given a particular miles per gallon as follows:
$$
\textrm{E}[\textrm{cyl}|\textrm{mpg}=m] = \sum_{c \in \{4,6,8\}} c\cdot \textrm{P}(\textrm{cyl}=c|\textrm{mpg}=m)
$$
We can use the above formula to derive a posterior distribution for $\textrm{E}[\textrm{cyl}|\textrm{mpg}=m]$ from the model. The model gives us a posterior distribution for $\textrm{P}(\textrm{cyl}=c|\textrm{mpg}=m)$: when `mpg` = $m$, the response-scale linear predictor (the `.epred` column from `add_epred_draws()`) for `cyl` (aka `.category`) = $c$ is $\textrm{P}(\textrm{cyl}=c|\textrm{mpg}=m)$. Thus, we can group within `.draw` and then use `summarise` to calculate the expected value:
```{r fig.width = med_width, fig.height = med_height}
label_data_function = . %>%
ungroup() %>%
filter(mpg == quantile(mpg, .47)) %>%
summarise_if(is.numeric, mean)
data_plot_with_mean = mtcars_clean %>%
data_grid(mpg = seq_range(mpg, n = 101)) %>%
# NOTE: this shows the use of ndraws to subsample within add_epred_draws()
# ONLY do this IF you are planning to make spaghetti plots, etc.
# NEVER subsample to a small sample to plot intervals, densities, etc.
add_epred_draws(m_cyl, value = "P(cyl | mpg)", category = "cyl", ndraws = 100) %>%
group_by(mpg, .draw) %>%
# calculate expected cylinder value
mutate(cyl = as.numeric(as.character(cyl))) %>%
summarise(cyl = sum(cyl * `P(cyl | mpg)`), .groups = "drop") %>%
ggplot(aes(x = mpg, y = cyl)) +
geom_line(aes(group = .draw), alpha = 5/100) +
geom_point(aes(y = as.numeric(as.character(cyl)), fill = cyl), data = mtcars_clean, shape = 21, size = 2) +
geom_text(aes(x = mpg + 4), label = "E[cyl | mpg]", data = label_data_function, hjust = 0) +
geom_segment(aes(yend = cyl, xend = mpg + 3.9), data = label_data_function) +
scale_fill_brewer(palette = "Set2", name = "cyl")
plot_grid(ncol = 1, align = "v",
data_plot_with_mean,
fit_plot
)
```
Now let's do some posterior predictive checking: do posterior predictions look like the data? For this, we'll make new predictions at the same values of `mpg` as were present in the original dataset (gray circles) and plot these with the observed data (colored circles):
```{r fig.width = large_width, fig.height = med_height}
mtcars_clean %>%
# we use `select` instead of `data_grid` here because we want to make posterior predictions
# for exactly the same set of observations we have in the original data
select(mpg) %>%
add_predicted_draws(m_cyl, seed = 1234) %>%
# recover original factor labels
mutate(cyl = levels(mtcars_clean$cyl)[.prediction]) %>%
ggplot(aes(x = mpg, y = cyl)) +
geom_count(color = "gray75") +
geom_point(aes(fill = cyl), data = mtcars_clean, shape = 21, size = 2) +
scale_fill_brewer(palette = "Dark2") +
geom_label_repel(
data = . %>% ungroup() %>% filter(cyl == "8") %>% filter(mpg == max(mpg)) %>% dplyr::slice(1),
label = "posterior predictions", xlim = c(26, NA), ylim = c(NA, 2.8), point.padding = 0.3,
label.size = NA, color = "gray50", segment.color = "gray75"
) +
geom_label_repel(
data = mtcars_clean %>% filter(cyl == "6") %>% filter(mpg == max(mpg)) %>% dplyr::slice(1),
label = "observed data", xlim = c(26, NA), ylim = c(2.2, NA), point.padding = 0.2,
label.size = NA, segment.color = "gray35"
)
```
This looks pretty good. Let's check using another typical posterior predictive checking plot: many simulated distributions of the response (`cyl`) against the observed distribution of the response. For a continuous response variable this is usually done with a density plot; here, we'll plot the number of posterior predictions in each bin as a line plot, since the response variable is discrete:
```{r fig.width = tiny_width, fig.height = tiny_height}
mtcars_clean %>%
select(mpg) %>%
add_predicted_draws(m_cyl, ndraws = 100, seed = 12345) %>%
# recover original factor labels
mutate(cyl = levels(mtcars_clean$cyl)[.prediction]) %>%
ggplot(aes(x = cyl)) +
stat_count(aes(group = NA), geom = "line", data = mtcars_clean, color = "red", linewidth = 3, alpha = .5) +
stat_count(aes(group = .draw), geom = "line", position = "identity", alpha = .05) +
geom_label(data = data.frame(cyl = "4"), y = 9.5, label = "posterior\npredictions",
hjust = 1, color = "gray50", lineheight = 1, label.size = NA) +
geom_label(data = data.frame(cyl = "8"), y = 14, label = "observed\ndata",
hjust = 0, color = "red", lineheight = 1, label.size = NA)
```
This also looks good.
Another way to look at these posterior predictions might be as a scatterplot matrix. `gather_pairs()` makes it easy to generate long-format data frames suitable for creating custom scatterplot matrices (or really, arbitrary matrix-style small multiples plots) in ggplot using `ggplot2::facet_grid()`:
```{r fig.width = tiny_width, fig.height = tiny_height}
set.seed(12345)
mtcars_clean %>%
select(mpg) %>%
add_predicted_draws(m_cyl) %>%
# recover original factor labels. Must ungroup first so that the
# factor is created in the same way in all groups; this is a workaround
# because brms no longer returns labelled predictions (hopefully that
# will be fixed, at which point this will no longer be necessary)
ungroup() %>%
mutate(cyl = ordered(levels(mtcars_clean$cyl)[.prediction], levels(mtcars_clean$cyl))) %>%
# need .drop = FALSE to ensure 0 counts are not dropped
group_by(.draw, .drop = FALSE) %>%
count(cyl) %>%
gather_pairs(cyl, n) %>%
ggplot(aes(.x, .y)) +
geom_count(color = "gray75") +
geom_point(data = mtcars_clean %>% count(cyl) %>% gather_pairs(cyl, n), color = "red") +
facet_grid(vars(.row), vars(.col)) +
xlab("Number of observations with cyl = col") +
ylab("Number of observations with cyl = row")
```
### Ordinal model with categorical predictor
Here's an ordinal model with a categorical predictor:
```{r m_esoph_brm, results = "hide", message = FALSE, cache = TRUE}
data(esoph)
m_esoph_brm = brm(
tobgp ~ agegp,
data = esoph,
family = cumulative(),
file = "models/tidy-brms_m_esoph_brm.rds"
)
```
Then we can plot predicted probabilities for each outcome category within each level of the predictor:
```{r fig.width = tiny_width, fig.height = tiny_height}
esoph %>%
data_grid(agegp) %>%
add_epred_draws(m_esoph_brm, dpar = TRUE, category = "tobgp") %>%
ggplot(aes(x = agegp, y = .epred, color = tobgp)) +
stat_pointinterval(position = position_dodge(width = .4)) +
scale_size_continuous(guide = "none") +
scale_color_manual(values = brewer.pal(6, "Blues")[-c(1,2)])
```
It is hard to see the changes in categories in the above plot; let's try something that gives a better gist of the distribution within each age group:
```{r fig.width = med_width, fig.height = med_height/2}
esoph_plot = esoph %>%
data_grid(agegp) %>%
add_epred_draws(m_esoph_brm, category = "tobgp") %>%
ggplot(aes(x = .epred, y = tobgp)) +
coord_cartesian(expand = FALSE) +
facet_grid(. ~ agegp, switch = "x") +
theme_classic() +
theme(strip.background = element_blank(), strip.placement = "outside") +
ggtitle("P(tobacco consumption category | age group)") +
xlab("age group")
esoph_plot +
stat_summary(fun = median, geom = "bar", fill = "gray65", width = 1, color = "white") +
stat_pointinterval()
```
The bars in this case might present a false sense of precision, so we could also try CCDF barplots instead:
```{r fig.width = med_width, fig.height = med_height/2}
esoph_plot +
stat_ccdfinterval() +
expand_limits(x = 0) #ensure bars go to 0
```
This output should be very similar to the output from the corresponding `m_esoph_rs` model in `vignette("tidy-rstanarm")` (modulo different priors), though brms does more of the work for us to produce it than `rstanarm` does.